While it may be challenging for near-term generative AI (GenAI) use cases to live up to the hype of the past two years, the consensus is that the technology is a potential game-changer for business innovation, creativity, personalization, efficiency and new business models. It is also being touted -- somewhat ironically, given its intensive use of resources -- as a solution to many challenges related to climate change and sustainability.
AI's risks and challenges -- ethical, social and environmental -- have been largely absent from the business hype around these tools. The risks have recently gained the attention of regulators and global institutions, including the European Commission, the White House and the United Nations, but they are still not front and center among business leadership and typical AI users. This has generated alarm among information technology (IT) executives, who sense a pattern with which they are all too familiar.
Like many widely hyped new technologies (cloud computing, IoT, blockchain and robotic process automation come to mind), GenAI has been hastily adopted -- often without an enterprise strategy, business case or governing ruleset. According to the "Responsible AI" section of Info-Tech's Tech Trends 2024 report, 35 percent of surveyed companies deploying AI lacked formal AI governance guidelines, and fewer than a third conducted AI impact assessments. Such shortcomings typically lead to disappointing results, costly or embarrassing misuses of the technology, and unacceptable risk.
It has typically fallen to IT organizations to bring order to innovation chaos -- ensuring that technology scales efficiently and securely, with proper integration and ongoing monitoring. But compared with previous technology waves such as the cloud, the stakes for AI are higher. Among the business risks of poorly governed AI are the following:
The IT executive members of SustainableIT.org -- a non-profit professional association dedicated to driving sustainability through technology -- want to help businesses avoid these negative impacts while maximizing the transformational benefits of their AI deployments. To that end, in September they developed and published a framework and set of principles to guide responsible AI application deployment. While the guidance is uniquely informed by the business technologist perspective, it is pertinent to leadership from the boardroom to the C-suite in every industry, and our goal is for it to serve as an educational resource.
The framework offers a simple yet comprehensive model that incorporates three stages and nine principles:
Consider intended uses and desired outcomes of AI applications -- assessing potential positive and negative impacts on business stakeholders, strategies, goals and commitments.
Redevise governance rules, processes, roles and skill sets -- as well as enterprise operations and architecture -- to maximize AI benefits and avoid negative impacts.
Conceptualize new business applications, processes and experiences uniquely suited to AI's ability to augment human capabilities through collaboration and automation.
This Responsible AI Framework is only a preliminary step. SustainableIT.org has formed a Responsible AI Working Group -- open to all interested organizations -- with the mission to inform and equip organizations worldwide with guidance and tools to govern the efficient, secure and sustainable implementation of AI. The group will research, curate and adopt the best existing tools and create new resources to fill gaps in areas including AI literacy; data confidentiality, integrity and accessibility; and AI cost-benefit models. Its output will be shared with global businesses, institutions and executives from all business functions, and will be provided to the United Nations to inform the Global Digital Compact and AI for Good initiatives.
It is imperative to establish or elevate responsible governance for GenAI, and organizations should turn to their IT leaders to drive it. Then, AI may indeed live up to its hype -- transcending the boundaries of traditional computing speed and complexity to help humans create outcomes barely imaginable today.