As the use of ever more powerful AI models continues to grow, ensuring trust and accountability must rank among the top goals, on par with any of AI’s potential benefits. It won’t happen overnight, nor will it result from any single step, such as better code, government regulations, or sincere pledges from AI developers. It will require a substantial cultural shift over time involving people, processes, and technology, along with widespread collaboration and cooperation among developers and users.
Despite any misgivings about AI’s shortcomings, business leaders can’t ignore its benefits. Gartner found that 79% of corporate strategists believe their success over the next two years will depend heavily on their use of data and AI. The proliferating use of AI is inevitable. The rise of generative AI in particular has created a gold-rush mentality, born of the fear of falling to a competitive disadvantage, that has produced significant noise and potential recklessness as companies rush AI offerings to market. For developers and technology leaders considering adding AI to their ecosystem, there are several pitfalls worth examining before choosing a solution. Fortunately, the calls for responsible use are growing as well.
With great power comes great risk
For all its value, AI does make mistakes. IT leaders have automated only about 15% of the 50% of strategic planning and execution activities that could be partially or fully automated, leaving a massive swath of business processes open to AI implementation. If even one area of a business’s AI is trained on haphazard data, that segment is likely to exhibit bias or hallucinations. And while issues like bias and hallucinations are well documented, even seemingly benign processes automated with AI models can erode profitability through inaccuracies, insufficient visibility into influential variables, or under-representative training data.
Another often-discussed problem with AI is the lack of transparency into the internal workings of AI models, resulting in “black box” solutions that leave analysts unable to understand how a conclusion was reached. According to McKinsey, efforts to develop explainable AI have yet to bear much fruit. McKinsey also found that the companies seeing the biggest bottom-line returns from AI (those that attribute at least 20% of pre-tax earnings to their use of AI) are more likely than others to follow best practices that enable explainability. Said differently: The greater the financial stakes, the more likely a company is to seek transparency in its AI modeling. The SAS approach to model cards offers a remedy to this problem, enabling executives and developers alike to evaluate model health.
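For illustration, a model card is essentially a structured record that travels with a model, summarizing its purpose, training data, measured performance, and known risks so that reviewers can judge model health at a glance. The minimal sketch below, in Python, is a generic rendering of that idea; the field names and example values are hypothetical and do not represent the SAS model card schema.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """A minimal, generic model card: a structured summary that lets
    reviewers assess a model's purpose, data, and measured behavior."""
    model_name: str
    intended_use: str             # the task the model was built for
    training_data: str            # provenance and known gaps in the data
    evaluation_metrics: dict      # e.g., accuracy, per-group error rates
    ethical_considerations: str   # known risks and affected populations
    last_reviewed: str            # date of the most recent health check

# Hypothetical example for a lending model
card = ModelCard(
    model_name="loan-default-classifier-v3",
    intended_use="Rank retail loan applications for manual review",
    training_data="2018-2023 loan outcomes; underrepresents new-to-credit applicants",
    evaluation_metrics={"auc": 0.87, "false_positive_rate_gap": 0.04},
    ethical_considerations="Potential disparate impact on thin-file applicants",
    last_reviewed="2024-03-01",
)
```

Even this small amount of structure gives executives and developers a shared artifact to review before and after deployment.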
Governments across the globe are also seeking ways to regulate AI development and use. The White House issued an executive order last October identifying safety and security standards for AI development, and it has solicited voluntary commitments from leading AI companies to pursue the responsible development of AI. It has also issued a Blueprint for an AI Bill of Rights aimed at protecting privacy and other civil rights. The European Union’s AI Act, one of the first comprehensive attempts to regulate AI, recently cleared its final hurdle when member states unanimously approved the final text. And SAS was one of more than 200 organizations to join the National Institute of Standards and Technology (NIST) Artificial Intelligence Safety Institute Consortium, launched by the US Department of Commerce in February to support the development and deployment of trustworthy and safe AI.
Regulations alone, however, won’t be enough, because they often lag behind the rapid development of new AI technologies. Regulations can provide a general framework and guardrails for AI development and use, but maintaining that framework will require widespread commitment and cooperation among developers and users of AI. Meanwhile, governments such as that of the United States can also leverage their considerable purchasing power to set de facto standards and expectations for ethical behavior.
Responsible use of AI is built from the ground up
Ensuring the ethical use of AI starts before a model is deployed, in fact before a line of code is written. A focus on ethics must be present from the moment an idea is conceived, persist through research and development, testing, and deployment, and continue with comprehensive monitoring once models are in production. Ethics should be as essential to AI as high-quality data.
It can start with educating organizations and their technology leaders about responsible AI practices. Many of the negative outcomes outlined here arise simply from a lack of awareness of the risks involved. If IT professionals regularly employed the techniques of ethical inquiry, the unintended harm that some models cause could be dramatically reduced.
Raising the level of AI literacy among consumers is also important. The public should have a baseline understanding of what AI is and how data is used, as well as a grasp of both the opportunities and the risks, though it’s the job of technology leadership to make sure AI ethics is practiced.
How SAS Viya puts ethical practices to work
To help ensure that AI is operating in a trustworthy and ethical manner, companies need to consider partnering with data and AI organizations that prioritize both innovation and transparency. In the case of SAS, our SAS Viya ecosystem is a cloud-native, high-performance AI and analytics platform that integrates easily with open-source languages and gives users a low-code, no-code interface to work with. SAS Viya can build models faster and scale further, turning a billion points of data into a clear, explainable point of view.
How does SAS Viya solve for some of the problems facing AI deployment? First, the platform is guided by SAS’s commitment to responsible innovation, which extends to its offerings as well. In 2019, SAS announced a $1 billion investment in AI, a significant amount of which went toward making Viya cloud-first and adding natural language processing and computer vision to the platform. These additions help companies parse, organize, and analyze their data.
Because building a trustworthy AI model requires a robust set of training data, SAS Viya is equipped with strong data processing, preparation, integration, governance, visualization, and reporting capabilities. Product development is guided by the SAS Data Ethics Practice (DEP), a cross-functional team that coordinates efforts to promote the ideals of ethical development, including human centricity and equity, in data-driven systems. The DEP includes data scientists and business development specialists who work with developers, evaluating new features and consulting on solutions that may involve higher risk, such as those for financial services, healthcare, and government. In addition to its foundation of ethics, Viya is built to map across verticals, with usability and transparency at the forefront of design.
SAS Viya platform capabilities
The Viya platform includes technical capabilities designed to ensure trustworthy AI, including bias detection, explainability, decision auditability, model monitoring, governance, and accountability. Bias, for example, has proved to be insidious in AI programs, as well as in a number of public policies, reflecting and perpetuating the biases and prejudices in human society. In AI, it can skew results, favoring one group over another and resulting in unfair outcomes. But training AI models on better, more comprehensive data can help remove bias—and SAS Viya performs best with complex data sets.
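As a concrete, platform-agnostic illustration of what bias detection measures, the sketch below computes the demographic parity gap, that is, the spread in positive-outcome rates across groups, for a batch of model predictions. The data and group labels are hypothetical, and this is one simple fairness metric among many, not a description of Viya’s internals.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups,
    along with the per-group rates. predictions holds 0/1 model outputs;
    groups holds the group label aligned with each prediction."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical predictions for two applicant groups
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5 -- a gap this large would flag the model for review
```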
SAS Viya makes use of econometrics and intelligent forecasting, allowing IT leaders to model and simulate complex business scenarios based on large quantities of observational or imputed data. To check data quality and the real-world outcomes of a given AI model, a technology executive need only run Viya’s forecasting tools and review the projected outcomes. Another safeguard within the platform is its decisioning features, which help IT pros react in real time to model results. Using decisioning processes built with a drag-and-drop GUI or written in code, developers can create centralized repositories for data, models, and business rules that guide accuracy and ensure transparency. Custom business rules, written by humans in SAS Viya, lead to faster deployment and confidence in the integrity of model-driven operational decisions.
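To make the pattern of centralized, human-written business rules concrete, here is a minimal, hypothetical sketch in Python of a decision step that applies ordered rules to a model’s score. It illustrates the general technique, not SAS Viya’s actual decisioning API; the rule names and thresholds are invented for the example.

```python
def decide(application, model_score, rules):
    """Apply ordered business rules to a model score. The first matching
    rule wins, so every decision carries an auditable reason."""
    for name, condition, outcome in rules:
        if condition(application, model_score):
            return outcome, name       # decision plus the rule that fired
    return "refer", "no-rule-matched"  # fall back to human review

# Hypothetical rules, maintained in one repository alongside the model
RULES = [
    ("hard-decline-low-score", lambda app, s: s < 0.2, "decline"),
    ("auto-approve-high-score", lambda app, s: s > 0.9 and app["income"] > 40_000, "approve"),
]

decision, rule = decide({"income": 55_000}, model_score=0.95, rules=RULES)
print(decision, rule)  # approve auto-approve-high-score
```

Keeping the rules in one place, rather than scattering thresholds through application code, is what makes model-driven decisions both fast to update and easy to audit.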
Some examples of how Viya has been used to improve operations for organizations:
- The Center for NYC Neighborhoods and SAS partnered to analyze inequities in the city’s housing data and revealed disparities in home values, purchase loans, and maintenance violation reports that put people of color at a disadvantage.
- SAS and the Amsterdam University Medical Center trained a SAS Viya deep learning model to instantly identify tumor characteristics and share vital information with doctors to accelerate diagnoses and help determine the best treatment strategies.
- Virginia Commonwealth University is using Viya to automate manual, time-consuming data management, analytics, and data visualization processes to accelerate research into higher cancer mortality rates among low-income and vulnerable populations.
AI has the potential to transform the global economy and workforce. It can automate routine tasks, improve productivity and efficiency, and free up humans to do higher-purpose work. AI has helped to achieve breakthroughs in health care, life sciences, agriculture, and other areas of research. Only the most trustworthy AI models, ones that prioritize transparency and accountability, will be responsible for these kinds of breakthroughs in the future. It’s not enough for one platform like Viya to get responsible AI right—it must be industry-wide, or we all fail.
Trustworthy AI requires a unified approach
To judge from the most extreme projections of its potential impact, AI represents either the dawn of a new era or the end of the world. The reality lies somewhere in between: AI offers revolutionary benefits but also poses significant risks. The key to reaping the benefits while minimizing the risks is responsible, ethical development and use.
That will require cross-functional teams within industry and cross-sector initiatives involving industry, government, academia, and the public. It will mean involving non-technologists who understand the risks to vulnerable populations. It will mean using technologies like SAS Viya that help organizations reach their responsible AI goals. And it will require thoughtful regulations that establish consistent guardrails, protect citizens, and spur innovation.
But above all, responsible, trustworthy AI requires us to pursue AI advancements ethically, with a shared vision of reducing harm and helping people thrive.
Reggie Townsend is vice president of the Data Ethics Practice at SAS.
—
Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.