Ascend to new heights with AI guardrails

 

This content originally appeared in NACD Directorship Q4 2024 as "Embracing AI Governance Through Guardrails"

 

Strong AI governance prioritizes people, systems and ethics

 

Boards are contending with the accelerated reality of artificial intelligence (AI) integration into business and the numerous governance, strategy, and risk implications it brings. Twelve months ago, most boards were just beginning to have conversations among themselves and with management about AI governance. After the rapid AI advancements of 2024, there is now tangible urgency to the questions boards are asking about what’s next in AI governance.

 

Companies are implementing and testing many different AI tools in pockets, and so are individuals, whether or not it’s on management’s radar. Directors are reading case studies and pondering how AI applies to the businesses they serve. Countries around the world, and even some US states, are designing and enacting legislation to keep pace with the ever-expanding range and speed of AI-powered technologies, creating a complex regulatory patchwork. Meanwhile, management is eager to capture the value that AI can bring and may need a nudge to ensure use cases and processes minimize risk and are fit for purpose. All of this underscores the importance of boards leaning in and having robust governance in place for the implementation and use of AI. What boards should consider as companies scale AI varies by organization, depending on factors such as maturity level, industry, and management’s ability to manage change.

 

Having guardrails in place that align with the core issues addressed in the Committee of Sponsoring Organizations of the Treadway Commission’s (COSO’s) Enterprise Risk Management Framework, or another well-understood risk management framework, at the start of a company’s AI journey underpins effective governance. Guardrails can serve as the foundation for effective short- and long-term communication, keeping the board and management aligned as the technology continues to advance and create unforeseen dynamics. Boards must begin by ensuring that they, both as a collective body and as individuals, have the appropriate level of AI competence, and they must focus on continual upskilling. Directors also need to verify that there is strong philosophical alignment between the board and management on the people and culture strategy for AI; that management has ensured the data and systems supporting AI are fit for purpose while maintaining security and privacy standards; that change management programs are in place to smooth the transition; and that alignment with goals and ethics underpins it all.

 
 

 


Fostering AI-ready culture

 
 

The rapid acceptance of AI has shifted organizations into a new reality for how people and culture are managed. The future is here, and perhaps more than in any other facet of the business, AI’s impact on employees requires thoughtful consideration. Employees are understandably concerned about what the technology will mean for their employment and the nature of their jobs, today and tomorrow.

 

That’s not to say fear outweighs optimism. A majority (84%) of employees believe AI can have a positive impact on their careers, and different generations want to be a part of the future: roughly two-thirds of workers over the age of 55 say they are interested in AI training, according to an Amazon Web Services report. It went on to say employers are willing to pay a higher salary for workers who possess AI skills in information technology (47%); marketing and sales (43%); finance (42%); business operations (41%); legal, regulatory, and compliance (37%); and human resources (35%). As directors navigate this shifting terrain, guardrails can help them provide guidance and reinforce the importance of a human-centric approach to AI governance.

 

Assess the organization’s current AI attitudes, understanding, and acceptance. Although AI is already being used in numerous organizations, many employees may not have had the training necessary to understand how to use it safely and effectively. Boards will need to manage through a skills gap that may take three to five years to fill, and even that timeline is not certain: How quickly will a company find the right use cases? How quickly will the technology continue to advance, and in what directions? How long before the educational community can catch up and align with employment needs?

 

To address the skills gap, boards may want to ask management about taking a skills-first approach to the workforce through hiring and training opportunities that allow people to retain their positions or move up in the organization. Grant Thornton’s HR Leaders survey report indicates that development and training opportunities are second only to benefits in convincing employees to join an organization. It goes on to say that attracting top talent is directly tied to investment in human resources (HR): With a limited talent pool, HR must invest in streamlining and automating hiring processes, sharpening its sourcing strategy, and building hybrid capabilities. Doing so means investing top talent in HR itself, a function that, according to Gartner’s HR Investment Trends for 2023, has historically been under-resourced.

 

Support smart investments to redefine productivity with AI training that follows cultural shifts. With the goal of deploying AI to augment employee performance, forward-looking companies are offering training that gets people up to speed on how to use the technology efficiently, effectively, and safely. Training that provides the right degree of upskilling will not overwhelm the workforce and can help protect the desired cultural tone.

 

It’s also important to prioritize training alongside technology investment. If AI is rolled out without proper training, risks immediately follow. Do employees know that AI is available and what it is meant to do? Can they write prompts that get to the information they are looking for? Are they aware of the risks of misuse, such as sharing sensitive client data or trusting that AI serves up the right answer without human oversight?

 

Training on its own isn’t enough; organizations must provide employees with access to generative AI tools that won’t expose sensitive data to the public, but that will enable employee experimentation and productivity enhancement. Providing access to a generative AI tool on top of a private vector database will enable employees to work with sensitive data sets.
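
To make that architecture concrete, below is a minimal, self-contained sketch of the pattern: retrieval from a private vector store feeding a generative model. Everything here is illustrative; the toy embed and generate functions stand in for whatever internally approved embedding and generation endpoints a company actually deploys, and a production system would use a real vector database rather than an in-memory list.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy hashed bag-of-words embedding; a real deployment would call the
    # company's approved, internally hosted embedding model instead.
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word.strip(".,:?!")) % 64] += 1.0
    return vec

def generate(prompt: str) -> str:
    # Stand-in for an internally hosted generative model endpoint.
    return f"[model response grounded in a {len(prompt)}-character prompt]"

class PrivateVectorStore:
    """In-memory stand-in for a private vector database of sensitive documents."""
    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vectors.append(embed(doc))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        # Rank stored documents by cosine similarity to the query.
        sims = [float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)
                for v in self.vectors]
        top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
        return [self.docs[i] for i in top]

store = PrivateVectorStore()
store.add("Internal policy: client data may not leave approved systems.")
store.add("Q3 sales playbook for the industrial products division.")

question = "What does our policy say about client data?"
context = "\n".join(store.search(question, k=1))
# Only the retrieved internal excerpt, never the whole data set, is placed
# in the prompt sent to the generative model.
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```

The design point for boards is simply that the sensitive documents, and the index that searches them, stay inside company-controlled infrastructure; only narrowly retrieved excerpts ever reach the model.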

 

Change management should also be a part of the investment. Many jobs will transform, some new jobs will be created, and some jobs will be eliminated. The company, the industry, or even the board itself may undergo disruption. Companies that champion change from the leadership level will help drive buy-in, preparing and supporting their people so they can adjust and thrive.

 

Guide the company to safely deploy AI in hiring and HR functions. AI vendors are working hard to eliminate unintended bias in their products, but ultimately, responsibility for safe, legal deployment falls on the company and management using the tool. It may come as no surprise to directors that regulatory agencies, aware that companies are using AI systems, are attuned to risks that may be inherent in these early days of adoption and the electronic paper trail the tools create.

 

The US Department of Labor recently published “Artificial Intelligence and Equal Employment Opportunities for Federal Contractors,” which provides guidance on the use of AI in hiring and employment practices. While intended for federal contractors, it serves as foreshadowing for other companies and boards. Contractors using AI must comply with existing nondiscrimination laws, and they have “obligations to maintain records related to AI tools, provide reasonable accommodations to contractors’ use of automated systems,” according to the guidance. It also addresses issues around scheduling, timekeeping, and tracking, and the need to apply human oversight to automated systems. Keith Sonderling, commissioner of the Equal Employment Opportunity Commission, is cited in an interview with Politico as saying, “Before, you had one person potentially making a biased hiring decision. AI, because it can be done at scale, can impact hundreds of thousands or millions of applicants.” He went on to emphasize that this is a civil rights issue, saying, “The stakes are going to be higher.” Directors can use this information to inform their oversight and avoid exposure to unintended instances of hiring bias or discrimination.

 
 

Fostering AI-ready data and systems

 
 

At the heart of any discussion are the fundamental concepts surrounding AI systems and data, and ensuring they work together to safely achieve the desired objectives. The systems provide the underlying infrastructure and models, and the data trains them to recognize patterns, perform tasks, and even make decisions. Data and systems represent a particularly risky pillar of AI: Maintaining data quality, privacy, and security is one of the biggest challenges leaders face with the technology. Directors rightly look inward so they can help management meet challenges head-on and keep fear from stifling progress; if anything, fear of missing out can provide a strong impetus to proceed. But directors need to be careful not to let that fear lead them to believe their AI solutions will be the same as everyone else’s. Use cases for AI can differ dramatically based on industry, strategic objectives, and other variables, so there’s no single solution that will fit every company’s needs.

 

“I’ve encountered directors who are very curious and very interested to learn how AI can boost success in their company, while their management teams are less so,” said Tony Dinola, Technology Modernization Services Principal at Grant Thornton. “Educated boards can provide management with insights to support them in embracing the technology and finding ways to establish controls to mitigate risks. Blacklisting or filtering out AI tools simply moves employees to use AI tools they find on their own, introducing significant vulnerabilities, particularly around cyber and data security and ethics.”

 

In providing oversight of systems and data, boards should consider these areas:

AI alignment with strategic priorities. This strategic consideration is a critical first step when introducing a new technology tool. Frameworks exist to help guide communication when discussing the strategic approach and direction and defining what success will look like. For the best results, though, it’s important to apply AI where it can make a difference on the most impactful, prioritized actions in the business rather than in remote areas where the technology will not have a significant effect.

 

Determining the approach to AI investment. Choosing between building and buying AI solutions is a decision that affects strategy, resources and outcomes. While this is a decision made by management, directors will want to understand the key factors to consider so that they can ask the right questions.

 
 

Since AI is new, a pilot process that measures return on investment (ROI) may be a useful first step when deciding on a use case. Based on the results of the pilot, leadership can decide whether to build out a full implementation or refocus on a different use case.
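
As a purely illustrative aside, the ROI arithmetic behind such a pilot is simple; the figures below are hypothetical.

```python
# Hypothetical pilot figures for illustration only.
pilot_cost = 200_000        # tooling, integration, and training for the pilot
measured_benefit = 260_000  # e.g., hours saved, valued at loaded labor rates

roi = (measured_benefit - pilot_cost) / pilot_cost
print(f"Pilot ROI: {roi:.0%}")  # Pilot ROI: 30%
```

The harder governance question is not the division but what counts as measured benefit, which is why agreeing on the measurement approach before the pilot starts matters.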


 

Getting ahead of risky behavior. Directors recognize that AI is a reality their companies need to embrace. While strategic investment decisions are being made and the technology evolves at an astounding pace, companies that create a safe outlet as an initial baseline can help prevent “shadow” AI — the unauthorized use of AI systems in the organization. Without an internally available AI source, some employees will use their personal devices and whatever AI technology is available externally. Prohibition will not work: A study from the Salesforce Generative AI Snapshot Research Series found that more than half of employees who use generative AI at work do so without the formal approval of their employers. “To fulfill their fiduciary duties, directors will want to encourage management to embrace AI and build a framework around it to protect the company from the risks they fear,” Dinola said.

 

A retrospective view of success. Implementing measures of success keeps a sharp focus on strategic priorities, goals, and opportunities for improvement. Boards can guide management to ensure that there is a mechanism or function in place for ongoing monitoring and reporting that evaluates any risks that arise, as well as alignment with corporate strategic priorities. Measures of success can form the basis of a continuous feedback loop for ongoing improvement of AI models.

 
 

Fostering AI goals and ethics

 
 

Boards can be a steadying force by ensuring, first and foremost, that all opportunities are balanced against the organization’s goals, ethics, and risk oversight, which must remain at the heart of all AI investments and implementations. A lack of centralized oversight and the absence of clear guidelines or policies (or a lack of awareness of them) can lead to unintended outcomes with significant legal, reputational, or compliance consequences.

 

 

 

Assess maturity

 

Good governance is linked to an understanding of the company’s level of AI maturity. “Directors will want to be listening for signals that indicate the company’s progress on its AI journey,” said Adam Bowen, Growth Advisory Services Managing Director for Grant Thornton. “These signals indicate if the company is experimenting, formalizing, ready to accelerate, or transform.” Boards can gauge their organization’s maturity based on these signals:

 

Democratization versus experimentation: Do efforts tend to be localized, using small data sets that pose limited risk? Are there early adopters and self-learners running experiments with limited management approval? Is leadership aware but expressing fear?

 

Formalization: Are there cross-functional conversations across lines of business? Are AI projects transitioning to production deployment? Is there a concern about lack of guardrails and a governance model?

 

Reaching scale: Are use case audits taking place? Are there efforts to ensure data quality? Are build or buy decisions taking place?

 

Transformation: Is the governance model continuing to mature to protect against risks around data, ethics, and privacy? Are there production-grade models and oversight in place? Are they measuring ROI?

 

 

 

Leverage AI’s potential

 

AI projects that involve diverse stakeholders can pose alignment challenges with organizational goals due to varying priorities and experience levels. Directors need assurance that these projects align with their organization’s vision, mitigate the risks of this emerging technology, and contribute to financial and strategic objectives.

 

As business challenges evolve, it’s vital to manage productivity shifts to maintain alignment with business goals. While automation is appealing, understanding both the benefits and the risks of AI integration is crucial: AI is not a panacea, and it brings inherent ethical issues and biases. Implementing guardrails helps ensure that solutions align with organizational and ethical priorities. Boards should press for insight into these challenges to maintain transparency in management’s AI approach.

 

 

 

Compliance with ethical values

 

The ramifications of an unintended error or incorrect outputs can be substantial, and responsible AI practices can significantly reduce the risk of negative consequences related to AI.

 

Microsoft’s Responsible AI Framework provides a foundation for AI use that is ethical and consistent with organizational values. Designating a recognized owner for AI governance can improve oversight by centralizing the responsibility for ethical and regulatory compliance, avoiding unnecessary risks, and advancing progress toward strategic goals. Some organizations are naming a chief AI officer (CAIO), a trend that may gain traction now that federal agencies are required to designate CAIOs to “ensure accountability, leadership and oversight for the use of AI in the federal government,” according to the Office of Management and Budget’s Policy to Advance Governance, Innovation, and Risk Management in Federal Agencies’ Use of Artificial Intelligence. “There should be an owner or a select committee that makes sure all the risks associated with AI from an assessment and ethics perspective are contemplated,” said Ethan Rojhani, Risk Advisory Services Principal at Grant Thornton. “They can add the AI-specific elements to their structure tailored to the unique risks of having a machine potentially making decisions.”


 

Directors are accustomed to overseeing management’s use of frameworks, guardrails and risk assessments to manage risks that threaten compliance with ethical standards, laws, and regulations. Although AI has many risks in common with other digital technologies, it also introduces elevated risks and unintended consequences. “Boards will want to ensure management is acting in concert with ethical and responsible practices, assessing potential risks that compromise compliance with ethical norms, laws and regulations,” Bowen said.

 

Boards should expect to see a framework mapped to existing standards, such as COSO’s Enterprise Risk Management Framework, the National Institute of Standards and Technology’s AI Risk Management Framework, or the Microsoft Responsible AI Framework, that enables management to identify risks and opportunities and think critically about the potential consequences. Below is Grant Thornton’s principles-based framework, which maps risks and controls into broader risk management frameworks.

 

Accountability & responsibility

Ownership and responsibility are clearly defined for governing the use of AI technology.

Transparency

AI systems are explainable to stakeholders; users are informed of use of AI and potential impacts.

Fairness

AI systems are set up with minimized bias or favoritism to ensure equitable outcomes for all.

Security & safety

AI systems are protected from logical and physical threats to ensure the integrity of the AI system.

Privacy

AI systems are compliant with privacy controls to protect individuals' privacy rights.

Reliability & resiliency

AI systems are reliable and resilient to enable business and minimize the impact of disruptive events.
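
For illustration only, such a framework can be encoded so that principles, controls, and their mappings are trackable over time. The sketch below borrows the principle names from the framework above, but the control IDs and framework mappings are invented placeholders, not Grant Thornton’s actual mappings.

```python
from dataclasses import dataclass, field

@dataclass
class Principle:
    name: str
    description: str
    controls: list[str] = field(default_factory=list)   # hypothetical control IDs
    mapped_to: list[str] = field(default_factory=list)  # hypothetical mappings

framework = [
    Principle("Accountability & responsibility",
              "Ownership is clearly defined for governing AI use",
              controls=["AI-GOV-01: named AI owner",
                        "AI-GOV-02: board reporting cadence"],
              mapped_to=["COSO ERM: Governance & Culture"]),
    Principle("Transparency",
              "AI systems are explainable; users are informed of AI use",
              controls=["AI-TRN-01: model documentation for deployed systems"],
              mapped_to=["NIST AI RMF: Govern, Map"]),
    # Fairness, Security & safety, Privacy, and Reliability & resiliency
    # would be encoded the same way.
]

# Flag principles that lack at least one implemented control.
gaps = [p.name for p in framework if not p.controls]
print("Principles with no controls:", gaps or "none")
```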

 

The right education and training for boards can help them fulfill these new AI oversight responsibilities.

 

“In our experience, directors perceive AI as a huge opportunity rather than a malicious threat, and they are hungry to become more knowledgeable on the benefits and risks,” Rojhani said. “There are a multitude of options for AI training, but because not all directors are regular users of leading technologies, training should be hands-on, practical, and tailored to board members, with less emphasis on technical or implementation details.”

 

Armed with the right training, board members will be better prepared to ask management the right questions related to AI governance. These may include:

 

 

 

General

  • Can management articulate what the AI strategy is and how it aligns with the overall corporate strategy?
  • What is the maturity level of our AI deployment, and are we taking steps, such as developing a center of excellence, to ensure consistency in AI use across the organization?
  • Has management developed strategies to clearly signal to employees the acceptable AI use?

 

 

People and culture

  • What measures are being taken to identify and address gaps in talent, training, and culture, and how successful are we in hiring for AI skills or identifying candidates with strong AI capabilities?
  • Is management delivering clear messaging so that employees understand what AI adoption means for their jobs? How confident are we that our message is both heard and trusted?
  • Have we established AI best practices that reflect our human-centric strategy, and do we have safeguards in place to prevent AI bias in hiring and other business practices?

 

 

Data and systems

  • How are we evaluating success where AI has been deployed, and what level of success have we had?
  • Is management employing a framework for AI governance? If so, which framework?
  • What data is being used to train our AI systems, how are we ensuring the quality of this training data, and what measures is management taking to protect the organization’s proprietary data?

 

 

Goals and ethics

  • How are we identifying benefits and ethical risks for each AI use case, justifying the AI spend on nonfinancial initiatives, and prioritizing financial efforts based on projected ROI, future strategic positioning, or other criteria?
  • What measures are in place to address AI readiness gaps and ensure transparency, reliability and accountability in AI-supported operations?
  • Has management accounted for potential legal, ethical, or compliance risks? If so, can management share those discussions and considerations with the board?

Directors have faced similar challenges before. Tried-and-true tools such as a proven framework, relevant guardrails, strategic prioritization, specific performance metrics, and board oversight will help organizations navigate the new age of AI.

 

While the regulatory environment continues to evolve as local, state and international rules emerge, the board’s oversight role takes on elevated significance in ensuring responsible AI research, development and deployment. Directors are in a position to ensure proper risk management and accountability.

 

Directors can provide oversight that helps AI support the organization’s people and culture, remain consistent with its ethics and values, and rely on data and systems that operate effectively and are carefully controlled. Boards also need to hold management accountable for taking full advantage of AI opportunities in a responsible way. Boards that take a positive view of AI’s potential while exercising vigilant risk management will see their organizations capitalize fully on the opportunities created by this transformative technology.

 
 

 

 
