Board oversight of AI

Considerations like these make it essential to balance efficiency and cost savings with long-term talent needs and the social license to operate. Companies still need entry-level workers—just not for the same tasks. The next generation, especially those who have been exploring AI since it first became widely available, brings fresh perspectives and a deep understanding of digital tools that can help organizations make the most of AI itself.

Board members can set the expectation that the firm’s human capital strategy must include rethinking what “entry-level” roles mean, so the company can take full advantage of what that talent has to offer. Refocusing younger talent on optimizing processes for AI and overseeing “digital workers” would take advantage of their digital savvy, while simulation-based learning, rotational assignments and apprenticeships with AI-augmented workers would develop their ability to bring context, judgment and creativity to the job. As AI erases differences in how well basic tasks are executed, it is this human judgment and creativity that will drive competitive advantage.

What boards should do

  • Critically examine how management is balancing cost savings against the risks of long-term talent erosion, potential customer and investor backlash, and regulatory action.
  • Tie executives’ compensation to how well they blend AI with human skills, using metrics such as employee engagement scores in hybrid teams of humans and AI agents.
  • Set KPIs for the business around junior worker development, retention and advancement. 

3. You can’t automate accountability

Human accountability and judgment remain central to protecting reputation and performance. AI promises speed, scale and smarter decisions, but it isn’t perfect. Take AI’s well-known potential for bias and hallucinations. A 2025 EY analysis of Fortune 100 10-K risk disclosures found that about 1 in 5 companies (22%) now flag AI hallucinations, inaccuracies, misleading outputs, misinformation, disinformation or bias as material risks.9 Less visible but still troubling is the phenomenon researchers have dubbed “workslop”: employees using AI to produce work that’s highly polished but inaccurate or lacking in substance.10 The damage goes beyond reduced productivity and quality; failing to apply critical thought to AI’s outputs, and to challenge them, also increases risk exposure.

The stakes are high. It’s no secret that organizations have lost both money and reputation due to careless AI use or unreliable AI behavior.11 In some cases, such as AI mistakes leading to unwarranted arrests or criminal convictions, these lapses have led directly to human harm.12  

The common thread running through these risks is that humans remain accountable for the work they ask AI to do. Overseeing ethical AI is only the beginning; directors must also encourage management to consider quality and liability. Individually, employees must carefully evaluate AI outputs and use AI to improve their work, not as a substitute for it. And organizationally, management must put safeguards in place to limit the risk of harm, use practices like robust red teaming and third-party assessments to test AI for unintended behaviors,13 and define how accountability will be assigned when AI-based outcomes go wrong.14
