Governance for AI (Or Lack Thereof)

While the practical application of Generative AI is attracting plenty of buzz these days, its adoption raises issues around both the content used to build Large Language Models (LLMs) and the content generated by Artificial Intelligence.

In July 2023, HP, together with Intel and Microsoft, issued a new Point of View white paper entitled “Governance for AI (Or Lack Thereof).” In it, authors Jeff Malec and Bruce Michelson provide valuable perspectives on the risks associated with adopting AI technology in commercial environments and the necessity of strong governance in addressing those risks.

They describe AI as something of a “double-edged sword” because it exposes the tension between the technology’s efficiency gains and concerns about the accuracy, ownership, privacy, and even ethics of its output.

You are strongly encouraged to read the full white paper and share it with your customers. In the meantime, here’s a summary.

GOVERNANCE IS LAGGING TECHNOLOGY

Innovation is happening rapidly, but governance is not keeping pace. For example, while strong governance exists for established communications tools such as corporate email and correspondence, governance related to archiving and retention is weak for newer technologies like texting and instant messaging.

Whether due to budgetary constraints or a ‘lack of hope,’ half-hearted efforts to manage new technology, or at least to slow it down, can lead to ‘tacit acceptance’ that innovation is ‘ungovernable.’

AI GOVERNANCE IS CRUCIAL TO ADDRESS RISKS

Implementing AI in an organization poses a range of risks that demand careful governance:

Accuracy: The output of Generative AI (e.g., narratives, images) may appear perfectly legitimate, but in fact be inaccurate. Governance in the form of fact-checking or due diligence is necessary to confirm the output’s accuracy.

Ownership rights: Content creators may be concerned that their original works are being used without permission to build LLMs, so organizations must push for clarity around who owns Generative AI’s output.

Privacy: If an organization is not completely clear on how an LLM could use confidential corporate data, private information may leak out. Strict governance is necessary to ensure that corporate Intellectual Property and Personally Identifiable Information remain safeguarded (a minimal illustration follows this list).

Ethics and Disclosure: Organizations must prevent Generative AI from creating ‘deep fakes,’ and they ought to disclose to their customers if a seemingly expensive project was in fact generated quickly and cheaply with AI.

Employee Stewardship: Lacking real governance, many organizations rely on their employees to make the right choices. But employees are not security experts, so it is inevitable that some will accidentally click on risky emails, access dangerous websites, download harmful programs, or leverage AI when they should not. IT needs to consider management approaches such as Zero Trust to ensure security. AI content must be reviewed and, if problematic, addressed with security countermeasures.
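
To make the privacy and review points above concrete, here is a minimal sketch, in Python, of the kind of pre-submission gate a governance policy might mandate before an employee’s prompt reaches an external Gen AI service. Everything in it, the regex patterns, the function names, and the send_to_llm placeholder, is invented for illustration and is not from the white paper; a real deployment would rely on a vetted data-loss-prevention service rather than ad-hoc patterns.

import re

# Hypothetical patterns for the kinds of Personally Identifiable
# Information a policy might flag; a real deployment would use a
# vetted classification service, not ad-hoc regexes like these.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Replace detected PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, findings

def submit_prompt(prompt):
    """Gate a prompt before it leaves the organization.

    send_to_llm is a stand-in for whatever external Gen AI service
    the organization has approved; it is not a real API.
    """
    clean_prompt, findings = redact_pii(prompt)
    if findings:
        # Governance hook: record the incident so reporting tools and
        # countermeasures (see the closing section) can act on it.
        print(f"Policy notice: redacted {', '.join(findings)} before submission.")
    return clean_prompt  # in production: send_to_llm(clean_prompt)

print(submit_prompt("Email jane.doe@example.com, SSN 123-45-6789, the draft."))

In practice such a check would more likely live in a network proxy or data-loss-prevention layer than in application code, but the governance principle is the same: confidential data is inspected, and the incident recorded, before anything leaves the organization.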

CLOSING THOUGHT: ADOPT AI WISELY

Adopting AI must be a conscious decision, and ensuring security, IT’s primary mission, begins with strong governance led by the organization’s CISO.

  • Governance requires management tools for reporting and for enabling countermeasures, which will likely be incremental budget items (a minimal reporting sketch follows this list)
  • Governance is necessary to enforce, and report on, required end-user AI training
  • Ungoverned use of AI essentially constitutes a cyberattack
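
As a minimal illustration of the first bullet, here is a hypothetical sketch, again in Python, of the kind of audit record a reporting tool might emit for every AI interaction. The field names and values are invented for this example; a real deployment would forward events to a SIEM or similar audit store rather than print them.

import json
from datetime import datetime, timezone

def log_ai_event(user, tool, action, flagged):
    """Emit one audit record per AI interaction (hypothetical schema)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,        # who used the tool
        "tool": tool,        # which Gen AI service was involved
        "action": action,    # e.g. "prompt_submitted", "output_reviewed"
        "flagged": flagged,  # did a policy check trip on this event?
    }
    line = json.dumps(event)
    # A real deployment would append to a SIEM or audit store;
    # printing stands in for that here.
    print(line)
    return line

log_ai_event("jsmith", "approved-llm", "prompt_submitted", flagged=False)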

AI is a powerful tool that bad actors can, and will, use to deliver misinformation, breach security and otherwise harm the organization.

If for no other reason, AI must be governed.