A Step-by-Step Guide to Building AI Governance
I’ve recently had the chance to talk with numerous organization leaders about their views on, and readiness for, the use of Artificial Intelligence. At least half of the leaders I talked to indicated one or more of the following:
“We don’t really understand the risks.”
“We are not managing shadow AI use.”
“We are piloting, but not learning from the pilot.”
“We don’t know about the tools available to govern AI.”
“We are just starting to write guidelines or policies.”
Based on feedback from those who were successful, and on best-practice recommendations from providers such as Microsoft, the following approach should help you build a solid foundation for progressing both the use and governance of AI.
Educate your team on AI-related risks and rewards.
Structure draft guidelines, policies, or procedures.
Eliminate shadow use of AI by offering safer tools.
Familiarize yourself with, and begin to leverage, AI governance tools.
Run pilots, proofs of concept, QuickStarts, or slower-paced rollouts.
Educate Your Team on AI-Related Risks and Rewards
The first part of the journey is to increase awareness of the new challenges and risks that AI may introduce. As AI increasingly enters the public arena and becomes a topic of conversation in the workplace, you’ll likely find that many people are already aware of AI risks and have concerns, formed opinions, or outright fears about a future with AI.
I’ve collected numerous publicly available resources at the end of this article that may be useful for your research.
Consider AI-risk and ethics resources from a technology provider. Microsoft, for example, lists its own Responsible AI Principles and offers a short training module on AI ethics and governance.
Facilitate a discussion with your leaders, including the CEO, CIO, security team, legal team, and HR leaders who may have entrenched ideas around risk.
It's crucial to consider the broad group of stakeholders within your organization affected by AI. Employees may worry about job displacement or changes to their work. Groups such as employee unions will be concerned about significant changes to job descriptions or workforce design. Highly trained professionals such as doctors, engineers, and lawyers may have unique concerns, from specialization-specific risks to the worry that the organization is not adopting AI quickly enough.
Even if early AI use cases don't significantly change current workflows, it's important to recognize that AI is moving towards disrupting routine, analytical, and creative jobs. Therefore, long-term risks like workforce displacement should still be addressed in conversations.
Structure Draft Guidelines, Policies, or Procedures
Without an official stance on AI, you should assume that users are operating within their own limited understanding of AI risk and may have already introduced risk to your organization. As a stopgap, you may want to remind people of existing policies such as acceptable use, privacy, or third-party product policies that should be adhered to.
After discussing risks, the next steps should be to:
Understand the law that governs your organization. This may include reviewing federal, provincial, or municipal laws. Reading and understanding the EU AI Act may also help you anticipate what future law may look like in jurisdictions that are currently behind.
Ensure you have a good sense of the values that will drive your approach. Governance in AI is not a “one size fits all” problem. Innovation and speed of progress will have a natural tension with risk management and oversight methods.
Review your existing policies, such as those mentioned above, and consider revisions in areas where there are gaps.
To ground your approach, review the differing approaches used by public entities. The guidelines developed by the City of Boston and the policies created by the City of Seattle are examples of very different governance approaches.
Leverage available templates and resources, such as the GovAI Coalition content created by the City of San Jose, and draft your policies for review.
One important consideration is indexing your governance to the varying degrees of AI risk. In other words, some applications of AI – especially those with little to no impact on humans – may be able to move forward with little to no oversight. Applications that affect people directly, such as AI used in hiring or lending decisions, and higher-impact uses still, such as AI in healthcare decision-making, call for a tiered, risk-based approach to governance.
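To make that tiering concrete, here is a minimal Python sketch of how a use-case register might index oversight requirements to risk. The tier names and controls are illustrative assumptions (loosely echoing the risk-based structure of the EU AI Act), not a standard; tune them to your own values and obligations.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1   # little to no impact on humans, e.g., meeting summaries
    MODERATE = 2  # indirect impact, e.g., internal knowledge search
    HIGH = 3      # direct impact on people, e.g., hiring or lending decisions
    CRITICAL = 4  # safety-of-life impact, e.g., healthcare decision support

# Illustrative controls per tier; tune these to your organization's values
# and the regulations that apply to you.
OVERSIGHT = {
    RiskTier.MINIMAL:  ["acceptable-use policy applies"],
    RiskTier.MODERATE: ["usage monitoring", "periodic spot checks"],
    RiskTier.HIGH:     ["human-in-the-loop review", "bias testing",
                        "documented approval before launch"],
    RiskTier.CRITICAL: ["formal risk assessment", "human-in-the-loop review",
                        "continuous monitoring", "executive sign-off"],
}

@dataclass
class UseCase:
    name: str
    tier: RiskTier

    def required_controls(self) -> list[str]:
        return OVERSIGHT[self.tier]

if __name__ == "__main__":
    for uc in (UseCase("meeting summarization", RiskTier.MINIMAL),
               UseCase("resume screening", RiskTier.HIGH)):
        print(f"{uc.name}: {uc.required_controls()}")
```

The point of a register like this is that the low-risk path stays fast while the high-impact path automatically picks up heavier oversight, so innovation and risk management stop competing for the same gate.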
Eliminate Shadow Use of AI by Offering Safer Tools
Generative LLMs have attracted a great deal of attention and are the most common form of “shadow IT.” One effective measure to mitigate privacy, intellectual-property, and infringement risks may be to purchase corporate-safe options for users ahead of fully formed governance policies. To reduce the use of public generative LLMs, for example, consider:
A Microsoft Copilot license.
OpenAI’s ChatGPT Enterprise or Team licenses.
These options are designed to keep your content out of public model training and encrypted within your organization’s boundary, and they may indemnify you against claims arising from the content the models produce. They may also offer a level of reporting and account management so you can configure their use and lower risk.
One other benefit of using Microsoft Copilot is the inclusion of governance tools, such as SharePoint Advanced Management (SharePoint SAM), that help you assess your content’s readiness for AI, monitor use, and use AI to flag areas of concern.
Familiarize Yourself with and Begin to Leverage AI Governance Tools
Before running POCs and pilots, it’s important to understand the governance tools you may already have access to (a small reporting sketch follows the list below). Tools such as:
Microsoft 365 Copilot Impact Report – This report will give you a sense of who is using Copilot and how. It may also provide the background data you need for a business case.
Purview Audit – Purview Audit features will allow you to track changes to configuration and review AI-generated content that is flagged by Purview (e.g., possible inappropriate content).
SharePoint Oversharing Reports – SharePoint Oversharing Reports can help you identify security issues that may create an oversharing situation for someone using Copilot or searching for content.
SharePoint SAM – SharePoint SAM can be configured to limit the content areas that Copilot can include in grounding, reducing oversharing risk.
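As a simple illustration of putting these reports to work, here is a minimal Python sketch that summarizes Copilot activity from an exported audit log. It assumes a CSV export with UserIds and Operations columns (the typical shape of a Microsoft 365 audit export) and that Copilot events carry a “CopilotInteraction” operation name; verify both against your own tenant’s export before relying on it.

```python
import csv
from collections import Counter

def summarize_copilot_use(path: str) -> Counter:
    """Count Copilot interactions per user from an exported audit CSV."""
    users = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # "CopilotInteraction" is the operation name assumed for Copilot
            # events; confirm it matches what appears in your export.
            if "CopilotInteraction" in row.get("Operations", ""):
                users[row.get("UserIds", "unknown")] += 1
    return users

if __name__ == "__main__":
    # "AuditLog.csv" is a hypothetical export file name.
    for user, count in summarize_copilot_use("AuditLog.csv").most_common(10):
        print(f"{user}: {count} Copilot interactions")
```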
Run Organization-Focused Proofs of Concept (POCs) and Pilots
The best approach to understanding both the value and risks related to AI is to build a learning approach that includes the use of generative AI tools and applies them to your organizational context. Generally, this involves:
Organizing a small group of early adopters.
Applying governance (such as monitoring and reporting) in areas that may present risk.
Ensuring users are properly engaged with educational resources (e.g., how to prompt, how the AI works, etc.) and have some mechanism for feedback.
Organizing the feedback from this early-adopter group and using it to decide the next step of adoption (a minimal sketch of one way to structure that feedback follows this list).
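As one illustration, here is a minimal Python sketch of how pilot feedback might be captured and summarized for the go/no-go discussion. The schema and field names are assumptions for illustration, not a prescribed format.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class PilotFeedback:
    user: str
    use_case: str
    value_rating: int      # 1 (no value) .. 5 (high value)
    accuracy_issue: bool   # did the user catch a hallucination?
    comment: str = ""

def summarize(entries: list[PilotFeedback]) -> dict:
    """Roll early-adopter feedback up into the numbers leadership will ask for."""
    return {
        "responses": len(entries),
        "avg_value": round(mean(e.value_rating for e in entries), 2),
        "hallucination_reports": sum(e.accuracy_issue for e in entries),
        "use_cases": sorted({e.use_case for e in entries}),
    }

if __name__ == "__main__":
    sample = [
        PilotFeedback("ana", "drafting emails", 4, False),
        PilotFeedback("raj", "policy summarization", 3, True,
                      "cited a document that does not exist"),
    ]
    print(summarize(sample))
```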
Through this, you’ll learn the ideal uses of generative AI, see the risks related to truthfulness and hallucination firsthand, and begin to understand how to monitor AI use through the available reporting tools.
Get Ahead of AI’s Powerful Contribution
AI-enabled products are becoming more powerful by the minute. They have the potential to streamline processes, reduce redundant tasks, and superpower your users with productivity, analysis, and quality support. By moving forward with the governance approaches in this article, you’ll ensure you stay ahead of your users and feel confident you’re adopting AI safely.
Resources for Governance
Risk resources
MIT AI Risk Repository – This MIT repository is a composite of numerous AI risk frameworks, organized into a thoughtful taxonomy for consideration.
The Coming Wave – This book by Mustafa Suleyman, co-founder of DeepMind, is required reading for IT leaders. It’s not a short-term tactical guide, but rather presents the long-term view of AI risks.
Law and oversight
Microsoft Responsible AI – You should evaluate the principles of your leading technology providers. Their resources may provide inspiration, but also arm you with insight into how they govern their own use of AI.
https://www.microsoft.com/en-ca/ai/responsible-ai
Global Regulatory Tracker – This resource, assembled by International Law Firm White & Case, tracks law and regulation across the world.
https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker
Guidelines and Policies
San Jose, California – GovAI Coalition – This resource site includes draft templates for governance and a great set of resources for government organizations, though I’d suggest the tools are applicable to any organization.
AI guidelines and policy samples – Take a look at the policies Boston, Seattle, and Tempe have published for a view into what public organizations are doing and how their approaches differ.
https://www.boston.gov/sites/default/files/file/2023/05/Guidelines-for-Using-Generative-AI-2023.pdf