Governing AI in Microsoft 365

Artificial Intelligence (AI) is top of mind for everyone these days. Most of us have been using AI daily for years, even if we didn't call it that: smartphone assistants such as Siri and Alexa are built on AI, as is the grammar editor you've likely used hundreds of times in Microsoft Word without a second thought. Smart devices in your home leverage AI as well.

AI has recently taken a massive leap forward in capability, especially with the announcements around Microsoft 365 Copilot, Syntex, and similar technologies. With AI so heavily marketed, organizations are starting to seriously consider its role in their employees' day-to-day work. This post walks through a few benefits of the technology and what to consider when governing AI as it's deployed more broadly in your organization.

Benefits of AI in your organization

Generative AI services like ChatGPT can help accelerate employee work, freeing people to focus on more important and valuable tasks for the business. AI can help by:

  • Reducing mundane and repetitive tasks

  • Generating ideas more quickly

  • Providing design ideas or graphics

  • Writing draft emails or responses

  • Generating answers to simple customer questions

  • Supporting humans in problem solving and gaining insights

I highly recommend using generative AI and other AI technologies to help employees fast-track some of their tasks. However, it is critical that your organization understands the technology, its limitations, and its risks.

Governing Artificial Intelligence

Whether your first thought is to prevent the use of AI technology or to embrace it in your organization, you likely can't avoid it.

It is, however, imperative that you govern the use of AI at your organization. Some aspects of governance to consider are:

  • Data Privacy and Security – Understand the risks associated with AI technology to your data security

  • Roles and Responsibilities – Define roles and responsibilities of both users and developers of AI technology in your organization

  • Training – Train people in not only how to leverage AI, but also a general understanding of the technology

  • AI Usage Policies – Set policies on which technologies and what content is acceptable for employees to use in AI services

Data privacy and security

Generative AI uses available content, including that from the internet, to create summaries, outlines, emails, and more for users. Your organization likely has confidential information that you do not want to and legally should not share with external sources.

If employees are using public AI services like ChatGPT, could your content possibly be exposed to others?

In theory, yes. You may lose control over that content once it's been pasted into a service like ChatGPT, and it may be retained or used to improve the service. Clearly, this poses a risk for your organization.

Should you ban AI then?

Well, no… You can leverage Azure OpenAI, Copilot in Microsoft 365, and other technologies on the Microsoft 365 platform to get similar results to ChatGPT, while your content remains fully within your existing Microsoft tenant's security and privacy boundaries. Your content is not exposed outside those boundaries and is not used to train the Large Language Models (LLMs) behind the technology.

Copilot, for instance, uses the same family of LLMs that powers ChatGPT; however, it operates within the existing security boundaries of your Microsoft 365 tenant.

On top of that, it can also draw on your business content to generate better responses, summaries, emails, and more for your organization.

Consider adopting these AI-based technologies instead of having staff turn to potentially insecure solutions on the internet.

Roles and responsibilities

Consider the roles and responsibilities of users and developers of AI solutions at your company. Are there new roles you need to define? What are the responsibilities of each group within your organization? Microsoft has identified six principles of Responsible AI that should be followed when developing AI:

  • Fairness – AI systems should treat all people fairly

  • Reliability and safety – AI systems should perform reliably and safely

  • Privacy and security – AI systems should be secure and respect privacy

  • Inclusiveness – AI systems should empower everyone and engage people

  • Transparency – AI systems should be understandable

  • Accountability – People should be accountable for AI systems

Training and education

The first step in training users on the technology is understanding it yourself.

AI will be prevalent in all businesses and applications within a short period of time. It is critical that your whole organization begins to understand the technology and its limitations. Once there is an understanding of how the technology works, where it fails, what limitations it might have, and how to use it effectively and safely, then training should be provided to all staff.

For example, training users on how to craft ‘prompts’ to get more useful answers is a good first step. Microsoft is also making it easier for people to share prompts with colleagues with Copilot Lab.
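As an illustration of what prompt training might cover, here is a minimal sketch (the helper and its parts are hypothetical teaching aids, not any Microsoft product feature) of one common lesson: a prompt that states a role, a task, relevant context, and a desired output format tends to get more useful answers than a bare question.

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from four parts commonly taught in
    prompt-writing training: role, task, context, and output format."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond as: {output_format}"
    )

# A vague prompt vs. a structured one:
vague = "Write about the meeting."
structured = build_prompt(
    role="an executive assistant",
    task="Summarize the key decisions from the meeting notes below",
    context="Notes: budget approved; launch moved to Q3; hiring paused.",
    output_format="three short bullet points",
)
print(structured)
```

The exact wording matters less than the habit: each of the four parts removes a guess the model would otherwise have to make.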

Usage policies

Finally, define usage policies at your organization. Have staff read and agree to them, just as they do with your other policies.

Employees may not be aware of the risks of using ChatGPT, or that they may be breaching their contract or NDA when they include business information in prompts.

Consider aspects such as:

  • What types of AI are acceptable to use?

  • What AI solutions are available within the data and security bounds of the organization?

  • What information should not be added to a generative AI system like ChatGPT?

  • Are DALL·E or other text-to-image generators acceptable for graphic design in the organization?
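To make a policy like "what information should not be added to a generative AI system" enforceable in practice, it can be backed by a lightweight technical check. The sketch below is a hypothetical illustration (not a Microsoft 365 feature; the pattern list is an assumption you would replace with your own rules) that flags text an organization might not want pasted into an external AI service:

```python
import re

# Hypothetical patterns an organization might classify as sensitive.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential marker": re.compile(r"\b(confidential|internal only|nda)\b", re.IGNORECASE),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

print(flag_sensitive("Draft a reply to jane.doe@contoso.com about the CONFIDENTIAL merger."))
# → ['email address', 'confidential marker']
```

A real deployment would rely on purpose-built tooling such as data loss prevention rather than hand-rolled regexes, but even a simple check like this turns a written policy into something employees see at the moment they are about to paste.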

Summary

There is a strong chance you'll struggle to prevent the use of AI technology at your organization. Just as mobile-first and cloud-first quickly took over the way we develop applications and solutions, AI-first is the next evolution of development. If you don't start adopting AI in your organization, you'll be left behind, and staff will quickly start to look outside the bounds of your IT department for solutions.

Start governing and adopting secure, organization-controlled AI now, so you aren't scrambling to do damage control once it's too late.

Understand the data security and privacy risks of AI technology, define and develop the roles and responsibilities of AI use in your organization, implement policies around the use of AI, and train and educate your employees.


Contact us for a Microsoft 365 AI strategy and planning engagement. We can help you prioritize use cases and develop a governance plan for your organization.

Jeff Dunbar

Jeff is a technical expert in the design and configuration of SharePoint, Microsoft/Office 365 and Collabware. Jeff has created and maintained sites, site collections, and applications for SharePoint for small to large scale environments. He assists companies in managing their compliance using third-party add-ons and out-of-the-box records management. Jeff has planned and implemented information architecture, content management, content design, usability studies, and site re-designs.
