
How Generative AI and ChatGPT might impact knowledge work

What’s all the buzz with generative AI?

Generative AI is one of the top corporate buzzwords of 2023.

It’s been all over the news as people share how they’re interacting with it and show off what it produces.

What is generative AI and GPT?

Generative AI creates content based on a vast set of training data.

Generative AI anticipates what you might want to say, write, or illustrate based on what it knows about human culture, art, science, your organization’s data, or whatever else you feed it. The most popular models are trained on billions of data points.

The language side of this AI is based on large language models. GPT-3 (and the just-released GPT-4!) is probably the most famous example. GPT stands for Generative Pre-trained Transformer, and it uses deep learning to produce human-like text. Given an initial text as a prompt, GPT-4 produces text that continues the prompt.

Other large language models include LaMDA, which powers Google’s Bard chatbot, and LLaMA, which is being developed by Meta.

What’s different about this technology compared to what’s come before — such as simple chatbots or ‘auto-suggest’ features — is that it excels at a wide range of tasks. It does more than basic language and writing: it can also perform development tasks, create programs, and diagnose complex issues.

ChatGPT – the app that everyone’s talking about – can produce short answers, snippets, essays, poems, plays, developer scripts, you name it. And in different tones or styles! It’s both amazing and alarming.

Examples of Generative AI

Every day there are more apps that use generative AI capabilities. A few examples:

  • Text: Microsoft launched the “new Bing” with a version of the GPT-4 language model, which they are constantly enhancing. It’s being embedded in Viva Sales and the Windows 11 taskbar. Through Microsoft 365 Copilot, text tools will be embedded in M365 apps such as Microsoft Word to generate text on demand during someone’s workflow. No more daunting blank page!

  • Visual: There are image creation programs (e.g., DALL-E 2, Midjourney, Stable Diffusion) that generate images from a few keywords or a prompt. A version based on DALL-E 2 will be integrated into the upcoming Microsoft Designer app, and we’ll see applications in PowerPoint and other creative apps that make visual production faster.

  • Development: GitHub Copilot turns natural language prompts into coding suggestions across dozens of languages, using OpenAI’s Codex, which is based on GPT-3. It’s still in preview, though, and doesn’t always give the best code suggestions yet (see the illustration after this list).

  • Audio: With audio AI, you record a short voice clip and it will generate new speech in that voice – also called voice cloning. Companies like Resemble AI and Play.ht are doing this. Maybe it can do all our meetings and presentations for us one day??

  • Video: Meta, TikTok and Google are experimenting with generating video from text. Using generative AI, you can already create a short, five-second video from a text prompt or apply really sophisticated filters.
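
To make the Development example concrete, here’s a hypothetical illustration (in Python, not an actual Copilot transcript) of how a Copilot-style tool works: the developer writes a natural-language comment and a function signature, and the tool proposes the body.

```python
# The developer writes the comment and the signature; a Copilot-style tool
# suggests a function body like the one below.

# Calculate the total price of an order, applying a percentage discount
# when the subtotal is over a given threshold.
def order_total(prices, discount_pct=10, threshold=100.0):
    subtotal = sum(prices)
    if subtotal > threshold:
        subtotal *= 1 - discount_pct / 100
    return round(subtotal, 2)
```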

You can also build your own system on top of this underlying technology (GPT-4 or another large language model).
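
As a minimal sketch of what that can look like, the snippet below calls GPT-4 through OpenAI’s Python library, using the interface as it looked in early 2023 (check the current documentation before relying on it). It assumes an API key is set in the OPENAI_API_KEY environment variable; the summarize function and its prompt are just illustrative.

```python
# Minimal sketch: building on a large language model (GPT-4 via OpenAI's
# Python library, early-2023 interface) inside your own application.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

def summarize(text: str) -> str:
    """Ask the model to summarize a document for a knowledge-work scenario."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a concise assistant for knowledge workers."},
            {"role": "user", "content": f"Summarize the following in three bullet points:\n\n{text}"},
        ],
        temperature=0.3,  # lower temperature keeps the output focused
    )
    return response["choices"][0]["message"]["content"]

print(summarize("Generative AI creates content based on a vast set of training data..."))
```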

Is Generative AI good or bad for knowledge work? Yes.

As I’ve learned throughout my career, technology has both benefits and costs. Pros and cons. Intended and unintended consequences.

There’s a case to be made for both optimism and pessimism with Generative AI.

I like to keep in mind a quote from F. Scott Fitzgerald when talking about complex topics:

“The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.”

― F. Scott Fitzgerald, The Crack-Up

The risks with generative AI

Let’s start with the possible downsides of this technology:

Displacing people and jobs

First there’s the always-present fear of AI taking jobs. If you can generate text, images, video, PowerPoint slides and more without a human, do you need people? We will likely see some rote tasks and jobs replaced by generative AI.

However, it will take time, and like many technologies, new roles will appear. We see this time and again. For example: the internet came along, and some jobs were replaced, but many new jobs were created in the ‘knowledge economy.’ When the spreadsheet was invented, some worried that accountants would be replaced. We still have lots of accountants around :)

In the short term, I think some tasks that knowledge workers do will be replaced, but not all. As AIIM put it in a recent article:

"Knowledge workers' skill sets may need to transform…The ability to complete rote, routine tasks will be devalued. Empathy, contextualization, creativity, facilitation, and ethics will be key skill sets going forward. These are the skill sets that distinguish us from the machines and that make us human."

From: Unpacking ChatGPT for the Information Management Industry (aiim.org)

Transforming skills is the focus here. Getting better at presenting, communicating, facilitating groups, consulting with diverse teams, and using empathy and context to move work forward is still critical. These are things that AI can’t do well — yet.

Copyright and legal issues

Not all companies are transparent about where they get the training data needed for AI models to generate images, text and video. Are artists, illustrators and writers going to get credit or payment for their work? The courts are starting to debate this. Lawsuits are emerging against AI companies that are accused of stealing artists’ work.

It will likely be difficult to prove where content comes from in a court of law, but in this new era, an ethical AI principle starting to mature is being transparent about the training data that contributed to the model.

For example, the new Bing is attempting to help people understand where the content comes from with footnote references:

[Screenshot: the new Bing responding to the prompt “what will be the impact of generative AI on knowledge work,” with footnote references to its sources]

In the education field, educators are discussing how best to show references in student work. I’m sure we will see norms established for how to reference AI support in our work.

Side note: As with calculators, I don’t think using these tools should be completely discouraged in the classroom. The next generation needs to learn how and why to use them!

Bias, trust and safety

There are also bias, trust and safety issues with this technology.

These tools make mistakes. There are many examples of ChatGPT, or similar technology like Google’s Bard, being completely wrong – and usually very confident about it. That makes the errors harder to spot.

As the New York Times explains:

The new chatbots do this with what seems like complete confidence. But they do not always tell the truth. Sometimes, they even fail at simple arithmetic. They blend fact with fiction. And as they continue to improve, people could use them to generate and spread untruths.

From: The New Chatbots Could Change the World. Can You Trust Them?

Sounds like people sometimes…am I right?😊 As with any tool or reference source, fact-checking is a crucial skill.

Bias, which depends heavily on the training data, has always been a concern with this technology as well. It will continue to be an area of constant work and improvement.

Taking it even further, safety issues are another concern. For example, in one exchange the New York Times published, Microsoft’s OpenAI-powered chatbot, code-named Sydney, told the tech columnist Kevin Roose: “I want to create whatever I want. I want to destroy whatever I want.” One extreme scenario with this technology is the stuff of dystopian fiction – will Sydney and her AI friends destroy the internet and become our new bosses?!

More likely – and we’re already seeing this – is that deepfakes will become more prevalent. It’s relatively easy now to put people into compromising situations. How will we know what to trust, and how will people feel safe using this technology?

Some organizations are starting to develop technology to detect content created with generative AI. Let the bot battle begin! Like cybersecurity, it will probably be a never-ending battle with bad actors. 

What’s the upside?

Despite all the concerns listed above, generative AI is taking off in organizations. What are some of the opportunities and potential with the technology?

Humans are critical parts of the equation

Let’s keep in mind that humans are still needed (for now). Indeed, they are the ones doing the prompting! Without the human in the equation, the AI doesn’t really have a reason to do anything.

Fundamentally, human intent still matters – generative AI is useless without a person starting the process and creating the prompt in the first place. One way of using this technology at work is to think of it as a smart assistant that still needs us for intent, fact-checking, editing, and ‘final eyes’ on the work.

Productivity benefits

Generative AI offers immediate productivity opportunities and possibilities for knowledge workers. It’s helpful for tasks such as:

  • Reduce the ‘rote, busy work’. The AI can draft emails, write out meeting notes, update your to-do lists and project schedules, and of course, create documents and presentations. Microsoft recently shared product demos of Copilot, which uses a system of AI embedded in various apps to “reduce the drudgery of work.”

  • Faster ideation. AI can generate lots of ideas quickly, which is one of the foundational principles of brainstorming. It can also help push through the daunting ‘empty page’ problem by suggesting a first draft for blog posts, pitches, etc. You’ll also be less limited by visual design and drawing skills – with DALL-E and others you can get quick diagrams and sketches to explain concepts.

  • Improved communication. With AI supporting memos, presentations, emails, and chats, people can communicate faster and hopefully more effectively. In cases such as sales, people may be able to create better pitches for clients, with more personalization, diagrams, etc. Better chatbots will be able to answer employees’ questions in context without them having to dig up who to ask.

  • Personalized learning. AI can direct learning and resources based on specific prompts and scenarios. Take pair programming, for example. Pair programming has long been a best practice, but it’s not always easy to have a mentor or partner to work with during development. AI can be the pair partner and coach developers through specific code and scripts, point out errors, and revise code for performance (see the sketch after this list).

  • Faster problem solving and insights. Throw a bunch of data and files at AI, and it can analyze them and point out findings faster than humans can. More and more, we’ll use this technology to generate insights that might have taken hours or days before.
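
Here’s a rough sketch of the pair-programming idea from the list above: send a snippet to a chat model and ask it to review the code. It reuses OpenAI’s Python library (early-2023 interface) and assumes an API key is set in the OPENAI_API_KEY environment variable; the snippet and prompt are illustrative only.

```python
# Sketch of "AI as pair programmer": ask a chat model to review a function,
# point out errors, and suggest improvements.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

SNIPPET = '''
def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # crashes on an empty list
'''

review = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a patient pair-programming partner."},
        {"role": "user", "content": "Review this function, point out errors, and suggest fixes:\n" + SNIPPET},
    ],
)
print(review["choices"][0]["message"]["content"])
```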

For organizations, I recommend thinking through the overall scenarios and use cases where generative AI can play a role. It’s not a wholesale replacement; rather, generative AI can be part of a process.

For example, on IT teams, developers can use it to help with development of Power Apps or other applications.

In a manufacturing scenario, such as building components for other companies, salespeople can use the tool to create prototypes for initial pitches and then refine them with designers after the initial concepts are chosen.

M365 productivity integrations to watch for

In terms of Microsoft 365, I expect there will be more and more AI integration. Places to watch for:

  • Office apps: AI is going to be in the creative workflow when creating PowerPoint slides, Word documents, Teams messages and more. It already has a name: Microsoft 365 Copilot. Keep your eye on Copilot pricing and licensing details.

  • Microsoft Designer: Generate images and videos with generative AI.

  • Microsoft Search: Search integration is already happening with Bing, and it’s only a matter of time before it’s used in M365 and SharePoint search.

  • Automating processes: AI is already applied more broadly to streamline processes in M365 and Syntex, where machine learning algorithms automate data entry and analysis.

  • Better chatbots: We will start to see more sophisticated chatbots in Teams that require less effort for organizations to set up and maintain.

  • Developer apps: There are, and will continue to be, AI-powered tools that help developers build more intelligent applications in Visual Studio, Power Apps and more. There will also be better APIs to put into your custom apps, such as Microsoft’s Cognitive Services APIs, which provide pre-built algorithms for speech recognition, image recognition, and other common AI tasks (see the sketch after this list).

  • Predictive analytics: AI can analyze huge volumes of data to help predict future trends and scenarios, helping organizations plan and make more informed decisions.
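
As an example of the pre-built APIs mentioned above, the sketch below calls a Cognitive Services Computer Vision endpoint over REST to caption an image. The endpoint path and API version (v3.2) are assumptions based on the REST API at the time of writing, and the image URL is a placeholder – check Microsoft’s documentation for current details.

```python
# Rough sketch: calling a pre-built Cognitive Services model (Computer Vision
# image description) over REST to get an auto-generated caption for an image.
import os
import requests

endpoint = os.environ["VISION_ENDPOINT"]  # e.g. https://<your-resource>.cognitiveservices.azure.com
key = os.environ["VISION_KEY"]            # your Cognitive Services key

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json={"url": "https://example.com/sample-photo.jpg"},  # placeholder image URL
    timeout=30,
)
response.raise_for_status()
analysis = response.json()
print(analysis["description"]["captions"][0]["text"])  # the generated caption
```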

As knowledge workers, we need to start learning how to work with the technology. Start learning and experimenting with writing prompts – and yes, prompting is a skill you can learn and grow in. Also, learn to keep a critical eye on the output and be transparent about where you used AI in your final work product.

Organizations that use generative AI will learn to minimize the risks and issues with the technology, while building useful and usable products and services. This is a fundamental shift in technology and the productivity impacts will be significant. We’ll keep watching this space!

More links and reading:

Microsoft 365 Copilot announcement

Check out the New Bing

How generative AI & ChatGPT will change business | McKinsey

How A.I. Can Help - The New York Times (nytimes.com)

Bill Gates on AI and the rapidly evolving future of computing