
Ask a Techspert: What is generative AI?


We recently expanded access to Bard, an early experiment that lets you collaborate with generative AI. Bard is powered by a large language model, which is a type of machine learning model that has become known for its ability to generate natural-sounding language. That’s why you often hear it described interchangeably as “generative AI.” As with any new technology, it's normal for people to have lots of questions — like what exactly generative AI even is.

To talk through common questions about generative AI, large language models, machine learning and more, we sat down with Douglas Eck, a senior research director at Google. Doug not only works at the forefront of AI; he also has a background in literature and music research. That combination of the technical and the creative puts him in a special position to explain how generative AI works and what it could mean for the future of technology and creativity. Here’s what he had to say.

Before we dive into generative AI, we have to talk about AI more broadly. It’s one of those intriguing but often kind of nebulous terms. What exactly is AI?

AI is a broad term often used to describe all sorts of advanced computer systems. I prefer to talk more specifically about “machine learning.” Most of what we see in AI today is really machine learning: endowing computer systems with the ability to learn from examples.

We call machines programmed to learn from examples “neural networks.” One main way they learn is by being shown lots of labeled examples, like being told what’s in an image. We call this classification. If we want to teach a network to recognize an elephant, a human introduces it to lots of examples of what an elephant looks like and tags those photos accordingly. That’s how the model learns to distinguish an elephant from everything else in an image.
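To make that concrete, here’s a minimal sketch of supervised image classification, assuming PyTorch. It’s not how any production system is built; the images, labels and tiny network below are all placeholders standing in for real photos and human-provided tags.

```python
# A toy sketch of classification: a small network learns from labeled
# examples (1 = elephant, 0 = not elephant). Data here is random placeholder
# tensors standing in for real tagged photos.
import torch
import torch.nn as nn

images = torch.randn(8, 3, 64, 64)      # 8 fake RGB images, 64x64 pixels
labels = torch.randint(0, 2, (8,))      # 8 fake human-provided tags

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # look for visual features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                   # two classes: elephant / not elephant
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = model(images)              # the network's current guesses
    loss = loss_fn(logits, labels)      # how far the guesses are from the tags
    optimizer.zero_grad()
    loss.backward()                     # learn from the mistakes
    optimizer.step()
```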

Language models are another type of neural network.

How do language models work?

Language models basically predict what word comes next in a sequence of words. We train these models on large volumes of text so they better understand what word is likely to come next. One way to improve a language model, though not the only way, is to give it more “reading,” or training data, kind of like how we learn from the materials we study. If you started to type the phrase “Mary kicked a…,” a language model trained on enough data could predict “Mary kicked a ball.” Without enough training, it might only come up with “a round object” or a color like “yellow.” The more data a language model is trained on, the more nuanced it becomes, and the better its chances of knowing exactly what Mary is most likely to have kicked.
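As a toy illustration of next-word prediction, here’s a sketch of a bigram model in plain Python. It’s a hypothetical, drastically simplified stand-in for a real language model, which is a neural network trained on vastly more text; the tiny corpus below is made up.

```python
# Count which word follows each word in a toy corpus, then predict the
# most frequent follower. More (and more varied) training text would give
# better, more nuanced predictions.
from collections import Counter, defaultdict

corpus = (
    "mary kicked a ball . mary kicked a stone . "
    "mary kicked a ball . the dog chased a ball ."
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1           # tally each observed next word

def predict_next(word):
    """Return the next word seen most often after `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("a"))                # -> "ball" (seen most often after "a")
```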

In the last several years, there have been major breakthroughs in how we achieve better performance in language models, from scaling their size to reducing the amount of data required for certain tasks.

Language models are already out there helping people — you see them in features like Smart Compose and Smart Reply in Gmail, for instance. And language models power Bard as well.

Got it. So, we’ve defined AI and language models. What about generative AI?

A generative model can take what it has learned from the examples it’s been shown and create something entirely new based on that information. Hence the word “generative!” Large language models (LLMs) are one type of generative AI, since they generate novel combinations of text in the form of natural-sounding language. And we can even build language models to generate other types of outputs, such as new images, audio and video, like with Imagen, AudioLM and Phenaki.
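To illustrate the “generative” part, here’s a self-contained extension of the toy bigram sketch above: instead of always picking the single most likely next word, it samples from the learned counts, so each run can produce a new sequence the corpus never contained verbatim. Again, this is a hypothetical illustration, not how Bard or any Google model works.

```python
# Generate new text by sampling each next word in proportion to how often
# it followed the previous word in a made-up toy corpus.
import random
from collections import Counter, defaultdict

corpus = (
    "mary kicked a ball . the dog chased a red ball . mary kicked a stone ."
).split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(start, length=6):
    """Sample a new word sequence, starting from `start`."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:                  # no observed follower: stop
            break
        nxt_words, weights = zip(*counts.items())
        words.append(random.choices(nxt_words, weights=weights)[0])
    return " ".join(words)

print(generate("mary"))                 # e.g. "mary kicked a red ball ."
```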

This brings up a big question a lot of people have: What does generative AI mean for creative fields and creativity in general?

I think there’s huge potential for the creative field. Think of it as removing some of the repetitive drudgery of mundane tasks, like generating drafts, without encroaching on creators’ innate creativity. As a music researcher, I think of generative AI the same way one might think of the arrival of the drum machine decades ago. The drum machine generated a rhythm that was different from what human drummers sounded like, and that fueled entirely new genres of music.

Like a lot of music from the 1980s.

Sure, or how hip-hop evolved in the Bronx with the use of the drum machine. That entire genre grew out of a new piece of music technology. The drum machine didn’t replace drummers; it just added another layer.

Clearly, there are opportunities here. But what about potential challenges?

I raised two kids and got a literature degree before I went into computer science, so I’m asking myself real questions about how educators measure success in a world where generative AI can write a pretty good eighth- or ninth-grade essay.

Think back to when the graphing calculator emerged: how were teachers supposed to know whether their students did the math themselves? Education advanced by understanding what tools students had at their disposal and requiring them to “show their work” in new ways.

Companies — including ours — have a responsibility to think through what these models will be good for and how to make sure this is an evolution rather than a disruption.

I’m glad you brought up responsibility. Can you talk about how Google approaches developing machine learning?

We’ve been taking our time to do this thoughtfully. If we build a product, we want to be confident it can be helpful and avoid harm. In 2018, we were among the first companies to develop and publish AI Principles and put in place an internal governance structure to follow them. Our AI work today involves Google’s Responsible AI group and many other teams focused on avoiding bias, toxicity and other harms while developing emerging technologies.

I’ve heard you say that these big technical leaps are already showing up in small, helpful ways. But how big of a leap could generative AI be for society?

We now know machines can solve simple problems like classifying images and generating documents. But I think we’re poised for even more ambitious capabilities, like solving problems with complex reasoning. Today, generative AI could help you write a form letter. Tomorrow, it may overhaul your creative workflows and processes to free you up to solve completely new challenges with a new frame of mind. Through collaboration and experimentation over time, we’ll uncover even more benefits from generative AI.
