Have you ever imagined a video game you’d love to make? If you have an idea for a story or a picture, you can easily grab a pencil and start writing or sketching. But making a video game is different. Turning your ideas into reality usually requires visuals. And sound. And code to make those and other elements run.
Putting all this together isn’t easy, notes Matt Guzdial.
He used to run a club for gamers. There, Guzdial met dozens of young people who “had all these elaborate visions for the kinds of games that they wanted to make.” But they usually didn’t have the technical skills to create those games. He says it was “heartbreaking” for them to realize that.
Now, as a computer scientist at the University of Alberta in Edmonton, Canada, Guzdial is working to solve this problem. “I want to empower people to be able to make games on their own,” he says. Artificial intelligence, or AI, might be able to help.
Generative AI is a type of AI that can produce all sorts of things, including stories, images, music and videos. Many popular chatbots, including ChatGPT, Claude and Grok, can generate computer code, too. So you can ask them to write the code to run a video game.
John Hester tried this out. A retired software developer, he lives in Southern California. In February, he asked Grok 3 to write him code for a version of Pac-Man that he could play on his computer.
“It came up with a very primitive version,” Hester says. The Pac-Man character was square. The maze was tiny. And there were no dots to collect. In a back-and-forth conversation, he told the bot what changes he’d like. Some two hours later, he had “a playable, functional game.” And he wrote none of its code himself.
“I was very impressed,” he says.
In the future, Hester thinks game studios might be full of creative people who “just come up with the concept of games and what they want them to look like.” They could then prompt an AI model to do the rest.
Some game developers, however, aren’t so sure this is the future they want. “There’s a taboo,” says Gillian Smith, “that’s forming around AI-generated content.” Smith directs an interactive media and game-development program at Worcester Polytechnic Institute in Massachusetts. Game artists, especially, resist using generative AI in their work, Smith’s research has found.
Different types of AI have been a part of game development for decades. But now generative AI models can create all or parts of games. This is raising new questions about what role this tech could and should play in game-making.
Can generative AI help more people make games — and perhaps even lead to new types of fun games to play? Or are there better ways to help creative people achieve their vision?
The O.G. of AI game creators
In 2013, Angelina entered a competition called a game jam. Angelina wasn’t a human — it was an AI that Mike Cook had designed. Cook is a researcher and game designer at King’s College London in England. His Angelina was the first piece of software to participate in a game jam. In this competition, participants created their own game based on a theme. And they had to do it within a limited amount of time.
Angelina’s entry — “To That Sect” — isn’t the AI’s best work, Cook admits. The game involves wandering around a red-walled maze collecting boats and avoiding floating statues. This odd, somewhat creepy game didn’t even come close to winning the competition. But it was a milestone for AI-based game design.
On its own, Angelina had put together a game world, complete with items, rules, music, colors, textures and other aspects.
More recently, Cook developed a new AI game designer called Puck. One game Puck designed is like a reverse Connect Four. Here, you try to avoid getting four in a row. Cook named it Antitrust. Try the game here. (To start playing, once the game loads, click the question mark in the top corner.)
The AI systems behind Angelina and Puck both use search techniques. They design games by combing through a vast space of possibilities.
As Cook describes it, “we break a game up into little puzzle pieces.” For Angelina, those puzzle pieces included game rules, art, music and more. Cook only selected options from existing games or collections of game elements that he had the rights to use.

In contrast, Puck only uses rules for its pieces that were inspired by grid-based puzzle games (such as Connect Four, Candy Crush Saga or Tic-Tac-Toe).
For example, Puck has one puzzle piece that says, “If you have three of the same objects in a row … .” Puck then must find a different puzzle piece to complete the statement. One piece might say, “… then destroy those objects.” Another piece might change those objects into something else.
Puck and Angelina each search for new and interesting ways to put their pieces together. They test out these combos and choose the best results.
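The puzzle-piece idea above can be sketched in a few lines of code. This is a hypothetical toy, not Puck's actual code: it pairs every condition piece with every effect piece, scores each combination (here with a random placeholder where a real system would playtest the rule) and keeps the most promising results.

```python
import itertools
import random

# Hypothetical, highly simplified sketch of a search-based game designer
# in the spirit of Puck: rules are split into condition and effect
# "puzzle pieces," and the designer searches combinations of them.

CONDITIONS = [
    "you have three of the same objects in a row",
    "an object reaches the bottom row",
    "two objects collide",
]

EFFECTS = [
    "destroy those objects",
    "change those objects into a new type",
    "the player scores a point",
]

def score(rule):
    # A real system would test each candidate rule in play; here a
    # random number stands in for that "interestingness" score.
    return random.random()

def search_rules(n_best=3):
    candidates = [f"If {c}, then {e}."
                  for c, e in itertools.product(CONDITIONS, EFFECTS)]
    return sorted(candidates, key=score, reverse=True)[:n_best]

for rule in search_rules():
    print(rule)
```

With three conditions and three effects there are only nine combinations, but real systems search a far larger space of pieces, which is why unexpected rules turn up.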
“Working with AI creatively is really fun,” says Cook. An unexpected or weird idea from Angelina or Puck might “help you think in a different way.”
Guzdial has also created a game-making tool based on search techniques. It’s called Mechanic Maker. Rather than putting together new ideas for games, it helps people achieve their own ideas.
To use it, you demonstrate what you want to happen in your game. For example, you might press the right arrow key, then drag a character one square to the right. The AI model figures out what code to write to make that action happen during the game.
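A rough sense of how demonstration-based learning works can be shown with a toy sketch. This is a hypothetical illustration, not Mechanic Maker's real method: the user shows one input and the resulting change, and the tool infers a reusable rule from that single example.

```python
# Hypothetical sketch of learning a game mechanic from a demonstration:
# the user shows a key press and the character's move, and the tool
# turns that into a rule it can apply during play.

def infer_rule(demo):
    # demo: (key_pressed, position_before, position_after)
    key, before, after = demo
    dx = after[0] - before[0]
    dy = after[1] - before[1]
    # The inferred rule: when `key` is pressed, move by (dx, dy).
    def rule(pressed_key, position):
        if pressed_key == key:
            return (position[0] + dx, position[1] + dy)
        return position
    return rule

# Demonstration: press "right", and the character moves one square right.
rule = infer_rule(("right", (4, 2), (5, 2)))

print(rule("right", (0, 0)))  # → (1, 0): the learned mechanic applies
print(rule("left", (0, 0)))   # → (0, 0): other keys change nothing
```

The real tool generalizes from demonstrations to working game code; this sketch only captures the basic idea of turning an example into a rule.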
A dream of Minecraft
Cook’s and Guzdial’s AI tools for games are intriguing. Still, they’re not what most of the world is paying attention to right now. What’s stolen the spotlight in the last few years has been generative AI. It’s based on what’s known as deep learning.
This type of AI “is kind of like a big electric drill,” Cook says. “It’s really exciting and expensive and you just got it — [so] you kind of want to use it for everything.”
That includes making video games.
Generative AI models, like the ones behind Grok or ChatGPT, work differently from Angelina, Puck and Mechanic Maker. Rather than being programmed to search through a set of options, they learn from a huge number of examples. They use artificial neural networks to recognize patterns in data. Then they follow those patterns to create something that’s new but also resembles past examples.
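The "learn patterns from examples, then generate something that resembles them" idea can be illustrated with a model far simpler than a neural network. This toy Markov chain is only an analogy, not how ChatGPT or Grok actually work: it counts which word follows which in a training sentence, then samples new text that echoes those patterns.

```python
import random
from collections import defaultdict

# Toy illustration of "learn patterns, then generate." A tiny Markov
# chain counts which word follows which in its training example, then
# produces new text that resembles (but may not copy) that example.

training_text = "the hero finds the sword and the hero wins"

follows = defaultdict(list)
words = training_text.split()
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start, length=6, seed=0):
    random.seed(seed)  # fixed seed so the toy output is repeatable
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Every word the toy produces appeared in its training text, which is a miniature version of the criticism in the next paragraph: such systems can only recombine what they were trained on.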
This has angered many human creators, however. They say such bots mimic their work without permission. Despite that criticism, some video games now use generative AI.
In the game Inworld Origins, you play a detective questioning a cast of chatbot-powered characters. In Astrocade, you describe things you want to add to a game. Then generative AI creates 3-D objects and characters with which you can interact. Roblox Cube is a new tool Roblox creators can use to generate 3-D objects and environments.
To play an AI-generated version of Minecraft, you can try a demo called Oasis. The companies Decart and Etched released it last October. (This demo is not affiliated with Microsoft, the owner of the real Minecraft game.) Oasis is based on a new type of generative AI called a world model. It builds virtual environments you can move through. Millions of hours of online video of people playing Minecraft went into training the world model behind Oasis.
And the result is quite weird.
“If you go up close to a wall and then you back away, you’re in a completely different space,” says Guzdial. In real Minecraft, a map and rules govern everything around you. Not here. Whatever’s on the screen now feeds into the AI world model. It predicts what you will see next based on what you’re seeing now. You can walk around and mine blocks. But everything you see is a video that the AI model generates on the fly.
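The difference Guzdial describes can be made concrete with a contrived comparison. This sketch is an analogy, not how Oasis is built: a real game engine looks positions up in a fixed map, so a spot always looks the same when you return, while the toy "world model" below derives each view only from the previous view (a hash stands in for a neural network), so the scene can drift.

```python
import hashlib

# Contrived contrast between a real game engine and a world model.
# The real game consults a fixed map; the toy world model has no map
# and computes the next view purely from what is on screen now.

WORLD_MAP = {(0, 0): "grass", (1, 0): "wall", (2, 0): "water"}

def real_game_view(position):
    # Same position, same view, every time.
    return WORLD_MAP.get(position, "void")

def world_model_view(previous_view):
    # No stored map: the next view is just a function of the previous
    # frame. A hash is a stand-in for a learned frame predictor.
    digest = hashlib.sha256(previous_view.encode()).hexdigest()
    return ["grass", "wall", "water"][int(digest, 16) % 3]

# Returning to the same spot in the real game is always consistent.
print(real_game_view((1, 0)), real_game_view((1, 0)))  # → wall wall

# The world model only "remembers" the current frame, so walking away
# and back can land you somewhere that looks completely different.
view = "wall"
for _ in range(3):
    view = world_model_view(view)
print(view)
```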
Oasis is like “a dream of what Minecraft is, or a memory of what it is,” says Cook. “It isn’t an actual game.”
Last December, a few months after Oasis came out, Google DeepMind released its own world model, called Genie 2. Like Oasis, it generates video of a 3-D world that you can explore. But it can create any world you desire — not just one based on Minecraft. Then you can explore it for up to one minute.
There’s no public demo of Genie 2 available. But the DeepMind team has shared results of several example prompts, including: “a humanoid robot in ancient Egypt.” This generated a shiny metal figure walking among pyramids.
Creating new games wasn’t the main goal of these projects. Their purpose was to build virtual worlds in which AI bots could interact. Practicing skills in such generated world models could help robots or AI agents learn to interact with the real world, says Cook.
Oasis and Genie 2 might pave the way for unusual new types of game experiences. It does seem cool to be able to ask for any sort of game world you want. Capybaras in space, maybe? Or perhaps a city made of rainbows?
“That brief experience of the thing coming to life” would delight many people, says Smith. But then what? “They wouldn’t know what to do with it after a little bit.” Exploring just to see what you see might get old quickly.
Giant robot or helpful Pokémon?
Cook sees other drawbacks to using generative AI to create all or parts of video games. He compares these models to “huge robot suits that you see in anime.” These can be helpful — but only one person gets to drive. Plus, they’re too big and complex to really understand, Cook says. With generative AI, typically only big companies get to make decisions about how the models work. Why? They’re the only ones with the resources needed to build them.
Something like Puck, on the other hand, is more like a Pokémon, Cook says. “Small, cute, friendly.” It doesn’t need to study huge numbers of examples to learn how to create something.
You can download Puck for free and run it on your personal computer. And you can build onto it or customize it however you want.
When the creative person is in control, AI is more likely to be beneficial, Cook says.
Generative AI also could be put to use supporting creative people, says Katja Hofmann. She develops generative AI tools at Microsoft Research in Cambridge, England.
Her team interviewed game developers about what they’d want or need from a world model to support their creative process. The group used this feedback to develop a generative AI world model called Muse. It learned to simulate one video game, called Bleeding Edge. In that game, teams of fighters battle in a cyberpunk world.

Microsoft’s Xbox Game Studios owns the company that made Bleeding Edge. Players had all accepted an agreement that gave permission for their online gameplay to be recorded. This amounted to 500,000 hours of gameplay data (more than 50 years’ worth of time!). It all went toward Muse’s training.
Usually, testing a new idea for a game or a level requires a team of coders, artists, designers and others working together. With a world model, however, a single designer might be able to sketch out an idea and play through it.
For example, a game designer might sketch a level with some platforms leading upward toward a treasure chest. The designer could then feed this image to Muse so that it could generate what might happen as someone plays the level. If the generated player isn’t finding the platforms, the designer could rethink their placement.
Obviously, if you set out to build a brand-new game, you won’t have 500,000 hours of gameplay to train an AI. So perhaps instead of using huge world models to generate entire games or environments, designers could use small ones to test their ideas.
Since Microsoft’s first study on Muse came out in February, Hofmann says her team has simulated a single level of a game using just a few weeks’ worth of data.
“I love video games,” she says. World models, she thinks, could make telling stories through games “easier and more effective.” But the process has to start and end with people who want to tell a story, Hofmann says. Otherwise: “What’s the point?”
Mimicking vs. creating
In the end, we all want new, fun games to play. The right AI tools could empower more people to create games. AI tools could also make possible new types of games. After all, “tools shape what we’re capable of creating,” points out Smith at Worcester Polytechnic.
Guzdial would love to live in a world “where people make little playable experiences to try to explain something.” For example, a teacher might make a game to help their students learn a new concept.
However, using generative AI or world models to spit out lots of automated game content “might lead to more boring stuff being made,” cautions Cook. A person’s creative work reflects their experience of living in the world. And today’s generative AI can only mimic what people have already created.
Tessa Kaur, editor at The Gamer magazine, writes that AI-generated dialog doesn’t produce compelling characters. AI “simply cannot be creative enough,” she writes. When you care about game characters, it’s “because someone took the time to craft that [dialog] for you, over many rewrites and with deep thought.”
Smith agrees. When humans make a game, they share a part of themselves. Or they explore challenging themes and ideas. They could use the AI version of a giant robot suit or cute Pokémon to help achieve this. But on its own, a piece of AI-generated content has no self to share or ideas to explore.
Long-lasting, fun game experiences, says Hofmann, come from “someone who wants to tell a story.”