Google DeepMind Unveils Genie 3 AI for Generating Virtual Worlds
The tool is currently available to select testers as a research preview.

Google DeepMind has introduced Genie 3, the latest version of its AI model for generating dynamic virtual environments using natural language prompts.
Unlike its predecessor, Genie 2, which could render environments for only about 20 seconds, Genie 3 sustains scenes for several minutes and outputs at 720p resolution. It can create and modify detailed settings such as alpine landscapes, underwater worlds, or indoor spaces, and lets users tweak elements like weather, camera angle, and object interaction in real time.
“Generating an environment auto-regressively is generally a harder technical problem than generating an entire video… Genie 3 environments remain largely consistent for several minutes, with visual memory extending as far back as one minute ago,” DeepMind researchers Jack Parker-Holder and Shlomi Fruchter wrote in a blog post.
Genie 3’s improvements in rendering consistency make it especially suitable for training embodied agents, the AI models used in autonomous systems such as industrial robots. In tests, DeepMind successfully used its agent model SIMA to complete tasks within Genie-generated environments.
For now, Genie 3 is available only to select testers as a research preview. DeepMind says it is exploring broader availability, positioning the model as a powerful new tool for AI development and simulation training.
Separately, Google DeepMind and the University of Nottingham recently unveiled Aeneas, a cutting-edge AI model designed to help historians decode and contextualise fragmented Roman inscriptions.
The breakthrough was published in Nature and is set to transform classical scholarship by automating the search for linguistic parallels and reconstructing missing text.