Generative Artificial Intelligence refers to systems capable of creating new content, such as images, text, music, or game environments, by learning from existing data. Instead of following fixed rules, generative AI learns patterns and produces original outputs.
In video games, this technology goes beyond traditional procedural generation, because it can adapt to players, learn from their behavior and create content that feels more dynamic and alive.
Released in 2016, No Man's Sky stunned players with its promise of 18 quintillion unique planets, all generated algorithmically from a single 64-bit number. Instead of storing each planet on a server, the game used mathematical rules to build worlds on the fly. This approach saved immense storage space and proved that vast, explorable universes could exist without a human creating every one of them.
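The core idea of deterministic, seed-based generation can be sketched in a few lines of Python. The planet attributes and the way the seed is combined with a planet index below are invented for illustration; this is not No Man's Sky's actual algorithm, only a minimal demonstration of the principle that nothing needs to be stored:

```python
import random

def generate_planet(universe_seed: int, planet_id: int) -> dict:
    """Derive a planet deterministically from a global seed and an index.

    The same (seed, id) pair always rebuilds the same world, which is
    the core trick behind seed-based procedural generation: the universe
    is recomputed on demand instead of being stored.
    """
    # Combine the global seed and planet index into one reproducible seed
    # (a simple illustrative scheme, not the game's real hashing).
    rng = random.Random((universe_seed << 32) | planet_id)
    return {
        "radius_km": rng.randint(1_000, 10_000),
        "biome": rng.choice(["desert", "ocean", "frozen", "lush", "toxic"]),
        "moons": rng.randint(0, 4),
    }

# The same inputs always yield the same planet -- no storage required.
assert generate_planet(42, 7) == generate_planet(42, 7)
```

Because the generator is a pure function of its seed, two players visiting "planet 7" in "universe 42" see an identical world even though it was never saved anywhere.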
However, these worlds were static. Planets didn't change based on your actions, and alien creatures followed predictable patterns. The system lacked true understanding: it couldn't learn that players preferred caves with hidden treasures, or that certain color combinations made environments feel more immersive.
Modern generative AI, like NVIDIA's GameGAN (2020), learns by watching humans play. After analyzing 1,000 hours of Pac-Man footage, it generated fully playable levels, including ghost behaviors and power-up placements, without access to the game's source code. Unlike No Man's Sky's rule-based system, GameGAN can adapt: if players consistently avoid a certain path, the AI can redesign levels to make them more dynamic for each player.
Key difference: Procedural generation (like No Man's Sky) is like a recipe that follows predefined steps. Generative AI is like an apprentice chef that learns from examples and improvises.
Tools like Inworld AI create NPCs with memory and personality. In a recent collaboration with NetEase, an NPC remembered a player's choice to spare a character three weeks earlier—and changed its dialogue accordingly. Unlike scripted cutscenes, these interactions feel personal and unpredictable.
Indie developers use tools like Leonardo.AI to generate textures, 3D models, and sound effects from simple text prompts. This cuts development time dramatically, letting small teams focus on creativity.
Games like Left 4 Dead pioneered "AI Directors" that adjust zombie spawns based on player stress levels. Newer systems use generative AI to reshape entire levels in real time: if you struggle with platforming sections, the AI might generate fewer gaps or add visual cues.
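A drastically simplified version of such a difficulty-adjusting director can be sketched as follows. The stress metric, weights, and generation parameters here are all invented for illustration; real systems like Left 4 Dead's director are far more elaborate:

```python
from dataclasses import dataclass

@dataclass
class PlayerStats:
    recent_deaths: int      # deaths over the last few minutes
    health_fraction: float  # 0.0 (empty) to 1.0 (full)

def stress_level(stats: PlayerStats) -> float:
    """Crude stress estimate in [0, 1] from recent performance."""
    death_pressure = min(stats.recent_deaths / 3.0, 1.0)
    health_pressure = 1.0 - stats.health_fraction
    # Weighted blend of the two signals (weights chosen arbitrarily).
    return 0.6 * death_pressure + 0.4 * health_pressure

def adjust_level(stats: PlayerStats) -> dict:
    """Map the stress estimate to parameters for the next level chunk."""
    stress = stress_level(stats)
    return {
        "gap_width": round(3.0 * (1.0 - stress), 1),   # narrower jumps when stressed
        "enemy_spawns": max(1, int(8 * (1.0 - stress))),
        "show_hints": stress > 0.5,                    # add visual cues for struggling players
    }

struggling = PlayerStats(recent_deaths=3, health_fraction=0.2)
print(adjust_level(struggling))
```

A struggling player gets narrower gaps, fewer enemies, and visual hints, while a confident player gets the level at full intensity; a generative-AI director would feed parameters like these into the level generator itself.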
Bias in Data: An AI trained on popular RPGs might assume "female characters = healers" or "villains = dark-skinned," perpetuating harmful stereotypes. In 2023, a major studio scrapped an AI dialogue tool after it generated offensive lines for minority NPCs.
Creative Ownership: When an AI generates a unique sword design based on 10,000 real-world artworks, who owns it: the developer, the player, or the AI company? Courts are still wrestling with these questions.
Human Displacement: While AI automates repetitive tasks, it risks devaluing human artists. The key is balance: AI as a collaborator, not a replacement.
My journey into technology began with taking apart game consoles at the age of five. I later taught myself Python using libraries such as scikit-learn, and during a competitive programming camp in Peru, I fell in love with algorithmic problem-solving. Now, in my double degree at UFV, I see generative AI as the perfect fusion of my passions: mathematics, creativity, and human-centered design.
I believe ethical generative AI can make games more inclusive: imagine a system that generates levels accessible to players with motor disabilities, or NPCs that communicate through sign language. But this requires engineers who understand both the math behind diffusion models and the human impact of their code. That's why I study Calculus alongside game design: to build AI that doesn't just work, but matters.
In the next five years, we'll see generative AI shift from creating assets to shaping experiences. Games might generate personalized storylines based on your emotions, or rebuild worlds after you quit playing, so a forest could regrow in your absence. But without guardrails, this power could erode artistic integrity or deepen inequality in game development.
As someone who moved from Argentina to Peru to Spain, I've learned that technology thrives when it bridges differences. My goal is to contribute to AI tools that empower all players, not just as consumers, but as co-creators. The future of gaming isn't just about smarter algorithms; it's about wiser ones.

Developed by the French studio Sandfall Interactive, this turn-based RPG represents a definitive milestone in technical innovation for 2025. While the core narrative and character arcs are meticulously handcrafted by human writers, the game leverages a sophisticated suite of AI-assisted rendering and environment optimization tools to achieve AAA-quality fidelity.
The cornerstone of this project is the implementation of Neural Rendering. By utilizing deep learning algorithms, the game engine synthesizes high-resolution frames and simulates realistic lighting behaviors, such as global illumination and ray reconstruction, with unprecedented efficiency. This allows for stunning visual detail without compromising performance or frame rate on modern hardware.