In Conversation With An AI Exec: 5 Takeaways

Stephen Peacock | Keywords Studios

“If you don’t want things to change, a lot of stuff will need to change.”

The rift between developers and consumers over AI in gaming is massive. On the developer side, the conversations coming out of the Game Developers Conference make one thing clear: AI as a toolset is too valuable to give up. It is here to stay. On the consumer side, skepticism is at an all-time high, and visible use of AI is met with real consequences, even for beloved games and studios. Clair Obscur: Expedition 33 had its top honors revoked at the Indie Game Awards for including an AI texture in the final release. Swen Vincke, head of Larian Studios and director of Baldur's Gate 3, was raked over the coals for statements about AI – statements that have since been mostly walked back. Most recently, Crimson Desert faced backlash for AI artwork left in the game.

Studios left and right are making public commitments to abstain from generative AI use in game development, while developers at these same studios are adopting AI toolsets more and more. How do we find balance and nuance in this conversation?

At the University of Southern California, I had the opportunity to hear from Stephen Peacock, who has a long history leading AI implementation at companies like Amazon Web Services and Keywords Studios and currently serves as Director of AI at 2K. He was invited by Gordon Bellamy, who teaches ethics in the video game industry, and the conversation largely focused on how to ethically interface with and adopt this evolving toolset, and how to navigate forward into a technological unknown.

Here are a few takeaways from that conversation.


Collaboration is still king.

More than any other point during the talk, Stephen stressed the importance of human collaboration. He placed strict limits on how AI tools should be used: they should never override the need to interact with other people in person. He pointed to the recent Game Developers Conference, and how valuable the opportunity was for thousands of game developers to come together and collaborate.

This addresses a lot of the fears surrounding AI adoption. Collaboration is largely built from the need for a skillset that you yourself don't have – if AI has those same skills, where is the need for collaboration?

It will exist in realms that AI can't cross: empathy, friendship, teamwork. It will exist in human-first ideation, where unique worldviews come together and have a direct, irreplaceable impact on game development. From Stephen's perspective, the most valuable asset in any game studio is the people, and no model changes that.


Great art is not at the center of probability.

In Stephen's view, no great art is made with ChatGPT as the genesis – and he had an interesting study to back this up.

In 2025, Nature Human Behaviour published a study where participants were asked to brainstorm product ideas for a toy involving a brick and a fan. They could use either their own ideas or ChatGPT. Among those using ChatGPT, 94% of the ideas shared overlapping concepts, with nine participants independently giving their toy the exact same name: "Build-a-Breeze Castle." By contrast, human-generated ideas were completely unique.

This is the danger of using AI in the ideation stage. AI has great ideas, but they're not nearly as diverse as human-generated ones. This is what Stephen means when he says that great art is not at the center of probability. It comes from the diverse fringes, where the human element tends to live.

AI is a great tool for stress-testing an idea once you have it. It can pressure-test where a concept fails, or generate a placeholder you can use to communicate your vision to others. Stephen mentioned that he used to struggle articulating to an artist what he wanted them to create. Now, he can generate something and say: "like this, but different and better." That's a legitimate and powerful use of the tool. But it should never be used to generate the idea itself.


You can make your own Salesforce... but should you?

When talking about AI's programming ability, Stephen remarked that for the past two years, he hasn't needed to code anything himself – and for the past six months, he hasn't even looked at the code. He paired this with a claim that, in the very near future, companies won't need to rely on enterprise SaaS products like Salesforce. They could build their own, in-house.

What he's describing is the SaaS-pocalypse, where companies no longer need a dozen different enterprise applications – with the help of AI, they can build their own. I have two light pushbacks when carrying this argument over to video games.

First: it's always their product that AI will replace – never mine. But what happens when your product becomes something AI can replace? Video games are software – very complex software, but software nonetheless. If we agree that AI is capable of replacing a core enterprise SaaS product, and we agree that video games are software, then it's reasonable to accept that AI could, at the current pace of acceleration, someday replace your video game. If we're not in favor of that, then we should be careful about quickly rendering other products obsolete, because our own may sit somewhere down that same path.

Second: this stance has greater consequences when scaled to a macroeconomic level. Sure, you can make your own Salesforce – but Salesforce is made up of salaried employees who use their discretionary income to buy your product. Scale that logic across the SaaS economy, or the economy as a whole, and at some point, those obsolete companies are forced to do layoffs – at that point, unemployment and an eroded customer base become your problem too. Video games are a luxury product. They will feel it first.

This is similar to the "doom loop" described in a recent Citrini Research report – replacing humans with AI means less discretionary spend, which means more reliance on AI, which means less discretionary spend...

It's a slope, and we should be careful on it.


If you don't want to change, then a lot will need to change.

People rarely stop to think about how much change they've already absorbed in their career. The tools, workflows, and expectations of a developer in 2010 look nothing like those of a developer today, a transition that happened incrementally, without anyone calling it a crisis. The current moment is faster and louder, but when we take a step back, it isn't categorically different.

Stephen framed it like this: "If you don't want things to change, a lot of stuff is going to have to change." It's essentially a paradox: choosing not to engage with AI doesn't preserve the status quo – it just means something else gives way instead, whether that's your role, your studio's output, or your competitive position. Pushback is itself a form of change.

This is not an "adapt or die" ultimatum. There is still an ethical conversation to be had. Stephen noted that while nobody ever accused Photoshop of being unethical, there are legitimate challenges to be made to the people building the models and to the companies deploying them – particularly around the way they pull narrative and imagery references without consent.

But it should be understood that resistance to change is itself a decision that will have tangible impact.


Anyone can build.

Stephen described his job at 2K as far more of a culture-leading role than a technology-leading one. A significant portion of it is legal and ethical guidance – helping the company understand where the lines are and why they exist. His number one piece of advice to anyone anxious about AI? Don't worry about losing your job.

As AI absorbs more of the rote and technical execution, the people who know how to design, evaluate, and optimize what's being built will be in higher demand than ever. The bottleneck shifts from mechanical output to judgment and taste.

What AI does unlock is access. "For anyone who has the inclination to build things," Stephen said, "it's never been easier." A solo developer with a clear vision can now prototype, iterate, and communicate ideas with a speed and fidelity that wasn't possible a few years ago.

This can be a threat to the current status quo, for sure – but it can also be an opportunity for those who know how to capitalize on it.


I appreciated Stephen's overall view on AI in gaming. He's not an evangelist, telling you that AI will fix everything. And he's not dismissive, ignoring the real disruption it can create.

He's holding the tension, and doing his best to stay focused on what truly matters: the people who make games. I don't agree with every point that was made, but I do agree fully with this.

As an industry, we will continue to navigate the divide that AI has created. We should keep the discourse going, draw the necessary lines, and always aim for ethical implementation.