As AI's ubiquity seems increasingly certain, we must ask ourselves whether it is being implemented in the right way.
This summer marked the return of the NCAA Football video game series with the release of College Football 25, or CFB25. Within weeks, it exploded in popularity, becoming the highest-selling video game of 2024. Gamers loved the electric offense and fun atmosphere, while college football fans appreciated the inclusion of all 134 Football Bowl Subdivision teams.
As artificial intelligence develops, it has started to creep into fields once reserved for artists, and EA Sports, the creator of CFB25, is no exception. Each team shipped with its full roster, multiple uniform designs, its home stadium, and its own mascots and student traditions, which required producing thousands of accurate 3D art assets. To build a model for each of the game's 11,390 real players, EA created its own AI that scans a player's image and references a massive library of head shapes, hairstyles and other features to generate a unique likeness for every player.
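EA has not published how its pipeline works, so the following is only a minimal sketch of the general idea the article describes: measuring features from a player photo and matching them against a library of preset head shapes and hairstyles. All names, feature vectors, and presets below are made up for illustration.

```python
import numpy as np

# Hypothetical "library" of preset head shapes and hairstyles, each described
# by a small feature vector (e.g., jaw width, face length, hairline height).
# A real system would use far richer representations; this is only a sketch.
HEAD_SHAPES = {
    "round":  np.array([0.9, 0.7, 0.5]),
    "oval":   np.array([0.6, 0.9, 0.5]),
    "square": np.array([0.95, 0.8, 0.4]),
}
HAIRSTYLES = {
    "buzz":  np.array([0.1, 0.2]),
    "curly": np.array([0.8, 0.9]),
    "locs":  np.array([0.9, 0.6]),
}

def closest(library: dict, features: np.ndarray) -> str:
    """Return the name of the preset whose feature vector is nearest to `features`."""
    return min(library, key=lambda name: np.linalg.norm(library[name] - features))

def build_player_model(face_features, hair_features) -> dict:
    """Assemble a placeholder 'model' by matching measured features to presets."""
    return {
        "head": closest(HEAD_SHAPES, np.asarray(face_features)),
        "hair": closest(HAIRSTYLES, np.asarray(hair_features)),
    }

if __name__ == "__main__":
    # These numbers stand in for measurements that would come from scanning a photo.
    print(build_player_model([0.92, 0.78, 0.45], [0.85, 0.88]))
    # -> {'head': 'square', 'hair': 'curly'}
```

The point of the sketch is only to show why this scales: once the preset library exists, generating another player is a lookup rather than a hand-built sculpture.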
There is no question that other companies will turn to AI in an attempt to emulate this success, but are there aspects of AI that should give us pause before we implement it everywhere?
I met with philosophy professor Annette Zimmermann to talk about this moral dilemma, which she addresses in her book "Democratizing AI." Zimmermann has a rich background in research on the ethics of AI, having studied at Oxford, Yale, and Princeton. She then spent two years as a permanent lecturer on AI at the University of York and as a Harvard technology and human rights fellow. Drawn to the university's philosophy program, she is now an assistant professor here at UW-Madison.
Zimmermann compares the use of AI in media like video games to the way painters often use assistants to handle the repetitive aspects of their work. "All the most famous artists have a herd of assistants that do the majority of the work – the artist just implements a set of precise instructions. In a way, that's an algorithm," Zimmermann explains.
In this sense, the use of AI is not entirely new in substance. Even before AI, we existed in a world where art was produced through outsourcing the menial aspects of creation. If these older paintings are considered valid, then why not works aided by AI?
According to Zimmermann, that leaves one question: does anything change when the assistant doing the work is a machine rather than a person?
Without AI, a human artist would have to build each football player's model for the game by hand. Instead, these models are pumped out by an algorithm, and turning a person into a purchasable character feels hollow when that character was created by a thoughtless machine.
Zimmermann notes, “There is something that feels quite intrusive about selling your likeness in this sort of all-encompassing way. It’s not like we are just buying your jersey with your number, we are buying the opportunity to pretend for ourselves that we are you.”
However, Zimmermann adds that selling our personality and likeness is not entirely new, as everyone on the job market does this in some form. She says, "We all commodify ourselves in a lot of ways on the job market, like we are always selling something."
The ethical concerns raised by these technologies are unique to the twenty-first century, and there are no concrete answers. Zimmermann emphasizes that we must be wary of the overheated rhetoric surrounding this new technology: "Resisting AI hype and doomsday thinking is the only path forward to come to better AI policy and better AI research."
Nobody can say for certain how AI will shape society, or whether it will be for better or worse. What remains clear is that we must look at this new technology objectively and keep asking whether we are using it in the right way, even in the art of making video games.