Nothing is off-limits to artificial intelligence — even your favorite old video games.
An artificial intelligence developed by researchers at the University of Freiburg in Germany has beaten the Q*bert arcade game by exploiting glitches in its design.
In the game, players take the role of cartoon character Q*bert, who hops around a pyramid of 28 cubes. Every time Q*bert lands on a cube, it changes color. Players are tasked with changing every cube’s color without being captured by enemies that also roam around the pyramid.
The AI found two sneaky ways to beat the game. First, it baited an enemy into following it, then killed itself by jumping off the pyramid. Though Q*bert lost a life, destroying the pursuing enemy earned enough points to win a replacement life and repeat the cycle.
Additionally, by jumping around the pyramid in a (seemingly) random fashion, the AI caused the pyramid’s tiles to begin to blink, and was granted more than one million points.
The researchers believe that no human had ever uncovered these loopholes before, but that claim may not be entirely fair. The researchers tested their AI on a later version of Q*bert, and the game's original developer says the arcade version didn't have such bugs.
Since I designed and programmed the original arcade version, I can’t really say much about any port. This certainly doesn’t look right, but I don’t think you’d see the same behavior in the arcade version.
— Warren Davis (@WarrenDavis29) February 28, 2018
So it’s possible that humans could have found these loopholes as well. Nonetheless, the AI was able to find them after only five hours of training, which is probably less time than it would take most humans to beat the game.
The researchers used a family of algorithms called "evolution strategies." As the name implies, evolution strategies generate many candidate solutions, evaluate their performance through trial and error, and carry the best performers forward into the next round.
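The generate-evaluate-select loop described above can be sketched in a few lines of Python. This is a hypothetical toy example, not the researchers' code: the "candidate" here is just a vector of numbers, and the fitness function rewards closeness to a hidden target, standing in for a game score.

```python
import random

# Hidden target the evolution strategy must discover (a stand-in for
# "parameters that score well in the game").
TARGET = [0.5, -1.2, 3.0]

def fitness(params):
    # Higher is better: negative squared distance to the hidden target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def evolve(generations=200, population=50, sigma=0.1, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0, 0.0]  # initial candidate
    for _ in range(generations):
        # Generate many randomly perturbed candidates ("mutations"),
        # keeping the current best in the pool so fitness never regresses...
        candidates = [best] + [
            [p + rng.gauss(0, sigma) for p in best]
            for _ in range(population)
        ]
        # ...and select the top performer for the next generation.
        best = max(candidates, key=fitness)
    return best

solution = evolve()
```

Real evolution strategies perturb the weights of a game-playing policy rather than a three-number vector, but the principle is the same: no gradients, just mutation and selection.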
In the paper, the researchers suggest that evolution strategies can be considered "a potentially competitive approach to modern deep reinforcement learning algorithms." Deep reinforcement learning algorithms use neural networks, loosely inspired by the brain, to teach themselves effective strategies. A number of well-known artificial agents fall into this category, including AlphaGo, built by Alphabet Inc.'s DeepMind, which recently became one of the world's most dominant Go players.
It’s also possible that these algorithms could end up working together. “Since evolution strategies have different strengths and weaknesses than traditional deep reinforcement learning strategies, we also expect rich opportunities for combining the strength of both,” the researchers wrote.
This study is a good sign for our robot overlords, which grow more capable every day. In a recent study, AI outperformed lawyers at interpreting legal contracts. A Google algorithm has taught itself to recognize patients at risk of heart disease; it doesn't yet outperform existing medical approaches, but it's on its way.
It's a serious but exciting reminder for all of us: when it comes to skilled AI, nothing is out of reach, not even your childhood arcade games.