Google's Artificial Intelligence Now Thinks And Plays Video Games Like We Do

The quest for an artificial intelligence paralleled only by human intellect has been ongoing in science and technology for decades. It began in science fiction, but it later made its way into science fact. Looking back, very early video games and computer programs were among the first forms of artificial intelligence ever created... but that has now changed.

In 1951, Dr. Dietrich Prinz wrote a chess program for the Manchester Ferranti Mark 1 (the first commercially available general-purpose computer). It was a very simple program, but it could play against some amateur chess players and actually win; however, it was just that: a computer program. Most artificial intelligences, such as those in video games, are pre-programmed to understand a game and then carry out what they have been programmed to do. In a real first for artificial intelligence, Google's DeepMind AI is now capable of learning and then playing Atari 2600 games in the same manner a human would.

(Image: Atari 2600)

Google's DeepMind is an artificial intelligence program created by DeepMind Technologies, a startup acquired by Google. The DeepMind AI begins a game the same way a human would: it plays through an early level, learns the controls, learns the consequences of its choices, and then replays and masters each level. It is essentially a learning program that can play video games the way a human would, and improve with experience.
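The coverage doesn't spell out the algorithm, but DeepMind's published approach is reinforcement learning, specifically deep Q-learning. As a minimal sketch of the underlying idea (try actions, collect rewards, update an estimate of which action is "best" in each situation), here is tabular Q-learning on a toy five-state "corridor" game. The environment and all names here are illustrative, not DeepMind's code; their system replaces the lookup table with a deep neural network that reads raw screen pixels.

```python
import random

# Toy "corridor" game: states 0..4, reward 1.0 for reaching state 4.
N_STATES = 5
ACTIONS = [0, 1]            # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, seed=0):
    random.seed(seed)
    # Q-table: estimated long-term value of taking each action in each state.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the current estimates,
            # but explore a random action 10% of the time.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                best = max(q[(state, a)] for a in ACTIONS)
                action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
            nxt, reward, done = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Q-learning update: nudge the estimate toward
            # (immediate reward + discounted best future value).
            q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# After training, "move right" scores higher than "move left" in every
# non-terminal state: the agent has learned the game purely from rewards.
```

The same loop (observe, act, score, update) is what lets the Atari-playing agent improve from one play-through to the next without being told the rules of any game.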

The company has shown on numerous occasions that the program can learn just as a human does while playing, and then go on to beat scores set by human testers. It has the ability to adapt and decide on the "best" course of action for itself.

Many people out there are probably thinking, "Eh. So what? It's for video games." Currently, yes, but DeepMind Technologies says that "The ultimate goal is to build general purpose smart machines": essentially, machines capable of learning like we do. That may be decades away, but this is still a significant advancement. A program that can learn directly from experience and teach itself is an incredible feat. Developers may even be able to learn from the program, based on the results they see during and after testing. In a way, it is like creating a machine that can talk back to its developer during development and offer its own "opinions".

As proud and excited as I am to hear this, I keep thinking of 2001: A Space Odyssey and hearing "I'm sorry, Dave. I'm afraid I can't do that." And now... I can't wait to have my video game console talk back to me and question my decision making! That being said, having Google's artificial intelligence think and play video games like we do is too exciting to pass up.

Source: Wired