Artificial General Intelligence, or AGI for short, is a term often thrown around without a clearly defined meaning. When AGI arrives, we are told, robots will suddenly be smarter than humans. But what does this actually mean – how do we define smarter? The answer is more complex than you might imagine.
The AI field has been around for decades. One early milestone came in the early 1950s, when Arthur Samuel programmed a computer to play checkers – a landmark in machine learning, though a narrow one rather than a step toward general intelligence. Samuel’s program was not the first attempt at machine intelligence, either: researchers such as Alan Turing were already asking whether machines could “think” years before it was written.
What makes AGI different from other programs that appear to “think” is that it can successfully perform a broad range of cognitive tasks. We are not talking about simple responses to stimuli, but about complex behavior. To reach this level of sophistication, an AGI must be able to learn and adapt, to reason and understand, and to communicate. In other words, it must be able to perform the tasks we think of as distinctly human.
The confusion starts with the name itself. In 1956, John McCarthy, a computer scientist and AI pioneer, coined the term “artificial intelligence” for the Dartmouth workshop – but that term covers the whole field, narrow programs included. “Artificial general intelligence” is a much later coinage: it appeared in the late 1990s and was popularized in the early 2000s by researchers such as Ben Goertzel and Shane Legg, precisely to distinguish a hypothetical machine with the full capabilities of a human mind from the specialized systems the field was actually producing.
AGI researchers have recently turned to neurobiology and cognitive psychology to better understand how the human brain works. The hope is that by studying the mechanisms of cognition, researchers will eventually learn how to build a computer that thinks like a human. This focus on cognition matters because, unlike narrow machine learning, the field has no agreed-upon mathematical or computational definition of general intelligence to build on.
The complexity of building an AGI is so great that some researchers have turned to evolutionary algorithms for a head start. An evolutionary algorithm uses the principles of natural selection to find solutions to problems. Instead of hand-coding instructions to perform a task, the algorithm starts with a large population of candidate solutions and lets them compete with each other. Over generations, the more successful candidates win out over the failures, and the population as a whole steadily improves. The results vary, but they are sometimes better than anything produced by traditional programming.
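To make the idea concrete, here is a minimal sketch of an evolutionary algorithm in Python. It evolves bit strings toward a toy goal of all ones; the population size, mutation rate, and fitness function are illustrative choices, not taken from any particular AGI system.

```python
import random

BITS = 20            # length of each candidate solution (a bit string)
POP_SIZE = 50        # number of candidates alive at any one time
MUTATION_RATE = 0.02 # chance that any single bit flips
GENERATIONS = 100

def fitness(candidate):
    # Toy objective: count the ones. A real problem would plug in a real score here.
    return sum(candidate)

def crossover(a, b):
    # Single-point crossover: splice the front of one parent onto the back of the other.
    point = random.randrange(1, BITS)
    return a[:point] + b[point:]

def mutate(candidate):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in candidate]

# Start from random candidates rather than hand-coded instructions.
population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: keep the fitter half, let the rest die out.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Reproduction: refill the population with mutated offspring of the survivors.
    offspring = [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP_SIZE - len(survivors))
    ]
    population = survivors + offspring

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best)}/{BITS}")
```

Swap in a harder fitness function and the same loop applies unchanged; that generality is part of what makes the approach attractive to AGI researchers.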
How close are we?
The short answer is that no one knows. Ask any two AI researchers and you will get two different answers and at least one passionate argument. Some say we are at an inflection point. Others say we are closer to the start than the finish, and most will tell you there is still a long way to go.
The problem with predictions is that the field has a poor track record of making them. Researchers in the 1960s believed human-level AI was just around the corner – Herbert Simon predicted in 1965 that machines would be capable, within twenty years, of doing any work a man can do – and when that optimism faded, the field retreated to narrower problems that seemed easier to solve. So, if you are a fan of artificial intelligence and would love to see an AGI built in your lifetime, you may be in luck – or you may not.
While artificial general intelligence is a goal in itself, most researchers are not focused on building a machine that can pass the Turing test. Instead, they hope to use AGI technology to solve real-world problems. And to develop a machine that can think like a human, you must first understand how a human thinks. It’s a tall order.
Most AGI programs are limited to text-based communication. Some can communicate using speech, but they still struggle with syntax, context, and other complexities of language. To communicate like a human, an AGI must not only recognize speech but respond in a way that makes sense to the listener.
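To see why surface-level text handling hits a ceiling, consider a minimal responder in the spirit of Joseph Weizenbaum’s 1966 ELIZA program. This Python sketch is purely illustrative – the patterns and replies are invented – but it shows how a program can hold a superficially sensible conversation by matching patterns, with no model of meaning behind it.

```python
import re

# Invented pattern/response pairs: surface-level rules, no understanding.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I),   "Is that the real reason?"),
]
FALLBACK = "Tell me more."

def respond(utterance: str) -> str:
    # The first matching rule wins; captured text is echoed back verbatim,
    # which is exactly where syntax and context start to break down.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I need a vacation"))  # Why do you need a vacation?
print(respond("I am tired"))         # How long have you been tired?
```

Echoing captured text back works until pronouns, tense, or word order need to change – exactly the syntactic and contextual issues described above.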
Finally, AGI researchers are still trying to figure out how to teach a machine to reason and to solve problems. The best programs today can perform only a very limited set of cognitive tasks, and despite recent advances in deep learning, a system faced with a genuinely new task must still be reprogrammed or retrained to handle it.
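As a concrete example of how “problem solving” is typically hand-built for one task at a time, here is a classic state-space search in Python: breadth-first search over a toy water-jug puzzle (measure exactly 2 liters using a 3-liter and a 5-liter jug). The puzzle and its move rules are coded by hand; pointing the program at a different problem means writing new code, which is precisely the limitation described above.

```python
from collections import deque

CAPACITIES = (3, 5)  # jug sizes in liters
GOAL = 2             # target amount in either jug

def moves(state):
    # Enumerate every legal action from a state: fill, empty, or pour.
    a, b = state
    yield (CAPACITIES[0], b)    # fill jug A
    yield (a, CAPACITIES[1])    # fill jug B
    yield (0, b)                # empty jug A
    yield (a, 0)                # empty jug B
    pour = min(a, CAPACITIES[1] - b)
    yield (a - pour, b + pour)  # pour A into B
    pour = min(b, CAPACITIES[0] - a)
    yield (a + pour, b - pour)  # pour B into A

def solve():
    # Breadth-first search: explores states level by level,
    # so the first solution found uses the fewest moves.
    start = (0, 0)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if GOAL in state:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(solve())  # [(0, 0), (0, 5), (3, 2)]
```

The search itself is general-purpose, but everything that makes it solve *this* problem – the state representation, the legal moves, the goal test – had to be written by a human who already understood the puzzle.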
Conclusion
Artificial general intelligence, if we’re being realistic, is still in its infancy. It is nonetheless an incredibly exciting field, one that promises to solve problems that have plagued humanity for centuries. Optimistic experts predict that AGI will be a reality within the next 20 years; others say it will take far longer. In reality, nobody knows.