Given the glut of AI “breakthroughs” and wannabe “breakthroughs,” it’s sometimes tough to pinpoint events that, if not seismic in themselves, are the kind of incremental advances that portend an “a-ha!” moment to come, such as the evening you realize Netflix’s recommendation system understands your movie tastes better than your spouse does.
Last week, we covered recent advances by the OpenAI group in creating reinforcement learning algorithms that led to grounded, compositional language use by an AI. Now Baidu has weighed in with its own advances on a similar front: an artificial agent was taught English by a “virtual teacher” and could thereafter understand written commands and apply them in a video-game setting. This allowed the Baidu AI to achieve what’s called zero-shot learning, the ability to solve a task without having received any previous examples of solving such a task, something that has remained elusive for AIs up to this point.
We humans do the above kind of thing all the time, thanks to our use of language. For example, some time ago, I had to remove a broken window from my house. Unsure of how to proceed, I pulled up a guy on YouTube wearing a camo hunting cap who explained that I needed to go at the sucker with a crowbar, wedging it along the sides of the aluminum frame until it broke free from the plaster molding. Voilà: zero-shot learning.
Baidu has been something of a dark horse where AI breakthroughs are concerned; the company’s pronouncements always seem to fall short of the shockwaves sent out by DeepMind, Facebook, and IBM. Nevertheless, this recent achievement could have some important consequences. It also helps to distinguish the Baidu AI’s language use from anything you’ve encountered with Siri or Google Assistant. In the Baidu project, a virtual teacher gave positive or negative feedback to an agent that was responding to commands issued by the teacher. If the agent rightly connected the command with the intended action, it received a reward; if it failed, it was penalized. The agent slowly learned the correct meaning and usage of words. Later, when presented with an unfamiliar command, it was able to infer the correct meaning and fulfill the desired goal, an example of zero-shot learning. While this whole process took place in a simple 2D maze world, the approach could likely be extended to richer 3D environments, and from there to real-world settings.
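The training loop described above (commands in, actions out, scalar feedback from a teacher) can be sketched as a toy reinforcement learning problem. The sketch below is purely illustrative: Baidu's actual agent learns with deep networks in a 2D world, not a word-to-meaning lookup table, and the vocabulary, the reward scheme, and the simplification of grading each word separately are all assumptions made here for brevity. What it preserves is the key idea: an agent rewarded only for "go apple", "go pear", and "grab apple" can compose the right response to the never-seen command "grab pear".

```python
import random

# Hypothetical toy vocabulary (not Baidu's actual setup): each command is a
# verb + noun, and the correct action is the matching (motion, target) pair.
VERBS = {"go": "move", "grab": "pick_up"}
NOUNS = {"apple": "apple_cell", "pear": "pear_cell"}

# Training commands deliberately omit "grab pear" so we can test zero-shot use.
train = [("go", "apple"), ("go", "pear"), ("grab", "apple")]

# Q[word][meaning]: the agent's learned association between words and meanings.
meanings = list(VERBS.values()) + list(NOUNS.values())
Q = {w: {m: 0.0 for m in meanings} for w in list(VERBS) + list(NOUNS)}

def act(command, epsilon=0.0):
    """Pick one meaning per word, epsilon-greedy over learned associations."""
    chosen = []
    for word in command:
        if random.random() < epsilon:
            chosen.append(random.choice(meanings))  # explore
        else:
            chosen.append(max(Q[word], key=Q[word].get))  # exploit
    return chosen

random.seed(0)
for episode in range(2000):
    verb, noun = random.choice(train)
    guess = act((verb, noun), epsilon=0.2)
    correct = [VERBS[verb], NOUNS[noun]]
    # Teacher feedback, simplified here to per-word reward/penalty.
    for word, meaning, target in zip((verb, noun), guess, correct):
        reward = 1.0 if meaning == target else -1.0
        Q[word][meaning] += 0.1 * (reward - Q[word][meaning])

# Zero-shot test: a command the agent was never trained on.
print(act(("grab", "pear")))  # -> ['pick_up', 'pear_cell']
```

Because the agent learns what "grab" means from one command and what "pear" means from another, it can handle their combination without ever having seen it, which is the compositional trick at the heart of the result.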
Taken individually, it’s unlikely that either the OpenAI language project or the Baidu achievement signifies a sea change in how your devices will respond to language commands in the coming months. Collectively, though, they highlight the enormous energies being poured into natural language processing, the ability of machines to understand language as humans do, and suggest that incremental advances are being made on this front. Given the pace of these achievements (two in as many weeks!), AI with strong natural language abilities could arrive a lot sooner than many skeptics believe. Make no mistake: such an AI could pass a Turing test and house a Joycean virtual machine with a center of narrative gravity of the kind elucidated by Daniel Dennett in Consciousness Explained. It’s none too early to talk seriously about the laws and rights governing machine consciousness, and about their implications for humanity once it is achieved.