Over the decades since the inception of artificial intelligence, research in the field has fallen into two main camps. The “symbolists” have sought to build intelligent machines by coding in logical rules and representations of the world. The “connectionists” have sought to construct artificial neural networks, inspired by biology, to learn about the world. The two groups have historically not gotten along.
But a new paper from MIT, IBM, and DeepMind shows the power of combining the two approaches, perhaps pointing a way forward for the field. The team, led by Josh Tenenbaum, a professor at MIT’s Center for Brains, Minds, and Machines, created a computer program called a neuro-symbolic concept learner (NS-CL) that learns about the world (albeit a simplified version) just as a child might: by looking around and talking.

READ MORE ON: MIT TECHNOLOGY REVIEW