A New Way To Build Tiny Neural Networks Could Create Powerful AI On Your Phone

Neural networks are the core software of deep learning. But despite how widespread they are, they remain poorly understood. Researchers have observed their emergent properties without actually understanding why they work the way they do.

Now a new paper out of MIT has taken a major step toward answering that question. In the process, the researchers made a simple but dramatic discovery: we've been using neural networks far bigger than we actually need. In some cases they're 10 or even 100 times bigger, so training them costs orders of magnitude more time and computational power than necessary.
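To make the scale of that claim concrete, one common way researchers shrink a trained network is magnitude pruning: after training, the smallest weights are zeroed out, leaving a far sparser subnetwork. The sketch below is illustrative only, not the MIT paper's exact procedure; the layer size and the 90 percent pruning ratio are assumptions chosen for the example.

```python
# A minimal sketch (not the paper's method) of magnitude pruning on one layer:
# keep only the largest-magnitude weights and zero out the rest.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 128))    # stand-in for one trained weight matrix

prune_fraction = 0.90                     # assumed ratio, for illustration only
threshold = np.quantile(np.abs(weights), prune_fraction)
mask = np.abs(weights) >= threshold       # True for the weights we keep

pruned = weights * mask                   # pruned layer: ~90% of entries are zero
print(f"kept {mask.sum()} of {mask.size} weights "
      f"({mask.mean():.0%} of the original layer)")
```

In practice, pruning like this after training can remove most of a network's weights with little loss in accuracy, which is what motivates the question of whether the small subnetwork could have been trained on its own from the start.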

A diagram of a neural network learning to recognize a lion. JEFF CLUNE/SCREENSHOT