A.I. Can Now Read Your Thoughts And Turn Them Into Words and Images

A recent article in Nature highlights a discovery that pushes the boundaries of our imaginations and challenges some of the very attributes that make us human. The piece details how artificial intelligence is creating speech by interpreting brain signals (and even offers an audio recording so you can hear it for yourself). It’s a key advancement for people who can’t speak because it provides a direct, technologically enabled path from thought to speech.

The implications of this discovery go beyond the recreation of speech: A.I. was used to decode brain waves and then reassemble neural impulses. While the focus of this study was on the mechanistic components of speech, such as direct muscle movement, it still acquired information from the early stages of thought development to construct words that were identifiable about 70% of the time. In other words, A.I. actually translated the code that makes up pre-speech.
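To make the general idea concrete, the toy sketch below is not the study’s actual pipeline (which trained deep networks on intracranial recordings); it simply uses made-up data and ordinary ridge regression to decode simulated “neural” activity into acoustic-like features, then scores how often a decoded trial can be matched back to its true target, loosely analogous to the roughly 70 percent identifiability figure. Every dimension and value here is invented for illustration.

    # Illustrative sketch only: decode simulated "neural" signals into
    # acoustic-like features and score how identifiable the results are.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_acoustic = 500, 64, 32

    # A made-up linear mapping plus noise stands in for the real, far more
    # complex relationship between neural activity and speech acoustics.
    true_map = rng.normal(size=(n_channels, n_acoustic))
    neural = rng.normal(size=(n_trials, n_channels))
    acoustic = neural @ true_map + rng.normal(scale=3.0, size=(n_trials, n_acoustic))

    X_train, X_test, y_train, y_test = train_test_split(neural, acoustic, random_state=0)

    # Train a simple decoder from "brain" activity to acoustic-like features.
    decoder = Ridge(alpha=1.0).fit(X_train, y_train)
    decoded = decoder.predict(X_test)

    # Identifiability: is each decoded trial closer to its own target than to
    # every other held-out target? (a loose analogue of word identification)
    correct = sum(
        int(np.argmin(np.linalg.norm(y_test - pred, axis=1)) == i)
        for i, pred in enumerate(decoded)
    )
    print(f"identified {correct / len(decoded):.0%} of held-out trials")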

A.I. has also enabled the recreation of another sense through the reading of neural output: vision. In a recent study, for instance, functional magnetic resonance imaging (fMRI) data was combined with machine learning to visualize perceptual content from the brain. Image reconstruction from this brain activity—which was translated by A.I.—recreated images on a computer screen that even the casual observer could recognize as the original visual stimuli.
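Again purely as an illustration, and not the study’s method (which relied on far richer models and real fMRI data), the sketch below treats scikit-learn’s small digit images as stand-in visual stimuli, simulates voxel responses as a noisy linear encoding of the pixels, and trains a ridge-regression decoder to reconstruct the images from that simulated “brain” activity.

    # Illustrative sketch only: reconstruct stand-in visual stimuli from
    # simulated voxel responses with a simple learned decoder.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    images = load_digits().data / 16.0   # 8x8 digit images, flattened to 64 pixels
    n_voxels = 200

    # Hypothetical encoding model: each simulated voxel responds to a random
    # weighted mix of pixels, plus measurement noise.
    encoding = rng.normal(size=(images.shape[1], n_voxels))
    voxels = images @ encoding + rng.normal(scale=0.5, size=(len(images), n_voxels))

    X_train, X_test, y_train, y_test = train_test_split(voxels, images, random_state=1)

    # Decoder: learn to map simulated voxel activity back to pixel intensities.
    decoder = Ridge(alpha=10.0).fit(X_train, y_train)
    reconstructed = decoder.predict(X_test)

    # How closely do the reconstructions track the original stimuli?
    corr = np.mean([np.corrcoef(r, t)[0, 1] for r, t in zip(reconstructed, y_test)])
    print(f"mean pixel correlation with the original images: {corr:.2f}")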

But here’s where it gets really interesting: These advancements create the potential for a new level of direct communication mediated not by humans but by A.I. and technology.

Steps are currently being taken to transition such technology from research to real-life application. The utility of an electronic mesh—a microscopic network of flexible circuits that is placed into the brain and becomes interwoven with actual nerve cells—is being tested in animals now. Even Elon Musk has joined the effort to transmit impulses directly to and from the human brain. His company Neuralink is currently developing an interface between the computer and the brain using neural lace technology—a cellular infrastructure that allows microelectrodes to be incorporated into the structure of the brain itself.

What lies ahead is a further blurring of the distinction between man and machine: A.I. may soon find a new home as less of an external device and more of a neuromechanical, biological system that lives within our bodies. The codification of speech and vision into pre-sensory data and the potential creation of miniature, biologically compatible interfaces will open a new vista for biology and technology, one where the sum of the parts—human and electronic—combines to transcend the limitations of the cell and the electron.

John Nosta is president of NostaLab.

Read more on Fortune: “A.I. Can Now Read Your Thoughts And Turn Them Into Words and Images.”
