AI has come a long way in recent years — but as many who work with this technology can attest, it is still prone to surprising errors that a human observer would never make. While some of these errors are simply part of the learning curve inherent to artificial intelligence, it is becoming apparent that a far more serious problem poses a growing risk: adversarial data.

For the uninitiated, adversarial data describes a situation in which human users intentionally supply an algorithm with corrupted information. The corrupted data throws off the machine-learning process, tricking the algorithm into reaching false conclusions or making incorrect predictions. Read more on: The Next Web
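To make the mechanism concrete, here is a minimal, self-contained sketch of one common form of adversarial data: training-label poisoning. The data, the toy nearest-centroid model, and the function names are all hypothetical and chosen purely for illustration — the point is only that a handful of corrupted training examples can shift what the model learns and flip its predictions.

```python
# Minimal sketch of data poisoning ("adversarial data"): a hypothetical
# attacker flips a few training labels so that a simple model learns a
# skewed class boundary. All data here is synthetic and illustrative.
from statistics import mean

def nearest_centroid(train, x):
    """Classify x by distance to each class's mean value (a toy 1-D model)."""
    centroids = {}
    for label in {lbl for _, lbl in train}:
        centroids[label] = mean(v for v, lbl in train if lbl == label)
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

# Clean training set: class 0 clusters near 0-3, class 1 near 10-13.
clean = [(0, 0), (1, 0), (2, 0), (3, 0), (10, 1), (11, 1), (12, 1), (13, 1)]

# The attacker corrupts two training examples by flipping their labels,
# dragging the class-0 centroid toward class-1 territory.
poisoned = [(v, 0 if v in (12, 13) else lbl) for v, lbl in clean]

print(nearest_centroid(clean, 7))     # 7 is nearer the class-1 centroid -> 1
print(nearest_centroid(poisoned, 7))  # poisoned centroids flip it -> 0
```

With the clean data, the class centroids sit at 1.5 and 11.5, so the test point 7 is classified as class 1; after just two flipped labels, the class-0 centroid moves to roughly 5.2 and the same point is misclassified as class 0 — the "incorrect prediction" the article describes.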