Can We Make Artificial Intelligence Accountable?
The lack of explainability in decisions made by Artificial Intelligence (AI) programs is a major problem. This inability to understand how AI does what it does also prevents its deployment in areas such as law, healthcare, and enterprises that handle sensitive customer data. Understanding how data is handled, and how AI has reached a certain decision, is even more important in the context of recent data protection regulation, especially the GDPR, which heavily penalizes companies that cannot provide an explanation and record of how a decision was reached, whether by a human or a computer.

READ MORE ON: FORBES