How To Address New Privacy Issues Raised By Artificial Intelligence And Machine Learning

For generations, companies have collected large amounts of information about consumers and have used it for marketing, advertising, and other business purposes. They regularly infer details about some customers based on what others have revealed. Marketing companies can predict what television shows you watch and what brand of cat food you buy because consumers in your demographic and area have revealed these preferences. They add these inferred characteristics to your profile for marketing purposes, creating a “privacy externality” where information others disclose about themselves also implicates you.
Machine learning increases the capacity to make these inferences. The patterns found by machine learning analysis of your online behavior can reveal your political beliefs, religious affiliation, race, ethnicity, health conditions, gender, and sexual orientation, even if you have never revealed this information to anyone online. The presence of a digital Sherlock Holmes in virtually all online spaces, making deductions about you, means that giving consumers control over their own information will not protect them from indirectly disclosing even their most sensitive information.
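To make the mechanism concrete, here is a minimal sketch of how such an inference works. Everything in it is invented for illustration: the data is synthetic, the correlations are planted by hand, and a simple scikit-learn logistic regression stands in for the far more powerful models companies actually deploy.

```python
# Illustrative sketch only: a toy model showing how innocuous behavioral
# signals (here, synthetic "page-like" features) can predict a sensitive
# attribute a user never disclosed. All data below is fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_signals = 5_000, 50
# Binary matrix of innocuous signals, e.g., which pages each user "liked".
likes = rng.integers(0, 2, size=(n_users, n_signals))

# Assumption for illustration: a hidden sensitive attribute that happens
# to correlate weakly with a handful of those innocuous signals, because
# other users who share the attribute disclosed both.
weights = np.zeros(n_signals)
weights[:5] = [1.2, -0.8, 0.9, 1.1, -1.0]
logits = likes @ weights + rng.normal(0, 1, n_users)
sensitive = (logits > logits.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, sensitive, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
# Even this crude linear model recovers the attribute well above chance.
# The user being classified never revealed it; other people's disclosures
# taught the model which signals to look for.
```

The point of the sketch is the externality: withholding your own sensitive data does not help once enough other people have supplied both the signals and the labels.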
For this reason, policymakers need to craft new national privacy legislation that accounts for the numerous limitations that scholars such as Woody Hartzog have identified in the notice-and-consent model of privacy that has guided privacy thinking for decades. The way machine learning techniques exacerbate privacy externalities is just one more reason why new privacy rules are needed.
SHOULD NEW PRIVACY LEGISLATION REGULATE ARTIFICIAL INTELLIGENCE RESEARCH?
For decades, university institutional review boards (IRBs) have regulated academic research, aiming primarily to protect human research subjects against harm from the research process itself. But online companies now regularly conduct similar research on human subjects outside the academic setting, often through simple A/B testing aimed at improving customer engagement, without needing to seek IRB approval. As Facebook discovered several years ago from the backlash to its emotional contagion study, many people are concerned about this loophole and have called for greater controls on company-sponsored research.
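For readers unfamiliar with the practice, the A/B testing mentioned above usually amounts to a small statistical comparison like the sketch below. The engagement counts are hypothetical, and the chi-square test is just one common way such experiments are evaluated, not a claim about any particular company's methodology.

```python
# A minimal sketch of an A/B test: two versions of a page are shown to
# different user groups and a significance test picks the "winner".
# The counts here are invented for illustration.
from scipy.stats import chi2_contingency

# [clicked, did_not_click] for each variant (hypothetical numbers)
variant_a = [430, 9570]   # 4.3% engagement
variant_b = [510, 9490]   # 5.1% engagement

chi2, p_value, _, _ = chi2_contingency([variant_a, variant_b])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would lead the company to ship variant B. The users in
# both groups are human subjects in an experiment, yet no review board
# ever sees it: the study happens entirely outside the IRB system.
```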
Some businesses have responded by setting up internal review boards to assess projects that might have significant ethical consequences. Interestingly, these reviews go beyond protecting the human research subjects themselves and touch on the broader question of whether the insights gained from the research might have harmful downstream consequences for a wider population.
The recent controversy over facial recognition software that claims to predict people's sexual orientation from their facial characteristics alone shows why ethical review must move beyond protecting human subjects. Dubbed “gayface” software, this experimental tool was trained on publicly available photographs. Because the technology has no foreseeable beneficial use, developing an algorithm that can serve only harmful, discriminatory purposes may itself be unethical.
Read more at Brookings.