How to build human-friendly AI: Can AI be aligned with human values?

There is no bigger concern in daily operations than algorithmic bias. From trading on Wall Street to social security compensation and loan approvals, the use of AI has grown, with applications ranging from self-driving vehicles to AI-powered diagnostic tools. Training datasets are highly susceptible to containing traces of discrimination, which can lead to biased decisions. But what happens when bias rules these systems? According to L. R. Varshney, Assistant Professor at the University of Illinois at Urbana-Champaign, the discrimination and unfairness present in algorithmic decision making have become an even bigger concern than discrimination by people.
It is a view seconded by Nate Soares of the Machine Intelligence Research Institute (MIRI): with AI algorithms rivalling humans in scientific inference and planning, more and more heavy computational jobs will be delegated to algorithms themselves every day. And on this path to greater intelligence, much of the work may be done by smarter-than-human systems.
Can AI unshackle its source code?
Recent AI research has taken on a new slant: ensuring that highly advanced AI systems are aligned with human interests and goals. The problem of “aligning” a superintelligence so that it retains our values is the mainstay of forward-thinking research on AI. One of the most common threads in this argument concerns advanced AI systems unshackling their source code and going rogue. Nate Soares, who heads MIRI, dismissed such dystopian fears by emphasizing that an AI system is its own source code, and its actions will only follow from the execution of the instructions that we initiate.
The more serious question posed in this debate, he noted in his talk at Google, is how we can ensure that the objectives outlined for smarter-than-human AI are correct, and how we can minimize costly accidents and unintended consequences in cases of misspecification.
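To make the misspecification worry concrete, here is a hypothetical toy sketch (not from Soares’s talk): a greedy agent that maximizes whatever reward function it is given will happily exploit a loophole in a proxy objective that omits a side effect we care about. The action names, payoffs, and penalty value are all invented for illustration.

```python
# Toy illustration of objective misspecification (all values assumed).

def choose_action(actions, reward_fn):
    """Pick the action that maximizes the given reward function."""
    return max(actions, key=reward_fn)

# Each action: (name, items_collected, vase_broken)
actions = [
    ("careful_route", 3, False),   # slower, collects fewer items
    ("reckless_route", 5, True),   # faster, but breaks a vase
]

# Misspecified proxy objective: only items collected matter.
proxy_reward = lambda a: a[1]

# Intended objective: items matter, but the side effect is penalized.
true_reward = lambda a: a[1] - (10 if a[2] else 0)

print(choose_action(actions, proxy_reward)[0])  # reckless_route
print(choose_action(actions, true_reward)[0])   # careful_route
```

The agent’s code is identical in both cases; only the objective differs. That is the sense in which the hard problem is specifying the objective correctly, not preventing the system from “breaking free” of its code.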
Even the famous computer scientist Stuart Russell, in his book Artificial Intelligence: A Modern Approach, takes the same view, stating that the primary concern in AI is not the emergence of consciousness but simply the ability to make high-quality decisions.
Let’s have a look at new developments in the field of AI safety
There’s a lot going on in the sphere of AI safety research and in the search for a full solution to the problem of aligning an artificial superintelligence with human values. In a bid to generate interest among AI researchers and scientists, senior industry members are putting money behind research papers to garner more interest in building friendly AI with human values.
MIRI outlined four areas of research that have been studied extensively and are relevant to aligning AI with human goals:
a) Building realistic world-models, the study of agents learning and pursuing goals while embedded within a physical world
b) Decision theory, the study of idealized decision-making procedures
c) Logical uncertainty, the study of reliable reasoning with bounded deductive capabilities
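Area (b), decision theory, studies idealized decision-making procedures. The textbook baseline is expected-utility maximization: weigh each action’s possible outcomes by their probabilities and pick the action with the highest expected value. The sketch below is a generic illustration of that baseline, not code from MIRI’s research; the action names and payoffs are invented.

```python
# Minimal sketch of expected-utility maximization (illustrative values).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(action_outcomes):
    """Return the action whose outcome distribution has the highest EU."""
    return max(action_outcomes, key=lambda a: expected_utility(action_outcomes[a]))

action_outcomes = {
    "safe_bet":  [(1.0, 5.0)],               # certain modest payoff, EU = 5.0
    "long_shot": [(0.1, 40.0), (0.9, 0.0)],  # risky payoff, EU = 4.0
}

print(best_action(action_outcomes))  # safe_bet
```

MIRI’s interest is in where this idealized procedure breaks down for agents embedded in the world they are reasoning about, which connects it to areas (a) and (c) above.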
Read more on Analytics India.