It might be easier to ask which aspects of modern society haven't been touched by artificial intelligence (AI) than to list all the ways it has become crucial to our daily lives, business operations, and communities. Intelligent devices affect nearly every part of society, improving efficiency and extending human potential. AI is so entangled with everything we do that it's difficult to imagine life without it. But researchers and developers are not willing to stop at these simple applications, and this is where the tech giants come up with interesting ways of improving the algorithms.
Google has been working on artificial intelligence for as long as most of us can remember. From personalized suggestions in its search engine to answering our day-to-day queries, Google uses AI to perform many tasks. It also provides personalized services and predictions in many of its apps, such as Google Maps, which picks out the route to your destination with the least traffic.
Given below are some cool Google AI experiments and frameworks that you should check out:
What Neural Networks See
Have you ever been curious about how AI software detects objects? This experiment from Google lets you see what a neural network sees. Using your camera, it lists all the objects the neural network model can identify, easily picking out pillows, faces, chairs, tables, and other everyday objects. It gives you a real sense of how smart AI is right now.
Quick, Draw!

Google has employed a neural network in this AI experiment to figure out what you have drawn. The network may not always be right, but it can make pretty accurate predictions for certain drawings; in our testing, it easily recognized most of them. The interactive game asks you to draw different things within 20 seconds, and you can hear the neural network's guesses read aloud as it processes your sketch.
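To make the idea of "matching a drawing to a known category" concrete, here is a toy sketch in Python. The real experiment uses a trained neural network; this stand-in just compares a small binary bitmap against labelled reference drawings and picks the nearest one, which is the same recognize-by-similarity idea in miniature.

```python
# Toy drawing recognition: classify a 4x4 bitmap by nearest neighbour
# against labelled reference drawings. The real experiment uses a trained
# neural network; this only illustrates the matching idea.

REFERENCES = {
    "line":   ["....",
               "....",
               "XXXX",
               "...."],
    "square": ["XXXX",
               "X..X",
               "X..X",
               "XXXX"],
}

def distance(a, b):
    """Count cells where two bitmaps disagree (Hamming distance)."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def classify(drawing):
    """Return the label of the closest reference drawing."""
    return min(REFERENCES, key=lambda label: distance(REFERENCES[label], drawing))

doodle = ["XXXX",
          "X...",
          "X..X",
          "XXXX"]            # a slightly sloppy square
print(classify(doodle))      # -> square
```

A real recognizer scores thousands of categories with learned features rather than raw pixel overlap, but the output step, picking the best-scoring label, looks just like this.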
A.I. Duet

A.I. Duet applies machine learning to a piano that responds to you as you play. It was built by the Creative Lab at Google using machine-learning tools such as TensorFlow, Tone.js, and open-source components from the Magenta project. The idea is simple: the system plays keyboard tones back to you and lets you be creative with a set of notes.
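The "responds to what you play" idea can be sketched very simply. A.I. Duet itself replies with a neural network trained on melodies; the toy below, an assumption for illustration only, answers your phrase by transposing it up a fifth, which captures the call-and-response shape without any learning.

```python
# Toy call-and-response: reply to a played phrase by transposing it.
# The real A.I. Duet generates its reply with a trained model; this
# only illustrates producing an answer from the notes you just played.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def respond(phrase, interval=7):
    """Reply to a phrase (MIDI note numbers) transposed up a fifth."""
    return [note + interval for note in phrase]

def to_names(phrase):
    """Render MIDI note numbers as note names with octave numbers."""
    return [f"{NOTE_NAMES[n % 12]}{n // 12 - 1}" for n in phrase]

played = [60, 62, 64]       # C4, D4, E4
reply = respond(played)     # G4, A4, B4
print(to_names(played), "->", to_names(reply))
```

Swapping the fixed transposition for a model that predicts the next notes from your phrase is essentially what the Magenta-based version does.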
Interplay Mode

Interplay Mode takes learning from online videos to a whole new level. It lets the AI analyze your response to a question asked in the video, for instance writing a letter in Japanese. If your handwriting matches the character being taught, you move forward in the video to learn something new; otherwise, you try again until you get the pattern right. The same technique can be applied to other lessons, such as grammar. The experiment uses speech recognition and letter recognition to support the learning experience.
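The "try again until it matches" gate is the interesting mechanic here, and it can be sketched independently of any recognizer. In the toy below, a string-similarity score stands in for the handwriting recognizer, and the lesson only advances once an attempt clears a threshold; the function names and threshold are assumptions for illustration.

```python
# Toy sketch of the lesson gate: keep the learner on the same step until
# their attempt is close enough to the target. A similarity score over
# strings stands in for the real handwriting recognizer.

def similarity(attempt, target):
    """Fraction of positions where attempt matches target (0.0 to 1.0)."""
    if not target:
        return 0.0
    matches = sum(a == t for a, t in zip(attempt, target))
    return matches / max(len(attempt), len(target))

def lesson_gate(attempts, target, threshold=0.8):
    """Return how many tries it took to pass, or None if never passed."""
    for tries, attempt in enumerate(attempts, start=1):
        if similarity(attempt, target) >= threshold:
            return tries
    return None  # keep practising

print(lesson_gate(["ね", "ぬ", "あ"], "あ"))   # passes on the third try
```

In the actual experiment the score would come from a vision model comparing your strokes to the taught character, but the gating loop around it has this shape.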
Thing Translator

This AI experiment lets you take pictures of things around your house or office and detects what each object is. You can choose to read and hear the object's name in any of nine languages. It does not know every object, but it tries to make something out of the pictures you give it. Thing Translator is useful and addictive, not just for fun but because the idea carries over to practical applications: a travel app, for example, could give people instant translations of objects into the local language.
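Under the hood this is a two-step pipeline: recognize the object, then translate its name. The sketch below shows that pipeline shape; `recognize()` is a stand-in for the real vision model (here it just reads the filename), and the tiny dictionary stands in for a translation service, so every name in it is an assumption for illustration.

```python
# Minimal sketch of the Thing Translator pipeline: an image-recognition
# step produces an English label, which is then looked up in the user's
# chosen language. Both steps are stand-ins for the real services.

TRANSLATIONS = {
    "cup":   {"es": "taza",  "ja": "カップ", "de": "Tasse"},
    "chair": {"es": "silla", "ja": "椅子",   "de": "Stuhl"},
}

def recognize(image):
    """Stand-in for the vision model: pretend the filename is the label."""
    return image.rsplit(".", 1)[0]

def translate_thing(image, language):
    """Recognize the object in the image, then translate its name."""
    label = recognize(image)
    translated = TRANSLATIONS.get(label, {}).get(language, label)
    return label, translated

print(translate_thing("cup.jpg", "es"))   # -> ('cup', 'taza')
```

Falling back to the English label when no translation exists mirrors the experiment's behaviour of always offering its best guess rather than failing silently.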
Giorgio Cam

This Google AI project tries to create a song from the pictures you snap with it. Giorgio Cam uses image recognition to work out what you took a picture of and then turns the image labels into song lyrics, which are sung in a rock-star voice over some kind of jazz music. The model is still learning and sometimes mislabels images, which only makes the resulting songs more hilariously awesome, but you can still see the power of AI through it.
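The labels-to-lyrics step is simple to picture in code. The template below is invented for illustration, not Google's actual lyric logic; it just shows how recognition labels can be dropped into a repeating call-and-response pattern to produce singable lines.

```python
# Toy sketch of turning recognition labels into lyrics. The template is
# an invented stand-in; Giorgio Cam then sings the lines over a backing
# track, which is out of scope here.

def make_lyric(labels):
    """Turn image labels into a short call-and-response lyric, one line each."""
    hooks = ["oh yeah", "that's what I see"]
    return "\n".join(
        f"Is that a {label}? {hooks[i % len(hooks)]}!"
        for i, label in enumerate(labels)
    )

print(make_lyric(["coffee mug", "laptop"]))
```

Mislabelled images slot into the template just as happily as correct ones, which is exactly why the experiment's wrong guesses still produce entertaining songs.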
Aside from the experiments above, Google hosts a number of others that you can explore on the same site. Not only will they give you some perspective on how far AI and ML have advanced, but you can also use them in your own projects, since the source code is provided alongside each demo.