It might be easier to ask which aspects of modern society have not been touched by artificial intelligence (AI), which shows how crucial it has become to our daily lives, business operations, and communities. Intelligent devices affect every part of society, helping to improve efficiency and expand human potential. AI is so entangled with everything we do that it is difficult to imagine life without it. Yet researchers and developers are not willing to stop at these everyday applications.
 
Tech giants keep coming up with interesting ways to improve their algorithms. Google has been working on artificial intelligence for as long as most of us can remember: from personalized suggestions in the search engine to answering our day-to-day queries, it uses AI across countless tasks. Along the way, it has also published some fascinating online AI experiments.
 
Google also provides personalized services and makes predictions in many of its apps. In Google Maps, for example, it suggests the route to your destination with the least traffic.

Google AI Experiments

Below are some cool AI inventions, online AI experiments, and frameworks from Google that you should check out:

Infographic: Google AI Experiments

What Neural Networks See

Have you ever been curious about how AI software detects objects? This experiment from Google lets you see what a neural network sees. Using your camera, it lists every object the neural network model recognizes in view. The network can detect pillows, faces, chairs, tables, and other everyday objects, and it gives you a feel for how capable AI already is.
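The idea behind the experiment, running a camera frame through a pre-trained network and reading off the labels it predicts, can be sketched in a few lines of Python. The snippet below is only a minimal illustration: it assumes TensorFlow is installed, uses an off-the-shelf MobileNetV2 classifier rather than the experiment's own model, and "photo.jpg" is a placeholder for any image on your disk.

```python
# Minimal sketch: classify a single image with a pre-trained network, the same
# basic idea the "What Neural Networks See" experiment demonstrates live.
# Assumes TensorFlow is installed; "photo.jpg" is a placeholder path.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")   # off-the-shelf classifier, not the experiment's model

img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Top-3 guesses, analogous to the labels the experiment lists as you point your camera around.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.2%}")
```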

Quick, Draw!

Google has employed a neural network in this experiment to figure out what you have drawn. The network is not always right, but it can make fairly accurate predictions about many drawings; in our testing it recognized most of them. The interactive game asks you to draw different things within 20 seconds, and the network's guesses are read aloud as it processes your sketch.
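The drawings collected by this experiment have been released as an open dataset, so you can train a toy sketch recognizer of your own. The sketch below is a simplified stand-in: it assumes you have downloaded two of the 28x28 numpy bitmap files (for example cat.npy and banana.npy) from the public Quick, Draw! dataset, and it fits a basic scikit-learn classifier rather than the network the game actually uses.

```python
# Toy sketch recognizer on the public Quick, Draw! dataset (28x28 grayscale bitmaps).
# Assumes cat.npy and banana.npy were downloaded locally from the dataset's
# numpy_bitmap export; a linear classifier stands in for the game's real model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

cats = np.load("cat.npy")[:2000] / 255.0        # (N, 784) flattened drawings
bananas = np.load("banana.npy")[:2000] / 255.0

X = np.vstack([cats, bananas])
y = np.array([0] * len(cats) + [1] * len(bananas))   # 0 = cat, 1 = banana

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
print("guess for first test drawing:", ["cat", "banana"][clf.predict(X_test[:1])[0]])
```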

A.I. Duet

A.I. Duet applies machine learning to a piano that responds to you as you play. The experiment was built by Google's creative team using machine learning frameworks such as TensorFlow, Tone.js, and open-source tools from the Magenta project. The idea is simple: you play a few notes on the keyboard, and the model answers with a melody of its own, letting you improvise together.
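Magenta's melody model is a neural network, but the call-and-response loop itself is easy to prototype. The snippet below is a deliberately simple stand-in using nothing beyond the Python standard library: it learns note-to-note transitions from the phrase you "play" (as MIDI note numbers) and improvises a short reply, which is roughly the interaction A.I. Duet wraps around a much smarter model.

```python
# Simplified call-and-response sketch: a first-order Markov chain over MIDI notes
# stands in for the Magenta melody model that A.I. Duet actually uses.
import random
from collections import defaultdict

def respond(player_notes, length=8, seed=None):
    """Learn note transitions from the player's phrase and improvise a reply."""
    rng = random.Random(seed)
    transitions = defaultdict(list)
    for a, b in zip(player_notes, player_notes[1:]):
        transitions[a].append(b)

    note = player_notes[-1]              # start the reply where the player stopped
    reply = []
    for _ in range(length):
        choices = transitions.get(note) or player_notes
        note = rng.choice(choices)
        reply.append(note)
    return reply

# Example: a C-major phrase (MIDI note numbers); the "duet" answers with 8 notes.
phrase = [60, 62, 64, 65, 67, 65, 64, 62]
print(respond(phrase, seed=42))
```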

Interplay Mode

Interplay Mode takes learning from online videos to a whole new level. The AI analyzes your responses to questions posed during the video, for instance, writing a letter in Japanese. If your handwriting matches the one being taught, you move forward in the video to learn something new; otherwise, you try again until you get the pattern right. The same technique applies to other lessons, such as grammar. The experiment uses speech recognition and letter recognition to support the learning experience.

Thing Translator

This Google AI experiment lets you take pictures of things around your house or office and detects what each object is. You can then read and hear the object's name in nine different languages. It does not know every object, but it tries to make sense of whatever picture you give it. Thing Translator is useful and addictive, not just for fun but because it could power several real applications; a travel app, for example, could give people instant translations of objects in the local language.

Giorgio Cam

This Google AI project tries to create a song from the pictures you snap. Giorgio Cam uses image recognition to figure out what you photographed and turns the labels into song lyrics, which are then sung in a rock-star voice over a jazzy backing track. The model is still learning and can mis-recognize some objects, which often makes the songs hilariously absurd, but you can still see the power of AI through it.

Talk to Books

Google Research built Talk to Books to teach AI the art of human conversation. It is an experimental way to interact with books: you can make statements or ask questions, and the AI searches all 100,000 books in the index from the Google Books database, tracking down sentences that read like conversational responses. It is designed to cut through the noise and give users a reliable answer backed by published literature.
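Under the hood, this kind of matching is sentence-level semantic search rather than keyword search. The sketch below illustrates the idea with the open sentence-transformers library and a handful of invented "book sentences"; it is a miniature stand-in, not the actual index or model behind the experiment.

```python
# Miniature semantic search: embed candidate sentences and rank them against a
# conversational query, the same matching idea Talk to Books applies to 100,000 books.
# Assumes the sentence-transformers package is installed; the sentences are invented.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedding model

book_sentences = [
    "Sleep lets the brain consolidate what it learned during the day.",
    "The castle had stood on the cliff for four hundred years.",
    "Regular exercise improves both mood and memory.",
]
query = "Why do we need to sleep?"

scores = util.cos_sim(model.encode(query), model.encode(book_sentences))[0]
best = int(scores.argmax())
print(f"best match ({float(scores[best]):.2f}): {book_sentences[best]}")
```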

FreddieMeter

Want to sing like Freddie Mercury? The FreddieMeter experiment uses on-device machine learning models developed by Google Research to compare your singing with Freddie's on some of Queen's classic hits. It compares your pitch, melody, and timbre and then scores how similar your voice is to Freddie's. Your voice is captured through WebRTC and processed in the browser with the Web Audio API.
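The scoring idea, extract pitch and timbre features from two recordings and measure how close they are, can be approximated offline with the librosa library. The snippet below is a rough sketch under that assumption: the filenames are placeholders, and a simple correlation of pitch contours plus an MFCC comparison stands in for the tuned on-device models FreddieMeter actually uses.

```python
# Rough offline stand-in for FreddieMeter-style scoring: compare the pitch contour
# and average timbre (MFCCs) of two vocal recordings.
# Assumes librosa is installed; "you.wav" and "freddie.wav" are placeholder files.
import numpy as np
import librosa

def features(path, sr=16000, seconds=10):
    y, _ = librosa.load(path, sr=sr, duration=seconds)
    f0 = librosa.yin(y, fmin=80, fmax=800, sr=sr)                     # frame-by-frame pitch (Hz)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # coarse timbre summary
    return f0, mfcc

f0_a, timbre_a = features("you.wav")
f0_b, timbre_b = features("freddie.wav")

n = min(len(f0_a), len(f0_b))
pitch_similarity = np.corrcoef(f0_a[:n], f0_b[:n])[0, 1]
timbre_similarity = np.dot(timbre_a, timbre_b) / (np.linalg.norm(timbre_a) * np.linalg.norm(timbre_b))

print(f"pitch similarity:  {pitch_similarity:.2f}")
print(f"timbre similarity: {timbre_similarity:.2f}")
```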

NSynth: Sound Maker

NSynth: Sound Maker is another Google AI experiment that lets you play with new sounds created with machine learning. It is built on NSynth, a research project that trained a neural network on more than 300,000 instrument sounds. NSynth can blend sounds such as a flute and a bass into a new hybrid bass-flute sound.
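NSynth does its blending in a learned latent space, which is hard to reproduce in a few lines, but the flavor of "mix two instruments into one sound" can be faked crudely by averaging spectrograms. The snippet below is only that crude stand-in, with placeholder file names, and should not be read as how NSynth itself works.

```python
# Naive "hybrid sound" sketch: average the magnitude spectrograms of two notes and
# resynthesize with Griffin-Lim. NSynth interpolates in a learned latent space instead;
# this is only a crude illustration of blending sounds. File names are placeholders.
import numpy as np
import librosa
import soundfile as sf

sr = 16000
flute, _ = librosa.load("flute_note.wav", sr=sr)
bass, _ = librosa.load("bass_note.wav", sr=sr)

n = min(len(flute), len(bass))
S_flute = np.abs(librosa.stft(flute[:n]))
S_bass = np.abs(librosa.stft(bass[:n]))

blend = 0.5 * S_flute + 0.5 * S_bass     # simple 50/50 spectral mix
hybrid = librosa.griffinlim(blend)       # recover a waveform from the blended spectrum

sf.write("hybrid_bass_flute.wav", hybrid, sr)
print("wrote hybrid_bass_flute.wav")
```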

Semi-Conductor

Semi-Conductor uses the machine learning library PoseNet, which runs in the browser and maps out your movements through a webcam. It lets you lead your own orchestra: moving your arms changes the tempo, volume, and instrumentation of the music. As you conduct, an algorithm plays along with the score, stitching together hundreds of tiny audio clips recorded from live instruments.
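PoseNet returns a set of body keypoints per video frame; everything musical in Semi-Conductor comes from how those keypoints are mapped to tempo and volume. The small function below sketches one plausible mapping in plain Python using hypothetical keypoint values, since the real experiment does this in the browser on live webcam frames.

```python
# Sketch of the conducting logic: map pose keypoints (normalized 0..1 image coordinates)
# to tempo and volume. The keypoint values here are hypothetical; the real experiment
# gets them from PoseNet running on webcam frames in the browser.

def conduct(prev_wrist_x, wrist_x, wrist_y, min_bpm=60, max_bpm=180):
    """Higher hand -> louder; faster horizontal sweep -> faster tempo."""
    volume = 1.0 - wrist_y                                      # y grows downward, so raising your arm gets louder
    sweep_speed = min(abs(wrist_x - prev_wrist_x) * 10, 1.0)    # crude per-frame speed estimate
    tempo = min_bpm + sweep_speed * (max_bpm - min_bpm)
    return round(tempo), round(volume, 2)

# Two hypothetical frames: the right wrist sweeps quickly while held fairly high.
print(conduct(prev_wrist_x=0.30, wrist_x=0.38, wrist_y=0.25))   # -> (156, 0.75)
```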

Alto

Alto is an open-source Google AI experiment that uses machine learning to build teachable objects. ALTO stands for A Little Teachable Object, and it was created by Google Creative Lab. Anyone can fork the source code, schematics, and case designs to make their own teachable object. It runs on a Raspberry Pi with a Coral USB Accelerator, demonstrating how easily machine learning can be integrated into your next hardware project.
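The "teachable" behaviour boils down to a simple pattern: run each example through a frozen feature extractor, store the embeddings per label, and classify new inputs by comparing against what was stored. The sketch below recreates that pattern on a laptop with a MobileNetV2 feature extractor and a nearest-centroid rule; it is an illustration of the idea, not the Coral/Raspberry Pi code the project ships, and all the image paths are placeholders.

```python
# Teachable-object pattern in miniature: embed images with a frozen network, average
# the embeddings per label ("teach"), then classify new images by nearest centroid.
# Assumes TensorFlow is installed; the image paths are placeholders you supply.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing import image

extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def embed(path):
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]

# "Teach" two labels from a few example photos each (placeholder paths).
centroids = {
    "mug":   np.mean([embed(p) for p in ["mug1.jpg", "mug2.jpg"]], axis=0),
    "plant": np.mean([embed(p) for p in ["plant1.jpg", "plant2.jpg"]], axis=0),
}

def classify(path):
    v = embed(path)
    return min(centroids, key=lambda label: np.linalg.norm(v - centroids[label]))

print(classify("mystery.jpg"))   # prints "mug" or "plant"
```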

Imaginary Soundscape

Imaginary Soundscape is a web-based sound installation that layers soundscapes generated with deep learning models over Google Street View. It builds on recent developments in cross-modal information retrieval, such as image-to-audio and text-to-image matching with deep learning, and focuses on the way we unconsciously imagine the sound of a scene. The AI's soundscapes sometimes exceed expectations and sometimes ignore the cultural and geographical context, and those mismatches invite us to think about how imaginative, and how rich, our sonic surroundings really are.

The Infinite Drum Machine

This Google experiment uses machine learning to organize stacks of everyday sounds into a single, easy-to-control drum machine. The computer is given only the audio: using the t-distributed stochastic neighbor embedding (t-SNE) technique, it places similar sounds close together on a map. You can explore neighborhoods of related sounds and make beats with the built-in drum sequencer.
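You can reproduce the core trick, turn each sound into a feature vector and let t-SNE lay similar sounds near each other, with librosa and scikit-learn. The snippet below is a small sketch under the assumption that a clips/ folder holds a few dozen short WAV files; the experiment maps thousands of sounds, but the pipeline has the same shape.

```python
# Core of the Infinite Drum Machine idea: featurize each short clip (mean MFCCs) and
# project the collection to 2-D with t-SNE so similar sounds land near each other.
# Assumes librosa and scikit-learn are installed and ./clips/ holds short WAV files.
import glob
import numpy as np
import librosa
from sklearn.manifold import TSNE

paths = sorted(glob.glob("clips/*.wav"))
features = []
for path in paths:
    y, sr = librosa.load(path, sr=16000, duration=1.0)
    features.append(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1))

# perplexity must stay below the number of clips
coords = TSNE(n_components=2, perplexity=min(30, len(paths) - 1), random_state=0).fit_transform(
    np.array(features)
)

for path, (x, y) in zip(paths, coords):
    print(f"{path}: ({x:.1f}, {y:.1f})")   # 2-D map position for each sound
```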

Semantris

Semantris is a word association game backed by machine learning. Whenever you enter a clue, the AI scans all the words in play and selects the ones it finds most related. Because the model was trained on billions of examples of conversational text spanning a huge range of topics, it can make many different kinds of associations.
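The "find the most related word" step is essentially nearest-neighbour search in an embedding space. The snippet below shows the same idea with gensim's pre-trained GloVe vectors rather than the conversational model Semantris actually uses; treat it as a stand-in sketch.

```python
# Word-association sketch: score each word in play against the player's clue using
# pre-trained GloVe vectors (gensim), a stand-in for Semantris's conversational model.
# The first call to api.load() downloads the vectors (roughly 65 MB).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

words_in_play = ["piano", "ocean", "rocket", "bread", "winter"]
clue = "music"

ranked = sorted(words_in_play, key=lambda w: vectors.similarity(clue, w), reverse=True)
for w in ranked:
    print(f"{w}: {vectors.similarity(clue, w):.2f}")   # "piano" should come out on top
```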

FontJoy

The FontJoy experiment lets you generate font combinations with deep learning. You can produce a new pairing with one click and lock any fonts you want to keep. The goal of font pairing is to choose fonts that share an overarching theme while still offering pleasing contrast.

Conclusion

Beyond the experiments above, there are many other Google AI experiments you can explore. Submissions are always open, and new ones keep being added. They give you some perspective on how far AI and ML have come and showcase current work in the field. Because the source code is provided alongside the demos, you can even reuse these experiments in your own projects.
