Google AI Experiments
Below are some cool AI inventions, online AI experiments, and frameworks from Google that you should check out:
What Neural Networks See
Have you ever been curious about how AI software detects objects? This experiment from Google lets you see what a neural network sees. Using your camera, it lists all the objects the neural network model can recognize: pillows, faces, chairs, tables, and other everyday objects. It also gives you a sense of how smart AI is right now.
Quick, Draw!
Google has employed a neural network in this AI experiment to guess what you have drawn. The network is not always right, but it makes surprisingly accurate predictions for many drawings. In our testing, it recognized most of our sketches. The interactive game asks you to draw different things within 20 seconds, and the network announces its guesses out loud as it processes your strokes.
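The drawings behind this experiment are captured as pen strokes rather than pixels; the stroke format below mirrors Google's public Quick, Draw! dataset, while the normalization step is our own illustration of the kind of preprocessing a classifier might apply before guessing:

```python
# Each drawing is a list of strokes; each stroke is a pair of
# coordinate lists ([x0, x1, ...], [y0, y1, ...]), as in the
# public Quick, Draw! dataset.
def normalize_drawing(strokes):
    """Scale a drawing so it fits in a unit box anchored at (0, 0)."""
    xs = [x for stroke in strokes for x in stroke[0]]
    ys = [y for stroke in strokes for y in stroke[1]]
    min_x, min_y = min(xs), min(ys)
    scale = max(max(xs) - min_x, max(ys) - min_y) or 1.0
    return [
        ([(x - min_x) / scale for x in sx], [(y - min_y) / scale for y in sy])
        for sx, sy in strokes
    ]

# A crude two-stroke "T" shape, 100 units wide.
drawing = [([0, 100], [0, 0]), ([50, 50], [0, 80])]
normalized = normalize_drawing(drawing)
```

Normalizing this way makes the guess independent of how large you happened to draw on the canvas.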
A.I. Duet
A.I. Duet applies machine learning to a piano that responds to you as you play the notes. The experiment was built by the creative team at Google using machine learning frameworks such as TensorFlow, Tone.js, and open-source tools from the Magenta project. The idea is simple: it plays keyboard tones back at you and lets you get creative with a set of tones.
Interplay Mode
Interplay Mode takes learning from online videos to a whole new level. It lets AI analyze your response to certain prompts during the video, for instance, writing a letter in Japanese. If your handwriting matches the one being taught in the video, you move forward to learn something new; otherwise, you try again until you get the pattern right. The same technique applies to other lessons, such as grammar. The experiment uses speech recognition and letter recognition to support the learning experience.
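Google has not published the exact matching logic, but the "match the taught letter before advancing" step can be sketched as a simple overlap test between the learner's drawing and a template (the grids, threshold, and scoring function here are all our assumptions):

```python
def match_score(drawn, template):
    """Intersection-over-union of two binary grids (1 = inked pixel)."""
    inter = union = 0
    for drawn_row, template_row in zip(drawn, template):
        for d, t in zip(drawn_row, template_row):
            inter += d & t
            union += d | t
    return inter / union if union else 0.0

TEMPLATE = [  # a tiny 3x3 template for a "+" shape
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
attempt = [
    [0, 1, 0],
    [1, 1, 0],  # one pixel missing, still close enough
    [0, 1, 0],
]
# Advance the video only when the attempt is close enough to the template.
can_advance = match_score(attempt, TEMPLATE) >= 0.75
```

A real recognizer would use a trained model rather than pixel overlap, but the gating logic (score the attempt, compare to a threshold, loop until it passes) is the same.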
Thing Translator
Thing Translator lets you take pictures of things around your house or office and detects what each object is. You can then read and hear the object's name in nine different languages. It does not know every object, but it tries to make something out of whatever picture you give it. Thing Translator is useful and addictive, not only for fun but because it suggests several practical applications. A good example would be a travel app, so people can get instant translations of objects into the local language.
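The pipeline is image recognition followed by translation. A toy sketch of the lookup step is below; the tiny table and its entries are invented for illustration, since the real experiment gets both labels and translations from Google's cloud APIs:

```python
# A toy translation table; Thing Translator supports nine languages,
# but these labels and translations are hand-made stand-ins.
TRANSLATIONS = {
    "cup": {"es": "taza", "fr": "tasse", "ja": "カップ"},
    "chair": {"es": "silla", "fr": "chaise", "ja": "椅子"},
}

def translate_label(label, language, table=TRANSLATIONS):
    """Return the recognized object's name in the requested language."""
    names = table.get(label)
    if names is None:
        return None  # the recognizer made nothing of the picture
    return names.get(language)
```

In the real app the `label` would come from an image-recognition model, not a fixed dictionary, but the recognize-then-translate flow is the same.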
Giorgio Cam
This Google AI project tries to create a song from the pictures you snap with it. Giorgio Cam uses image recognition to work out what you took a picture of, then turns the image labels into song lyrics. The lyrics are sung in a rock-star voice over some kind of jazz backing track. The ML algorithm is still learning and can get some recognitions wrong, which makes for hilarious songs, but you can still see the power of AI through this experiment.
Talk to Books
Google Research built Talk to Books to teach AI the art of human conversation. It is an experimental new way to interact with books: you can make statements or ask questions, and the AI searches all 100,000 books in the Google Books index for sentences that read like conversational responses. It is designed to cut through the noise and give users a reliable response backed by published literature.
FreddieMeter
Want to sing like Freddie Mercury? FreddieMeter uses on-device machine learning models developed by Google Research to compare your singing with Freddie's on some of Queen's classic hits. It compares your pitch, melody, and timbre, then measures the similarity between your voice and Freddie's. Your voice is captured through WebRTC and processed with the Web Audio API.
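FreddieMeter's actual scoring models are more sophisticated, but the pitch-comparison part of the idea can be sketched as scoring how closely one pitch contour tracks another, frame by frame (pitches here are MIDI note numbers; the tolerance and scoring rule are our assumptions):

```python
def pitch_similarity(sung, reference, tolerance=1.0):
    """Fraction of frames whose pitch is within `tolerance` semitones."""
    if not reference:
        return 0.0
    hits = sum(
        1 for s, r in zip(sung, reference) if abs(s - r) <= tolerance
    )
    return hits / len(reference)

reference = [60, 62, 64, 65, 67]   # a rising phrase, MIDI note numbers
sung      = [60, 61, 64, 69, 67]   # one note drifts well off pitch
score = pitch_similarity(sung, reference)
```

A real system would extract these contours from audio first (which is where the Web Audio API comes in) and would also weigh melody timing and timbre, not pitch alone.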
NSynth: Sound Maker
NSynth: Sound Maker is another Google AI experiment that lets you play with new sounds created with machine learning. It grew out of NSynth, a research project that trained a neural network on more than 300,000 instrument sounds. NSynth can combine sounds such as a flute and a bass into a new hybrid bass-flute sound.
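NSynth blends instruments by interpolating between their learned embeddings rather than mixing their raw audio. The toy vectors below are invented just to illustrate that arithmetic; the real embeddings are high-dimensional and learned from audio:

```python
def interpolate(a, b, t):
    """Linear interpolation between two embedding vectors, t in [0, 1]."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

flute_embedding = [1.0, 0.0, 0.5]  # hand-made stand-in vectors
bass_embedding  = [0.0, 1.0, 0.1]

# A 50/50 "bass-flute" point between the two sounds; NSynth's decoder
# would then turn a point like this back into audio.
hybrid = interpolate(flute_embedding, bass_embedding, 0.5)
```

Because the interpolation happens in the learned space, the result is a genuinely new instrument-like sound rather than two recordings played at once.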
Semi-Conductor
Semi-Conductor uses PoseNet, a machine learning library that runs in the browser and maps out your movements through a webcam. It lets you lead your own orchestra from your browser: move your arms to change the tempo, volume, and instrumentation of the music. An algorithm plays along with the score as you conduct, using hundreds of tiny audio files from live recorded instruments.
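Once PoseNet reports where your wrists are, the experiment maps those keypoints onto musical parameters. The specific mapping below (hand height to volume, arm speed to tempo) is our illustrative guess at that step, not Google's published code:

```python
def conduct(wrist_y, wrist_speed, min_bpm=60, max_bpm=180):
    """Map pose features to music: higher hands are louder,
    faster arm movement means a faster tempo.

    wrist_y and wrist_speed are normalized to [0, 1];
    y = 0 is the top of the camera frame.
    """
    volume = max(0.0, min(1.0, 1.0 - wrist_y))
    tempo = min_bpm + (max_bpm - min_bpm) * max(0.0, min(1.0, wrist_speed))
    return volume, tempo

# Hands fairly high, moving at half speed.
volume, tempo = conduct(wrist_y=0.25, wrist_speed=0.5)
```

Clamping both inputs keeps the music stable when pose estimation jitters or a wrist briefly leaves the frame.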
Alto
Alto is an open-source Google AI experiment that uses machine learning to build teachable objects. ALTO stands for "A Little Teachable Object," and it was created by Google Creative Lab. Anyone can fork the source code, schematics, and case designs to make their own teachable object. It uses a Raspberry Pi and the Coral USB Accelerator to demonstrate how easily machine learning can be integrated into your next hardware project.
This is a web-based sound installation that generates soundscapes for Google Street View scenes with deep learning models. It builds on recent cross-modal information retrieval techniques, such as image-to-audio and text-to-image retrieval, and focuses on how we unconsciously associate sounds with places. The AI-generated soundscapes sometimes exceed expectations, and sometimes ignore the cultural and geographical context. These differences invite us to think about how imaginative, and how rich, our sonic surroundings can be.
The Infinite Drum Machine
This Google experiment uses machine learning to organize thousands of everyday sounds into a single, easy-to-control drum machine. The computer is given only the audio; using the t-distributed stochastic neighbor embedding (t-SNE) technique, it places similar sounds close together on a map. You can explore neighborhoods of similar sounds and make beats with the drum sequencer.
Semantris
Semantris is a word-association game backed by machine learning. Whenever you enter a clue, the AI looks at all the words in play and selects the ones it finds most related. Because the model was trained on billions of examples of conversational text, it covers a huge range of topics and can make many types of associations.
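The selection step can be sketched as scoring every word in play against the clue's embedding. The tiny hand-made vectors below (roughly food, water, and sky dimensions) stand in for the conversational embeddings the real game uses:

```python
# Toy stand-in embeddings; Semantris uses vectors learned from
# billions of lines of conversational text.
EMBEDDINGS = {
    "bread":  [0.9, 0.1, 0.0],
    "ocean":  [0.0, 0.9, 0.2],
    "cloud":  [0.0, 0.3, 0.9],
    "cheese": [0.8, 0.0, 0.0],
}

def most_related(clue_vector, words=EMBEDDINGS):
    """Pick the word in play whose vector best matches the clue."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(words, key=lambda w: dot(words[w], clue_vector))

# A clue like "sandwich" would embed close to the food axis.
match = most_related([1.0, 0.0, 0.0])
```

With learned embeddings the same ranking works for clues that never appear in the word list, which is what makes the game feel like it understands associations.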
FontJoy
FontJoy lets you generate font combinations with deep learning. You can create a new font pairing and lock any fonts you want to keep. The goal of font pairing is to choose fonts that share an overarching theme while providing pleasing contrast.
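That "shared theme with pleasing contrast" goal can be sketched as picking the pair of font embeddings whose distance is closest to a sweet spot: not so close they look identical, not so far they clash. The 2-D vectors and the ideal-contrast value below are invented for illustration; FontJoy's real font embeddings are learned by a neural network:

```python
import math

# Hand-made stand-in font vectors (roughly weight, serif-ness).
FONTS = {
    "Alpha Sans":  [0.2, 0.1],
    "Alpha Bold":  [0.9, 0.1],
    "Beta Serif":  [0.3, 0.9],
    "Alpha Light": [0.1, 0.1],
}

def best_pairing(fonts=FONTS, ideal_contrast=0.7):
    """Pick the pair whose embedding distance is nearest the sweet spot."""
    names = list(fonts)
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    return min(
        pairs,
        key=lambda p: abs(math.dist(fonts[p[0]], fonts[p[1]]) - ideal_contrast),
    )
```

Locking a font, as the site allows, would simply mean holding one element of the pair fixed and searching over the other.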
Aside from the experiments above, there are many other AI experiments with Google that you can explore, and since submissions are always open, new ones keep being added. They offer perspective on how far AI and ML have advanced and showcase the work being done right now. Because source code is provided alongside the demos, you can also build on these experiments in your own projects.