WHEN MACHINES HAVE IDEAS
Ben Vigoda, Gamalon’s CEO, spoke recently at MIT Technology Review EmTech along with Pedro Domingos from the University of Washington, Noah Goodman from Stanford, Ruslan Salakhutdinov from Apple/CMU, Ilya Sutskever from OpenAI, Maya Gupta from Google, and Eric Horvitz from Microsoft.
He describes how deep learning and other state-of-the-art machine learning techniques are like training a dog to provide a desired response to a stimulus – ‘ring the bell, give some food’, ‘ring the bell, give some food’, and so forth – except that with today’s machine learning you typically need to repeat this kind of labeled input/output pair 10,000 times.
By contrast, to teach a human we would just say, ‘This is a dinner bell; when I ring it I am going to serve you some food’ – you would insert that idea directly into their mind, between where the stimulus comes in and where the response goes out, by talking to them. The person can still learn from stimulus-response experiences, but you can also teach them by communicating ideas to them. This is how Gamalon’s Idea Learning works.
RECOGNIZING DRAWINGS: DEEP LEARNING VERSUS IDEA LEARNING
In this video, we compare Gamalon’s new Idea Learning technology with state-of-the-art deep learning while playing Pictionary: we draw something, and the system must guess what we drew.
We show that the Gamalon Idea Learning system learns from only a few examples, not millions. It can learn using a tablet processor, not hundreds of servers. It learns right away while we play with it, not over weeks or months. And it learns from just one person, not from thousands. Someday soon you might even have your own private machine intelligence running on your mobile device!
You have pictures in your imagination, but it is difficult to show your imagination to other people. We imagined this app during one of our Gamalon company hackathons, to make it easier to show other people what you meant to draw, instead of what you did draw.
A collaborative drawing system of this kind would quickly learn from all of the people using it, and rapidly become surprisingly helpful. It could offer autocomplete suggestions for your sketches, help fill in details or surrounding context, or clean up and enhance your drawings. If you are designing a building or a machine, it could do a hierarchical 3-D parts search and find similar parts to fit your needs. If you are creating a business document, sketching a bar chart or pie chart could make a full-featured version instantly pop into existence. With its knowledge of how parts work together, the system could even add the laws of physics to this sketching world, so that anything you draw instantly becomes animated and interactive.
Unlike deep learning, which learns by adjusting millions of numerical parameters, the Idea Learning system learns by (re)writing human-readable code, so we can examine and edit the new concepts that it learns. If one person taught the system something we don’t want it to know, we can simply remove the code that we don’t like.
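To make that contrast concrete, here is a minimal sketch of what a human-readable, compositional concept representation might look like. The concept names and data structures below are our own invention for illustration – they are not Gamalon’s actual representation – but they show the key property: complex concepts are defined in terms of simpler ones, and an unwanted concept can be removed by simply deleting its definition.

```python
# Hypothetical illustration: learned concepts stored as readable,
# compositional definitions rather than opaque numerical parameters.
# All concept names here are invented for this example.

# Primitive concepts are basic strokes; composite concepts build on them.
concepts = {
    "circle":   {"kind": "primitive", "parts": []},
    "triangle": {"kind": "primitive", "parts": []},
    "face":     {"kind": "composite", "parts": ["circle", "circle", "circle"]},
    "cat":      {"kind": "composite", "parts": ["face", "triangle", "triangle"]},
}

def expand(name):
    """Recursively expand a concept into the primitive strokes it is made of."""
    concept = concepts[name]
    if concept["kind"] == "primitive":
        return [name]
    primitives = []
    for part in concept["parts"]:
        primitives.extend(expand(part))
    return primitives

# Because the representation is readable, we can inspect what was learned:
print(expand("cat"))  # the face's circles plus two triangle ears

# ...and "unlearn" an unwanted concept by deleting its definition.
del concepts["cat"]
```

Deleting an entry removes exactly one learned idea while leaving everything built from other concepts intact – something that has no clean analogue when knowledge is spread across millions of shared numerical weights.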
Going beyond this drawing application, we are starting to teach the system to read, first by building up letters, then words, and then sentences. Language is a much more complex setting, but as with drawing, we expect that the system will learn more and more complex concepts made out of simpler ones. Who knows where it can take us?
TEDx BOSTON: WHEN MACHINES HAVE IDEAS
Our CEO, Ben Vigoda, gave a talk at TEDx Boston 2016 called “When Machines Have Ideas” that describes why building “stories” (i.e. Bayesian generative models) into machine intelligence systems can be very powerful.
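As a rough illustration of what a generative “story” means (this toy model is our own, not taken from the talk): the model describes how a hidden concept produces observable features, and Bayes’ rule inverts that story to infer the concept from the data. The concepts, features, and probabilities below are all assumed for the example.

```python
# Toy Bayesian generative "story" (our own illustration): a hidden
# concept generates observable features; Bayes' rule inverts the
# story to recognize the concept from what was observed.

# Prior belief over concepts before seeing anything.
prior = {"cat": 0.5, "house": 0.5}

# Likelihood: how probable each concept is to produce each feature.
likelihood = {
    "cat":   {"pointy_ears": 0.9, "door": 0.05},
    "house": {"pointy_ears": 0.1, "door": 0.9},
}

def posterior(features):
    """P(concept | features) via Bayes' rule, assuming independent features."""
    scores = {}
    for concept, p in prior.items():
        for f in features:
            p *= likelihood[concept][f]
        scores[concept] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

beliefs = posterior(["pointy_ears"])
print(max(beliefs, key=beliefs.get))  # the most probable concept
```

Because the story runs forward from concept to observation, a single informative feature can shift the posterior sharply – one reason generative models can learn from far fewer examples than purely stimulus-response training.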
TALKING MACHINES INTERVIEW WITH BEN VIGODA
Listen to Katherine Gorman interview our CEO, Ben Vigoda, on Talking Machines.