We’ve trained machine learning systems to identify objects, navigate streets and recognize facial expressions, but as difficult as those tasks may be, they don’t even touch the level of sophistication required to simulate, for example, a dog. This project offers an opportunity to do just that, in a rather limited capacity, of course. By observing the behavior of A Very Good Girl, this AI learned the rudiments of how to act like a dog.
Why do this? Well, although much work has been done to simulate the sub-tasks of perception, like recognizing an object and picking it up, little has been done in terms of “understanding visual data to the extent that an agent can take actions and perform tasks in the visual world.” In other words, act not as the eye, but as the thing controlling the eye.
And why dogs? Because they’re intelligent agents of sufficient complexity, “yet their goals and motivations are often unknown a priori.” In other words, dogs are clearly smart, but we have no idea what they’re thinking.
As an initial foray into this line of research, the team wanted to see if, by monitoring the dog closely and mapping its movements and actions to the environment it sees, they could create a system that accurately predicted those movements.
In order to do so, they loaded up a Malamute named Kelp M. Redmon with a basic suite of sensors: a GoPro camera on Kelp’s head, six inertial measurement units (on the legs, tail and trunk) to tell where everything is, a microphone and an Arduino that tied the data together.
They recorded many hours of activity (walking in various environments, fetching things, playing at a dog park, eating), syncing the dog’s movements to what it saw. The result is the Dataset of Ego-Centric Actions in a Dog Environment, or DECADE, which they used to train a new AI agent.
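The article doesn’t publish DECADE’s schema, but the core idea of a dataset like this, pairing each ego-centric video frame with the sensor readings captured at (nearly) the same instant, is easy to picture. Here’s a minimal, hypothetical Python sketch of one time-aligned sample and a nearest-timestamp sync; the field names and tolerance value are assumptions, not the researchers’ actual format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical shape of one time-aligned sample; field names are assumptions.
@dataclass
class DogFrame:
    timestamp: float                      # seconds since the recording started
    image_path: str                       # GoPro frame from the dog's head
    imu_orientations: List[List[float]]   # 6 IMUs (legs, tail, trunk), e.g. quaternions
    audio_path: str                       # microphone clip around this frame

def sync_streams(frames, imu_readings, tolerance=0.05):
    """Pair each video frame with the IMU reading nearest in time.

    `frames` and `imu_readings` are lists of (timestamp, payload) tuples,
    assumed sorted by timestamp. Readings more than `tolerance` seconds
    from a frame are dropped rather than mismatched.
    """
    synced, j = [], 0
    for t_frame, image in frames:
        # Advance the IMU pointer while the next reading is at least as close.
        while (j + 1 < len(imu_readings)
               and abs(imu_readings[j + 1][0] - t_frame) <= abs(imu_readings[j][0] - t_frame)):
            j += 1
        t_imu, pose = imu_readings[j]
        if abs(t_imu - t_frame) <= tolerance:
            synced.append((t_frame, image, pose))
    return synced
```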
This agent, given certain sensory input, say a view of a room or street, or a ball flying past it, was to predict what a dog would do in that situation. Not to any serious level of detail, of course, but even just figuring out how to move its body, and where to move it, is a fairly major task.
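The article doesn’t detail the agent’s architecture, but a common recipe for this kind of “video in, next movement out” problem is a CNN frame encoder feeding a recurrent network. Below is a plausible PyTorch sketch, assuming each IMU’s movement is discretized into a small set of action classes; the class count, layer sizes and backbone choice are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

class ActLikeADog(nn.Module):
    """Sketch of a "predict what the dog does next" model: a CNN encodes
    each ego-centric frame, an LSTM consumes the frame sequence, and a
    head predicts a movement class for each of the six IMUs. Sizes and
    the discretization into `num_actions` classes are assumptions."""

    def __init__(self, num_imus=6, num_actions=8, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()              # 512-d feature per frame
        self.encoder = backbone
        self.temporal = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_imus * num_actions)
        self.num_imus, self.num_actions = num_imus, num_actions

    def forward(self, frames):                   # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.temporal(feats)
        logits = self.head(out[:, -1])           # predict the next movement
        return logits.view(b, self.num_imus, self.num_actions)
```

Training such a model is then ordinary classification: cross-entropy between the predicted class per IMU and the movement the real dog actually made in the next instant.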
“It learns how to move the joints to walk, learns how to avoid obstacles when walking or running,” explained Hessam Bagherinezhad, one of the researchers, in an email. “It learns to run for the squirrels, follow the owner, track the flying dog toys (when playing fetch). These are some of the basic AI tasks in both computer vision and robotics that we’ve been trying to solve by collecting separate data for each task (e.g. motion planning, walkable surface, object detection, object tracking, person recognition).”
That can produce some fairly complex data: for example, the dog model needs to know, just as the dog itself does, where it can walk when it needs to get from here to there. It can’t walk on trees, or cars, or (depending on the house) couches. So the model learns that as well, and this can be deployed separately as a computer vision model for finding out where a pet (or small legged robot) could get to in a given image.
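Standing alone, that walkability idea amounts to per-pixel binary segmentation: label each pixel of an image as steppable or not. The real model’s design isn’t described in the article; this is only a toy encoder-decoder sketch of the task.

```python
import torch
import torch.nn as nn

class WalkabilityNet(nn.Module):
    """Toy walkable-surface model: per-pixel binary segmentation, 1 where
    a dog (or small legged robot) could step, 0 elsewhere (trees, cars,
    furniture). Purely illustrative; not the researchers' architecture."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # downsample x2
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # downsample x2
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),     # walkability logits
        )

    def forward(self, image):       # image: (batch, 3, H, W), H and W divisible by 4
        return self.net(image)      # (batch, 1, H, W) logits; sigmoid gives probabilities
```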
This was just an initial experiment, the researchers say, with successful but limited results. Others may consider bringing in more senses (smell is an obvious one) or seeing how a model produced from one dog (or many) generalizes to other dogs. They conclude: “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”