One of Facebook’s AI projects involves getting a robotic arm to use “curiosity” to learn better.


James Martin/CNET

Inside a small lab at Facebook’s Menlo Park, California, headquarters, a black-and-red robotic arm moves back and forth but struggles to land its gripper in a spot researchers want it to hit.

First, the arm moves too far to the right. Then, it goes too far to the left. The arm, which researchers call Pluto, acts like a baby figuring out how to move its joints for the first time.

Though the arm doesn’t move to the correct spot at first, the artificial intelligence powering it encourages behavior that helps the robot learn more about itself and its environment. The robot gets a “reward” — think of it as a digital thumbs-up in the software — whenever it takes actions that’ll help it reach its goal. Facebook scientists found the AI learned faster when exploration was encouraged.
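The article doesn’t describe Facebook’s actual algorithm, but the idea of rewarding exploration can be sketched with a simple, hypothetical count-based “curiosity bonus”: on top of the task reward for getting close to the goal, the arm earns extra reward for visiting gripper positions it hasn’t tried before, so novel actions look more attractive early on.

```python
import numpy as np

def task_reward(position, goal):
    """Hypothetical task reward: higher when the gripper is nearer the goal."""
    return -np.linalg.norm(position - goal)

class CuriosityBonus:
    """Count-based novelty bonus: rarely visited states pay more.

    This is an illustrative sketch, not Facebook's actual method.
    """
    def __init__(self, bin_size=0.5):
        self.bin_size = bin_size  # coarseness of the state discretization
        self.counts = {}          # visit counts per discretized state

    def __call__(self, position):
        # Discretize the continuous position into a grid cell
        key = tuple(np.round(position / self.bin_size).astype(int))
        self.counts[key] = self.counts.get(key, 0) + 1
        # Bonus shrinks as the same cell is revisited
        return 1.0 / np.sqrt(self.counts[key])

goal = np.array([1.0, 0.5])
bonus = CuriosityBonus()

pos = np.array([0.0, 0.0])
first = task_reward(pos, goal) + bonus(pos)   # novel state: full bonus
second = task_reward(pos, goal) + bonus(pos)  # revisit: smaller bonus
assert second < first
```

Because the bonus decays with repetition, the combined reward nudges the arm toward joint configurations it hasn’t explored yet, which is one common way to encode “curiosity” in reinforcement learning.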


“It’s going to use this data to update its model about itself,” Franziska Meier, a Facebook research scientist who focuses on AI, said during a recent demo. “This exploration is guided by its curiosity.”

Facebook’s robotics research isn’t part of a broader effort to build a new class of products, as it did with its Portal video chat device or Oculus virtual reality headsets. The social media giant sees value in training robots because it could help the tech industry improve the overall quality of AI, which gives computers flexibility similar to the human brain’s ability to observe, learn and act. It could also help Facebook advance the AI that it already uses to flag offensive content, such as hate speech, spam and nudity. AI helps Facebook display posts you’re more likely to care about and makes it possible for the tech giant to describe photos for the blind.

The social network has also experienced the limitations of AI for corralling its sprawling platform. In one horrific instance, Facebook’s AI failed to automatically detect a video that was livestreamed by a terrorist who murdered 51 people at two mosques in New Zealand. The video then spread to other social media sites.

“The better algorithms that you develop might be used in other ways internally within Facebook and the whole society,” said Roberto Calandra, a research scientist at Facebook. The social network has two labs working on robotics research: this one in Menlo Park and another in Pittsburgh.

Facebook AI researchers in Menlo Park with Daisy, a hexapod robot learning to walk. From left: Roberto Calandra, research scientist; Franziska Meier, research scientist; Yixin Lin, research engineer; Omry Yadan, software engineer; Akshara Rai, research scientist.


Facebook

Using robots made by other companies, Facebook is exploring ways that could help a machine learn without so much hand-holding. AI-powered machines need a lot of data to complete a task that may seem simple to a human, like identifying a dog in a photo. That training data often needs to be labeled by humans so the machine knows what’s in the images it’s being fed.

Teaching a machine, especially a robot, to learn something new is challenging because there often isn’t much training data available, Meier said. AI-powered robots, like babies, learn through trial and error. On top of that, the real world can be unpredictable, and machines have to adapt to new situations.

“If you think about how we humans learn, we learn by interacting with the world,” Meier said. “We try something out, we observe what happens, and then use that information.”
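The try-observe-update loop Meier describes can be sketched in a few lines. In this illustrative example (the variable names and linear model are assumptions, not Facebook’s code), a robot doesn’t know how far a motor command actually moves it; by issuing random commands, observing the outcomes, and refitting a simple model after each trial, its estimate of its own dynamics steadily improves.

```python
import numpy as np

rng = np.random.default_rng(0)
true_gain = 0.8  # unknown to the robot: how far each unit of command moves it

commands, outcomes = [], []
estimate = 0.0
for trial in range(50):
    cmd = rng.uniform(-1, 1)                        # try something out
    moved = true_gain * cmd + rng.normal(0, 0.01)   # observe what happens
    commands.append(cmd)
    outcomes.append(moved)
    # Use that information: least-squares fit of the robot's self-model
    x, y = np.array(commands), np.array(outcomes)
    estimate = float(x @ y / (x @ x))

# After enough trial and error, the model closely matches reality
assert abs(estimate - true_gain) < 0.05
```

Real robot dynamics are nonlinear and high-dimensional, but the principle is the same: every interaction with the world is data for refining the robot’s model of itself.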

Facebook’s AI researchers are focusing on getting machines to learn independently.

They’re currently trying to get a six-legged robot to teach itself to walk within hours.

Outside the lab, they showed me a demo of what they hope will happen if they’re successful. A spidery-looking, black-and-red robot nicknamed Daisy slowly walks forward on a concrete floor before stopping at a table. They’re hoping it will use what it learns to walk on other surfaces, such as grass or sand, which could be more challenging for the robot to navigate.

Working with University of California, Berkeley, researchers, Facebook also developed a way for robots to learn through touch without task-specific training data. Robots were able to roll a ball, move a joystick and identify the right face of a 20-sided die.


This red button inside Facebook’s research lab will disable the robotic arm.


James Martin/CNET

“Imagine putting a key into a keyhole,” Meier said. “You might use vision to know exactly where the keyhole is, but you’re actually using the feeling in your hands.”

The thought of building robots that can learn by themselves might conjure up fears about a future in which machines destroy humans. The researchers, however, don’t seem concerned about such a doomsday scenario and poke fun at the idea. An illustration tacked to the outside of the lab reads “Viva La Robolution” and there’s a red button to disable the robotic arm.

Calandra said there’s a joke in the robotics community about a student running out of the lab saying that the robot went crazy. The professor responds by asking the student whether he closed the door so the robot couldn’t get out.

“Even simple tasks such as opening a door are extremely complicated at the moment,” he said.

Originally published May 20, 3:30 a.m. PT
Update, 11:24 a.m. PT: Adds detail about the Daisy robot.
