This robot learns to draw and write from photos

A new algorithm allows robots to write with a pen on paper, transcribing words or producing sketches with strokes similar to human handwriting, all from photos.

Computer scientists at Brown University in Rhode Island, USA, have created an algorithm that allows robots to reproduce drawings or handwriting with a pen or brush. The work is by Atsunobu Kotani, an undergraduate student, advised by Stefanie Tellex, an assistant professor of computer science.

The purpose of the research was to teach the robot to reproduce the human gestures used in handwriting or drawing. To do so, they trained it to identify the individual strokes so that it could duplicate them. According to Atsunobu Kotani, "by merely looking at a target image of a word or sketch, the robot can reproduce each stroke of a brush in a continuous gesture. Because of this, it is difficult to distinguish whether a robot or a human wrote it."

New prowess thanks to deep learning

In this research, the scientists used deep learning techniques, applying neural networks to analyze images of written words or sketches and then determine how to apply the gestures the system has learned. The system is based on two distinct models of the image. The first, a global model, chooses the most probable starting point for a stroke.

Once the stroke has begun, a second, local model zooms in on the image to determine the stroke's trajectory and length. When it completes this first stroke, the system returns to the global model to choose the starting point of the next stroke, and it continues to alternate between these two models until the image is complete.
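The alternation between the two models can be sketched as a simple loop. In this toy version, all names and the pixel-following heuristic are illustrative assumptions, not the researchers' actual neural networks: the "global model" just picks the first uncovered ink pixel, and the "local model" greedily follows adjacent ink until the stroke runs out.

```python
def global_model(image, visited):
    """Pick the most probable starting point: here, the first unvisited ink pixel."""
    for r, row in enumerate(image):
        for c, ink in enumerate(row):
            if ink and (r, c) not in visited:
                return (r, c)
    return None  # no ink left: the drawing is complete


def local_model(image, start, visited):
    """Follow a stroke greedily from `start`, one neighbouring ink pixel at a time."""
    stroke, pos = [start], start
    visited.add(start)
    while True:
        r, c = pos
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)]
        nxt = next((p for p in neighbours
                    if 0 <= p[0] < len(image) and 0 <= p[1] < len(image[0])
                    and image[p[0]][p[1]] and p not in visited), None)
        if nxt is None:
            return stroke  # stroke ends when no adjacent ink remains
        visited.add(nxt)
        stroke.append(nxt)
        pos = nxt


def trace(image):
    """Alternate between the two models until every ink pixel is covered."""
    visited, strokes = set(), []
    while (start := global_model(image, visited)) is not None:
        strokes.append(local_model(image, start, visited))
    return strokes


# A tiny binary "image" with two separate horizontal strokes.
image = [
    [1, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 1],
]
print(trace(image))  # two strokes, each a list of pixel coordinates
```

The key design point this mirrors is the division of labour: the global model only ever answers "where does the next stroke begin?", while the local model handles the stroke's entire path before handing control back.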

The robot differs from the usual approaches because it determines the stroke order by itself. Usually, robots must be given a minimum of prior information.

To succeed in writing, they need the stroke characteristics to be specified each time. According to Stefanie Tellex, Kotani's advisor, her student's algorithm "allows you to draw what you want, and the robot can reproduce it. It does not always manage to perform the strokes in perfect order, but it gets very close."

Results that surprised researchers

The robot's ability to reproduce new elements surprised the researchers. The deep learning algorithm was trained on Japanese characters, and their tests showed that the robot was able to reproduce the figures and gestures with an accuracy of 93%.

They then tested it with Latin characters in English, and it also achieved excellent results. "We would have been happy if it had only learned Japanese characters," said Stefanie Tellex. "But when it worked with English, we were amazed. We decided to see how far it could go."

They then asked all the robotics laboratory staff to write the word "hello" in their native language, which allowed them to test the robot on scripts including Greek, Hindi, Urdu, Chinese, and Yiddish, with good results every time.

The biggest surprise came when they presented it with a sketch of the Mona Lisa, whose reproduction by the robot left the laboratory staff incredulous. The two researchers imagine a future where robots can jot memos on Post-it notes, take notes, or draw sketches for their human collaborators.