In this way, Baxter is able to notice and correct his mistakes.
His owner can then direct the machine to perform a different task using subtle hand gestures.
Project supervisor Daniela Rus said the aim was to “move away from a world where people have to adapt to the constraints of machines” and instead “develop robotic systems that are a more natural and intuitive extension of us”.
During the course of the test, the robot moves the equipment to one of three possible targets on a mock plane.
The team demonstrated that the system works on people Baxter has never seen before, meaning companies could deploy him in real-world settings without training him for each new user.
The team makes use of electroencephalography (EEG) to monitor brain activity and electromyography (EMG) to monitor muscle activity.
Users have a series of electrodes placed on their scalp and forearm to detect impulses.
PhD candidate Joseph DelPreto, who was lead author on the paper, explained: “By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures along with their snap decisions about whether something is going wrong.
“This helps make communicating with a robot more like communicating with another person.”
In isolation, neither EEG nor EMG signals are entirely reliable – but by merging the two, the team arrived at a more robust form of bio-sensing.
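The fusion idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration – the function name, the equal weighting, and the threshold are invented here, not taken from the team's published method – but it shows why two noisy channels combined can be more dependable than either alone:

```python
# Hypothetical sketch of EEG/EMG fusion: neither channel alone is trusted,
# but agreement between the two raises confidence enough to act.

def fuse_signals(eeg_error_prob: float, emg_gesture_prob: float,
                 threshold: float = 0.6) -> bool:
    """Return True if the fused evidence suggests the user flagged a mistake.

    eeg_error_prob:   confidence that an error-related signal appeared
                      in the EEG stream (illustrative name).
    emg_gesture_prob: confidence that a corrective hand gesture appeared
                      in the EMG stream (illustrative name).
    """
    # Equal-weight average (an assumed scheme): either channel can be
    # noisy, so a single moderate reading should not trigger a correction.
    fused = 0.5 * eeg_error_prob + 0.5 * emg_gesture_prob
    return fused >= threshold

# A strong EEG reading plus a moderate gesture clears the bar...
print(fuse_signals(0.9, 0.5))   # True: fused = 0.70
# ...while weak readings on both channels do not.
print(fuse_signals(0.4, 0.3))   # False: fused = 0.35
```

In practice a real system would use trained classifiers per channel rather than a fixed weighted average, but the principle – requiring the two streams to corroborate each other – is the same.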
CSAIL director Daniela Rus, who supervised the work, said: “This work combining EEG and EMG feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before using only EEG feedback.
“By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.”
The team envisages the system being useful for the elderly, as well as for workers with speech difficulties or limited mobility.
Their research will be presented at the Robotics: Science and Systems (RSS) conference in Pittsburgh next week.