WASHINGTON, June 17 (Xinhua) -- A laboratory at the Massachusetts Institute of Technology (MIT) has developed an AI system that can process and learn from both visual and tactile information, according to MIT News published online on Monday.
MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) used a tactile sensor named GelSight to collect data for the AI to identify objects by touch.
The research team recorded 12,000 videos of nearly 200 objects being touched to create a dataset of more than 3 million static images.
Using this dataset and a machine learning system, the AI could learn the interactions between vision and touch in a controlled environment.
"By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge ... By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings," said Yunzhu Li, CSAIL PhD student and lead author of the research paper.
Future work will expand the dataset with information collected in more unstructured settings, so that the AI can recognize more objects and better understand scenes.