Building social robots

A service or social robot may be able to perceive the world using built-in cameras, interact with humans through speech, and make decisions using artificial intelligence algorithms. These robots are fairly complex in design; a typical building block diagram of a social robot is shown in the following figure.

Figure 2: Block diagram of a typical social robot

The robot has sensors such as a tactile sensor, a camera, a microphone, and a touch screen, and it has actuators for movement. The actuators help the robot move its head or body. Mobile service robots may have extra motors for navigation.

Inside the software block, you can find modules for perception, which handle camera data and detect the objects of interest in a scene; speech recognition/synthesis; artificial intelligence modules; robot controller modules for controlling the actuators; and a decision-making node, which combines all the sensor data and decides what to do next. The ROS driver layer interfaces all the sensors and actuators with ROS, and the GUI can be an interactive visualization on the LCD panel.
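To make this data flow concrete, here is a minimal sketch of such a decision-making node written with rospy. The topic names /speech_input and /speech_output and the keyword-based reply logic are illustrative assumptions, not part of the actual robot software:

#!/usr/bin/env python
# Minimal sketch of a decision-making node (topic names are assumptions).
import rospy
from std_msgs.msg import String

def on_speech(msg, pub):
    # Placeholder decision logic: pick a reply based on the recognized text.
    if "hello" in msg.data.lower():
        pub.publish("Hello! How can I help you?")
    else:
        pub.publish("Sorry, I did not understand that.")

def main():
    rospy.init_node("decision_maker")
    # Reply text goes to the speech synthesis module.
    pub = rospy.Publisher("/speech_output", String, queue_size=10)
    # Recognized text comes from the speech recognition module.
    rospy.Subscriber("/speech_input", String, on_speech, callback_args=pub)
    rospy.spin()

if __name__ == "__main__":
    main()

In the real robot, the placeholder logic would be replaced by the AI modules described above, but the node structure, one subscriber feeding a decision function that publishes to the actuator or speech side, stays the same.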

In this chapter, we are going to implement the speech recognition/synthesis block together with artificial intelligence, so the robot can communicate with people using text and speech. The bot's replies should feel like a human's.

We are going to implement a simple AI chatbot using AIML (Artificial Intelligence Markup Language), which can be integrated into a social robot.
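As a preview of how an AIML chatbot can be driven from Python, the following is a minimal sketch based on the PyAIML package. The category shown (matching HELLO) and the file name greeting.aiml are only illustrative assumptions:

import aiml

# Write a tiny AIML file so the example is self-contained.
# The pattern, template, and file name are illustrative assumptions.
with open("greeting.aiml", "w") as f:
    f.write("""<aiml version="1.0.1">
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there! I am a social robot.</template>
  </category>
</aiml>""")

kernel = aiml.Kernel()
kernel.learn("greeting.aiml")   # load the AIML categories into the kernel

# Simple text loop; in the robot, this input would come from speech recognition
# and the reply would be sent to speech synthesis.
while True:
    text = input("You: ")
    if not text:
        break
    print("Bot:", kernel.respond(text))

PyAIML normalizes the input before matching, so typing "hello" matches the HELLO pattern and prints the template as the bot's reply.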

Let's see how to build the software for such an interactive robot, starting with its prerequisites.
