
ROS & TurtleBot3: A Dive into Robotics Middleware


Service robot applications are a major part of many industries in 2020. Back in 2007, researchers at the Stanford Artificial Intelligence Laboratory started building an ecosystem that was further developed into what is today known as the Robot Operating System, or ROS for short [1][2]. It consists of various tools, libraries and features that ease the process of developing robot applications. This article explains how to use the TurtleBot3, covers the fundamentals of ROS and shares some practical experiences with SLAM and autonomous navigation.

TurtleBot3

TurtleBot3 is part of the TurtleBot series and is known as a ROS standard platform robot. The first TurtleBot was invented at Willow Garage, as was the Robot Operating System [3]. It was designed to be easy to assemble and use, programmable and quite cheap in comparison to other robot hardware platforms [4]. The TurtleBot3 consists of a four-layered, burger-like architecture, where each plastic layer is responsible for a specific functionality; this also makes the robot extensible with further modules not provided in the standard bundle. You can see the assembled TurtleBot3 in figure 1. The first layer carries a 360-degree LiDAR, the second a single-board computer (Raspberry Pi 3 Model B) and a USB interface (USB2LDS), the third an open source control module for ROS (32-bit ARM Cortex-M7), and the last layer holds two robot actuators (Dynamixel), an 11V Li-Po battery, two wheels and tires. Assembling the robot takes about five hours of work, as the provided documentation is quite precise [5]. Alternatives with similar capabilities and pricing are e.g. the E-puck2 mobile robot, the Sparki robot, the iCub or the ArduPilot.

Figure 1: Assembled TurtleBot3

As an entry-level mobile robot, the TurtleBot3 offers many features of the larger robotic platforms used in industrial scenarios. It is therefore a great opportunity to build internal competence and experience for industrial use cases. With the standard TurtleBot components you can perform simultaneous localization and mapping (SLAM) or autonomous navigation with real-time obstacle avoidance, control the TurtleBot3 remotely from a laptop or an Android smartphone using the ROS Control application, or even evaluate machine learning approaches such as reinforcement learning to get an agent to a certain goal while avoiding dynamic obstacles.

For the evaluation of SLAM and autonomous navigation we used a Lenovo ThinkPad T450 running Ubuntu 16.04 LTS with the ROS distribution Kinetic, together with the TurtleBot3 Burger model. On the Raspberry Pi we installed Ubuntu MATE 16.04 and ROS Kinetic. Our main ROS setup consisted of the roscore node collection, rviz, rqt_reconfigure, and the turtlebot3_bringup, turtlebot3_teleop, turtlebot3_slam, map_server and turtlebot3_navigation packages. The following sections briefly discuss the concepts and share our practical experiences.

ROS: Robot Operating System

Figure 2: About ROS [6]

ROS is based on efforts by researchers at the Stanford Artificial Intelligence Laboratory and was further developed at Willow Garage [7]. From there, ROS spread around the world of robotics and became an important open source robotic middleware. The Robot Operating System's primary goal is to simplify code reuse when developing applications for service robots. It provides tools and libraries for manipulation, simulation and visualization of robotic hardware, inter-process communication, package management, over eleven thousand unique packages and much more. As the ROS core packages are licensed under the BSD license and most others carry similarly permissive licenses, they are a good choice for commercial product development.

ROS was designed to be modular and fine-grained, as most robotic systems are divided into multiple components. Hence the general ROS architecture has a graph structure consisting of many ROS nodes, as shown in figure 3, connected through edges called topics. A node represents a single running process, and a topic can be understood as a stream of messages. Here ROS takes advantage of the publish-subscribe pattern, which might be familiar to you if you have already come into contact with machine-to-machine communication protocols like MQTT. The ROS Master, see figure 3, is a unique process that provides registration and naming to the other nodes, so that peer-to-peer communication between them is possible. The ROS Master also maintains a parameter server and keeps track of which nodes subscribe to or publish a specific topic. Nodes use the parameter server to store or look up information at runtime.
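
To make the pattern concrete, here is a minimal sketch of a talker and a listener node written with rospy, the Python client library shipped with ROS Kinetic. The topic name /chatter and the 1 Hz rate are arbitrary example values, not part of our setup:

```python
#!/usr/bin/env python
# Minimal rospy publish-subscribe sketch; /chatter is an arbitrary example topic.
import rospy
from std_msgs.msg import String

def talker():
    rospy.init_node('talker')                 # register this process with the ROS Master
    pub = rospy.Publisher('/chatter', String, queue_size=10)
    rate = rospy.Rate(1)                      # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello'))     # delivered peer-to-peer to all subscribers
        rate.sleep()

def listener():
    rospy.init_node('listener')               # in practice this runs as a separate process
    rospy.Subscriber('/chatter', String,
                     lambda msg: rospy.loginfo('received: %s', msg.data))
    rospy.spin()                              # hand control to the callback loop

if __name__ == '__main__':
    talker()  # start listener() from a second process to see the messages arrive
```

Note that the Master only brokers the connection: once talker and listener have found each other, the messages flow directly between the two nodes.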

We chose ROS as it is well maintained and developed by a huge community of companies, individuals and university groups [8]. ROS is also widely used in industrial robotic applications, in contrast to other, more specialized robotic middleware. Last but not least, the huge set of features covered by the ROS ecosystem, such as computer vision, grasping, machine learning and autonomous driving, spoke for itself.

Figure 3: ROS using the publish-subscribe pattern

Use-Cases

The main goal of our project was to evaluate ROS and the TurtleBot3 for the use cases of autonomous navigation and simultaneous localization and mapping (SLAM). In the SLAM scenario we focused on the quality of the resulting map and on how lighting conditions and other factors influence the map-building process. For the navigation use case, initial pose estimation and re-localization were of particular interest. Furthermore, we investigated how the navigation process reacts to dynamic changes in the environment, e.g. obstacles or people moving, and how it performs without a static map of the environment.

Simultaneous Localization and Mapping (SLAM)

SLAM describes a problem in robotics where a robot has to compute its position on a map while creating exactly this map of an unknown environment from sensor data [9]. SLAM is an important and widely studied problem, because many further use cases build on the information collected and structured during localization and mapping. Often the places of action provide neither a static map of the environment nor a way to determine the absolute position of a mobile robot using e.g. GPS data. Time and cost constraints further make it impractical to construct a map before traversing the area and navigating from a start to a goal position.

Figure 4: Theoretical approach of an agent/robot trying to calculate its true position while moving

These two tasks therefore have to be accomplished simultaneously and with high accuracy. Commonly used are either ultrasonic or LiDAR (light detection and ranging) sensors, but there are also approaches that involve visual data from a camera, e.g. visual SLAM. The general process, as visualized in figure 4, is incremental, as no map exists initially and the robot can only perceive a part of the whole environment. It starts with the robot's position as the absolute center of the coordinate system. Step by step the robot collects information about the area and merges the collected frames into a globally consistent map. In this process the robot moves ahead and measures both known and unknown parts. This information can then be used to compute the movement and the new absolute position of the robot. The procedure is repeated until the complete area is measured. Although the movement and exact position of the robot between two measurements can never be calculated exactly, some algorithms are able to detect already-measured parts of the environment. With this so-called 'loop closing' a higher quality of the resulting map can be achieved.
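
Formally, and this is the standard probabilistic formulation from the SLAM literature rather than anything specific to our setup, the goal is to estimate the joint posterior over the robot trajectory x_{1:t} and the map m, given the observations z_{1:t} and the odometry inputs u_{1:t-1}:

p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1})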

Autonomous Navigation

For the use case of autonomous navigation we used the powerful ROS Navigation Stack, shown in figure 5, which can provide a safe path for the robot to execute. Consisting of a global_planner, global_costmap, local_planner, local_costmap and recovery_behaviors as its main components, the stack is highly configurable, and parameter fine-tuning was required to maximize performance. The stack also uses platform-specific information from an environment map, odometry and sensor sources to enrich the planning process. For the initial and continuous localization of the TurtleBot3, the adaptive Monte Carlo localization method is used via the amcl ROS package.

Figure 5: ROS navigation stack [10]

Through the move_base node we get an interface to communicate with the other running processes. The global_planner node is responsible for the global route, computed by an informed search algorithm like A* or Dijkstra, whereas the local_planner in our setup followed the Dynamic Window Approach [11]. The local_costmap provides the information for dynamic obstacle avoidance, and the global_costmap is generated by inflating the obstacles in the static map provided by the map_server. All this information is forwarded to the platform-specific base controller, which then sends velocity and control commands to the mobile base.
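
As an illustration of this interface, the following sketch sends a single navigation goal to move_base through its actionlib interface. The goal pose is a made-up example value; node and frame names follow the standard navigation stack conventions:

```python
#!/usr/bin/env python
# Sketch: send one navigation goal to move_base (the goal pose is an example value).
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()                        # block until move_base is available

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'        # goal expressed in the static map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0          # 1 m along x in the map frame (example)
goal.target_pose.pose.orientation.w = 1.0       # identity quaternion: keep the heading

client.send_goal(goal)                          # the global planner computes the route,
client.wait_for_result()                        # the local planner executes it
```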

There are also two types of recovery behaviors in case the robot gets stuck: clear_costmap_recovery, where the local_costmap is reset to the state of the global_costmap, and rotate_recovery, where the robot tries to recover by rotating 360 degrees in place.

Evaluation

The approach we took for SLAM with our TurtleBot3 was grid-based SLAM with Rao-Blackwellized particle filters [12], adaptive proposals and selective resampling. This approach, based on the research by Giorgio Grisetti, Cyrill Stachniss and Wolfram Burgard, reduces the number of particles and the required resampling steps, while limiting the risk that resampling eliminates correctly detected particles.
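
The key idea behind the Rao-Blackwellization in [12] is to factorize the SLAM posterior introduced above, so that only the trajectory has to be sampled by the particle filter while each particle's map can be computed analytically from its trajectory:

p(x_{1:t}, m \mid z_{1:t}, u_{1:t-1}) = p(m \mid x_{1:t}, z_{1:t}) \cdot p(x_{1:t} \mid z_{1:t}, u_{1:t-1})

Every particle thus carries one trajectory hypothesis together with the map built along it; adaptive proposals and selective resampling keep the number of particles and resampling steps small.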

Figure 6: Constructed two-dimensional map of a real room, made by the TurtleBot3, next to a picture of the room

Figure 7: The constructed two dimensional map in detail

We tested the SLAM process in a real physical test environment, shown in figure 6, under different lighting conditions: in the morning, at noon, at night, and additionally in a room with a complete absence of any light sources besides the photons emitted by the LiDAR (see figure 8). It worked well, and the resulting maps were needed for our autonomous navigation. The setup is easy to run and use; as a heuristic, we recommend driving more often around fine-grained structures to gain a higher accuracy of the resulting map. In general we recommend investing more time in the creation of an accurate static map, which is a great benefit for later use. Autonomous navigation works without a static map, but path-planning performance and goal-reaching accuracy suffer. The dynamic obstacle avoidance behavior was not impacted negatively by the absence of the static map.

Figure 8: Testing the TurtleBot3 in a room without an external light source


Figure 9: Equal probe distribution for the re-localization process of the TurtleBot3

Figure 10: The TurtleBot3 re-localized itself quite precisely in the same room as in figure 9

While evaluating autonomous navigation, the initial pose estimation worked well when performed manually through the GUI of the ROS RViz tool, and quite well through an automatable initial rotation behavior. Re-localization of the TurtleBot3 was also possible using the global_localization rosservice, combined either with the initial rotation behavior mentioned previously or with manually rotating the robot. Still, this process is more a heuristic than a deterministic algorithm with a predictable quality of outcome. Sometimes it worked really well, even in an unknown environment, i.e. without a static map provided; sometimes it took us four tries to estimate the pose accurately.
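
For reference, triggering this re-localization programmatically is a single service call; the global_localization service is provided by amcl and takes no arguments. A minimal sketch:

```python
#!/usr/bin/env python
# Sketch: trigger amcl's global re-localization before rotating the robot.
import rospy
from std_srvs.srv import Empty

rospy.init_node('relocalize')
rospy.wait_for_service('/global_localization')
relocalize = rospy.ServiceProxy('/global_localization', Empty)
relocalize()  # spreads the particle cloud uniformly over the map, as in figure 9
```

After the call, driving or rotating the robot gives amcl new laser readings against which the particles are weighted, so the cloud converges again as in figure 10.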

With a provided static map, the TurtleBot3 collided in 2 out of 25 test runs in our empirical setup. We let it drive 25 routes of about 10 meters: it collided once with a chrome chair leg that went undetected by the LiDAR, and once with a much faster-moving ball, as the reaction window was too small. Figure 11 shows the same room as figure 6, but here constructed by the TurtleBot3 using the OpenSLAM gmapping algorithm [13].

Figure 11: Overview of the digital test environment we used

The collision rates without a static map were quite similar to those with a map, although the robot sometimes got stuck at a single point. This happened because the TurtleBot3 only remembers local structures within a certain radius, as can be seen in figures 12 and 13. It therefore kept re-calculating the same two paths over and over again, ending in a local deadlock.

Figure 12: TurtleBot3 chooses the lower path as the optimal one

Figure 13: Re-calculation of the optimal path to the same goal as in the previous figure

The last aspect we looked at was the dynamic re-configuration of the parameters used in the autonomous navigation process. The rqt_reconfigure ROS tool provides a GUI which displays parameters of the active ROS nodes that can be re-configured while the navigation process is running. Configurable parameters include the inflation_radius of obstacles or the cost_scaling_factor for local and global path planning; some of them are visible in figure 14. This made it easy to test different behaviors and find adequate values for our specific environment on the fly.
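
The same parameters can also be changed programmatically through the dynamic_reconfigure client API. In the following sketch the node path and the values are examples; both depend on the concrete costmap configuration:

```python
#!/usr/bin/env python
# Sketch: re-configure costmap parameters at runtime (names and values are examples).
import dynamic_reconfigure.client
import rospy

rospy.init_node('tune_navigation')
client = dynamic_reconfigure.client.Client(
    '/move_base/global_costmap/inflation_layer', timeout=5)
client.update_configuration({
    'inflation_radius': 0.4,       # how far obstacles are inflated, in meters
    'cost_scaling_factor': 5.0,    # how quickly cost decays away from obstacles
})
```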

Figure 14: Dynamic reconfiguration of different parameters

Conclusion and outlook

Setting up the required hardware and middleware for the ROS system and the TurtleBot3 went well. The mentioned use cases could also be tested and evaluated successfully regarding performance, accuracy and feasibility. Overall we were satisfied with the results and with the current ecosystem of libraries and modules for developing service robot applications.

We did have some difficulties with the re-localization of the TurtleBot3, and a single traversal sometimes took more time due to bad configuration. But after getting more comfortable with tuning the parameters and discovering dynamic reconfiguration, those pain points could be addressed.

We have now built a stable baseline for further research and development with the Robot Operating System and for its practical realization on the TurtleBot3. In the future it should be possible to add further sources of information, such as a camera or an ultrasonic sensor, merge the collected data and achieve even better results. Below you can see a video of the TurtleBot3 traversing our office space in Munich.

Video: https://www.youtube-nocookie.com/embed/sEIHEruesBA

[1] http://www.aaai.org/Papers/Workshops/2007/WS-07-15/WS07-15-008.pdf

[2] https://spectrum.ieee.org/automaton/robotics/robotics-software/the-origin-story-of-ros-the-linux-of-robotics

[3] https://www.turtlebot.com/about/

[4] https://www.researchgate.net/publication/329164856_Open-Source_Robotics_Investigation_on_Existing_Platforms_and_Their_Application_in_Education

[5] http://emanual.robotis.com/docs/en/platform/turtlebot3/overview/

[6] https://www.ros.org/about-ros/

[7] https://pdfs.semanticscholar.org/d45e/aee8b2e047306329e5dbfc954e6dd318ca1e.pdf?_ga=2.128046796.139523203.1584016861-1142175437.1584016861

[8] http://download.ros.org/downloads/metrics/metrics-report-2019-07.pdf

[9] http://people.eecs.berkeley.edu/~pabbeel/cs287-fa09/readings/Durrant-Whyte_Bailey_SLAM-tutorial-I.pdf

[10] http://wiki.ros.org/navigation/Tutorials/RobotSetup

[11] https://www.researchgate.net/publication/3344494_The_Dynamic_Window_Approach_to_Collision_Avoidance

[12] http://www2.informatik.uni-freiburg.de/~stachnis/pdf/grisetti05icra.pdf

[13] https://openslam-org.github.io/gmapping.html
