by
James Anderson (jra798)
Adam Honse (amhb59)
A PROJECT REPORT
Presented to the Electrical and Computer Engineering Faculty of the
MISSOURI UNIVERSITY OF SCIENCE AND TECHNOLOGY
In Partial Fulfillment of the Requirements for
COMPUTER ENGINEERING SENIOR PROJECT II
May 2012
Cost: $500
Advisor
Dr. Kurt Kosbar
Instructor
Dr. Donald Wunsch
Executive Summary
The project detailed in this document is an attempt to create an autonomous robot that is able to map
indoor environments. This robot is based on an omni-directional platform using the Microsoft Kinect depth and
color camera as its primary sensor. Methods are detailed that attempt to condense three dimensional point
clouds generated by the Kinect down to meshes with color images applied as textures. The robot is able to au-
tonomously determine the best areas to map and is able to gather a complete two dimensional model of any
indoor environment.
Table of Contents
Executive Summary ..................................................................................................................................................1
0.0 - Project Team.....................................................................................................................................................3
0.1 - James Anderson........................................................................................................................................3
0.2 - Adam Honse .............................................................................................................................................3
1.0 - Introduction......................................................................................................................................................4
2.0 - Project Objectives.............................................................................................................................................6
2.1 - Objective 1 - The Platform........................................................................................................................6
2.2 - Objective 2 - Mapping Software...............................................................................................................6
2.3 - Objective 3 - Navigation Software............................................................................................................6
3.0 - Project Specifications .......................................................................................................................................6
3.1 - Objective 1 - The Platform........................................................................................................................6
3.2 - Objective 2 - Mapping Software...............................................................................................................7
3.3 - Objective 3 - Navigation Software...........................................................................................8
4.0 - Design...............................................................................................................................................................9
4.1 - Platform Structural and Mechanical ........................................................................................................9
4.2 - Electrical ................................................................................................................................................ 10
4.3 - Software ................................................................................................................................................ 13
5.0 - Experimental Results..................................................................................................................................... 20
6.0 - Timeline and Deliverables ............................................................................................................................. 22
6.1 - Original Schedule................................................................................................................................... 22
6.2 - Final Schedule........................................................................................................................................ 22
7.0 - Budget ........................................................................................................................................................... 24
Appendix A - Itemized Budget............................................................................................................................... 25
Electronics ..................................................................................................................................................... 25
Structural....................................................................................................................................................... 26
Appendix B - Interface Commands........................................................................................................................ 27
Appendix C - Works Cited...................................................................................................................................... 28
0.0 - Project Team
The project team consists of two undergraduates in Computer Engineering at the Missouri University of Science
and Technology.
- James Anderson – High Level Programming & Mechanical Construction
- Adam Honse – Low Level Programming & Circuit Construction
Bios of the team members along with the descriptions of their roles can be found below.
Figure 1: Team Members (from left to right) Adam Honse and James Anderson
James Anderson is an undergraduate senior in Computer Engineering at the Missouri University of Sci-
ence and Technology. He has been an active member of the university’s Robotics Competition Team for the
past three years and has held the elected positions of President, Treasurer and Computing Division Lead. He
has played a large part in the construction and programming of the team’s autonomous robot which competes
in the Intelligent Ground Vehicle Competition each year. Through his participation on the team and course
work he has gained experience in the programming and construction of autonomous robots.
Adam Honse is an undergraduate senior in Computer Engineering at Missouri University of Science and
Technology. He has been a member of the Missouri S&T Robotics Competition Team for four years, taking the
position of Electrical Division Lead in 2011. He also has experience with embedded and microcontroller appli-
cations and has posted many personal projects on the Internet. He has been interested in robotics since high
school, participating in several LEGO and Vex competitions before joining the S&T team. He has experience in
electronic circuit design, circuit board design, and PC/microcontroller communication and integration.
1.0 - Introduction
The team’s goal for this project was to design, build and program a robot with the ability to autono-
mously map indoor environments. The mapping of indoor environments has many useful applications from
security to mining to rescue and has the potential to offer detailed views of remote environments. Recent ad-
vancements in processing and sensor technologies have the potential to greatly increase the speed and detail
of mapping. The team's hope was to create an open source robot that could be used as a test platform for fu-
ture development of two dimensional and three dimensional mapping.
Figure 2: Microsoft Kinect Sensor
In the previous year, a new sensor named the Microsoft Kinect was made available on the general market. The Kinect is a relatively inexpensive RGB-D (Red, Green, Blue, Depth) sensor that captures both a color
and depth image of objects in its frame. A picture of a Microsoft Kinect may be seen in Figure 2. The Kinect of-
fers a detailed view of its environment making it an ideal sensor for indoor mapping and is the primary sensor
on the team’s robot.
The low cost of this sensor has spawned a new breed
of robots classified as hobby robots. These robots are de-
signed to be cheap and easy to build, making good platforms
for creating and testing new robotics software. The forerun-
ner of this class of robots is the TurtleBot which can be seen in
Figure 3. The TurtleBot is open source and was designed by
Willow Garage [8]. The robot makes an ideal platform for all
levels of robotics and is designed to be very inexpensive at
just under $950.
TurtleBot is completely open source and its software
is based in ROS (Robot Operating System). ROS provides a
common software architecture for robots that allows for code that can be run on multiple hardware platforms.
ROS gives programmers a robust framework and a large number of tools to aid in robotics development.
Figure 3: TurtleBot
While the TurtleBot platform is very well designed our team wanted to try to improve it. One of the
limiting factors of the TurtleBot is its base. The TurtleBot hardware is mounted on top of the iRobot Create, a tank drive platform adapted from the Roomba vacuum cleaner. While the Create is a good mass production solution it has
the limitations of a tank drive platform and is somewhat expensive.
By combining the Microsoft Kinect with an omni-
directional base, the team has created TurtleBot 360, a ro-
bust holonomic drive alternative to the TurtleBot. The team
was able to reduce the cost of this type of platform while
adding an additional degree of freedom to its motion. The
robot is able to use much of the same software as the Tur-
tleBot and can autonomously map indoor environments.
In addition to two dimensional mapping the team
made several advancements in the field of three dimensional
mapping. The Kinect sensor offers the unique ability to gath-
er depth information as well as color. While some advance-
ments have been made in three dimensional mapping, they
usually store more data than is needed causing a lot of extra
processing to store and display maps. The team has developed and partially implemented a technique that at-
tempts to compress incoming data and then correlate it with previous data. While it is still being implemented,
if successful the software will greatly reduce the
amount of resources needed to create three di-
mensional models of environments.
TurtleBot 360 provides a cheaper alterna-
tive to the TurtleBot while adding mobility. The
robot is able to use all the same software plus
some of its own. The team’s three dimensional
mapping research has also yielded some promis-
ing results. Someday the team hopes that Turtle-
Bot 360 will be able to surpass its predecessor in
both mobility and capabilities.
Figure 5: TurtleBot 360 User Interface
Figure 4: Completed TurtleBot 360
2.0 - Project Objectives
Our project was broken down into three objectives that could be approached independently.
2.1 - Objective 1 - The Platform

Build an omni-directional robot platform with computer control and a ROS interface. This platform fea-
tures a triangular, 3 omni-wheel drive train and can move in any direction as well as rotate about its center.
The platform has a laptop on-board to run navigation software. An on-board sealed lead-acid battery provides
power for the motor controllers and Kinect sensor.
2.2 - Objective 2 - Mapping Software

Design a software platform that can build maps of an indoor environment. Two software platforms are
used, one produces two dimensional maps and the other produces a three-dimensional mesh of the environ-
ment. The software operates in real-time and the map data improves as more data is gathered. The data is
made available to navigation software immediately.
2.3 - Objective 3 - Navigation Software

Design a software platform that can autonomously navigate the robot through an indoor environment
and avoid obstacles given the Kinect camera data and any existing map data. The software must analyze the
given input data to determine obstacles, and then it must plan paths around obstacles while maintaining a
general direction.
3.0 - Project Specifications

Progress on the robot started in late November 2011 and continued through April 2012. Below is a detailed plan of the milestones and tasks taken to complete the project. The work was divided into three milestones, each with several tasks that were split between the two group members.
3.1 - Objective 1 - The Platform

1. Design platform – The final platform design consists of an equilateral triangular frame with an omni-
wheel mounted at the center of each side. Powering each omni-wheel is a stepper motor. At the cor-
ners of the triangle, threaded rods support three more triangular frames stacked above the base. The-
se shelves are used to hold robot components including the laptop and Kinect sensor.
The platform deviated significantly from our original design, which was to cut flat, circular shelves from
Lexan plastic and mount the wheels in a triangular pattern in the bottom circle. We decided to change
to a triangular platform because it was easier to build while still maintaining the precision necessary for
proper motor and wheel alignment.
To drive the stepper motors, a stepper motor controller board was designed that interfaces via serial and I2C. Each board powers one stepper motor. Our original design was to create one central control board, but we decided instead to use three identical boards for modularity and to reduce cost. By doing this, we were able to use a cheaper and smaller microcontroller to design one board for each motor. By using the I2C bus, all three boards can be controlled with the PC connected via serial to any one of them. A fourth board was designed for power regulation, as the motor control boards require both 5 and 12 volt inputs. The stepper controller board was designed using EAGLE software, but the regulator board was built on a prototyping board due to time constraints.
2. Build platform – To build the mechanical platform, we used 2x4 boards keeping the number of cuts to a
minimum. The platform was made open source and the plans are being placed online. The build was
fairly easy and can be completed in a day or two of work.
On the electronics side, we considered having our motor control boards professionally made, but de-
cided it would be cheaper and faster to make them ourselves. To do so, we used the laser printer ton-
er transfer method. This produced clean boards with few errors.
3. Write microcontroller and PC interface code – The microcontroller code is written in C and compiled
with the WinAVR GCC compiler. The code manages the serial and I2
C interfaces, maintains motor pa-
rameters, and drives the motor output MOSFETs. The board’s serial interface is a bridge to the I2
C in-
terface, and all motor control messages are sent as I2
C messages. By doing this, the same protocol is
used to communicate with the board that the serial cable is connected to as well as all other boards
connected on the I2
C bus. On the PC, a ROS node has been created that converts direction vectors into
appropriate motor controller commands, then sends them over the serial port.
3.2 - Objective 2 - Mapping Software

1. Import Kinect color image and depth map – This step involves setting up the ROS environment with the
appropriate drivers and software to retrieve information from the Kinect sensor over USB. The ROS
software is capable of providing a color RGB image, a grayscale depth image, a three dimensional RGB
point cloud, and various other pieces of information such as accelerometer data.
2. Identify planes and convert to a mesh – This step uses slopes taken from the depth image to segment the field of view into multiple planes. By looking at the plane borders and intersections it is possible to find vertices that will define the surfaces in the image. These vertices are compiled into COLLADA mesh files for display and linked with supplementary data needed for mapping. The color image taken by the camera is then applied to the mesh as a texture, producing a three dimensional color model as the output.
The local mapping changed significantly from the original design. During software development it became apparent that the original method of finding vertices would be too inaccurate to produce decent results. To correct this, the new software first looks for planes then parses those into their vertices.
3. Identify points in global model and combine with local mesh – By looking at fixed points in the world model, the software is able to correct the robot's position to match the input. After correcting the robot's position, the software combines new points with points that were obstructed in previous frames. The software continuously updates and outputs a global map of the world.
3.3 - Objective 3 - Navigation Software
1. Identify obstacles – The navigation software must be able to detect obstacles in the map that would
block the robot's desired path. The software uses the ROS pointcloud_to_laserscan module to do this. The
module takes the rectified coordinates of obstacles and creates a simulated laser scan giving the dis-
tance to the nearest obstacle across a range of angles.
The obstacle identification changed significantly from the original design which first looked for vertical
planes then placed them in the world. The software being used currently is open source code that has
been tested and provides fast and accurate results.
2. Identify mapped and unmapped areas – The software is currently using the ROS gmapping stack which
simultaneously corrects position and creates a map of the environment. The gmapping stack matches each laser scan against the map built so far and uses a particle filter to correct the robot's position. Once
the software has corrected position it places obstacle data in the global map. The software outputs an
overhead map of the environment and can give information on areas that have not yet been explored.
This portion of the software has changed significantly from the original design. The original software
was going to attempt custom methods that would create a probability map of obstacles in the world
and allow the robot to re-explore areas. Due to time constraints from problems in other areas of the
software it was decided to use the third party software.
3. Solve for robot's movement – The software uses the ROS navigation stack again to solve for movement.
The navigation stack takes in the laser scan message as well as the map built by the gmapping stack.
The stack has the ability to receive a list of waypoints and can also autonomously map unknown envi-
ronments. The stack outputs velocities for the linear strafe and rotation of the robot to the hardware
interface.
It was planned to write a custom module to compute navigation using potential fields. However due to
time constraints it was decided to change to the navigation stack. The stack is a tested piece of code
that works on many other platforms and provides reliable navigation.
4.0 - Design

4.1 - Platform Structural and Mechanical
The platform’s overall structure consists of a triangular wood-
en base with three triangular adjustable shelves above it. These
shelves are supported by three 3/8” threaded rods, one in each cor-
ner of the triangle, and may be adjusted vertically for flexibility. The-
se shelves hold the robot’s main computer, the Kinect sensor, and any
additional accessories or components that may later be attached to
the robot. The base provides motor and shaft mounting hardware in
addition to three corner plates for mounting the motor controller
boards and a center support for holding the robot’s sealed lead acid
battery while maintaining even weight distribution among the three
wheels. Three end plates are attached around the outside of the base
which secure the outer end of the wheel shafts and provide addition-
al support to the shafts, evening the load on the wheels and motor
shafts.
The mechanical design is based around the
Killough drive train, also known as the kiwi drive. The
drive train uses three omni-wheels mounted 120º
apart in a triangular arrangement. The wheels con-
trol motion in the direction of rotation but do not
hinder movement along the axis. This design is ho-
lonomic, meaning that it allows for independent
forward, strafe, and rotational motion. When com-
bined with tilt on the Kinect sensor this allows the
Kinect to view the world at any angle while still being
able to fully control motion on the ground plane.
Figure 7: Robot Base
Figure 6: TurtleBot 360 Frame
4.2 - Electrical
The electrical system on the robot is centered around three custom motor control boards. These
boards require +5 and +12 Volt power supplies with a common ground. The +5V supply powers the microcon-
trollers that drive the stepper motor timings and communicate to the PC. The +12V supply powers the
MOSFETs that supply the stepper motor coils with power. In addition, the three boards are linked together via an Inter-Integrated Circuit (I2C) bus which allows them to communicate. Finally, any one of the three boards must be connected via a serial link to the PC to receive commands for the motor controller network. To pro-
vide this distribution, we constructed a small power distribution board that takes in +12V from the SLA battery,
regulates it to +5V for the logic systems, and branches off power to the three motor boards. It also provides
pull-up resistors for the I2C bus and a MAX232 serial level shifter to convert the PC's RS-232 level serial signals
into the 0-5V TTL signals that the microcontroller needs. Finally, a 12V, 30A switch with LED indicator was in-
stalled between the battery and the distribution board as a means to shut off the robot without having to dis-
connect the battery connections.
Figure 8: Electrical Wiring Diagram
As mentioned in the previous section, custom motor control boards were developed for this project.
Our reasoning for this is that the majority of low-cost stepper motor controllers available are either too expen-
sive or do not provide enough current for our large NEMA 23 3A motors. The design is relatively simple as we
chose 6-wire motors which can be driven in either unipolar or bipolar configurations. We chose a unipolar
drive design as it is much easier to construct and is also cheaper to build. At the core of the drive circuit are
four power N-channel MOSFETs. The MOSFETs we ended up using were chosen because they can be driven
with a logic level signal and we were able to get 4 of them as free samples. They are plenty powerful for our
application, having an 80A continuous maximum rating while we are only using around 3-5A. To control this
system, we chose an ATTiny2313 20-pin DIP microcontroller. This was chosen as we are familiar with the AVR
series of microcontrollers, already have the software and programming hardware, and the ATTiny2313 is a rel-
atively cheap but still capable solution. In addition to the MOSFETs and the microcontroller, we needed four
diodes to absorb the high voltages generated in the motor windings when pulsed at high frequencies. We
chose 1N4004 diodes, though these are likely not the best choice for this purpose as they don't work as well at high frequencies as other available diodes. Finally, we added power indicator LEDs, motor phase
LEDs, a microcontroller data transfer LED, and a motor status and direction RGB LED. For our communication interface, we chose to use an Inter-Integrated Circuit (I2C) bus to link the three boards. I2C was chosen as it works well for communication between several devices using only two wires, and the ATTiny2313 has a Universal Serial Interface (USI) module that is capable of performing parts of the I2C protocol in hardware. While I2C works well for communication between the boards, the computer is not easily able to use it, so we also decided to use the ATTiny2313's USART to provide a serial port. The serial port accepts a different command set that can be used to set and read the device's stored I2C address as well as push I2C command messages onto the I2C bus. By doing this, we could build three identical boards, all running the same software, and control all three by plugging the PC into any one of the boards and sending addressed commands to it. The only difference between the boards is the stored I2C address, which is saved in the microcontroller's internal EEPROM non-volatile memory and may be changed using the serial port. For the communication protocol, a simple command-parameter set was created. All messages are three bytes – one command byte and up to two data bytes, which allows for 16-bit data values in a single command. For the serial I2C send command, one of the data bytes represents the number of additional I2C message bytes that the PC will send. A full command reference is available in Appendix B. The devices also support the I2C global address of 0x00, which allows one command to be processed by all of the connected boards.
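As a concrete illustration of this protocol, the sketch below shows how a PC-side program might broadcast a "set motor speed" message to all three boards by wrapping the three-byte I2C command in the serial "Send I2C Message" command. This is not the project's actual PC-side code; the device path, speed value, and helper names are illustrative assumptions, while the command bytes and the 19,200 baud serial settings come from Appendix B.

// Minimal sketch (not the project's actual PC-side code) of framing the
// serial and I2C commands described above. Device path and speed value
// are illustrative assumptions.
#include <cstdint>
#include <vector>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_serial_19200(const char* path)            // 19,200 baud, 8N1 per Appendix B
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0) return -1;
    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B19200);
    cfsetospeed(&tio, B19200);
    tio.c_cflag |= (CLOCAL | CREAD);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}

// Serial command 0x24: send <len> I2C bytes to I2C device <addr>.
void send_i2c_message(int fd, uint8_t addr, const std::vector<uint8_t>& payload)
{
    std::vector<uint8_t> frame = {0x24, static_cast<uint8_t>(payload.size()), addr};
    frame.insert(frame.end(), payload.begin(), payload.end());
    write(fd, frame.data(), frame.size());
}

int main()
{
    int fd = open_serial_19200("/dev/ttyUSB0");    // assumed device path
    if (fd < 0) return 1;
    uint16_t step_delay = 2000;                    // illustrative 16-bit speed value
    // I2C command 0x01 (set motor speed), broadcast via the global address 0x00.
    send_i2c_message(fd, 0x00, {0x01,
                                static_cast<uint8_t>(step_delay >> 8),
                                static_cast<uint8_t>(step_delay & 0xFF)});
    close(fd);
    return 0;
}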
Figure 9: Motor Control Board Schematic
To create the boards, we used EAGLE CAD 6.1 to first create a schematic diagram of our design. This
schematic was built after prototyping the design on a breadboard and running several tests. After we settled
on a design, the schematic was finalized and we began creating a board layout based on it. Our goal was to
make the board single-layered, which means that it would be easy to manufacture at home using the laser
printer toner transfer technique. After routing the board by hand and cleaning up some organizational issues,
the board fit well on a 3”x5” sheet of copper-clad PCB. The first board was made using a PCB scrap and the
design barely fit, but after cleaning the board up and soldering all the parts it worked well and provided a more
reliable platform to develop code on. Once we were sure that the design worked as we wanted, we ordered
parts to build three of them and assembled all three with no major problems. To assemble the boards, we first
printed out a black-and-white image of the board’s pads and bottom layer on a laser printer. We used semi-
gloss paper that does not adhere well to toner, and then ironed the toner from the paper onto the copper-clad
PCB. The paper was then peeled off after soaking in water, leaving the toner cleanly adhered to the board.
The toner provides an etchant resisting surface allowing the etchant solution (ferric chloride) to dissolve the
exposed copper while leaving the toner-covered traces. The toner was then cleaned off and the holes were
drilled with a drill press. After cleaning up the copper, the parts were soldered in and then the boards were
programmed.
Figure 10: Motor Control Board PCB Layout
All embedded software and hardware design files are available under the Open Source GPL v3 license. The software may be found in the team's GitHub repository at https://github.com/CalcProgrammer1/Stepper-Motor-Controller [11].
4.3 - Software
TurtleBot 360’s software was
programmed in C++ and designed
around ROS (Robot Operating System).
ROS provides a dynamic and robust
transport layer for the robot. The sys-
tem allows code modules to be linked at
runtime making it easy to edit or replace
a single module without the user being
required to comprehend the program as
a whole.
The software design is very simi-
lar to that of the original TurtleBot and
utilizes much of the same software. Fig-
ure 11 shows a simplified overview of
the current software architecture run-
ning on TurtleBot 360. The software stack is de-
signed to map the current input from the Kinect
and to allow for platform control from either a user with a Wiimote or the guidance software.
All software is available under the Open Source GPL v3 license. The software may be found in the
team’s GitHub repository at https://github.com/Mr-Anderson/turtlebot_360 [1].
The software stack starts with the openni_kinect node obtained from the ROS repository [2]. This node
is a wrapper for the OpenNI driver which interfaces with the Prime Sense chip on the Kinect. The node publish-
es a color image, depth image, and point cloud obtained from the Kinect. The pointcloud is composed of an
ordered list of depth points along with their coordinates in real space for objects in the camera’s field of view.
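The sketch below shows the shape of a minimal roscpp subscriber to that point cloud. It is not part of the project's code, and the topic name is an assumption that may differ between driver versions.

// Minimal roscpp sketch: subscribe to the Kinect point cloud published by the
// openni_kinect driver. The topic name is an assumption.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& cloud)
{
    // The Kinect cloud is ordered: width x height matches the depth image,
    // so each point can be related back to an image pixel.
    ROS_INFO("Received cloud: %u x %u points in frame %s",
             cloud->width, cloud->height, cloud->header.frame_id.c_str());
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "kinect_cloud_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/rgb/points", 1, cloudCallback);
    ros::spin();
    return 0;
}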
A custom robot descriptor was written to
describe the dimensions of the robot. Using the
robot and joint state publishers, the positions of
the various frames are published. The Kinect’s sen-
sor data is published on its own frame, allowing
the pointcloud data to be transformed into the
robot’s reference frame. Figure 12 shows the point
cloud output of the Kinect in relation to the robot.
Figure 11: Software Architecture (showing the Kinect, pointcloud_to_laserscan, gmapping, explore, mesh_mapper, move_base, turtlebot_360_control, and turtlebot_360_hardware_interface nodes, along with the Wiimote, robot motors, user, and outside world)

Figure 12: Point Cloud Display
The pointcloud_to_laserscan node was
obtained from the ROS repository [2]. The node
converts the three dimensional pointcloud data
into a two dimensional laserscan message. The
laserscan gives the distance to the nearest object
for every angle within the robot’s field of view.
This data simulates the data given by a laser rang-
ing unit and can be used by several existing pieces
of software. The output given by the node may
be seen in Figure 13. The red stripe in the model
represents the distance to the closest obstacle in
a given direction. The blue markings on the ground
give a two dimensional representation of the ob-
stacles around the robot.
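The core idea of the conversion can be summarized in a few lines: project each point onto the ground plane and keep the closest range in each angular bin. The following sketch illustrates the principle only and is not the node's actual implementation; the height limits and bin parameters are assumptions.

// Illustrative sketch of the pointcloud-to-laserscan idea: nearest range per
// angular bin, ignoring points outside an assumed height band.
#include <cmath>
#include <limits>
#include <vector>

struct Point { float x, y, z; };   // x forward, y left, z up (ROS convention)

std::vector<float> cloud_to_scan(const std::vector<Point>& cloud,
                                 float angle_min, float angle_max,
                                 float angle_increment,
                                 float min_height, float max_height)
{
    const std::size_t bins =
        static_cast<std::size_t>((angle_max - angle_min) / angle_increment) + 1;
    std::vector<float> ranges(bins, std::numeric_limits<float>::infinity());

    for (const Point& p : cloud)
    {
        if (p.z < min_height || p.z > max_height) continue;   // ignore floor/ceiling
        float angle = std::atan2(p.y, p.x);
        if (angle < angle_min || angle > angle_max) continue;
        float range = std::hypot(p.x, p.y);
        std::size_t bin = static_cast<std::size_t>((angle - angle_min) / angle_increment);
        if (range < ranges[bin]) ranges[bin] = range;          // nearest obstacle wins
    }
    return ranges;
}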
The gmapping node is the ROS implementation of the GMapping [9] library. GMapping is a two dimensional SLAM (Simultaneous Localization and Mapping) algorithm. The
software may be found in the ROS repository [2]. The map-
ping algorithm uses the data from the laser scan message
and the robot’s odometry to compose a two dimensional
map of the environment. GMapping uses a particle filter
based method to solve for the robot’s movement and build
a two dimensional map. The maps created by the node are
sent to the map server where they can be queried by navi-
gation software or viewed by the user. Figure 14 shows a
small sample of the mapping. Light grey represents areas
that are known to be clear, black represents known obsta-
cles, and dark grey represents unknown areas.
Figure 14: gmapping Map [4]
Figure 13: pointcloud_to_laserscan Output
The mesh_mapper node is an experimental piece of software that attempts to do three dimensional
mapping from depth image and point cloud data. The module is currently only partially implemented; however
many techniques have been learned during its development. Two methods were attempted during develop-
ment and are described below.
- Vertex Based Method
The vertex based method of surface reconstruction
is centered on finding the vertices of all objects in the
field of view. The plan for this method was to then link
the vertices together based on edges between them.
This method is partially implemented but was aban-
doned early in the project. A flowchart may be found in
Figure 15 and the description of the method may be
found below.
The method started by applying the X and Y Sobel
operator to the depth image twice. The Sobel operator
was used to obtain a smoothed slope map of the entire
field of view. Applying the Sobel operator twice gives
an approximation of the change in slope or second de-
rivative across the entire field of view. The method
then scans the second derivative images to look for
pixels that have high values in both the X and Y images.
These points are considered to be corners of objects
since there are large changes in slope in both the X and
Y directions and they are marked as vertices.
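A minimal OpenCV sketch of this corner test is shown below; it is an illustration of the idea rather than the team's code, and the threshold value is an assumption.

// OpenCV sketch of the vertex test: second derivative of the depth image in
// X and Y, marking pixels that are large in both. Threshold is assumed.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

cv::Mat find_vertex_candidates(const cv::Mat& depth, double threshold)
{
    cv::Mat d2x, d2y;
    // Applying the Sobel operator twice is equivalent to requesting the
    // second derivative directly (order 2 in one axis, 0 in the other).
    cv::Sobel(depth, d2x, CV_32F, 2, 0);
    cv::Sobel(depth, d2y, CV_32F, 0, 2);

    cv::Mat strong_x = cv::abs(d2x) > threshold;   // 8-bit masks (255 where true)
    cv::Mat strong_y = cv::abs(d2y) > threshold;

    cv::Mat vertices;
    cv::bitwise_and(strong_x, strong_y, vertices); // large slope change in both X and Y
    return vertices;                               // non-zero pixels are vertex candidates
}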
Once vertices are identified, the method looks for
links between them. To do this the software adds the
values of both the X and Y second derivative images along the line of pixels between two vertices. If the
sum of the line divided by its length is larger than a threshold determined by the user the points are linked
to each other. Checking for large values in either the X or Y directions assures the existence of a strong
edge between the two vertices.
While linking the vertices, the software tracks statistics about their position and slope. After linking is
complete, the software looks for edges with similar slopes that could be considered to be the edges of the
same plane. Edge matches are made by looking for similar slopes in the X-Y, Y-Z, or X-Z directions; a threshold set by the user is then used to define matches.
Once the segmentation of planes is complete, the software attempts to parse the data into a mesh file.
This is done by creating triangles to represent each plane, linking vertices of one side to those of the other.
Figure 15: Vertex Based Method (flowchart: input depth and color images → Sobel operator → vertex recognition → vertex linking → plane recognition → mesh encoding → local mesh)
The three dimensional location of the vertices is used to store all vertices into a standard COLLADA [4] for-
mat. The RGB image from the camera can then be added as a texture to give color to the model.
Early in the development, this method was abandoned. The method proved to be too volatile to find
reliable vertices. Not all of the vertices in the frame were captured, leading to problems with code later in
the pipeline. In addition, the method does not track enough data about vertices to reliably link data into a
world map. The vertex based method appears to be too complicated to reliably perform surface recon-
struction.
- Plane Based Method
The plane based method of surface reconstruction starts
by looking for planes within the image. The plan is to segment all
of the flat surfaces in the field of view, then use that information
to find their edges and vertices. While this method shows prom-
ise it is currently only partially implemented. Figure 16 shows the
flow of the plane based method.
The method starts by attempting to find all the flat sur-
faces in the depth image. To do this the Sobel operator is applied
to the depth image. This gives two images, one with the slope in
the X direction and one with the slope in the Y direction for every
pixel in the depth image. These images are used to find runs of
constant slope within the image. The software runs horizontally
along the X image and vertically along the Y image looking at the
change in slope. Four types of conditions can mark the start or
end of a run. These conditions are tracked for use in model con-
struction.
1. Continuous Point – The change in slope is large but the
depth value of pixels on either side is similar.
2. Discontinuous Foreground – The change in slope is large and the point is much closer in depth than its neighbor.
3. Discontinuous Background – The change in slope is large and the point is much farther in depth than its neighbor.
4. Structure Point – The change in slope is small but the accu-
mulated change in slope across the image has grown larger
than the defined threshold. This condition is used to define faceted surfaces that represent curves in
the real world.
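The sketch below illustrates how a single row of the depth image could be scanned for these four break conditions. The data layout and threshold structure are assumptions made for illustration; they are not the mesh_mapper node's actual code.

// Sketch (under assumed thresholds) of ending and classifying runs along one
// row of the depth image using the four conditions above.
#include <cmath>
#include <utility>
#include <vector>

enum class BreakType { None, Continuous, DiscontinuousForeground,
                       DiscontinuousBackground, Structure };

struct Thresholds { float slope_change; float depth_jump; float accumulated_slope; };

// depth: one row of the depth image; slope: its X Sobel response.
std::vector<std::pair<int, BreakType>>
find_run_breaks(const std::vector<float>& depth,
                const std::vector<float>& slope,
                const Thresholds& t)
{
    std::vector<std::pair<int, BreakType>> breaks;
    float accumulated = 0.0f;
    for (std::size_t i = 1; i < depth.size(); ++i)
    {
        float d_slope = std::fabs(slope[i] - slope[i - 1]);
        float d_depth = depth[i] - depth[i - 1];
        accumulated += d_slope;

        BreakType type = BreakType::None;
        if (d_slope > t.slope_change) {
            if (std::fabs(d_depth) < t.depth_jump) type = BreakType::Continuous;
            else if (d_depth < 0)                  type = BreakType::DiscontinuousForeground;
            else                                   type = BreakType::DiscontinuousBackground;
        } else if (accumulated > t.accumulated_slope) {
            type = BreakType::Structure;           // faceting for curved surfaces
        }

        if (type != BreakType::None) {
            breaks.emplace_back(static_cast<int>(i), type);
            accumulated = 0.0f;                    // start a new run at this point
        }
    }
    return breaks;
}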
Figure 16: Plane Based Method (flowchart: input depth and color images → Sobel operator → vertical and horizontal run lengths → run linking → vertex parsing → mesh encoding → local mesh → continuous vertex correlation → discontinuous vertex correlation and stretching → global mesh)

Once all of the runs have been found in both the X and Y directions, the software attempts to weave them together into continuous surfaces. The weaving is performed by a recursive function that links runs together into planes. The function is fed a seed horizontal run that has not already been linked into a plane. The function
then runs along the horizontal run looking for vertical runs that it crosses and that have not been linked. The
function traces down unlinked vertical runs and calls itself again for all unlinked horizontal runs that are
crossed. Once all the runs crossed have been linked the function returns a plane with a list of all the runs it con-
tains. While the function executes, statistics are gathered about the size, location, members, and perimeters of
each plane.
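A simplified sketch of that recursive weaving step is shown below. The run data structure and crossing bookkeeping are assumptions; the real implementation gathers additional statistics as described above.

// Sketch of the recursive weaving: starting from an unlinked horizontal run,
// pull in every unlinked vertical run it crosses, then recurse into the
// unlinked horizontal runs those cross. Data layout is assumed.
#include <vector>

struct Run
{
    bool linked = false;
    std::vector<int> crossings;   // indices of runs (in the other orientation) crossed
};

// Recursively collect the horizontal run indices belonging to one plane.
void weave(int h_index, std::vector<Run>& h_runs, std::vector<Run>& v_runs,
           std::vector<int>& plane)
{
    Run& h = h_runs[h_index];
    if (h.linked) return;
    h.linked = true;
    plane.push_back(h_index);

    for (int v_index : h.crossings)          // walk along the horizontal run
    {
        Run& v = v_runs[v_index];
        if (v.linked) continue;
        v.linked = true;
        for (int next_h : v.crossings)       // trace down the vertical run
            weave(next_h, h_runs, v_runs, plane);
    }
}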
The currently implemented method is able to segment
planes with some success. Due to time constraints brought on by the failure of the previous method, however, this is all of the method that has been implemented. The following describes
the rest of the process planned for implementation.
After the depth image has been segmented into planes,
the planes must be defined as a list of vertices. To find the
vertices of the image, the software moves along the perime-
ter that is output from plane segmentation. The software
tracks the slope of the points around the perimeter looking
for large sudden or accumulated changes in slope. When
changes occur, new vertices are defined. The vertices are
checked to see if they lie on a continuous or structure point. If the point is continuous or a structure point, the
same vertices are defined on its neighbor plane to maintain continuity of the mesh.
Once all of the vertices for planes have been found, they must be segmented into triangles. To do this, the
software starts with one vertex of a plane then links by alternating clockwise and counter clockwise rotation
around the perimeter. This method assures that all of the perimeter vertices will be linked into triangles that
represent the planes they are associated with.
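The sketch below shows one way to express that alternating traversal as index arithmetic over the perimeter list; the exact ordering conventions are assumptions.

// Sketch of the alternating clockwise / counter-clockwise triangulation of a
// plane's perimeter: two indices walk toward each other from opposite ends of
// the perimeter list, emitting one triangle per step.
#include <array>
#include <vector>

std::vector<std::array<int, 3>> triangulate_perimeter(int vertex_count)
{
    std::vector<std::array<int, 3>> triangles;
    int low = 0, high = vertex_count - 1;
    bool advance_low = true;                  // alternate which side steps forward
    while (high - low >= 2)
    {
        if (advance_low) { triangles.push_back({low, low + 1, high}); ++low; }
        else             { triangles.push_back({low, high - 1, high}); --high; }
        advance_low = !advance_low;
    }
    return triangles;                         // indices into the perimeter vertex list
}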
After all of the vertices have been linked into their polygons, the software parses the information into COL-
LADA format with supplementary data. The supplementary data stores the type of point that each vertex is and
the positions on the original image for use in global model creation. The RGB image is added as a texture using
the original image locations of the vertices to place it on the mesh.
The software continuously outputs meshes of each local frame. As new frames come in it then attempts to
correlate them into a global map using the robot’s odometry data. The global model is started with the first
mesh that is captured and all vertices of that mesh are placed into the global model. For successive frames, the
software first looks for vertices defined as continuous vertices. Continuous vertices are often associated with
the corners of objects and their positions do not change as parts of the world are covered. For each new con-
tinuous point the software attempts to find one that is close to the same position in the world using the robot’s
odometry. The software then attempts to correct the odometry of the robot to line up the vertices exactly with
the existing model. Once vertices line up they are combined with those of the global frame.
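A sketch of that matching step is shown below. The match radius, data structures, and the averaging of residuals into a single correction are illustrative assumptions rather than details of the planned implementation.

// Sketch (with an assumed match radius) of matching a frame's continuous
// vertices to the global model: transform by odometry, find the nearest
// existing global vertex, and average the residuals into an odometry correction.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float dist(const Vec3& a, const Vec3& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// Returns the average offset between matched vertex pairs, which could be used
// to nudge the odometry estimate before merging the local mesh.
Vec3 match_continuous_vertices(const std::vector<Vec3>& local_in_global,
                               const std::vector<Vec3>& global_model,
                               float match_radius)
{
    Vec3 correction{0, 0, 0};
    int matches = 0;
    for (const Vec3& v : local_in_global)
    {
        const Vec3* best = nullptr;
        float best_d = match_radius;
        for (const Vec3& g : global_model)
        {
            float d = dist(v, g);
            if (d < best_d) { best_d = d; best = &g; }
        }
        if (best) {
            correction.x += best->x - v.x;
            correction.y += best->y - v.y;
            correction.z += best->z - v.z;
            ++matches;
        }
    }
    if (matches) { correction.x /= matches; correction.y /= matches; correction.z /= matches; }
    return correction;
}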
After all of the continuous vertices are linked, the software attempts to link discontinuous vertices. Since discontinuous vertices are often the result of the background being covered by the foreground or the edge of a curved surface, they will often move in the global model. The software looks for continuous vertices in the global model that are close to the new vertices. The vertices in the global model are then moved to match the position of the new local vertices. The movement of global vertices is tracked and after there has been sufficient movement new vertices are created as structure points. This is to account for the case of the robot moving around a curved surface and will create a faceted structure to represent it. The structure points of local frames are ignored after the first frame since their data is already represented in the model.

Figure 17: Depth Image from Kinect
While the entirety of the plane based surface modeling method has not been implemented the method
shows potential. The team will continue to develop this method. The theory of the implementation seems
sound and will hopefully produce good results in the future.
The explore node uses the current world map to determine the best path for the robot. The software
attempts to create waypoints that will explore an unknown environment. The explore node may be found in
the ROS repository [2]. The node publishes 2D navigation goals that will move the robot to unexplored areas.
The software will continue to publish new waypoints until the entire environment has been explored and it
determines all of the boundary walls.
The move_base node is in charge of local and global
obstacle navigation. The software is part of the ROS naviga-
tion stack [2]. The node looks at the local obstacle map
around the robot and determines the path needed to avoid
close obstacles. The software then looks at the global map
and attempts to find a path that will lead to the next waypoint
given by the explore node or by a user. Once a path is deter-
mined, the node publishes a message that contains the de-
sired robot velocities. The navigation stack was tuned to take
advantage of the holonomic drive system and computes a
forward, strafe, and rotational velocity for the robot. Figure
18 shows the user's desired waypoint as a red arrow and the robot's planned path as a green line. The blue markings on the ground represent obstacles and the green ones represent those obstacles inflated by the robot's size. The inflation is used to determine safe areas for the robot to travel on.
The turtlebot_360_control node has the final control over what velocity commands are sent to the mo-
tor control software. The node has several modes of operations that decide the behavior of the robot. The
node was custom built for TurtleBot 360 and is the bridge between the software and the user.
Figure 18: Robot Movement Planning
The control node starts out in standby mode and waits for a Wiimote controller to connect. Once the
user connects a Wiimote they have the ability to place the robot into either user controlled mode or autono-
mous mode. In autonomous mode the node will pass the velocity commands published by move_base through
to the hardware interface giving it control over the robot. In user controlled mode the software will interface
with the Wiimote and compute velocity commands based on the button, joystick, and accelerometer data from
the Wiimote. The node also interfaces with a
text-to-speech library to provide feedback
about the current state of the robot.
The software also launches a ROS tool
named RViz. RViz is a visualizer that allows the
user to view combined information about the
robot's inputs in a three dimensional virtual
environment. The output of the RViz display
may be seen in Figure 19. The display is cus-
tomizable at run time so users may view any
debugging information that is being published.
Users are also able to subscribe to this data
over a network allowing for remote operation.
The turtlebot_360_hardware_interface node computes the velocities that are sent to the motors using
the velocity commands published by control. The control of the kiwi drive train is defined by three linear equations, one for each degree of freedom; for each wheel, the three contributions are summed to compute the final wheel velocity. The formulas used to compute the wheel velocities may be found below in Equation 1. The wheels were numbered starting at the front of the robot and rotating to the left.
v1 = v_strafe + ω·r
v2 = −(1/2)·v_strafe − (√3/2)·v_forward + ω·r
v3 = −(1/2)·v_strafe + (√3/2)·v_forward + ω·r

Equation 1: Wheel velocity formulas (v_forward and v_strafe are the commanded forward and strafe velocities, ω is the rotational velocity, and r is the distance from the robot's center to each wheel)
Once the velocities are computed they are scaled to maintain a maximum speed and acceleration for
each wheel. The scale back value for each wheel is computed then the largest scalar is applied to all of the
wheels to maintain the ratio between velocities. The new wheel velocities are then transmitted to the motors
through the serial port. The motor velocities are then used to compute the odometry of the robot and publish
it back to the software.
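The sketch below combines the wheel mixing of Equation 1 with the scaling step just described. The wheel ordering and sign conventions are assumptions consistent with three wheels mounted 120 degrees apart and numbered from the front of the robot; it is not the node's actual code.

// Sketch of kiwi-drive mixing and velocity scaling. Sign conventions and
// wheel ordering are assumptions for illustration.
#include <algorithm>
#include <array>
#include <cmath>

std::array<double, 3> wheel_velocities(double forward, double strafe, double rotation,
                                       double max_wheel_speed)
{
    const double s = std::sqrt(3.0) / 2.0;
    std::array<double, 3> w = {
        strafe + rotation,                          // wheel 1 (front)
        -0.5 * strafe - s * forward + rotation,     // wheel 2 (rear left, assumed)
        -0.5 * strafe + s * forward + rotation      // wheel 3 (rear right, assumed)
    };

    // Scale all wheels by the same factor so the fastest one stays within its
    // limit while the velocity ratios (and thus the motion direction) are kept.
    double largest = std::max({std::fabs(w[0]), std::fabs(w[1]), std::fabs(w[2])});
    if (largest > max_wheel_speed)
    {
        double scale = max_wheel_speed / largest;
        for (double& v : w) v *= scale;
    }
    return w;
}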
Figure 19: RVIZ Robot Dashboard
5.0 - Experimental Results
Overall the team is pleased with the performance of the robot. The robot is able to successfully map
two dimensional environments and can navigate autonomously. TurtleBot 360 is a successful holonomic drive
platform and can move in any direction and rotate on the ground plane. The platform is sturdy and will make a
good test bed for future software development.
While the platform was able to demonstrate movement in multiple axes simultaneously, the platform’s
wheels will suddenly lose all torque if they encounter too much resistance or try to drive too fast. This is due to
the stepper motors, which have high torque at low speeds and rapidly decline in torque as speed increases. In
addition, once a motor stalls it has a very difficult time getting back its torque. This is because our software
soft-starts the motors with a gradual acceleration curve, using the high torque at low speeds to get the robot
moving. The hardware is also unaware of any motor skips or stalls, so the odometry data that is fed into the
ROS navigation software loses accuracy quickly when the motors stall.
In future revisions of the platform, the motors could be
replaced with higher torque stepper motors and the controllers
improved to support bipolar drive, microstepping, and feedback
mechanisms. Additionally, the stepper motors could be replaced
with brushed or brushless DC motors with gear boxes. These mo-
tors are not as precise, but with a good encoder and a good gear
box they can produce higher torque and recover from stalls quick-
er. The downside to these systems is that they are much more ex-
pensive than the unipolar stepper drives which increases the cost
of the robot significantly. The required torque may also be re-
duced by using omni-wheels that do not have rubber rollers. The
rubber rollers on the Vex wheels used in our design grip the floor
and require more torque to roll sideways or diagonally than plastic
rollers. A third improvement that could be made is to use two
smaller batteries in series to create a 24 Volt main power rail. This
higher voltage can increase the speed at which the motors are able
to maintain their torque. However, it would pose more risk to the
motors for overheating or over-current, so the control code would
need to limit the pulse width such that the current never exceeds
the motor’s rating (3A in our case).
In addition to the hardware concerns of the robot there were also shortcomings in the software. The
robot was able to perform well when creating two dimensional maps, however the software for three dimen-
sional mapping was never completed. The three dimensional mapping software never behaved quite as ex-
pected.
The team started implementation with a vertex based method that attempted to find vertices of ob-
jects then link them together. This method had many problems during operation that caused the false identifi-
cation of vertices in addition to vertices not being identified. The method also had conceptual problems that would have led to an inability to create a global model. The team abandoned work on the vertex based method early during development but learned a great deal from the work done on it. From working with the vertex based method of surface reconstruction the team was able to construct the plane based method.

Figure 20: Experimenting
The team believes the plane based method to be conceptually sound; however, due to setbacks in development and heavy course loads it has not been fully implemented. The portion of the method that has been
implemented is able to segment the depth image of the Kinect into flat surfaces. The software still has some
bugs during operation and more development will need to be done. The team plans to continue work on the
software and will attempt to implement
the method described in the sections
above.
While TurtleBot 360 has some
shortcomings it is able to do many things
very well. The robot is able to successfully
autonomously map indoor environments
in two dimensions. The software on the
robot has been set up to ROS standards,
opening it to a variety of additional soft-
ware packages to expand its abilities. The
team is very happy with the platform and
will continue to use it as a reliable test
bed in the future.

Figure 21: TurtleBot 360 Mapping an Indoor Environment
6.0 - Timeline and Deliverables
The robot was built and programmed from late November 2011 through April of the next year. Two
schedules may be found below, showing the proposed timeline and the timeline that was actually followed.
6.1 - Original Schedule

Goals, tasks, and responsibilities (primary/secondary) from the original November 2011 – April 2012 schedule:

Goal: Build a computer controlled omni-directional robot
- Design platform (Adam Honse / James Anderson)
- Build platform (Adam Honse / James Anderson)
- Write microcontroller and PC interface code (Adam Honse / James Anderson)

Goal: Program robot to build indoor maps under remote control
- Import Kinect color image and depth map (James Anderson / Adam Honse)
- Identify vertices and convert to a mesh (James Anderson / Adam Honse)
- Track and compute optical flow map and solve for camera location (James Anderson / Adam Honse)
- Combine current mesh with world map (James Anderson / Adam Honse)

Goal: Program robot to build indoor maps autonomously
- Identify obstacles (James Anderson / Adam Honse)
- Identify mapped and unmapped areas (James Anderson / Adam Honse)
- Solve for robot's movement (James Anderson / Adam Honse)
6.2 - Final Schedule

Goals, tasks, and responsibilities (primary/secondary) from the schedule that was actually followed:

Goal: Build a computer controlled omni-directional robot
- Design platform (Adam Honse / James Anderson)
- Build platform (Adam Honse / James Anderson)
- Write microcontroller and PC interface code (Adam Honse / James Anderson)

Goal: Program robot to build indoor maps under remote control
- Import Kinect color image and depth map (James Anderson / Adam Honse)
- Identify planes and convert to a mesh (James Anderson / Adam Honse)
- Identify points in global model and combine with local mesh (James Anderson / Adam Honse)

Goal: Program robot to build indoor maps autonomously
- Identify obstacles (James Anderson / Adam Honse)
- Identify mapped and unmapped areas (James Anderson / Adam Honse)
- Solve for robot's movement (James Anderson / Adam Honse)
The discrepancies between the final and original project timelines come from a major shift in the
software plan early in the project. When it became clear that the finding of vertices would not be reliable
enough to accurately map environments, the team paused software development. The team settled on a new
method of surface reconstruction that uses planes to find surfaces. The loss of development time also caused
the team to choose a third-party software stack that is part of the Robot Operating System repository [2] to handle
the robot navigation. These changes led to a more robust software stack but the loss of development time did
not allow for the completion of the three dimensional mesh mapping module.
Another setback occurred on the hardware side. While major components were obtained early on
(namely the wheels and motors), the board development was delayed until the motor controller ICs could be obtained. The team chose to sample these parts (ST's L297 and L298 stepper motor ICs) to save on development cost and to try out a more suited part than the unipolar MOSFET drive system that the team was planning on using. Testing these ICs took some time, as the team encountered problems with high heat output and
problems with the chip’s current limiting system. When we finally settled on a schematic, a board was de-
signed using EAGLE CAD board development software. After writing code and testing, we determined that this
design would not be a reliable way to drive our motors and went back to the original idea of a unipolar driver
using four MOSFETs. This design was then put together in EAGLE and tested successfully. The final board de-
sign was based off of this MOSFET driver which worked reliably with lower heat output from the board.
7.0 - Budget
Listed below are the original and final overview budgets. The fully itemized budget may be found in
Appendix A. The discrepancies between the final and planned budget are accounted for by the changes in the
hardware design. The team used larger motors and increased the cost of the frame by making it more
structurally sound. The motor controller board was also redesigned as three individual boards as opposed to
the one central board proposed in the original budget. The additional cost has led to a robot that is more ro-
bust and easier to build.
Table 1: Original Budget
Item Quantity Unit Cost Cost
Stepper Motors 3 $20.00 $60.00
Omni Wheels 3 $40.00
Lexan Sheets (1024 in²) 1 $35.00
Threaded Rod (6’) 1 $35.00
Fasteners 1 $10.00
Velcro 1 $5.00
Battery Pack 1 $30.00
Microcontroller Board 1 $30.00
Infrared Range Sensors 3 $15.00 $45.00
Kinect Sensor 1 $150.00
Kinect Power Supply 1 $10.00
Kinect Mount 1 $5.00
Total $426.00
Table 2: Overall Final Budget
Item Quantity Unit Cost Cost
Stepper Motor Control PCB 3 $20.00 $60.00
Power Regulator PCB 1 $5.00 $5.00
Battery 1 $27.00 $27.00
Microsoft Kinect for Xbox 360 Sensor 1 $150.00 $150.00
VEX Robotics 4in. Large Omni Directional Wheel Kit 2-pack 2 $24.99 $49.98
21.0 kg-cm 6 Wire NEMA 23 Stepping Motor 3 $22.50 $67.50
Robot Frame 1 $138.75 $138.75
Total $498.23
The final cost of the project came in at just under $500. This cost is well below the cost of similar platforms,
including the original TurtleBot platform, which can be built for just under $950. While our platform prototype has some problems with torque, the price difference more than makes up for the cost of installing stronger
motors in the final revision. The team believes that with the platform’s added omnidirectional capability and
increased height, it is a better value for hobby robotics than the TurtleBot. Expanded budgets for the individual
motor control boards as well as the robot structural components may be found in Appendix A. Note that the
final robot design consists of three motor control boards, which are represented in the overall final budget in
Table 2.
Appendix A - Itemized Budget
Single Motor Control Board
Item Quantity Unit Cost Cost
MG Chemicals Single Sided 3x5 PCB 1 $3.30 $3.30
CTS 20MHz 50pF 30ppm Crystal 1 $0.35 $0.35
Fairchild Semiconductor 1N4004 Diode 4 $0.09 $0.36
Nichicon 470uF 35V Electrolytic Capacitor 1 $0.41 $0.41
Nichicon 1000uF 35V Electrolytic Capacitor 1 $0.50 $0.50
KOA Speer Metal Film Resistor 2.2KOhm 1% 1 $0.06 $0.06
KOA Speer Metal Film Resistor 300Ohm 1% 3 $0.06 $0.18
KOA Speer Metal Film Resistor 100Ohm 1% 4 $0.06 $0.24
KOA Speer Metal Film Resistor 1KOhm 1% 10 $0.06 $0.60
Cree, Inc. 5mm Red High Brightness 2.1V LED 3 $0.14 $0.42
Cree, Inc. 5mm Blue LED 4 $0.19 $0.76
STMicroelectronics STP95N3LLH6 N-Channel MOSFET 4 $1.51 $6.04
Atmel ATTiny2313 AVR Microcontroller 1 $1.91 $1.91
Total $15.13
Other Items Not Purchased Quantity Est. Cost
5mm RGB Common Cathode LED 1 $1.50
Mini Push Button Switch 1 $0.30
Break-away Pin Headers (Male) 16 $0.30
Break-away Pin Headers (Female) 6 $0.10
Power Distribution Board (Items Not Purchased, Cost Approximated)
Item Quantity Unit Cost Cost
LM7805 5 Volt Regulator 1 $1.00 $1.00
MAX232 Serial Level Shifter 1 $1.00 $1.00
1KOhm Resistor (I2C Pull-ups) 2 $0.06 $0.12
12V 30A Toggle Switch with LED Indicator 1 $3.19 $3.19
Small Prototyping PCB 1 $1.00 $1.00
Total $6.31
Platform Hardware
Item Quantity Unit Cost Cost
VEX Robotics - Shaft Coupler (5-pack) 2 $4.99 $9.98
Shaft Coupler 1/4 inch to 1/4 inch Steel with 2 Set Screws 3 $2.99 $8.97
10-32 x 1/8" Long Cup Point Socket Set Screw 1 $2.20 $2.20
6 x 1 1/4" Round Head screws (side gussets) 12 $0.10 $1.20
10 x 2" Round Head Bolts (motor) 12 $0.14 $1.68
#10 Lock Washers (motor) 12 $0.09 $1.08
#10 Nuts (motor) 12 $0.09 $1.08
#10 Washers (motor) 12 $0.09 $1.08
5/16" x 5" Bolts (bearing plate) 6 $0.60 $3.60
5/16" Nuts (bearing plate) 12 $0.10 $1.20
5/16" Washers (bearing plate) 12 $0.11 $1.32
1/4" Bronze Bearing 3 $2.45 $7.35
1/2" Bronze Bearing 3 $2.75 $8.25
8 x 2" Flat Head Screws (bearing plate spacers) 6 $0.10 $0.60
10 x 3" Flat Head Screws (corners) 6 $0.12 $0.72
2x4x8 studs (triangle) 2 $2.27 $4.54
1x6x6 studs (side gussets) 1 $5.34 $5.34
1x4x8 studs (shelf frames) 3 $1.94 $5.82
2x3x8 studs (bearing plate spacer) 1 $1.92 $1.92
2x8 studs (top gussets) 1 $4.95 $4.95
1/2" Threaded Rod (shelf supports) 3 $4.29 $12.87
1/2" Nuts Box (shelf support) 1 $8.00 $8.00
1/2" Washers Box (shelf support) 1 $8.00 $8.00
Lexan (for shelves) 1 $10.00 $10.00
Kinect Mount 1 $10.00 $10.00
Board Mounts 1 $15.00 $15.00
Key stock (Axle) 1 $2.00 $2.00
Total: $138.75
Appendix B - Interface Commands
Serial Port Command Reference
The serial port operates at 19,200 baud, 8 data bits, no parity. All commands are three bytes long, though
some commands can accept additional data as specified in the command. Any N/A values are don't-cares, but a dummy value must still be sent to fill that byte position.
Command Byte   Data 1    Data 2    Description
0x21           Mode      N/A       Set I2C Mode – Mode 1 is master, mode 0 is slave.
0x22           Address   N/A       Set I2C Address – Address is a 7-bit address value.
0x23           N/A       N/A       Read I2C Address – The board transmits its address byte.
0x24           Length    Address   Send I2C Message – Sends a message of <Length> bytes to I2C device
                                   <Address>. This command requires <Length> additional bytes to be sent from
                                   the PC containing the I2C data to send.
0x25           Length    Address   Send I2C Read Request – Sends a read request to device <Address>, reads
                                   <Length> bytes, then transmits them back to the PC.
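The sketch below illustrates how a PC-side program might open the serial link and frame these three-byte commands. It is only an example under assumptions not stated in this report: a POSIX/Linux host, one stop bit, and a hypothetical device path ("/dev/ttyUSB0"); the open_port and send_command helper names are invented for illustration.

// Minimal sketch of framing the three-byte serial commands from a Linux PC.
// Assumptions: POSIX host, one stop bit, hypothetical device path.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int open_port(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;
    termios tty{};
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);                      // raw mode: no echo, no line buffering
    cfsetispeed(&tty, B19200);            // 19,200 baud
    cfsetospeed(&tty, B19200);
    tty.c_cflag &= ~(PARENB | CSTOPB | CSIZE);
    tty.c_cflag |= CS8 | CLOCAL | CREAD;  // 8 data bits, no parity
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

// Every command is exactly three bytes: command byte, data 1, data 2.
void send_command(int fd, uint8_t cmd, uint8_t d1, uint8_t d2)
{
    uint8_t frame[3] = {cmd, d1, d2};
    write(fd, frame, sizeof(frame));
}

int main()
{
    int fd = open_port("/dev/ttyUSB0");   // hypothetical serial adapter path
    if (fd < 0) { std::perror("open"); return 1; }

    send_command(fd, 0x21, 1, 0);         // put the connected board in I2C master mode
    send_command(fd, 0x23, 0, 0);         // ask the board for its stored I2C address

    uint8_t addr = 0;
    read(fd, &addr, 1);                   // board transmits its address byte back
    std::printf("Board I2C address: 0x%02X\n", addr);

    close(fd);
    return 0;
}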
I2C Interface Command Reference
All boards initialize as I2C slaves when power is applied. To use master mode, use the serial commands above
to switch a board into master mode and begin transmitting messages. The table below lists the I2C slave-mode
commands that are used to control the motor. The serial-connected master board will also interpret each
message addressed to itself. Like serial messages, all I2C slave messages are three bytes long. The I2C address
0x00 is the global address; a device responds to this address regardless of its programmed address.
Command Byte   Data 1      Data 2   Description
0x01           High        Low      Set motor speed – Sets the delay between motor steps. This is a 16-bit
                                    unsigned value.
0x02           High        Low      Set motor step count – Sets the number of steps in the step counter. The
                                    step counter decrements each time the motor steps. This value may be
                                    overwritten at any time, even during motor operation. This is a 16-bit
                                    unsigned value.
0x03           Direction   N/A      Set Step Direction – Direction = 0 or 1 sets the direction in which the
                                    motor steps. The motor connector may be reversed to reverse the directions.
0x04           Enable      N/A      Set Motor Enable – Setting enable to 1 enables the motor and starts motion;
                                    setting it to 0 stops motion. If the step counter reaches zero, the motor
                                    automatically disables itself.
0x05           Mode        N/A      Set Stepping Mode – Mode 0 selects normal (one phase at a time) stepping;
                                    mode 1 selects "power stepping" (two phases at a time) for increased torque.
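As an illustrative example of combining these slave commands with the serial bridge, the sketch below sets a motor's speed and step count and then enables it. The slave address (0x02), the numeric values, and the send_i2c_message helper are hypothetical; only the command bytes and the three-byte framing come from the tables above, and the open_port/send_command helpers are reused from the previous sketch.

// Sketch of driving one motor through the serial-to-I2C bridge.
// The slave address and numeric values below are hypothetical examples.
#include <cstdint>
#include <unistd.h>

int open_port(const char *path);                                   // from previous sketch
void send_command(int fd, uint8_t cmd, uint8_t d1, uint8_t d2);    // from previous sketch

// Wrap a three-byte I2C slave message in the 0x24 "Send I2C Message" serial command.
void send_i2c_message(int fd, uint8_t address, uint8_t cmd, uint8_t d1, uint8_t d2)
{
    send_command(fd, 0x24, 3, address);   // serial header: payload length = 3, target address
    uint8_t payload[3] = {cmd, d1, d2};   // the I2C slave message itself
    write(fd, payload, sizeof(payload));
}

void run_motor_example(int fd)
{
    const uint8_t motor = 0x02;           // hypothetical slave address (0x00 would address all boards)
    const uint16_t step_delay = 2000;     // example 16-bit speed value (delay between steps)
    const uint16_t step_count = 400;      // example 16-bit step count

    send_i2c_message(fd, motor, 0x01, step_delay >> 8, step_delay & 0xFF);  // set motor speed
    send_i2c_message(fd, motor, 0x02, step_count >> 8, step_count & 0xFF);  // set motor step count
    send_i2c_message(fd, motor, 0x03, 1, 0);   // set step direction
    send_i2c_message(fd, motor, 0x04, 1, 0);   // enable: runs until the step counter reaches zero
}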
P a g e | 28
[1] J. Anderson and A. Honse, "TurtleBot 360 Code Repository," [Online]. Available: https://github.com/Mr-Anderson/turtlebot_360.
[2] "ROS Documentation Wiki," Willow Garage, [Online]. Available: http://www.ros.org/wiki/.
[3] F. Endres, J. Hess, N. Engelhard, J. Sturm and W. Burgard, "RGBDSLAM," OpenSLAM, [Online]. Available: http://openslam.org/rgbdslam.html.
[4] "COLLADA Documentation Wiki," [Online]. Available: https://collada.org/mediawiki/index.php/COLLADA_-_Digital_Asset_and_FX_Exchange_Schema.
[5] F. Endres, J. Hess and N. Engelhard, "RGB-D SLAM Documentation Page," University of Freiburg, 2011. [Online]. Available: http://www.ros.org/wiki/rgbdslam. [Accessed 28 November 2011].
[6] J. Bowman, "vision_opencv ROS Documentation," [Online]. Available: http://www.ros.org/wiki/vision_opencv. [Accessed 28 November 2011].
[7] R. B. Rusu (maintainer), "openni_kinect ROS Documentation," [Online]. Available: http://www.ros.org/wiki/openni_kinect. [Accessed 28 November 2011].
[8] "TurtleBot Documentation Page," Willow Garage, [Online]. Available: http://www.ros.org/wiki/Robots/TurtleBot.
[9] G. Grisetti, C. Stachniss and W. Burgard, "GMapping," [Online]. Available: http://openslam.org/gmapping.html.
[10] "OpenCV Documentation Wiki," [Online]. Available: http://opencv.willowgarage.com/wiki/.
[11] A. Honse, "TurtleBot 360 Stepper Motor Controller Repository," [Online]. Available: https://github.com/CalcProgrammer1/Stepper-Motor-Controller.
[12] "OpenNI Documentation," [Online]. Available: http://openni.org/Documentation/ProgrammerGuide.html.
More Related Content

Viewers also liked

Arindam batabyal literature reviewpresentation
Arindam batabyal literature reviewpresentationArindam batabyal literature reviewpresentation
Arindam batabyal literature reviewpresentationArindam Batabyal
 
Tecnologías Aumentaty RV
Tecnologías Aumentaty RVTecnologías Aumentaty RV
Tecnologías Aumentaty RVAumentaty
 
WiSlam presentation
WiSlam presentationWiSlam presentation
WiSlam presentationlbruno236
 
Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounte...
Simultaneous Localization and Mappingfor Pedestrians using only Foot-Mounte...Simultaneous Localization and Mappingfor Pedestrians using only Foot-Mounte...
Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounte...guest5fe3bb
 
Inertial navigaton systems11
Inertial navigaton systems11Inertial navigaton systems11
Inertial navigaton systems11Vikas Kumar Sinha
 

Viewers also liked (6)

Arindam batabyal literature reviewpresentation
Arindam batabyal literature reviewpresentationArindam batabyal literature reviewpresentation
Arindam batabyal literature reviewpresentation
 
Tecnologías Aumentaty RV
Tecnologías Aumentaty RVTecnologías Aumentaty RV
Tecnologías Aumentaty RV
 
Mapping mobile robotics
Mapping mobile roboticsMapping mobile robotics
Mapping mobile robotics
 
WiSlam presentation
WiSlam presentationWiSlam presentation
WiSlam presentation
 
Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounte...
Simultaneous Localization and Mappingfor Pedestrians using only Foot-Mounte...Simultaneous Localization and Mappingfor Pedestrians using only Foot-Mounte...
Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounte...
 
Inertial navigaton systems11
Inertial navigaton systems11Inertial navigaton systems11
Inertial navigaton systems11
 

Similar to Autonomous Indoor Mapping Using The Microsoft Kinect Sensor

Autonomous Vehicle and Augmented Reality Usage
Autonomous Vehicle and Augmented Reality UsageAutonomous Vehicle and Augmented Reality Usage
Autonomous Vehicle and Augmented Reality UsageDr. Amarjeet Singh
 
A Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiA Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiAngela Shin
 
Camouflage Color Changing Robot For Military Purpose
Camouflage Color Changing Robot For Military PurposeCamouflage Color Changing Robot For Military Purpose
Camouflage Color Changing Robot For Military PurposeHitesh Shinde
 
Camouflage color changing robot for miltary purpose
Camouflage color changing robot for miltary purposeCamouflage color changing robot for miltary purpose
Camouflage color changing robot for miltary purposeAtharvaPathak13
 
IRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for BlindIRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for BlindIRJET Journal
 
Design and development of DrawBot using image processing
Design and development of DrawBot using image processing Design and development of DrawBot using image processing
Design and development of DrawBot using image processing IJECEIAES
 
4th ARM Developer Day Presentation
4th ARM Developer Day Presentation4th ARM Developer Day Presentation
4th ARM Developer Day PresentationAntonio Mondragon
 
Figure 1
Figure 1Figure 1
Figure 1butest
 
Figure 1
Figure 1Figure 1
Figure 1butest
 
Figure 1
Figure 1Figure 1
Figure 1butest
 
Intelligent Embedded Systems (Robotics)
Intelligent Embedded Systems (Robotics)Intelligent Embedded Systems (Robotics)
Intelligent Embedded Systems (Robotics)Adeyemi Fowe
 
IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...
IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...
IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...IRJET Journal
 
Nirav joshi mechanical engineer - portfolio
Nirav joshi   mechanical engineer - portfolioNirav joshi   mechanical engineer - portfolio
Nirav joshi mechanical engineer - portfolioNirav Joshi
 
13 9246 it implementation of cloud connected (edit ari)
13 9246 it implementation of cloud connected (edit ari)13 9246 it implementation of cloud connected (edit ari)
13 9246 it implementation of cloud connected (edit ari)IAESIJEECS
 
Quantify Measure App Project concept presentation
Quantify Measure App Project concept presentationQuantify Measure App Project concept presentation
Quantify Measure App Project concept presentationAsheeshK
 
Can body write an essay for me on dream job in Computer Engineerin.pdf
Can body write an essay for me on dream job in Computer Engineerin.pdfCan body write an essay for me on dream job in Computer Engineerin.pdf
Can body write an essay for me on dream job in Computer Engineerin.pdfmanjan6
 
fyp presentation of group 43011 final.pptx
fyp presentation of group 43011 final.pptxfyp presentation of group 43011 final.pptx
fyp presentation of group 43011 final.pptxIIEE - NEDUET
 
KAMESHPRABU M_Resume
KAMESHPRABU M_ResumeKAMESHPRABU M_Resume
KAMESHPRABU M_ResumeKamesh Prabu
 

Similar to Autonomous Indoor Mapping Using The Microsoft Kinect Sensor (20)

Autonomous Vehicle and Augmented Reality Usage
Autonomous Vehicle and Augmented Reality UsageAutonomous Vehicle and Augmented Reality Usage
Autonomous Vehicle and Augmented Reality Usage
 
A Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry PiA Review On AI Vision Robotic Arm Using Raspberry Pi
A Review On AI Vision Robotic Arm Using Raspberry Pi
 
Camouflage Color Changing Robot For Military Purpose
Camouflage Color Changing Robot For Military PurposeCamouflage Color Changing Robot For Military Purpose
Camouflage Color Changing Robot For Military Purpose
 
Camouflage color changing robot for miltary purpose
Camouflage color changing robot for miltary purposeCamouflage color changing robot for miltary purpose
Camouflage color changing robot for miltary purpose
 
IRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for BlindIRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for Blind
 
Design and development of DrawBot using image processing
Design and development of DrawBot using image processing Design and development of DrawBot using image processing
Design and development of DrawBot using image processing
 
4th ARM Developer Day Presentation
4th ARM Developer Day Presentation4th ARM Developer Day Presentation
4th ARM Developer Day Presentation
 
Figure 1
Figure 1Figure 1
Figure 1
 
Figure 1
Figure 1Figure 1
Figure 1
 
Figure 1
Figure 1Figure 1
Figure 1
 
Intelligent Embedded Systems (Robotics)
Intelligent Embedded Systems (Robotics)Intelligent Embedded Systems (Robotics)
Intelligent Embedded Systems (Robotics)
 
IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...
IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...
IRJET- Educatar: Dissemination of Conceptualized Information using Augmented ...
 
robocity2013-jderobot
robocity2013-jderobotrobocity2013-jderobot
robocity2013-jderobot
 
Nirav joshi mechanical engineer - portfolio
Nirav joshi   mechanical engineer - portfolioNirav joshi   mechanical engineer - portfolio
Nirav joshi mechanical engineer - portfolio
 
13 9246 it implementation of cloud connected (edit ari)
13 9246 it implementation of cloud connected (edit ari)13 9246 it implementation of cloud connected (edit ari)
13 9246 it implementation of cloud connected (edit ari)
 
Quantify Measure App Project concept presentation
Quantify Measure App Project concept presentationQuantify Measure App Project concept presentation
Quantify Measure App Project concept presentation
 
Can body write an essay for me on dream job in Computer Engineerin.pdf
Can body write an essay for me on dream job in Computer Engineerin.pdfCan body write an essay for me on dream job in Computer Engineerin.pdf
Can body write an essay for me on dream job in Computer Engineerin.pdf
 
Portfolio
PortfolioPortfolio
Portfolio
 
fyp presentation of group 43011 final.pptx
fyp presentation of group 43011 final.pptxfyp presentation of group 43011 final.pptx
fyp presentation of group 43011 final.pptx
 
KAMESHPRABU M_Resume
KAMESHPRABU M_ResumeKAMESHPRABU M_Resume
KAMESHPRABU M_Resume
 

Recently uploaded

Python Programming for basic beginners.pptx
Python Programming for basic beginners.pptxPython Programming for basic beginners.pptx
Python Programming for basic beginners.pptxmohitesoham12
 
Cost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based questionCost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based questionSneha Padhiar
 
signals in triangulation .. ...Surveying
signals in triangulation .. ...Surveyingsignals in triangulation .. ...Surveying
signals in triangulation .. ...Surveyingsapna80328
 
Research Methodology for Engineering pdf
Research Methodology for Engineering pdfResearch Methodology for Engineering pdf
Research Methodology for Engineering pdfCaalaaAbdulkerim
 
KCD Costa Rica 2024 - Nephio para parvulitos
KCD Costa Rica 2024 - Nephio para parvulitosKCD Costa Rica 2024 - Nephio para parvulitos
KCD Costa Rica 2024 - Nephio para parvulitosVictor Morales
 
Paper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdf
Paper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdfPaper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdf
Paper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdfNainaShrivastava14
 
Levelling - Rise and fall - Height of instrument method
Levelling - Rise and fall - Height of instrument methodLevelling - Rise and fall - Height of instrument method
Levelling - Rise and fall - Height of instrument methodManicka Mamallan Andavar
 
OOP concepts -in-Python programming language
OOP concepts -in-Python programming languageOOP concepts -in-Python programming language
OOP concepts -in-Python programming languageSmritiSharma901052
 
『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书
『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书
『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书rnrncn29
 
Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...
Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...
Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...Sumanth A
 
Ch10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdfCh10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdfChristianCDAM
 
High Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMS
High Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMSHigh Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMS
High Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMSsandhya757531
 
DEVICE DRIVERS AND INTERRUPTS SERVICE MECHANISM.pdf
DEVICE DRIVERS AND INTERRUPTS  SERVICE MECHANISM.pdfDEVICE DRIVERS AND INTERRUPTS  SERVICE MECHANISM.pdf
DEVICE DRIVERS AND INTERRUPTS SERVICE MECHANISM.pdfAkritiPradhan2
 
Immutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdfImmutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdfDrew Moseley
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catcherssdickerson1
 
Robotics Group 10 (Control Schemes) cse.pdf
Robotics Group 10  (Control Schemes) cse.pdfRobotics Group 10  (Control Schemes) cse.pdf
Robotics Group 10 (Control Schemes) cse.pdfsahilsajad201
 
SOFTWARE ESTIMATION COCOMO AND FP CALCULATION
SOFTWARE ESTIMATION COCOMO AND FP CALCULATIONSOFTWARE ESTIMATION COCOMO AND FP CALCULATION
SOFTWARE ESTIMATION COCOMO AND FP CALCULATIONSneha Padhiar
 
Prach: A Feature-Rich Platform Empowering the Autism Community
Prach: A Feature-Rich Platform Empowering the Autism CommunityPrach: A Feature-Rich Platform Empowering the Autism Community
Prach: A Feature-Rich Platform Empowering the Autism Communityprachaibot
 
ROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.ppt
ROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.pptROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.ppt
ROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.pptJohnWilliam111370
 

Recently uploaded (20)

Python Programming for basic beginners.pptx
Python Programming for basic beginners.pptxPython Programming for basic beginners.pptx
Python Programming for basic beginners.pptx
 
Designing pile caps according to ACI 318-19.pptx
Designing pile caps according to ACI 318-19.pptxDesigning pile caps according to ACI 318-19.pptx
Designing pile caps according to ACI 318-19.pptx
 
Cost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based questionCost estimation approach: FP to COCOMO scenario based question
Cost estimation approach: FP to COCOMO scenario based question
 
signals in triangulation .. ...Surveying
signals in triangulation .. ...Surveyingsignals in triangulation .. ...Surveying
signals in triangulation .. ...Surveying
 
Research Methodology for Engineering pdf
Research Methodology for Engineering pdfResearch Methodology for Engineering pdf
Research Methodology for Engineering pdf
 
KCD Costa Rica 2024 - Nephio para parvulitos
KCD Costa Rica 2024 - Nephio para parvulitosKCD Costa Rica 2024 - Nephio para parvulitos
KCD Costa Rica 2024 - Nephio para parvulitos
 
Paper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdf
Paper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdfPaper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdf
Paper Tube : Shigeru Ban projects and Case Study of Cardboard Cathedral .pdf
 
Levelling - Rise and fall - Height of instrument method
Levelling - Rise and fall - Height of instrument methodLevelling - Rise and fall - Height of instrument method
Levelling - Rise and fall - Height of instrument method
 
OOP concepts -in-Python programming language
OOP concepts -in-Python programming languageOOP concepts -in-Python programming language
OOP concepts -in-Python programming language
 
『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书
『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书
『澳洲文凭』买麦考瑞大学毕业证书成绩单办理澳洲Macquarie文凭学位证书
 
Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...
Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...
Robotics-Asimov's Laws, Mechanical Subsystems, Robot Kinematics, Robot Dynami...
 
Ch10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdfCh10-Global Supply Chain - Cadena de Suministro.pdf
Ch10-Global Supply Chain - Cadena de Suministro.pdf
 
High Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMS
High Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMSHigh Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMS
High Voltage Engineering- OVER VOLTAGES IN ELECTRICAL POWER SYSTEMS
 
DEVICE DRIVERS AND INTERRUPTS SERVICE MECHANISM.pdf
DEVICE DRIVERS AND INTERRUPTS  SERVICE MECHANISM.pdfDEVICE DRIVERS AND INTERRUPTS  SERVICE MECHANISM.pdf
DEVICE DRIVERS AND INTERRUPTS SERVICE MECHANISM.pdf
 
Immutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdfImmutable Image-Based Operating Systems - EW2024.pdf
Immutable Image-Based Operating Systems - EW2024.pdf
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
 
Robotics Group 10 (Control Schemes) cse.pdf
Robotics Group 10  (Control Schemes) cse.pdfRobotics Group 10  (Control Schemes) cse.pdf
Robotics Group 10 (Control Schemes) cse.pdf
 
SOFTWARE ESTIMATION COCOMO AND FP CALCULATION
SOFTWARE ESTIMATION COCOMO AND FP CALCULATIONSOFTWARE ESTIMATION COCOMO AND FP CALCULATION
SOFTWARE ESTIMATION COCOMO AND FP CALCULATION
 
Prach: A Feature-Rich Platform Empowering the Autism Community
Prach: A Feature-Rich Platform Empowering the Autism CommunityPrach: A Feature-Rich Platform Empowering the Autism Community
Prach: A Feature-Rich Platform Empowering the Autism Community
 
ROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.ppt
ROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.pptROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.ppt
ROBOETHICS-CCS345 ETHICS AND ARTIFICIAL INTELLIGENCE.ppt
 

Autonomous Indoor Mapping Using The Microsoft Kinect Sensor

  • 1. by James Anderson (jra798) Adam Honse (amhb59) A PROJECT REPORT Presented to the Electrical and Computer Engineering Faculty of the MISSOURI UNIVERSITY OF SCIENCE AND TECHNOLOGY In Partial Fulfillment of the Requirements for COMPUTER ENGINEERING SENIOR PROJECT II May 2012 Cost: $500 Advisor Dr. Kurt Kosbar Instructor Dr.Donald Wunsch
  • 2. P a g e | 1 The project detailed in this document is an attempt to create an autonomous robot that is able to map indoor environments. This robot is based on an omni-directional platform using the Microsoft Kinect depth and color camera as its primary sensor. Methods are detailed that attempt to condense three dimensional point clouds generated by the Kinect down to meshes with color images applied as textures. The robot is able to au- tonomously determine the best areas to map and is able to gather a complete two dimensional model of any indoor environment.
  • 3. P a g e | 2 Executive Summary ..................................................................................................................................................1 0.0 - Project Team.....................................................................................................................................................3 0.1 - James Anderson........................................................................................................................................3 0.2 - Adam Honse .............................................................................................................................................3 1.0 - Introduction......................................................................................................................................................4 2.0 - Project Objectives.............................................................................................................................................6 2.1 - Objective 1 - The Platform........................................................................................................................6 2.2 - Objective 2 - Mapping Software...............................................................................................................6 2.3 - Objective 3 - Navigation Software............................................................................................................6 3.0 - Project Specifications .......................................................................................................................................6 3.1 - Objective 1 - The Platform........................................................................................................................6 3.2 - Objective 2 - Mapping Software...............................................................................................................7 3.3 - Objective 3 - Mapping Software...............................................................................................................8 4.0 - Design...............................................................................................................................................................9 4.1 - Platform Structural and Mechanical ........................................................................................................9 4.2 - Electrical ................................................................................................................................................ 10 4.3 - Software ................................................................................................................................................ 13 5.0 - Experimental Results..................................................................................................................................... 20 6.0 - Timeline and Deliverables ............................................................................................................................. 22 6.1 - Original Schedule................................................................................................................................... 22 6.2 - Final Schedule........................................................................................................................................ 22 7.0 - Budget ........................................................................................................................................................... 
24 Appendix A - Itemized Budget............................................................................................................................... 25 Electronics ..................................................................................................................................................... 25 Structural....................................................................................................................................................... 26 Appendix B - Interface Commands........................................................................................................................ 27 Appendix C - Works Cited...................................................................................................................................... 28
  • 4. P a g e | 3 The project team consists of two undergraduates in Computer Engineering at the Missouri University of Science and Technology.  James Anderson – High Level Programming & Mechanical Construction  Adam Honse – Low Level Programming & Circuit Construction Bios of the team members along with the descriptions of their roles can be found below. Figure 1: Team Members (from left to right) Adam Honse and James Anderson James Anderson is an undergraduate senior in Computer Engineering at the Missouri University of Sci- ence and Technology. He has been an active member of the university’s Robotics Competition Team for the past three years and has held the elected positions of President, Treasurer and Computing Division Lead. He has played a large part in the construction and programming of the team’s autonomous robot which competes in the Intelligent Ground Vehicle Competition each year. Through his participation on the team and course work he has gained experience in the programming and construction of autonomous robots. Adam Honse is an undergraduate senior of Computer Engineering at Missouri University of Science and Technology. He has been a member of the Missouri S&T Robotics Competition Team for four years, taking the position of Electrical Division Lead in 2011. He also has experience with embedded and microcontroller appli- cations and has posted many personal projects on the Internet. He has been interested in robotics since high school, participating in several LEGO and Vex competitions before joining the S&T team. He has experience in electronic circuit design, circuit board design, and PC/microcontroller communication and integration.
  • 5. P a g e | 4 The team’s goal for this project was to design, build and program a robot with the ability to autono- mously map indoor environments. The mapping of indoor environments has many useful applications from security to mining to rescue and has the potential to offer detailed views of remote environments. Recent ad- vancements in processing and sensor technologies have the potential to greatly increase the speed and detail of mapping. The teams hope was to create an open source robot that could be used as a test platform for fu- ture development of two dimensional and three dimensional mapping. Figure 2: Microsoft Kinect Sensor In the previous year a new sensor has been made available on the general market named the Microsoft Kinect. The Kinect is a relatively inexpensive RGB-D (Red, Green, Blue, Depth) sensor that captures both a color and depth image of objects in its frame. A picture of a Microsoft Kinect may be seen in Figure 2. The Kinect of- fers a detailed view of its environment making it an ideal sensor for indoor mapping and is the primary sensor on the team’s robot. The low cost of this sensor has spawned a new breed of robots classified as hobby robots. These robots are de- signed to be cheap and easy to build, making good platforms for creating and testing new robotics software. The forerun- ner of this class of robots is the TurtleBot which can be seen in Figure 3. The TurtleBot is open source and was designed by Willow Garage[9]. The robot makes an ideal platform for all levels of robotics and is designed to be very inexpensive at just under $950. TurtleBot is completely open source and its software is based in ROS (Robot Operating System). ROS provides a common software architecture for robots that allows for code that can be run on multiple hardware platforms. ROS gives programmers a robust framework and a large number of tools to aid in robotics development. Figure 3: TurtleBot
  • 6. P a g e | 5 While the TurtleBot platform is very well designed our team wanted to try to improve it. One of the limiting factors of the TurtleBot is its base. The TurtleBot hardware is mounted on top of the Roomba Create, a tank drive platform adapted from a vacuum cleaner. While the Create is a good mass production solution it has the limitations of a tank drive platform and is somewhat expensive. By combining the Microsoft Kinect with an omni- directional base, the team has created TurtleBot 360, a ro- bust holonomic drive alternative to the TurtleBot. The team was able to reduce the cost of this type of platform while adding an additional degree of freedom to its motion. The robot is able to use much of the same software as the Tur- tleBot and can autonomously map indoor environments. In addition to two dimensional mapping the team made several advancements in the field of three dimensional mapping. The Kinect sensor offers the unique ability to gath- er depth information as well as color. While some advance- ments have been made in three dimensional mapping, they usually store more data than is needed causing a lot of extra processing to store and display maps. The team has developed and partially implemented a technique that at- tempts to compress incoming data and then correlate it with previous data. While it is still being implemented, if successful the software will greatly reduce the amount of resources needed to create three di- mensional models of environments. TurtleBot 360 provides a cheaper alterna- tive to the TurtleBot while adding mobility. The robot is able to use all the same software plus some of its own. The team’s three dimensional mapping research has also yielded some promis- ing results. Someday the team hopes that Turtle- Bot 360 will be able to surpass its predecessor in both mobility and capabilities. Figure 5: TurtleBot 360 User Interface Figure 4: Completed TurtleBot 360
  • 7. P a g e | 6 Our project was broken down into three objectives that could be approached independently. Build an omni-directional robot platform with computer control and a ROS interface. This platform fea- tures a triangular, 3 omni-wheel drive train and can move in any direction as well as rotate about its center. The platform has a laptop on-board to run navigation software. An on-board sealed lead-acid battery provides power for the motor controllers and Kinect sensor. Design a software platform that can build maps of an indoor environment. Two software platforms are used, one produces two dimensional maps and the other produces a three-dimensional mesh of the environ- ment. The software operates in real-time and the map data improves as more data is gathered. The data is made available to navigation software immediately. Design a software platform that can autonomously navigate the robot through an indoor environment and avoid obstacles given the Kinect camera data and any existing map data. The software must analyze the given input data to determine obstacles, and then it must plan paths around obstacles while maintaining a general direction. Progress on the robot started in late November 2011 and continued through April 2012. Below is a de- tailed plan of the milestones and tasks taken to complete the project. The work has been divided into 3 mile- stones with several tasks which were divided up between the two group members. 1. Design platform – The final platform design consists of an equilateral triangular frame with an omni- wheel mounted at the center of each side. Powering each omni-wheel is a stepper motor. At the cor- ners of the triangle, threaded rods support three more triangular frames stacked above the base. The- se shelves are used to hold robot components including the laptop and Kinect sensor. The platform deviated significantly from our original design, which was to cut flat, circular shelves from Lexan plastic and mount the wheels in a triangular pattern in the bottom circle. We decided to change to a triangular platform because it was easier to build while still maintaining the precision necessary for proper motor and wheel alignment. To drive the stepper motors, a stepper motor controller board was designed that interfaces via serial and I2 C. Each board powers one stepper motor. Our original design was to create one central control board, but decided instead to use three identical boards for modularity and to reduce cost. By doing this, we were able to use a cheaper and smaller microcontroller to design one board for each motor.
  • 8. P a g e | 7 By using the I2 C bus, all three boards can be controlled with the PC connected via serial to any one of them. A fourth board was designed for power regulation, as the motor control boards require both 5 and 12 volt inputs. The stepper controller board was designed using EAGLE software but the regulator board was built on prototyping board due to time constraints. 2. Build platform – To build the mechanical platform, we used 2x4 boards keeping the number of cuts to a minimum. The platform was made open source and the plans are being placed online. The build was fairly easy and can be completed in a day or two of work. On the electronics side, we considered having our motor control boards professionally made, but de- cided it would be cheaper and faster to make them ourselves. To do so, we used the laser printer ton- er transfer method. This produced clean boards with few errors. 3. Write microcontroller and PC interface code – The microcontroller code is written in C and compiled with the WinAVR GCC compiler. The code manages the serial and I2 C interfaces, maintains motor pa- rameters, and drives the motor output MOSFETs. The board’s serial interface is a bridge to the I2 C in- terface, and all motor control messages are sent as I2 C messages. By doing this, the same protocol is used to communicate with the board that the serial cable is connected to as well as all other boards connected on the I2 C bus. On the PC, a ROS node has been created that converts direction vectors into appropriate motor controller commands, then sends them over the serial port. 1. Import Kinect color image and depth map – This step involves setting up the ROS environment with the appropriate drivers and software to retrieve information from the Kinect sensor over USB. The ROS software is capable of providing a color RGB image, a grayscale depth image, a three dimensional RGB point cloud, and various other pieces of information such as accelerometer data. 2. Identify planes and convert to a mesh – This step uses slopes taken from the depth image to segment the field of view into multiple planes. By looking at the plane boarders and intersections it is possible to find vertices that will define the surfaces in the image. These vertices are compiled into COLLADA mesh files for display and linked with supplementary data needed for mapping. The color image taken by the camera is then linked and applied to the mesh three dimensional color model as the output. The local mapping changed significantly from the original design. During software development it be- came apparent that the original method of finding vertices would be to inaccurate to produce decent results. To correct this, the new software first looks for planes then parses those into their vertices. 3. Identify points in global model and combine with local mesh– By looking at fixed points in the world model the software is able to correct position to match the input. After correcting robot position the software looks to combine points with obstructed points from previous frames. The software continu- ously updates and outputs a global map of the world.
  • 9. P a g e | 8 1. Identify obstacles – The navigation software must be able to detect obstacles in the map that would block the robot’s desired path. The software uses the ROS point cloud to laser module to do this. The module takes the rectified coordinates of obstacles and creates a simulated laser scan giving the dis- tance to the nearest obstacle across a range of angles. The obstacle identification changed significantly from the original design which first looked for vertical planes then placed them in the world. The software being used currently is open source code that has been tested and provides fast and accurate results. 2. Identify mapped and unmapped areas – The software is currently using the ROS gmapping stack which simultaneously corrects position and creates a map of the environment. The gmapping stack uses opti- cal flow to look for landmarks that carry through frames and then uses those to correct position. Once the software has corrected position it places obstacle data in the global map. The software outputs an overhead map of the environment and can give information on areas that have not yet been explored. This portion of the software has changed significantly from the original design. The original software was going to attempt custom methods that would create a probability map of obstacles in the world and allow the robot to re-explore areas. Due to time constraints from problems in other areas of the software it was decided to use the third party software. 3. Solve for robot's movement – The software uses the ROS navigation stack again to solve for movement. The navigation stack takes in the laser scan message as well as the map built by the gmapping stack. The stack has the ability to receive a list of waypoints and can also autonomously map unknown envi- ronments. The stack outputs velocities for the linear strafe and rotation of the robot to the hardware interface. It was planned to write a custom module to compute navigation using potential fields. However due to time constraints it was decided to change to the navigation stack. The stack is a tested piece of code that works on many other platforms and provides reliable navigation.
  • 10. P a g e | 9 The platform’s overall structure consists of a triangular wood- en base with three triangular adjustable shelves above it. These shelves are supported by three 3/8” threaded rods, one in each cor- ner of the triangle, and may be adjusted vertically for flexibility. The- se shelves hold the robot’s main computer, the Kinect sensor, and any additional accessories or components that may later be attached to the robot. The base provides motor and shaft mounting hardware in addition to three corner plates for mounting the motor controller boards and a center support for holding the robot’s sealed lead acid battery while maintaining even weight distribution among the three wheels. Three end plates are attached around the outside of the base which secure the outer end of the wheel shafts and provide addition- al support to the shafts, evening the load on the wheels and motor shafts. The mechanical design is based around the Killough drive train, also known as the kiwi drive. The drive train uses three omni-wheels mounted 120º apart in a triangular arrangement. The wheels con- trol motion in the direction of rotation but do not hinder movement along the axis. This design is ho- lonomic, meaning that it allows for independent forward, strafe, and rotational motion. When com- bined with tilt on the Kinect sensor this allows the Kinect to view the world at any angle while still being able to fully control motion on the ground plane. Figure 7: Robot Base Figure 6: TurtleBot 360 Frame
  • 11. P a g e | 10 The electrical system on the robot is centered around three custom motor control boards. These boards require +5 and +12 Volt power supplies with a common ground. The +5V supply powers the microcon- trollers that drive the stepper motor timings and communicate to the PC. The +12V supply powers the MOSFET’s that supply the stepper motor coils with power. In addition, the three boards are linked together via an Inter-Integrated Circuit (I2 C) bus which allows them to communicate. Finally, any one of the three boards must be connected via a serial link to the PC to receive commands for the motor controller network. To pro- vide this distribution, we constructed a small power distribution board that takes in +12V from the SLA battery, regulates it to +5V for the logic systems, and branches off power to the three motor boards. It also provides pull-up resistors for the I2 C bus and a MAX232 serial level shifter to convert the PC’s RS-232 level serial signals into the 0-5V TTL signals that the microcontroller needs. Finally, a 12V, 30A switch with LED indicator was in- stalled between the battery and the distribution board as a means to shut off the robot without having to dis- connect the battery connections. Figure 8: Electrical Wiring Diagram As mentioned in the previous section, custom motor control boards were developed for this project. Our reasoning for this is that the majority of low-cost stepper motor controllers available are either too expen- sive or do not provide enough current for our large NEMA 23 3A motors. The design is relatively simple as we chose 6-wire motors which can be driven in either unipolar or bipolar configurations. We chose a unipolar drive design as it is much easier to construct and is also cheaper to build. At the core of the drive circuit are four power N-channel MOSFETs. The MOSFETs we ended up using were chosen because they can be driven with a logic level signal and we were able to get 4 of them as free samples. They are plenty powerful for our application, having an 80A continuous maximum rating while we are only using around 3-5A. To control this system, we chose an ATTiny2313 20-pin DIP microcontroller. This was chosen as we are familiar with the AVR series of microcontrollers, already have the software and programming hardware, and the ATTiny2313 is a rel- atively cheap but still capable solution. In addition to the MOSFETs and the microcontroller, we needed four diodes to absorb the high voltages generated in the motor windings when pulsed at high frequencies. We chose 1N4004 diodes, though these are likely not the most optimal choice for this purpose as they don’t work as well at high frequencies as other available diodes. Finally, we added power indicator LED’s, motor phase
  • 12. P a g e | 11 LED’s, a microcontroller data transfer LED, and a motor status and direction RGB LED. For our communication interface, we chose to use an Inter-Integrated Circuit (I2 C) bus to link the three boards. I2 C was chosen as it works well for communication between several devices using only two wires, and the ATTiny2313 has a Univer- sal Serial Interface (USI) module that is capable of performing parts of the I2 C protocol in hardware. While I2 C works well for communication between the boards, the computer is not easily able to use it, so we also decid- ed to use the ATTiny2313’s USART to provide a serial port. The serial port accepts a different command set that can be used to set and read the device’s stored I2 C address as well as push I2 C command messages onto the I2 C bus. By doing this, we could build three identical boards, all running the same software, and control all three by plugging the PC into any one of the boards and sending addressed commands to it. The only differ- ence between the boards is the stored I2 C address, which is saved in the microcontroller’s internal EEPROM non-volatile memory and may be changed using the serial port. For the communication protocol, a simple command-parameter set was created. All messages are three bytes – one command byte and up to two data bytes which allows for 16-bit data values in a single command. For the serial I2 C send command, one of the data bytes represents a number of additional I2 C message bytes that the PC will send. A full command refer- ence is available in Appendix B. The devices also support the I2 C global address of 0x00, which allows one command to be processed by all of the connected boards. Figure 9: Motor Control Board Schematic To create the boards, we used EAGLE CAD 6.1 to first create a schematic diagram of our design. This schematic was built after prototyping the design on a breadboard and running several tests. After we settled on a design, the schematic was finalized and we began creating a board layout based on it. Our goal was to make the board single-layered, which means that it would be easy to manufacture at home using the laser printer toner transfer technique. After routing the board by hand and cleaning up some organizational issues, the board fit well on a 3”x5” sheet of copper-clad PCB. The first board was made using a PCB scrap and the design barely fit, but after cleaning the board up and soldering all the parts it worked well and provided a more reliable platform to develop code on. Once we were sure that the design worked as we wanted, we ordered parts to build three of them and assembled all three with no major problems. To assemble the boards, we first
  • 13. P a g e | 12 printed out a black-and-white image of the board’s pads and bottom layer on a laser printer. We used semi- gloss paper that does not adhere well to toner, and then ironed the toner from the paper onto the copper-clad PCB. The paper was then peeled off after soaking in water, leaving the toner cleanly adhered to the board. The toner provides an etchant resisting surface allowing the etchant solution (ferric chloride) to dissolve the exposed copper while leaving the toner-covered traces. The toner was then cleaned off and the holes were drilled with a drill press. After cleaning up the copper, the parts were soldered in and then the boards were programmed. Figure 10: Motor Control Board PCB Layout All embedded software and hardware design files are available under the Open Source GPL v3 license. The software may be found in the team’s GitHub repository at https://github.com/CalcProgrammer1/Stepper- Motor-Controller[11].
  • 14. P a g e | 13 TurtleBot 360’s software was programmed in C++ and designed around ROS (Robot Operating System). ROS provides a dynamic and robust transport layer for the robot. The sys- tem allows code modules to be linked at runtime making it easy to edit or replace a single module without the user being required to comprehend the program as a whole. The software design is very simi- lar to that of the original TurtleBot and utilizes much of the same software. Fig- ure 11 shows a simplified overview of the current software architecture run- ning on TurtleBot 360. The software stack is de- signed to map the current input from the Kinect and to allow for platform control from either a user with a Wiimote or the guidance software. All software is available under the Open Source GPL v3 license. The software may be found in the team’s GitHub repository at https://github.com/Mr-Anderson/turtlebot_360 [1]. The software stack starts with the openni_kinect node obtained from the ROS repository [2]. This node is a wrapper for the OpenNI driver which interfaces with the Prime Sense chip on the Kinect. The node publish- es a color image, depth image, and point cloud obtained from the Kinect. The pointcloud is composed of an ordered list of depth points along with their coordinates in real space for objects in the camera’s field of view. A custom robot descriptor was written to describe the dimensions of the robot. Using the robot and joint state publishers, the positions of the various frames are published. The Kinect’s sen- sor data is published on its own frame, allowing the pointcloud data to be transformed into the robot’s reference frame. Figure 12 shows the point cloud output of the Kinect in relation to the robot. Figure 11: Software Architecture Kinectpointcloud_to_laserscan gmapping Wiimote Outside World explore mesh_mapper move_base turtlebot_360_control turtlebot_360_hardware_interface Robot Motors User Figure 12: Point Cloud Display
  • 15. P a g e | 14 The pointcloud_to_laserscan node was obtained from the ROS repository [2]. The node converts the three dimensional pointcloud data into a two dimensional laserscan message. The laserscan gives the distance to the nearest object for every angle within the robot’s field of view. This data simulates the data given by a laser rang- ing unit and can be used by several existing pieces of software. The output given by the node may be seen in Figure 13. The red stripe in the model represents the distance to the closest obstacle in a given direction. The blue markings on the ground give a two dimensional representation of the ob- stacles around the robot. The gmapping node is the ROS implementation of the GMapping [3] library. Gmapping is a two dimensional SLAM (Simulations Location and Mapping) algorithm. The software may be found in the ROS repository [2]. The map- ping algorithm uses the data from the laser scan message and the robot’s odometry to compose a two dimensional map of the environment. GMapping uses a particle filter based method to solve for the robot’s movement and build a two dimensional map. The maps created by the node are sent to the map server where they can be queried by navi- gation software or viewed by the user. Figure 14 shows a small sample of the mapping. Light grey represents areas that are known to be clear, black represents known obsta- cles, and dark grey represents unknown areas. Figure 14: gmapping Map [4] Figure 13: pointcloud_to_laserscan Output
  • 16. P a g e | 15 The mesh_mapper node is an experimental piece of software that attempts to do three dimensional mapping from depth image and point cloud data. The module is currently only partially implemented; however many techniques have been learned during its development. Two methods were attempted during develop- ment and are described below.  Vertex Based Method The vertex based method of surface reconstruction is centered on finding the vertices of all objects in the field of view. The plan for this method was to then link the vertices together based on edges between them. This method is partially implemented but was aban- doned early in the project. A flowchart may be found in Figure 15 and the description of the method may be found below. The method started by applying the X and Y Sobel operator to the depth image twice. The Sobel operator was used to obtain a smoothed slope map of the entire field of view. Applying the Sobel operator twice gives an approximation of the change in slope or second de- rivative across the entire field of view. The method then scans the second derivative images to look for pixels that have high values in both the X and Y images. These points are considered to be corners of objects since there are large changes in slope in both the X and Y directions and they are marked as vertices. Once vertices are identified, the method looks for links between them. To do this the software adds the values of both the X and Y second derivative images along the line of pixels between two vertices. If the sum of the line divided by its length is larger than a threshold determined by the user the points are linked to each other. Checking for large values in either the X or Y directions assures the existence of a strong edge between the two vertices. While linking the vertices, the software tracks statistics about their position and slope. After linking is complete, the software looks for edges with similar slopes that could be considered to be the edges of the same plane. Edge matches are done by looking for similar slope in the X-Y, Y-Z, or X-Z direction then a threshold set by the user is used to define matches. Once the segmentation of planes is complete, the software attempts to parse the data into a mesh file. This is done by creating triangles to represent each plane, linking vertices of one side to those of the other. Sobel Operator Vertex Recognition Vertex Linking Input Depth and ColorImages Plain Recognintion Mesh Encoding Local Mesh Figure 15: Vertex Based Method
  • 17. P a g e | 16 The three dimensional location of the vertices is used to store all vertices into a standard COLLADA [4] for- mat. The RGB image from the camera can then be added as a texture to give color to the model. Early in the development, this method was abandoned. The method proved to be too volatile to find reliable vertices. All of the vertices in the frame were not captured, leading to problems with code later in the pipeline. In addition, the method does not track enough data about vertices to reliably link data into a world map. The vertex based method appears to be too complicated to reliably perform surface recon- struction.  Plane Based Method The plane based method of surface reconstruction starts by looking for planes within the image. The plan is to segment all of the flat surfaces in the field of view, then use that information to find their edges and vertices. While this method shows prom- ise it is currently only partially implemented. Figure 16 shows the flow of the plane based method. The method starts by attempting to find all the flat sur- faces in the depth image. To do this the Sobel operator is applied to the depth image. This gives two images, one with the slope in the X direction and one with the slope in the Y direction for every pixel in the depth image. These images are used to find runs of constant slope within the image. The software runs horizontally along the X image and vertically along the Y image looking at the change in Slope. Four types of conditions can mark the start or end of a run. These conditions are tracked for use in model con- struction. 1. Continuous Point – The change in slope is large but the depth value of pixels on either side is similar. 2. Discontinuous Foreground – The change is slope is large and the point is much closer in depth than its neighbor. 3. Discontinuous Background – The change is slope is large and the point is much farther in depth than its neighbor. 4. Structure Point – The change in slope is small but the accu- mulated change in slope across the image has grown larger than the defined threshold. This condition is used to define faceted surfaces that represent curves in the real world. Once all of the runs have been found in both the X and Y direction, the software attempts to weave them together into continuous surfaces. The weaving is performed by a recursive function that will link together runs into planes. The function is fed a seed horizontal run that has not already been linked into a plane. The function Sobel Operator Vertical and HorizontalRun Lengths Run Linking Input Depth and ColorImages Vertex Parsing Mesh Encoding Local Mesh ContinuousVertex Correlation DiscontinuousVertex Correlationand Stretching GlobalMesh Figure 16: Plane Based Method
  • 18. P a g e | 17 then runs along the horizontal run looking for vertical runs that it crosses and that have not been linked. The function traces down unlinked vertical runs and calls itself again for all unlinked horizontal runs that are crossed. Once all the runs crossed have been linked the function returns a plane with a list of all the runs it con- tains. While the function executes, statistics are gathered about the size, location, members, and perimeters of each plane. The currently implemented method is able to segment planes with some success. Due to time constrains brought on by the failures of previous methods, however, it is all that has been implemented of this method. The following describes the rest of the process planned for implementation. After the depth image has been segmented into planes, the planes must be defined as a list of vertices. To find the vertices of the image, the software moves along the perime- ter that is output from plane segmentation. The software tracks the slope of the points around the perimeter looking for large sudden or accumulated changes in slope. When changes occur, new vertices are defined. The vertices are checked to see if they lie on a continuous or structure point. If the point is continuous or a structure point, the same vertices are defined on its neighbor plane to maintain continuity of the mesh. Once all of the vertices for planes have been found, they must be segmented into triangles. To do this, the software starts with one vertex of a plane then links by alternating clockwise and counter clockwise rotation around the perimeter. This method assures that the entire perimeter vertices will be linked into triangles that represent the planes they are associated with. After all of the vertices have been linked into their polygons, the software parses the information into COL- LADA format with supplementary data. The supplementary data stores the type of point that each vertex is and the positions on the original image for use in global model creation. The RGB image is added as a texture using the original image locations of the vertices to place it on the mesh. The software continuously outputs meshes of each local frame. As new frames come in it then attempts to correlate them into a global map using the robot’s odometry data. The global model is started with the first mesh that is captured and all vertices of that mesh are placed into the global model. For successive frames, the software first looks for vertices defined as continuous vertices. Continuous vertices are often associated with the corners of objects and their positions do not change as parts of the world are covered. For each new con- tinuous point the software attempts to find one that is close to the same position in the world using the robot’s odometry. The software then attempts to correct the odometry of the robot to line up the vertices exactly with the existing model. Once vertices line up they are combined with those of the global frame. After all of the continuous vertices are linked, the software attempts to link discontinuous vertices. Since discontinuous vertices are often the result of the background being covered by the foreground or the edge of a curved surface, they will often move in the global model. The software looks for continuous vertices in the Figure 17: Depth Image from Kinect
After all of the continuous vertices are linked, the software attempts to link discontinuous vertices. Since discontinuous vertices are often the result of the background being covered by the foreground, or of the edge of a curved surface, they will often move in the global model. The software looks for continuous vertices in the global model that are close to the new vertices. The vertices in the global model are then moved to match the positions of the new local vertices. The movement of global vertices is tracked, and after there has been sufficient movement, new vertices are created as structure points. This accounts for the case of the robot moving around a curved surface and creates a faceted structure to represent it. The structure points of local frames are ignored after the first frame since their data is already represented in the model.

While the entirety of the plane based surface modeling method has not been implemented, the method shows potential. The team will continue to develop this method. The theory behind the implementation seems sound and will hopefully produce good results in the future.

The explore node uses the current world map to determine the best path for the robot. The software attempts to create waypoints that will explore an unknown environment. The explore node may be found in the ROS repository [2]. The node publishes 2D navigation goals that will move the robot to unexplored areas. The software will continue to publish new waypoints until the entire environment has been explored and all of the boundary walls have been determined.

The move_base node is in charge of local and global obstacle navigation. The software is part of the ROS navigation stack [2]. The node looks at the local obstacle map around the robot and determines the path needed to avoid nearby obstacles. The software then looks at the global map and attempts to find a path that will lead to the next waypoint given by the explore node or by a user; a minimal example of publishing such a goal is sketched below. Once a path is determined, the node publishes a message that contains the desired robot velocities. The navigation stack was tuned to take advantage of the holonomic drive system and computes forward, strafe, and rotational velocities for the robot. Figure 18 shows the user's desired waypoint as a red arrow and the robot's planned path as a green line. The blue markings on the ground represent obstacles, and the green markings represent those obstacles inflated by the robot's size. The inflation is used to determine safe areas for the robot to travel through.

The turtlebot_360_control node has final control over which velocity commands are sent to the motor control software. The node has several modes of operation that decide the behavior of the robot. The node was custom built for TurtleBot 360 and is the bridge between the software and the user.

Figure 18: Robot Movement Planning
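For reference, a minimal roscpp sketch of publishing a 2D navigation goal to move_base is shown below; this approximates what the explore node (or a user clicking a goal in RViz) does. The frame name and goal coordinates are example values, not settings taken from the project.

```cpp
// Hedged sketch: publish one 2D navigation goal to move_base over the
// move_base_simple/goal topic. Frame and pose values are illustrative.
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>
#include <tf/transform_datatypes.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "goal_sender");
    ros::NodeHandle nh;

    ros::Publisher goalPub =
        nh.advertise<geometry_msgs::PoseStamped>("move_base_simple/goal", 1, true);

    geometry_msgs::PoseStamped goal;
    goal.header.stamp = ros::Time::now();
    goal.header.frame_id = "map";                                 // goal expressed in the global map frame
    goal.pose.position.x = 2.0;                                   // 2 m forward in the map (example)
    goal.pose.position.y = 1.0;                                   // 1 m to the left (example)
    goal.pose.orientation = tf::createQuaternionMsgFromYaw(0.0);  // face along +X

    goalPub.publish(goal);           // move_base plans a path and publishes velocity commands
    ros::spinOnce();
    ros::Duration(1.0).sleep();      // give the latched message time to go out
    return 0;
}
```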
The control node starts out in standby mode and waits for a Wiimote controller to connect. Once the user connects a Wiimote, they have the ability to place the robot into either user controlled mode or autonomous mode. In autonomous mode the node passes the velocity commands published by move_base through to the hardware interface, giving it control over the robot. In user controlled mode the software interfaces with the Wiimote and computes velocity commands based on the button, joystick, and accelerometer data from the Wiimote. The node also interfaces with a text-to-speech library to provide feedback about the current state of the robot.

The software also launches a ROS tool named RViz. RViz is a visualizer that allows the user to view combined information about the robot's inputs in a three dimensional virtual environment. The output of the RViz display may be seen in Figure 19. The display is customizable at run time, so users may view any debugging information that is being published. Users are also able to subscribe to this data over a network, allowing for remote operation.

The turtlebot_360_hardware_interface node computes the velocities that are sent to the motors using the velocity commands published by the control node. The control of the kiwi drive train is defined by three linear equations for the three degrees of freedom. The three equations are summed to compute the final wheel velocities. The formulas used to compute the wheel velocities may be found below in Equation 1, where v_f, v_s, and ω are the commanded forward, strafe, and rotational velocities and R is the distance from the center of the robot to each wheel. The wheels were numbered starting at the front of the robot and rotating to the left.

v_1 = v_s + R·ω
v_2 = −(√3/2)·v_f − (1/2)·v_s + R·ω
v_3 = +(√3/2)·v_f − (1/2)·v_s + R·ω

Equation 1: Wheel velocity formulas

Once the velocities are computed, they are scaled to maintain a maximum speed and acceleration for each wheel. The scale-back value for each wheel is computed, then the largest scalar is applied to all of the wheels to maintain the ratio between velocities; a sketch of this mixing and scaling is shown below. The new wheel velocities are then transmitted to the motors through the serial port. The motor velocities are also used to compute the odometry of the robot, which is published back to the software.

Figure 19: RViz Robot Dashboard
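A short sketch of the mixing and scaling performed by the hardware interface node is given below. The wheel geometry follows Equation 1; the maximum wheel speed and the function name are assumed values for illustration, not taken from the project's code.

```cpp
// Hedged sketch of kiwi-drive mixing and speed scaling. maxWheelSpeed is assumed.
#include <algorithm>
#include <array>
#include <cmath>

// forward (m/s), strafe (m/s, positive left), omega (rad/s), R = centre-to-wheel distance (m)
std::array<double, 3> kiwiWheelSpeeds(double forward, double strafe, double omega,
                                      double R, double maxWheelSpeed = 0.5)
{
    const double s3 = std::sqrt(3.0) / 2.0;
    std::array<double, 3> v = {
        strafe                       + R * omega,   // wheel 1 (front)
        -s3 * forward - 0.5 * strafe + R * omega,   // wheel 2 (rear left)
        +s3 * forward - 0.5 * strafe + R * omega    // wheel 3 (rear right)
    };

    // Scale every wheel by the same factor so no wheel exceeds the limit and the
    // ratio between wheel velocities (and therefore the motion direction) is kept.
    double largest = std::max({std::fabs(v[0]), std::fabs(v[1]), std::fabs(v[2])});
    if (largest > maxWheelSpeed) {
        double scale = maxWheelSpeed / largest;
        for (double& w : v) w *= scale;
    }
    return v;
}
```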
Overall the team is pleased with the performance of the robot. The robot is able to successfully map two dimensional environments and can navigate autonomously. TurtleBot 360 is a successful holonomic drive platform and can move in any direction and rotate on the ground plane. The platform is sturdy and will make a good test bed for future software development.

While the platform was able to demonstrate movement in multiple axes simultaneously, the platform's wheels will suddenly lose all torque if they encounter too much resistance or try to drive too fast. This is due to the stepper motors, which have high torque at low speeds and rapidly declining torque as speed increases. In addition, once a motor stalls it has a very difficult time regaining its torque. This is because the software soft-starts the motors with a gradual acceleration curve, using the high torque at low speeds to get the robot moving. The hardware is also unaware of any motor skips or stalls, so the odometry data that is fed into the ROS navigation software loses accuracy quickly when the motors stall.

In future revisions of the platform, the motors could be replaced with higher torque stepper motors and the controllers improved to support bipolar drive, microstepping, and feedback mechanisms. Additionally, the stepper motors could be replaced with brushed or brushless DC motors with gear boxes. These motors are not as precise, but with a good encoder and a good gear box they can produce higher torque and recover from stalls more quickly. The downside to these systems is that they are much more expensive than the unipolar stepper drives, which increases the cost of the robot significantly. The required torque may also be reduced by using omni-wheels that do not have rubber rollers. The rubber rollers on the Vex wheels used in the design grip the floor and require more torque to roll sideways or diagonally than plastic rollers would. A third improvement that could be made is to use two smaller batteries in series to create a 24 Volt main power rail. The higher voltage increases the speed at which the motors are able to maintain their torque. However, it would pose more risk of the motors overheating or drawing excessive current, so the control code would need to limit the pulse width such that the current never exceeds the motor's rating (3 A in this case); a rough estimate of this limit is sketched below.

Figure 20: Experimenting
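As a back-of-the-envelope illustration of that pulse-width limit (assuming a coil resistance of about 2 Ω, which is not specified in this report, and neglecting coil inductance): the steady-state coil current at full duty is V / R_coil, so the duty cycle D would need to satisfy

D ≤ I_max · R_coil / V = (3 A × 2 Ω) / 24 V ≈ 0.25,

meaning the drive would have to keep the average on-time below roughly 25% at 24 V to stay within the 3 A motor rating.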
In addition to the hardware concerns of the robot, there were also shortcomings in the software. The robot was able to perform well when creating two dimensional maps; however, the software for three dimensional mapping was never completed. The three dimensional mapping software never behaved quite as expected. The team started implementation with a vertex based method that attempted to find the vertices of objects and then link them together. This method had many problems during operation that caused the false identification of vertices, in addition to vertices not being identified at all. The method also had conceptual problems that would have led to an inability to create a global model. The team abandoned work on the vertex based method early during development but learned a great deal from the work done on it. From working with the vertex based method of surface reconstruction the team was able to construct the plane based method.

The team believes the plane based method to be conceptually sound; however, due to setbacks in development and heavy course loads it has not been fully implemented. The portion of the method that has been implemented is able to segment the depth image of the Kinect into flat surfaces. The software still has some bugs during operation, and more development will need to be done. The team plans to continue work on the software and will attempt to implement the method described in the sections above.

While TurtleBot 360 has some shortcomings, it is able to do many things very well. The robot is able to successfully and autonomously map indoor environments in two dimensions. The software on the robot has been set up to ROS standards, opening it to a variety of additional software packages to expand its abilities. The team is very happy with the platform and will continue to use it as a reliable test bed in the future.

Figure 21: TurtleBot 360 Mapping an Indoor Environment
The robot was built and programmed from late November 2011 through April of the next year. Two schedules may be found below showing the proposed timeline and the timeline that was actually followed. Both schedules ran from November through April; the goals, tasks, and responsibilities for each are listed here.

Original Schedule

| Goal | Task | Responsibility (primary/secondary) |
|---|---|---|
| Build a computer controlled omni-directional robot | Design platform | Adam Honse / James Anderson |
| | Build platform | Adam Honse / James Anderson |
| | Write microcontroller and PC interface code | Adam Honse / James Anderson |
| Program robot to build indoor maps under remote control | Import Kinect color image and depth map | James Anderson / Adam Honse |
| | Track and compute optical flow map and solve for camera location | James Anderson / Adam Honse |
| | Identify vertices and convert to a mesh | James Anderson / Adam Honse |
| | Combine current mesh with world map | James Anderson / Adam Honse |
| Program robot to build indoor maps autonomously | Identify obstacles | James Anderson / Adam Honse |
| | Identify mapped and unmapped areas | James Anderson / Adam Honse |
| | Solve for robot's movement | James Anderson / Adam Honse |

Final Schedule

| Goal | Task | Responsibility (primary/secondary) |
|---|---|---|
| Build a computer controlled omni-directional robot | Design platform | Adam Honse / James Anderson |
| | Build platform | Adam Honse / James Anderson |
| | Write microcontroller and PC interface code | Adam Honse / James Anderson |
| Program robot to build indoor maps under remote control | Import Kinect color image and depth map | James Anderson / Adam Honse |
| | Identify planes and convert to a mesh | James Anderson / Adam Honse |
| | Identify points in global model and combine with local | James Anderson / Adam Honse |
| Program robot to build indoor maps autonomously | Identify obstacles | James Anderson / Adam Honse |
| | Identify mapped and unmapped areas | James Anderson / Adam Honse |
| | Solve for robot's movement | James Anderson / Adam Honse |
The discrepancies between the final and original project timelines come from a major shift in the software plan early in the project. When it became clear that finding vertices would not be reliable enough to accurately map environments, the team paused software development. The team settled on a new method of surface reconstruction that uses planes to find surfaces. The loss of development time also caused the team to choose a third-party software stack, part of the Robot Operating System repository [2], to handle robot navigation. These changes led to a more robust software stack, but the loss of development time did not allow for the completion of the three dimensional mesh mapping module.

Another setback occurred on the hardware side. While the major components were obtained early on (namely the wheels and motors), board development was delayed until the motor controller ICs could be obtained. The team chose to sample these parts (ST's L297 and L298 stepper motor ICs) to save on development cost and to try a part better suited than the unipolar MOSFET drive system the team had been planning to use. Testing these ICs took some time, as the team encountered problems with high heat output and with the chips' current limiting system. When the team finally settled on a schematic, a board was designed using the EAGLE CAD board development software. After writing code and testing, the team determined that this design would not be a reliable way to drive the motors and went back to the original idea of a unipolar driver using four MOSFETs. This design was then put together in EAGLE and tested successfully. The final board design was based on this MOSFET driver, which worked reliably with lower heat output from the board.
Listed below are the original and final overview budgets. The fully itemized budget may be found in Appendix A. The discrepancies between the final and planned budgets are accounted for by the changes in the hardware design. The team used larger motors and increased the cost of the frame by making it more structurally sound. The motor controller board was also redesigned as three individual boards as opposed to the one central board proposed in the original budget. The additional cost has led to a robot that is more robust and easier to build.

Table 1: Original Budget

| Item | Quantity | Unit Cost | Cost |
|---|---|---|---|
| Stepper Motors | 3 | $20.00 | $60.00 |
| Omni Wheels | 3 | | $40.00 |
| Lexan Sheets (1024 in²) | 1 | | $35.00 |
| Threaded Rod (6') | 1 | | $35.00 |
| Fasteners | 1 | | $10.00 |
| Velcro | 1 | | $5.00 |
| Battery Pack | 1 | | $30.00 |
| Microcontroller Board | 1 | | $30.00 |
| Infrared Range Sensors | 3 | $15.00 | $45.00 |
| Kinect Sensor | 1 | | $150.00 |
| Kinect Power Supply | 1 | | $10.00 |
| Kinect Mount | 1 | | $5.00 |
| Total | | | $426.00 |

Table 2: Overall Final Budget

| Item | Quantity | Unit Cost | Cost |
|---|---|---|---|
| Stepper Motor Control PCB | 3 | $20.00 | $60.00 |
| Power Regulator PCB | 1 | $5.00 | $5.00 |
| Battery | 1 | $27.00 | $27.00 |
| Microsoft Kinect for Xbox 360 Sensor | 1 | $150.00 | $150.00 |
| VEX Robotics 4 in. Large Omni Directional Wheel Kit (2-pack) | 2 | $24.99 | $49.98 |
| 21.0 kg-cm 6 Wire NEMA 23 Stepping Motor | 3 | $22.50 | $67.50 |
| Robot Frame | 1 | $138.75 | $138.75 |
| Total | | | $498.23 |

The final cost of the project came in at just under $500. This cost is well below the cost of similar platforms, including the original TurtleBot platform, which can be built for just under $950. While the platform prototype has some problems with torque, the price difference more than makes up for the installation of stronger motors in a final revision. The team believes that with the platform's added omnidirectional capability and increased height, it is a better value for hobby robotics than the TurtleBot.

Expanded budgets for the individual motor control boards as well as the robot structural components may be found in Appendix A. Note that the final robot design consists of three motor control boards, which are represented in the overall final budget in Table 2.
Single Motor Control Board

| Item | Quantity | Unit Cost | Cost |
|---|---|---|---|
| MG Chemicals Single Sided 3x5 PCB | 1 | $3.30 | $3.30 |
| CTS 20 MHz 50 pF 30 ppm Crystal | 1 | $0.35 | $0.35 |
| Fairchild Semiconductor 1N4004 Diode | 4 | $0.09 | $0.36 |
| Nichicon 470 µF 35 V Electrolytic Capacitor | 1 | $0.41 | $0.41 |
| Nichicon 1000 µF 35 V Electrolytic Capacitor | 1 | $0.50 | $0.50 |
| KOA Speer Metal Film Resistor 2.2 kΩ 1% | 1 | $0.06 | $0.06 |
| KOA Speer Metal Film Resistor 300 Ω 1% | 3 | $0.06 | $0.18 |
| KOA Speer Metal Film Resistor 100 Ω 1% | 4 | $0.06 | $0.24 |
| KOA Speer Metal Film Resistor 1 kΩ 1% | 10 | $0.06 | $0.60 |
| Cree, Inc. 5 mm Red High Brightness 2.1 V LED | 3 | $0.14 | $0.42 |
| Cree, Inc. 5 mm Blue LED | 4 | $0.19 | $0.76 |
| STMicroelectronics STP95N3LLH6 N-Channel MOSFET | 4 | $1.51 | $6.04 |
| Atmel ATtiny2313 AVR Microcontroller | 1 | $1.91 | $1.91 |
| Total | | | $15.13 |

Other Items (Not Purchased)

| Item | Quantity | Est. Cost |
|---|---|---|
| 5 mm RGB Common Cathode LED | 1 | $1.50 |
| Mini Push Button Switch | 1 | $0.30 |
| Break-away Pin Headers (Male) | 16 | $0.30 |
| Break-away Pin Headers (Female) | 6 | $0.10 |

Power Distribution Board (Items Not Purchased, Cost Approximated)

| Item | Quantity | Unit Cost | Cost |
|---|---|---|---|
| LM7805 5 Volt Regulator | 1 | $1.00 | $1.00 |
| MAX232 Serial Level Shifter | 1 | $1.00 | $1.00 |
| 1 kΩ Resistor (I²C Pull-ups) | 2 | $0.06 | $0.12 |
| 12 V 30 A Toggle Switch with LED Indicator | 1 | $3.19 | $3.19 |
| Small Prototyping PCB | 1 | $1.00 | $1.00 |
| Total | | | $6.31 |
Platform Hardware

| Item | Quantity | Unit Cost | Cost |
|---|---|---|---|
| VEX Robotics Shaft Coupler (5-pack) | 2 | $4.99 | $9.98 |
| Shaft Coupler, 1/4 in. to 1/4 in., Steel with 2 Set Screws | 3 | $2.99 | $8.97 |
| 10-32 x 1/8" Long Cup Point Socket Set Screw | 1 | $2.20 | $2.20 |
| 6 x 1 1/4" Round Head Screws (side gussets) | 12 | $0.10 | $1.20 |
| 10 x 2" Round Head Bolts (motor) | 12 | $0.14 | $1.68 |
| #10 Lock Washers (motor) | 12 | $0.09 | $1.08 |
| #10 Nuts (motor) | 12 | $0.09 | $1.08 |
| #10 Washers (motor) | 12 | $0.09 | $1.08 |
| 5/16" x 5" Bolts (bearing plate) | 6 | $0.60 | $3.60 |
| 5/16" Nuts (bearing plate) | 12 | $0.10 | $1.20 |
| 5/16" Washers (bearing plate) | 12 | $0.11 | $1.32 |
| 1/4" Bronze Bearing | 3 | $2.45 | $7.35 |
| 1/2" Bronze Bearing | 3 | $2.75 | $8.25 |
| 8 x 2" Flat Head Screws (bearing plate spacers) | 6 | $0.10 | $0.60 |
| 10 x 3" Flat Head Screws (corners) | 6 | $0.12 | $0.72 |
| 2x4x8 Studs (triangle) | 2 | $2.27 | $4.54 |
| 1x6x6 Studs (side gussets) | 1 | $5.34 | $5.34 |
| 1x4x8 Studs (shelf frames) | 3 | $1.94 | $5.82 |
| 2x3x8 Studs (bearing plate spacer) | 1 | $1.92 | $1.92 |
| 2x8 Studs (top gussets) | 1 | $4.95 | $4.95 |
| 1/2" Threaded Rod (shelf supports) | 3 | $4.29 | $12.87 |
| 1/2" Nuts, Box (shelf supports) | 1 | $8.00 | $8.00 |
| 1/2" Washers, Box (shelf supports) | 1 | $8.00 | $8.00 |
| Lexan (for shelves) | 1 | $10.00 | $10.00 |
| Kinect Mount | 1 | $10.00 | $10.00 |
| Board Mounts | 1 | $15.00 | $15.00 |
| Key Stock (axle) | 1 | $2.00 | $2.00 |
| Total | | | $138.75 |
Serial Port Command Reference

The serial port operates at 19,200 baud, 8 data bits, no parity. All commands are three bytes long, though some commands can accept additional data as specified in the command description. Any N/A value is a don't-care, but a dummy value must still be sent to fill that byte position.

| Command Byte | Data 1 | Data 2 | Description |
|---|---|---|---|
| 0x21 | Mode | N/A | Set I²C Mode – Mode 1 is master, mode 0 is slave. |
| 0x22 | Address | N/A | Set I²C Address – Address is a 7-bit address value. |
| 0x23 | N/A | N/A | Read I²C Address – The board transmits its address byte. |
| 0x24 | Length | Address | Send I²C Message – Sends a message of <Length> bytes to I²C device <Address>. This command requires <Length> additional bytes to be sent from the PC containing the I²C data to send. |
| 0x25 | Length | Address | Send I²C Read Request – Sends a read request to device <Address>, reads <Length> bytes, then transmits them back to the PC. |

I²C Interface Command Reference

All boards initialize as I²C slaves when power is applied. To use master mode, use the serial commands above to switch a board into master mode and begin transmitting messages. This table lists the I²C slave-mode commands that are used to control the motor. The serial-connected master board will also interpret each message sent to its own address. Like serial messages, all I²C slave messages are three bytes long. The I²C address 0x00 is the global address; a device will respond to this address regardless of its programmed address.

| Command Byte | Data 1 | Data 2 | Description |
|---|---|---|---|
| 0x01 | High | Low | Set motor speed – Sets the delay between motor steps. This is a 16-bit unsigned value. |
| 0x02 | High | Low | Set motor step count – Sets the number of steps in the step counter. The step counter decrements each time the motor steps. This value may be overwritten at any time, even during motor operation. This is a 16-bit unsigned value. |
| 0x03 | Direction | N/A | Set step direction – Direction is 0 or 1 and sets the direction in which the motor steps. The motor connector may be reversed to reverse the directions. |
| 0x04 | Enable | N/A | Set motor enable – Setting enable to 1 enables the motor and starts motion; setting it to 0 stops motion. If the step counter reaches zero the motor automatically disables itself. |
| 0x05 | Mode | N/A | Set stepping mode – Mode 0 selects normal (one phase at a time) stepping. Mode 1 selects 'power stepping' (two phases at a time) for increased torque. |
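The example below sketches how a PC might use these commands: it opens the serial port at the settings listed above, switches the connected board into I²C master mode (command 0x21), and then uses serial command 0x24 to forward I²C command 0x01 (set motor speed) to a motor board. The device path, the target I²C address (0x10), and the step-delay value are assumptions for illustration.

```cpp
// Hedged sketch: drive one motor board through the serial/I2C protocol above.
// Device path, I2C address, and speed value are illustrative assumptions.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   // serial adapter path (assumed)
    if (fd < 0) { perror("open"); return 1; }

    termios tty{};
    tcgetattr(fd, &tty);
    cfmakeraw(&tty);                                     // raw mode: 8 data bits, no parity
    tty.c_cflag |= CREAD | CLOCAL;
    cfsetispeed(&tty, B19200);                           // 19,200 baud as documented
    cfsetospeed(&tty, B19200);
    tcsetattr(fd, TCSANOW, &tty);

    // 0x21: set I2C mode to master (dummy byte fills the unused position).
    uint8_t setMaster[3] = { 0x21, 1, 0 };
    write(fd, setMaster, sizeof(setMaster));

    // 0x24: send a <Length>-byte I2C message to device <Address>. The payload is
    // I2C command 0x01 (set motor speed, 16-bit step delay, high byte then low).
    uint16_t stepDelay = 2000;                           // example value
    uint8_t header[3]  = { 0x24, 3, 0x10 };              // command, length, I2C address (assumed)
    uint8_t payload[3] = { 0x01,
                           static_cast<uint8_t>(stepDelay >> 8),
                           static_cast<uint8_t>(stepDelay & 0xFF) };
    write(fd, header, sizeof(header));
    write(fd, payload, sizeof(payload));

    close(fd);
    return 0;
}
```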
[1] J. Anderson and A. Honse, "TurtleBot 360 Code Repository," [Online]. Available: https://github.com/Mr-Anderson/turtlebot_360.

[2] Willow Garage, "ROS Documentation Wiki," [Online]. Available: http://www.ros.org/wiki/.

[3] F. Endres, J. Hess, N. Engelhard, J. Sturm and W. Burgard, [Online]. Available: http://openslam.org/rgbdslam.html.

[4] "COLLADA Documentation Wiki," [Online]. Available: https://collada.org/mediawiki/index.php/COLLADA_-_Digital_Asset_and_FX_Exchange_Schema.

[5] F. Endres, J. Hess and N. Engelhard, "RGB-D SLAM Documentation Page," University of Freiburg, 2011. [Online]. Available: http://www.ros.org/wiki/rgbdslam. [Accessed 28 November 2011].

[6] J. Bowman, "vision_opencv ROS Documentation," [Online]. Available: http://www.ros.org/wiki/vision_opencv. [Accessed 28 November 2011].

[7] R. B. Rusu (maintainer), "openni_kinect ROS Documentation," [Online]. Available: http://www.ros.org/wiki/openni_kinect. [Accessed 28 November 2011].

[8] Willow Garage, "TurtleBot Documentation Page," [Online]. Available: http://www.ros.org/wiki/Robots/TurtleBot.

[9] G. Grisetti, C. Stachniss and W. Burgard, "GMapping," [Online]. Available: http://openslam.org/gmapping.html.

[10] "OpenCV Documentation Wiki," [Online]. Available: http://opencv.willowgarage.com/wiki/.

[11] A. Honse, "TurtleBot 360 Stepper Motor Controller Repository," [Online]. Available: https://github.com/CalcProgrammer1/Stepper-Motor-Controller.

[12] "OpenNI Documentation," [Online]. Available: http://openni.org/Documentation/ProgrammerGuide.html.