Added citations

2018-08-08 10:34:15 +02:00
parent 13b230397a
commit 56be83152a
6 changed files with 149 additions and 79 deletions


@@ -8,9 +8,9 @@ algorithms, on which we could rely when we worked on higher-level behaviors.
During our tests, there were almost no false detections, i.e.\ foreign objects
were not detected as a ball or a goal. Sometimes the ball or the goal was
missed even when it was in the field of view, which happened due to imprecise
color calibration under changing lighting conditions. The goal detection was
one of the most difficult project milestones, so we are particularly satisfied
with the resulting performance. It is worth mentioning that with the current
algorithm, successful detection does not even require the whole goal to be in
the camera image.
@@ -19,8 +19,7 @@ the robot could successfully reach the ball, do the necessary alignments and
kick the ball. When the robot decided that it should kick the ball, in the
majority of cases the kick was successful and the ball reached the target. We
performed these tests from many starting positions and with many relative
positions of the ball and the goal.
Furthermore, we managed not only to make the whole approach robust, but also
to make the procedure fast, and the approach planning was a crucial
@@ -40,13 +39,13 @@ With our objective for this semester completed, there still remains vast room
for improvement. Some of the most interesting topics for future work will now
be presented.
The first important topic is self-localization. Currently our robot is
completely unaware of its position on the field, but if such information could
be obtained, then it could be leveraged to make path planning more effective
and precise.
Another important capability that our robot lacks for now is obstacle
awareness, which would be unacceptable in a real RoboCup soccer game. Making
the robot aware of the obstacles on the field would require implementing
obstacle detection, as well as making some changes to the path planning
algorithms, which makes this task an interesting project in its own right.
@@ -62,8 +61,8 @@ is not moving. Another constraint that we imposed on our problem is that the
ball is relatively close to the goal, and that the ball is closer to the goal
than the robot, so that the robot doesn't have to run away from the goal. To be
useful in a real game the striker should be able to handle more complex
situations. For example, a \textit{dribbling} skill could help the robot avoid
opponents and bring the ball into a convenient striking position.
Finally, we realized that the built-in movement functions in the NAOqi SDK
produce fairly slow movements, and also do not allow changing the direction of
movement


@@ -1,26 +1,26 @@
\chapter{Introduction}
RoboCup \cite{robocup} is an international competition in the field of
robotics, the ultimate goal of which is to win a game of soccer against a human
team by the middle of the 21st century. The motivation behind this objective is
the following: it is impossible to achieve such an ambitious goal with the
current state of technology, which means that the RoboCup competitions will
drive scientific and technological advancement in such areas as computer
vision, mechatronics and multi-agent cooperation in complex dynamic
environments. The RoboCup teams compete in five different leagues: Humanoid,
Standard Platform, Medium Size, Small Size and Simulation. Our work this
semester was based on the rules of the Standard Platform League. In this league
all teams use the same robot, \textit{Nao}, which is produced by SoftBank
Robotics. We will describe the capabilities of this robot in more detail in the
next chapter.
One of the most notable teams in the Standard Platform League is
\textit{B-Human} \cite{bhuman}. This team represents TU Bremen, and in the last
nine years it has won the international RoboCup competition six times and twice
finished as runner-up. The source code of the framework that B-Human uses for
programming its robots is available on GitHub, together with extensive
documentation, which makes the B-Human framework an attractive starting point
for RoboCup beginners.
\section{Our objective and motivation}
@@ -35,7 +35,7 @@ describe in close detail in the next chapter. The work on these tasks would
allow us to acquire new competences, which we could then use to complement the
RoboCup team of TUM. Finally, this objective encompasses many disciplines, such
as object detection, mechatronics or path planning, which means that working on
it might give us a chance to contribute to the research in these areas.
Having said that, we hope that our project \todo{will be good}, and this report
will help future students to get familiar with our results and continue our


@@ -2,24 +2,25 @@
The very first task that needed to be accomplished was to detect the ball,
which is uniformly red-colored and measures about 6 cm in diameter. We decided
to use a popular algorithm based on color segmentation \cite{ball-detect}. The
idea behind this algorithm is to find the biggest red area in the image and
assume that this is the ball. First, the desired color needs to be defined as
an interval of HSV (Hue-Saturation-Value) values. After that, the image itself
needs to be transformed into HSV colorspace, so that the regions of interest
can be extracted into a \textit{binary mask}. The contours of the regions can
then be identified in the mask \cite{contours}, and the areas of the regions
can be calculated using routines from the OpenCV library. The center and the
radius of the region with the largest area are then determined and are assumed
to be the center and the radius of the ball.
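As an illustration, a minimal sketch of this segmentation step with OpenCV
could look as follows. The HSV bounds and the function name are placeholders,
since the real thresholds come from our color calibration (and red may need a
second hue interval near 180):
\begin{verbatim}
import cv2
import numpy as np

# Placeholder HSV bounds for "red"; the real values come from calibration.
LOWER_RED = np.array([0, 120, 70])
UPPER_RED = np.array([10, 255, 255])

def detect_ball(image_bgr):
    """Return (center, radius) of the largest red region, or None."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    # [-2] keeps this working across OpenCV 3.x and 4.x return values.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (x, y), radius
\end{verbatim}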
It is sometimes recommended \cite{ball-detect} to eliminate the noise in the
binary mask by applying a sequence of \textit{erosions} and \textit{dilations},
but we found that for the task of finding the \textit{biggest} area the noise
does not present a problem, whereas performing erosions may completely erase
the image of the ball if it is relatively far from the robot and the camera
resolution is low. For this reason we decided not to process the binary mask
with erosions and dilations, which allowed us to detect the ball even over long
distances.
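For reference, the cleanup step that we deliberately skip is typically just a
pair of OpenCV calls applied to the mask from the sketch above:
\begin{verbatim}
# Commonly recommended noise removal, omitted in our pipeline because
# erosions can erase a distant, low-resolution ball entirely.
mask = cv2.erode(mask, None, iterations=2)
mask = cv2.dilate(mask, None, iterations=2)
\end{verbatim}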
The advantages of the presented algorithm are its speed and simplicity. The
major downside is that careful color calibration is required for the
@@ -61,20 +62,21 @@ The scoring function calculates how different the properties of the
candidates are from the properties an idealized goal contour is expected to
have. The evaluation is based on two properties. The first is the
observation that the area of the goal contour is much smaller
than the area of its \textit{enclosing convex hull} \cite{convex-hull}. The
second observation is that all points of the goal contour must lie close to the
enclosing convex hull. The mathematical formulation of the scoring function
looks like the following \todo{mathematical formulation}:
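Until the todo above is filled in, one possible formulation that is consistent
with the two stated properties, given here purely as an illustrative sketch,
would be
\begin{equation}
f(C) = \frac{A(C)}{A(\mathrm{hull}(C))}
     + \lambda \cdot \frac{1}{|C|} \sum_{p \in C} d\big(p, \mathrm{hull}(C)\big),
\end{equation}
where $A(\cdot)$ denotes area, $\mathrm{hull}(C)$ is the enclosing convex hull
of contour $C$, $d(p, \mathrm{hull}(C))$ is the distance from a contour point
to the hull boundary, and $\lambda$ is a weighting constant. A goal-like
contour makes both terms small, so the goal is the contour minimizing $f$.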
The contour that minimizes the scoring function while keeping its value under
a certain threshold is considered the goal. If no contour scores below the
threshold, then the algorithm assumes that no goal was found.
Our tests have shown that when the white color is calibrated correctly, the
algorithm can detect the goal almost without mistakes whenever the goal is
present in the image. The downside of this algorithm is that in some cases the
field lines might exhibit the same properties that the goal contour is expected
to have, and therefore the field lines can be mistaken for the goal. We will
describe how we dealt with this problem in the following section.
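To make the two properties concrete, a compact sketch of how such a score
could be computed with OpenCV, using the illustrative formulation guessed
above, might be:
\begin{verbatim}
import cv2

def goal_score(contour, lam=1.0):
    # Small area ratio and small mean distance to the enclosing convex
    # hull both indicate a goal-like contour (lower score is better).
    hull = cv2.convexHull(contour)
    area_ratio = cv2.contourArea(contour) / max(cv2.contourArea(hull), 1e-6)
    dists = [abs(cv2.pointPolygonTest(hull,
                                      (float(p[0][0]), float(p[0][1])),
                                      True))
             for p in contour]
    return area_ratio + lam * sum(dists) / len(dists)
\end{verbatim}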
\section{Field detection}


@@ -0,0 +1,68 @@
@misc{robocup,
title={{RoboCup} Federation Official Website},
howpublished={\url{http://www.robocup.org/}},
note={Accessed: 2018-08-08}
}
@misc{bhuman,
title={B-Human},
howpublished={\url{https://www.b-human.de/index.html}},
note={Accessed: 2018-08-08}
}
@misc{nao,
title={Discover {Nao}, the little humanoid robot from SoftBank Robotics},
howpublished={\url{https://www.softbankrobotics.com/emea/en/robots/nao}},
note={Accessed: 2018-08-08}
}
@misc{naoqi-sdk,
title={{NAOqi} Developer guide},
howpublished={\url{http://doc.aldebaran.com/2-1/index_dev_guide.html}},
note={Accessed: 2018-08-08}
}
@misc{opencv,
title={OpenCV library},
howpublished={\url{https://opencv.org/}},
note={Accessed: 2018-08-08}
}
@misc{numpy,
title={A guide to {NumPy}},
author={Oliphant, Travis E.},
year={2006}
}
@misc{ros,
title={{ROS.org | Powering} the world's robots},
howpublished={\url{http://www.ros.org/}},
note={Accessed: 2018-08-08}
}
@misc{ball-detect,
title={Ball Tracking with {OpenCV}},
author={Rosebrock, Adrian},
year={2015},
month={September},
howpublished={\url{https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/}},
note={Accessed: 2018-08-08}
}
@misc{contours,
title={{OpenCV}: Contours: Getting Started},
howpublished={\url{https://docs.opencv.org/3.4.1/d4/d73/tutorial_py_contours_begin.html}},
note={Accessed: 2018-08-08}
}
@misc{convex-hull,
title={Convex Hull},
howpublished={\url{https://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/hull/hull.html}},
note={Accessed: 2018-08-08}
}


@@ -55,7 +55,7 @@
% https://de.sharelatex.com/learn/Bibliography_management_with_bibtex#Bibliography_management_with_Bibtex
\addcontentsline{toc}{chapter}{Bibliography}
\bibliographystyle{IEEEtran}
\bibliography{Bibliography/Bibliography}
\end{document}


@@ -2,8 +2,8 @@
\section{Robot}
The aforementioned \textit{Nao} \cite{nao} is a small humanoid robot, around 60
cm tall. Some of its characteristics are:
\begin{itemize}
@@ -39,19 +39,20 @@ it can handle all aspects of robot control, such as reading the sensors, moving
the robot and establishing the network connection.
As a framework for the implementation of the desired behavior we chose the
official NAOqi Python SDK \cite{naoqi-sdk}. Our experience with this framework
is that it is easy to use, well documented, and covers most of the basic
functionality that we needed to start working on the project. A further
advantage of this SDK is that it uses Python as the programming language, which
allows for quick prototyping and also makes maintaining a large codebase fairly
easy.
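As a small sketch of what working with this SDK looks like (the IP address and
the movement targets are placeholders, not values from our project):
\begin{verbatim}
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder address
motion = ALProxy("ALMotion", ROBOT_IP, 9559)
posture = ALProxy("ALRobotPosture", ROBOT_IP, 9559)

posture.goToPosture("StandInit", 0.5)  # stand up at half speed
motion.moveTo(0.3, 0.0, 0.0)           # walk 0.3 m straight ahead
\end{verbatim}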
Finally, the third-party libraries that were used in the project are OpenCV and
NumPy \cite{opencv, numpy}. OpenCV is a powerful and widely used open-source
library for computer vision tasks, and NumPy is a popular Python library for
fast numerical computations. Both of these libraries, as well as the NAOqi
Python SDK, are included in the NAOqi OS distribution by default, which means
that no extra work was necessary to ensure their proper functioning on the
robot.
\section{Rejected Software Alternatives}
@@ -66,14 +67,14 @@ never really hit performance constraints that couldn't have been overcome
by refactoring our code, but in the future it might be reasonable to migrate
some portions of it to C++.
Another big alternative is ROS \cite{ros} (Robot Operating System). ROS is a
collection of software targeted at robot development, and there exists a large
ecosystem of third-party extensions for ROS, which could assist in performing
common tasks such as camera and joint calibration. ROS was an attractive
option, but it had a major downside: there was no straightforward way to run
ROS locally on the robot, so we decided not to spend time trying to figure out
how to do that. However, since Python is one of the main languages in ROS, it
should be possible to incorporate our work into ROS.
Finally, as was already mentioned in the introduction, the B-Human framework is
a popular choice for beginners, thanks to the quality of its algorithms and good