Added citations

This commit is contained in:
2018-08-08 10:34:15 +02:00
parent 13b230397a
commit 56be83152a
6 changed files with 149 additions and 79 deletions


@@ -8,9 +8,9 @@ algorithms, on which we could rely when we worked on higher-level behaviors.
During our tests, there were almost no false detections, i.e.\ foreign objects
were not detected as a ball or a goal. Sometimes the ball and the goal were
missed, even if they were in the field of view, which happened due to imprecise
color calibration under changing lighting conditions. The goal detection was
one of the most difficult project milestones, so we are particularly satisfied
with the resulting performance. It is worth mentioning that with the current
algorithm, successful detection does not even require the whole goal to be in
the camera image.
@@ -19,8 +19,7 @@ the robot could successfully reach the ball, do the necessary alignments and
kick the ball. When the robot decided that it should kick the ball, in the
majority of cases the kick was successful and the ball reached the target. We
performed these tests from many starting positions and assuming many relative
positions of the ball and the goal.

Furthermore, we managed not only to make the whole approach robust, but also
worked on making the procedure fast, and the approach planning was a crucial
@@ -40,13 +39,13 @@ With our objective for this semester completed, there still remains a vast room
for improvement. Some of the most interesting topics for future work will now
be presented.

The first important topic is self-localization. Currently our robot is
completely unaware of its position on the field, but if such information could
be obtained, it could be leveraged to make path planning more effective
and precise.

Another important capability that our robot lacks for now is obstacle
awareness, which would be unacceptable in a real RoboCup soccer game. Making
the robot aware of the obstacles on the field would require obstacle detection
to be implemented, as well as some changes to the path planning algorithms,
which makes this task an interesting project on its own.
@@ -62,8 +61,8 @@ is not moving. Another constraint that we imposed on our problem is that the
ball is relatively close to the goal, and that the ball is closer to the goal
than the robot, so that the robot doesn't have to run away from the goal. To be
useful in a real game the striker should be able to handle more complex
situations. For example, a \textit{dribbling} skill could help the robot avoid
the opponents and bring the ball into a convenient striking position.

Finally, we realized that the built-in moving functions in the NAOqi SDK produce
fairly slow movements, and also don't allow changing the direction of movement


@@ -1,26 +1,26 @@
\chapter{Introduction}

RoboCup \cite{robocup} is an international competition in the field of
robotics, the ultimate goal of which is to win a game of soccer against a human
team by the middle of the 21st century. The motivation behind this objective is
the following: it is impossible to achieve such an ambitious goal with the
current state of technology, which means that the RoboCup competitions will
drive scientific and technological advancement in such areas as computer
vision, mechatronics and multi-agent cooperation in complex dynamic
environments. The RoboCup teams compete in five different leagues: Humanoid,
Standard Platform, Medium Size, Small Size and Simulation. Our work in this
semester was based on the rules of the Standard Platform league. In this league
all teams use the same robot, \textit{Nao}, which is produced by SoftBank
Robotics. We will describe the capabilities of this robot in more detail in the
next chapter.

One of the most notable teams in the Standard Platform League is
\textit{B-Human} \cite{bhuman}. This team represents TU Bremen, and in the last
nine years they have won the international RoboCup competition six times and
twice were the runner-up. The source code of the framework that B-Human uses
for programming their robots is available on GitHub, together with extensive
documentation, which makes the B-Human framework an attractive starting point
for RoboCup beginners.
\section{Our objective and motivation}
@@ -35,7 +35,7 @@ describe in close detail in the next chapter. The work on these tasks would
allow us to acquire new competences, which we could then use to complement the
RoboCup team of TUM. Finally, this objective encompasses many disciplines, such
as object detection, mechatronics or path planning, which means that working on
it might give us a chance to contribute to the research in these areas.

Having said that, we hope that our project \todo{will be good}, and this report
will help future students to get familiar with our results and continue our


@@ -2,24 +2,25 @@
The very first task that needed to be accomplished was to detect the ball,
which is uniformly red-colored and measures about 6 cm in diameter. We decided
to use a popular algorithm based on color segmentation \cite{ball-detect}. The
idea behind this algorithm is to find the biggest red area in the image and
assume that this is the ball. First, the desired color needs to be defined as
an interval of HSV (Hue-Saturation-Value) values. After that, the image itself
needs to be transformed into the HSV colorspace, so that the regions of
interest can be extracted into a \textit{binary mask}. The contours of the
regions can then be identified in the mask \cite{contours}, and the areas of
the regions can be calculated using routines from the OpenCV library. The
center and the radius of the region with the largest area are then determined
and are assumed to be the center and the radius of the ball.
It is sometimes recommended \cite{ball-detect} to eliminate the noise in the
binary mask by applying a sequence of \textit{erosions} and \textit{dilations},
but we found that for the task of finding the \textit{biggest} area the noise
doesn't present a problem, whereas performing erosions may completely delete
the image of the ball if it is relatively far from the robot and the camera
resolution is low. For this reason it was decided not to process the binary
mask with erosions and dilations, which allowed us to detect the ball even over
long distances.
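Purely as an illustration, the pipeline described above (color interval, binary mask, biggest region, center and radius) can be sketched in plain NumPy. The names `in_range` and `largest_region` are ours, and the equal-area radius is a simplification; the project itself used the corresponding OpenCV routines (`cv2.inRange`, `cv2.findContours`) instead. Note also that in OpenCV-style HSV, red hues wrap around 0, so a real calibration may need two intervals.

```python
import numpy as np
from collections import deque

def in_range(hsv, lo, hi):
    # Binary mask of pixels whose (H, S, V) values all fall inside [lo, hi];
    # a pure-NumPy stand-in for OpenCV's cv2.inRange.
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1)

def largest_region(mask):
    # Find the biggest 4-connected True-region via BFS labeling and return
    # ((cx, cy), radius, area), where radius is that of a circle of equal
    # area. Returns None if the mask is empty.
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = None
    for y0 in range(h):
        for x0 in range(w):
            if not mask[y0, x0] or seen[y0, x0]:
                continue
            queue = deque([(y0, x0)])
            seen[y0, x0] = True
            pixels = []
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if best is None or len(pixels) > best[2]:
                ys, xs = zip(*pixels)
                center = (float(np.mean(xs)), float(np.mean(ys)))
                radius = float(np.sqrt(len(pixels) / np.pi))
                best = (center, radius, len(pixels))
    return best

# Synthetic HSV image: a 4x4 "ball" patch plus one isolated noise pixel.
hsv = np.zeros((10, 10, 3), dtype=np.uint8)
hsv[2:6, 2:6] = (5, 200, 200)   # red-ish region (OpenCV-style H in [0, 179])
hsv[8, 8] = (5, 200, 200)       # noise pixel
mask = in_range(hsv, (0, 100, 100), (10, 255, 255))
center, radius, area = largest_region(mask)
```

Skipping the erosion/dilation step, as the text argues, only changes which mask this function receives; the biggest-region criterion already suppresses isolated noise pixels such as the one in the example.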
The advantages of the presented algorithm are its speed and simplicity. The
major downside is that careful color calibration is required for the
@@ -61,20 +62,21 @@ The scoring function calculates, how different are the properties of the
candidates are from the properties an idealized goal contour is expected to
have. The evaluation is based on two properties. The first is the observation
that the area of the goal contour is much smaller than the area of its
\textit{enclosing convex hull} \cite{convex-hull}. The second is that all
points of the goal contour must lie close to the enclosing convex hull. The
mathematical formulation of the scoring function looks like the following
\todo{mathematical formulation}:
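The \todo placeholder above is deliberately left in place, since the report's own formula is not recoverable from this diff. Purely as a hedged sketch, a scoring function consistent with the two stated properties (small contour area relative to the hull, and all contour points close to the hull) could take a form such as:
\[
s(C) \;=\; \frac{A(C)}{A\bigl(\operatorname{hull}(C)\bigr)}
\;+\; \lambda \cdot \frac{1}{\lvert C \rvert} \sum_{p \in C}
\frac{d\bigl(p,\ \partial\operatorname{hull}(C)\bigr)}{\operatorname{diam}(C)}
\]
where $A(\cdot)$ denotes area, $d(p,\ \partial\operatorname{hull}(C))$ the distance from a contour point to the hull boundary, $\operatorname{diam}(C)$ a scale normalization, and $\lambda$ a weighting constant. All of these symbols are our assumptions, not the authors' notation.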
The contour that minimizes the scoring function, while keeping its value under
a certain threshold, is considered the goal. If no contour scores below the
threshold, then the algorithm assumes that no goal was found.
Our tests have shown that when the white color is calibrated correctly, the
algorithm can detect the goal almost without mistakes when the goal is present
in the image. The downside of this algorithm is that in some cases the field
lines might exhibit the same properties that the goal contour is expected to
have, therefore the field lines can be mistaken for the goal. We will describe
how we dealt with this problem in the following section.
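To make the area-ratio idea concrete, here is a small self-contained sketch implementing only the first property (contour area versus convex-hull area) with a monotone-chain hull and the shoelace formula. The function names, the polygon-area simplification, and the example threshold are our assumptions for illustration, not the project's actual implementation, which would use OpenCV routines such as `cv2.convexHull` and `cv2.contourArea`.

```python
import numpy as np

def convex_hull(points):
    # Andrew's monotone-chain algorithm; returns hull vertices in CCW order.
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return np.array(pts, dtype=float)

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1], dtype=float)

def polygon_area(poly):
    # Shoelace formula for a simple polygon given as an (N, 2) array.
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def goal_score(contour):
    # Lower is more goal-like: a thin, concave (goal-shaped) contour covers
    # only a small fraction of its enclosing convex hull.
    contour = np.asarray(contour, dtype=float)
    hull_area = polygon_area(convex_hull(contour))
    if hull_area == 0:
        return float("inf")
    return polygon_area(contour) / hull_area

def detect_goal(contours, threshold=0.5):
    # Pick the contour minimizing the score, but only accept it if the score
    # stays under the threshold; otherwise report that no goal was found.
    best_score, best = min(((goal_score(c), c) for c in contours),
                           key=lambda t: t[0])
    return best if best_score < threshold else None

# A filled square scores 1.0; a thin goal-shaped "U" polygon scores far lower.
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
goal_u = [(0, 0), (0.5, 0), (0.5, 3.5), (3.5, 3.5),
          (3.5, 0), (4, 0), (4, 4), (0, 4)]
```

The second property from the text (contour points lying close to the hull) could be added as an extra distance-based penalty term, at the cost of a slightly more involved score.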
\section{Field detection}


@@ -0,0 +1,68 @@
@misc{robocup,
  title={{RoboCup} Federation Official Website},
  howpublished={\url{http://www.robocup.org/}},
  note={Accessed: 2018-08-08}
}

@misc{bhuman,
  title={B-Human},
  howpublished={\url{https://www.b-human.de/index.html}},
  note={Accessed: 2018-08-08}
}

@misc{nao,
  title={Discover {Nao}, the little humanoid robot from SoftBank Robotics},
  howpublished={\url{https://www.softbankrobotics.com/emea/en/robots/nao}},
  note={Accessed: 2018-08-08}
}

@misc{naoqi-sdk,
  title={{NAOqi} Developer guide},
  howpublished={\url{http://doc.aldebaran.com/2-1/index_dev_guide.html}},
  note={Accessed: 2018-08-08}
}

@misc{opencv,
  title={{OpenCV} library},
  howpublished={\url{https://opencv.org/}},
  note={Accessed: 2018-08-08}
}

@book{numpy,
  title={A guide to {NumPy}},
  author={Oliphant, Travis E.},
  publisher={Trelgol Publishing},
  year={2006}
}

@misc{ros,
  title={{ROS.org | Powering} the world's robots},
  howpublished={\url{http://www.ros.org/}},
  note={Accessed: 2018-08-08}
}

@misc{ball-detect,
  title={Ball Tracking with {OpenCV}},
  author={Rosebrock, Adrian},
  year={2015},
  month={September},
  howpublished={\url{https://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/}},
  note={Accessed: 2018-08-08}
}

@misc{contours,
  title={{OpenCV}: Contours: Getting Started},
  howpublished={\url{https://docs.opencv.org/3.4.1/d4/d73/tutorial_py_contours_begin.html}},
  note={Accessed: 2018-08-08}
}

@misc{convex-hull,
  title={Convex Hull},
  howpublished={\url{https://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/hull/hull.html}},
  note={Accessed: 2018-08-08}
}


@@ -55,7 +55,7 @@
% https://de.sharelatex.com/learn/Bibliography_management_with_bibtex#Bibliography_management_with_Bibtex
\addcontentsline{toc}{chapter}{Bibliography}
\bibliographystyle{IEEEtran}
\bibliography{Bibliography/Bibliography}
\end{document}


@@ -2,8 +2,8 @@
\section{Robot}

The aforementioned \textit{Nao} \cite{nao} is a small humanoid robot, around 60
cm tall. Some of its characteristics are:
\begin{itemize}
@@ -39,19 +39,20 @@ it can handle all aspects of robot control, such as reading the sensors, moving
the robot and establishing the network connection.

As a framework for the implementation of the desired behavior we chose the
official NAOqi Python SDK \cite{naoqi-sdk}. Our experience with this framework
is that it is easy to use, well documented, and covers most of the basic
functionality that was necessary for us to start working on the project. A
further advantage of this SDK is that it uses Python as the programming
language, which allows for quick prototyping, but also makes maintaining a
large codebase fairly easy.
Finally, the third-party libraries that were used in the project are OpenCV and
NumPy \cite{opencv, numpy}. OpenCV is a powerful and one of the most widely
used open-source libraries for computer vision tasks, and NumPy is a popular
Python library for fast numerical computations. Both of these libraries, as
well as the NAOqi Python SDK, are included in the NAOqi OS distribution by
default, which means that no extra work was necessary to ensure their proper
functioning on the robot.
\section{Rejected Software Alternatives}
@@ -66,14 +67,14 @@ never really hit the performance constraints, that couldn't have been overcome
by refactoring our code, but in the future it might be reasonable to migrate
some portions of it to C++.

Another big alternative is ROS (Robot Operating System) \cite{ros}. ROS is a
collection of software targeted at robot development, and there exists a large
ecosystem of third-party extensions for ROS, which could assist in performing
common tasks such as camera and joint calibration. ROS was an attractive
option, but there was a major downside: there was no straightforward way to run
ROS locally on the robot, so the decision was made not to spend time trying to
figure out how to do that. However, since Python is one of the main languages
in ROS, it should be possible to incorporate our work into ROS.
Finally, as was already mentioned in the introduction, the B-Human framework is
a popular choice for beginners, thanks to the quality of the algorithms and good