First complete draft

2018-08-08 19:43:11 +02:00
parent 91cd48b619
commit 588fac75c8
16 changed files with 232 additions and 161 deletions


The walk in a circle was implemented in the following way: the robot will take
several steps sideways, then turn to the ball, as described in section
\ref{j sec turning to ball}, and finally adjust its distance to the ball by
stepping forwards or backwards, so that the ball is neither too close nor too
far. The distance to the ball, as in the direct approach stage, is not
measured explicitly, but is approximated from the position of the ball image
in the camera frame. After performing these steps, the robot checks whether
the goal alignment is complete; otherwise, the steps are repeated until
alignment is achieved. Figure \ref{p figure goal-alignment}
depicts the successful completion of this stage.
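A minimal sketch of this loop could look as follows; the helper functions
\verb|goal_is_centered|, \verb|turn_to_ball| and \verb|adjust_ball_distance|,
the robot address and the step size are illustrative placeholders, not the
names and values from our actual code:
\begin{verbatim}
from naoqi import ALProxy

motion = ALProxy("ALMotion", "<robot-ip>", 9559)
SIDESTEP = 0.15  # metres walked sideways per iteration (placeholder)

def align_to_goal():
    """Walk around the ball until the ball and the goal line up."""
    while not goal_is_centered():          # placeholder: goal detection check
        motion.moveTo(0.0, SIDESTEP, 0.0)  # sidestep (x fwd, y left, theta)
        turn_to_ball()                     # placeholder: Turn to Ball behavior
        adjust_ball_distance()             # placeholder: step fwd/back to ball
\end{verbatim}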
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig goal-alignment}
\label{p figure goal-alignment}
\end{figure}
\section{Ball Alignment}
Now that the ball and the goal are aligned, the robot has to move into a
position from which the kick can be performed. Depending on the situation, it
may be desirable to select the foot with which to kick, but due to time
constraints we programmed the robot to kick with the left foot only. The task
now is therefore to place the ball in front of the left foot. We realized that
when the ball is in the correct position, its image in the lower camera lies
within a certain region, and we experimentally determined the extents of this
region. The algorithm therefore is for the robot to gradually adjust its
position in small steps until the ball image reaches the target region, after
which the robot proceeds with the kick. Our tests have shown that this method,
while being relatively simple, works sufficiently robustly: we did not
encounter situations where the robot missed the ball after alignment or hit
the ball with the edge of the foot.
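This adjustment loop can be sketched as follows; the target region bounds, the
step size and the helper returning the normalized ball position are invented
for illustration:
\begin{verbatim}
X_MIN, X_MAX = 0.35, 0.55  # target region for the ball image (placeholders)
Y_MIN, Y_MAX = 0.60, 0.80  # normalized lower-camera coordinates
STEP = 0.03                # metres per correction step (placeholder)

def align_ball_to_left_foot(motion, get_ball_position):
    while True:
        bx, by = get_ball_position()  # placeholder: normalized ball centre
        if X_MIN <= bx <= X_MAX and Y_MIN <= by <= Y_MAX:
            return  # ball sits in front of the left foot, ready to kick
        # image y grows downwards: a ball high in the frame is too far away
        dx = STEP if by < Y_MIN else -STEP if by > Y_MAX else 0.0
        dy = STEP if bx < X_MIN else -STEP if bx > X_MAX else 0.0
        motion.moveTo(dx, dy, 0.0)  # small corrective step
\end{verbatim}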
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig ball-align}
\caption{Ball alignment}
\end{figure}


\section{Color Calibration}
All our detection algorithms require color calibration, and when the lighting
conditions on the field change, colors might have to be recalibrated. For us
this meant that a tool was necessary that could simplify this process as far
as possible. For this reason, we implemented a small OpenCV-based program that
we called \verb|Colorpicker|. This program can access various video sources,
as well as use still images for calibration. The main interface contains
sliders for adjusting the HSV interval, as well as a video area demonstrating
the resulting binary mask. The colors can be calibrated for three targets:
ball, goal and field; the quality of detection for the chosen target is
demonstrated in the tool's video area. When the program is closed, the
calibration values are automatically saved to the settings file
\verb|nao_defaults.json|. The interface of the Colorpicker is demonstrated in
figure \ref{p figure colorpicker}.
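The core idea of the tool reduces to a few OpenCV calls. The following
condensed sketch shows the slider-plus-mask interaction on a still image; the
file name is an example, and the real Colorpicker additionally handles video
sources and saves the result to \verb|nao_defaults.json|:
\begin{verbatim}
import cv2
import numpy as np

img = cv2.imread("field.png")  # example input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

cv2.namedWindow("mask")
for name, maxval in [("H min", 179), ("H max", 179), ("S min", 255),
                     ("S max", 255), ("V min", 255), ("V max", 255)]:
    initial = 0 if "min" in name else maxval
    cv2.createTrackbar(name, "mask", initial, maxval, lambda v: None)

while cv2.waitKey(30) != 27:  # Esc quits
    lo = np.array([cv2.getTrackbarPos(n, "mask")
                   for n in ("H min", "S min", "V min")], dtype=np.uint8)
    hi = np.array([cv2.getTrackbarPos(n, "mask")
                   for n in ("H max", "S max", "V max")], dtype=np.uint8)
    cv2.imshow("mask", cv2.inRange(hsv, lo, hi))
\end{verbatim}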
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig colorpicker}
\caption{The Colorpicker interface}
\label{p figure colorpicker}
\end{figure}


\chapter{Implementation Details}
\section{Code Organization}
Our code is organized as a standard Python package and can be started with a
single command that makes the robot run the whole goal scoring sequence.
The main logic of our implementation can be found in the following files:
\begin{itemize}
\item \verb|movements.py| implements convenience movement-related functions,
such as walking and the kick.
\item \verb|nao_defaults.json| stores all project-global settings, such as
the IP address of the robot, or color calibration results.
\end{itemize}


The playback speed of the videos needed to be adjusted afterwards using video
editing programs. Furthermore, due to the computational resource limitations
of the Nao, the frames could only be captured in low resolution. However, the
quality of the resulting videos was sufficient for successful debugging and
also for the presentation. Some of the illustrations for this report, such as
figure \ref{p figure direct-approach}, were created with the help of those
videos.


\section{Text to Speech}
During the implementation of our solution for the objective stated in section
\ref{sec problem statement} we included suitable functions to get feedback
about the robot's current state. [\dots] The speech output does not block the
program, as it runs in a separate thread. We also ensured that the robot does
not repeat the same sentence over and over again if it remains in the same
state.
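A sketch of this mechanism using the NAOqi \verb|ALTextToSpeech| proxy (the
class structure and names are our illustration, not the actual
implementation):
\begin{verbatim}
import threading
from naoqi import ALProxy

class Announcer(object):
    def __init__(self, ip, port=9559):
        self.tts = ALProxy("ALTextToSpeech", ip, port)
        self.last_sentence = None

    def say(self, sentence):
        if sentence == self.last_sentence:
            return  # same state as before: do not repeat the sentence
        self.last_sentence = sentence
        # speak in a separate thread so the main loop is not blocked
        threading.Thread(target=self.tts.say, args=(sentence,)).start()
\end{verbatim}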
\section{Goal Confirmation}
It makes sense to let the robot check whether it has actually scored a goal
after performing a goal kick. We therefore implemented a simple goal
confirmation behavior.


\section{Ball Approach}
\label{p sec approach}
\subsection*{Approach from the Side}
The first possibility is that in the approach planning stage, described in
section \ref{j sec approach planing}, the decision was taken to approach the
ball from the side. In this case, the robot will first walk several steps in
the calculated direction. Normally, after the movement the robot should lose
sight of the ball. However, the approximate angle at which the ball should be
relative to the robot after the movement is known. Therefore, the robot will
rotate by that angle and will then try to detect the ball and turn to it,
using the \textbf{Turn to Ball} algorithm described in section \ref{j sec
turning to ball}. Once this is done, the approach planning stage is repeated.
Normally, the distance to the ball should now be small, and the ball and the
goal should lie in the same direction, which means that only a short direct
approach will be necessary at this point. That might not always be the case,
so in rare situations another step of the approach from the side might be
required.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig after-sideways}
\label{p figure after-sideways}
\end{figure}
\subsection*{Direct Approach}
It is also possible that the decision is taken to approach the ball directly,
either from the start or after the robot has already approached the ball from
the side. In this stage the robot will walk towards the ball, trying to stay
centered on it. To do so, it constantly checks that the ball stays within some
tolerance angle from the center of the camera frame. If the ball moves from
the center further than the tolerance angle, the robot will stop, adjust its
movement direction and then continue. The robot will keep moving until the
ball is close enough to start the goal alignment. To determine whether that is
the case, we don't use trigonometry, but simply define a threshold which the
image of the ball in the robot's lower camera should reach. Once this has
happened, the approach stage is over and the robot will start aligning itself
to the goal.
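One iteration of this loop might be sketched as follows; the tolerance angle,
the image-row threshold and the way the inputs are obtained are assumptions
for illustration:
\begin{verbatim}
import math

TOLERANCE = math.radians(10)  # placeholder tolerance angle
BALL_CLOSE_Y = 0.75           # placeholder row threshold (normalized)

def direct_approach_step(motion, ball_angle, ball_image_y):
    # ball_angle: horizontal angle of the ball from the frame centre [rad]
    # ball_image_y: normalized row of the ball in the lower camera image
    if ball_image_y >= BALL_CLOSE_Y:
        motion.stopMove()
        return "goal_alignment"  # ball close enough, approach is over
    if abs(ball_angle) > TOLERANCE:
        motion.stopMove()
        motion.moveTo(0.0, 0.0, ball_angle)  # re-aim at the ball
    motion.moveToward(1.0, 0.0, 0.0)  # continue walking forwards
    return "approaching"
\end{verbatim}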

\section{Results}
In some cases the ball or the goal was missed even though it was in the field
of view, which happened due to imprecise color calibration under changing
lighting conditions. The goal detection was one of the most difficult project
milestones, so we are particularly satisfied with the resulting performance.
It is worth mentioning that with the current algorithm, successful detection
does not even require the whole goal to be in the camera image.

Another important achievement is the overall system robustness. In our tests
the robot could successfully reach the ball, perform the necessary alignments
and kick the ball. When the robot decided that it should kick the ball, in the
majority of cases the kick was successful and the ball reached the target. We
performed these tests from many starting positions and with a variety of
different relative positions of the ball and the goal.
Furthermore, we managed not only to make the whole approach robust, but also
worked on making the procedure fast, and the approach planning was a crucial
factor in this. In the first version of our strategy, the robot always walked
towards the ball directly and aligned to the goal afterwards. The tests have
shown that in such a configuration the goal alignment was actually the longest
phase and could take over a minute. Then we introduced the approach planning,
and as a result the goal alignment stage could in many scenarios be completely
eliminated, which was greatly beneficial for the execution times. Finally,
thanks to the strong kick, the goal can be scored from a large range of
distances, which means that in some situations it is not necessary to bring
the ball closer to the goal, which can also save time.
\section{Future Work}
With our objective for this semester completed, there still remains vast room
for improvement. Some of the most interesting topics for future work will now
be presented.

The first important topic is \textit{self-localization}. Currently our robot
is completely unaware of its position on the field, but if such information
could be obtained, then it could be leveraged to make path planning more
effective and precise.
Another important capability that our robot lacks for now is \textit{obstacle
awareness}, which would be unacceptable in a real RoboCup soccer game. Making
the robot aware of the obstacles on the field would require obstacle detection
to be implemented, as well as some changes to the path planning algorithms,
which makes this task an interesting project on its own.
A further capability that could be useful for the striker is the ability to
perform \textit{different kicks} depending on the situation. For example, if
the robot could perform a sideways kick, then the goal alignment would in many
situations be unnecessary, which would reduce the time needed to score a goal.
In this semester we concentrated on a ``free-kick'' situation, so our robot
can perform its tasks in the absence of other players and only when the ball
is not moving. Another constraint that we imposed on our problem is that the
ball is relatively close to the goal, and that the ball is closer to the goal
than the robot, so that the robot doesn't have to move away from the goal
first. To be useful in a real game the striker should be able to handle more
complex situations. For example, a \textit{dribbling} skill could help the
robot to avoid opponents and to bring the ball into a convenient striking
position.



RoboCup aims to drive scientific and technological advancement in such areas
as computer vision, mechatronics and multi-agent cooperation in complex
dynamic environments. The RoboCup teams compete in five different leagues:
Humanoid, Standard Platform, Medium Size, Small Size and Simulation. Our work
in this semester was based on the rules of the \textit{Standard Platform
League}. In this league all teams use the same robot, \textit{Nao}, which is
produced by SoftBank Robotics. We will describe the capabilities of this robot
in more detail in the next chapter.
A few words need to be said about the state of the art. One of the most
notable teams in the Standard Platform League is \textit{B-Human}
\cite{bhuman}. This team represents the University of Bremen, and in the last
nine years they won the international RoboCup competition six times and twice
were the runner-up. The source code of the framework that B-Human use for
programming their robots is available on GitHub, together with extensive
documentation, which makes the B-Human framework a frequent starting point for
RoboCup beginners.
\section{Our Objective and Motivation}
\label{sec problem statement}
In this report we are going to introduce the robotics project on which our
team worked. [\dots] Finally, this objective encompasses many disciplines,
such as object detection, mechatronics and path planning, which means that
working on it might give us a chance to contribute to the research in these
areas.
Having said that, we hope that our project will be a positive contribution to
the work being done at the Institute for Cognitive Systems, and that this
report will help future students to get familiar with our results and continue
our work.


\section{Turning to Ball}
\label{j sec turning to ball}
The task which we try to accomplish here is to bring the robot into a position
in which it is looking straight at the ball. The robot should be able to find
the ball anywhere on the field and rotate itself so that it faces the ball. \\
The algorithm which we implemented to solve this problem can be found in
figure \ref{j figure turn to ball}.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig turn-to-ball}
\caption{Turn to Ball algorithm}
\label{j figure turn to ball}
\end{figure}
\section{Distance Measurement}
The task which we try to accomplish here is to measure the distance to the
ball. The proposed solution is shown in figure \ref{j figure distance
measurement}. In the upper right corner of the picture, the camera frame of
the robot's top camera is shown.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig distance-meassurement}
\caption{Distance measurement}
\label{j figure distance measurement}
\end{figure}
Even though the proposed equation for distance measurement is rather simple,
it provides sufficiently accurate results for our purposes.
%Mention Stand up to ensure, that robot is always in the same position
%Explain how angles are derived from the camera frames?
%Value of phi cam?
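The equation itself is not reproduced in this excerpt, but one common
formulation, based on the camera height and the ball's angle below the
horizon, can be sketched as follows (the parameter handling is our assumption,
not necessarily the exact equation used):
\begin{verbatim}
import math

def ball_distance(camera_height, head_pitch, phi_cam, ball_offset):
    # camera_height: height of the top camera above the floor [m]
    # head_pitch:    current head pitch joint angle [rad]
    # phi_cam:       fixed tilt of the camera inside the head [rad]
    # ball_offset:   angle of the ball below the optical axis [rad],
    #                derived from the ball's row in the camera frame
    angle_below_horizon = head_pitch + phi_cam + ball_offset
    return camera_height / math.tan(angle_below_horizon)
\end{verbatim}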
% \newpage
\section{Approach Planning}
\label{j sec approach planing}
An important part of the approaching strategy is to find out in which
direction the robot should start to approach the ball, so that it ends up in a
good position for the following approach steps. The task is therefore to
choose an appropriate approach path.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig choose-approach-start}
\caption{Starting condition of approach planning}
\label{j figure starting condition choose-approach}
\end{figure}
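The decision itself can be sketched as a simple geometric rule; the threshold
and the side offset below are invented for illustration and are not the exact
criteria from our implementation:
\begin{verbatim}
import math

DIRECT_THRESHOLD = math.radians(30)  # placeholder decision threshold

def plan_approach(ball_angle, goal_angle):
    # ball_angle, goal_angle: directions of the ball and the goal as seen
    # from the robot [rad]; if they roughly coincide, a direct approach
    # already leaves the robot aligned for the kick
    if abs(ball_angle - goal_angle) < DIRECT_THRESHOLD:
        return ("direct", ball_angle)
    # otherwise first walk to a point beside the ball, on the side
    # facing away from the goal
    side = -1.0 if goal_angle > ball_angle else 1.0
    return ("side", ball_angle + side * math.radians(45))
\end{verbatim}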
The task is solved as follows: again, the robot is in the standing position
and [\dots]
% \newpage
During our tests this approach seemed more suitable for short ball distances.


\section{Kick}
The final milestone in the goal scoring project is naturally the kick. Before
we started working on the kick, we formulated some requirements which our
implementation must satisfy. Firstly, and most importantly, the robot
shouldn't fall down when performing the kick. Secondly, the kick must have
sufficient strength, so that ideally only one kick is necessary for the ball
to reach the goal. Due to time constraints, we then implemented the simplest
possible kick that satisfies those requirements.
The procedure is as follows. First the robot uses its ankle joints to shift
its weight to the base leg. After this, the robot is able to lift the kicking
leg for the swing. Finally, the robot performs the swing and returns to the
standing position. Both raising the leg and doing the swing require precisely
coordinated joint movements, so we had to conduct experiments to establish the
correct joint angles and movement speeds.
An important drawback of our implementation is that the swing makes the whole
process slower, but we weren't able to design a strong and stable kick without
using the swing. Nevertheless, the tests that we performed have shown that our
implementation satisfies our requirements, and hence the last milestone was
successfully completed.
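The phases of the kick can be expressed with NAOqi's
\verb|angleInterpolation| calls; the joint angles and timings below are rough
placeholders, not the experimentally determined values:
\begin{verbatim}
from naoqi import ALProxy

motion = ALProxy("ALMotion", "<robot-ip>", 9559)

def kick():
    # 1) shift the weight onto the right (base) leg via the ankle rolls
    motion.angleInterpolation(["LAnkleRoll", "RAnkleRoll"],
                              [[-0.2], [-0.2]], [[0.8], [0.8]], True)
    # 2) lift the kicking (left) leg and pull it back for the swing
    motion.angleInterpolation(["LHipPitch", "LKneePitch"],
                              [[-0.4], [0.9]], [[0.8], [0.8]], True)
    # 3) swing: extend hip and knee quickly to hit the ball
    motion.angleInterpolation(["LHipPitch", "LKneePitch"],
                              [[-0.7], [0.3]], [[0.15], [0.15]], True)
    # 4) return to a stable stand
    motion.angleInterpolation(
        ["LHipPitch", "LKneePitch", "LAnkleRoll", "RAnkleRoll"],
        [[0.0], [0.0], [0.0], [0.0]], [[1.0], [1.0], [1.0], [1.0]], True)
\end{verbatim}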
% \begin{figure}[ht]
% \includegraphics[width=\textwidth]{\fig kick}
% \caption{Kick sequence}
% \label{p figure kick}
% \end{figure}


\section{Ball Detection}
\label{p sec ball detection}
The very first task that needed to be accomplished was to detect the ball,
which is uniformly red-colored and measures about 6 cm in diameter. We decided
to use a popular algorithm based on color segmentation \cite{ball-detect}. The
idea behind this algorithm is to find the biggest red area in the image and
assume that this is the ball. First, the desired color needs to be defined as
an interval of HSV (Hue-Saturation-Value) \cite{hsv} values. After that, the
image itself needs to be transformed into the HSV colorspace, so that the
regions of interest can be extracted into a \textit{binary mask}. The contours
of the regions can then be identified in the mask \cite{contours}, and the
areas of the regions can be calculated using routines from the OpenCV library.
The center and the radius of the region with the largest area are then
determined and are assumed to be the center and the radius of the ball.
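The whole detector fits in a few lines of OpenCV code. The following sketch
assumes that the calibrated HSV interval is supplied by the caller; note that
red hues wrap around zero in HSV, which a complete implementation has to
handle with two intervals:
\begin{verbatim}
import cv2
import numpy as np

def detect_ball(frame_bgr, hsv_lo, hsv_hi):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # [-2] keeps this line compatible with the different return
    # signatures of OpenCV 2, 3 and 4
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(biggest)
    return (int(x), int(y)), int(radius)
\end{verbatim}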
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig ball-detection}
\caption{Ball detection}
\end{figure}
It is sometimes recommended \cite{ball-detect} to eliminate the noise in the
binary mask by applying a sequence of \textit{erosions} and \textit{dilations},
but we found that for the task of finding the \textit{biggest} area the noise
doesn't present a problem, whereas performing erosions may completely delete
the image of the ball from the mask if it is relatively far from the robot and
the camera resolution is low. For this reason it was decided not to process
the binary mask with erosions and dilations, which allowed us to detect the
ball even over long distances.
The advantages of the presented algorithm are its speed and simplicity. The
major downside is that careful color calibration is required for the algorithm
to function properly. If the HSV interval of the targeted color is too narrow,
then the algorithm might miss the ball; if the interval is too wide, then
other big red-shaded objects in the camera image will be detected as the ball.
A possible approach to alleviate these issues to a certain degree will be
presented in section \ref{p sec field detect}. To conclude, we found this
algorithm to be robust enough for our purposes, provided that a sensible color
calibration was supplied.
\section{Goal Detection}
\label{p sec goal detect}
The goal detection presented itself as a more difficult task. The color of the
goal is white, which it shares with other objects on the field, such as the
field lines. We therefore propose the following heuristic algorithm.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig goal-detection}
\caption{Goal detection}
\label{p figure goal-detection}
\end{figure}
[\dots] The preselection stage ends here, and the remaining candidates are
passed to the scoring function.
The scoring function calculates how different the properties of the candidates
are from the properties that an idealized goal contour is expected to have.
The evaluation is based on two observations. The first is that the area of the
goal contour is much smaller than the area of its \textit{enclosing convex
hull} \cite{convex-hull}. The second is that all points of the goal contour
must lie close to the enclosing convex hull. The mathematical formulation of a
corresponding scoring function can then look like the following:
\begin{equation*}
S(c)=\frac{A(c)}{A(Hull(c))}+\displaystyle\sum_{x_i \in c}\min_{h \in Hull(c)}
\lVert x_i - h \rVert
\end{equation*}
The contour that minimizes the scoring function while keeping its value under
a certain threshold is considered the goal. If no contour scores below the
threshold, then the algorithm assumes that no goal was found. An important
note is that the algorithm is designed in such a way that the preselection and
scoring are modular, which means that the current simple scoring function can
later be replaced by a function with a better heuristic, or even by some
function that employs machine learning models.
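A direct transcription of $S(c)$ into Python could look like the following
sketch (the acceptance threshold and any normalization used in the real code
are not shown in this excerpt):
\begin{verbatim}
import cv2

def score(contour):
    hull = cv2.convexHull(contour)
    area_ratio = cv2.contourArea(contour) / max(cv2.contourArea(hull), 1.0)
    # pointPolygonTest with measureDist=True returns the signed distance
    # from a point to the hull outline
    dist_sum = sum(abs(cv2.pointPolygonTest(hull,
                       (float(pt[0][0]), float(pt[0][1])), True))
                   for pt in contour)
    return area_ratio + dist_sum
\end{verbatim}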
Our tests have shown that, when the white color is calibrated correctly, the
algorithm can detect the goal almost without mistakes whenever the goal is
present in the image. Most irrelevant candidates are normally discarded in the
preselection stage, and the scoring function improves the robustness further.
Figure \ref{p figure goal-detection} demonstrates the algorithm in action. On
the right is the binary mask with all found contours. On the left are the goal
and one contour that passed preselection but was rejected during scoring. One
downside of this algorithm is that in some cases the field lines might appear
to have the same properties that the goal contour is expected to have, so the
field lines can be mistaken for the goal. We will describe how we dealt with
this problem in section \ref{p sec field detect}.
\section{Field Detection}
\label{p sec field detect}
The algorithm for the field detection is very similar to the ball detection
algorithm described in section \ref{p sec ball detection}.


@misc{convex-hull,
howpublished={\url{https://docs.opencv.org/2.4/doc/tutorials/imgproc/shapedescriptors/hull/hull.htm}},
note={Accessed: 2018-08-08}
}
@misc{hsv,
title={{HSL and HSV} --- {Wikipedia}},
howpublished={\url{https://en.wikipedia.org/wiki/HSL_and_HSV}},
note={Accessed: 2018-08-08}
}


\author{Pavel Lutskov\\Jonas Bubenhagen\\Yuankai Wu\\Seif Ben Hamida\\Ahmed Kamoun}
\supervisors{Prof. Dr. Gordon Cheng\\Dr.-Ing. Mohsen Kaboli}
\submitdate{August 2018}
\maketitle % this generates the title page. More in icthesis.sty
\input{jonas} % Distance, approach planing
\input{approach} % Ball approach
\input{align} % Goal alignment
\input{kick}
\input{overview} % The complete strategy
\input{conclusion} % Results and future work


\chapter{Our Solution}
To achieve our objective, we identified ten big milestones that needed to be
completed, which are:
\begin{enumerate}
\item Ball detection;
\item Goal detection;
\item Field detection;
\item Turning to ball;
\item Distance measurement;
\item Approach planning;
\item Ball approach;
\item Goal alignment;
\item Ball alignment;
\item Kick.
\end{enumerate}
In this chapter we will give our solutions to the problems posed by each of
the milestones, and at the end the resulting goal scoring strategy will be
presented. We will start with the lower-level detection milestones and
gradually introduce higher-level behaviors.


\section{Robot}
The aforementioned Nao \cite{nao} is a small humanoid robot, around 60
cm tall. Some of its characteristics are:
\begin{itemize}
\item Internet connectivity over Ethernet cable or 802.11g WLAN;
\item Single-core Intel Atom CPU and 1 GB of RAM;
\item Programmable joints with overall 25 degrees of freedom;
\item Speakers;
\end{itemize}
It can be seen from the specifications list that the multitude of sensors and
interfaces makes Nao an attractive development platform, suitable for the task
of playing soccer. However, the relatively weak CPU and the low amount of RAM
require the programs running on the robot to be resource-efficient, which had
to be taken into account during our work on the project.
\section{Software}
The robot runs the NAOqi operating system, which handles all aspects of robot
control, such as reading the sensors, moving the robot and establishing the
network connection.
As a framework for the implementation of the desired behavior we chose the
official \textit{NAOqi Python SDK} \cite{naoqi-sdk}. We found this framework
easy to use, well documented, and covering most of the basic functionality
that we needed to start working on the project. A further advantage of this
SDK is that it uses Python as the programming language, which allows for quick
prototyping, but also makes maintaining a large codebase fairly easy.
Finally, the third-party libraries that were used in the project are
\textit{OpenCV} and \textit{NumPy} \cite{opencv, numpy}. OpenCV is a powerful
and one of the most widely used open-source libraries for computer vision
tasks, and NumPy is a popular Python library for fast numerical computations.
Both of these libraries, as well as the NAOqi Python SDK, are included in the
NAOqi OS distribution by default, which means that no extra work was necessary
to ensure their proper functioning on the robot.
\section{Rejected Software Alternatives}
Here we will briefly discuss what alternative options were available for the
choice of the base framework, and why we decided not to use them. One
available option was the official \textit{NAOqi C++ SDK}. Being based on the
C++ language, this SDK can naturally be expected to have better performance
and be more resource-efficient than the Python-based version. We still chose
the Python SDK, because the complexity of C++ makes it less suitable for fast
prototyping. It is also worth noting that we never really hit performance
constraints that couldn't have been overcome by refactoring our code, but in
the future it might be reasonable to migrate some portions of it to C++.
Another big alternative is \textit{ROS} \cite{ros} (Robot Operating System).
ROS is a collection of software targeted at robot development, and there
exists a large ecosystem of third-party extensions for ROS, which could assist
in performing common tasks such as camera and joint calibration, as well as in
more complex tasks such as object detection. ROS was an attractive option, but
it had a major downside: there was no straightforward way to run ROS locally
on the robot, so the decision was made not to spend time trying to figure out
how to do that. However, since Python is one of the main languages in ROS, it
should be possible in the future to incorporate our work into ROS.
Finally, as was already mentioned in the introduction, the \textit{B-Human
Framework} is a popular choice for beginners, thanks to the quality of its
algorithms and good documentation. However, B-Human has been in development
for many years and is therefore a very complex system. The amount of time
needed to get familiar with the code, and then to incorporate our changes,
would have been too large, so we decided to use the simpler option as a
starting point.