merged jonas

2018-08-09 09:43:57 +02:00
parent bd375373ad
commit 3b15bc7798
6 changed files with 110 additions and 53 deletions

documentation/Seif.tex Normal file

@@ -0,0 +1,52 @@
\section{Ball Alignment}
Now that the robot has aligned itself with the ball and the goal, it has to move to the
right position from which it can perform the kick. It would have been feasible to let the
robot automatically select which foot to kick the ball with, depending on the situation;
however, due to time constraints we decided to program the robot to kick only with its left foot.
In order to position the left foot correctly in front of the ball,
we identified the target region in which the ball should appear in the image of the
robot's lower camera, as shown in figure \ref{p figure ball-alignment}.
We then experimentally determined the extents of this region.
The algorithm is therefore for the robot to gradually adjust its position in small steps
until the image of the ball reaches the target region, which triggers the robot to perform the kick. \\
Our tests have shown that this method is quite robust and gives consistent results.
We registered no case where the robot missed the ball or hit it with the edge of its foot.\\
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig ball-align}
\caption{Ball alignment}
\label{p figure ball-alignment}
\end{figure}
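The alignment procedure outlined above can be summarized by the following
sketch. The helper functions and the region extents are placeholders for the
corresponding routines and calibrated values in our code base; only the
structure of the control loop is shown.
\begin{verbatim}
# Sketch of the ball alignment loop; locate_ball() and step_towards()
# stand in for the detection and walking routines of our code base,
# and the region extents are illustrative placeholder values.
X_MIN, X_MAX = 0.40, 0.60   # assumed extents of the target region
Y_MIN, Y_MAX = 0.70, 0.90   # (normalized lower-camera image coordinates)

def align_and_kick(robot):
    while True:
        x, y = locate_ball(robot)      # ball center in the image
        if X_MIN <= x <= X_MAX and Y_MIN <= y <= Y_MAX:
            robot.kick()               # ball is in position
            return
        step_towards(robot, x, y)      # one small corrective step
\end{verbatim}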
\section{Kick}
The final milestone in the goal scoring project is naturally the kick. Before
we started working on the kick, we set the requirements that our
implementation must meet. Firstly, and most importantly, the robot should not
fall down during or after performing the kick. Secondly, the kick should be efficient,
so that ideally only one attempt is necessary for the ball to reach the goal.
Consequently, we opted for a powerful kick which can cover large distances. \\
As shown in figure \ref{p figure kick}, to obtain a strong kick the robot first uses its ankle joints to shift
its weight onto the base leg, both to compensate for gravity and to avoid any collision between the kicking foot and the floor.
After this, the robot is able to lift the
kicking leg to achieve a stronger swing. Finally, the robot performs the swing and returns
safely to the standing position. Both raising the leg and performing the swing require
precisely coordinated joint movements, so we had to conduct experiments to
establish the correct joint angles and movement speeds. \\
An important drawback of our implementation is that the swing makes the whole
process slower, but we were not able to design a strong and stable kick without
it. Nevertheless, the tests that we performed have shown that our
implementation satisfies our requirements, and hence the last milestone was
successfully completed.\\
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig kick}
\caption{Kick sequence}
\label{p figure kick}
\end{figure}
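To give an impression of how the kick phases translate into joint commands,
the sketch below uses the \texttt{angleInterpolation} call of the NAOqi
\texttt{ALMotion} module. The joint angles and timings are illustrative
placeholders, not the values that we established experimentally.
\begin{verbatim}
# Illustrative kick sequence; angles (rad) and timings (s) are
# placeholders, NOT the experimentally tuned values of our project.
from naoqi import ALProxy

def kick(robot_ip, port=9559):
    motion = ALProxy("ALMotion", robot_ip, port)
    posture = ALProxy("ALRobotPosture", robot_ip, port)
    posture.goToPosture("StandInit", 0.5)
    # 1) shift the weight onto the base leg using the ankle roll joints
    motion.angleInterpolation(["LAnkleRoll", "RAnkleRoll"],
                              [[-0.15], [-0.15]], [[1.0], [1.0]], True)
    # 2) lift the kicking leg to allow a longer swing
    motion.angleInterpolation(["LHipPitch", "LKneePitch", "LAnklePitch"],
                              [[-0.4], [0.9], [-0.5]],
                              [[1.0], [1.0], [1.0]], True)
    # 3) perform the fast forward swing of the hip, then return to standing
    motion.angleInterpolation(["LHipPitch"], [[-1.0]], [[0.25]], True)
    posture.goToPosture("StandInit", 0.5)
\end{verbatim}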


@@ -3,15 +3,15 @@
RoboCup \cite{robocup} is an international competition in the field of
robotics, the ultimate goal of which is to win a game of soccer against a human
team by the middle of the 21st century. The motivation behind this objective is
the following: it is impossible to achieve such an ambitious goal with the
the following: It is impossible to achieve such an ambitious goal with the
current state of technology, which means that the RoboCup competitions will
drive scientific and technological advancement in such areas as computer
vision, mechatronics and multi-agent cooperation in complex dynamic
environments. The RoboCup teams compete in five different leagues: Humanoid,
Standard Platform, Medium Size, Small Size and Simulation. Our work in this
semester was based on the rules of the \textit{Standard Platform league}. In
semester was based on the rules of the \textit{Standard Platform League}. In
this league all teams use the same robot \textit{Nao}, which is being produced
by the SoftBank Robotics. We will describe the capabilities of this robot in
by SoftBank Robotics. We will describe the capabilities of this robot in
more detail in the next chapter.
A couple of words need to be said about the state of the art. One of the most
@@ -35,11 +35,11 @@ effective goal scoring will bring the team closer to victory. Secondly, in
order to score a goal, many problems and tasks need to be solved, which we will
describe in close detail in the next chapter. The work on these tasks would
allow us to acquire new competences, which we could then use to complement the
RoboCup team of TUM. Finally, this objective encompasses many disciplines, such
RoboCup team of the TUM. Finally, this objective encompasses many disciplines, such
as object detection, mechatronics or path planning, which means that working on
it might give us a chance to contribute to the research in these areas.
Having said that, we hope that our project will be a positive contribution to
the work, being done at the Institute for Cognitive Systems, and that this
the work being done at the Institute for Cognitive Systems and that this
report will help future students to get familiar with our results and continue
our work.


@@ -72,7 +72,7 @@ start the \textbf{Turn to Ball algorithm} again.
%Follow the ball always -> problem: movement while walking
%Describe in more Detail??? Are all steps in can not see the ball executed every time?
%Mention stand up
\newpage
\section{Distance Measurement}
\label{j sec distance measurement}
@@ -102,8 +102,8 @@ camera of the robot is not aligned parallel to the floor. There is
therefore an offset angle for the center of the camera frame, which has to be
considered in the calculations. As seen in figure \ref{j figure distance
measurement} $ \Phi_{\mathrm{ball}} $ and $
\Phi_{\mathrm{meas}}+\Phi_{\mathrm{cam}} $ are alternate interior angles
therefore the following equations holds:
\Phi_{\mathrm{meas}}+\Phi_{\mathrm{cam}} $ are alternate interior angles.
Therefore, the following equation holds:
\begin{equation}
\Phi_{\mathrm{ball}} = \Phi_{\mathrm{meas}}+\Phi_{\mathrm{cam}} \; .
@@ -157,12 +157,12 @@ head, until it is able to recognize the goal in the view of its top camera
Using the position of the center of the goal, the angle between the ball and
the goal is estimated. Depending on the value of the angle, different approach
directions are chosen. In the figure \ref{j figure choose-approach}, the goal
directions are chosen. In figure \ref{j figure choose-approach}, the goal
is on the right side of the ball. It therefore makes sense to approach the ball
somewhere from the left side. In the current implementation there are three
possible approach directions. The robot could approach the ball either from the
left or the right side; or if the angle between the goal and the ball is
sufficiently small, the robot could also do a straight approach to the ball. As
sufficiently small, or the ball is sufficiently close to the robot, the robot could also approach the ball directly. As
the exact approach angle to the ball is calculated in the next part of the
approach planning, it's enough for now to decide between those three possible
approach directions.
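This decision can be sketched as follows; the two thresholds and the sign
convention for the goal-ball angle are placeholders, since the tuned values
are not reproduced here.
\begin{verbatim}
# Sketch of the approach direction selection; thresholds are placeholders.
ANGLE_THRESHOLD = 0.15      # rad, "sufficiently small" goal-ball angle
DISTANCE_THRESHOLD = 0.30   # m,   "sufficiently small" ball distance

def choose_approach_direction(goal_ball_angle, ball_distance):
    if abs(goal_ball_angle) < ANGLE_THRESHOLD or \
       ball_distance < DISTANCE_THRESHOLD:
        return "STRAIGHT"
    if goal_ball_angle > 0:     # goal to the right of the ball (assumed sign)
        return "LEFT"           # approach the ball from the left side
    return "RIGHT"
\end{verbatim}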
@@ -171,7 +171,6 @@ The proposed algorithm worked well in the scenarios under consideration. As
the goal detection algorithm works quite reliably, the appropriate approach
direction was found quickly most of the time.
\newpage
As the approach direction is now known, the approach angle and the walking
distance of the robot have to be estimated. The task is to find an approach
@@ -191,9 +190,9 @@ The task is solved as follows. Again the robot is in the standing position
and the ball is centered in the camera view of the top camera. The ball
distance has already been estimated as described in section \ref{j sec distance
measurement}. To estimate the approach angle and the walking distance, a
desired distance is defined which defines the distance between the robot and
desired distance is set which defines the distance between the robot and
the ball after the walk. Approach angle and walking distance can then be
computed. Thereby we considered two different approaches depending on the
computed. We considered three different approaches, depending on the
distance between the ball and the robot. If the distance between the robot and
the ball is below or equal to a specified threshold, the triangle looks as shown
in figure \ref{j figure rdist hypo}.
@@ -240,19 +239,26 @@ looks like in figure \ref{j figure bdist hypo}.
\end{figure}
To calculate the appropriate walking distance, the following formulas estimate
the approaching angle and calculate the distance.
the approach angle and calculate the walking distance, depending on the distance to the ball.
\begin{equation}
\Theta_\mathrm{appr}=\arctan\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}} \right) \; \; \mathrm{or} \; \; \arcsin\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}}\right)
\Theta_\mathrm{appr} =
\begin{cases}
\arctan\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}} \right) & \text{for short distances}\\
\arcsin\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}}\right) & \text{for long distances}
\end{cases}
\end{equation}
\begin{equation}
\mathrm{walking\ distance}=\frac{\mathrm{ball\ distance}}{\cos(\Theta_\mathrm{appr})} \; \; \mathrm{or} \; \; \frac{\cos(\Theta_\mathrm{appr})}{\mathrm{ball\ distance}}
\mathrm{walking\ distance} =
\begin{cases}
\frac{\mathrm{ball\ distance}}{\cos(\Theta_\mathrm{appr})} & \text{for short distances}\\
\cos(\Theta_\mathrm{appr}) \cdot \mathrm{ball\ distance} & \text{for long distances}
\end{cases}
\end{equation}
If the distance between the robot and the ball is really small, the robot
starts a direct approach to the ball regardless of the position of the goal.
This makes more sense for short distances, than the two approaches stated
above. In this case the neccessary actions for goal alignment will happen in a
As already mentioned, if the distance between the robot and the ball is very small, the robot starts a direct approach to the ball regardless of the position of the goal.
For such short distances this is more sensible than the two approaches stated
above. In this case the necessary actions for goal alignment will happen in a
dedicated goal alignment stage, described in the section \ref{p sec goal
align}.
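Written out as code, the case distinction above takes the following form; the
value of the distance threshold is a placeholder.
\begin{verbatim}
# Approach angle and walking distance for the two triangle cases above;
# the threshold value is a placeholder for the tuned one.
import math

def plan_approach(ball_distance, desired_distance, threshold=0.8):
    if ball_distance <= threshold:
        # short distances: ball distance is the leg adjacent to the
        # approach angle, the desired distance is the opposite leg
        theta = math.atan(desired_distance / ball_distance)
        walking_distance = ball_distance / math.cos(theta)
    else:
        # long distances: ball distance is the hypotenuse
        theta = math.asin(desired_distance / ball_distance)
        walking_distance = math.cos(theta) * ball_distance
    return theta, walking_distance
\end{verbatim}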


@@ -16,7 +16,7 @@ and are assumed to be the center and the radius of the ball.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig ball-detection}
\caption{Ball detection. On the right is the binary mask}
\caption[Ball detection]{Ball detection. On the right is the binary mask}
\label{p figure ball-detection}
\end{figure}
@@ -30,13 +30,13 @@ binary mask with erosions and dilations, which allowed us to detect the ball
even over long distances.
The advantages of the presented algorithm are its speed and simplicity. The
major downside is that the careful color calibration is required for the
major downside is that a careful color calibration is required for the
algorithm to function properly. If the HSV interval of the targeted color is
too narrow, then the algorithm might miss the ball; if the interval is too
wide, then other big red-shaded objects in the camera image will be detected as
too narrow, the algorithm might miss the ball; if the interval is too
wide, other big red-shaded objects in the camera image will be detected as
the ball. A possible approach to alleviate these issues to a certain degree
will be presented further in the section \ref{p sec field detect}. To
conclude, we found this algorithm to be robust enough for our purposes, if the
conclude, we found this algorithm to be robust enough for our purposes, if a
sensible color calibration was provided.
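A minimal version of this pipeline, written with OpenCV, could look as
follows. The HSV interval is a placeholder for the calibrated values, and
taking the minimal enclosing circle of the largest contour is one plausible
way to obtain the center and the radius; the actual implementation may differ
in such details.
\begin{verbatim}
# Minimal OpenCV sketch of the ball detection pipeline; the HSV bounds
# are placeholders and have to come from the color calibration.
import cv2
import numpy as np

LOWER_RED = np.array([0, 120, 80])      # placeholder calibration
UPPER_RED = np.array([10, 255, 255])    # placeholder calibration

def detect_ball(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_RED, UPPER_RED)
    # clean up the binary mask with erosions and dilations
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=2)
    mask = cv2.dilate(mask, kernel, iterations=2)
    # the largest contour is assumed to be the ball (OpenCV 4.x API)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (x, y), radius = cv2.minEnclosingCircle(largest)
    return (x, y), radius
\end{verbatim}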
\section{Goal Detection}
@@ -64,7 +64,7 @@ contours with the largest areas are considered further (in our experiments it
was empirically determined that $N=5$ provides good results). Furthermore, all
convex contours are rejected, since the goal is a highly non-convex shape.
After that, it is checked how many points are necessary to approximate
the remaining contours. The motivation behind this is the following: it is
the remaining contours. The motivation behind this is the following: It is
clearly visible that the goal shape can be perfectly approximated by a line
with 8 straight segments. On an image from the camera, the approximation is
almost perfect when using only 6 line segments, and in some degenerate cases
@@ -74,7 +74,7 @@ of line segments to be approximated is probably not the goal. The preselection
stage ends here, and the remaining candidates are passed to the scoring
function.
The scoring function calculates, how different are the properties of the
The scoring function calculates how different the properties of the
candidates are from the properties that an idealized goal contour is expected
to have. The evaluation is based on two properties. The first
property is based on the observation that the area of the goal contour is much
@@ -90,7 +90,7 @@ scoring function can then look like the following:
The contour that minimizes the scoring function while keeping its value under
a certain threshold is considered the goal. If no contour scores below the
threshold, then the algorithm assumes that no goal was found. An important note
threshold, the algorithm assumes that no goal was found. An important note
is that the algorithm is designed in such a way that the preselection and
scoring are modular, which means that the current simple scoring function can
later be replaced by a function with a better heuristic, or even by some
@@ -104,7 +104,7 @@ Figure \ref{p figure goal-detection} demonstrates the algorithm in action. On
the right is the binary mask with all found contours. On the left are the goal
and one contour that passed preselection but was rejected during scoring.
One downside of this algorithm, is that in some cases the field lines
One downside of this algorithm is that in some cases the field lines
might appear to have the same properties that the goal contour is expected to
have; therefore, the field lines can be mistaken for the goal. We will describe
how we dealt with this problem in the section \ref{p sec field detect}.
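The preselection stage maps naturally onto OpenCV primitives, as sketched
below. The approximation tolerance and the maximum number of line segments
are placeholders, and the scoring step is only indicated.
\begin{verbatim}
# Schematic sketch of the goal contour preselection; the tolerance and
# the segment limit are placeholders, scoring is only indicated.
import cv2

N = 5                # number of largest contours that are kept
MAX_SEGMENTS = 10    # contours needing more line segments are rejected

def preselect_goal_candidates(binary_mask):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only the N contours with the largest area
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:N]
    candidates = []
    for contour in contours:
        if cv2.isContourConvex(contour):
            continue     # the goal is a highly non-convex shape
        approx = cv2.approxPolyDP(contour,
                                  0.01 * cv2.arcLength(contour, True), True)
        if len(approx) > MAX_SEGMENTS:
            continue     # too many segments: probably not the goal
        candidates.append(contour)
    # the surviving candidates are then passed to the scoring function
    return candidates
\end{verbatim}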


@@ -5,23 +5,23 @@ completed, which are:
\begin{enumerate}
\item Ball detection;
\item Ball detection
\item Goal detection;
\item Goal detection
\item Field detection;
\item Field detection
\item Turning to ball;
\item Turning to ball
\item Distance measurement;
\item Distance measurement
\item Approach planning;
\item Approach planning
\item Ball approach;
\item Ball approach
\item Goal alignment;
\item Goal alignment
\item Ball alignment;
\item Ball alignment
\item Kick.


@@ -7,29 +7,29 @@ cm tall. Some of its characteristics are:
\begin{itemize}
\item Two HD-cameras on the head;
\item Two HD-cameras on the head
\item An ultrasonic rangefinder on the body;
\item An ultrasonic rangefinder on the body
\item An inertial navigation unit (accelerometer and gyroscope);
\item An inertial navigation unit (accelerometer and gyroscope)
\item Internet connectivity over Ethernet cable or 802.11g WLAN;
\item Internet connectivity over Ethernet cable or 802.11g WLAN
\item Single-core Intel Atom CPU and 1 GB of RAM;
\item Single-core Intel Atom CPU and 1 GB of RAM
\item Programmable joints with overall 25 degrees of freedom;
\item Programmable joints with overall 25 degrees of freedom
\item Speakers;
\item Speakers
\item 60 to 90 minutes battery life.
\end{itemize}
It can be seen from the specifications list that the multitude of sensors and
interfaces makes Nao an attractive development platform, suitable for the task
of playing soccer. However, relatively weak CPU and a low amount of RAM require
interfaces make the Nao an attractive development platform, suitable for the task
of playing soccer. However, a relatively weak CPU and a low amount of RAM require
the programs running on the robot to be resource-efficient, which had to be
taken into into account during our work on the project.
taken into account during our work on the project.
\section{Software}
@@ -58,13 +58,12 @@ to ensure their proper functioning on the robot.
Here we will briefly discuss what alternative options were available for the
choice of the base framework, and why we decided not to use those. One
available option was the official \textit{NAOqi C++ SDK}. Being based on the
C++ language, this SDK can naturally be expected to have better performance and
be more resource-efficient, than the Python-based version. We still chose the
C++ language, this SDK can naturally be expected to have better performance and to be more resource-efficient than the Python-based version. We still chose the
Python SDK because the complexity of the C++ language makes it less suitable
for fast prototyping. It is also worth noting that we never really hit
performance constraints that could not have been overcome by refactoring our
code, but in the future it might be reasonable to migrate
some of the portions of it to C++.
some portions of it to C++.
Another big alternative is \textit{ROS} \cite{ros} (Robot Operating System).
ROS is a collection of software targeted at robot development, and there exists
@@ -79,7 +78,7 @@ in ROS, it should be possible in the future to incorporate our work into ROS.
Finally, as was already mentioned in the introduction, \textit{B-Human
Framework} is a popular choice for beginners, thanks to the quality of the
algorithms and good documentation. However, B-Human has been in development
over many years and is therefore a very complex system. The amount time needed
over many years and is therefore a very complex system. The amount of time needed
to get familiar with the code, and then to incorporate our changes would have
been too big, for this reason we decided to use the simpler option as a
been too big. For this reason we decided to use the simpler option as a
starting point.
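For illustration, the few lines below show what working with the Python SDK
looks like in practice, which is part of the reason why it suits fast
prototyping; the IP address is a placeholder for the robot's address.
\begin{verbatim}
# Minimal usage example of the NAOqi Python SDK; the IP is a placeholder.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # placeholder
PORT = 9559                 # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
motion = ALProxy("ALMotion", ROBOT_IP, PORT)

tts.say("Ready to play")
motion.moveInit()
motion.moveTo(0.2, 0.0, 0.0)   # walk 20 cm straight ahead
\end{verbatim}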