documentation/conclusion.tex (new file, 72 lines)
@@ -0,0 +1,72 @@
\chapter{Conclusion}

\section{Results}

In this section we summarize our most important achievements during the work
on the project. First, we managed to implement robust detection algorithms,
on which we could rely when working on higher-level behaviors. During our
tests there were almost no false detections, i.e.\ foreign objects were not
detected as a ball or a goal. Sometimes the ball or the goal was missed even
when it was in the field of view, which happened due to imprecise color
calibration under changing lighting conditions. Goal detection was one of the
most difficult project milestones, so we are particularly satisfied with the
resulting performance. It is worth mentioning that with the current
algorithm, a successful detection does not even require the whole goal to be
in the camera image.

Another important achievement is the overall system robustness. In our tests
the robot could successfully reach the ball, perform the necessary
alignments, and kick the ball. When the robot decided that it should kick the
ball, in the majority of cases the kick was successful and the ball reached
the target. We performed these tests from many starting positions and with
many relative positions of the ball and the goal. Naturally, we put some
constraints on the problem, but within them
\todo{smth about constraints and such bullshit}.

Furthermore, we managed not only to make the whole approach robust, but also
worked on making the procedure fast, and the approach planning was a crucial
element of this. In the project's early stages, the robot could not approach
the ball from the side depending on the goal position; instead, it always
walked towards the ball directly and aligned to the goal afterwards. The
tests have shown that in such a configuration the goal alignment was actually
the longest phase and could take over a minute. We then introduced the
approach planning, and as a result the goal alignment stage could in many
scenarios be eliminated completely, which greatly reduced the execution
times.

Finally, \todo{the kick was nice}.

\section{Future Work}

With our objective for this semester completed, there still remains vast room
for improvement. Some of the most interesting topics for future work will now
be presented.

The first important topic is self-localization. Currently our robot is
completely unaware of its position on the field, but if such information
could be obtained, it could be leveraged to make path planning more effective
and precise.

Another important capability that our robot currently lacks is obstacle
awareness, which would be unacceptable in a real RoboCup soccer game. Making
the robot aware of the obstacles on the field would require obstacle
detection to be implemented, as well as some changes to the path planning
algorithms, which makes this task an interesting project on its own.

A further capability that could be useful for the striker is the ability to
perform different kicks depending on the situation. For example, if the robot
could perform a sideways kick, then the goal alignment would in many
situations be unnecessary, which would reduce the time needed to score a
goal.

In this semester we concentrated on the ``free-kick'' situation, so our robot
can perform its tasks only in the absence of other players, and only when the
ball is not moving. Another constraint that we imposed on our problem is that
the ball is relatively close to the goal, and that the ball is closer to the
goal than the robot, so that the robot does not have to run away from the
goal. To be useful in a real game, the striker should be able to handle more
complex situations. For example, the \textit{dribbling} skill could help the
robot to avoid the opponents and to bring the ball into a convenient striking
position.

Finally, we realized that the built-in locomotion functions of the NAOqi SDK
produce fairly slow movements and do not allow the direction of movement to
be changed fluently, which results in pauses whenever the robot needs to move
in another direction. This leads us to the thought that a custom movement
implementation might result in much faster and smoother behavior.
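
As an illustration of the latter idea, NAOqi's velocity-based
\texttt{ALMotion.moveToward} interface could serve as a starting point for
such experiments. The following minimal sketch (the robot's address is a
placeholder) changes the walking direction without an intermediate stop:

\begin{verbatim}
# Minimal sketch: fluent direction changes via velocity control.
import time
from naoqi import ALProxy

motion = ALProxy("ALMotion", "192.168.1.10", 9559)  # placeholder address
motion.moveInit()                  # bring the robot into a walk-ready pose
motion.moveToward(0.8, 0.0, 0.0)   # walk forward at 80% of maximum speed
time.sleep(2.0)
motion.moveToward(0.0, 0.6, 0.0)   # switch to sideways walking, no stop
time.sleep(2.0)
motion.stopMove()
\end{verbatim}
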
documentation/overview.tex (new file, 18 lines)
@@ -0,0 +1,18 @@
\section{Strategy Overview}

Now that all of the milestones are completed, we will present a short
overview of the whole goal scoring strategy, the block diagram of which can
be found in \todo{learn to do figures and reference them}. At the very
beginning the robot will detect the ball and turn towards it, as described in
\todo{where}. After that, the distance to the ball will be calculated, the
goal will be detected, and the direction to the goal will be determined. If
the ball is far away \textit{and} the ball and the goal are strongly
misaligned, the robot will try to approach the ball from the appropriate
side; otherwise the robot will approach the ball directly. These approach
steps will be repeated until the robot is close enough to the ball to start
aligning to the goal, but in practice one approach step from the side
followed by a short direct approach should suffice. When the ball is close,
the robot will check whether it is between the goalposts, and will perform
the necessary adjustments if that is not the case. After the ball and the
goal are aligned, the robot will align its foot with the ball and kick it.
For now, we assume that the ball will reach the goal, so the robot can then
finish execution.
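
The following minimal sketch summarizes this decision logic; the
\texttt{robot} interface, its helper methods, and the threshold values are
hypothetical placeholders for the behaviors described above:

\begin{verbatim}
# Hypothetical sketch of the goal-scoring strategy described above; the
# robot interface, helper methods, and thresholds are placeholders.
FAR_THRESHOLD_M = 0.8    # assumed "ball is far away" distance, metres
MISALIGNMENT_DEG = 45.0  # assumed "strongly misaligned" angle, degrees

def striker_behavior(robot):
    robot.detect_ball_and_turn()
    # Approach phase: repeat until close enough to align with the goal.
    while not robot.close_to_ball():
        if (robot.distance_to_ball() > FAR_THRESHOLD_M
                and robot.ball_goal_misalignment_deg() > MISALIGNMENT_DEG):
            robot.approach_ball_from_side()   # approach planning
        else:
            robot.approach_ball_directly()
    # Alignment phase: make sure the ball is between the goalposts.
    while not robot.ball_between_goalposts():
        robot.adjust_goal_alignment()
    robot.align_foot_with_ball()
    robot.kick()
\end{verbatim}
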
@@ -38,9 +38,61 @@ goal is white, and there are generally many white areas in the image from the
robot camera, which have an area larger than that of the image of the goal,
for example the white field lines and the big white wall in the room with the
field. To deal with the multitude of possible goal candidates, we propose the
following heuristic algorithm.

First, all contours around white areas are extracted using a procedure
similar to that described in the section on ball detection. Next, the
\textit{candidate preselection} takes place. During this stage only the $N$
contours with the largest areas are considered further (in our experiments it
was empirically determined that $N=5$ provides good results). Furthermore,
all convex contours are rejected, since the goal is a highly non-convex
shape. After that, a check is performed to determine how many line segments
are necessary to approximate each remaining contour. The motivation behind
this is the following: it is clearly visible that the goal shape can be
perfectly approximated by a polyline with 8 straight segments. On an image
from the camera, the approximation is almost perfect when using only 6 line
segments, and in some degenerate cases, when the input image is noisy, it
might be necessary to use 9 line segments to approximate the shape of the
goal. Any contour that requires a different number of line segments to be
approximated is probably not the goal. The preselection stage ends here, and
the remaining candidates are passed to the scoring function.
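
As an illustration, the preselection stage could be implemented with OpenCV
roughly as follows; \texttt{white\_mask} is assumed to be a binary image of
white pixels from the color calibration, and the approximation tolerance is a
placeholder value that would need tuning:

\begin{verbatim}
# Hedged sketch of the candidate preselection stage; white_mask is
# assumed to be a binary image of white pixels.
import cv2

def preselect_goal_candidates(white_mask, n=5):
    # [-2] works for both the 2- and 3-value return of findContours.
    contours = cv2.findContours(white_mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    # Keep only the n largest white areas.
    largest = sorted(contours, key=cv2.contourArea, reverse=True)[:n]
    candidates = []
    for contour in largest:
        # Approximate with a polyline; the 2% tolerance is assumed.
        eps = 0.02 * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, eps, True)
        if cv2.isContourConvex(approx):
            continue  # the goal is a highly non-convex shape
        if not 6 <= len(approx) <= 9:
            continue  # the goal needs 6 to 9 segments to approximate
        candidates.append(contour)
    return candidates
\end{verbatim}
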
The scoring function calculates how much the properties of the candidates
differ from those an idealized goal contour is expected to have. The
evaluation is based on two properties. The first is the observation that the
area of the goal contour is much smaller than the area of its
\textit{enclosing convex hull}. The second is that all points of the goal
contour must lie close to the enclosing convex hull. The mathematical
formulation of the scoring function looks like the following
\todo{mathematical formulation}:
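
One possible formulation consistent with these two properties (a sketch; the
weights $\alpha$ and $\beta$ would be tuning parameters) could be
\[
s(C) = \alpha\,\frac{A(C)}{A(\mathrm{hull}(C))}
     + \beta\,\frac{1}{|C|}\sum_{p \in C} d\bigl(p, \mathrm{hull}(C)\bigr),
\]
where $A(\cdot)$ denotes the area, $\mathrm{hull}(C)$ the enclosing convex
hull of the contour $C$, and $d(p, \mathrm{hull}(C))$ the distance from a
contour point $p$ to the hull boundary; the contour with the smallest value
of $s(C)$ is then the most goal-like candidate.
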
The contour that minimizes the scoring function while keeping its value under
a certain threshold is considered the goal. If no contour scores below the
threshold, the algorithm assumes that no goal was found. Our tests have shown
that, when the white color is calibrated correctly, the algorithm can detect
the goal almost without mistakes whenever the goal is present in the image.
The downside of this algorithm is that in some cases the field lines might
exhibit the same properties that the goal contour is expected to have, and
can therefore be mistaken for the goal. We describe how we dealt with this
problem in the following section.

\section{Field detection}

The algorithm for field detection is very similar to the ball detection
algorithm, but some concepts introduced in the previous section are also used
here. The algorithm extracts the biggest green area in the image, finds its
enclosing convex hull, and assumes everything inside the hull to be the
field.
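
A minimal sketch of this procedure using OpenCV could look as follows;
\texttt{green\_mask} is assumed to be a binary image of green pixels from the
color calibration:

\begin{verbatim}
# Hedged sketch of the field detection; green_mask is assumed to be a
# binary image of green pixels.
import cv2
import numpy as np

def detect_field_mask(green_mask):
    contours = cv2.findContours(green_mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return np.zeros_like(green_mask)  # no green: empty field mask
    # Take the biggest green area and its enclosing convex hull.
    hull = cv2.convexHull(max(contours, key=cv2.contourArea))
    # Everything inside the hull is assumed to be the field.
    field_mask = np.zeros_like(green_mask)
    cv2.fillConvexPoly(field_mask, hull, 255)
    return field_mask
\end{verbatim}
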
This rather simple field detection has two important applications. The first
is that the robot should be aware of where the field is, so that it does not
try to walk away from it; due to time constraints, we did not implement this
part of the behavior. The second application is the improvement of the
quality of goal and ball recognition. As was mentioned in the section on ball
detection, the current algorithm might get confused if there are any red
objects in the robot's field of view. However, there should not be any red
objects on the field except the ball itself, so if everything outside the
field is ignored when trying to detect the ball, the probability of
identifying a wrong object decreases. Conversely, the problem with the goal
detection algorithm was that it could be distracted by the field lines, so if
everything on the field is ignored during goal recognition, the accuracy can
be improved.
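
Continuing the sketch above, the field mask could gate both detectors roughly
like this (\texttt{red\_mask} and \texttt{white\_mask} are again assumed
binary color masks):

\begin{verbatim}
# Sketch: restrict the ball search to the field and the goal search to
# the area outside it, using the hypothetical detect_field_mask above.
field_mask = detect_field_mask(green_mask)
ball_search_area = cv2.bitwise_and(red_mask, field_mask)
goal_search_area = cv2.bitwise_and(white_mask, cv2.bitwise_not(field_mask))
\end{verbatim}
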
@@ -7,22 +7,16 @@
\usepackage[hidelinks]{hyperref}
\usepackage{glossaries}

\newcommand{\fig}{figures/}
\usepackage{graphicx}
\usepackage{tikz}
\usetikzlibrary{quotes,angles}

% lots of packages are included in the preamble, look there for more
% information about this.

\include{robotum_report.preamble}

% if you don't know where something can be found, click on the pdf, and
% Overleaf will open the file where it is described

@@ -31,14 +25,12 @@
\vspace*{6mm}
}

\author{Pavel Lutskov\\Jonas Bubenhagen\\Yuankai Wu\\Seif Ben Hamida\\Ahmed Kamoun}
\supervisors{Mohsen Kaboli\\and the Tutor (insert name)}
\submitdate{August 2018}

\maketitle % this generates the title page. More in icthesis.sty

\preface
% \input{Acknowledgements/Acknowledgements}

@@ -46,31 +38,14 @@
% \input{Introduction/Introduction}

\setstretch{1.2} % set line spacing
\input{introduction}
\input{tools}
\input{solintro}
\input{perception}
\input{jonas}
\input{overview}
\input{conclusion}

\begin{appendices}
%\input{Appendix/BehaviorImplementation}