Added figures to perception and overview
BIN documentation/figures/ball-detection.png (new file, 366 KiB)
BIN documentation/figures/colorpicker.png (new file, 455 KiB)
BIN documentation/figures/combined-detection.png (new file, 525 KiB)
BIN documentation/figures/field-detection.png (new file, 433 KiB)
BIN documentation/figures/goal-detection.png (new file, 588 KiB)
BIN documentation/figures/striker-flowchart.png (new file, 172 KiB)
@@ -1,18 +1,24 @@
 \section{Strategy Overview}
 
+\begin{figure}[ht]
+\includegraphics[width=\textwidth]{\fig striker-flowchart}
+\caption{Overview of the goal scoring strategy}
+\label{p figure strategy-overview}
+\end{figure}
+
 Now that all of the milestones are completed, we will present a short overview
 of the whole goal scoring strategy, the block diagram of which can be found in
-\todo{learn to do figures and reference them}. At the very beginning the robot
-will detect the ball and turn to ball, as described in \todo{where}. After
-that, the distance to the ball will be calculated, the goal will be detected,
-and the direction to goal will be determined. If the ball is far away
-\textit{and} the ball and the goal are strongly misaligned, then the robot will
-try to approach the ball from the appropriate side, otherwise the robot will
-approach the ball directly. These approach steps will be repeated until the
-robot is close enough to the ball to start aligning to the goal, but in the
-practice one step of approach from the side followed by a short direct approach
-should suffice. When the ball is close, the robot will check if it is between
-the goalposts, and will perform necessary adjustments if that's not the case.
-After the ball and the goal are aligned, the robot will align its foot with
-the ball and kick the ball. For now, we assumed that the ball will reach the
-goal and so the robot can finish execution.
+the figure \ref{p figure strategy-overview}. At the very beginning the robot
+will detect the ball and turn to it, as described in the section \ref{j sec
+turning to ball}. After that, the distance to the ball will be calculated,
+the goal will be detected, and the direction to the goal will be determined. If
+the ball is far away \textit{and} the ball and the goal are strongly misaligned,
+then the robot will try to approach the ball from the appropriate side;
+otherwise the robot will approach the ball directly. These approach steps will
+be repeated until the robot is close enough to the ball to start aligning to
+the goal, but in practice one step of approach from the side followed by a
+short direct approach should suffice. When the ball is close, the robot will
+check if it is between the goalposts, and will perform the necessary adjustments
+if that's not the case. After the ball and the goal are aligned, the robot will
+align its foot with the ball and kick the ball. For now, we assumed that the
+ball will reach the goal and so the robot can finish execution.
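The decision flow described in this hunk can be condensed into a small sketch. This is illustrative only: the threshold values and the helper name `choose_action` are assumptions, not part of the project's actual API.

```python
# Hypothetical sketch of the goal-scoring strategy loop described above.
# All thresholds and names are illustrative, not the project's real code.

FAR_DISTANCE = 1.0    # metres; beyond this the side approach may be used
MISALIGNMENT = 45.0   # degrees between ball direction and goal direction
CLOSE_ENOUGH = 0.25   # metres; close enough to start aligning to the goal

def choose_action(ball_distance, ball_goal_angle, ball_between_goalposts):
    """Return the next high-level action for the striker."""
    if ball_distance > CLOSE_ENOUGH:
        # Far away AND strongly misaligned: approach from the side.
        if ball_distance > FAR_DISTANCE and abs(ball_goal_angle) > MISALIGNMENT:
            return 'approach_from_side'
        return 'approach_directly'
    if not ball_between_goalposts:
        return 'adjust_alignment'
    return 'align_foot_and_kick'
```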
@@ -1,4 +1,5 @@
 \section{Ball detection}
+\label{p sec ball detection}
 
 The very first task that needed to be accomplished was to detect the ball,
 which is uniformly red-colored and measures about 6 cm in diameter. We decided
@@ -13,6 +14,12 @@ be calculated using the routines from the OpenCV library. The center and the
 radius of the region with the largest area are then determined and are assumed
 to be the center and the radius of the ball.
 
+\begin{figure}[ht]
+\includegraphics[width=\textwidth]{\fig ball-detection}
+\caption{Ball detection. On the right is the binary mask}
+\label{p figure ball-detection}
+\end{figure}
+
 It is sometimes recommended \cite{ball-detect} to eliminate the noise in the
 binary mask by applying a sequence of \textit{erosions} and \textit{dilations},
 but we found that for the task of finding the \textit{biggest} area the noise
@@ -33,6 +40,7 @@ to be robust enough for our purposes, if the sensible color calibration was
 provided.
 
 \section{Goal detection}
+\label{p sec goal detect}
 
 The goal detection presented itself as a more difficult task. The color of the
 goal is white, and there are generally many white areas in the image from the
@@ -41,9 +49,19 @@ example the white field lines and the big white wall in the room with the
 field. To deal with the multitude of the possible goal candidates, we
 propose the following heuristic algorithm.
 
+\begin{figure}[ht]
+\includegraphics[width=\textwidth]{\fig goal-detection}
+\caption{Goal detection. On the right, the binary mask with all found contours.
+On the left, the goal and one contour that passed preselection but was
+rejected during scoring.}
+\label{p figure goal-detection}
+\end{figure}
+
 First, all contours around white areas are extracted by using a procedure
-similar to that described in the section on ball detection. Next, the
-\textit{candidate preselection} takes place. During this stage only $N$
+similar to that described in the section \ref{p sec ball detection}. Unlike in
+the ball detection, the resulting binary mask undergoes some slight erosions
+and dilations, since in the goal shape detection the noise is undesired. Next,
+the \textit{candidate preselection} takes place. During this stage only $N$
 contours with the largest areas are considered further (in our experiments it
 was empirically determined that $N=5$ provides good results). Furthermore, all
 convex contours are rejected, since the goal is a highly non-convex shape.
@@ -64,26 +82,47 @@ have. The evaluation is happening based on two properties. The first property
 is based on the observation that the area of the goal contour is much smaller
 than the area of its \textit{enclosing convex hull} \cite{convex-hull}. The
 second observation is that all points of the goal contour must lie close to the
-enclosing convex hull. The mathematical formulation of the scoring function
-looks like the following \todo{mathematical formulation}:
+enclosing convex hull. The mathematical formulation can then look like the
+following:
+
+\begin{equation*}
+S(c)=\frac{A(c)}{A(Hull(c))}+\sum_{x_i \in c}\min_{h \in Hull(c)}(||x_i-h||)
+\end{equation*}
+
 The contour that minimizes the scoring function, while keeping its value under
 a certain threshold, is considered the goal. If no contour scores below the
-threshold, then the algorithm assumes that no goal was found.
+threshold, then the algorithm assumes that no goal was found. An important
+note is that the algorithm is designed in such a way that the preselection and
+scoring are modular, which means that the current simple scoring function can
+later be replaced by a function with a better heuristic, or even by some
+function that employs machine learning models.
 
 Our tests have shown that when the white color is calibrated correctly, the
 algorithm can detect the goal almost without mistakes when the goal is present
-in the image. The downside of this algorithm, is that in some cases the field
-lines might appear the same properties, that the goal contour is expected to
+in the image. Most irrelevant candidates are normally discarded in
+the preselection stage, and the scoring function improves the robustness
+further. The downside of this algorithm is that in some cases the field lines
+might appear to have the same properties that the goal contour is expected to
 have, therefore the field lines can be mistaken for the goal. We will describe
-how we dealt with this problem, in the following section.
+how we dealt with this problem in the section \ref{p sec field detect}.
 
 \section{Field detection}
+\label{p sec field detect}
 
 The algorithm for the field detection is very similar to the ball detection
-algorithm, but some concepts introduced in the previous section are also used
-here. This algorithm extracts the biggest green area in the image, finds its
-enclosing convex hull, and assumes everything inside the hull to be the field.
+algorithm, but some concepts introduced in the section \ref{p sec goal detect}
+are also used here. This algorithm extracts the biggest green area in the
+image, finds its enclosing convex hull, and assumes everything inside the hull
+to be the field. Here, when we extract the field, we apply strong Gaussian
+blurring and a combination of erosions and dilations to the binary mask, so
+that the objects on the field are properly absorbed into it.
+
+\begin{figure}[ht]
+\includegraphics[width=\textwidth]{\fig field-detection}
+\caption{Field detection}
+\label{p figure field-detection}
+\end{figure}
+
 Such rather simple field detection has two important applications. The first
 one is that the robot should be aware where the field is, so that it doesn't
@@ -98,3 +137,9 @@ the probability of identifying a wrong object decreases. On the other hand, the
 problem with the goal detection algorithm was that it could be distracted by
 the field lines. So, if everything on the field is ignored for goal
 recognition, then the accuracy can be improved.
+
+\begin{figure}[ht]
+\includegraphics[width=\textwidth]{\fig combined-detection}
+\caption{Using field detection to improve ball and goal detection}
+\label{p figure combined-detection}
+\end{figure}
@@ -13,8 +13,7 @@ from .utils import read_config, imresize, hsv_mask, InterruptDelayed
 
 class Colorpicker(object):
 
-    WINDOW_CAPTURE_NAME = 'Object Detection (or not)'
-    WINDOW_DETECTION_NAME = 'Primary Mask'
+    WINDOW_DETECTION_NAME = 'Colorpicker'
 
     def __init__(self, target=None):
        parameters = ['low_h', 'low_s', 'low_v', 'high_h', 'high_s', 'high_v']
@@ -54,7 +53,6 @@ class Colorpicker(object):
         else:
             self.marker = None
 
-        cv2.namedWindow(self.WINDOW_CAPTURE_NAME)
         cv2.namedWindow(self.WINDOW_DETECTION_NAME)
         self.trackers = [
             cv2.createTrackbar(
@@ -101,10 +99,10 @@ class Colorpicker(object):
             )
 
         thr = cv2.cvtColor(thr, cv2.COLOR_GRAY2BGR)
-        thr = self.marker.draw_last_contours(thr)
+        # thr = self.marker.draw_last_contours(thr)
         resulting = np.concatenate((frame, thr), axis=1)
 
-        cv2.imshow(self.WINDOW_CAPTURE_NAME, resulting)
+        cv2.imshow(self.WINDOW_DETECTION_NAME, resulting)
         # cv2.imshow(self.WINDOW_DETECTION_NAME, thr)
         return cv2.waitKey(0 if manual else 50)