Apparently finished appendices

This commit is contained in:
2018-08-08 14:01:26 +02:00
parent 397d580238
commit bb466c3c2b
6 changed files with 105 additions and 27 deletions


@@ -0,0 +1,21 @@
\section{Color calibration}
All our detection algorithms require color calibration, and when the lighting
conditions on the field change, the colors may have to be recalibrated. We
therefore needed a tool that makes this process as simple as possible. For
this reason, we implemented a small OpenCV-based program that we called
\verb|Colorpicker|. The program can use various video sources as well as
still images for calibration. The main interface contains sliders for
adjusting the HSV interval and a video area showing the resulting binary
mask. Colors can be calibrated for three targets: ball, goal and field; the
quality of detection for the currently chosen target is displayed. When the
program is closed, the calibration values are automatically saved to the
settings file \verb|nao_defaults.json|. The interface of the Colorpicker is
shown in figure \ref{p figure colorpicker}.
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig colorpicker}
\caption{Interface of the Colorpicker}
\label{p figure colorpicker}
\end{figure}
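At its core, such a tool only has to threshold the camera image in HSV space
with the current slider values. The following sketch illustrates this
mechanism with OpenCV; it is not the actual Colorpicker source, and the
window name, trackbar names and video source are chosen only for
illustration.
\begin{verbatim}
import cv2
import numpy as np

WINDOW = 'mask'
cv2.namedWindow(WINDOW)
# One low/high trackbar per HSV channel; the values are polled in the
# loop below, so the callback does nothing.
for channel, maximum in (('H', 179), ('S', 255), ('V', 255)):
    cv2.createTrackbar(channel + ' low', WINDOW, 0, maximum,
                       lambda x: None)
    cv2.createTrackbar(channel + ' high', WINDOW, maximum, maximum,
                       lambda x: None)

cap = cv2.VideoCapture(0)   # stand-in for the Nao camera or a video file
while cv2.waitKey(30) != ord('q'):
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    low = np.array([cv2.getTrackbarPos(c + ' low', WINDOW) for c in 'HSV'])
    high = np.array([cv2.getTrackbarPos(c + ' high', WINDOW) for c in 'HSV'])
    # Binary mask of all pixels inside the chosen HSV interval.
    cv2.imshow(WINDOW, cv2.inRange(hsv, low, high))
cap.release()
\end{verbatim}
The actual Colorpicker additionally lets the user switch between the ball,
goal and field targets and writes the chosen intervals to
\verb|nao_defaults.json| on exit.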


@@ -0,0 +1,40 @@
\chapter{Implementation details}
\section{Code organization}
Our code is organized as a standard Python package. The following command can
be used to make the robot run the whole goal scoring sequence:
\begin{verbatim}
python -m pykick
\end{verbatim}
Alternatively, individual modules can be run with the following command:
\begin{verbatim}
python -m pykick.[filename_without_.py]
\end{verbatim}
The main logic of our implementation can be found in the following files:
\begin{itemize}
\item \verb|__main__.py| contains the state machine described in section
\ref{p sec overview}.
\item \verb|striker.py| contains the implementation of higher-level
behaviors, such as lining up the ball and the goal, or turning towards the
ball.
\item \verb|finders.py| contains the implementations of our detection
algorithms.
\item \verb|imagereaders.py| contains convenience classes for capturing video
from various sources, such as the Nao cameras, webcams or video files.
\item \verb|movements.py| implements convenience movement-related functions,
such as walking and kicking.
\item \verb|nao_defaults.json| stores all project-global settings, such as
the IP address of the robot or the color calibration results (see the loading
sketch after this list).
\end{itemize}
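Since the settings file is plain JSON, every module can read it with the
Python standard library. The following sketch only illustrates this; the keys
used here are hypothetical and do not reflect the actual schema of
\verb|nao_defaults.json|.
\begin{verbatim}
import json

with open('nao_defaults.json') as f:
    defaults = json.load(f)

# Hypothetical keys, for illustration only.
robot_ip = defaults['ip']
ball_hsv_interval = defaults['ball_hsv']
\end{verbatim}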


@@ -0,0 +1,19 @@
\section{Video Recording from the Nao Camera}
For the purposes of debugging and for the final presentation, we wanted to
record what the robot sees during program execution. The NAOqi SDK provides a
function to write the camera video to a file, but it only allows capturing
from one camera at a time, which was not optimal for us. We overcame this
limitation by exploiting the fact that the NAOqi SDK does not restrict
reading individual frames from the cameras into memory. During the test runs
we therefore started a separate thread in which frames from both cameras were
read into memory one by one; after the robot had completed its task, the
recorded frame sequences were written to video files with the help of OpenCV.
A downside of this approach is that the frames can only be read at irregular
and unpredictable intervals, so the frame rate of the resulting video could
not be determined, and the playback speed of the videos had to be adjusted
afterwards with video editing software. Furthermore, due to the limited
computational resources of the Nao, the frames could only be captured at low
resolution. However, the quality of the resulting videos was sufficient both
for successful debugging and for the presentation.
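The recording scheme can be sketched as follows. This is not our actual code:
\verb|read_frame| stands in for the frame grab from the NAOqi SDK, only one
camera is shown, and the output file name and the frame rate passed to the
writer are arbitrary illustrative choices.
\begin{verbatim}
import threading

import cv2

cap = cv2.VideoCapture(0)   # stand-in for a Nao camera feed

def read_frame():
    # Placeholder for the actual NAOqi frame grab.
    return cap.read()[1]

frames = []
stop = threading.Event()

def record():
    # Collect frames in memory; the intervals between reads are
    # irregular, so no frame rate is known at this point.
    while not stop.is_set():
        frame = read_frame()
        if frame is not None:
            frames.append(frame)

recorder = threading.Thread(target=record)
recorder.start()
# ... the robot executes its task here ...
stop.set()
recorder.join()

# Write the frames out after the run; the frame rate (10 here) is a
# guess, so the playback speed has to be corrected later.
height, width = frames[0].shape[:2]
writer = cv2.VideoWriter('run.avi', cv2.VideoWriter_fourcc(*'MJPG'),
                         10, (width, height))
for frame in frames:
    writer.write(frame)
writer.release()
\end{verbatim}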


@@ -1,23 +1,25 @@
\section{Text to speech}
During the implementation of our solution for the objective stated in
\ref{sec problem statement}, we included suitable functions to get feedback
about what the robot is currently doing during code execution. In addition to
the text output on the console, we decided to let the robot announce what it
is doing using voice output. We implemented the speech output using the
official Aldebaran NAOqi API \cite{naoqi-sdk}, which provides a
text-to-speech function that, unfortunately, is blocking. To keep the speech
output from influencing the actual execution of the program, we run it in a
separate thread. We also ensured that the robot does not repeat the same
sentence over and over again if it remains in the same state.
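A minimal sketch of this idea, assuming the standard \verb|ALTextToSpeech|
proxy of the NAOqi Python SDK (the class and its details are illustrative,
not our exact implementation):
\begin{verbatim}
import threading

from naoqi import ALProxy

class Speaker(object):
    def __init__(self, ip, port=9559):
        self.tts = ALProxy("ALTextToSpeech", ip, port)
        self.last_sentence = None

    def say(self, sentence):
        # Skip the sentence if it was already said, so the robot does
        # not babble while it stays in the same state.
        if sentence == self.last_sentence:
            return
        self.last_sentence = sentence
        # Run the blocking call in its own thread, so the main control
        # loop is not delayed by the speech output.
        threading.Thread(target=self.tts.say, args=(sentence,)).start()
\end{verbatim}
Calling \verb|say| repeatedly with the same sentence is then a no-op until
the state, and with it the announced sentence, changes.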
\section{Goal confirmation}
It makes sense to let the robot check whether it has actually scored a goal
after performing a goal kick. We therefore implemented a simple goal
confirmation algorithm, which is visualized in figure \ref{j figure goal
confirmation}. The robot tries to find the goal and the ball; if the ball
lies between the two goalposts after the kick, a successful goal kick is
confirmed.
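The decision itself reduces to a comparison of image coordinates. The
following sketch uses hypothetical detection results rather than the actual
interface of our \verb|finders.py| module:
\begin{verbatim}
def goal_scored(ball_x, left_post_x, right_post_x):
    # The goal counts as scored if the ball's horizontal image
    # position lies between the two detected goalposts.
    return left_post_x <= ball_x <= right_post_x
\end{verbatim}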
\begin{figure}[ht]
\includegraphics[width=\textwidth]{\fig goal-confirmation}


@@ -1,6 +1,7 @@
\section{Strategy Overview}
\label{p sec overview}
\begin{figure}
\includegraphics[width=\textwidth]{\fig striker-flowchart}
\caption{Overview of the goal scoring strategy}
\label{p figure strategy-overview}


@@ -17,9 +17,6 @@
\include{robotum_report.preamble}
% if you don't know where something can be found, click on the pdf, and
% Overleaf will open the file where it is described
\begin{document}
\title{\LARGE {\bf Report Project Striker 2018}\\
\vspace*{6mm}
@@ -36,26 +33,24 @@
\generatebody % generates table of contents, list of figures and of tables.
% \input{Introduction/Introduction}
\setstretch{1.2} % set line spacing
\input{introduction} % Introduction
\input{tools} % Hardware and software
\input{solintro} % Our solution intro
\input{perception} % Ball goal and field
% \input{Yuankai}
\input{jonas} % Distance, approach planning
\input{overview} % The complete strategy
\input{conclusion} % Results and future work
\begin{appendices}
%\input{appendix/BehaviorImplementation}
\input{appendix/details} % Code organization
\input{appendix/colorpicker} % Colorpicker
\input{appendix/tts} % Text to speech and goal confirmation
\input{appendix/pov} % Video recording from the Nao camera
\end{appendices}
% Bibliography, see
% https://de.sharelatex.com/learn/Bibliography_management_with_bibtex#Bibliography_management_with_Bibtex
\addcontentsline{toc}{chapter}{Bibliography}
\bibliography{references}{}
\bibliographystyle{IEEEtran}