From c0c85bf0179dfaace31f2966b0765fdf3c3cdb5d Mon Sep 17 00:00:00 2001 From: jonas Date: Wed, 8 Aug 2018 22:59:48 +0200 Subject: [PATCH 1/5] started search for spelling mistakes --- documentation/introduction.tex | 12 ++++++------ documentation/perception.tex | 2 +- documentation/tools.tex | 26 +++++++++++++------------- 3 files changed, 20 insertions(+), 20 deletions(-) diff --git a/documentation/introduction.tex b/documentation/introduction.tex index e9b1dbd..4a173b9 100644 --- a/documentation/introduction.tex +++ b/documentation/introduction.tex @@ -3,22 +3,22 @@ RoboCup \cite{robocup} is an international competition in the field of robotics, the ultimate goal of which is to win a game of soccer against a human team by the middle of the 21st century. The motivation behind this objective is -the following: it is impossible to achieve such an ambitious goal with the +the following: It is impossible to achieve such an ambitious goal with the current state of technology, which means that the RoboCup competitions will drive scientific and technological advancement in such areas as computer vision, mechatronics and multi-agent cooperation in complex dynamic environments. The RoboCup teams compete in five different leagues: Humanoid, Standard Platform, Medium Size, Small Size and Simulation. Our work in this -semester was based on the rules of the \textit{Standard Platform league}. In +semester was based on the rules of the \textit{Standard Platform League}. In this league all teams use the same robot \textit{Nao}, which is being produced -by the SoftBank Robotics. We will describe the capabilities of this robot in +by SoftBank Robotics. We will describe the capabilities of this robot in more detail in the next chapter. A couple of words need to be said about the state-of-the-art. One of the most notable teams in the Standard Platform League is \textit{B-Human} \cite{bhuman}. 
This team represents the University of Bremen and in the last nine years they won the international RoboCup competition six times and twice -were the runner-up. The source code of the framework that B-Human use for +were the runner-up. The source code of the framework that B-Human uses for programming their robots is available on GitHub, together with an extensive documentation, which makes the B-Human framework a frequent starting point for RoboCup beginners. @@ -35,11 +35,11 @@ effective goal scoring will bring the team closer to victory. Secondly, in order to score a goal, many problems and tasks need to be solved, which we will describe in close detail in the next chapter. The work on these tasks would allow us to acquire new competences, which we could then use to complement the -RoboCup team of TUM. Finally, this objective encompasses many disciplines, such +RoboCup team of the TUM. Finally, this objective encompasses many disciplines, such as object detection, mechatronics or path planning, which means that working on it might give us a chance to contribute to the research in these areas. Having said that, we hope that our project will be a positive contribution to -the work, being done at the Institute for Cognitive Systems, and that this +the work being done at the Institute for Cognitive Systems and that this report will help future students to get familiar with our results and continue our work. diff --git a/documentation/perception.tex b/documentation/perception.tex index fc1d13c..6bb008c 100644 --- a/documentation/perception.tex +++ b/documentation/perception.tex @@ -16,7 +16,7 @@ and are assumed to be the center and the radius of the ball. \begin{figure}[ht] \includegraphics[width=\textwidth]{\fig ball-detection} - \caption{Ball detection. On the right is the binary mask} + \caption[Ball detection]{Ball detection. 
On the right is the binary mask} \label{p figure ball-detection} \end{figure} diff --git a/documentation/tools.tex b/documentation/tools.tex index d2b57c2..242ab5f 100644 --- a/documentation/tools.tex +++ b/documentation/tools.tex @@ -7,38 +7,38 @@ cm tall. Some of its characteristics are: \begin{itemize} -\item Two HD-cameras on the head; +\item Two HD-cameras on the head -\item An ultrasonic rangefinder on the body; +\item An ultrasonic rangefinder on the body -\item An inertial navigation unit (accelerometer and gyroscope); +\item An inertial navigation unit (accelerometer and gyroscope) -\item Internet connectivity over Ethernet cable or 802.11g WLAN; +\item Internet connectivity over Ethernet cable or 802.11g WLAN -\item Single-core Intel Atom CPU and 1 GB of RAM; +\item Single-core Intel Atom CPU and 1 GB of RAM -\item Programmable joints with overall 25 degrees of freedom; +\item Programmable joints with overall 25 degrees of freedom -\item Speakers; +\item Speakers -\item 60 to 90 minutes battery life. +\item 60 to 90 minutes battery life \end{itemize} It can be seen from the specifications list, that the multitude of sensors and -interfaces makes Nao an attractive development platform, suitable for the task -of playing soccer. However, relatively weak CPU and a low amount of RAM require +interfaces make the Nao an attractive development platform, suitable for the task +of playing soccer. However, a relatively weak CPU and a low amount of RAM require the programs running on the robot to be resource-efficient, which had to be -taken into into account during our work on the project. +taken into account during our work on the project. \section{Software} In our project we used \textit{NAOqi OS} as an operating system for the robot. 
-This is a standard operating system for Nao robots based on Gentoo Linux, and +This is the standard operating system for Nao robots based on Gentoo Linux, and it can handle all aspects of robot control, such as reading the sensors, moving the robot and establishing the network connection. -As a framework for the implementation of the desired behavior we chose the +As a framework for the implementation of the desired behaviour we chose the official \textit{NAOqi Python SDK} \cite{naoqi-sdk}. We found this framework easy to use, well documented and covering most of the basic functionality that was necessary for us to start working on the project. A further advantage of From e8a7040d543b33e3a07f59a6fc6e14ccdc13595d Mon Sep 17 00:00:00 2001 From: jonas Date: Wed, 8 Aug 2018 23:51:24 +0200 Subject: [PATCH 2/5] continued reading --- documentation/jonas.tex | 4 ++-- documentation/perception.tex | 22 +++++++++++----------- documentation/solintro.tex | 22 +++++++++++----------- documentation/tools.tex | 11 +++++------ 4 files changed, 29 insertions(+), 30 deletions(-) diff --git a/documentation/jonas.tex b/documentation/jonas.tex index 7cb49fb..2c21c48 100644 --- a/documentation/jonas.tex +++ b/documentation/jonas.tex @@ -45,7 +45,7 @@ camera frames. In the \textbf{Head Adjustment} part all necessary head movements are covered. In this part of the algorithm the head is rotated by a calculated angle, which depends on the ball yaw angle provided by the \textbf{Ball - Detection} part. Therefore, the ball should now be aligned in the center of + Detection} part. Therefore, the ball should now be aligned in the centre of the robot's camera frames. If the angle between the head and the rest of the body is now below a specified threshold, the ball is locked and the algorithm stops; otherwise the algorithm continues with \textbf{Body Adjustment}. @@ -56,7 +56,7 @@ current movement. Then the robot starts to rotate around its z-axis depending on the current head yaw angle. 
To ensure that the head and body of the robot are aligned, like in the beginning of the whole algorithm, the head is rotated back into zero yaw. The algorithm continues then with another \textbf{Ball - Detection}, to ensure that the robot is properly centered at the ball. + Detection}, to ensure that the robot is properly centred at the ball. The proposed algorithm provided decent results during many test runs. It allows the robot to align itself to the ball fast, while some strategies are in place diff --git a/documentation/perception.tex b/documentation/perception.tex index 6bb008c..41b4a02 100644 --- a/documentation/perception.tex +++ b/documentation/perception.tex @@ -11,8 +11,8 @@ image itself needs to be transformed into HSV colorspace, so that the regions of interest can be extracted into a \textit{binary mask}. The contours of the regions can then be identified in a mask \cite{contours}, and the areas of the regions can be calculated using the routines from the OpenCV library. The -center and the radius of the region with the largest area are then determined -and are assumed to be the center and the radius of the ball. +centre and the radius of the region with the largest area are then determined +and are assumed to be the centre and the radius of the ball. \begin{figure}[ht] \includegraphics[width=\textwidth]{\fig ball-detection} @@ -30,13 +30,13 @@ binary mask with erosions and dilations, which allowed us to detect the ball even over long distances. The advantages of the presented algorithm are its speed and simplicity. The -major downside is that the careful color calibration is required for the +major downside is that a careful color calibration is required for the algorithm to function properly. 
If the HSV interval of the targeted color is -too narrow, then the algorithm might miss the ball; if the interval is too -wide, then other big red-shaded objects in the camera image will be detected as +too narrow, the algorithm might miss the ball; if the interval is too +wide, other big red-shaded objects in the camera image will be detected as the ball. A possible approach to alleviate these issues to a certain degree will be presented further in section \ref{p sec field detect}. To -conclude, we found this algorithm to be robust enough for our purposes, if the +conclude, we found this algorithm to be robust enough for our purposes, if a sensible color calibration was provided. \section{Goal Detection} @@ -64,7 +64,7 @@ contours with the largest areas are considered further (in our experiments it was empirically determined that $N=5$ provides good results). Furthermore, all convex contours are rejected, since the goal is a highly non-convex shape. After that, a check is performed to determine how many points are necessary to approximate -the remaining contours. The motivation behind this is the following: it is +the remaining contours. The motivation behind this is the following: It is clearly visible that the goal shape can be perfectly approximated by a line with 8 straight segments. On an image from the camera, the approximation is almost perfect when using only 6 line segments, and in some degenerate cases @@ -74,7 +74,7 @@ of line segments to be approximated is probably not the goal. The preselection stage ends here, and the remaining candidates are passed to the scoring function. -The scoring function calculates, how different are the properties of the +The scoring function calculates how different the properties of the candidates are from the properties that an idealized goal contour is expected to have. The evaluation is based on two properties. 
The first property is based on the observation that the area of the goal contour is much @@ -90,7 +90,7 @@ scoring function can then look like the following: The contour that minimizes the scoring function while keeping its value under a certain threshold is considered the goal. If no contour scores below the -threshold, then the algorithm assumes that no goal was found. An important note +threshold, the algorithm assumes that no goal was found. An important note is that the algorithm is designed in such a way that the preselection and scoring are modular, which means that the current simple scoring function can later be replaced by a function with a better heuristic, or even by some @@ -104,7 +104,7 @@ Figure \ref{p figure goal-detection} demonstrates the algorithm in action. On the right is the binary mask with all found contours. On the left are the goal, and one contour that passed preselection but was rejected during scoring. -One downside of this algorithm, is that in some cases the field lines +One downside of this algorithm is that in some cases the field lines might appear to have the same properties that the goal contour is expected to have; therefore the field lines can be mistaken for the goal. We will describe how we dealt with this problem in section \ref{p sec field detect}. @@ -129,7 +129,7 @@ objects on the field are properly consumed. This rather simple field detection has two important applications. The first one is that the robot should be aware of where the field is, so that it doesn't try to walk away from the field. Due to time constraints, we didn't implement -this part of the behavior. The second application of field detection is the +this part of the behaviour. The second application of field detection is the improvement of the quality of goal and ball recognition. As was mentioned in the section on ball detection, the current algorithm might get confused if there are any red objects in the robot's field of view. 
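The modular preselection/scoring split described in this hunk can be sketched as follows. This is a minimal illustration rather than the actual project code: the property values, their ideal values, the weights and the threshold are hypothetical placeholders, since the concrete property definitions are only summarized in the text.

```python
# Sketch of the modular scoring stage described above.  The numbers are
# hypothetical placeholders; the report's two real contour properties
# would be plugged in here.

IDEAL_PROPERTIES = (0.35, 8.0)   # assumed ideal values for the two properties
WEIGHTS = (1.0, 0.5)             # assumed relative importance of each property

def score(candidate):
    """Weighted deviation of a candidate's properties from the ideal.

    `candidate` is a tuple of two precomputed property values; an
    idealized goal contour would score exactly 0.
    """
    return sum(w * abs(p - ideal)
               for w, p, ideal in zip(WEIGHTS, candidate, IDEAL_PROPERTIES))

def select_goal(candidates, threshold=1.0):
    """Pick the candidate minimizing the score, or None if every
    candidate scores above the threshold (i.e. no goal was found)."""
    best = min(candidates, key=score, default=None)
    if best is None or score(best) > threshold:
        return None
    return best
```

Because preselection and scoring communicate only through the candidate list, the scoring function can later be swapped for a better heuristic or a learned classifier, exactly as the text suggests.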
However, there diff --git a/documentation/solintro.tex b/documentation/solintro.tex index cbd57d7..9f505df 100644 --- a/documentation/solintro.tex +++ b/documentation/solintro.tex @@ -5,29 +5,29 @@ completed, which are: \begin{enumerate} - \item Ball detection; + \item Ball detection - \item Goal detection; + \item Goal detection - \item Field detection; + \item Field detection - \item Turning to ball; + \item Turning to ball - \item Distance measurement; + \item Distance measurement - \item Approach planning; + \item Approach planning - \item Ball approach; + \item Ball approach - \item Goal alignment; + \item Goal alignment - \item Ball alignment; + \item Ball alignment - \item Kick. + \item Kick \end{enumerate} In this chapter we will give our solutions to the problems posed by each of the milestones, and at the end the resulting goal scoring strategy will be presented. We will now start with the lower level detection milestones and -will gradually introduce higher level behaviors. +will gradually introduce higher level behaviours. diff --git a/documentation/tools.tex b/documentation/tools.tex index 242ab5f..a6e8c0e 100644 --- a/documentation/tools.tex +++ b/documentation/tools.tex @@ -56,15 +56,14 @@ to ensure their proper functioning on the robot. \section{Rejected Software Alternatives} Here we will briefly discuss what alternative options were available for the -choice of the base framework, and why we decided not to use those. One +choice of the base framework and why we decided not to use those. One available option was the official \textit{NAOqi C++ SDK}. Being based on the -C++ language, this SDK can naturally be expected to have better performance and -be more resource-efficient, than the Python-based version. We still chose the +C++ language, this SDK can naturally be expected to have better performance and to be more resource-efficient than the Python-based version. 
We still chose the Python SDK because C++ is not particularly suitable for fast prototyping, due to the complexity of the language. It is also worth noting that we never really hit performance constraints that couldn't have been overcome by refactoring our code, but in the future it might be reasonable to migrate -some of the portions of it to C++. +some portions of it to C++. Another big alternative is \textit{ROS} \cite{ros} (Robot Operating System). ROS is a collection of software targeted at robot development, and there exists @@ -79,7 +78,7 @@ in ROS, it should be possible in the future to incorporate our work into ROS. Finally, as was already mentioned in the introduction, \textit{B-Human Framework} is a popular choice for beginners, thanks to the quality of the algorithms and good documentation. However, B-Human has been in development -over many years and is therefore a very complex system. The amount time needed +over many years and is therefore a very complex system. The amount of time needed to get familiar with the code, and then to incorporate our changes would have -been too big, for this reason we decided to use the simpler option as a +been too big. For this reason we decided to use the simpler option as a starting point. From 2fff93fc2e0fbd9fe624fcb6fb04b2b5c1eaccd4 Mon Sep 17 00:00:00 2001 From: ga46zel Date: Thu, 9 Aug 2018 06:51:55 +0200 Subject: [PATCH 3/5] Add new file --- documentation/Seif | 52 ++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100644 documentation/Seif diff --git a/documentation/Seif b/documentation/Seif new file mode 100644 index 0000000..451e315 --- /dev/null +++ b/documentation/Seif @@ -0,0 +1,52 @@ +\section{Ball Alignment} + +Now that the robot has aligned itself with the ball and the goal, it has to move to the +right position, from which it can perform the kick. 
Depending on the situation, it +would have been feasible to program the robot to automatically select which foot to kick the ball with; +however, due to time constraints we decided to program the robot to kick only with the left foot. +In order for the robot to correctly position its left foot in front of the ball, +we identified the target region in which the ball should appear +in the robot’s lower camera image, as shown in figure \ref{p figure ball-alignment}. +We then experimentally determined the extents of this region. +The algorithm is therefore for the robot to gradually adjust its position in small steps +until the ball image reaches the target region, which triggers the robot to perform the kick. \\ +Our tests have shown that this method was quite robust and gave consistent results. +We registered no case where the robot missed the ball or hit it with the edge of the foot.\\ + +\begin{figure}[ht] + \includegraphics[width=\textwidth]{\fig ball-align} + \caption{Ball alignment} + \label{p figure ball-alignment} +\end{figure} + + +\section{Kick} + +The final milestone in the goal scoring project is naturally the kick. Before +we started working on the kick, we set the requirements that our +implementation must meet. Firstly and most importantly, the robot shouldn't +fall down during or after performing the kick. Secondly, the kick should be effective: +ideally, only one attempt would be necessary for the ball to reach the goal. +Consequently, we opted for a powerful kick which can cover long distances. \\ + +As shown in figure \ref{p figure kick}, to obtain a strong kick, the robot first uses its ankle joints to shift +its weight to the base leg, to compensate for gravity and to avoid any collision between the kicking foot and the floor. +After this, the robot is able to lift the +kicking leg to achieve a stronger swing. Finally, the robot performs the swing and returns +to the standing position safely. 
Both raising the leg and doing the swing require +precise coordinated joint movements, so we had to conduct experiments to +establish the correct joint angles and the movement speed. \\ + +An important drawback of our implementation is that the swing makes the whole +process slower, but we weren't able to design a strong and stable kick without +using the swing. Nevertheless, the tests that we performed have shown that our +implementation satisfies our requirements, and hence the last milestone was +successfully completed.\\ + + +\begin{figure}[ht] +\includegraphics[width=\textwidth]{\fig kick} +\caption{Kick sequence} +\label{p figure kick} +\end{figure} + From e33fca237bf7a402b21811c692ac87b84fe3c516 Mon Sep 17 00:00:00 2001 From: ga46zel Date: Thu, 9 Aug 2018 06:53:12 +0200 Subject: [PATCH 4/5] Update Seif --- documentation/{Seif => Seif.tex} | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename documentation/{Seif => Seif.tex} (100%) diff --git a/documentation/Seif b/documentation/Seif.tex similarity index 100% rename from documentation/Seif rename to documentation/Seif.tex From f322beed81d34e95d077b6ea98b3c4bd29863bf9 Mon Sep 17 00:00:00 2001 From: jonas Date: Thu, 9 Aug 2018 08:37:09 +0200 Subject: [PATCH 5/5] spell checking --- documentation/jonas.tex | 46 +++++++++++++++++++++++------------------ 1 file changed, 26 insertions(+), 20 deletions(-) diff --git a/documentation/jonas.tex b/documentation/jonas.tex index 2c21c48..db27f43 100644 --- a/documentation/jonas.tex +++ b/documentation/jonas.tex @@ -72,7 +72,7 @@ start the \textbf{Turn to Ball algorithm} again. %Follow the ball always -> problem: movement while walking %Describe in more Detail??? Are all steps in can not see the ball executed every time? %Mention stand up - +\newpage \section{Distance Measurement} \label{j sec distance measurement} @@ -97,13 +97,13 @@ The distance measurement will now be described. 
At first, the robot is brought to a defined stand-up posture, to ensure that the distance calculations are accurate. The current camera frame is then used to estimate the angle $\Phi_{\mathrm{meas}}$ between the position of the -ball and the center of the camera frame. In the stand-up position, the top +ball and the centre of the camera frame. In the stand-up position, the top camera of the robot is not aligned parallel to the floor. There is -therefore an offset angle for the center of the camera frame, which has to be +therefore an offset angle for the centre of the camera frame, which has to be considered in the calculations. As seen in figure \ref{j figure distance measurement} $ \Phi_{\mathrm{ball}} $ and $ -\Phi_{\mathrm{meas}}+\Phi_{\mathrm{cam}} $ are alternate interior angles -therefore the following equations holds: +\Phi_{\mathrm{meas}}+\Phi_{\mathrm{cam}} $ are alternate interior angles. +Therefore, the following equation holds: \begin{equation} \Phi_{\mathrm{ball}} = \Phi_{\mathrm{meas}}+\Phi_{\mathrm{cam}} \; . \end{equation} @@ -143,7 +143,7 @@ approach path. \end{figure} The task is solved as follows. At the beginning the robot is in the standing -position and the ball is in the center of the camera view. As the position of +position and the ball is in the centre of the camera view. As the position of the ball is therefore known, it is important to find out where the goal is, in order to determine an appropriate approach path. The robot will therefore rotate its head, until it is able to recognize the goal in the view of its top camera @@ -155,14 +155,14 @@ head, until it is able to recognize the goal in the view of its top camera \label{j figure choose-approach} \end{figure} -Using the position of the center of the goal, the angle between the ball and +Using the position of the centre of the goal, the angle between the ball and the goal is estimated. Depending on the value of the angle, different approach -directions are chosen. 
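The angle relation above translates directly into a distance estimate. The sketch below uses made-up values for the camera height and offset angle, and assumes that the elided final step of the section recovers the ground distance from the camera height via the tangent of $\Phi_{\mathrm{ball}}$:

```python
import math

CAMERA_HEIGHT = 0.45          # assumed top-camera height above the floor (m)
PHI_CAM = math.radians(1.2)   # assumed fixed downward offset of the top camera

def ball_distance(phi_meas):
    """Ground distance to the ball from the measured frame angle.

    phi_meas is the angle between the ball and the centre of the camera
    frame, positive downwards.  Since Phi_ball and Phi_meas + Phi_cam
    are alternate interior angles, Phi_ball = Phi_meas + Phi_cam, and
    the camera height and the ground distance form a right triangle.
    """
    phi_ball = phi_meas + PHI_CAM
    return CAMERA_HEIGHT / math.tan(phi_ball)
```

A ball that appears lower in the frame (larger measured angle) is correctly reported as closer to the robot.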
In the figure \ref{j figure choose-approach}, the goal +directions are chosen. In figure \ref{j figure choose-approach}, the goal is on the right side of the ball. It therefore makes sense to approach the ball somewhere from the left side. In the current implementation there are three possible approach directions. The robot could approach the ball either from the left or the right side; or if the angle between the goal and the ball is -sufficiently small, the robot could also do a straight approach to the ball. As +sufficiently small or the distance between the ball and the robot is sufficiently small, the robot could also do a straight approach to the ball. As the exact approach angle to the ball is calculated in the next part of the approach planning, it's enough for now to decide between those three possible approach directions. @@ -171,7 +171,6 @@ The proposed algorithm worked fine under the consideration of the possible scenarios. As the goal detection algorithm works quite reliably, the appropriate approach direction was found quickly most of the time. -\newpage As the approach direction is now known, the approach angle and the walking distance of the robot have to be estimated. The task is to find an approach @@ -188,12 +187,12 @@ for a later kick. %bdist is hypo and walking distance is hypo The task is solved as follows. Again the robot is in the standing position -and the ball is centered in the camera view of the top camera. The ball +and the ball is centred in the camera view of the top camera. The ball distance has already been estimated as described in section \ref{j sec distance measurement}. To estimate the approach angle and the walking distance, a -desired distance is defined which defines the distance between the robot and +desired distance is set, which defines the distance between the robot and the ball after the walk. Approach angle and walking distance can then be -computed. 
Thereby we considered two different approaches depending on the +computed. We considered three different approaches, depending on the distance between the ball and the robot. If the distance between the robot and the ball is below or equal to a specified threshold, the triangle looks as shown in figure \ref{j figure rdist hypo}. @@ -240,19 +239,26 @@ looks like in figure \ref{j figure bdist hypo}. \end{figure} To calculate the appropriate walking distance, the following formulas estimate -the approaching angle and calculate the distance. +the approach angle and calculate the walking distance, depending on the distance to the ball. \begin{equation} -\Theta_\mathrm{appr}=\arctan\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}} \right) \; \; \mathrm{or} \; \; \arcsin\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}}\right) +\Theta_\mathrm{appr} = +\begin{cases} +\arctan\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}} \right) & \text{for short distances}\\ +\arcsin\left(\frac{\mathrm{Desired\ distance}}{\mathrm{ball\ distance}}\right) & \text{for long distances} +\end{cases} \end{equation} \begin{equation} - \mathrm{walking\ distance}=\frac{\mathrm{ball\ distance}}{\cos(\Theta_\mathrm{appr})} \; \; \mathrm{or} \; \; \frac{\cos(\Theta_\mathrm{appr})}{\mathrm{ball\ distance}} +\mathrm{walking\ distance} = +\begin{cases} +\frac{\mathrm{ball\ distance}}{\cos(\Theta_\mathrm{appr})} & \text{for short distances}\\ +\cos(\Theta_\mathrm{appr}) \cdot \mathrm{ball\ distance} & \text{for long distances} +\end{cases} \end{equation} -If the distance between the robot and the ball is really small, the robot -starts a direct approach to the ball regardless of the position of the goal. -This makes more sense for short distances, than the two approaches stated -above. +As already mentioned, the robot starts a direct approach to the ball, regardless of the position of the goal, if the distance between the robot and the ball is very small. +This makes more sense for sufficiently short distances than the two approaches stated +above. In this case the necessary actions for goal alignment will happen in a 
In this case the neccessary actions for goal alignment will happen in a +As already mentioned, the robot starts a direct approach to the ball regardless of the position of the goal if the distance between the robot and the ball is really small. +This makes more sense for sufficiently short distances, than the two approaches stated +above. In this case the necessary actions for goal alignment will happen in a dedicated goal alignment stage, described in the section \ref{p sec goal align}.