let's say it's final

2019-03-01 17:45:35 +01:00
parent 745be42d8d
commit 9738f72c20


@@ -11,7 +11,7 @@
\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{subcaption}
\usepackage{hyperref}
\usepackage[colorlinks]{hyperref}
\usepackage{fancyhdr}
\pagestyle{fancy}
@@ -174,16 +174,14 @@ commands. As a confirmation, the NAO repeats the recognized command, or says
Such brevity greatly speeds up the speech-based interaction, compared to the
case in which the NAO talks in full sentences.
\paragraph{Teleoperation Interface}
\paragraph{Calibration}
In order to make our system more robust, we have included a routine to
calibrate it for different users. It can be run in an optional step before
executing the main application. Within this routine different threshold values,
which are required for the ``Human Joystick'' approach that is used to control
the NAO's walker module, as well as various key points, which are needed to
properly map the operator's arm motions to the NAO, are determined.
calibrate it for different users. It can run as an optional step before the
main application is executed. Within this routine, the threshold values
required for the ``Human Joystick'' approach that is used to control the
NAO's walker module, as well as the key points needed to properly map the
operator's arm motions to the NAO, are determined.
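As a rough illustration, the values determined in this step could be written
to the calibration \textit{YAML} file along the following lines (the parameter
names, units and file name are placeholders, not necessarily the actual format
we use):
\begin{verbatim}
import yaml

# Illustrative sketch only: persist the determined calibration values for the
# later stages. Keys, units and the file name are placeholders.
calibration = {
    "buffer_radius": 0.30,   # "Human Joystick" movement threshold [m]
    "max_radius": 0.80,      # distance at which the maximum speed is reached [m]
    "arm_length": 0.65,      # operator arm length, used to scale the body model [m]
    "shoulder_left": [0.0, 0.20, 1.40],   # key points for the arm mapping [m]
    "shoulder_right": [0.0, -0.20, 1.40],
}
with open("calibration.yaml", "w") as f:
    yaml.safe_dump(calibration, f, default_flow_style=False)
\end{verbatim}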
When the module is started, the NAO guides the operator through a number of
recording steps via spoken prompts. After successful completion of the
@@ -198,7 +196,7 @@ developed a teleoperation interface. It allows the operator to receive visual
feedback on the NAO as well as an estimation of the operator's current pose and
of the buffer and movement zones which are needed to navigate the robot.
The NAO-part contains feeds of the top and bottom cameras on the robots head.
The NAO-part contains feeds of the top and bottom cameras on the robot's head.
These were created by subscribing to their respective topics using the
\verb|rqt_gui| package. Moreover, it includes a
visualization of the NAO in \textit{rviz}. For this, the robot's joint positions are
@@ -211,14 +209,14 @@ Furthermore, the interface also presents an estimation of the current pose of
the operator as well as the control zones for our ``Human Joystick'' approach in
an additional \textit{rviz} window. For this, we created a separate node that
repeatedly publishes a model of the operator and the zones consisting of
markers to \textit{rviz}. Initially, the \textit{YAML-file} that contains the
parameters which were determined within the system calibration is read out.
According to those, the size of markers that estimate the control zones are
set. Further, the height of the human model is set to 2.2 times the determined
arm-length of the operator. The size of the other body parts is then scaled
dependent on that height parameter and predefined weights. We tried to match
the proportions of the human body as good as possible with that approach. The
position of the resulting body model is bound to the determined location of
concentric circles to \textit{rviz}. Initially, the \textit{YAML} file that
contains the parameters determined during the system calibration is read.
Based on those parameters, the sizes of the circles that represent the control
zones are set. Further, the height of the human model is set to 2.2 times the
determined arm length of the operator. The sizes of the other body parts are
then scaled based on that height parameter and predefined weights. With this
approach we tried to match the proportions of the human body as closely as
possible. The position of the resulting body model is bound to the determined
location of
the Aruco marker on the operator's chest, which was again received by
subscription to the \verb|tf| topic. Thus, since the model is recreated and
re-published in each iteration of the node, it dynamically moves with the
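A node of this kind could publish the zone circles roughly as sketched below
(assuming \verb|rospy| and \verb|visualization_msgs|; the topic, frame and
parameter names are placeholders and not necessarily the ones used in our
implementation):
\begin{verbatim}
#!/usr/bin/env python
# Minimal sketch of a node that repeatedly publishes control-zone markers to
# rviz. Topic, frame and parameter names are placeholders.
import rospy
from visualization_msgs.msg import Marker

def zone_marker(marker_id, radius, frame="operator"):
    m = Marker()
    m.header.frame_id = frame
    m.header.stamp = rospy.Time.now()
    m.ns = "control_zones"
    m.id = marker_id
    m.type = Marker.CYLINDER
    m.action = Marker.ADD
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = 2.0 * radius  # circle diameter
    m.scale.z = 0.01                      # flat cylinder drawn as a circle
    m.color.g, m.color.a = 1.0, 0.4
    return m

if __name__ == "__main__":
    rospy.init_node("operator_model_publisher")
    pub = rospy.Publisher("visualization_marker", Marker, queue_size=10)
    buffer_radius = rospy.get_param("~buffer_radius", 0.3)  # from the calibration YAML
    max_radius = rospy.get_param("~max_radius", 0.8)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():                 # re-publish in every iteration
        pub.publish(zone_marker(0, buffer_radius))
        pub.publish(zone_marker(1, max_radius))
        rate.sleep()
\end{verbatim}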
@@ -232,7 +230,7 @@ elaborate to implement, we decided to use markers of the type
the hands for the model's arms. By using the shoulder points that were defined
in the body model and locking the points on the hands to the positions that
were determined for the markers in the operator's hands, we finally created a
model that represents the operators arm positions and thereby provides support
model that represents the operator's arm positions and thereby provides support
for various tasks such as grabbing an object. The final model is shown in
\autoref{fig:rviz-human-model}. Just for reference, we also included a
marker of type \textit{sphere} that depicts the position of the recording
@@ -286,15 +284,15 @@ schematically illustrated in \autoref{fig:joystick}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/usr_pt.png}
\caption{User position tracking model}
\caption{``Human Joystick''.}
\label{fig:joystick}
\end{figure}
There is a small region around the original position, in which the operator
can stay without causing the robot to move. As soon as the operator exceeds the
movement threshold into some direction, the robot will slowly start moving in
that direction. We use the following relationship for calculating the robot's
speed:
There is a small region around the original position, called the Buffer Zone,
in which the operator can stay without causing the robot to move. As soon as
the operator exceeds the movement threshold in some direction, the robot will
slowly start moving in that direction. We use the following relationship for
calculating the robot's speed:
$$v = v_{min} + \frac{d - d_{thr}}{d_{max} - d_{thr}}(v_{max} - v_{min})$$
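A direct transcription of this mapping could look as follows (the variable
names are illustrative, and we additionally assume that the displacement is
saturated at $d_{max}$ so that the speed never exceeds $v_{max}$):
\begin{verbatim}
# Sketch of the speed mapping above; names are illustrative. The displacement
# is clamped to d_max so that the speed never exceeds v_max (an assumption).
def walking_speed(d, d_thr, d_max, v_min, v_max):
    if d <= d_thr:              # still inside the buffer zone
        return 0.0
    d = min(d, d_max)
    return v_min + (d - d_thr) / (d_max - d_thr) * (v_max - v_min)
\end{verbatim}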
@@ -320,13 +318,13 @@ between the relative locations of the detected ArUco markers and the desired
hand positions of the robot needs to be calculated. Then, based on the
target coordinates, the robot joint rotations need to be calculated.
\paragraph{Posture retargeting}
\paragraph{Posture Retargeting}
First, let us define the notation of the coordinates that we will use to
describe the posture retargeting procedure. Let $r$ denote the 3D $(x, y, z)$
coordinates; the subscript then defines the object which has these coordinates,
and the superscript defines the coordinate frame in which these coordinates are
taken. So, for example, $r_{NAO hand}^{NAO torso}$ gives the coordinate of the
taken. So, for example, $r_{hand,NAO}^{torso,NAO}$ gives the coordinates of the
hand of the NAO robot in the frame of the robot's torso.
\begin{figure}
@@ -334,17 +332,17 @@ hand of the NAO robot in the frame of the robot's torso.
%\hfill
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{figures/operator_frames.png}
\caption{Operator's chest and shoulder frames}
\caption{Operator's chest and shoulder frames.}
%{{\small $i = 1 \mu m$}}
\label{fig:operator-frames}
\end{subfigure}
\begin{subfigure}[b]{0.45\linewidth}
\includegraphics[width=\linewidth]{figures/robot_torso.png}
\caption{NAO's torso frame}
\caption{NAO's torso frame.}
%{{\small $i = -1 \mu A$}}
\label{fig:nao-frames}
\end{subfigure}
\caption{Coordinate frames}
\caption{Coordinate frames.}
\label{fig:coord-frames}
\end{figure}
@@ -394,10 +392,10 @@ r_{hand,NAO}^{shoulder,NAO} + r_{shoulder,NAO}^{torso,NAO}$$
The coordinates of the NAO's shoulder in the NAO's torso frame can be obtained
through a call to the NAOqi API.
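As a rough sketch, this last step of the retargeting could look as follows
(the joint name passed to \verb|getPosition|, the IP address and the variable
names are assumptions rather than the actual values used in our
implementation):
\begin{verbatim}
import numpy as np
from naoqi import ALProxy

# Sketch of r_hand^torso = r_hand^shoulder + r_shoulder^torso. The IP address
# and the joint name are placeholders.
motion = ALProxy("ALMotion", "<nao-ip>", 9559)
FRAME_TORSO = 0
# getPosition returns [x, y, z, wx, wy, wz]; we only need the translation part.
shoulder_in_torso = np.array(
    motion.getPosition("LShoulderPitch", FRAME_TORSO, True)[:3])

def desired_hand_in_torso(hand_in_shoulder):
    # hand_in_shoulder: the retargeted offset r_{hand,NAO}^{shoulder,NAO}
    return np.asarray(hand_in_shoulder) + shoulder_in_torso
\end{verbatim}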
Now that the desired position of the NAO's hands are known, the appropriate
Now that the desired positions of the NAO's hands are known, the appropriate
joint motions need to be calculated by means of Cartesian control.
\paragraph{Cartesian control}
\paragraph{Cartesian Control}
At first, we tried to employ the Cartesian controller that is shipped with the
NAOqi SDK. We soon realized, however, that this controller was unsuitable for
@@ -427,11 +425,11 @@ formula:
$$\dot{\theta} = J^{-1}\dot{r}$$
In this formula $\dot{r}$ denotes the 3D speed of the target, which is the
result of the posture retargeting, namely $r_{hand,NAO}^{torso,NAO}$. $J$ is
the Jacobian matrix \cite{jacobian}. The Jacobian matrix gives the relationship
between the joint angle speed and the resulting speed of the effector on the
end of the kinematic chain which the Jacobian matrix describes.
Here $\dot{r}$ is the desired speed of the end effector and $\dot{\theta}$ is
the vector of the necessary joint angular speeds. $J$ is the Jacobian matrix
\cite{jacobian}. The Jacobian matrix gives the relationship between the joint
angular speeds and the resulting speed of the effector at the end of the
kinematic chain that the Jacobian matrix describes.
We now apply a common simplification and state that
@@ -441,8 +439,13 @@ Here $\Delta$ is a small change in angle or the position. We use
$$\Delta r = \frac{r_{desired} - r_{current}}{K},\ K = 10$$
This means that we want the $r$ to make a small movement in the
direction of the desired position.
In this formula $r_{desired}$ denotes the 3D position of the target, which is
the result of the posture retargeting, namely
$r_{hand,NAO}^{torso,NAO}$\footnote{Here we do not mean the real position of
the NAO's hand, but the desired position calculated from the user's hand
position. The real position of the NAO's hand is $r_{current}$. Distinguishing
the two properly would require even further abuse of notation.}. We want $r$ to
make a small movement in the direction of the desired position.
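One iteration of this update can be sketched as follows (a rough illustration;
we use the Moore--Penrose pseudo-inverse here, since the Jacobian of the arm
chain is in general not square, and the variable names are only illustrative):
\begin{verbatim}
import numpy as np

# One step of the incremental Cartesian control described above.
# theta: current joint angles (length N), J: 3 x N Jacobian of the arm chain,
# r_current / r_desired: current and retargeted hand positions, K = 10.
def ik_step(theta, J, r_current, r_desired, K=10.0):
    delta_r = (r_desired - r_current) / K         # small step towards the target
    delta_theta = np.linalg.pinv(J).dot(delta_r)  # delta_theta = J^+ delta_r
    return theta + delta_theta
\end{verbatim}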
Now we need to calculate a Jacobian matrix. There are two main ways to determine
the Jacobian matrix. The first way is the numerical method, where this
@@ -479,7 +482,7 @@ the joint. The following relation gives us one column of the Jacobian matrix.
$$
J_j = \frac{\partial r_{end}}{\partial\theta_j} =
(e \times (r_{end}-r_j))
(e_j \times (r_{end}-r_j))
$$
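Stacking these columns yields the full matrix. A minimal sketch (the inputs
are placeholders: \verb|axes[j]| holds the rotation axis $e_j$ of joint $j$,
\verb|origins[j]| its position $r_j$, and \verb|r_end| is the end-effector
position, all expressed in the same frame):
\begin{verbatim}
import numpy as np

# Assemble the geometric Jacobian column by column: J_j = e_j x (r_end - r_j).
def jacobian(axes, origins, r_end):
    cols = [np.cross(e_j, r_end - r_j) for e_j, r_j in zip(axes, origins)]
    return np.column_stack(cols)   # 3 x N, one column per joint
\end{verbatim}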
We can get the rotational axis of a joint and the position of the joint in the