let's say the draft is ready
@misc{ros,
  title={{ROS.org | Powering} the world's robots},
  howpublished={\url{http://www.ros.org/}},
  note={Accessed: 2019-03-01}
}

@misc{naoqi,
  title={{NAOqi} Developer guide},
  howpublished={\url{http://doc.aldebaran.com/2-1/index_dev_guide.html}},
  note={Accessed: 2019-03-01}
}

@misc{yaml,
  author={Ben-Kiki, Oren and Evans, Clark and Ingerson, Brian},
  title={{YAML} ain't markup language version 1.1},
  howpublished={\url{http://yaml.org}},
  year={2005},
  note={Accessed: 2019-03-01}
}

@article{jacobian,
  author={Buss, Samuel R.},
  title={Introduction to inverse kinematics with {Jacobian} transpose,
         pseudoinverse and damped least squares methods},
  journal={IEEE Journal of Robotics and Automation},
  year={2004}
}

\usepackage{textcomp}
\usepackage{xcolor}
\usepackage{subcaption}
\usepackage{todonotes}
\usepackage{hyperref}

\usepackage{fancyhdr}

\begin{figure}
  \centerline{\includegraphics[width=0.8\linewidth]{figures/aruco.png}}
  \caption{ArUco marker detection on the operator.}
  \label{fig:aruco-detection}
\end{figure}

\subsection{Interface}\label{ssec:interface}

\paragraph{Calibration}

In order to make our system more robust, we have included a routine to
calibrate it for different users. It can be run as an optional step before
executing the main application. Within this routine, different threshold
values, which are required for the ``Human Joystick'' approach that is used
to control the NAO's walker module, as well as various key points, which are
needed to properly map the operator's arm motions to the NAO, are determined.

When the module is started, the NAO guides the operator through a number of
recording steps via spoken prompts. After successful completion of the
calibration process, the determined values are written to the \textit{YAML}
file \verb|config/default.yaml| \cite{yaml}. This file can then be accessed
by the other nodes in the system.
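
The calibration output could, for instance, look as follows. This is purely
illustrative: the key names and values below are assumptions, since the text
does not specify the file's actual schema.

```yaml
# Hypothetical layout of config/default.yaml -- key names and values
# are illustrative, not the schema used by the actual system.
arm_length: 0.62              # operator's measured arm length in metres
buffer_zone_radius: 0.25      # "Human Joystick" dead zone around rest position
movement_zone_radius: 0.60    # beyond this radius, walk commands are issued
shoulder_left:  [0.18, 0.25, 1.45]    # key points for arm-motion mapping
shoulder_right: [-0.18, 0.25, 1.45]
```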

\paragraph{Teleoperation Interface}

In order to make it possible to operate the NAO without visual contact, we
have developed a teleoperation interface. It allows the operator to receive
visual feedback on the NAO as well as an estimation of the operator's current
pose and of the buffer and movement zones which are needed to navigate the
robot.

The NAO part contains feeds of the top and bottom cameras on the robot's
head. These were created by subscribing to their respective topics using the
\verb|rqt_gui| package. Moreover, it also includes a visualization of the NAO
in rviz. For this, the robot's joint positions are displayed by subscribing
to the \verb|tf| topic, where the coordinates and the different coordinate
frames are published. We further used the \verb|nao_meshes| package to render
a predefined URDF 3D model of the NAO. It is shown in
\autoref{fig:rviz-nao-model}.

Furthermore, the interface also presents an estimation of the current pose of
the operator as well as the control zones for our ``Human Joystick'' approach
in an additional \textit{rviz} window. For this, we created a separate node
that repeatedly publishes a model of the operator and the zones, consisting
of markers, to \textit{rviz}. Initially, the \textit{YAML} file that contains
the parameters determined during the system calibration is read. According to
those, the sizes of the markers that delineate the control zones are set.
Further, the height of the human model is set to 2.2 times the determined arm
length of the operator. The sizes of the other body parts are then scaled
depending on that height parameter and predefined weights. With this
approach, we tried to match the proportions of the human body as closely as
possible. The position of the resulting body model is bound to the determined
location of the ArUco marker on the operator's chest, which was again
received by subscribing to the \verb|tf| topic. Since the model is recreated
and re-published in each iteration of the node, it moves dynamically with the
operator.
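
The scaling rule above can be sketched in Python. The 2.2 factor comes from
the text; the per-part weights below are illustrative placeholders, not the
values used by the actual node.

```python
# Sketch of the body-model scaling described above. The 2.2 factor is
# from the text; PART_WEIGHTS is an assumed, illustrative weight table.
PART_WEIGHTS = {"head": 0.15, "torso": 0.35, "legs": 0.50}

def scale_body_model(arm_length):
    """Derive model part sizes from the calibrated arm length (metres)."""
    height = 2.2 * arm_length  # model height rule from the text
    sizes = {part: w * height for part, w in PART_WEIGHTS.items()}
    sizes["height"] = height
    return sizes
```

For an arm length of 0.7 m, this yields a model height of 1.54 m, with the
part sizes summing to that height.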

Moreover, for a useful interface it was crucial to have a dynamic
representation of the operator's arms in the model. After several attempts,
using other marker types (e.g., cylinders and arrows) turned out to be too
elaborate to implement, so we decided to model the arms with markers of the
type \textit{line-strip}, starting from points at the shoulders and ending at
points on the hands. By using the shoulder points that were defined in the
body model and locking the points on the hands to the positions that were
determined for the markers in the operator's hands, we finally created a
model that represents the operator's arm positions and thereby provides
support for various tasks such as grabbing an object. The final model is
shown in \autoref{fig:rviz-human-model}. For reference, we also included a
marker of the type \textit{sphere} that depicts the position of the recording
webcam.
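
Conceptually, an rviz line-strip marker is just an ordered list of 3D points,
so each arm reduces to a two-point sequence. The sketch below uses plain
dicts as stand-ins for \verb|visualization_msgs/Marker| messages, whose ROS
fields are omitted here; the coordinates are made up for illustration.

```python
# Sketch of assembling the two arm markers. A LINE_STRIP marker is an
# ordered point list; the dicts below stand in for actual ROS
# visualization_msgs/Marker messages.
def arm_markers(shoulders, hands):
    """Build one line-strip point list per arm.

    shoulders: fixed points taken from the body model
    hands:     marker positions looked up via the tf topic
    """
    return [
        {"type": "LINE_STRIP", "points": [s, h]}
        for s, h in zip(shoulders, hands)
    ]

# Illustrative coordinates (metres), left and right arm:
markers = arm_markers(
    shoulders=[(0.18, 0.25, 1.45), (-0.18, 0.25, 1.45)],
    hands=[(0.40, 0.10, 1.10), (-0.40, 0.10, 1.10)],
)
```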

In addition, we added a camera feed showing the operator. Within the feed,
ArUco markers are highlighted once they are detected. This was done by
including the output of the ArUco detection module in the interface. A sample
output is shown in \autoref{fig:aruco-detection}.

\begin{figure}
  \centering
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{figures/interface_nao.png}
    \caption{}
    \label{fig:rviz-nao-model}
  \end{subfigure}
  \begin{subfigure}[b]{0.4\linewidth}
    \includegraphics[width=\linewidth]{figures/rviz_human.png}
    \caption{}
    \label{fig:rviz-human-model}
  \end{subfigure}
  \caption{NAO and operator in rviz.}
  \label{fig:interface}
\end{figure}

\subsection{Navigation}\label{ssec:navigation}

Next, our system needed a way for the operator to command the robot to a

\bibliography{references}{}
\bibliographystyle{IEEEtran}

\end{document}