started writing the system integration
@@ -12,6 +12,7 @@
 \usepackage{xcolor}
 \usepackage{subcaption}
 \usepackage{todonotes}
+\usepackage{hyperref}

 \def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
 T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}
@@ -32,7 +33,7 @@ chest and hands, the position and the posture of the operator should have been
 determined by detecting the markers' locations with a webcam, and then the
 appropriate commands should have been sent to the robot to imitate the motions
 of the operator. The overview of the
-process can be seen in \ref{fig:overview}. The main takeaway from
+process can be seen in \autoref{fig:overview}. The main takeaway from
 fulfilling this objective was practicing the skills that we acquired during the
 Humanoid Robotic Systems course and getting familiar with the NAO robot as a
 research and development platform.
@@ -48,7 +49,7 @@ In closer detail, once the markers are detected, their coordinates relative to
 the webcam are extracted. The position and the orientation of the user's
 chest marker are used to control the movement of the NAO around the environment.
 We call this approach a ``Human Joystick'' and we describe it in more detail in
-\ref{ssec:navigation}.
+\autoref{ssec:navigation}.

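The ``Human Joystick'' idea in the hunk above can be sketched in a few lines: the chest marker's displacement from a calibrated neutral pose becomes a walking velocity command. This is an illustrative assumption about the mapping, not the package's actual code; the dead-zone radius and gains are hypothetical values.

```python
import numpy as np

# Hypothetical "Human Joystick" sketch: the chest marker's displacement
# from a calibrated neutral position is mapped to a NAO walk command
# (vx, vy, wtheta). Dead zone and gains are illustrative, not tuned.
DEAD_ZONE = 0.05   # metres; ignore small postural sway
GAIN_LIN = 1.5     # (m/s) per metre of chest displacement
GAIN_ROT = 1.0     # (rad/s) per radian of chest yaw

def chest_to_walk_command(chest_pos, neutral_pos, chest_yaw, neutral_yaw):
    """Map chest-marker pose (webcam frame) to a NAO walk command."""
    dx, dy = np.asarray(chest_pos[:2]) - np.asarray(neutral_pos[:2])
    if np.hypot(dx, dy) < DEAD_ZONE:
        dx, dy = 0.0, 0.0               # inside the dead zone: stand still
    wtheta = GAIN_ROT * (chest_yaw - neutral_yaw)
    return (GAIN_LIN * dx, GAIN_LIN * dy, wtheta)
```

Leaning forward by 20 cm would thus command roughly 0.3 m/s of forward walking, while small sway inside the dead zone keeps the robot standing.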
 The relative locations of the chest and hand markers can be used to determine
 the coordinates of the user's end effectors (i.e.\ hands) in the user's chest
@@ -57,7 +58,7 @@ to be appropriately remapped into the NAO torso frame. With the knowledge of the
 desired coordinates of the hands, the commands for the NAO joints can be
 calculated by using the Cartesian control approach. We present a thorough
 discussion of the issues we had to solve and the methods we used for arm motion
-imitation in \ref{ssec:imitation}.
+imitation in \autoref{ssec:imitation}.

 Furthermore, in order to enable the most intuitive teleoperation, a user
 interface needed to be developed. In our system, we present the operator
@@ -70,7 +71,7 @@ Finally, to be able to accommodate different users and to perform control in
 different conditions, a small calibration routine was developed, which would
 quickly take a user through the process of setting up the teleoperation.
 We elaborate on the tools and approaches that we used for implementation of the
-user-facing features in \ref{ssec:interface}.
+user-facing features in \autoref{ssec:interface}.

 An example task that can be done using our teleoperation package might be the
 following. The operator can safely and precisely navigate the robot through an
@@ -90,7 +91,7 @@ and readable resulting code.

 \section{System Overview}

-\subsection{Vision}
+\subsection{Vision}\label{ssec:vision}

 - Camera calibration
 - Aruco marker extraction
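The Vision notes above (camera calibration, ArUco marker extraction) boil down to recovering marker positions in the webcam frame. As a minimal sketch of the calibration side, assuming a pinhole model with hypothetical intrinsics, a detected marker centre at pixel $(u, v)$ with estimated depth $Z$ back-projects to a camera-frame vector like $r^{webcam}$:

```python
import numpy as np

# Illustrative sketch (not the project's actual vision code): given the
# camera intrinsics K obtained from calibration and a detected marker
# centre (u, v) with an estimated depth Z, recover the marker's 3-D
# position in the webcam frame. The intrinsic values are hypothetical.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth):
    """Pixel (u, v) at depth Z -> 3-D point in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # normalised viewing ray
    return depth * ray
```

In practice the ArUco library returns full marker poses directly, but the back-projection above is the geometric core that the calibrated intrinsics enable.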
@@ -211,7 +212,7 @@ taken. So, for example, $r_{NAO hand}^{NAO torso}$ gives the coordinate of the
 hand of the NAO robot in the frame of the robot's torso.

 After the ArUco markers are detected and published on ROS TF, as was described
-in \ref{ssec:vision}, we have the three vectors $r_{aruco,chest}^{webcam}$,
+in \autoref{ssec:vision}, we have the three vectors $r_{aruco,chest}^{webcam}$,
 $r_{aruco,lefthand}^{webcam}$ and $r_{aruco,righthand}^{webcam}$. We describe
 the retargeting for one hand, since it is symmetrical for the other hand. We
 also assume that all coordinate systems have the same orientation, with the
@@ -225,10 +226,11 @@ r_{aruco,chest}^{webcam}$$.
 Next, we remap the hand coordinates in the chest frame into the user shoulder
 frame, using the following relation:

-$$r_{hand,user}^{shoulder,user} = r_{hand,user}^{chest,user} - r_{shoulder,user}^{chest,user}$$
+$$r_{hand,user}^{shoulder,user} =
+r_{hand,user}^{chest,user} - r_{shoulder,user}^{chest,user}$$

 We know the coordinates of the user's shoulder in the user's chest frame from
-the calibration procedure, described in \ref{ssec:interface}.
+the calibration procedure, described in \autoref{ssec:interface}.

 Now, we perform the retargeting of the user's hand coordinates to the desired
 NAO's hand coordinates in the NAO's shoulder frame with the following formula:
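The chest-to-shoulder remap in the hunk above can be sketched directly; the actual user-to-NAO retargeting formula is not visible in this diff, so the uniform arm-length scaling below is only an assumption used to make the sketch complete, and the arm lengths are hypothetical values.

```python
import numpy as np

# Sketch of the hand retargeting described above. The chest->shoulder
# remap follows the relation in the hunk; the final user->NAO mapping is
# NOT shown in this diff, so the uniform arm-length scaling used here is
# an assumption for illustration only.
USER_ARM_LEN = 0.60  # metres, hypothetical calibration result
NAO_ARM_LEN = 0.22   # metres, hypothetical NAO arm length

def retarget_hand(r_hand_chest, r_shoulder_chest):
    """Return a desired NAO hand position in the NAO shoulder frame."""
    # r_hand^shoulder = r_hand^chest - r_shoulder^chest  (from the hunk)
    r_hand_shoulder = np.asarray(r_hand_chest) - np.asarray(r_shoulder_chest)
    # Assumed retargeting: scale by the ratio of the arm lengths.
    return (NAO_ARM_LEN / USER_ARM_LEN) * r_hand_shoulder
```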
@@ -291,12 +293,31 @@ $$
 which gives us one column of the Jacobian matrix. This can be repeated for
 each rotational joint until the whole matrix is filled.
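The column-by-column construction described above can be sketched as follows. The exact column formula is elided in this hunk; the standard positional geometric-Jacobian column for a rotational joint $i$, $J_{:,i} = z_i \times (p_e - p_i)$, is used here as a consistent stand-in, with $z_i$ the joint axis and $p_i$ the joint origin in the base frame.

```python
import numpy as np

# Sketch of filling the (positional) Jacobian one column per rotational
# joint, as the text describes. The preceding formula is elided in this
# hunk; the standard geometric-Jacobian column J[:, i] = z_i x (p_e - p_i)
# is assumed here, with z_i the joint axis and p_i the joint origin.
def jacobian_position(axes, origins, p_end):
    """axes/origins: per-joint axis and origin vectors; p_end: end effector."""
    cols = [np.cross(z, p_end - p) for z, p in zip(axes, origins)]
    return np.stack(cols, axis=1)  # one column per rotational joint
```

For a planar two-link arm with both joints about $z$, this reproduces the familiar two-column planar Jacobian.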

-The next step for the cartesian controller is to determine the inverse Jacobian
-matrix for the inverse kinematic. For this singular value decomposition is
-used. - Cartesian Controller
+The next step for the Cartesian controller is to determine the inverse Jacobian
+matrix for the inverse kinematics. For this, singular value decomposition is
+used.
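The SVD-based inversion mentioned above amounts to computing the Moore-Penrose pseudoinverse $J^{+} = V S^{+} U^{T}$. A minimal numpy sketch follows; zeroing the near-zero singular values (the `eps` threshold) is an assumed implementation detail, a common way to keep the controller stable near singular configurations.

```python
import numpy as np

# Sketch of the inverse-kinematics step described above: invert the
# Jacobian via singular value decomposition. Thresholding the small
# singular values (eps) is an assumed detail, commonly used to handle
# near-singular arm configurations.
def pinv_svd(J, eps=1e-6):
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.array([1.0 / x if x > eps else 0.0 for x in s])
    return Vt.T @ np.diag(s_inv) @ U.T  # J^+ = V S^+ U^T

# Joint velocities from a desired Cartesian hand velocity: dq = J^+ @ dx
```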

-\section{System Integration}
+\section{System Implementation and Integration}

+Now that the individual modules were designed and implemented, the whole system
+needed to be assembled together. It is crucial that the states of the robot and
+the transitions between the states are well defined and correctly executed. The
+state machine that we designed can be seen in \autoref{fig:overview}.
+
+The software package was organized as a collection of ROS nodes, controlled by
+a single master node. The master node keeps track of the current system state,
+and the slave nodes consult with the master node to check if they are allowed
+to perform an action. To achieve this, the master node creates a server for a
+ROS service, named \verb|inform_masterloop|, with this service call taking as
+arguments the name of the caller and the desired action and responding with a
+Boolean value indicating whether permission to perform the action was
+granted. The master node can then update the system state based on the received
+action requests and the current state. Some slave nodes, such as the walking or
+imitation nodes, run in a high-frequency loop, and therefore consult with the
+master in each iteration of the loop. Other nodes, such as the fall detector,
+only inform the master about the occurrence of certain events, such as a fall
+or fall recovery, so that the master can deny requests for any activities
+until the fall recovery is complete.
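The grant-or-deny logic behind the \verb|inform_masterloop| service described above can be sketched without the ROS plumbing. The state and action names below are hypothetical, the actual names in the package may differ, and the rospy service server wrapping this handler is omitted.

```python
# Sketch of the master-node logic behind the inform_masterloop service.
# The ROS service plumbing is omitted; only the state/permission logic
# is shown. State and action names are hypothetical stand-ins.
ALLOWED = {
    "idle":      {"start_walking", "start_imitation"},
    "walking":   {"stop_walking"},
    "imitating": {"stop_imitation"},
    "fallen":    {"fall_recovered"},   # deny everything else until recovery
}
TRANSITIONS = {
    ("idle", "start_walking"): "walking",
    ("walking", "stop_walking"): "idle",
    ("idle", "start_imitation"): "imitating",
    ("imitating", "stop_imitation"): "idle",
    ("fallen", "fall_recovered"): "idle",
}

class MasterLoop:
    def __init__(self):
        self.state = "idle"

    def inform(self, caller, action):
        """Service handler: grant or deny the action, update the state."""
        if action == "fall":               # fall events are always accepted
            self.state = "fallen"
            return True
        if action not in ALLOWED[self.state]:
            return False                   # e.g. any request while fallen
        self.state = TRANSITIONS[(self.state, action)]
        return True
```

A walking node would call `inform("walking_node", "start_walking")` each loop iteration; after a fall event, every such request is denied until the fall detector reports recovery.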

 \section{Drawbacks and conclusions}
