A natural gesture interface for operating robotic systems

Anqi Xu, Gregory Dudek and Junaed Sattar

Abstract — A gesture-based interaction framework is presented for controlling mobile robots. This natural interaction paradigm has few physical requirements, and thus can be deployed in many restrictive and challenging environments. We present an implementation of this scheme in the control of an underwater robot by an on-site human operator. The operator performs discrete gestures using engineered visual targets, which are interpreted by the robot as parametrized actionable commands. By combining the symbolic alphabets resulting from several visual cues, a large vocabulary of statements can be produced. An Iterative Closest Point algorithm is used to detect these observed motions, by comparing them with an established database of gestures. Finally, we present quantitative data collected from human participants indicating the accuracy and performance of our proposed scheme.

I. INTRODUCTION

Gestures are one of the most expressive ways of communicating between people. Whether they are initiated using hands, facial features, or the entire body, the benefit of using gestures in comparison with other media such as speech or writing comes from the vast amount of information that can be associated with a simple shape or motion. In this paper we present an approach for adapting gestures as a communication scheme in the Human-Robot Interaction (HRI) context. More specifically, our work deals with robot control in the underwater domain, where available modes of communication are highly constrained due to the restrictions imposed by the water medium. This paper describes a framework for controlling an amphibious legged robot by tracing out trajectories using bar-code-like markers.

We are particularly interested in the application where an underwater scuba diver is assisted by a semi-autonomous robotic vehicle.
This setup can be thought of as the human-robot counterpart of a broader communication problem. In general, divers converse with each other using hand signals as opposed to speech or writing. This is because the aquatic environment does not allow for simple and reliable acoustic and radio communication, and because the physical and cognitive burdens of writing or using other similar communication media are generally undesirable. On the other hand, visual gestures do not rely on complicated or exotic hardware, do not require strict environmental settings, and can convey a wide range of information with minimal physical and mental effort from the user. Furthermore, by combining spatial gestures with other visual communication modes, a large and expressive vocabulary can be obtained.

Centre for Intelligent Machines, McGill University, 3480 University Street, Montreal, Quebec, Canada H3A 2A7. {anqixu, dudek, junaed}

Fig. 1. Comparison of C (left) and RoboChat (right) syntax.

While our approach is motivated by underwater robotics, the methods we employ can be used in other human-robot interaction (HRI) contexts as well. Conventional approaches to robot interaction rely on keyboards, joysticks and spoken dialog. These traditional methods can be problematic in many contexts, such as when speech and radio signals cannot be used (i.e., underwater). The approach presented in this paper extends prior work on an interface called RoboChat [5]. Using RoboChat, an underwater diver displays a sequence of symbolic patterns to the robot, and uses the symbol sequence to generate utterances in a specialized language (Fig. 1), which includes both terse imperative action commands as well as complex procedural statements. The RoboChat language also features syntactic structures that serve to minimize user input, as well as to increase the flexibility of the language.
It is designed to employ any system of fiducial markers to permit robust target detection. The present implementation uses the ARTag marker set [7], although we are transitioning to an alternative deployment based on Fourier Tags [12].

In spite of its utility, RoboChat suffers from three critical weaknesses in its user interface. First, because a separate fiducial marker is required for each robot instruction, the number of markers associated with robot commands may be significantly large for a sophisticated robotic system. This requirement can impede the diver's locomotive capabilities, since he must ensure the secure transportation of this large number of marker cards underwater. Second, the mapping between robot instructions and symbolic markers is completely arbitrary, as the diver must first read the labels on each card to locate a particular token. Third, as a consequence of the previous two deficiencies, the diver may require a significant amount of time to locate the desired markers to formulate a syntactically correct script, which may be unacceptable for controlling a robot in real time.

This paper proposes an interaction paradigm called RoboChat Gestures, which can be used as a supplementary input scheme for RoboChat. It is designed specifically to remedy all three aforementioned weaknesses in the core interface. The main premise is for the diver to formulate discrete motions using a pair of fiducial markers. By interpreting different motions as robot commands, the diver is no longer required to carry one marker per instruction. The trajectories of RoboChat Gestures are derived from different types of traditional gestures, to take advantage of existing associations and conventions in the form of embedded information. This introduces a natural relationship between trajectories and their meanings, which alleviates the cognitive strain on the user.
Additionally, the robot can process the observed gestures and extract features from the motion, such as its shape, orientation, or size. Each gesture is mapped to a command, while the extracted features are associated with various parameters for that instruction. Because much of the information is now embedded in each trajectory, RoboChat Gestures can express the same amount of information that the previous RoboChat interface could, but in significantly less time, and using only two fiducial markers.

The rest of the paper is organized as follows. Section II presents a brief literature survey. Sections III and IV elaborate on the concept of RoboChat Gestures, and in particular explain the inner workings of the gesture detection process. Implementation results of the proposed scheme are discussed in Section V, both quantitatively and qualitatively. We conclude the paper in Section VI and present possible avenues for future work.

II. RELATED WORK

Our work described in this paper is based on four principal ideas: a navigating underwater robot, the use of robust visual targets, gesture recognition in the abstract, and gestures for robot control.

Sattar et al. looked at using visual communications, and specifically visual servo-control with respect to a human operator, to handle the navigation of an underwater robot [13]. In that work, while the robot follows a diver to maneuver, the diver can only modulate the robot's activities by making hand signals that are interpreted by a human operator on the surface. Visual communication has also been used by several authors to allow communication between robots on land, or between robots and intelligent modules on the sea floor, for example in the work of Vasilescu and Rus [16].

The work of Waldherr, Romero and Thrun [17] exemplifies the explicit communication paradigm in which hand gestures are used to interact with a robot and lead it through an environment. Tsotsos et al. [15] considered a gestural interface for non-expert users, in particular disabled children, based on a combination of stereo vision and keyboard-like input. As an example of implicit communication, Rybski and Voyles [11] developed a system whereby a robot could observe a human performing a task and learn about the environment.

Fiducial marker systems, as mentioned in the previous section, are efficiently and robustly detectable under difficult conditions. Apart from the ARTag toolkit mentioned previously, other fiducial marker systems have been developed for use in a variety of applications. The ARToolkit marker system [10] consists of symbols very similar to the ARTag flavor in that they contain different patterns enclosed within a square black border. Circular markers are also possible in fiducial schemes, as demonstrated by the Photomodeler Coded Targets Module system [1] and the Fourier Tags [12].

Vision-based gesture recognition has long been considered for a variety of tasks, and has proven to be a challenging problem examined for over 20 years with diverse well-established applications [6] [9]. The types of gestural vocabularies range from extremely simple actions, like simple fist versus open hand, to very complex languages, such as American Sign Language (ASL). ASL allows for the expression of substantial affect and individual variation, making it exceedingly difficult to deal with in its complete form. For example, Tsotsos et al. [3] considered the interpretation of elementary ASL primitives (i.e., simple component motions) and achieved 86 to 97 percent recognition rates under controlled conditions.

Gesture-based robot control is an extensively explored topic in HRI. This includes explicit as well as implicit communication frameworks between human operators and robotic systems. Several authors have considered specialized gestural behaviors [8] or strokes on a touch screen to control basic robot navigation. Skubic et al.
have examined the combination of several types of human interface components, with special emphasis on speech, to express spatial relationships and spatial navigation tasks [14].

III. METHODOLOGY

A. Motivation and Setup

RoboChat Gestures is motivated partly by the traditional hand signals used by all human scuba divers to communicate with one another. As mentioned in Sec. I, the original RoboChat scheme was developed as an automated input interface to preclude the need for a human interpreter or a remote video link. Usability studies of RoboChat suggest that naive subjects were able to formulate hand signals faster than searching through printed markers. This difference was apparent even when the markers were organized into indexed flip books to enhance rapid deployment. We believe that this discrepancy in performance was due to the intuitive relationships that existed between the hand signals and the commands they represented. These natural relationships served as useful mnemonics, which allowed the diver to quickly generate the input without actively considering each individual step in performing the gesture.

The RoboChat Gestures scheme employs the same technique as hand signals to increase its performance. Each gesture comprises a sequence of motions performed using two fiducial markers, whose trajectory and shape imply a relevant action known to be associated with this gesture. Because different instructions can now be specified using the same pair of markers, the total number of visual targets required to express the RoboChat vocabulary is reduced considerably, making the system much more portable. This benefit is particularly rewarding for scuba divers, who already have to attend to many instruments attached to their dive gear. In general, the expression space for RoboChat Gestures comprises several dimensions.
Different features may be used in the identification process, including the markers' ID, the shape of the trajectory drawn, its size, its orientation, and the time taken to trace out the gesture. In addition, the gestures provide a way to communicate out-of-band signals, for example to stop the robot in case of an emergency. To optimize the system's usability, numerical values for these non-deterministic features are converted from a continuous representation to a discrete one, for both signal types.

B. Gesture design criteria

The selection of gestures for our system depends highly on the target application. Designing shapes and motions suitable for an aquatic robot comes with a number of restrictions. First, in the water medium, both the diver and the robot are in constant motion, which makes performing and detecting gestures more complex compared to the terrestrial domain. To address this issue, we use two fiducial markers to perform gestures, using one marker as a reference point or "origin" in the image space, and using the other "free" marker to draw the actual gesture shapes. This approach compensates for the constant motion of the vehicle and the operator, but also reduces the effective field of view of the camera. This problem can also be addressed by increasing the distance between the operator and the camera. With our current implementation with ARTags, successful detection is possible with a separation of up to 2 meters. Also, since the marker detection scheme is impeded by motion blur, we require the operator to pause briefly at the vertices of the gestures. The time span of the pause is usually very small, a direct result of the robustness of the fiducial detection scheme.

IV. ROBOCHAT GESTURES DETECTION ALGORITHM

A. Overview

Our gesture recognition system exploits the positions of the visual targets on the image plane over time. Thus the raw input data to the system is a series of points of the form (x, y, t).
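For illustration only, the raw (x, y, t) stream and the capture gating described later (recording begins when two markers are visible, and ends once fewer than two are seen for longer than a timeout) can be sketched as follows. The detector interface, class name, and timeout value are assumptions, not the authors' implementation:

```python
class GestureCapture:
    """Accumulate (x, y, t) samples while two fiducials are visible.

    Capture starts when two markers are detected in a frame and ends
    once fewer than two are seen for longer than `timeout` seconds.
    Hypothetical sketch; the detector interface is an assumption.
    """

    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.points = []          # one list of (x, y, t) tuples per frame
        self.last_seen = None     # timestamp of the last two-marker frame

    def feed(self, detections, t):
        """detections: list of (marker_id, x, y) found in one frame.

        Returns the finished point cloud when the gesture ends, else None.
        """
        if len(detections) >= 2:
            # Two markers visible: keep recording.
            self.last_seen = t
            self.points.append([(x, y, t) for _, x, y in detections[:2]])
            return None
        if self.points and self.last_seen is not None \
                and t - self.last_seen > self.timeout:
            # Timeout elapsed with markers out of view: gesture is over.
            cloud, self.points = self.points, []
            return cloud
        return None
```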
We use an Iterative Closest Point (ICP) algorithm [2] to determine whether a given point cloud represents a known gesture. Traditional ICP methods match 3-D points independent of their ordering, typically using either a Euclidean or Mahalanobis distance metric. In our case, we augment the ICP distance metric to use the position of the gesture points on the 2-D image plane, as well as the temporal sequence (but not the speed) associated with the gesture. This algorithm attempts to pair up an observation point cloud with different reference clouds, each representing a unique gesture.

The ICP algorithm has two simple steps. First, for each of the points in the observation cloud, we identify the closest point in the reference set. Each point pair returns a distance, which is stored into an error metric vector. Then, we find the optimal transformation of the observation cloud that minimizes the least-square error for the previously obtained vector. Afterwards, we apply the transformation and iterate these two steps until the improvement in the algorithm falls below a certain threshold. When this terminating criterion is reached, we evaluate the final error metric vector, and use it to determine whether the observation accurately resembles the selected reference.

Fig. 2. Raw and pre-processed data for two RoboChat Gestures clouds.

B. Pre-processing

To properly compare point clouds, we need to ensure that the data are on a similar scale. First, we identify the position of the static marker as the origin, by looking for the point sequence with the smallest covariance in the 2-D positional space. We generate the data cloud by centering the other marker about this (time-dependent) origin. To detect rotated shapes, we first obtain the principal eigenvector for each cloud, and rotate the data so that this vector is aligned in every cloud.
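As a sketch of this pre-processing under stated assumptions (NumPy arrays of (x, y, t) samples per marker; min-max scaling as one plausible reading of "unit-normalize"; function name is hypothetical), the steps might look like:

```python
import numpy as np

def preprocess(track_a, track_b):
    """Normalize a two-marker gesture recording into a comparable cloud.

    track_a, track_b: arrays of shape (n, 3) holding (x, y, t) samples.
    Returns an (n, 3) cloud centered on the static marker, rotated so its
    principal eigenvector is aligned with the x-axis, and unit-normalized
    in space and time. Illustrative sketch, not the authors' code.
    """
    a = np.asarray(track_a, float)
    b = np.asarray(track_b, float)
    # The static ("origin") marker is the one with the smallest total
    # positional covariance.
    var_a = np.trace(np.cov(a[:, :2].T))
    var_b = np.trace(np.cov(b[:, :2].T))
    origin, free = (a, b) if var_a <= var_b else (b, a)
    # Center the free marker about the time-dependent origin.
    cloud = free.copy()
    cloud[:, :2] -= origin[:, :2]
    # Rotate so the principal eigenvector of the positional data is aligned.
    cov = np.cov(cloud[:, :2].T)
    _, vecs = np.linalg.eigh(cov)
    p = vecs[:, -1]                       # eigenvector of largest eigenvalue
    rot = np.array([[p[0], p[1]],
                    [-p[1], p[0]]])       # maps p onto the x-axis
    cloud[:, :2] = cloud[:, :2] @ rot.T
    # Unit-normalize both positional axes and the time axis.
    cloud -= cloud.min(axis=0)
    span = cloud.max(axis=0)
    span[span == 0] = 1.0                 # guard against degenerate axes
    return cloud / span
```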
Additionally, to be able to match gestures with different shapes, we unit-normalize the positional values on the principal eigenvector axis, as well as on its perpendicular axis. This last operation generally does not preserve proportions, which is not an issue if we assume that only non-degenerate 2-D shapes are allowed (i.e., no lines). Finally, we unit-normalize the time axis as well, to allow gestures performed at different speeds to be compared. We perform these three steps to ensure that similar shapes are already somewhat aligned with each other prior to the detection phase, as shown in Fig. 2. This pre-processing also minimizes the number of iterations required by the ICP algorithm, and reduces the chance for the optimization part of the algorithm to be trapped in a local minimum.

In order to increase detection rates, we compare the observation cloud against different variants of each reference cloud. We generate these variants by rotating the data by 180° in the positional plane, by inverting points about the principal eigenvector axis, by inverting the time axis, and by permutations of these three transformations. These transformations allow for detection of mirrored shapes, and also cancel out the sign of the eigenvector, which may be different even for similar clouds.

Fig. 3. Effect of trimming the point cloud.

C. Point-to-point matching step

We first obtain the distance vectors between an observation point and all data in the reference cloud. We then compute the magnitude array using the Euclidean distance formula. Next, we identify point pairs whose temporal components surpass a certain absolute value, and penalize their corresponding error magnitude by manually adding a fixed value to it. This way, when searching for the minimum magnitude, we select the closest point pair from those with tolerable temporal distances, if such pairs are available.
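A minimal sketch of this temporally constrained matching step, assuming NumPy arrays and illustrative values for the temporal threshold and penalty constant (both are assumptions, not values from the paper):

```python
import numpy as np

TEMPORAL_LIMIT = 0.25   # max tolerated |dt|; assumed value for illustration
PENALTY = 1e3           # large constant added to out-of-rhythm pairs

def match_points(observation, reference,
                 temporal_limit=TEMPORAL_LIMIT, penalty=PENALTY):
    """Pair every observation point with its closest reference point.

    Both clouds are (n, 3) arrays of (x, y, t). Pairs whose temporal
    distance exceeds `temporal_limit` have a fixed penalty added to
    their Euclidean distance, so pairs with tolerable temporal distance
    win whenever any exist. Returns (indices into reference, error
    metric vector). A sketch of the described step; names and constants
    are assumptions.
    """
    obs = np.asarray(observation, float)
    ref = np.asarray(reference, float)
    # All pairwise Euclidean distances over (x, y, t).
    diff = obs[:, None, :] - ref[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    # Penalize pairs whose temporal components are too far apart.
    dt = np.abs(obs[:, None, 2] - ref[None, :, 2])
    dist = dist + penalty * (dt > temporal_limit)
    idx = dist.argmin(axis=1)
    return idx, dist[np.arange(len(obs)), idx]
```

Spatially nearby points that are out of rhythm thus lose to farther points with compatible timestamps, which is exactly the behavior the threshold is meant to enforce.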
After pairing up each point in the observation cloud with one in the reference, we assemble all the distances into the error metric vector.

Since markers can be detected while the user is bringing them into their starting positions, and also while they are being removed after a gesture has been completed, this can introduce "terminal" outliers. For this reason, we provide the option to trim the observation cloud following the pairing process. If the first few observation points all match a single reference point, we discard all but the last of them. The same operation is performed on the last few observation points as well. We then stretch the temporal values of the resulting cloud to match the range of the initial set. As shown in Fig. 3, this process can eliminate outliers at both ends of the data.

D. Cloud optimization step

In the subsequent step, we use the error metric vector to solve for an optimal transformation that minimizes the squared distance of this new error metric vector. We introduce two different types of transformations: in the first variant, the algorithm allows the point cloud to rotate about the positional plane and to translate in all 3 dimensions. The second variant also allows for 3-dimensional translation, but it employs proportional scaling in the positional plane instead of rotation. These two variants are either linear or can be linearized, and thus both have closed-form solutions to their optimization rules.

The solution for the rotational variant is not exact, because we approximate the cosine and sine of the angle of rotation by 1 and the angle, respectively. As a precaution, we always verify the fidelity of this approximation to ensure that the solution is still qualitatively consistent.

We will now outline the derivation for this variant's solution.
Given each point p in the observation (with N total points) and each paired point q in the reference, we attempt to minimize the error magnitude E by computing the rotation matrix R with angle θ and the translation vector T:

E = Σ_{∀p} (Rp + T − q)² · [1; 1; 1]

The rotation matrix R is approximated as follows:

R = [cos θ, −sin θ, 0; sin θ, cos θ, 0; 0, 0, 1] ≈ [1, −θ, 0; θ, 1, 0; 0, 0, 1]

After expanding E, taking its derivative with respect to θ, and equating to zero, we obtain:

Σ(p_x² + p_y²) θ − Σ(p_y) T_x + Σ(p_x) T_y = Σ(p_x q_y − p_y q_x)

Similarly, taking derivatives of E with respect to T_x, T_y and T_t, and equating to zero as before, we have:

−Σ(p_y) θ + N T_x = Σ(q_x) − Σ(p_x)
Σ(p_x) θ + N T_y = Σ(q_y) − Σ(p_y)
N T_t = Σ(q_t) − Σ(p_t)

Solving the above equations for the unknowns, we have:

θ = [N Σ(p_x q_y) − N Σ(p_y q_x) − Σ(p_x) Σ(q_y) + Σ(p_y) Σ(q_x)] / [N Σ(p_x²) + N Σ(p_y²) − Σ(p_x)² − Σ(p_y)²]

T_x = [Σ(p_y) θ + Σ(q_x) − Σ(p_x)] / N
T_y = [−Σ(p_x) θ + Σ(q_y) − Σ(p_y)] / N
T_t = [Σ(q_t) − Σ(p_t)] / N

The scaling variant, on the other hand, produces a linear optimization rule and thus returns an exact solution. We provide a similar outline to obtain a, the scale factor, and T, the translation vector, by minimizing E:

E = Σ_{∀p} (diag(a, a, 1) p + T − q)² · [1; 1; 1]

We solve a similar system of equations for a and T:

a = [N Σ(p_x q_x) + N Σ(p_y q_y) − Σ(p_x) Σ(q_x) − Σ(p_y) Σ(q_y)] / [N Σ(p_x²) + N Σ(p_y²) − Σ(p_x)² − Σ(p_y)²]

T_x = [−Σ(p_x) a + Σ(q_x)] / N
T_y = [−Σ(p_y) a + Σ(q_y)] / N
T_t = [Σ(q_t) − Σ(p_t)] / N

E. Algorithmic flow

After pre-processing the observation cloud, we first set the optimization type to translation and rotation.
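As an illustrative transcription of the rotational variant's closed-form solution (the function name and array layout are assumptions, not the authors' code; the small-angle approximation sin θ ≈ θ, cos θ ≈ 1 is assumed to hold):

```python
import numpy as np

def rotational_update(p, q):
    """Closed-form small-angle rotation plus translation minimizing the
    paired squared error between two clouds, following the derivation.

    p, q: (n, 3) arrays of paired (x, y, t) points (observation paired
    with reference). Returns (theta, T) with T = [T_x, T_y, T_t].
    """
    n = len(p)
    # Sums appearing in the normal equations.
    px, py, pt = p[:, 0].sum(), p[:, 1].sum(), p[:, 2].sum()
    qx, qy, qt = q[:, 0].sum(), q[:, 1].sum(), q[:, 2].sum()
    pxx = (p[:, 0] ** 2).sum()
    pyy = (p[:, 1] ** 2).sum()
    pxqy = (p[:, 0] * q[:, 1]).sum()
    pyqx = (p[:, 1] * q[:, 0]).sum()
    # Solved unknowns, term for term as in the closed-form expressions.
    theta = (n * pxqy - n * pyqx - px * qy + py * qx) / \
            (n * pxx + n * pyy - px ** 2 - py ** 2)
    tx = (py * theta + qx - px) / n
    ty = (-px * theta + qy - py) / n
    tt = (qt - pt) / n
    return theta, np.array([tx, ty, tt])
```

If q is generated from p by exactly the linearized rotation and a translation, the update recovers both parameters, which is a convenient sanity check on the algebra.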
Since the data has just been scaled in the pre-processing stage, this variant naturally produces a better result than the scaling version. The algorithm iterates until the difference in overall normalized error magnitude between two successive iterations falls below a threshold. At this stage, we trim the edges of the observation cloud and perform a translational and scaling optimization on the data. If this recovery attempt results in an improved match, we switch back to the rotational variant and begin the loop anew. Otherwise, we terminate the process and return the final error metric vector.

The result obtained by comparing an observation to a reference may not be identical to that obtained by comparing the reference to the observation. In order to account for this asymmetry, we compute an inverse error metric vector by matching the reference to the final observation cloud. We define the average error magnitude as the arithmetic mean of the two magnitudes. This last step can be justified by the following example: assume the observation consists of a right-angle triangle and the reference represents a square, as seen in Fig. 4. The forward ICP loop will yield a very small error vector. However, the same cannot be said for the reverse ICP loop, since the fourth vertex on the square will have no homologue in the observation cloud, and thus will increase the overall error magnitude.

Fig. 4. Triangle gesture compared against the Square gesture, demonstrating the need for inverse matching.

As mentioned previously, we compare the observation with each reference cloud several times, once for each transformed variant of the data. At the end, we select the reference variant with the smallest error magnitude, and then select the best-matching reference shape using the same criterion. If the resulting error magnitude is better than a certain acceptance threshold, we output the gesture to which the observation cloud corresponds.

F. Choice of Reference Data

For each gesture, we systematically pick out a reference cloud from a set of training data. The selection mechanism evaluates each cloud against the rest of the data using our algorithm and picks the one with the smallest average error. We have experimented with two other types of references as well. In the first of these, we average the data by first selecting a cloud with an average number of points in the training set. We then locate, for each point in this cloud, the closest points on the other clouds, and finally average these point matches. However, because no two gestures are produced at the same rhythm, the temporal component completely distorts the positional values. As a result, the averaged cloud generally no longer manifests the original shape. The second approach, in which we attempt to smooth the trajectory of the reference clouds manually, produces poor results. Since the observation data is not smoothed (to ensure real-time performance), matching smoothed reference trajectories against the raw observations results in significantly poorer matching.

G. Experimental Validation

To rapidly prototype our system, we implemented the algorithm using MATLAB. Currently, the detection speed is approximately 0.5 seconds, with the database containing 5 different reference shapes, each with 6 transformation variants. This result is not ideal, but it does satisfy our goals for this prototype. Currently, we begin capturing gesture motions when two fiducial markers are detected by the robot's camera. Similarly, we stop the data capture and send the observation cloud to the ICP algorithm when the robot sees fewer than two markers for longer than a pre-determined timeout.

Fig. 5. Set of RoboChat Gestures used in our assessment.

V. EXPERIMENTAL RESULTS

A. Parameter Influence

Despite being algorithmically simple, the ICP code contains a number of parameters, which can all be fine-tuned to increase the performance of the overall system.
The most important parameter is arguably the maximum allowed temporal distance, which is required to prevent nearby point pairs with distant temporal values from being associated. However, we have found that this threshold is very user-dependent, most likely because every subject has a different sense of rhythm when performing the gestures. The importance of this value also depends on the roster of recognizable gestures. For example, we allowed in our experiments both the square and the hourglass shape. The temporal parameter can always be set to distinguish these two trajectories, but the numerical value of this parameter is different for each user.

Fig. 6. RoboChat Gestures best-match performance data. (a) Per-user performance data (all gestures/user); (b) per-gesture performance data (all users/gesture).

We use two more values to determine the termination criteria for the overall ICP data flow: the minimum improvement in error magnitudes between consecutive iterations, and the maximum number of iterations allowed. These two