
Volume 45 Number 6, 2008
Pages 921-930

Introduction and preliminary evaluation of the Tongue Drive System: Wireless tongue-operated assistive technology for people with little or no upper-limb function

Xueliang Huo, MS;1 Jia Wang, BS;2 Maysam Ghovanloo, PhD1-2*

1GT Bionics Laboratory, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA; 2NC Bionics Laboratory, Electrical and Computer Engineering Department, North Carolina State University, Raleigh, NC

Abstract — We have developed a wireless, noncontact, unobtrusive, tongue-operated assistive technology called the Tongue Drive System (TDS). The TDS provides people with minimal or no movement ability in their upper limbs with an efficacious tool for computer access and environmental control. A small permanent magnet secured on the tongue by implantation, piercing, or tissue adhesives is used as a tracer, the movement of which is detected by an array of magnetic field sensors mounted on a headset outside the mouth or on an orthodontic brace inside it. The sensor output signals are wirelessly transmitted to an ultraportable computer carried on the user's clothing or wheelchair and are processed to extract the user's commands. These commands can then be used to access a desktop computer, control a power wheelchair, or interact with the environment. To conduct human experiments, we developed a prototype TDS with six direct commands on a face shield and tested it on six nondisabled male subjects. Laboratory-based experimental results show that the TDS response time for >90% correctly completed commands is about 1 s, yielding an information transfer rate of ~120 bits/min.

Key words: assistive technologies, computer access, environment control, information transfer rate, magnetic field sensors, permanent magnets, rehabilitation, telemetry, tongue control, wireless.


Abbreviations: 3-D = three-dimensional, AT = assistive technology, BCI = brain-computer interface, CCC% = percentage of correctly completed commands, EEG = electroencephalogram, ET = elapsed time, GUI = graphical user interface, ITR = information transfer rate, kNN = k-nearest-neighbor, PC = personal computer, PCA = principal components analysis, PDA = personal digital assistant, PWC = power wheelchair, SCI = spinal cord injury, SSP = sensor signal processing, TCI = tongue-computer interface, TDS = Tongue Drive System, TTK = Tongue Touch Keypad.
*Address all correspondence to Maysam Ghovanloo, PhD; Georgia Institute of Technology, Electrical and Computer Engineering, 85 Fifth Street NW, TSRB-419 Georgia Electronic Design Center, Atlanta, GA 30308; 404-385-7048; fax: 404-894-4701. Email: mghovan@ece.gatech.edu
DOI: 10.1682/JRRD.2007.06.0096
INTRODUCTION

Persons with disabilities resulting from various causes, from traumatic brain injury and spinal cord injury (SCI) to amyotrophic lateral sclerosis and stroke, generally find performing everyday tasks extremely difficult without continuous help [1-3]. In the United States alone, an estimated 11,000 new cases of SCI are added every year to a population of a quarter of a million as a result of acts of violence, falls, and accidents [2]. Fifty-five percent of SCI patients are between 16 and 30 years old and will need lifelong special care that currently costs about $4 billion each year [3]. With the help of assistive technologies (ATs), people with severe disabilities can lead self-supportive, independent, and high-quality lives. ATs can not only reduce these individuals' need for continuous help, thus freeing a family member or dedicated caregiver and lowering healthcare costs, but may also give them the opportunity to return to full, active, and productive lives within society by helping them gain employment.

Although many devices are available to assist people with lower levels of disability, people who have minimal or no movement ability (e.g., individuals with tetraplegia), and who probably need assistance the most, have very limited options. Even the existing ATs for this group have limitations, and only a small number have become popular among their intended users. The sip-n-puff switch, for example, is a simple, easy-to-learn, and relatively low-cost AT. However, it is slow, cumbersome, and inflexible, offering only 2 to ~4 direct commands [4].1 It also requires its users to have airflow and diaphragm control, which patients who use ventilators do not have.

Another group of ATs tracks eye movements from corneal reflection and pupil position [5-6]. Electrooculographic potentials have also been used to detect eye movements [7-8]. An inherent drawback of these methods is that they interfere with the user's vision, because the eyes must perform extra movements to issue control commands. In many cases, whether the user is issuing a command or simply gazing at an object is not clear; this ambiguity is known as the "Midas touch" problem [9]. Head pointers, another group of ATs, require a level of head movement ability that many patients with high-level SCI do not have [10]. These devices also require the user to always be in a sitting position while using them.

Some ATs, such as electroencephalogram (EEG) systems, directly use brain waves [11]. These devices require user concentration, a long procedure for electrode attachment, and daily removal. EEG systems are also prone to external interference and motion artifacts due to the small magnitude of the EEG signals. More recently, invasive brain-computer interfaces (BCIs) have emerged based on subdural electrocorticograms or intracortical neural recording [12-15]. These procedures are highly invasive, costly, and involve risks associated with brain surgeries. Finally, voice-activated ATs are quite popular for computer access and operate well in quiet settings. However, they are unreliable in noisy and outdoor environments. They also require diaphragm control, similar to the sip-n-puff, and functional vocal cords [10].

The tongue and mouth occupy an amount of sensory and motor cortex in the human brain that rivals that of the fingers and the hand. Hence, they are inherently capable of sophisticated motor control and manipulation tasks with many degrees of freedom [16]. The tongue is connected to the brain by the hypoglossal nerve, which generally escapes severe damage in SCI. The tongue muscle is similar to the heart muscle in that it does not fatigue easily [17]. Further, the tongue is noninvasively accessible and not influenced by the position of the rest of the body, which can be adjusted for maximum comfort.

These attributes have motivated the development of a few tongue-operated ATs, such as the Tongue Touch Keypad (TTK).2 Despite being innovative when it was introduced in the early 1990s, the TTK has not been widely adopted because it is bulky and obtrusive [17]. TonguePoint, another AT based on the IBM TrackPoint device used in laptops, takes the form of a small pressure-sensitive joystick placed inside the mouth [18]. Even though this device provides proportional control, it is restricted to joystick-style operation, and any selection or clicking must be performed through additional switches. The tip of the joystick also protrudes about 1 cm into the mouth, which could interfere with speech and ingestion. A few other tongue- or mouth-operated joysticks have been developed, such as Jouse2 and IntegraMouse.3 These devices can only be used while the user is sitting and require a certain level of head movement to reach the mouth joystick unless the stick is held inside the mouth at all times.

Our goal was to develop a minimally invasive, unobtrusive, easy-to-use, reliable, and low-cost AT that could potentially substitute for some of the users' lost arm and hand functions [19]. The device, called the Tongue Drive System (TDS), can wirelessly detect the tongue position inside the oral cavity and translate its motions into a set of user-defined commands. These commands could then be used to access a computer, operate a power wheelchair (PWC), or control devices in the user's environment.

METHODS

In the TDS, shown in Figure 1, a small permanent magnet the size of a grain of rice is secured to the tongue as a magnetic tracer by using tissue adhesives, tongue piercing, or simple implantation under the tongue mucosa through injection. The magnetic field generated by the tracer inside and around the mouth varies as a result of the tongue movements. These variations are detected by an array of sensitive magnetic sensors mounted on a headset outside the mouth, similar to a head-worn microphone, or mounted on a dental retainer inside the mouth, similar to an orthodontic brace. The sensor outputs are wirelessly transmitted to a personal digital assistant (PDA) also worn by the user. A sensor signal processing (SSP) algorithm running on the PDA classifies the sensor signals and converts them into user control commands that are then wirelessly communicated to the targeted devices in the user's environment [20].


Figure 1. Tongue Drive System component diagram and proof-of-concept prototype on dental model.

The principal advantage of the TDS is that a few magnetic sensors and a small magnetic tracer can potentially capture a large number of tongue movements, each of which can represent a particular user command. A set of specific tongue movements can be tailored for each individual user and mapped onto a set of customized functions based on his or her abilities, oral anatomy, personal preferences, and lifestyle. The user can also define a command to switch the TDS to standby mode when he or she wants to sleep, engage in a conversation, or eat.

Tongue Drive System Prototypes

We have built several TDS prototypes using commercially available components (Figure 1) [21-22]. One prototype for human trials, shown in Figure 2, was built on a face shield to facilitate positioning of the sensors for different subjects. The main function of this prototype was to directly emulate mouse pointing and selection functions with tongue movements. Six commands were defined: left, right, up, and down pointer movements plus single- and double-click selections. With the SSP algorithm running in the background, a user already familiar with mouse operation needed no additional software or training to run any application operable by a mouse.
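For illustration, the mapping from the six direct commands onto mouse actions could look like the sketch below; the command names and the mouse-emulation back end are hypothetical, not the prototype's actual code:

```python
# Sketch: dispatching the six classified TDS commands as mouse actions.
from enum import Enum

class Command(Enum):
    LEFT = 0
    RIGHT = 1
    UP = 2
    DOWN = 3
    SINGLE_CLICK = 4
    DOUBLE_CLICK = 5

# Relative pointer step (dx, dy) per classifier decision.
POINTER_STEP = {
    Command.LEFT: (-1, 0), Command.RIGHT: (1, 0),
    Command.UP: (0, -1),   Command.DOWN: (0, 1),
}

def dispatch(cmd, mouse):
    """Forward one classified command to a mouse-emulation back end."""
    if cmd in POINTER_STEP:
        mouse.move(*POINTER_STEP[cmd])     # relative pointer movement
    elif cmd is Command.SINGLE_CLICK:
        mouse.click()
    elif cmd is Command.DOUBLE_CLICK:
        mouse.click(count=2)
```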


Figure 2. External Tongue Drive System prototype implemented on face shield for human trials.

Small, cylindrical, rare-earth permanent magnets, whose specifications are listed in Table 1, were used as magnetic tracers. A pair of two-axis magnetic field sensor modules (PNI; Santa Rosa, California) was mounted symmetrically at right angles on the face shield close to the user's cheeks. Each two-axis module contained a pair of orthogonal magneto-inductive sensors, shown in the Figure 1 inset and specified in Table 1.4 Hence, we had one sensor along the x-axis, one along the y-axis, and two along the z-axis with respect to the imaginary coordinates of the face shield (Figure 2). To minimize the effects of external magnetic field interference, including the earth's magnetic field, we used a three-axis module as a reference electronic compass. The reference compass was placed on top of the face shield so as to be far from the tongue magnet and to measure only the ambient magnetic field. The reference compass output was then used to predict and cancel out the interfering magnetic fields at the location of the main two-axis sensor modules.
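One plausible way to realize this cancellation is a linear model fitted offline while no tongue magnet is present; the sketch below uses ordinary least squares and is an illustration under that assumption, not the prototype's exact method:

```python
# Sketch: canceling ambient magnetic interference using the reference compass.
import numpy as np

def fit_ambient_model(ref_cal, sens_cal):
    """ref_cal: (n, 3) reference compass readings; sens_cal: (n, 4) readings
    of the four main sensors, both recorded without the tongue magnet.
    Returns (4, 4) least-squares weights mapping [compass, 1] to the
    predicted ambient field at each sensor."""
    X = np.hstack([ref_cal, np.ones((len(ref_cal), 1))])   # add bias column
    W, *_ = np.linalg.lstsq(X, sens_cal, rcond=None)
    return W

def cancel_ambient(ref, sens, W):
    """Subtract the predicted ambient field from the raw sensor samples."""
    return sens - np.append(ref, 1.0) @ W
```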


Table 1.
Tongue Drive System specifications.

Control Unit
  Microcontroller*
    Source and Type: Texas Instruments; MSP430F1232 ultralow-power microcontroller
    Dimensions: 22.5 × 18 × 16 mm³
    Clock Frequency: 1 MHz
    Sampling Rate: 11 samples/s/sensor
  Wireless Transceiver
    Source and Type: Nordic Semiconductor; nRF2401 single-chip 2.4 GHz transceiver
    Dimensions: 15 × 12 × 3 mm³
    Operating Voltage/Current: 2.2 V/~4 mA
Magnetic Sensor Module
  Source and Type: PNI; MicroMag2 magneto-inductive 2-axis magnetic sensor module
  Sensor Dimensions: 6.3 × 2.3 × 2.2 mm³
  Sensor Module Dimensions: 15 × 12 × 3 mm³
  Resolution/Range: 0.015 µT/1100 µT
  Inductance: 400 to 600 µH at 100 kHz, 1 Vp-p
Magnetic Tracer
  Source and Type: RadioShack; Rare-Earth Super Magnet 64-1895
  Size (diameter × thickness): Ø5 mm × 1.3 mm
  Residual Magnetic Strength: 10,800 gauss



All seven sensor outputs, already in digital form, were sent serially to the ultralow-power MSP430 microcontroller (Texas Instruments; Dallas, Texas) that forms the heart of the control unit.5 The microcontroller took 11 samples/s from each sensor while activating only one module at a time to reduce power consumption. After reading all the sensors, the microcontroller arranged the samples in a data frame and wirelessly transmitted them to a personal computer (PC) across a 2.4 GHz wireless link established between two identical nRF2401 transceivers (Nordic Semiconductor; Trondheim, Norway).6 The entire system was powered by a 3.3 V coin-sized battery (CR2032), which, together with the control unit and reference compass, was hidden under the face shield cap (Figure 2 inset).
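For illustration, receiving and parsing one such frame on the PC side might look like the sketch below; the packet layout (a counter byte followed by seven 16-bit samples) is an assumption, since the actual frame format is not specified here:

```python
# Sketch: parsing one received TDS data frame (assumed layout).
import struct

FRAME_FMT = "<B7h"               # 1-byte frame counter + 7 signed 16-bit samples
FRAME_SIZE = struct.calcsize(FRAME_FMT)

def parse_frame(packet: bytes):
    counter, *samples = struct.unpack(FRAME_FMT, packet[:FRAME_SIZE])
    main = samples[:4]           # x, y, and two z-axis sensors
    compass = samples[4:]        # 3-axis reference compass
    return counter, main, compass
```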

Sensor Signal Processing Algorithm

The SSP algorithm running on the PC was developed in the LabVIEW (National Instruments; Austin, Texas) and MATLAB (The MathWorks; Natick, Massachusetts) environments. It operates in two phases: training and testing. The training phase uses principal components analysis (PCA) to extract the most important features of the sensor output waveforms for each specific command [23]. During a training session, the user repeats each of the six designated commands 10 times at 3-second intervals, and for each repetition a total of 12 samples (3 per sensor) are recorded as a 12-variable vector labeled with the executed command. The PCA-based feature-extraction algorithm then computes, offline, the eigenvectors and eigenvalues of the covariance matrix of these 12-variable vectors. The three eigenvectors with the largest eigenvalues are chosen to form the feature matrix [v1, v2, v3], which defines a virtual three-dimensional (3-D) feature space. By multiplying the training vectors with the feature matrix, the SSP algorithm forms a cluster (class) of 10 data points in this feature space for each specific command.
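As a rough illustration, this training step could look like the following Python/NumPy sketch; the published algorithm ran in LabVIEW and MATLAB, so the function and variable names here are hypothetical:

```python
# Sketch of the PCA training step: labeled 12-variable training vectors
# (10 repetitions per command) are reduced to 3-D feature points.
import numpy as np

def train_pca(X, labels, n_components=3):
    """X: (n, 12) training vectors; labels: (n,) command indices.
    Returns the mean, the (12, 3) feature matrix [v1 v2 v3], and one
    cluster of projected 3-D training points per command."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)          # 12 x 12 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    V = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]  # top-3 eigenvectors
    Y = (X - mu) @ V                            # project into 3-D feature space
    clusters = {c: Y[labels == c] for c in np.unique(labels)}
    return mu, V, clusters
```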

Once a cluster is formed for each command, the testing phase can be executed, during which a three-sample window is slid over the incoming sensor signals to project them onto the 3-D feature space as new data points by using the aforementioned feature matrix. The k-nearest-neighbor (kNN) classifier is then used in real time to evaluate the proximity of the incoming data points to the clusters formed earlier in the training phase [24]. The kNN starts at the incoming new data point and inflates an imaginary sphere around that data point until it contains a certain number (k) of the nearest training data points. Then, it associates the new data point with the command that owns the majority of the training data points inside that spherical region. In the current version, we chose k = 6.
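A minimal sketch of the corresponding real-time classification, under the same assumptions (a Python stand-in for the LabVIEW/MATLAB implementation, with a brute-force nearest-neighbor search):

```python
# Sketch of the testing step: a 12-variable vector from the sliding
# 3-sample window is projected with the trained feature matrix and
# labeled by a k-nearest-neighbor vote (k = 6) against the stored clusters.
import numpy as np

def classify(window, mu, V, clusters, k=6):
    y = (np.asarray(window) - mu) @ V           # new 3-D data point
    # Distance from the new point to every stored training point.
    pairs = [(np.linalg.norm(y - p), c)
             for c, pts in clusters.items() for p in pts]
    pairs.sort(key=lambda dc: dc[0])
    votes = [c for _, c in pairs[:k]]           # k nearest training points
    return max(set(votes), key=votes.count)     # majority vote wins
```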

Once the intended user command is found, the mouse pointer starts moving slowly in the selected direction to give the user fine control. For faster access to distant regions of the computer screen, however, the user can hold his or her tongue in the position of the issued command, and the pointer will gradually accelerate until it reaches a certain maximum velocity.
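The acceleration behavior can be pictured as a simple speed ramp; the constants and the ramp shape below are illustrative, not the prototype's actual parameters:

```python
# Sketch: pointer speed ramps up while the same movement command is held,
# from a slow initial speed (fine control) to a capped maximum.
V_MIN, V_MAX, RAMP = 1.0, 20.0, 1.5   # px/update, px/update, growth factor

def pointer_speed(hold_count):
    """Speed after the command has been held for `hold_count` consecutive
    classifier decisions."""
    return min(V_MIN * RAMP ** hold_count, V_MAX)
```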

Human Subjects

In order to evaluate the performance of the external TDS prototype in practice, we conducted several experiments on six nondisabled male human subjects. We obtained necessary approvals from the North Carolina State University Institutional Review Board and informed consent from each subject before the experiments. Subjects had no prior experience with other ATs. Two of the subjects were among the TDS research team and were familiar with the TDS operation. The other four subjects had no prior knowledge of the TDS.

Human Trial Procedure

Detailed human trial instructions were prepared ahead of the experiment, discussed with the subjects, and strictly followed to ensure that every subject followed the same procedure.

Magnet Attachment and Familiarization

A disposable permanent magnet was disinfected with 70 percent isopropyl rubbing alcohol, washed with tap water, dried, and attached to the subject's tongue about 1 cm from the tip with tissue adhesive (Colgate Orabase; New York, New York). Drying the subject's tongue with a hair dryer before attachment was found to improve adhesion. The subjects were allowed ~20 minutes to familiarize themselves with the magnet on their tongue and were asked to find various comfortable positions in their mouth where they could hold the magnet stationary. The subjects then wore the external TDS prototype while the operator observed the recorded sensor signals and recommended preferred positions for the different commands.

Training Session

Once all command-related tongue positions were identified and practiced, the subject was ready for the training session. The purpose of this session was to train the SSP algorithm on how the subject wanted to define the specific tongue movement for each command. During the training session, the graphical user interface (GUI) prompted the subject to define each command by turning on its associated indicator on the screen at 3-second intervals. The subject was asked to issue the prompted command by moving his tongue from its resting position to the corresponding command position when the command light was on and returning it to the resting position when the light went off. This procedure was repeated 10 times for the entire set of six commands plus the tongue resting position, resulting in a total of 70 training data points.

Experiment I: Percentage of Correctly Completed Commands Versus Response Time

Experiment I was designed to provide a quantitative measure of the TDS performance by measuring how quickly a command reaches the computer from the moment the user intends it. This time, which we refer to as the TDS response time, includes thinking about the command and its associated tongue movement; the tongue movement transients; and any delays associated with hardware sampling, wireless transmission, and SSP computations. Obviously, the shorter the response time, the better. However, the response time also affects how accurately the user can intend and perform the tongue movements and how reliably the SSP algorithm can discern them. In other words, it is important not only to issue commands quickly but also to detect them correctly. Therefore, we considered the percentage of correctly completed commands (CCC%) as an additional parameter along with the response time.

A GUI was developed for this experiment that randomly selected one of the six direct commands and turned on its indicator. The subject was asked to issue the indicated command within a specified time period T, while the action light was on, by moving his tongue from its resting position in the same way he had trained the TDS for that particular command. The GUI also gave the subject real-time visual feedback by changing the size of a bar that indicated how close the subject's tongue was to the position of the specific command. This feedback helped the subject adjust the position of his tongue toward the intended command, thus reducing the probability of misclassification and improving overall performance. After every 40 trials, we shortened T (2.0, 1.5, 1.0, and 0.8 s) and calculated the CCC% for each T.
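For illustration, one block of this experiment can be sketched as follows; the prompt/feedback GUI and the subject-plus-TDS loop are abstracted into a single hypothetical callback:

```python
# Sketch of one Experiment I block: 40 random command prompts at a fixed
# response window T, scored as the percentage of correctly completed
# commands (CCC%).
import random

def run_block(T, commands, issue_and_classify, n_trials=40):
    """issue_and_classify(target, T) -> command detected within T seconds."""
    correct = 0
    for _ in range(n_trials):
        target = random.choice(commands)        # GUI lights a random command
        detected = issue_and_classify(target, T)
        correct += (detected == target)
    return 100.0 * correct / n_trials           # CCC% for this value of T
```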

Since the function and purpose of the TDS are similar to those of BCIs, but with the significant advantages of being unobtrusive and minimally invasive, we can use some of the same metrics that are used to evaluate and compare BCIs. One of these measures, known as information transfer rate (ITR), shows how much useful information the BCI can transfer from brain to computer within a certain period of time. Various researchers have defined the ITR differently. We have calculated the ITR using Wolpaw et al.'s definition [11]:

$$\mathrm{ITR} = \frac{60}{T}\left[\log_2 N + P\log_2 P + (1-P)\log_2\!\left(\frac{1-P}{N-1}\right)\right] \qquad (1)$$

where N is the number of individual commands that the system can issue, P is the system accuracy (P = CCC%), and T is the system response time in seconds.
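As a quick check of Equation 1, the short sketch below computes the ITR at this prototype's operating point; with N = 6 and, for example, P ≈ 0.92 at T = 1.0 s, it reproduces the reported ~120 bits/min:

```python
# Worked check of Equation 1 (Wolpaw et al. [11]).
from math import log2

def itr_bits_per_min(N, P, T):
    bits = log2(N) + P * log2(P) + (1 - P) * log2((1 - P) / (N - 1))
    return bits * 60 / T                      # bits per selection -> bits/min

print(itr_bits_per_min(6, 0.92, 1.0))         # ~119.8 bits/min
```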

Experiment II: Maze Navigation

The purpose of Experiment II was to examine the TDS performance in navigation tasks, such as controlling a PWC, on a computer. The subject navigated the mouse pointer within a track shown on the screen, moving the pointer from a starting point by issuing a double-click (start command) to a stopping point by issuing a single-click (stop command); meanwhile, the GUI recorded the pointer path and measured the elapsed time (ET) between the start and stop commands. The track was designed such that all six commands had to be used during the test. When the pointer moved out of the track, the subject was not allowed to move forward unless he led it back onto the track. Therefore, the subject had to move the cursor within the track very carefully, accurately, and as quickly as possible to minimize the ET. Each subject was instructed to repeat the maze task three times to conclude the trial.

RESULTS

All subjects successfully completed the training session in 3 to 5 minutes and moved on to the testing phase, which took about 1.5 hours per subject, including the resting period.

Experiment I

This experiment was repeated twice with each subject. Figure 3(a) shows the average CCC% versus response time as well as the 95% confidence interval. It can be seen that the highest CCC% of 96.25 percent was achieved for the longest period, T = 2.0 s, after which the CCC% dropped for shorter T as expected, down to 81.67 percent for T = 0.8 s. A satisfactory performance for a TDS beginner, with a CCC% >90 percent, can be achieved with T ≥ 1.0 s. The highest ITR, calculated with Equation 1, is also achieved at T = 1.0 s, as shown in Figure 3(b). Therefore, we concluded that the response time of the present TDS prototype with six individual commands is about 1.0 s, yielding an ITR of ~120 bits/min with an accuracy of >90 percent.


Figure 3. (a) Percentage of correctly completed commands and (b) information transfer rate vs Tongue Drive System (TDS) response time in six human trials.
Experiment II

Although the participants sometimes missed the maze track at its corners, they were able to quickly bring the pointer back on track and complete the task. The average ET across 22 navigation experiments was 61.44 ± 5.00 s (mean ± 95% confidence interval), about three times longer than the time the subjects required to navigate the mouse pointer through the maze by hand. Considering that the participants had far more experience moving the mouse pointer with their hand than with their tongue, this experiment shows the potential of the TDS for more complicated navigation tasks, such as controlling a PWC in a crowded environment.

One should note that all these results were obtained in a simulated laboratory environment that closely resembled computer access in real-life conditions. However, the TDS has not yet been tested in real life by people with severe disabilities.

DISCUSSION
Benchmarking

Our goal is to develop an unobtrusive and minimally invasive AT that will enable people with severe disabilities to access computers and control their environment. In our external TDS prototype, we used only off-the-shelf components and ran the SSP algorithm in the LabVIEW environment to reduce development time, although at the cost of a larger size and slower run time. Nevertheless, the first set of nondisabled human trials showed that the TDS has the potential to substitute tongue movements for some arm and hand functions. The 1.0 s response time and >90 percent accuracy of the present TDS prototype represent an acceptable performance for a device with six direct commands that are all simultaneously accessible to the user. Even though the TDS hardware and SSP algorithm still have significant room for improvement, the preliminary results with TDS prototypes are already better than those of the ATs evaluated by Lau and O'Leary [17], as well as the recent tongue-computer interface (TCI) reported by Struijk [25]. The ITR achieved by our TDS prototype is compared with other TCIs and BCIs in Table 2.


Table 2.
Comparison of Tongue Drive System and other assistive technologies.

Reference            Type           Response Time (s)  No. of Commands  ITR (bits/min)
Wolpaw et al. [1]    EEG-BCI        6.0-8.0            2-4              25.2
Chen et al. [2]      Head-tracking  9.8                30               24.6
Lau & O'Leary [3]    TCI            3.5                9                39.8
Struijk [4]          TCI            2.4                5                57.6
Huo et al. [5]       TCI            1.5                6                87.0
Present Work         TCI            1.0                6                120.0

1. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002;113(6):767-91. [PMID: 12048038]
2. Chen YL, Tang FT, Chang WH, Wong MK, Shih YY, Kuo TS. The new design of an infrared-controlled human-computer interface for the disabled. IEEE Trans Rehabil Eng. 1999;7(4):474-81. [PMID: 10609635]
3. Lau C, O'Leary S. Comparison of computer interface devices for persons with severe physical disabilities. Am J Occup Ther. 1993;47(11):1022-30. [PMID: 8279497]
4. Struijk LN. An inductive tongue computer interface for control of computers and assistive devices. IEEE Trans Biomed Eng. 2006;53(12 Pt 2):2594-97. [PMID: 17152438]
5. Huo X, Wang J, Ghovanloo M. A magnetic wireless tongue-computer interface. In: Proceedings of the 3rd International IEEE/EMBS Conference on Neural Engineering; 2007 May 2-5; Kohala Coast, Hawaii. New York (NY): IEEE; 2007. p. 322-26.
EEG-BCI = electroencephalogram-brain-computer interface, ITR = information transfer rate, TCI = tongue-computer interface.


Flexibility

During the training session, the user is free to associate any specific tongue movement with any one of the six commands defined in the system based on his or her preference, abilities, and lifestyle. These tongue movements should be unique and far from other tongue movements that are either associated with other TDS commands or are natural tongue movements used during speaking, swallowing, coughing, sneezing, etc. Fortunately, most of these voluntary or involuntary movements are back and forth movements in the sagittal plane. Therefore, we advised the subjects to define their TDS commands by moving their tongue from its resting position to the sides or by curling their tongue up or down, movements that do not usually occur in other tongue activities. In the future, we intend to add new commands that will put the TDS in standby mode when the user intends to eat or sleep. For reduced power consumption in the standby mode, the TDS sampling rate will be reduced and the control unit will only look for the specific command that brings the system back online.

Learning

We did not observe a significant difference between individuals who had prior knowledge of the TDS and those who were completely unfamiliar with it. The novice group rapidly learned how to use their tongue movements to control the mouse cursor and produced results similar to those of the relatively more experienced group. Therefore, a 30-minute explanation of the TDS operation, a 10-minute preparation period including attachment of the permanent magnet and adjustment of the face shield, and a 20-minute familiarization period of playing with the system were enough for a new user to produce test results comparable to those of users who had occasionally used the TDS before. More accurate results are expected once we conduct long-term trials with individuals with and without disabilities in laboratory and real-life settings.

Native Language

Another expected observation from our human trials was that the individual's performance when using the TDS was independent of his native language. In fact, our six human subjects had four different native languages, and we did not observe any correlation between their native language and their performance. This result contrasts with those found with the voice-activated or speech-recognition-based ATs that are popular mainly among users who speak English well.

CONCLUSIONS

Our ultimate goal in developing the TDS is to help people with severe disabilities experience and preserve an independent, self-supportive life. The system uses an array of magnetic sensors to wirelessly track tongue movements by detecting the position and orientation of a permanent magnetic tracer secured on the tongue. The tongue movements can then be translated into various commands for computer access, navigation, or environment control.

The current external TDS prototype consists of four magneto-inductive sensors mounted on a face shield along with a 3-D electronic compass, all driven by a control unit equipped with a wireless link to a nearby desktop computer. Laboratory-based human trials on six nondisabled male subjects demonstrated that the present TDS prototype can help users substitute tongue movements for some of their lost arm and hand functions when accessing a computer, controlling the mouse pointer movements and button clicks with six direct commands. The system response time was 1.0 s with >90 percent accuracy, and the ITR was about 120 bits/min, results that are better than those previously reported for comparable AT and BCI devices.

Our future directions include improving the TDS hardware and SSP algorithms to make them smaller, faster, and more efficient. We will add more control commands in the SSP algorithm, including commands that put the TDS in standby mode and bring it back online. We also plan to substitute the operator feedback in selecting proper tongue movements with automated visual feedback to help the users define their commands more accurately. We intend to link the TDS to PWCs as well as other home and/or office appliances by either directly replacing the original input devices (e.g., joystick, switch array, remote control) with the TDS or building specialized hardware interfaces between them. We are also working toward adding proportional control to the SSP algorithm, especially for navigation and pointing tasks. We will also develop software to connect the TDS to a wide variety of readily available augmentative and alternative communication tools, such as text generators, speech synthesizers, and readers. Finally, assessing the usability and acceptability of the TDS by people with severe disabilities, who are the intended end users of this new technology, is among the main directions of our future research.

ACKNOWLEDGMENTS

We would like to thank members of the NC Bionics Laboratory for helping with the human trials and other experiments. We also thank Ms. Elaine Rohlik and Dr. Patrick O'Brien from the WakeMed Rehabilitation Hospital in Raleigh, North Carolina, for their constructive comments.

This material was based on work supported in part by the Christopher and Dana Reeve Foundation and the National Science Foundation (grant IIS-0803184).

The authors have declared that no competing interests exist.

REFERENCES
1. Christopher and Dana Reeve Foundation [Internet]. Short Hills (NJ): The Association; c2008. Areas of research; [about 5 screens]. Available from: http://www.christopherreeve.org/site/c.geIMLPOpGjF/b.1034087/k.A619/Areas_of_Research.htm
2. National Spinal Cord Injury Statistical Center [Internet]. Birmingham (AL): University of Alabama at Birmingham Department of Physical Medicine and Rehabilitation; c2008. Facts and figures at a glance, January 2008; [about 6 screens]. Available from: http://www.spinalcord.uab.edu/show.asp?durki=116979&site=1021&return=19775
3. National Institute of Neurological Disorders and Stroke [Internet]. Bethesda (MD): National Institutes of Health; c2008 [updated 2008 Jul 24]. Spinal cord injury: Hope through research; [about 38 screens]. Available from: http://www.ninds.nih.gov/disorders/sci/detail_sci.htm
4. Bilmes JA, Malkin J, Li X, Harada S, Kilanski K, Kirchhoff K, Wright R, Subramanya A, Landay JA, Dowden P, Chizeck H. The vocal joystick. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing; 2006; Toulouse, France. New York (NY): IEEE. p. 625-28.
5. Chen YL, Tang FT, Chang WH, Wong MK, Shih YY, Kuo TS. The new design of an infrared-controlled human-computer interface for the disabled. IEEE Trans Rehabil Eng. 1999;7(4):474-81. [PMID: 10609635]
6. Hutchinson T, White KP Jr, Martin WN, Reichert KC, Frey LA. Human-computer interaction using eye-gaze input. IEEE Trans Syst Man Cybern. 1989;19(6):1527-34.
7. Law CK, Leung MY, Xu Y, Tso SK. A cap as interface for wheelchair control. In: Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems; 2002 Oct; Lausanne, Switzerland. New York (NY): IEEE. p. 1439-44.
8. Barea R, Boquete L, Mazo M, Lopez E. System for assisted mobility using eye movements based on electrooculography. IEEE Trans Neural Syst Rehabil Eng. 2002;10(4):209-18. [PMID: 12611358]
9. Moore MM. Real-world applications for brain-computer interface technology. IEEE Trans Neural Syst Rehabil Eng. 2003;11(2):162-65. [PMID: 12899263]
10. Cook AM, Hussey SM. Assistive technologies: Principles and practice. 2nd ed. St. Louis (MO): Mosby; 2002.
11. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002;113(6):767-91. [PMID: 12048038]
12. Kennedy P, Andreasen D, Ehirim P, King B, Kirby T, Mao H, Moore M. Using human extra-cortical local field potentials to control a switch. J Neural Eng. 2004;1(2):72-77. [PMID: 15876625]
13. Kennedy PR, Kirby MT, Moore MM, King B, Mallory A. Computer control using human intracortical local field potentials. IEEE Trans Neural Syst Rehabil Eng. 2004;12(3):339-44. [PMID: 15473196]
14. Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006;442(7099):164-71. [PMID: 16838014]
15. Hochberg LR, Donoghue JP. Sensors for brain-computer interfaces. IEEE Eng Med Biol Mag. 2006;25(5):32-38. [PMID: 17020197]
16. Kandel ER, Schwartz JH, Jessell TM. Principles of neural science. 4th ed. New York (NY): McGraw-Hill; 2000.
17. Lau C, O'Leary S. Comparison of computer interface devices for persons with severe physical disabilities. Am J Occup Ther. 1993;47(11):1022-30. [PMID: 8279497]
18. Salem C, Zhai S. An isometric tongue pointing device. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; 1997 Mar 22-27; Atlanta, Georgia. New York (NY): Association for Computing Machinery; 1997. p. 22-27.
19. Anderson KD. Targeting recovery: Priorities of the spinal cord-injured population. J Neurotrauma. 2004;21(10):1371-83. [PMID: 15672628]
20. Huo X, Wang J, Ghovanloo M. A magnetic wireless tongue-computer interface. In: Proceedings of the 3rd International IEEE/EMBS Conference on Neural Engineering; 2007 May 2-5; Kohala Coast, Hawaii. New York (NY): IEEE; 2007. p. 322-26.
21. Krishnamurthy G, Ghovanloo M. Tongue drive: A tongue operated magnetic sensor based wireless assistive technology for people with severe disabilities. In: Proceedings of the 2006 IEEE International Symposium on Circuits and Systems; 2006 May 21-24; Kos, Greece. New York (NY): IEEE; 2006. p. 5551-54.
22. Huo X, Wang J, Ghovanloo M. Use of tongue movements as a substitute for arm and hand functions in people with severe disabilities. In: Proceedings of the RESNA 2007 Annual Conference; 2007 Jun 16-19; Phoenix, Arizona. Arlington (VA): RESNA; 2007.
23. Cohen A. Biomedical signal processing. Vol. 2. Boca Raton (FL): CRC Press; 1988. p. 63-75.
24. Duda RO, Hart PE, Stork DG. Pattern classification. 2nd ed. New York (NY): Wiley; 2001. p. 174-87.
25. Struijk LN. An inductive tongue computer interface for control of computers and assistive devices. IEEE Trans Biomed Eng. 2006;53(12 Pt 2):2594-97. [PMID: 17152438]
Submitted for publication June 28, 2007. Accepted in revised form March 10, 2008.
1Personal communication with sip-n-puff users participating in user groups.
2Tongue Touch Keypad™, <http://www.newabilities.com/>.
3Jouse2, Compusult Limited, <http://www.jouse.com/>; and USB Integra Mouse, Tash Inc, <http://www.tashinc.com/catalog/ca_usb_integra_mouse.html>.
4PNI, MicroMag2, 2-Axis Magnetic Sensor Module, <https://www.pnicorp.com>.
5Texas Instruments, Ultralow Power Microcontroller, <http://focus.ti.com/mcu/docs/mcuprodoverview.tsp?sectionId=95&tabId=140&familyId=342>.
6Nordic Semiconductor, nRF2401 single chip 2.4 GHz transceiver, <http://www.sparkfun.com/datasheets/RF/nRF2401rev1_1.pdf>.
