INTELLIGENT ROBOTS: DO WE NEED THEM AND CAN THEY BE BUILT?
This article also appears in the Oak Ridge National Laboratory Review
(Vol. 26, No. 1), a quarterly research and development magazine. If
you'd like more information about the research discussed in the article
or about the Review, or if you have any helpful comments, drop us a line.
For avid watchers of science fiction movies, the mention of robotics and
artificial intelligence conjures up images of humanlike machines, as
well as memories of HAL, the awesome computer in 2001: A Space Odyssey.
Often, news reports of scientific advances that enable machines to
behave in a flexible manner for a limited set of tests draw parallels to
science fiction robots. The effect of this unfortunate kind of publicity
is that the scientific disciplines of robotics and artificial
intelligence are sometimes regarded as a playground for slightly crazed
scientists trying to create artificial humans. In reality, the fields of
robotics and artificial intelligence can best be described by answering
a few commonly asked questions: What is an intelligent robot, anyway?
Why would we need things like that? Could we build them and make them
reliable for certain uses?
Instead of giving Webster's definition of intelligence or reviewing what
has been written on the subject of intelligence, I will present an
example of an intelligent machine, or robot, and address the question of
whether intelligent robots are needed. I will close by describing the
impact of ORNL research on uses for intelligent machines.
SMART AUTOMOBILES?
Imagine that you are cruising down a freeway in an automobile. You
really have only three ways to control the car: acceleration, braking,
and steering. The procedure is simple, but the automobile itself is
rather complicated. Many things are happening simultaneously and rapidly
inside the engine, and in other subsystems, as you drive at 50 mph (80
km/h). Yet you do not have to bother with all the details; you simply
control the speed with two different pedals and keep the car on the road
by using the steering wheel. Is the car intelligent? No. The machine
itself is not capable of planning, decision making, or perceiving the
road conditions--capabilities that go beyond the low-level control of
the various subsystems required to drive the car.
What happens if you add cruise control, automatic wheel-slip control,
and antilock brakes? You still do not have an intelligent car, but you
are now dealing with a system that can very effectively extend human
control capabilities for safer operation. Now, add the capabilities to
automatically keep the car in a lane at highway speeds and to follow a
lead car, even through lane changes. The machine can now process and
analyze sensory input and make decisions based on this information. The
role of the human operator becomes more flexible because the driver may
choose to operate the car directly or to execute primarily supervisory
functions. The machine has a significant level of autonomy and
intelligence for certain key functions, enabling the human operator and
the machine to work together in a symbiotic system.
This concept is not science fiction, by the way. The U.S. Defense
Advanced Research Projects Agency (DARPA) has been supporting research
that resulted in recent initial demonstrations of these capabilities
using prototype vehicles. For a number of years, the European Community
has been supporting a major program on intelligent vehicles/highway
systems (IVHS), which has demonstrated automobile road following, lane
changes, and obstacle avoidance. The United States has a similar IVHS
effort under way.
Recently, car rental agencies in the Orlando, Florida, area began
participating in an experiment using automobiles equipped with
electronic means to obtain position data from the Global Positioning
System (GPS). These smart cars can also communicate with sensors
installed along the highways and with traffic control centers to obtain
map information, as well as the latest news on traffic jams and
recommended alternate routes. Combining these features places this kind
of automobile in the spectrum of intelligent machines ranging from
effective human-machine symbiotic systems to robotic devices that have
few requirements for human interaction or supervision.
As increasingly intelligent mechanisms and subsystems are incorporated
into the automobile, the environment in which the automobile operates
changes. For example, the smart car will be able to locate itself and to
avoid traffic jams if the highway system includes an infrastructure of
sensors and information and communication systems (i.e., the operational
environment of the intelligent machine).
APPLICATIONS FOR ROBOTS
A robot has intelligence if it can plan and adapt its actions and
behavior based on knowledge stored in its computer memory, acquired
through its sensors, or supplied by other robots or a human
operator. Distinctions between robots, telerobots, and remotely operated
systems and devices are often made to indicate the degree to which
sensing, decision making and planning, and acting are incorporated into
one device or split between the machine and the human. These
distinctions may be useful for categorizing different robot systems. All
these machines, however, are being developed for the same purpose: to
extend human capabilities into work environments that are hazardous to
people; to perform work that people cannot or should not do; and to
transcend the limitations of human sensory, manipulatory, and control
capabilities.
This is a most exciting time for scientists and engineers who are
developing intelligent robots and for everyone anticipating the benefits
of working robots. The development of intelligent robots, long a dream,
has accelerated because of (1) new, important uses for reliable robots,
which exert a significant pull that encourages the integration of
enabling technologies into robotic test beds, and (2) significant
advances in the enabling technologies and research breakthroughs that
push toward the development of robots that have increasingly advanced
capabilities. Examples of both are given here.
In 1990, Frank Sweeney gave an excellent synopsis in the Review (Vol.
23, No. 3) of our efforts to develop robots for nuclear power stations
and environmental restoration and waste management (ERWM). Since then
the use of robots in nuclear power stations has increased significantly.
More than 25 utility companies are using remotely operated robots in
nuclear power plants for tasks ranging from pipe inspections to routine
in-service surveillance. The ERWM robotics program evolved into a
significant multilaboratory effort to combine existing robotics
technologies to demonstrate faster, better, and cheaper ERWM operations
through the use of remotely operated systems. In addition, the
international community for developing fusion energy recognized the need
for robotics and remote systems for maintenance of the proposed
International Thermonuclear Experimental Reactor (ITER). ORNL remote
systems engineering staff are making key contributions to the ITER
design project.
In response to the presidential initiative in space exploration, the
Department of Energy, through its new Office of Space, is contributing
technological expertise to the U.S. space program. The Stafford
Committee report identifies robotics as a key enabling technology for
all missions planned for the moon and Mars. Lunar missions using robots
are being planned for the second half of this decade. ORNL, which brings
to the table significant capabilities in many enabling technologies, was
assigned to head the DOE space robotics working group.
Robotics and artificial intelligence are on the list of technologies
that the Department of Defense considers critical to accomplishing its
missions in the world under political conditions that have changed
significantly. ORNL researchers have a successful track record of
rapidly moving results from research in intelligent machines into
systems that can be tested in the field.
Manufacturing has been included in the list of critical technologies
that the U.S. government has identified to make our country competitive
in world commerce. The United States is rapidly falling behind Europe
and Japan in advanced manufacturing capabilities. At least two related
problems are of major concern: (1) insufficient mechanisms to move
research results and technologies into the commercial arena and (2)
difficulties with applying advanced technologies to flexible
manufacturing to fabricate high-quality products competitively. Advanced
manufacturing, a presidential initiative in 1994, includes
computer-aided design and manufacturing (CAD/CAM, or
CIM--computer-integrated manufacturing), robotics and automation, "just
in time" logistics and supplies organization, nondestructive testing and
multisensor monitoring for quality assurance, statistical modeling and
testing for quality control, and human-machine interfaces for advanced
instrumentation and control of complex processes. Many of these advanced
manufacturing capabilities will require intelligent robots.
ORNL'S CONTRIBUTIONS
An intelligent robot is a functioning system resulting from the
integration of many components. To make this systems integration
successful, a number of component technologies must come together. These
include mobility systems, multiple sensors, robot manipulators (arms),
dextrous end-effectors (hands), high-performance computers,
communications systems, reliable control and decision-making
methodologies, and effective operator control stations. ORNL has helped
advance the state of the art in robotics and artificial intelligence in
several ways. Since the early 1980s ORNL's Center for Engineering
Systems Advanced Research (CESAR) in the Engineering Physics and
Mathematics Division has been studying and developing intelligent
machines. DOE's Office of Basic Energy Sciences supports our core
research program. Other DOE offices (e.g., Nuclear Energy, Environmental
Restoration and Waste Management) support our applied development
activities. CESAR staff have also successfully met research and
development goals for a number of other sponsors, including the U.S.
Army, the Air Force Wright Research and Development Center, and the
Office of Naval Research. Much of the work done at CESAR has been
published in the refereed literature and has had a significant impact on
applications.
Many basic problems that humans solve without even consciously making an
effort are incredibly difficult for intelligent robots. For example, a
person can easily go from one location in a room to another without
bumping into things, even if other people or objects are moving around
in the same room in ways that are not completely predictable. Solving
this problem exactly for a mobile robot is computationally very hard:
no optimal solution can be found in any practical amount of time.
But do we need an optimal solution? Not really. The robot must get from
A to B swiftly without having to spend a lot of time replanning when the
situation in the room changes, and it needs to get there safely without
bumping into people and objects. Ideally, we want the machine to react
as quickly as the sensors provide new data. We want it to behave as we
would: to react to new, uncertain, and imprecise sensor data by
executing appropriate behaviors. We guess the likely motion of a moving
obstacle, and we evade an object when we get close enough without being
able to say precisely what distance is "close enough."
Some time ago, Francois Pin of CESAR realized that the theory of fuzzy
sets and the corresponding fuzzy logic and decision-making theory
provide a mechanism for qualitative reasoning. This mechanism may allow
a robot to solve the navigation problem as effectively as we solve it.
In collaboration with Hiroyuki Watanabe at the University of North
Carolina, Pin also found that this kind of decision making can be
performed in real time, even faster than the sensors provide the data.
Together they developed unique computer boards that incorporate
very-large-scale integration (VLSI) fuzzy-logic chips. The computer boards
were mounted on one of the new mobile platforms at CESAR. That robot now
navigates reliably and safely in its work space. It "knows" nothing
about the obstacles that it needs to avoid. Without solving any
computationally complex problems in high-level planning, it reacts to
sensor data based on as few as 13 different rules or behaviors. The
computing hardware that allows this real-time performance is now being
replicated to support further work in real-time qualitative reasoning
for other seemingly intractable planning and control problems.
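The CESAR implementation itself is not reproduced here, but the flavor of rule-based reactive navigation can be illustrated with a minimal sketch. Everything below is hypothetical for illustration: the membership-function shapes, distance thresholds, and the three rules are invented, not taken from Pin's rule set.

```python
# Illustrative sketch only (not the CESAR controller): a reactive navigator
# driven by a handful of fuzzy rules instead of an optimal planner. All
# thresholds and rules here are invented for the example.

def trapezoid(x, a, b, c, d):
    """Membership in a trapezoidal fuzzy set: 0 outside [a, d], 1 on [b, c]."""
    if b <= x <= c:
        return 1.0
    if x <= a or x >= d:
        return 0.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def steer(front_dist, left_dist, right_dist):
    """Map three range readings (meters) to a turn command in [-1, 1].

    Positive means turn left, negative means turn right, 0 means go straight.
    """
    # Fuzzify: how strongly does each reading mean "obstacle close"?
    near_front = trapezoid(front_dist, 0.0, 0.0, 0.5, 1.5)
    near_left  = trapezoid(left_dist,  0.0, 0.0, 0.5, 1.5)
    near_right = trapezoid(right_dist, 0.0, 0.0, 0.5, 1.5)

    # Rules: IF obstacle ahead AND left side clearer THEN turn left (and the
    # mirror rule); also veer away from a wall closing in on either side.
    turn_left  = max(min(near_front, 1.0 - near_left), near_right)
    turn_right = max(min(near_front, 1.0 - near_right), near_left)

    # Defuzzify: combine the two rule strengths into one command.
    total = turn_left + turn_right
    if total == 0.0:
        return 0.0  # path clear: go straight
    return (turn_left - turn_right) / total
```

The point of the sketch is the one made in the text: no map, no high-level plan, and no computationally complex optimization, just a few rules mapping imprecise sensor readings directly to behavior at sensor rate.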
Intelligent machines must learn from and adapt to their environment. For
environments with little predefined structure, learning is a difficult,
yet essential, task. Most information is sensor-based, and active
sensing must be possible to create and continuously update a model of
the robot's surroundings. CESAR researcher Ed Oblow has investigated the
use of random set representations for probably approximately correct
(PAC) learning and reported significant improvements in learning speed.
Oblow, Chuck Glover, and the late Gunar Liepins, together with Nageswara
Rao of Old Dominion University, investigated the capabilities of a
system consisting of n (any number greater than 1) learners combined by
an algorithm that can integrate the knowledge generated by the
individual learners. Given a system of n PAC learners, a method was
developed that makes the composite system perform better than the best
of the individual learners.
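The published composite-learner algorithm is more general than anything shown here, but the core phenomenon, a combination of imperfect learners outperforming the best individual learner, can be demonstrated with a simple majority vote over simulated learners. The learner model (each one flips the true label with a fixed error rate) and all numbers are invented for this sketch.

```python
# Illustrative sketch only: a majority vote over simulated imperfect
# "learners" outperforms the best individual learner. This is not the
# CESAR algorithm, just a demonstration of the underlying effect.
import random

def make_learner(error_rate, rng):
    """A toy learner that returns the true binary label, flipped with
    probability error_rate."""
    def predict(true_label):
        return true_label if rng.random() > error_rate else 1 - true_label
    return predict

def accuracy(predictors, trials, rng):
    """Fraction of trials on which the majority vote of predictors is right."""
    correct = 0
    for _ in range(trials):
        truth = rng.randint(0, 1)
        votes = sum(p(truth) for p in predictors)
        majority = 1 if 2 * votes > len(predictors) else 0
        correct += (majority == truth)
    return correct / trials

rng = random.Random(0)
learners = [make_learner(0.3, rng) for _ in range(11)]  # each wrong 30% of the time
best_alone = accuracy(learners[:1], 10_000, rng)        # roughly 0.70
together   = accuracy(learners, 10_000, rng)            # noticeably higher
```

With eleven independent learners that are each right 70% of the time, the majority is right whenever at least six agree with the truth, which happens far more often than 70% of the time; that is the sense in which the composite system beats its best member.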
This work represents a development that benefits applications of machine
learning, including identification of parameters that govern complex
processes, such as flexible manufacturing systems and adaptive pattern
recognition systems for robot sensors and other applications. The
results are expected to have considerable impact on the important
problem of designing systems consisting of multiple, relatively simple
agents that cooperate to solve problems of realistic complexity.
Examples include crews of multiple mobile robots, each capable of
executing relatively simple behaviors, such as obstacle avoidance, or
other control algorithms and capable of adapting their behavior through
learning.
Part of CESAR's research addresses issues that arise when the
technologies developed in our programs are integrated in robot test beds
and prototypes to demonstrate increasingly complex capabilities. In the
fall of 1991, Pin and Alex Bangs of CESAR and Steve Killough of ORNL's
Robotics and Process Systems Division automated several manipulation and
safety-enhancing functions on an outdoor material-handling vehicle
prototype. The customer, Fort Belvoir Research and Development Center in
Fort Belvoir, Virginia, is now field testing the prototype. The operator
of the vehicle can now control the position of the forklift directly,
instead of having to control separately each joint of the long-reach
manipulator to which the forklift is attached, as was the case before
the vehicle was outfitted with new controls. This work is a good
example of our ability to move basic research results (in this case,
control of kinematically redundant mechanisms) into fieldable systems in
a short amount of time (in this case, 15 months).
In early 1991 another technology integration demonstrated the ability of
ORNL's mobile robot, HERMIES-III, to perform surveys of surface
radiation contamination in waste storage containers as a way of reducing
hazards to workers at waste facilities. This experiment used HERMIES-III
as a telerobot controlled by a human operator. Many functions were
executed automatically, such as path planning, navigation with local
obstacle avoidance, range image analysis to locate the waste storage
containers, control of the seven-degree-of-freedom manipulator on board
HERMIES-III, and positioning of the platform so the robot hand holding
the beta-radiation detector could scan the container surface. The
experiment was the result of collaborative research among four teams at
the universities of Florida, Michigan, Tennessee, and Texas. It involved
the integration of over 100,000 lines of software developed by the
university teams and at CESAR, running on a total of 27 central
processing units on board and off board HERMIES-III in a heterogeneous
network. The integration was made possible by the Helix programming
environment developed by Judson Jones at CESAR and Phil Butler of the
Robotics and Process Systems Division. Helix simulates shared memory on
a heterogeneous network of computers. The machines in the network can
vary with respect to their native operating systems and internal
representation of numbers. Helix, which was designed to present a simple
programming model to developers, also considers the needs of designers,
system integrators, and maintainers.
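Helix's internals are not shown here, but the portability problem it must solve can be illustrated. When machines with different native byte orders and number formats share data, every shared value has to cross the network in one agreed-upon representation. A minimal sketch, using Python's standard `struct` module and an invented sensor-reading record:

```python
# Illustrative sketch only: encoding shared values in a single agreed
# byte order so machines with different native representations can
# exchange them. The (sensor_id, value) record is invented for the example.
import struct

def pack_reading(sensor_id, value):
    """Encode an (unsigned int, 32-bit float) pair in network (big-endian)
    byte order, regardless of the sending host's native order."""
    return struct.pack(">If", sensor_id, value)

def unpack_reading(buf):
    """Decode the pair on any receiving host, again independent of its
    native byte order."""
    return struct.unpack(">If", buf)

buf = pack_reading(7, 3.5)
sensor_id, value = unpack_reading(buf)  # same pair on any host
```

A shared-memory layer like Helix hides exactly this kind of conversion behind an ordinary read/write interface, which is what lets 27 processors with different operating systems and number formats behave as one system.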
NEW OPPORTUNITIES
Can we build useful intelligent robots? Yes, but for quite a while they
may not have the "right stuff" for the movies. Intelligent robots
represent exciting new opportunities for research that transcend the
boundaries of traditional scientific and engineering disciplines, such
as mechanical and electrical engineering, physics, biology, mathematics,
computer science, and materials science. The dialogue among experts in
these disciplines who are making a serious commitment to communicate and
learn from each other is a vital ingredient for a successful program in
intelligent robots.
We also must meet new challenges in educating the next generation of
experts. Since the start of the CESAR programs, we have been working
closely with our colleagues in academia. A great number of undergraduate
students have had their first exposure to interdisciplinary research
with us. Many graduate students have performed a significant part of
their thesis research using our special intelligent machines facilities.
We were also fortunate to be able to host a number of pre- and
postdoctoral fellows from abroad (including researchers from Belgium,
Denmark, Germany, France, Japan, Norway, and South Korea). As part of a
new Center for Neural Engineering at Tennessee State University, CESAR
provides undergraduate and graduate, as well as faculty, research
opportunities in neural network research for robotics applications. The
influx of new ideas from all these collaborations keeps the CESAR "old
timers" on their toes.
BIOGRAPHICAL SKETCH
Reinhold C. Mann has been head of the Engineering Physics and
Mathematics Division (EPMD) Intelligent Systems Section and director of
ORNL's Center for Engineering Systems Advanced Research (CESAR) since
1989. He is also an adjunct associate professor in the Computer Science
Department at the University of Tennessee in Knoxville. He received an
M.S. degree in mathematics in 1977 and a Ph.D. degree in physics in 1980
from the Johannes Gutenberg University in Mainz, Germany. From 1978
until 1980 Mann was a research associate in the Biophysics Department at
Mainz University and a consultant on digital image analysis and pattern
recognition with the Laser and Optics Group at Battelle Institute in
Frankfurt, Germany. In 1980 he joined the Image Analysis Group at the
Fraunhofer Institute for Data and Information Processing in Karlsruhe,
Germany. Mann was awarded a Feodor-Lynen Fellowship by the Alexander von
Humboldt Foundation in Bonn, Germany, which supported his research in
pattern recognition and analysis in 1981 and 1982 as a visiting
scientist in ORNL's Biology Division. He was a staff member in the
Biology Division from 1983 until 1986, when he joined the EPMD to work
on multisensor systems for intelligent machines and mobile robots. He
was leader of EPMD's Advanced Computing and Integrated Sensor Systems
Group from 1987 until 1989.
Reinhold C. Mann
(keywords: robotics, artificial intelligence)
Date Posted: 1/11/94 (ktb)