Page 1

AUTONOMOUS AGENTS

Edited By

George A. Bekey
University of Southern California

Reprinted from
a Special Issue of

AUTONOMOUS ROBOTS
Volume 5, No. 1

March 1998

SPRINGER SCIENCE+BUSINESS MEDIA, LLC

Page 2

AUTONOMOUS ROBOTS
Volume 5, No. 1, March 1998

Special Issue on Autonomous Agents

Introduction ......................................................... George A. Bekey 5

Development of an Autonomous Quadruped Robot for Robot Entertainment
.................................................. Masahiro Fujita and Hiroaki Kitano 7

Basic Visual and Motor Agents for Increasingly Complex Behavior Generation on a Mobile Robot
............................................ María C. García-Alegre and Felicidad Recio 19

An Autonomous Spacecraft Agent Prototype
.... Barney Pell, Douglas E. Bernard, Steve A. Chien, Erann Gat, Nicola Muscettola, P. Pandurang Nayak, Michael D. Wagner and Brian C. Williams 29

Map Generation by Cooperative Low-Cost Robots in Structured Unknown Environments
...................... M. López-Sánchez, F. Esteva, R. López de Mántaras, C. Sierra and J. Amat 53

Grounding Mundane Inference in Perception ............................. Ian Douglas Horswill 63

Interleaving Planning and Robot Execution for Asynchronous User Requests
.............................................. Karen Zita Haigh and Manuela M. Veloso 79

Integrated Premission Planning and Execution for Unmanned Ground Vehicles
.................................... Edmund H. Durfee, Patrick G. Kenny and Karl C. Kluge 97

Learning View Graphs for Robot Navigation
.............. Matthias O. Franz, Bernhard Schölkopf, Hanspeter A. Mallot and Heinrich H. Bülthoff 111

The cover shows TITAN VIII, a quadruped walking machine developed at Tokyo Institute of Technology in the
Laboratory of Professor Shigeo Hirose. Professor Hirose is one of the world's leading designers and builders of
autonomous robots.


Page 60

Autonomous Robots, 5, 63-77 (1998)
© 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.

Grounding Mundane Inference in Perception *

IAN DOUGLAS HORSWILL
The Institute for the Learning Sciences, Northwestern University, 1890 Maple Avenue, Evanston, IL 60201

[email protected]

Abstract. We describe a uniform technique for representing both sensory data and the attentional state of an agent using a subset of modal logic with indexicals. The resulting representation maps naturally into feed-forward parallel networks or can be implemented on stock hardware using bit-mask instructions. The representation has "circuit-semantics" (Nilsson, 1994; Rosenschein and Kaelbling, 1986), but can efficiently represent propositions containing modals, unary predicates, and functions. We describe an example using Kludge, a vision-based mobile robot programmed to perform simple natural language instructions involving fetching and following tasks.

Keywords: vision, active perception, reasoning, knowledge representation, agent architectures

*A preliminary version of this paper was presented at the First International Conference on Autonomous Agents in February 1997.

1. Introduction

Suppose you want to program a robot to accept simple
instructions like "bring the green ball here." A likely
plan for solving this task might be:

1. Search for a green ball and track it visually as the ball and/or robot move
2. Drive to the ball
3. Grab the ball
4. Drive to the starting position
5. Release the ball

Executing such a plan robustly requires the robot to perform error detection and recovery. If the ball slips out of the gripper during transit, the robot needs to detect the situation and rerun steps 2 and 3. Unfortunately, the plan doesn't say that the ball needs to stay in the gripper, just that the robot should perform a grasp at a certain point in the plan.

This paper addresses two problems in autonomous agency. The first is the problem of making mundane inference and problem solving processes, such as determining how to deliver a ball, run fast enough that they are effectively instantaneous. If the problem solver runs fast enough, then error detection and recovery are easy: we just keep rerunning the problem solver from scratch. The second problem is grounding the problem solver's inferences in continually updated sensor information so that decisions about what to do next change in real time as sensor readings change.
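As a concrete illustration of this rerun-from-scratch discipline, consider the following minimal Python sketch; the sensor fields and action names are invented stand-ins, not Kludge's actual interface.

    # Minimal sketch of the rerun-from-scratch discipline; all sensor
    # fields and action names are hypothetical, not Kludge's interface.
    def choose_action(world):
        """Recompute the next primitive action from the current sensor
        snapshot alone. Because nothing is cached between calls, a ball
        that slips from the gripper simply changes the branch taken on
        the next tick; steps 2 and 3 rerun with no explicit recovery code."""
        if not world["ball_visible"]:
            return ("search_for_ball",)
        if not world["ball_in_gripper"]:
            if world["at_ball"]:
                return ("grab_ball",)
            return ("drive_to", world["ball_position"])
        if not world["at_start"]:
            return ("drive_to", world["start_position"])
        return ("release_ball",)

    def control_loop(read_sensors, execute):
        while True:                             # sense, decide, act, repeat
            execute(choose_action(read_sensors()))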

I will argue that these two problems are related. I will present an architecture that allows a useful subset of modal logic to be compiled into fast feed-forward networks. The networks both drive and are driven by a modern active vision system on a real mobile robot (figure 1). The architecture allows designers to build systems with the performance characteristics of behavior-based systems (Mataric, 1997), while simultaneously providing much of the generativity of traditional symbolic problem solvers.
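The abstract's remark that the representation can be implemented with bit-mask instructions on stock hardware can be illustrated with a toy example; the propositions and data below are invented, and this is a sketch of the general technique rather than the paper's actual encoding.

    # Toy bit-mask evaluation of one compiled rule. Each proposition is a
    # machine word whose i-th bit gives its truth value for candidate
    # object i, so one AND evaluates the rule for a word of objects at once.
    green = 0b101   # objects 0 and 2 are green   (invented data)
    ball  = 0b011   # objects 0 and 1 are balls
    near  = 0b111   # all three objects are near

    # graspable(x) <- green(x) AND ball(x) AND near(x), for all x at once:
    graspable = green & ball & near
    print(bin(graspable))   # 0b1: only object 0 satisfies the rule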

1.1. Agent architectures

Much of the work in this decade on agent architectures has been concerned with the problem of merging the strengths of reactive and deliberative systems (see, for example, Hexmoor, Horswill and Kortenkamp, 1997).¹ A pure reactive system contains no internal state (memory) and consists of a set of task-specific, pre-programmed rules for firing actions based on immediate sensory input. The rules are typically implemented using some kind of parallel network whose inputs are driven by sensors and whose outputs drive the effectors. Purely deliberative systems (planners) compute in advance, and commit to, a complete series of actions intended to achieve the goal. The series of actions (the plan) is typically computed from the goal using a combination of search and simulation.

Both approaches have obvious flaws. It is easy to construct examples of non-Markovian environments in which a reactive system's lack of internal state will get it into trouble. The task-specificity of their rules also raises questions about scaling and generativity. On the other side, deliberative systems typically require exponential search, making them slow for simple problems and unusable for complex ones. Another issue is that purely deliberative systems commit to a plan in its entirety. The only way they can respond to unexpected contingencies is for the plan to fail entirely, triggering a restart of the planner. However, it can be difficult for the executive running the plan to know how to judge when the plan has failed, since the plan contains only the actions to perform, not their rationale. In a simple planner/executive architecture, all the domain knowledge is in the planner, but understanding whether a plan has failed requires at least as much knowledge as formulating it in the first place.

Fig. 1. Kludge the robot during a delivery task. Its job is to search for, approach, and grasp a ball of specified color and to deliver it to a person wearing a specified color or to a designated point in space. It must quickly recover from problems such as the ball slipping out of its mandibles or the ball being momentarily occluded.

Clearly, one wants a system that combines the strengths of reactive and deliberative systems. The most common approach to this has been to build a hybrid system incorporating reactive and deliberative systems as components, typically in a three-tiered architecture: a planner computes plans from goals, an executive sequences them, and a set of reactive systems implement the low-level symbolic actions (see (Arkin, 1997) for an extensive survey). One common argument for such an architecture is that deliberation will always be slower than reaction, so any division of labor between the two will have to allocate fast time-scale activities to the reactive system and slower time-scale activities to deliberative systems (Bonasso et al., 1997).
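The division of labor can be made concrete with a schematic skeleton; the classes and method names below are illustrative assumptions, not the interfaces of any cited system.

    # Schematic three-tiered skeleton: planner / executive / reactive skills.
    # All names are illustrative, not drawn from any cited architecture.
    class Planner:
        """Deliberative tier: slow search from goals to symbolic plans."""
        def plan(self, goal):
            # A real planner searches and simulates; we return a canned plan.
            return ["search_ball", "drive_to_ball", "grab",
                    "drive_home", "release"]

    class Executive:
        """Middle tier: sequences plan steps and monitors completion."""
        def __init__(self, skills):
            self.skills = skills          # symbolic action -> reactive skill

        def run(self, plan, robot):
            for step in plan:
                while not robot.step_done(step):
                    # Reactive tier: fast sensor-to-effector mapping.
                    self.skills[step](robot.read_sensors())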

The other major approach is to select special cases of planning that can be mapped efficiently into parallel hardware. The networks then compute the same input/output mapping as a planner, but run in bounded time. The earliest example of this is Rosenschein and Kaelbling's system, which compiles propositional logic axioms into sequential circuits (Rosenschein and Kaelbling, 1986). Kaelbling's GAPPS system (Kaelbling, 1988) used goal regression to compile a (propositional) planner-like formalism into sequential circuits. Mataric implemented a Dijkstra-like shortest path finder using spreading activation in a behavior-based system (Mataric, 1992). Maes' behavior network system computed an approximation to propositional STRIPS planning using spreading activation (Maes, 1989).
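The spreading-activation idea can be rendered as a small sketch: activation propagates outward from the goal node, and each node remembers the neighbor that activated it first, which is its first hop on a shortest path. This is a plain breadth-first rendering of the wavefront, not Mataric's behavior-based implementation.

    from collections import deque

    # Shortest paths by spreading activation from the goal; a sketch in
    # the spirit of Mataric (1992), not her behavior-based implementation.
    def spread_activation(graph, goal):
        next_hop = {goal: goal}
        frontier = deque([goal])
        while frontier:
            node = frontier.popleft()
            for nbr in graph[node]:
                if nbr not in next_hop:
                    next_hop[nbr] = node   # move toward the activation source
                    frontier.append(nbr)
        return next_hop

    landmarks = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
    print(spread_activation(landmarks, "D")["A"])   # "B": first hop from A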

The use of propositional logic (logic without variables or predicate/argument structure) is severely limiting, however, and there have been a few attempts to rectify it. Nilsson's TRT system (Nilsson, 1994) handles predicate/argument structure by incrementally growing a propositional network as new combinations of arguments are encountered. Agre and Chapman's deictic representation (Agre and Chapman, 1987; Agre, 1988) is an explicit attempt to overcome the limitations of propositional representations in which variable binding is performed in the perceptual system rather than the reasoning system.
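The incremental-growth idea can be sketched as follows; the class and names are invented for illustration and do not reproduce Nilsson's machinery.

    # Sketch of growing a propositional network lazily: each new
    # predicate/argument combination gets its own node on first use.
    class PropositionPool:
        def __init__(self):
            self.nodes = {}                 # (predicate, args) -> node id

        def node_for(self, predicate, *args):
            key = (predicate, args)
            if key not in self.nodes:
                self.nodes[key] = len(self.nodes)   # allocate a new node
            return self.nodes[key]

    pool = PropositionPool()
    a = pool.node_for("holding", "ball3")
    b = pool.node_for("holding", "ball3")   # same combination, same node
    c = pool.node_for("at", "ball3", "bin7")
    assert a == b and a != c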

Page 119


and could be used to switch from one subgraph to the
other.

Using a purely topological representation, our system is necessarily confined to the known path segments coded in the graph. Although it is able to detect neighbouring unconnected vertices, there is no simple way to find novel paths over terrain not contained in the catchment areas of recorded views. However, our experiments have shown that our simple topological representation contains implicit metrical knowledge which might be used to accomplish tasks usually attributed to a metrical representation. This has implications for the interpretation of experimental results: If an animal can be shown to utilize metrical information, one cannot directly conclude that it was acquired explicitly during exploration.

Several information sources can be integrated into a common graph representation, with vertices containing information about different sensory input and internal states. Lieblich and Arbib (1982) propose that animals use a graph where vertices correspond to recognizable situations. The same idea was also used in the robot implementation of Mataric (1991), where vertices are combinations of robot motions with compass and ultrasonic sensor readings. If metric information is available, graph labels can include directions or distances to the neighbouring vertices. This allows not only a wider spacing between snapshots but also the finding of shortcuts between snapshot chains over unknown terrain. A generalization of purely topological maps is a graph whose edges are labelled by actions (e.g., Kuipers and Byun, 1991; Schölkopf and Mallot, 1995; Bachelder and Waxman, 1995). This way, systems can be built which do not depend on just one type of action (in our case this was a homing procedure). Although presented for navigation problems, similar graph approaches may well be feasible for other cognitive planning tasks, as, e.g., in the means-end-fields of Tolman (1932).
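As a small illustration of such action-labelled graphs, the following sketch plans a route as a sequence of actions by breadth-first search; the views and action names are invented, and this is not the system described in the paper.

    from collections import deque

    # View graph with action-labelled edges: (view, action) -> next view.
    # Views and actions are invented for illustration.
    edges = {
        ("view1", "turn_left"):  "view2",
        ("view2", "go_forward"): "view3",
        ("view1", "home_to"):    "view4",
    }

    def plan_actions(edges, start, goal):
        """Breadth-first search returning the shortest action sequence."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            view, actions = frontier.popleft()
            if view == goal:
                return actions
            for (v, action), nxt in edges.items():
                if v == view and nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, actions + [action]))
        return None

    print(plan_actions(edges, "view1", "view3"))  # ['turn_left', 'go_forward']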

Clearly, the system we presented here is extremely
simple compared to biological systems. Our intention
is not to build models of animals, but to identify some
of the basic building blocks that might play a role in
biological navigation. This focus on understanding and
synthesizing behaviour in a task-oriented way leads
to parsimonious solutions with both technological and
ethological implications.

Acknowledgments

The present work has profited from discussions and technical support by Philipp Georg, Susanne Huber, and Titus Neumann. We thank Fiona Newell and our reviewers for helpful comments on the manuscript. Financial support was provided by the Max-Planck-Gesellschaft and the Studienstiftung des deutschen Volkes.

Note

1. If the views are recorded using sensors with overlapping Gaussian receptive fields, the view will be a smooth function of the position.

References

Bachelder, I.A. and Waxman, A.M. 1995. A view-based neurocomputational system for relational map-making and navigation in visual environments. Robotics and Autonomous Systems, 16:267-289.

Brooks, R.A. 1986. A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2(1).

Cartwright, B.A. and Collett, T.S. 1983. Landmark learning in bees. J. Comp. Physiol. A, 151:521-543.

Chahl, J.S. and Srinivasan, M.V. 1996. Visual computation of egomotion using an image interpolation technique. Biol. Cybern., 74:405-411.

Collett, T.S. 1992. Landmark learning and guidance in insects. Phil. Trans. R. Soc. Lond. B, 337:295-303.

Collett, T.S. 1996. Insect navigation en route to the goal: Multiple strategies for the use of landmarks. J. Exp. Biol., 199:227-235.

Franz, M.O., Schölkopf, B., and Bülthoff, H.H. 1997. Homing by parameterized scene matching. In Fourth Europ. Conf. on Artificial Life, P. Husbands and I. Harvey (Eds.), MIT Press: Cambridge, MA, pp. 236-245.

Franz, M.O., Schölkopf, B., Georg, P., Mallot, H.A., and Bülthoff, H.H. 1997. Learning view graphs for robot navigation. In Proc. 1st Intl. Conf. on Autonomous Agents, W.L. Johnson (Ed.), ACM Press: New York, pp. 138-147.

Gallistel, R. 1990. The Organization of Learning, MIT Press: Cambridge, MA.

Gillner, S. and Mallot, H.A. 1997. Navigation and acquisition of spatial knowledge in a virtual maze. J. Cognitive Neuroscience. In press.

Goldman, S. 1953. Information Theory, Dover: New York.

Hong, J., Tan, X., Pinette, B., Weiss, R., and Riseman, E.M. 1991. Image-based homing. In Proc. IEEE Intl. Conf. on Robotics and Automation, pp. 620-625.

Kuipers, B.J. and Byun, Y. 1991. A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations. Robotics and Autonomous Systems, 8:47-63.

Lieblich, I. and Arbib, M.A. 1982. Multiple representations of space underlying behavior. Behavioral and Brain Sciences, 5:627-659.

Mallot, H., Bülthoff, H., Georg, P., Schölkopf, B., and Yasuhara, K. 1995. View-based cognitive map learning by an autonomous robot. In Proc. ICANN'95-Int. Conf. on Artificial Neural Networks, F. Fogelman-Soulié and P. Gallinari (Eds.), EC2: Nanterre, France, Vol. II, pp. 381-386.

Mataric, M.J. 1991. Navigating with a rat brain: A neurobiologically-inspired model for robot spatial representation. In From Animals to Animats, J.-A. Meyer and S.W. Wilson (Eds.), MIT Press: Cambridge, MA.

O'Keefe, J. and Nadel, L. 1978. The Hippocampus as a Cognitive Map, Clarendon Press: Oxford.

O'Neill, M.J. 1991. Evaluation of a conceptual model of architectural legibility. Environment and Behavior, 23:259-284.

Piaget, J. and Inhelder, B. 1967. The Child's Conception of Space, Norton: New York.

Poucet, B. 1993. Spatial cognitive maps in animals: New hypotheses on their structure and neural mechanisms. Psychological Rev., 100:163-182.

Röfer, T. 1995. Controlling a robot with image-based homing. Center for Cognitive Sciences, Bremen, Report 3/95.

Schölkopf, B. and Mallot, H.A. 1995. View-based cognitive mapping and path planning. Adaptive Behavior, 3:311-348.

Thrun, S. 1995. Exploration in active learning. In The Handbook of Brain Theory and Neural Networks, M.A. Arbib (Ed.), MIT Press, pp. 381-384.

Tolman, E.C. 1932. Purposive Behavior in Animals and Men, Irvington: New York.

Wehner, R., Michel, B., and Antonsen, P. 1996. Visual navigation in insects: Coupling of egocentric and geocentric information. J. Exp. Biol., 199:129-140.

Yagi, Y., Nishizawa, Y., and Yachida, M. 1995. Map-based navigation for a mobile robot with omnidirectional image sensor COPIS. IEEE Trans. Robotics Automat., 11:634-648.

Matthias O. Franz graduated with an M.Sc. in Atmospheric Sciences from SUNY at Stony Brook, NY, in 1994, and with a Diplom in Physics from the Eberhard-Karls-Universität, Tübingen, Germany, in 1995. He is currently a Ph.D. student at the Max-Planck-Institut für biologische Kybernetik. His main research activities are in autonomous navigation, ego-motion extraction from optic flow and natural image statistics.

Bernhard Schölkopf received an M.Sc. degree in Mathematics from the University of London in 1992. Two years later, he received the Diplom in Physics from the Eberhard-Karls-Universität, Tübingen, Germany. He recently finished his Ph.D. in Computer Science at the Technische Universität Berlin, with a thesis supervised by Heinrich Bülthoff and Vladimir Vapnik (AT&T Research). He has worked on machine pattern recognition for AT&T and Bell Laboratories. His scientific interests include machine learning and perception.

Hanspeter A. Mallot studied Biology and Mathematics at the University of Mainz where he also received his doctoral degree in 1986. He was a postdoctoral fellow at the Massachusetts Institute of Technology in 1986/87 and held research positions at Mainz University and the Ruhr-Universität Bochum. In 1993, he joined the Max-Planck-Institut für biologische Kybernetik. In 1996/97, he was a fellow at the Institute of Advanced Study, Berlin. His research interests include the perception of shape and space in humans and machines, as well as neural network models of the cerebral cortex.

Heinrich H. Bülthoff is Director at the Max-Planck-Institute for Biological Cybernetics in Tübingen, Germany and Honorarprofessor at the Eberhard-Karls-Universität Tübingen. He studied Biology in Tübingen and Berlin and received his doctoral degree in 1980. He spent three years as a visiting scientist at the Massachusetts Institute of Technology and joined the faculty of the Department of Cognitive and Linguistic Sciences at Brown University in 1988. In 1993, he was elected a Scientific Member of the Max-Planck Society; he currently directs a group of about 30 biologists, computer scientists, mathematicians, physicists and psychologists at the Max-Planck-Institute in Tübingen, working on psychophysical and computational aspects of higher-level visual processes. Dr. Bülthoff has published more than 50 articles in scholarly journals in the areas of Object and Face Recognition, Integration of Visual Modules, Sensori-Motor Integration, Autonomous Navigation and Artificial Life. He has active collaborations with researchers at Brown University, Massachusetts Institute of Technology, NEC Research Institute, Oxford University, Tübingen University, Tel Aviv University, University of Minnesota, University of Western Ontario, University of Texas and the Weizmann Institute of Science.
