Date: Wed, 20 Mar 91 23:05 PST
Subject: Brian Williams and Danny Bobrow position paper


{\LARGE "The Physicist, Chemist and Biologist:\\
A Multi-level View on Knowledge Sharing"}


{\large by Brian C. Williams and Daniel G. Bobrow}


Position paper for the Workshop on Shared, Reusable Knowledge Bases.

One of the more ambitious knowledge sharing efforts has been the
development of our current textbook knowledge about physical systems.
In this technical note we explore what characteristics of this process 
have been productive, and why. 

{\it Knowledge is best captured in task specific ways.}

It is easy for our knowledge sharing efforts to fall into a knowledge
representation black hole.  It's important to avoid the urge to represent
knowledge in an absolute, use-independent manner.  Many of us
have learned this lesson painfully.  One of us spent the summer of
'82 using Krypton to represent everything one might need to know in order
to qualitatively analyze MOS circuits.  The effort digressed to defining
holes, electrons, summations, functions, mappings and the like.  Some at
PARC have slithered down the same hole while trying to construct models
of copiers.

All scientific theories are abstractions of the world.  They have been
constructed carefully based on their adequacy with respect to a set of
tasks.  For example, circuit theory, electroquasistatics,
electrodynamics, and quantum electrodynamics, are all devoted to
determining the electromagnetic behavior of a device at successively
more detailed levels.  Given a particular task, an electrical engineer
selects among them the simplest adequate theory.

The adequacy of the knowledge we capture should be measured, and is most
fruitfully explored, with respect to a set of tasks.  For example, tasks
include diagnosis, design, process control, repair, reconfiguration,
explanation and theory formation.

The representations used to describe a domain of discourse are only a
small element of what it takes to perform a task.  Thus our goal should
not be to provide knowledge representation languages devoid of
reasoning.  Rather we should produce combinations of
representation/reasoning capabilities.

{\it Frameworks should facilitate sharing across different tasks.}

While our representation of knowledge depends on how it is used, we
would not be pleased if we had to construct a different representation
for every task we automated.  Instead we take a middle ground.  For a
particular domain of discourse there is a set of concepts and a set of
reasoning processes on these concepts that the tasks share.
For example in the area of reasoning about physical systems, the
concepts include structure, behavior, role, device, process, cause,
history, episode, interaction, and so forth.  The reasoning processes
address particular questions about these core concepts.  For example,
the basic questions underlying most engineering tasks are what does a
device do, and how does it work?  Building on these are further
questions: What is it like?  What is most relevant?  How
did we create it?  Does it work correctly?  Where is it broken?  And how
do we fix it?  No doubt the most important goal of our knowledge sharing
effort should be to construct a series of computational theories of
knowledge by identifying and analyzing the basic concepts, questions and
reasoning processes that underlie our domains of discourse.

{\it Frameworks may be constructed on top of existing theories of knowledge.} 

A theory in physics or chemistry identifies a set of basic phenomena and
questions of interest about those phenomena.  The theory then provides a
set of analysis techniques and representations that are appropriate to
answer the questions.  Much of the work in qualitative physics builds
directly on top of the lumped-parameter theories of physical system
dynamics and circuit theory.  More recent work builds directly on top of
kinematics and thermodynamics.  It seems likely that we will be able to
build a broad set of computational theories directly on top of existing
theories of physics and chemistry.

{\it Knowledge can capture the same phenomena at different levels.} 

The notions of domain dependence and domain independence are
misguided.  Most of us would find a statement like the following
quite peculiar -- ``physics is a domain independent theory, while
chemistry and biology are domain dependent.''  The concepts of
domain dependence/independence are too black and white to be useful.
Each theory has a range of applicability somewhere in the vast middle
between case specific and all encompassing.  

Further, no theory is clearly more general than another.  Rather each
theory deals with physical phenomena at a different level of
granularity; biology deals with organisms, chemistry with molecules, and
physics with atoms and their makeup.  While biology builds upon
chemistry and chemistry upon physics, one would not say that chemistry
is subsumed by physics or that one would want to apply physics to
chemistry problems.  Each is simply different.

Knowledge sharing should respect the different
granularities of knowledge and of reasoning about that knowledge.  For
reasoning about physical systems, one might draw on a 
metaphor taken from physics.  The ``molecules'' 
are behavior/structure objects, like lumped elements,
processes, bond graphs, or kinematic pairs.  The ``atoms'' are
individual interactions relating the parameters of a process, lumped
element, and so on.  Finally, the ``subatomic particles'' are the
constituents of interactions, such as the qualitative and quantitative
algebraic operators that are used to describe the individual
interactions.

A theory at each level consists of a set of questions, or reasoning
tasks, most appropriately viewed at that level.  It should provide a set
of representations of objects at that level and argue the efficacy of
the representations with respect to those questions.  And it should
characterize the interrelationship between objects in that theory and
ones at higher and lower levels.
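
As a toy illustration of the ``subatomic'' level, here is a minimal
sketch, in Python and entirely our own, of a qualitative sign algebra
of the sort that serves as a constituent of interactions (the operator
names are ours, chosen for illustration):

```python
# A minimal qualitative sign algebra over '+', '0', '-', and '?'
# (unknown).  Such operators are the "subatomic" building blocks
# from which individual interactions can be described.

def q_add(a, b):
    """Qualitative addition of two signs."""
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return a          # (+)+(+) = +, (-)+(-) = -
    return '?'            # opposite or unknown signs: ambiguous

def q_mul(a, b):
    """Qualitative multiplication of two signs."""
    if a == '0' or b == '0':
        return '0'
    if '?' in (a, b):
        return '?'
    return '+' if a == b else '-'
```

Even this tiny algebra already exhibits the characteristic loss of
information (the ambiguous sum of opposite signs) that distinguishes
qualitative from quantitative reasoning.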

An example at the molecular level is Forbus' qualitative process theory.
The work develops a process vocabulary that spans several domains (e.g.,
hydraulics, mechanics and thermodynamics) and a variety of tasks (e.g.,
envisionment and measurement interpretation).  At the atomic level,
Williams' research has been working towards a general theory of physical
interactions.  The theory is intended to formalize both the structural
and behavioral aspects of interactions.  The theory addresses questions
like: Are two descriptions of interactions equivalent?  Is one
interaction an abstraction of another?  What is the composite behavior
of a set of interactions?  What role does each interaction play towards
achieving some behavior of interest (i.e., what properties does each
interaction contribute?).  Interactions are classified along several
dimensions: static versus dynamic, causal versus constraining,
intensional versus extensional, and qualitative versus quantitative.  The
goal of the theory is to provide representations for each and to analyze
the formal properties of these representations relative to the above set
of questions.
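
To make the molecular level concrete, here is a hedged sketch of what a
heat-flow process and its activation test might look like as a data
structure.  The field names and the activation rule are our own
simplification for illustration, not QP theory's actual syntax:

```python
# A sketch of a "molecular"-level object in the spirit of a
# qualitative process: individuals, quantity conditions, and
# influences on the quantities of those individuals.

heat_flow = {
    "individuals": ["src", "dst", "path"],
    "quantity_conditions": ["temperature(src) > temperature(dst)"],
    "relations": ["flow_rate proportional_to "
                  "(temperature(src) - temperature(dst))"],
    "influences": [
        ("I-", "heat(src)", "flow_rate"),   # flow drains the source
        ("I+", "heat(dst)", "flow_rate"),   # and fills the destination
    ],
}

def active(process, model):
    """A process is active when all of its quantity conditions
    hold in the current model (here, a dict of known facts)."""
    return all(model.get(c, False) for c in process["quantity_conditions"])
```

A reasoner at this level asks which processes are active and composes
their influences; the "atomic" level below it analyzes the individual
influences themselves.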

An important goal of a theory of interaction is to unify those parts of
existing molecular theories (e.g., processes, devices, bonds, and
kinematic pairs) that describe aspects of individual interactions.
These include structural constructs, like terminals, nodes, ports,
individuals, and parameters, as well as behavioral constructs like
confluences, qualitative influences and proportionalities, quantity
conditions, episodes and causal orderings.  The goal is to provide a
computational framework in which we can rationally reconstruct and
combine these different molecular theories.  To this end, in the theory
being developed, structural aspects of interactions are represented in a
terminological language, Iota; behavioral aspects are represented with a
qualitative/quantitative algebra, Q1; and their interrelationship is
represented by embedding these languages in a sorted first-order Horn
logic.
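
As a toy illustration of behavioral constructs like confluences, here
is a sketch of checking whether an assignment of derivative signs
satisfies a confluence (a qualitative equation over signs).  The
confluence and variable names below are illustrative only, not taken
from an actual device model:

```python
# Checking a confluence: a qualitative equation over the signs of
# derivatives.  Q_ADD gives the qualitative sum of two signs; '?'
# means the sum is ambiguous.

Q_ADD = {
    ('+', '+'): '+', ('-', '-'): '-', ('0', '0'): '0',
    ('+', '0'): '+', ('0', '+'): '+',
    ('-', '0'): '-', ('0', '-'): '-',
    ('+', '-'): '?', ('-', '+'): '?',
}

def satisfies(confluence, signs):
    """confluence is ((x, y), z), asserting dx + dy = dz.
    An ambiguous sum is consistent with any right-hand side."""
    (x, y), z = confluence
    total = Q_ADD[(signs[x], signs[y])]
    return total == '?' or total == signs[z]

# A hypothetical confluence for an open valve: dP + dA = dQ.
valve = (('dP', 'dA'), 'dQ')
```

The ambiguity case is what makes composing many such interactions, and
hence the equivalence and abstraction questions above, nontrivial.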

{\it Probes of the same artifact can illuminate connections.}

Once representations for different tasks and different levels have
been explored, it is often useful to consider substantially
different tasks with respect to a particular artifact.  For example, one
of our modeling projects uses a Xerox copier as the artifact.  The
copier involves a sufficiently varied set of technologies that its
analysis is likely to expose new types of phenomena to be represented.
This modeling effort is focused by capturing only the knowledge that is
necessary to perform a selected set of tasks (e.g., diagnosis and
natural language generation of behavior).  Developing the model based on
several tasks is important both because the tasks expose different
features of the artifact that need to be represented, and because they
allow us to identify those features that are shared across tasks.

A similar approach has been taken in developing our theory of
interactions.  Early work like the temporal qualitative analysis of MOS
circuits provided a coarse probe intended to expose gross features of
interactions in the context of explanation.  Additional probes have
explored the use of interactions in a variety of tasks: explanation,
design verification, design debugging, and model abstraction.  More
recent work has been devoted to more finely tuned probes that isolate
particular features of interactions.  With respect to behavior, the work
on concise histories and temporal constraint propagation was used to
isolate the causal/dynamic aspects of interactions, while work on the
hybrid qualitative/quantitative algebra Q1 was used to isolate the
static/intensional aspects.  Similar probes have been explored for
structure.  In the work on interaction-based design, the terminological
language Iota was developed as an atomic level representation of device
structure.  The adequacy of this representation is being tested for
other molecular representations (currently kinematic pairs and,
hopefully, Forbus' processes in the not-too-distant future).