ontologies, theories, and primitives

Message-id: <2923419669-3548858@KSL-Mac-69>
Date: Fri, 21 Aug 92  14:01:09 PDT
From: Tom Gruber <Gruber@Sumex-AIM.Stanford.edu>
To: Shared KB working group <srkb@ISI.EDU>
Subject: ontologies, theories, and primitives

Recent messages to srkb have brought up some important issues.
I believe some can be clarified with a more careful definition of
terms, and others should be elevated and examined closely.  Let me
start with a short review on "what is an ontology".

In papers representing the DARPA Knowledge Sharing Effort (KSE),
we have consistently defined ontology as a set of definitions in
English and logic.  Just to be absolutely clear, that means:

Formally, an ontology is:
  a set of terms (relation, function, and object constants)
  a set of documentation strings for these terms
  a set of defining axioms for these terms

For example:

(define-class AUTHOR (?x)
  "An author is a person who writes things.  An author must have a name."

  :def (and (person ?x)
            (has-one-of-type ?x author.name biblio-name)
            (have-same-values ?x author.name person.name)))

is equivalent to the following KIF axioms:

(class author)

(relation-constant 'author)

(documentation author
   "An author is a person who writes things.  An author must have a name.")

(subclass-of author person)
 which means:
 (forall ?x (=> (author ?x) (person ?x)))

(has-single-slot-value-of-type author author.name biblio-name)
 which means:
  (forall ?x (=> (author ?x)
                 (and (exists ?slot-value
                              (and (author.name ?x ?slot-value)
                                   (biblio-name ?slot-value)))
                      (=> (and (author.name ?x ?slot-value1)
                               (author.name ?x ?slot-value2))
                          (= ?slot-value1 ?slot-value2)))))

(forall ?x (=> (author ?x) (= (author.name ?x) (person.name ?x))))

As you can see, the definition is of a relation constant (denoting a
class); it has a documentation string for human reading pleasure,
and it includes a set of axioms that describe how the term AUTHOR can
be meaningfully used in relation to other terms such as the function
PERSON.NAME and the class PERSON.  These axioms say 
  1. every author has exactly one name, and it is of type biblio-name.
  2. the functions AUTHOR.NAME and PERSON.NAME denote the same thing
when applied to instances of the class AUTHOR.

This definition does NOT say a lot of things that you might expect in
a comprehensive knowledge base, such as Cyc.  It doesn't say that
authors tend to be poor or misunderstood, and it doesn't say much
about what's in a person's name.  And the ontology doesn't contain the
ground facts about all the books in the library of congress -- just
the commitments, ma'am.

An ontology of such definitions is a MINIMAL, DECLARATIVE statement of
what a set of cooperating agents need to agree on: what objects exist
in a particular conceptualization, what they are called, and how they
are related to other objects.  

One such agent is a database server.  For a database, the axioms are
called *INTEGRITY CONSTRAINTS*.  A DBMS might enforce these constraints
by analyzing transactions.  If someone adds an author entry in some
person database, and then asserts that the person is a doc.author of
some book, then the constraints on documents say that the person must
be an author.  [(*) see note below]  The DBMS can check these kinds of
constraints.  Then by the definition of the author, we know that the
name-as-author is the same as the name-as-person.  (Of course, this
assumption could be false, because of pseudonyms.  The exception
proves the point of stating the rule.) From the definition of
person.name, we know that the name is a function of the person -- that
is, there is at most one such name per person.  The DBMS might
optimize the query based on this constraint.
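How a DBMS might check these constraints can be sketched in a few lines.
This is Python, not KIF, and the table names and record layout here are
hypothetical; only the constraints themselves come from the ontology.

```python
# A minimal sketch of a DBMS enforcing the AUTHOR axioms as integrity
# constraints.  db is a hypothetical dictionary-of-tables layout.

def check_author_constraints(db, person_id):
    """Return the list of AUTHOR constraints violated for person_id."""
    violations = []
    person = db["persons"].get(person_id)
    # (subclass-of author person): every author must be a person.
    if person is None:
        return ["author is not a known person"]
    names = db["author_names"].get(person_id, [])
    # Every author has exactly one author.name, of type biblio-name.
    if len(names) != 1:
        violations.append("author must have exactly one author.name")
    # author.name and person.name coincide on instances of AUTHOR.
    elif names[0] != person["name"]:
        violations.append("author.name must equal person.name")
    return violations
```

A transaction that adds an author entry would be accepted only if this
check comes back empty.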

An agent that wants to use the database service needs to know these
ontological commitments.  Imagine that the agent wants to find all the
works written by a person.  The query for this information request
could be formulated to get the person's name and then ask for all
references whose ref.author field matches the person's name.  This
query would not produce the correct answer if there were multiple
names per author, or if author names differed from person names.
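The agent's reliance on those commitments can also be made concrete.  In
this hypothetical Python sketch (the field names, like ref.author, are
invented), the query is sound only because the ontology guarantees that
author.name is single-valued and equal to person.name:

```python
# Hypothetical sketch of the querying agent.  With pseudonyms or multiple
# names per author, this lookup would silently miss works -- exactly the
# failure mode the ontological commitments rule out.

def works_by(db, person_id):
    name = db["persons"][person_id]["name"]   # person.name, a function
    return [ref["title"] for ref in db["references"]
            if ref["ref.author"] == name]
```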

Note that for both agents, the formal parts of the definitions (the
axioms) did the work, and the ontological commitments were not
dependent on the inferential powers of the services. The only
assumption is _soundness_: that the inferences produced by agents
operating under these ontologies will produce answers that are
logically consistent with the definitions.

  (*) Footnote. Analyzing this example uncovered some bugs in the biblio
     ontology, which I'll report in a separate message.

Given that overview, let me address some of the email and attempt to
clarify some points.


--- Excerpt from Charles Petrie (Thu, 20 Aug 92 14:07:02 MET DST) ---

> I'm in the camp that believes that an ontology is meaningless
> without an associated theory.  (By "theory", I mean the set of valid
> inferences in the traditional computer science sense, not Guha's
> "contexts".) The meaning of words in a natural language lies in
> their *use* - not static descriptions; similarly with objects.  The
> question here is how to represent object use in a formal
> "declarative form".

I also hold the philosophical opinion that Meaning arises from use in
context.  However, it does not follow that a set of formal definitions
augmented by natural language definitions is "meaningless" for the
purpose of sharing knowledge and facilitating knowledge-level
communication among agents.  In any given ontology, most of the words
will be "open textured" or "primitive" -- they won't be
_completely_defined_ in terms of other primitives.  In the
bibliography ontology, only three classes could be completely defined
in terms of other classes.  The rest are _partially_defined_. The
ontology is the locus of these primitives, plus those derived terms
based on the primitives.  It says which, of the many primitives one
might choose, are the ones that matter for some shared domain and set
of tasks.  

The big assumption is that between the documentation and the axioms we
(the human programmers of agents and users of services) can understand
enough.  We can build programs that enforce the axioms, or at least
the ones we need enforced.  We can interpret answers given to our
queries by reference to the definitions of terms mentioned.

Charles said as much when he described the two uses of the ontology:
as a specification of a protocol for invoking services and as a
reusable framework upon which to build knowledge bases.  

--- Excerpt from Charles Petrie (Thu, 20 Aug 92 14:07:02 MET DST) ---

> The first way is *concurrent use*.  Let's imagine that the bibliography
> is a network server called BIB.  Its Ontolingua definition determines its
> services.  It's "portable" in that anyone can use it if they understand
> the ontology. ...

> The second, and perhaps more conventional, way in which the
> bibliography might be "portable" is as part of a *library* of
> ontologies. 

> We will all never use the same implementation language; just the same
> language for semantic definition.  We can implement BIB with LISP or
> SMALLTALK as long as the program semantics conform to the Ontolingua
> definition.

--- Excerpt from John Sowa (Thu, 20 Aug 92 12:47:31 EDT) ---

> Yes.  If you have well-defined axioms and a disciplined coding style,
> it should be possible to write programs that conform to those axioms
> in any programming language you find convenient.

So far so good.

The controversial part of Charles's point, if I understand it, is that we
need something else besides the definitions: we need to know which
_inferences_ are sanctioned by an ontology.

--- Excerpt from Charles Petrie (Thu, 20 Aug 92 14:07:02 MET DST) ---

> I am claiming that one does have to "share programs" only to the
> extent that one has to define the inferences associated with the
> ontology.  This can be done declaratively, even if in KIF.  Ontolingua
> provides (and should provide) some inferencing semantics.

I think this is an important research question, not a matter of
definition :-).   Part of the declarativist hypothesis is that one
can write down the meaning of terms WITHOUT saying which system
of inference is going to be applied to them.  As I stated above, 
the only guarantee specified in an ontology is soundness.

Now, we may wish to explore the idea -- experimentally rather than
from our armchairs -- that we need to specify a restricted set of
inferences that are sanctioned from a given knowledge base.  For
example, we might say that participating agents that commit to an
ontology will all be able to do prolog-style backward chaining, but
not recognize term equivalence.  Or we might constrain the set of
inferences in a domain-specific way.  Feel free to propose an
experiment to explore this issue: that is part of the SRKB reason for being.

--- Excerpt from Charles Petrie (Thu, 20 Aug 92 14:07:02 MET DST) ---

> Of course, now I want to say that the Ontolingua language needs to be
> augmented, say, by general KIF constructs that allow one to more
> completely define a theory.  For example, when I sent you my
> description of REDUX', it included a set of concepts and a theory, in
> standard PC, about the relationships between those concepts.  The
> theory says how, for example, when some object of some type is
> added to the database, the values of the attributes of other objects
> should be changed.  The user of a network server REDUX' need only
> specify the object types, but he/she must understand the semantics of
> those types.  And the semantics are defined by the theory.

But Ontolingua DOES include "KIF constructs" for that purpose: namely,
the use of the KIF sentences in the definitions.  As far as anyone has
shown to date, KIF is capable of representing declarative constraints,
and it offers a solid semantic foundation for interpretation.   What
it doesn't do is describe limits on the completeness of inferences or
symbol-level state of databases.  For example, we can easily say in a
declarative form that the "values of attributes of objects" must
be a function of "some object of some type".  That's what is stated in
the formal parts of definitions in an ontology.  What cannot be stated
in KIF, and thus Ontolingua, is that the "attributes of other objects
should be changed" when a new object is asserted to be of some type.
That speaks to the internal state of a particular symbol system, e.g.,
a data or knowledge base.  So to explore this issue we need an
example where we need to describe this kind of symbol-level behavior
and can't get away with a purely knowledge-level account.  
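To make the distinction concrete, here is a hypothetical Python sketch
(all the names are invented for illustration).  The first function merely
checks a condition that could be stated as a KIF sentence; the second is
a symbol-level trigger that mutates the attributes of *other* objects
when a new type assertion arrives, which is the sort of thing KIF, and
thus Ontolingua, does not express:

```python
# Knowledge-level: a constraint that could be written declaratively in KIF,
# e.g., "every document has exactly one title".
def title_constraint_holds(doc):
    return len(doc.get("titles", [])) == 1

# Symbol-level: a trigger that changes database state when an object is
# asserted to be of a type -- behavior about the symbol system itself,
# not about the world being described.
def on_assert_type(db, obj_id, new_type):
    db["types"][obj_id] = new_type
    if new_type == "retracted-document":
        for ref in db["references"]:
            if ref["doc"] == obj_id:
                ref["status"] = "stale"   # side effect on other objects
```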

The ball is in your court, Charles.  ;-)


--- Excerpt from John Sowa (Thu, 20 Aug 92 12:47:31 EDT) ---

> The Knowledge Sharing Effort must have three components:

>  1.  An abstract system of logic that is independent of any notation.
>     Every theory, of course, must be expressed in some notation, but by
>     developing it in two or more notations simultaneously, the syntactic
>     quirks that are peculiar to the notation can be exposed and
>     smoothed out.  In all my writings on knowledge representation, I
>     make sure that every semantic construct can be expressed in four
>     different ways: informal English, predicate calculus, conceptual
>     graphs in the graphic notation, and CGs in the linear notation.  That
>     exercise helps to weed out any notation-dependent quirks.

>  2.  One or more concrete languages that express the semantics in
>     exactly equivalent ways.  The AI field has been subdivided into too
>     many competing camps based on irrelevant notational differences.
>    The areas of agreement are very large when you translate everything
>    into the same notation, but everybody wants a different notation.
>     Therefore, we should work towards a common core, and let people
>     develop any notation they please as long as it maps to and from the
>     common core.

>  3.  An open-ended family of theories, organized in a hierarchy (most
>    likely a lattice).  Each theory would have its own "ontology",
>     but I would be happier to avoid that term altogether.  I would prefer
>     to say that a theory must have the following components:

>     a) An uninterpreted formal language L.

>     b) A set S of constants (call them predicates, relations, concept
>        types, or whatever).

>     c) A set A of axioms (or laws, or constraints, or whatever).

>     d) A model theory that provides a consistency check and a basis
>        for verifying the soundness of any proposed inference schemes.

>     In addition to points a, b, c, and d, I would like S to have a sort or
>     type structure that can be tested by simple polynomial-time checks,
>     instead of the NP-complete proofs that any realistic axiom scheme
>     invariably runs into.

> In this formulation, what people have been calling "ontology"
> roughly corresponds to the set S of constants and the type hierarchy
> defined over S together perhaps with some of the simpler axioms in A.
> In the conceptual graph project, we use the term CCAT (for conceptual
> catalog) as a somewhat broader term.  It also has the advantage of
> avoiding the term "ontology", which many AI people tend to use as
> more or less synonymous with "taxonomy".

John, I think you defined "theory" as an alias of "ontology", almost.
  a) is KIF.
  b) is the set of constants defined in an ontology.
  c) is the set of axioms included in each definition.
  d) is given in the KIF document, and is not specific to an ontology.

  The "sort or type structure" is the set of classes defined in an
  ontology.  The frame ontology effectively introduces types into
  the definitional language, calling them classes.   (It does not yet
  distinguish between unary relations intended to be used as 
  adjectival predicates and unary relations intended to be used as
  classes.   Maybe it should.  Let's keep that discussion to a
  different message thread.)
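For what it's worth, the polynomial-time check John asks for is easy to
sketch.  In this hypothetical Python fragment (the two-link hierarchy is
invented), subclass links form a tree, and subsumption is a linear-time
graph walk rather than a theorem-proving problem:

```python
# Hypothetical class hierarchy: each class maps to its direct superclass.
subclass_of = {"author": "person", "person": "agent"}

def is_subclass(c, d):
    """True iff c is d or a transitive subclass of d (linear-time walk)."""
    while c is not None:
        if c == d:
            return True
        c = subclass_of.get(c)
    return False
```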

The vision of a "family of theories" is exactly what the SRKB group
is now looking into: collecting, comparing, and analyzing families of
ontologies in those areas where we think it is useful and feasible to
share formally-represented knowledge.  

So I think we have a case of agreement on semantics, and a difference
on terminology.  Sound familiar?

Thanks for your comments, and please keep them coming.