Ontolingua Expressiveness

Date: Tue, 25 Aug 92 14:22:57 MET DST
From: Charles Petrie <petrie@informatik.uni-kl.de>
To: gruber@sumex-aim.stanford.edu
Cc: srkb@isi.edu
Subject: Ontolingua Expressiveness
Message-id: <9208251422.aa01251@inti.informatik.uni-kl.de>

> I also hold the philosophical opinion that Meaning arises from use in
> context.  However, it does not follow that a set of formal definitions
> augmented by natural language definitions is "meaningless" for the
> purpose of sharing knowledge and facilitating knowledge-level
> communication among agents. 

As you go on to point out by quoting my two uses of an ontology, I
also do not believe that such a set is meaningless.  Most of us will
agree that semistructured communications are useful.

> The controversial part of Charles's point, if I understand it, is that we
> need something else besides the definitions: we need to know which
> _inferences_ are sanctioned by an ontology.

Yes. But I admit that, after looking at Ontolingua a bit closer,
perhaps nothing more than Ontolingua definitions is needed. The BNF
description says that we can define stand-alone relations and axioms,
as well as classes. Axioms include sentences, which can be "any old
KIF sentence".  Perhaps these can be used to sanction inferences.
If so, never mind.

> ...I would therefore like to have two separate pots for storing
> the definitions and the laws.

I may be happy with the current state of affairs.  In Ontolingua,
there are several kinds of definitions, only one of which is the
class/object definition. Two other pots are axioms and relations.
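For instance, a stand-alone relation with an "iff-def" sentence might look
roughly like the following.  This is my own illustrative sketch -- the names
are invented, and the exact keyword syntax is only my reading of the BNF:

```
;; Hypothetical relation definition; relation and function names,
;; and the :iff-def keyword placement, are illustrative only.
(define-relation heavier-than (?x ?y)
  "Holds when the mass of ?x exceeds the mass of ?y."
  :iff-def (> (mass ?x) (mass ?y)))
```

If something like this is legal, then the relation pot already carries
sanctioned inferences, independently of any class definition.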

> For example, F=ma relates force F, to mass m, and acceleration a.  
> Since it relates all three of them, it must be in a theory of
> mechanics, but it is inappropriate to include it in the "definition"
> of any one of the three -- F, m, or a.

This is a good example and concisely captures the problem I thought I
was having casting my own theory (REDUX) in Ontolingua.  The good news
is that one can state an Ontolingua axiom relating F, m, and a apart
from any definition of the classes of force, mass, or acceleration.
But I'm still confused about whether I can say F=ma in Ontolingua.
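For concreteness, such a stand-alone axiom might be written as a plain KIF
sentence along these lines (the class and function names are mine, not
drawn from any shared ontology):

```
;; Hypothetical KIF axiom: for any body, its force equals its mass
;; times its acceleration.  All names here are illustrative.
(forall (?b)
  (=> (body ?b)
      (= (force ?b) (* (mass ?b) (acceleration ?b)))))
```

Nothing in this sentence belongs to the "definition" of force, mass, or
acceleration individually; it lives in the theory of mechanics, which is
just where Tom says it should.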

> The only assumption is _soundness_: that the inferences produced by
> agents operating under these ontologies will produce answers that are
> logically consistent with the definitions.
> Part of the declarativist hypothesis is that one can write down the
> meaning of terms WITHOUT saying which system of inference is going to
> be applied to them.

So far, so good.  I also hope one does not need to know which system of
inference will execute the axioms. (It may turn out otherwise.)

> What cannot be stated in KIF, and thus Ontolinqua, is that the
> "attributes of other objects should be changed" when a new object is
> asserted to be of some type.  That speaks to the internal state of a
> particular symbol system, e.g., a data or knowledge base. 

This part I find confusing, as it suggests that I couldn't say F=ma in
KIF as an axiom.  And that KIF is insufficient for REDUX, where I
should like to say that a decision becomes invalid if a contingency
object is added.  Is it really impossible to specify valid inferences
in KIF? Isn't inheritance such an inference? Don't "iff-def" sentences
state such inferences?  Other than this passage, I was perfectly
willing to admit that a closer reading of Ontolingua revealed that it
already had that for which I was asking.
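To make the REDUX case concrete: the declarative reading of that rule seems
statable in KIF along these lines (every predicate name here is my own
invention), even if the procedural update to the knowledge base is not:

```
;; Hypothetical REDUX-style axiom: a decision is invalid whenever
;; some contingency object contradicts it.  Predicates are illustrative.
(forall (?d ?c)
  (=> (and (decision ?d) (contingency ?c) (contradicts ?c ?d))
      (invalid ?d)))
```

What KIF apparently cannot say is "when a contingency is asserted, go
change the validity attribute of the decision object" -- the declarative
consequence versus the change of internal state.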

The issue I am most interested in resolving is this one: is
Ontolingua, with KIF axioms, strong enough to represent a theory that
defines the inferential use of the class definitions?  We should be
able to resolve this with REDUX. I will take this off-line with you
until we have a clear resolution. But it may be that I have
misunderstood you and you can clear up my confusion very easily.

PS There is another question that depends upon experiment and judgement:
are incomplete formal definitions, augmented with English, sufficient
to specify the use and reuse of ontologies? How much do we put in KIF,
and how much in English? If we depend greatly upon the English, and
our shared understanding, as in the bibliography example, do we get
into trouble because different implementations use the ontology
differently?  On the other hand, how much formal specificity is
useful, or practical? We'll just have to see in a few years with some
real examples.