Re: A simplistic definition of "ontology" (Pat Hayes)
Message-id: <>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Date: Thu, 5 Oct 1995 13:15:14 -0600
To: (Eduard Hovy)
From: (Pat Hayes)
Subject: Re: A simplistic definition of "ontology"
Precedence: bulk
At  2:59 PM 10/4/95 -0500, Eduard Hovy wrote:
>I think you are not quite reading all of what I wrote: 
>At 4:59 PM 10/4/95, Pat Hayes wrote:
>>At  8:41 AM 10/4/95 -0500, Eduard Hovy wrote:
>>>  An ontology is a collection of symbols that represent (i.e., name) some 
>>>  set of phenomena in the "external world" within a computer (or possibly 
>>>  within other, non-implemented, systems, although who knows what that 
>>>  would be interesting for).  Typically, the phenomena include objects 
>>>  and processes and states, and typically, these entities are related 
>>>  among themselves; usually, the ontology names (some of) these relations.  
>>I think the issue can be focussed by asking whether it is enough to simply
>>*name* the relations, or should we ask an ontology to somehow *specify*
>What does "specify a concept" mean, if not just to list the relationships 
>of the concept with other concepts? 

It means giving a structured *theory* of those relationships, not just
listing relation names and assuming that the reader knows what they mean. I
like my theories written as axioms, but everyone has their own taste: in
any case, it's a lot more than just a list. You can't infer anything from a
list of names.

>Trying to define "chair", one talks 
>about "functionality" and uses that relationship to link "chair" to "sit" 
>or "support", etc.  That's what I meant by "these entities are related 
>among themselves".  
>>Coming as I do from the 'knowledge representation' (rather
>>than the 'glossary') side of the divide, I always wonder just how far away
>>the other side can be taken to be. 
>I'm not sure I believe in this divide you mention (and tried to define at 
>the IJCAI workshop).  It seems to me everyone is trying to do essentially 
>the same thing -- come up with a set of representation elements and their 
>interrelations (under which I include property constraints, etc., that 
>constrain inference, and attached inferences too) -- but those you call 
>'glossers' try to do it on a larger but less rich scale (since that's what 
>supports their systems) while the 'KRers' do the opposite, for purposes 
>of detailed reasoning in small applications and domains.  

No. This isn't the distinction. It's hard to imagine a grander-scale project
than CYC, but it's firmly on the knowledge-representation side precisely
because the CYC database doesn't just sit there waiting to be read by
someone: it supports conclusions. Machines can infer things from it because
it's written in a formalism which has an inference process defined on it. My
worry about the glossaries is that they lack such a thing, and indeed seem
not intended to ever have such a thing: they are organised repositories of
information for human readers, and that's ALL. 
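The distinction can be sketched in a few lines of code (my example, not
anything from CYC itself; the relation names and the toy forward-chaining
rule are purely illustrative). A glossary is a bare list of symbols, from
which nothing follows; a theory pairs facts with rules and a defined
inference process, so a machine can derive conclusions that were never
explicitly listed:

```python
# Illustrative sketch: a glossary-style list of names vs. a tiny
# axiomatized theory with a defined inference process.

# Glossary: just symbols. No inference is defined over this.
glossary = ["chair", "sit", "support", "functionality"]

# Theory: facts plus one Horn-style rule, with naive forward chaining.
facts = {("isa", "chair", "furniture"),
         ("supports", "chair", "sitting")}
rules = [
    # if X supports sitting, then X is sit-on-able
    (("supports", "?x", "sitting"), ("sit_on_able", "?x")),
]

def forward_chain(facts, rules):
    """Apply each rule to every matching fact until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (rel, _var, obj), (new_rel, _) in rules:
            for f in list(derived):
                if f[0] == rel and f[2] == obj:
                    new_fact = (new_rel, f[1])
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

conclusions = forward_chain(facts, rules)
# ("sit_on_able", "chair") is inferred, not listed anywhere above.
print(("sit_on_able", "chair") in conclusions)
```

The glossary and the theory mention the same vocabulary; the difference is
that only the second one licenses any conclusions at all.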

So what? The point is that these two games may have different rules. The
criteria for a good glossary are probably quite different from those for a
good job of knowledge representation, so to merge the two together under
the single term 'ontology' is likely to be confusing and lead to
misunderstanding. (If we also include, say, a structured lexicon for NLP
usage, the confusion is likely to get even worse.) I don't want to say
that there are no relationships between all these things, but just lumping
them together under a single term makes it harder, if anything, to see what
these relationships might be. I know this may not be in the West Coast
spirit of Unification and Wholeness, but I think that in science it is
better to actually seek out and catalog differences rather than obscure
them.

>>(One obvious answer might be that such a glossary is a useful step along
>>the way to creating a true ontology, a kind of ontology-sketch, but I don't
>>think that is how the compilers of such glossaries regard them.)
>Actually I think it is, certainly for the 'glossers' I talk to.  For example 
>no NLP person would claim that a mere taxonomy of symbols was enough;

Enough for WHAT? What is the long-range goal of such glossarising? Maybe for
NLP what you say is true, but take another look at the stuff referred to in
the conference announcement: that has nothing to do with NLP (I think); it's
more like an organised handbook/lexicon for business managers.


Pat Hayes
Beckman Institute                                      (217)244 1616 office
405 North Mathews Avenue              (415)855 9043 or (217)328 3947 home
Urbana, Il.  61801                                     (217)244 8371 fax