RE: Building a Knowledge-Level 3D Shape Ontology
Date: Wed, 12 Jul 1995 12:18:40 -0700 (PDT)
From: James Rice <rice@HPP.Stanford.EDU>
Subject: RE: Building a Knowledge-Level 3D Shape Ontology
To: "Joaquin.A.Delgado" <firstname.lastname@example.org>
cc: email@example.com, firstname.lastname@example.org, email@example.com,
firstname.lastname@example.org, email@example.com, firstname.lastname@example.org,
ontolingua@HPP.Stanford.EDU, email@example.com, firstname.lastname@example.org,
In-reply-to: Joaquin.A.Delgado's message of Tue, 11 Jul 95 18:01:26 JST: <9507110901.AA04028@cs25.atr-sw.atr.co.jp>
Content-Type: TEXT/PLAIN; CHARSET=US-ASCII
(Hope that the zillion people in the CC list for this will forgive
this message. I can't easily tell which ones might be interested).
>> I'm involved in a project for creating an "intelligent" virtual
>>reality system that would be able to use a knowledge-level 3D shape
>>ontology in order to give "meaning" to virtual objects based on simple
>>3D shapes used as primitives. Once the system "knows" what a complex
>>3D shape is, knowledge processing can occur and interesting results
>>can be derived.
>> For a higher level modelling of the world, knowledge sharing is
>>desired, either for the reuse of what other ontologies have to offer, or
>>for network, distributed "agent-like" knowledge processing. For that
>>purpose, and at the design stage of the ontology, I would like to
>>discuss several issues with you:
>>1) For some technical reasons we are using NeXPert Object (from
>>Neuron Data) as the main Knowledge Processing Engine software, and we have
>>ontologies already set up in its object-oriented, rule-based language.
>>As we wish to reuse existing knowledge, such as Cyc and other ontologies,
>>we are now at the crossroads of deciding which KR language to use in
>>order to achieve our goals. At first glance we had thought of Common Lisp
>>due to its expressiveness and because it's widely used, but because we want
>>to take advantage of what has been done, it seems better to use KIF and
>>the online Stanford-KSL ONTOLINGUA system for editing, parsing and checking
>>the consistency of our ontologies. KIF can also be integrated easily with
>>KQML, as in Magenta and the ABSI tools, so that is also a good reason for
>>using it for the next step, agent processing. The main problem resides in
>>creating a KIF-to-NeXPert translator (like the ones that exist for LOOM
>>and Cyc). Does anyone know if this has been done yet? Or where can I find
>>an API for constructing such a translator? Any suggestions?
Common Lisp isn't a representation language by any leap of the
imagination, so you are wise not to be seriously considering
using it directly.
I've never looked at NeXPert, so I can't tell how difficult it would
be to translate into it. In general, it is easy to translate from
Ontolingua into expressive and regular representations. The more
special-purpose hacks there are either syntactically or semantically,
the harder it is to write a translator. This all depends on what
you mean by having a (portable) ontology. Writing an ontology in (say)
Ontolingua and then expecting to be able to translate it into (say)
NeXPert and be able to take advantage of all of the magic features
in the underlying representation system simply isn't going to fly.
Any ontology that is written with the expectation of "running" it and
using such special-purpose features is, by definition, not portable.
The other problem with defining a translation is one of coverage.
It may well be that you, as a user, will be content simply with being
able to import the class hierarchy and the slots into NeXPert from
some portable ontology (for all I know, maybe this is all that NeXPert
can do, anyway). This is generally going to be pretty easy.
If, however, you want to get useful translations out of all of the
random axioms that the ontology author might have put into the ontology
then you're in for a much harder time. If, for example, NeXPert
doesn't support n-ary relations, what do you do with all of the
definitions (and uses) of n-ary relations in the ontology? As long
as you don't mind throwing them on the floor, the translation process
is made easier. This is obviously a decision that has to be made
by the user by considering his own intended use for the ontology.
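To make the coverage trade-off concrete, here is a minimal sketch of such a lossy import, in Python. The tuple format and the names (Shape, has-face, between) are illustrative assumptions, not Ontolingua's or NeXPert's actual formats; the point is simply that relations beyond the target's expressiveness get reported and dropped rather than translated.

```python
# Hypothetical sketch: a lossy import of an ontology into a frame system
# that supports only classes (unary relations) and slots (binary relations).
# Definitions are (name, arity, superclasses) tuples -- an illustrative
# mini-format, not a real Ontolingua or NeXPert interface.

def import_ontology(definitions, max_arity=2):
    """Keep classes and slots; report everything the target can't express."""
    frames, dropped = {}, []
    for name, arity, supers in definitions:
        if arity <= max_arity:
            frames[name] = {"arity": arity, "supers": list(supers)}
        else:
            dropped.append(name)   # n-ary relation: thrown on the floor
    return frames, dropped

frames, dropped = import_ontology([
    ("Shape",      1, []),            # a class
    ("Polyhedron", 1, ["Shape"]),     # subclass of Shape
    ("has-face",   2, []),            # a slot (binary relation)
    ("between",    3, []),            # ternary relation: not representable
])
# "between" ends up in dropped; the user decides whether that loss is acceptable.
```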
This means that it is easier (modulo programming difficulties)
for a random user to come up with a translation into a particular target
than it is for a developer of a system such as Ontolingua to do so,
since an Ontolingua developer will always want to get the maximum
possible coverage from the translation in order to satisfy the largest
set of potential users. This is very hard to do in general.
Having said all this, if there was a significant demand for an
Ontolingua -> NeXPert translation we would probably get around to
writing such a thing eventually. If we had funding for it we'd
do it faster. However, as far as I'm aware, nobody has ever
mentioned NeXPert to us as a potential target, so I wouldn't
hold your breath hoping for a huge outpouring of popular demand.
The Stanford ontology server also supports an API that allows
you to use the GFP frame protocol remotely over the net. This
means that as long as your target representation system looks
reasonably frame-like, and you're most interested in getting
translations for the classes and slots in any given ontology,
it should be fairly easy to write the translator at your end.
We have client side stub code that will let you do this.
Another way to do this is to translate from Ontolingua into KIF
(a pretty trivial translation that we already support) and then
translate from the KIF version using a public-domain KIF parser.
I'm pretty sure that such a beast exists.
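Since KIF files are just Lisp-style s-expressions, the core of such a parser is small. Here is a minimal sketch in Python; this is not the public-domain parser referred to above, just an illustration, and it ignores strings, comments, and the rest of KIF's lexical details.

```python
import re

# Minimal s-expression reader: enough to load simple KIF forms such as
# (defrelation has-face (?shape ?face)). Ignores strings and comments --
# an illustrative sketch, not a full KIF parser.

TOKEN = re.compile(r"\(|\)|[^\s()]+")

def parse_kif(text):
    tokens = TOKEN.findall(text)
    pos = 0
    def read():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        if tok == "(":
            lst = []
            while tokens[pos] != ")":
                lst.append(read())
            pos += 1               # consume the closing ")"
            return lst
        return tok                 # atom: symbol, ?variable, or number
    forms = []
    while pos < len(tokens):
        forms.append(read())
    return forms

forms = parse_kif("(defrelation has-face (?shape ?face))")
# forms[0] -> ['defrelation', 'has-face', ['?shape', '?face']]
```

From the resulting nested lists, a translator can walk the forms it understands and skip the rest.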
Similarly, if NeXPert can read different input formats, it might
be easier to translate from Ontolingua into (say) Prolog syntax,
and then read the ontology into NeXPert and munge on it there.
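The Prolog-syntax route above might look like the following sketch: flatten a class/slot listing into Prolog-style facts for a reader on the NeXPert side to ingest. The predicate names (subclass_of/2, slot/3) and the input dictionaries are assumptions for illustration, not any real NeXPert input format.

```python
# Hypothetical sketch: emit an ontology's class hierarchy and slots as
# Prolog-syntax facts. Predicate names subclass_of/2 and slot/3 are
# illustrative; whatever reads them in would define the real format.

def to_prolog(classes, slots):
    """classes: {name: [superclasses]}; slots: {name: (domain, range)}."""
    lines = []
    for cls, supers in classes.items():
        for sup in supers:
            lines.append(f"subclass_of({cls.lower()}, {sup.lower()}).")
    for slot, (domain, rng) in slots.items():
        lines.append(f"slot({slot.lower()}, {domain.lower()}, {rng.lower()}).")
    return "\n".join(lines)

print(to_prolog(
    {"Polyhedron": ["Shape"], "Cube": ["Polyhedron"]},
    {"has_face": ("Polyhedron", "Polygon")},
))
```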
>>2) Defining an ontology is definitely not an easy task, therefore I would
>>like to contact people who have done such work, especially at basic
>>levels such as "things", "shapes", "objects", "components", etc., in order
>>to see how these concepts can be integrated with the underlying mathematical
>>equations used to describe the graphical models of the objects and
>>their relations in the virtual worlds.
Can't help you on this one.
>> Any suggestions or comments are most welcome!
>>Joaquin Delgado e-mails: email@example.com