models and depictions

Date: Mon, 17 May 93 19:17:35 -0400
From: Dan Schwartz <schwartz@iota.cs.fsu.edu>
Message-id: <9305172317.AA21882@iota.cs.fsu.edu>
To: cg@cs.umn.edu, interlingua@ISI.EDU
Subject: models and depictions
John,

I'm glad to hear we have a meeting of minds on my diagram:

>>          Formal Logical System <---> Depictions/Interpretations
>>                    |                             |
>>                    |                             |
>>                    |                             |
>>                   \/                            \/
>>            Human Perceiver     <--->      Things Perceived
>
>I'm quite happy with this diagram of yours.  The top line is a
>purely formal system with a formal language interpreted in terms of
>a mathematical construction -- both of which can be implemented in
>an AI system.  The bottom line brings in all the complexity of the
>human cognitive system and the real world.  Tarski limited himself
>exclusively to the top line.  If we want to extend AI systems to
>deal with the relationship to the bottom line, we must begin with
>a recognition that the formal logical system is not identical to
>the human cognitive system, and the formal depictions are at best
>similar to, but not identical to the things perceived.

and I basically agree with your interpretation of it.

My knowledge of that paper by Tarski is second-hand, through my
professors during grad school.  Thank you for citing the complete
reference.  I'm teaching summer session and things are fairly hectic,
but at my earliest opportunity I'll dig it up and read it.

What I was told about that paper, however, was that the primary issue
there was distinguishing between different notions of ``truth''.  For
example, when a physicist writes ``F=ma'' it is understood that the
truth of this proposition is not something one can know absolutely, but
only insofar as it is corroborated by experiment.  And this is a totally
different sort of truth from that employed in formal model theory, which
would require one to examine every single-body system (past, present,
and future) and determine in each instance whether the given equation
holds true---a task which is obviously impossible.
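
To spell that out (my own paraphrase, not Tarski's notation), the
model-theoretic reading amounts to something like

    \forall s \in S : F(s) = m(s) \cdot a(s)

where S is the class of all single-body systems, past, present, and
future.  The physicist, by contrast, can only ever check this condition
on the finitely many members of S that have actually been observed.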

It does not seem reasonable, however, to deny that the symbols in the
equation refer directly to properties of real-world objects.  I suppose
that if one wants to be very rigorous, one may note that those symbols
are actually taken as variables ranging over real numbers, and those
real numbers are taken respectively as ``measurements'' of force,
mass, and acceleration.  But it is still implicit in the equation as
stated that the real number values of ``F'' are measurements of force,
the values of ``m'' are measurements of mass, etc., so the symbols
are still presumed to connect directly to real-world things.  Here I
would draw the picture:


                 F=ma
                  | \
                  |  \
                  |   \
                  |    \
                  |    real number measurements
                  |    /
                 \/  \/
               physical system
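
To make that two-step reference concrete, here is a toy sketch (the
names and figures are purely illustrative, not anyone's actual
formalism):

    # The symbols F, m, a are variables ranging over real numbers,
    # but each number is understood as a *measurement* of a physical
    # quantity (force, mass, acceleration) of one observed system.
    system = {"F": 6.0, "m": 3.0, "a": 2.0}   # invented figures

    def corroborates(s, tol=1e-9):
        """Do the measured values of this one system satisfy F = m*a?"""
        return abs(s["F"] - s["m"] * s["a"]) <= tol

Note that the test applies to one measured system at a time; nothing in
it ranges over all systems past, present, and future.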

My main point in the earlier diagram is that when one moves to the
problem of modeling cognitive processes, one in fact moves to a higher
level of abstraction, where it becomes necessary to formalize the
semantics as well as the original language.  This is so that one can
then simulate the process of referring to the semantics for purposes of
determining whether certain formulas are true (in the Tarski sense).  I
believe that this is essentially what is going on in database query
systems, for example: databases are essentially collections of
depictions.  Such depictions are not really needed in traditional
science, however, since for purposes of scientific modeling there is no
need to work at this level of abstraction.  I tend to think that this
is the source of the disagreement between you and Pat: you're talking
about two different sorts of modeling situations.
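
As a toy illustration of that point (the relation and names are
invented, and this is no particular query language):

    # A "depiction": a finite structure given explicitly as tuples.
    works_in = {("alice", "sales"), ("bob", "shipping")}

    def atom(person, dept):
        """Tarskian satisfaction of the atomic formula
        works_in(person, dept) in this finite model."""
        return (person, dept) in works_in

    def some_employee_in(dept):
        """Truth of (exists x) works_in(x, dept), decided by
        ranging over the finite domain of the depiction."""
        return any(d == dept for (_p, d) in works_in)

Here the truth of a formula is decidable precisely because the
depiction, unlike the real world, is finite and explicitly given.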

Incidentally, as far as I can tell, this capacity of the human mind to
move to successively higher levels of abstraction (evidently through
some process of self-reflection) has yet to be simulated in any
knowledge representation language.  Do you think this can be done via
conceptual graphs?

--Dan