Date: Sat, 15 May 93 10:30:17 EDT
From: sowa <sowa@turing.pacss.binghamton.edu>
Message-id: <9305151430.AA04523@turing.pacss.binghamton.edu>
To: cg@cs.umn.edu, interlingua@ISI.EDU, phayes@cs.uiuc.edu
Subject: Getting back to the notes of May 10th
Cc: eileen@turing.pacss.binghamton.edu, jerry@turing.pacss.binghamton.edu,
sowa@turing.pacss.binghamton.edu
Pat,
I deferred responding to your two notes of last Monday, since Len's
notes addressed similar issues without raising the question of who was
being muddled, confused, or eccentric. As a result of that discussion,
I believe that we reached a satisfactory conclusion that enables us
to agree on a notation for KIF that we both like, while allowing us to
maintain our own private metaphysical preferences.
I'll delete a considerable part of your two notes that overlaps the
discussions that Len and I continued this past week. But there were a few other
points that I wanted to comment on.
> But in any case, your alternative is even worse. In order to decide
> whether one of your set-theoretic models is a reasonable simulacrum of
> the world of cats and mats, I have to have a COMPLETE theory of cat/mat
> imagery, since every one of these is an entire world-simulation. That's
> Principia MatCatica. You may claim that this isn't necessary in order
> to do the model theory, but how do I know that one of your models isn't
> quite inappropriate as a way of interpreting cat-mat talk?
It is questions like these that lead me to believe that situation
semantics with its preference for limited regions of space-time is
on the right track. Our use of language and logic doesn't depend
on things that neither we nor anyone else has ever observed. And
I believe that a system of semantics that is based only on observable
regions of space-time is preferable to one that claims to speak about
the entire world as a completed whole. For propositional attitudes,
plans, and talk about the future, the semantics must also take into
account imaginable situations, but again, these are still finite
(both in our brains and in our computers).
> ... Now, can we PLEASE stop referring to Tarski??
Fine. If you stop trying to push me out of the mainstream with claims
that I'm eccentric or not conforming to established usage, I will
stop citing and quoting "Big Names" who I believe support my position.
> Most of what I talk and think about couldn't be accessed or recognised
> in these ways, anyway. "Julius Caesar" denotes some guy who is long
> since passed from my sight, and 'tomorrow's lunch' doesn't even exist
> yet, but I can refer to it. Commercial databases don't usually encode
> propositions that require pattern recognition to confirm.
There are very complex issues in psychology of language about how
children learn the denotations of words, in philosophy of science
about how theoretical terms and symbols are related to observation,
and in philosophy about the denotations of names of historical
figures and about references to as yet nonexistent futures. When
you blithely put such things into your "models", you have ignored
all of the most difficult questions about how words (or symbols in
a system of logic) can refer to things.
My point is that model theory solves only one problem: the relation
between formal symbols and mathematical constructions. Although this
is a very important part of semantics, it does not and cannot by itself
address the question of how those mathematical constructions are
related to the real world. Claiming that the real world things are
somehow included in those constructions is just begging all the most
difficult questions. By assuming that those models contain symbolic
surrogates instead of actual physical objects, I have openly admitted
that model theory doesn't solve those problems. Then I can begin to
address the separate question of how those surrogates map to the world.
(Or I can, like you, just ignore that question if it is not of interest
to me at the moment.)
As for commercial databases, you would be amazed at the kinds of garbage
they contain. And much of that garbage is caused by a failure to make
clear philosophical distinctions about the reference of various terms.
One of my favorite examples is from a system that came up with the
following response:
Q: What is the largest state in the U.S.?
A: Wyoming.
For numbers and character strings, this system used the greater-than
relation to answer questions about size. Therefore, it found the
last state in alphabetical order.
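To make the failure mode concrete, here is a minimal sketch in Python
(my own illustration; the variable names and the approximate areas are
mine, not the actual system's):

    # The error: answering a question about physical size by applying the
    # greater-than relation to character strings.
    states = {            # hypothetical table: state name -> area in square miles
        "Alaska": 663267,
        "Texas": 268596,
        "Wyoming": 97813,
    }

    # What the system did: take the maximum of the names as strings,
    # i.e., the last state in alphabetical order.
    print(max(states))                  # Wyoming

    # What the question meant: take the maximum of the numeric areas.
    print(max(states, key=states.get))  # Alaska

The two comparisons operate on different things -- character strings
versus the sizes those strings are supposed to refer to -- which is
precisely the kind of reference confusion I am talking about.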
> NO!! The denotation functions do not recognise (and are not computed,
> and do not access or DO anything else). They are simply a mathematical
> way of talking about correspondences between names and things.
Exactly!!! Model theory is a system of pure mathematics. The only
thing it can do is relate mathematical symbols to mathematical things.
To relate those mathematical things ("surrogates" in DB terminology)
to physical objects presupposes philosophy of science, psychology of
perception & language learning, or pattern recognition in AI.
> Use conventional model theory. There is no problem!
My "depictions" are conventional models. But as I pointed out many
times, philosophy of science is a separate subject that is not included
in and cannot be presupposed by anyone's claim to be using "conventional
model theory".
> I reject this distinction between 'formal, mathematical' and 'messy,
> real-world'. Mathematics can refer to the real world. When I use
> arithmetic in carpentry I am using mathematics to reason about
> the lengths of real pieces of wood I hold in my hands. Similarly
> for much of engineering. Being a formalist, you probably disagree:
> but I am not a formalist.
Pure mathematics doesn't refer to anything in the world (as in your
quote from Bertrand Russell). In order to refer to physical objects,
you must APPLY mathematics. And that application always involves
methodological assumptions. A skilled carpenter might be able to
make those applications without much conscious thought, but that is
only because he or she has spent long years of apprenticeship.
When I try to do carpentry, it's much harder for me to make accurate
measurements -- not because I don't know mathematics, but because
I don't have the experience of applying mathematics to that task.
> ... This makes no claim to magic; only plain, ordinary
> interpretations of the usual definitions to be found in any logic
> textbook :-)
As Ronald Reagan said, "There you go again." I promise to stop
mentioning Tarski if you promise to stop mentioning your mythical
"logic textbook". I have repeatedly asked you to find one of those
textbooks and quote exactly what it says about the "ordinary
interpretations of the usual definitions".
> Your new talk of 'depictions' is just an example of the kind of muddle
> that is going to emerge if we let you get away with imposing your constructivist
> religion on our formalisms.
This is the kind of verbal escalation that drives me to "pseudo-
scholarly" quotations and citations. As I have pointed out many times,
computer science in general and AI in particular deal not only with
constructive techniques, but with finite and even small finite (i.e., polynomial)
techniques. And I can cite any number of examples you please to show
that my religion is just as popular, established, or respectable as yours.
> These things are not properly defined
> anywhere, are supposed to be made of symbols but to be something like
> images; like databases but able to play the role of possible worlds,
> to be built of datastructures but in 1:1 correspondence with reality,
> and to act as a kind of unifying information blackboard in a robot.
To avoid accusing you of being confused about what I have said, I will
charitably assume that you are using a debating ploy. For a "proper
definition" of my depictions, look in your mythical "any textbook"
for a definition of "model". That is what I was originally calling
them, but I changed to the term "depiction" because you said that the
term "model" was being overused. Those depictions or models are
mathematical constructions (i.e. pure mathematics) that can be applied
in any of several different ways -- to databases, robotics, or patterns.
> To insist that these strange things MUST serve as (or even be involved
> in) our semantic theories is ridiculous. I simply reject these
> things: to hell with them, I don't need them.
First of all, they are not strange -- they are simply Tarski-style
models. Second, Len Schubert said that he found similar constructions
useful computationally. That is all I ask for -- if you grant them
to me, I don't care whether you call them useful computational
devices or religious idols.
> First, a TMT model (a possible world) is not a
> representation, it is an account of how a representation might be
> understood to mean something.
I will stop mentioning Tarski if you stop talking about this mythical
TMT that isn't defined anywhere. As I have tried to get across
many times in these notes, there are two very distinct issues: the
formal, mathematical operations defined by Tarski and the mathematical
logicians and the question about how that formalism is to be applied
to the real world. Tarski never addressed that second question. If
your TMT addresses it, please quote the mythical "any textbook" that
defines TMT.
> ... This distinction, between assertion and counterexample,
> has been understood and used in reasoners for 25 years.
Nobody is questioning that distinction. A model, like most mathematical
constructions, can be used for multiple purposes. Evaluating the
denotation of a formula was Tarski's original reason for inventing
model theory. You can hardly claim that using a DB for the purpose
of evaluating denotations is "not appropriate for a model".
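A minimal sketch of that use (in the same Python-style notation as above;
the cat/mat vocabulary is only an illustration): a ground atomic formula
is evaluated by looking up its argument tuple in the extension that the
structure lists for the predicate.

    def holds(model, pred, *args):
        # True iff the argument tuple appears in the extension that the
        # model (or DB) lists for the predicate.
        return args in model.get(pred, set())

    model = {"on": {("felix", "mat1")}}
    print(holds(model, "on", "felix", "mat1"))   # True  (supports the assertion)
    print(holds(model, "on", "mat1", "felix"))   # False (the structure serves as a counterexample)

Nothing in that evaluation cares whether the structure is stored as a
model or as a relational DB.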
> We can argue about philosophical matters for ever, but the central issue
> is that you refuse to let me talk about how representations represent,
> and I insist on keeping this clear.
On the contrary, I have been begging you to explain that. And please
be clear about what kind of representations you are discussing. Are
you talking about logic-like or language-like representations such as
KIF, or are you talking about model-like representations consisting of
nothing but sets of symbols and relations over them (but no quantifiers
or Boolean operators)? And please note that calling a relational DB
a model or a collection of ground-level assertions is purely a matter
of taste or convenience, since formally, they are isomorphic.
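To make that isomorphism concrete, here is a small sketch (again in
Python; the names are my own illustration, not anything defined in KIF
or CGs):

    # View 1: a model-like structure -- a domain of individuals plus relations
    # listed extensionally as sets of tuples (no quantifiers, no Boolean operators).
    domain = {"felix", "mat1"}
    relations = {
        "cat": {("felix",)},
        "mat": {("mat1",)},
        "on":  {("felix", "mat1")},
    }

    # View 2: the same content as a collection of ground-level assertions.
    assertions = {("cat", "felix"), ("mat", "mat1"), ("on", "felix", "mat1")}

    # Converting from one view to the other loses nothing; each determines the other.
    def model_to_assertions(rels):
        return {(name,) + args for name, tuples in rels.items() for args in tuples}

    assert model_to_assertions(relations) == assertions

Whether one calls the first view a model and the second a theory is,
as I said, a matter of taste.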
> ... and exhibit a zealot's fierceness in defending him [Tarski].
I am not trying to defend Tarski. I am trying to defend myself against
your charge that I am "eccentric" or outside the mainstream or contrary
to years of AI practice. All I was trying to do by quoting Tarski
was to prevent you from using him as a cover for your position.
> This entire discussion started because you asserted that you would
> not permit people to say that representations could model the
> world. They can.
Of course they can. That's exactly what representations (model-like
ones at least) do: they serve as formal surrogates for the real
world things that cannot occur directly in formulas or databases.
> ... Your mappings to
> databases do not respect their functional meanings (see Reiter on this),
I did discuss this point with Ray at a conference on AI and DB a couple
of years ago. He prefers to think of a DB as a theory, but I prefer to
think of it as a model. Since a theory consisting of nothing but
ground-level assertions is isomorphic to a model, there is no formal
(or functional) difference between us.
> and you cite robotics, virtual reality, etc etc without any authority.
What are you trying to do? Push me out of the mainstream again?
Do you want to provoke another barrage of quotes and citations?
As it happens, I mentioned virtual reality for a very good reason:
We are setting up a joint project at SUNY Binghamton with the VR Lab
in the computer science department. Jerry Aronson recently wrote a
chapter on virtual reality and counterfactuals in a forthcoming book:
_Realism, Similarity, and Type Hierarchies_ by Jerrold Aronson,
Rom Harre, & Eileen Way, Duckworth Publishers, in press; publication
scheduled for September 1993.
Jerry, Eileen, and I have been talking about setting up a joint
project. Jerry has arranged for the people in the VR Lab to develop
the software we need for the simulations and the graphic output and
the sensors for input. Eileen and I are planning to use conceptual
graphs as the KR language that links to English on one side and to what
Jerry and I have been calling depictions. Then the depictions serve
as the intermediate form between the symbolic system on one side and
the analog simulations and graphics on the other.
The goal of the system is to integrate NL input and output with a VR
system that has rich graphics and analog sensors. There are many
practical applications ranging from manufacturing process control to
automobile driving and aircraft piloting. Theoretically, it gives us
a very rich platform for exploring issues in hypothetical reasoning,
counterfactuals, analogies, nonmonotonic reasoning, and approximations
resulting from the mismatch between discrete symbols and a continuous
reality. I'm not claiming that we have solved or will solve any or
all of these problems. But Jerry, Eileen, and I have found the
intermediate level of models or depictions to be a useful device for
clarifying our discussions and guiding our implementation plans.
As far as the design of KIF or CGs is concerned, there is nothing
in either of those languages that forces anyone to adopt such an
intermediate level in either their metaphysics or their computational
practice. Len said that he found such a level to be computationally
useful, but dispensable "in principle". That's OK with me.
John