Final odds & ends
Message-id: <>
Date: Fri, 14 Feb 92 18:45:16 EST
Subject: Final odds & ends

> OK, last message from me on this stuff.

I think we have reached a state of mutual exhaustion.  The major issues
are fairly clear, but there are still a few misunderstandings that
could be clarified.

> I don't think that every poetic variation in how an idea is expressed does in
> fact constitute a nuance of meaning. If we are to look for every nuance of
> meaning, I would be more inclined to look at the kinds of visual form that can
> be perceived and about which conclusions can be drawn, for example. (I will
> place a bet that you think that we have different internal representations for
> things seen and for things heard?)

I would bet that the early stages of processing visual and linguistic
information are very different, but that it is possible to relate them
in a later stage.  I believe that the conceptual stage is an intermediate
form between the linguistic stage and some sort of analog or image-like
processes that are used in visual perception and imagery.

>>> We were disagreeing about whether the
>>> knowledge representation language should, as a matter of doctrine, have a
>>> representational distinction corresponding to every surface distinction of
>>> English. You say, essential: I say, unnecessary and potentially misleading.
>> No, I didn't say that.
> (You didn't? Sure seemed like that. If you really
> didn't then we are probably arguing past each other.)

Although I believe that Bolinger was right when he said "Every
difference makes a difference," I would agree that for any particular
purpose, some possible distinctions may not be very significant.  But
whenever a distinction is significant, it should be possible to express
it in KIF with a suitable predicate; it is not necessary to reflect it
in the syntax of KIF.

Perhaps an example would clarify the misunderstanding:  Since modern
English has lost the form "thou", we cannot conveniently represent the
distinction du/Sie in German or tu/vous in French.  For many purposes,
we could translate both "Wo bist du?" and "Wo sind Sie?" as "Where
are you?" and just ignore the distinction.  But if you had a passage
where a German speaker suddenly switched from "Sie" to "du", that would
probably be a significant distinction that a translator would have to
represent by some kind of paraphrase or comment.

In KIF, the distinction might be represented by adding an extra predicate
like polite(x) or familiar(x).  That would change the ontology, but not
the syntax of KIF (or whatever KR language you may be using).
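
To make the point concrete, here is a small Python sketch (all names are
illustrative, not actual KIF) of how the du/Sie distinction could be
recorded by adding a predicate to the ontology while every fact keeps
the same (predicate, arguments) syntax:

```python
# Hypothetical sketch: a tiny KIF-like fact store.  Adding the du/Sie
# distinction adds a new predicate (familiar/polite), not new syntax.

facts = set()

def assert_fact(pred, *args):
    """Every fact, old or new, uses the same (predicate, args) form."""
    facts.add((pred, args))

def is_asserted(pred, *args):
    return (pred, args) in facts

# "Wo bist du?" and "Wo sind Sie?" both ask for a location, but the
# register can be captured with one extra predicate:
assert_fact("asks-location-of", "speaker", "hearer")
assert_fact("familiar", "hearer")     # du  -> familiar address
# assert_fact("polite", "hearer")     # Sie -> polite address

assert is_asserted("familiar", "hearer")
assert not is_asserted("polite", "hearer")
```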

>>  I'll certainly give you an exclusive-OR if you give me a lambda.
> Not a fair exchange! If I give you lambdas will you find a way to stop them
> breeding?

I admit that a lambda is a much richer extension than an exclusive-OR,
but I do believe that we need a way of adding definitions dynamically,
and anything else we do would probably be equivalent to lambda.
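
What "adding definitions dynamically" buys can be sketched in Python
(the names and the toy domain are assumptions; cf. KIF's defrelation):
a defined relation is just a lambda added to the knowledge base at run
time, so no new primitive syntax is needed for each new definition.

```python
# Illustrative sketch: dynamic definitions via lambda.

definitions = {}
DOMAIN = ("ann", "bob", "cal")       # toy universe of individuals

def define(name, fn):
    """Dynamically add a defined relation; the lambda carries the definition."""
    definitions[name] = fn

def query(name, *args):
    return definitions[name](*args)

define("parent", lambda x, y: (x, y) in {("ann", "bob"), ("bob", "cal")})

# A new relation defined in terms of an existing one, at run time:
define("grandparent",
       lambda x, z: any(query("parent", x, y) and query("parent", y, z)
                        for y in DOMAIN))

assert query("grandparent", "ann", "cal")
assert not query("grandparent", "ann", "bob")
```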

>> Many people are so put off by the syntax of predicate calculus that
>> they invent those "odd syntaxes" that you dislike.
> No, you miss the point. Let syntaxes blossom if it makes the users happy. What
> I dislike are claims, explicit or implicit, that new syntaxes for logic are
> something more than just improved user interfaces. I don't love FOL, believe
> me, it's thoroughly inadequate. But I haven't seen many improvements, and I've
> seen lots and lots of syntactic variations being touted as improvements or
> alternatives. We have some really hard problems in KR, and we need real
> advances, not syntactic fancy dress for familiar ideas.  Let's not confuse
> issues of user acceptability with those of representational language design.

I certainly agree that human factors and theoretical structures are
both important, but distinct issues.  But I have been insisting all
along that there is more to theoretical structure than truth-functional
equivalence.  Every theorem in mathematics is a tautology that is
equivalent to T.  But when you prove a difficult theorem, I believe
that you really have discovered something "meaningful".

Hans Kamp (a former student of Tarski's, by the way) invented his
Discourse Representation Structures as an alternative notation for
logic, because he was not able to state his rules for resolving
anaphoric references in terms of the structures available in the
standard predicate calculus.  It turns out that Peirce's contexts
in existential graphs are isomorphic to Kamp's contexts, and
that all of Kamp's rules can be carried over to both Peirce's
graphs and conceptual graphs.  But Gary Hendrix's partitioned nets
have a different structure for contexts, and Kamp's rules cannot
be stated in Hendrix's form.  That is an example of a theoretically
significant distinction between systems that are equally
expressive in a truth-functional sense.
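
The structural point about contexts can be sketched in Python.  This is
not Kamp's or Peirce's actual notation, only an assumed toy model: a
discourse referent introduced in an outer context is accessible to
anaphors in nested contexts, but not the other way around.

```python
# Illustrative sketch of Kamp-style accessibility of discourse referents.

class Context:
    def __init__(self, parent=None):
        self.parent = parent
        self.referents = []          # discourse referents introduced here

    def introduce(self, ref):
        self.referents.append(ref)

    def accessible(self):
        """Referents visible here: this context plus all enclosing ones."""
        ctx, out = self, []
        while ctx is not None:
            out.extend(ctx.referents)
            ctx = ctx.parent
        return out

# "A farmer owns a donkey.  If he beats it, ..." -- the pronouns in the
# nested (conditional) context can reach referents in the outer one:
outer = Context()
outer.introduce("farmer-x")
outer.introduce("donkey-y")
inner = Context(parent=outer)        # antecedent of the conditional

assert "farmer-x" in inner.accessible()          # 'he' resolves
assert "donkey-y" in inner.accessible()          # 'it' resolves
assert "farmer-x" not in Context().accessible()  # not from an unrelated context
```

Without the parent chain -- a flat partition of nodes -- the search that
resolves the anaphor has nothing to climb, which is the rule that cannot
be stated in Hendrix's form.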

> If you can get the 'final stage' of linguistic
> analysis expressed in a tame sorted FOL, why do you need all that other stuff?
> That's what I mean by the conceptual content: what the sentence means AFTER one
> has understood it, after all the linguistic analysis is over with.

After the analysis has been done, there may not be any sentence left.
Suppose, for example, that your friend Mary was applying for a job at a
local high school and you learned that they had just hired a new teacher.
So you might think that it's too bad she hadn't applied sooner.  But then
someone else tells you "Mary is the teacher."  In my terms, the first
stage of representing that sentence would be the following graph:

   [PERSON: Mary]- - -[TEACHER: #],

where the dotted line represents a coreference link and the # marker
represents an indexical reference to something of type TEACHER.  After
the analysis has been done, all of the sentence will have disappeared
except a coreference link between two previously separate memory
structures.  I would say that the original conceptual graph represents
the "meaning" of the sentence, and the final coreference link represents
the effect of interpreting that sentence.  At the end, you can do a
memory dump in FOL, but you can't identify any part of it as "the meaning
of the sentence."
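
The interpretation step can be sketched in Python.  The union-find
device and all names here are illustrative conveniences, not part of CG
theory; the point is only that what survives interpretation is a link
joining two previously separate memory structures.

```python
# Sketch: after "Mary is the teacher" is understood, nothing of the
# sentence remains except a coreference link between two memory nodes.

link = {}          # coreference links between memory nodes

def find(x):
    """Follow coreference links to a canonical representative."""
    while x in link:
        x = link[x]
    return x

def corefer(a, b):
    """Record that two memory structures denote the same individual."""
    ra, rb = find(a), find(b)
    if ra != rb:
        link[ra] = rb

memory = {"Mary": {"type": "PERSON"},          # long-standing structure
          "teacher#1": {"type": "TEACHER"}}    # referent of the indexical #

corefer("Mary", "teacher#1")   # the effect of interpreting the sentence

assert find("Mary") == find("teacher#1")
```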

>>  6. Metametalanguages:  As I've said in other notes, I prefer to do
>>     my reasoning in a fairly conventional FOL.  In order to reconcile
>>     ....
>>      I would handle
>>     defaults, etc., as recommendations for things to add in a belief
>>     revision or theory revision stage; but the actual reasoning would
>>     be purely first-order.
> I largely agree with your intuitions about nonmonotonicity: however, it is not
> easy to see quite how it can be made to work in practice. The process of
> context change is not first-order and needs to be integrated with the
> conventional inference in an uncomfortably close way.

I agree that there is a lot of research to be done to work out those
intuitions.  And most of the nonmonotonic part would indeed be outside
the realm of FOL.  In fact, statistical techniques or neural networks
might also be helpful for selecting defaults to consider in the belief
revision stage.  That's where I would put fuzzy logic, by the way --
in a belief revision stage, rather than the later deductive stage.
I have called the popular approaches to fuzzy logic "a fallacy of
misplaced fuzziness."
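
The division of labor I have in mind can be sketched as follows (the
consistency test here is a toy stand-in, and all names are assumptions):
defaults are candidates handled in a belief-revision step, and only the
beliefs that survive it feed the monotonic, first-order deductive stage.

```python
# Sketch: defaults adopted (or blocked) during belief revision,
# so the deductive stage itself stays purely monotonic.

exceptions = {"flies(tweety)": "penguin(tweety)"}

def revise(beliefs, defaults):
    """Adopt each default unless its known exception is already believed."""
    for d in defaults:
        if exceptions.get(d) not in beliefs:
            beliefs.add(d)
    return beliefs

defaults = ["flies(tweety)"]                 # proposed, not yet believed

beliefs = revise({"bird(tweety)"}, defaults)
assert "flies(tweety)" in beliefs            # adopted: no exception known

beliefs2 = revise({"bird(tweety)", "penguin(tweety)"}, defaults)
assert "flies(tweety)" not in beliefs2       # blocked in revision, not deduction
```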

> Your example illustrates my point exactly. To be placed in memory, the
> indexicals must be replaced ('resolved') by 'antecedents for the # markers'.

You were calling them different knowledge representations, and I was
calling them the same, but with some added features in the communication
form.  But we largely agree on the features and operations.

> Let me just emphasise that this had better be done pretty promptly for
> the time # marker, or it will get the wrong value: the proposition will be
> associated with the time you decided to record it rather than when it happened.
> If you record a temporal indexical and wait, it is impossible to retrieve its
> appropriate value.

Resolving indexicals is like resolving variable bindings in LISP.
You can only postpone the resolution if you save the entire environment.
But with both contexts and indexicals as first-class objects in the KR,
you can choose to save the environment if you like.  However, much of
the work on indexicals is still in the active research stage; I include
them in the richer CG formalism, but I would not ask for them in KIF.
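
The LISP-binding analogy can be made concrete with a Python sketch (the
function names are illustrative): either resolve the temporal indexical
promptly at recording time, or capture the environment in a closure so
that late resolution still gets the right value.

```python
# Sketch: two ways to handle a temporal indexical, per the LISP analogy.

import time

def record_resolved(proposition, clock=time.time):
    """Resolve the time indexical promptly -- the safe option."""
    return {"prop": proposition, "time": clock()}

def record_with_env(proposition, clock=time.time):
    """Postpone resolution, but save the environment in a closure."""
    t = clock()                      # the saved environment
    return lambda: {"prop": proposition, "time": t}

fact = record_resolved("door-opened")
delayed = record_with_env("door-opened")
# ... arbitrarily later, the closure still resolves to the event time:
assert delayed()["prop"] == "door-opened"
assert isinstance(fact["time"], float)
```

Recording the raw indexical and waiting, with no saved environment,
corresponds to calling time.time() only at retrieval -- which yields the
retrieval time, not the event time, exactly as you warn.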

> By the way, I have to remark that there is nothing in your example which needs
> all the conceptual graphs stuff you cite (formula operator, situation box,
> etc.). Just extend FOL by allowing indexical markers in argument places.

To resolve the indexicals, you need both the markers and the nested
context structures.  Kamp's DRS form, Peirce's existential graphs, and
my conceptual graphs have isomorphic contexts.  Predicate calculus and
Hendrix's partitioned nets do not have the right kind of context
structure to allow you to resolve the bindings properly.

The formula operator is not part of the CG formalism -- it's just a
handy way to export information from CGs to predicate calculus.

> Thanks for the history. I didn't know that Russell picked up the notation from
> Peano and only later 'rediscovered' Frege. I will go and see where in his
> autobiography he mentions this.

I learned that bit of history at the Peirce Sesquicentennial at Harvard
in 1989.  They also said that some of Russell's glowing praise for Frege
in the intro to the Principia was motivated by a fight he was having
with Peirce (who could be charming, brilliant, and irascible -- the
latter trait causing him to end up impoverished without a full-time
job anywhere).

> In writing "color(x,red)" one has transformed "red" from a property
> to a real thing, has first-orderised it.
> By the way, I agree we should be ontologically promiscuous in this way
> when it is convenient.

Fine.  I think that we agree about the value of such a mechanism and
the need for an apply-like operator ("holds" in KIF or my rho and tau).

> Years ago I wrote a little article on deadly sins in AI, and one of them was
> called "rally round the flag": insisting that everything must be stated in an
> idiosyncratic ad-hoc formalism which is essentially equivalent to all the other
> formalisms.  It is exasperating because in some sense it doesn't matter, since
> they ARE all intertranslatable: but people feel that the use of this or that
> formalism somehow solves a real problem; and that is, well, misleading to new
> students.

I agree.  For the ANSI IRDS project, we are representing everything in
both predicate calculus and conceptual graphs, and we are collaborating
with people who also want an SQL-like notation.  But context structure
is a theoretically significant feature that predicate calculus notation
has tended to obscure and that Kamp's DRS, the existential graphs, and
conceptual graphs highlight.  That is just one example of a "real
problem" where the choice of formalism makes a difference.

> ....  There is a difference, I suggest, between two
> enterprises. One is essentially scientific, trying to model intuitive human
> thinking: 'conceptual analysis' of common-sense reasoning and of everyday
> language. The other, essentially engineering, is making efficient reasoning
> systems which have useful applications.  These are not the same enterprise and
> may sometimes point in different directions, and this is a plausible example of
> such a divergence.

Yes, I agree with that point.  But I also believe that the engineers
and the cognitive scientists have already learned a great deal from one
another and should continue to pay attention to each other's work.

> Many economically important databases are complete in this
> sense, and universal quantifiers can be interpreted as ranging over these
> explicit closed worlds accessible to direct computational inspection, but this
> is almost never true for cognitive modelling of intuitive human thinking.

There are actually very few commercial databases that are truly
"closed world".  SQL has a feature known as a "null value", which is
a half-vast attempt to support open-world reasoning.  But it is so
inconsistent that Chris Date constantly warns people not to use it.
That is an example of the kinds of ad hoc features that the "practical
programmers" implement when the theoreticians leave them alone.