Re: Higher-order KIF & CGs

Message-id: <199311032226.AA12544@dante.cs.uiuc.edu>
Reply-To: cg@cs.umn.edu
Date: Wed, 3 Nov 1993 16:28:55 +0000
To: fritz@rodin.wustl.edu (Fritz Lehmann)
From: phayes@cs.uiuc.edu
X-Sender: phayes@dante.cs.uiuc.edu
Subject: Re: Higher-order KIF & CGs
Cc: boley@dfki.uni-kl.de, cg@cs.umn.edu, interlingua@isi.edu

Hi Fritz,

Let me repeat a parable. A student comes up and announces that he ('he'
because this guy is going to turn out to be an idiot) has invented a
formalism - he has called it BIG - which is so expressive that he can
describe Niagara Falls in it. Impressed, I ask him to show me, and I see:
Very-Big(Niagara-Falls). I ask him what justifies his claim that 'Very-Big'
means very big, and he says it does because he, the student, invented this
logic, and so he gets to define the semantics, and he has decided that it's
just a *logical truth* of BIG that 'Very-Big' denotes the property of being
very big. That's part of the semantics. He shows me another page with things
like 'Meaning("Very-Big") = lambda x. (x is very big)' written on it.

Now, how is this different from the claims that a FOL user makes that, say,
"and" denotes conjunction? The difference is that FOL comes with an
apparatus for making inferences which can be shown to correspond to the
semantics that is being claimed for it. That correspondence is a
completeness theorem. Without such a correspondence - or some similar
account of how deduction relates to meaning - claims that formal symbols
mean something are no better than my student's claims about BIG. In
particular, a user of higher-order logic may claim that their second-order
quantifiers range over all predicates, but I see no reason to accept such a
claim, since given any inferential apparatus they could possibly define,
I can consistently interpret their second-order quantifiers as being
essentially first-order (i.e. Henkin). They might protest that this
isn't what they meant, but my first-order interpretation will justify the
same theorems, so they won't be able to cite me a counterexample. Any
claim that a higher-order logic is *really* higher-order must be based on
faith rather than reason.
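
To make the contrast concrete, here is the usual model-theoretic picture,
in rough notation of my own (nothing here is specific to KIF or CGs). A
'standard' second-order quantifier ranges over the full power set of the
domain D; a Henkin (general-model) quantifier ranges over whatever family
of subsets the model supplies:

   standard:  M \models (\forall P)\phi  iff  for every S \subseteq D,  M \models \phi[P := S]
   Henkin:    M \models (\forall P)\phi  iff  for every S \in \mathcal{P}_M,  M \models \phi[P := S],
              where \mathcal{P}_M is a designated subfamily of \mathcal{P}(D)
              (closed under definability, but possibly far short of all subsets).

Henkin's completeness theorem says that the usual higher-order deductive
apparatus is sound and complete for the second, weaker reading, which is
exactly why no amount of theorem-proving can force the first.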

(What they might be able to do, should we become embroiled in a debate, is
show me that on my interpretation there are nonstandard models of
arithmetic, say. And I repeat, I believe that there ARE nonstandard models
of arithmetic. I even know what some of them are like. I put them in the
same category as space-filling curves and other such topological
weirdos (see the PS note).
You say that we can eliminate nonstandard numbers by nth-order fiat, but
that sounds to me like BIG.)
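
For anyone who wants the construction, it is the textbook compactness
argument, nothing special to this discussion: add a fresh constant c to
the language of arithmetic and, alongside all the truths of arithmetic,
add the axioms

   c > 0,   c > 1,   c > 2,   ...

Every finite subset of the resulting theory is satisfied in the standard
model (interpret c as a large enough number), so by compactness the whole
theory has a model, and in that model c denotes a 'number' greater than
every standard numeral. No first-order means can rule such models out.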

That's why I don't regard completeness theorems as mere technical
curiosities. Also, I think your point about Godel sentences is misleading:
that's not the same sense of completeness. Look rather at a minimal logical
vocabulary of FOL. It is very small, consisting maybe of the symbols OR,
NOT, FORALL. Incompleteness means that one or more of these isn't going to
quite mean what it is supposed to. Which one? There are some answers to this
(for example, restricting the language to Horn clauses means that OR doesn't
quite mean what it usually does), but by and large it is difficult to see
how to damage completeness in a way which preserves much of the essential
character of FOL itself. Take out negation, say, and you just don't have
the same kind of language. Of course real systems will often be incomplete
for all sorts of practical reasons. But these are almost always traceable
to, as it were, failures to search all of the space. But incompleteness of
the underlying representation language is a gap in the search space itself,
which is a more serious matter.
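
To spell out the Horn example (my own toy clauses, of course):

   Wet \leftarrow Rain
   Wet \leftarrow Sprinkler      (Horn: one positive literal each)
   Rain \vee Sprinkler           (not Horn: two positive literals)

A Horn clause may contain at most one positive literal, so a bare
disjunction of two positive facts like the third line cannot be stated at
all; in the restricted language every disjunction has at most one
un-negated disjunct, which is a much weaker thing than classical OR.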

Your point about equality would be a good one except that we know neat ways
to overcome the apparent lack of expressibility. Closed-world assumptions
handle the 100 senators nicely, for example. One might object that this
involves the use of apparently higher-order 'schemas' or some such in order
to be compactly stated. But that's the point: such things really are only
abbreviations of large numbers of first-order axioms, introduced for
reasons of computational and expressive elegance. That is *not* the same as
a genuine second-order quantifier, as the model theory tells us so
eloquently. And the semantic analysis of these languages should reflect the
actual use, not make grand aspirations to an extraordinary expressibility
which is both unnecessary and computationally impossible.
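
Spelled out, the kind of first-order encoding I mean looks like this (the
senator names are schematic, of course):

   Senator(s_1), ..., Senator(s_100)
   \forall x (Senator(x) \rightarrow x = s_1 \vee ... \vee x = s_100)   (closure)
   s_i \neq s_j   for each pair i \neq j                                (unique names)

The 'for each pair' is just bookkeeping over a finite list of axioms; at
no point does any quantifier range over predicates, so 'there are exactly
100 senators' comes out as a perfectly ordinary first-order theory.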

I was interested that when challenged to go above third order you cited
category theory. Yes, I can believe that category theory might find some of
the more rarefied heights useful. I expected you to cite Montague at me,
who also found himself using fourth and maybe fifth orders, and was
concerned with down-to-earth matters of language comprehension. Montague
was clearly using the lambda-abstraction of type theory as a computational
tool, and was not concerned with the veracity of his model theory. I would
be very interested to find out more about Ontek's use; is it in any way
related to Montague?

Look, I think we may be arguing past each other. If you wish to argue for
the computational usefulness of lambda-conversion, then of course I will
agree and point to LISP. You ask for 'REAL EXAMPLES' of the trouble caused
by incompleteness. But that misses the point. The systems behave the way
they do. It's the semantic accounts of them that make them incomplete or
not. You can use beta-conversion and fourth-order schemas all you like, but
it's the claim that this is all higher-order logic and I have to think of
these things as quantifiers over uncountable sets of predicates, and hence
the semantics justifies asserting by fiat that there just are no
nonstandard models of arithmetic, that sticks in my craw.
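
(By beta-conversion, to be clear, I mean nothing more exotic than the
rewriting step

   (\lambda x.\, P(x) \wedge Q(x))\, a   \to_\beta   P(a) \wedge Q(a)

with P, Q and a arbitrary - the kind of step a LISP evaluator performs all
day long, and one which by itself commits nobody to any view about how
many predicates exist.)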

For example, you say: 'I think it's perfectly clear what "character" means
here; it means predicate.' But that's not good enough. The question at issue
is how many predicates there are. For example, are there uncountably many
predicates of the integers? Diagonalisation arguments would certainly
suggest so, but now let me turn the tables on you and ask why I should
believe in these things, most of which could never possibly be named or
conceptualised. Until one plays with it for a while, it is hard to
appreciate just what an extraordinarily unintuitive idea an 'uncountable
set' really is. For example, suppose we let you somehow catalog an infinite
number of them in a finite lifetime (so you could name every integer): you
still wouldn't have begun to name all of these things. You can return to
another life infinitely many times, and in each one of them you can name
infinitely many of them again: you still have only named so few of them
that the set remaining might be dense. Is *this* what you are claiming to
have quantified over when you write (forall P)?
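
(The diagonalisation I mentioned, for the record, is just Cantor's: given
any countable list P_1, P_2, P_3, ... of predicates of the integers,
define a new one by

   D(n)   iff   not P_n(n).

D differs from each P_i at the argument i, so no countable list can be
exhaustive - hence 'uncountably many'. My question is what it could
possibly mean to quantify over all of them.)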

Best wishes

Pat Hayes


PS. It is important to clear up one misunderstanding that even you seem to
have. That there are nonstandard models of arithmetic doesn't change the
actual process of *doing* arithmetic one jot, nor any processes of
reasoning about arithmetical entities. It just means that we have to accept
that, if challenged, we have to be creatures of this uncertain century and
admit that we have no means of being absolutely certain that we have the
real line pinned down. My remark about uncomputable resources was only to
indicate that to claim that our arithmetical talk really *must* refer to
standard arithmetic could only be justified, ultimately, if we could claim to have
processed infinitely many sentences. 

PPS. Your point about the multi-adicity of KIF screwing up some of the
conventional instantiation schemes is a good one, and I think more work
needs to be done on this in general. I agree that this is expressively very
useful and natural, but it has given rise to a number of annoying technical
problems, and maybe it needs to be looked at more carefully.
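
(A schematic illustration of the sort of thing I have in mind - the syntax
below is only KIF-flavoured, and the relation name is invented:

   (R a b)     (R a b c)     (forall (?x @rest) (R ?x @rest))

If R may be asserted at any arity, then the usual instantiation rule
'replace the quantified variable by a term' needs a companion rule saying
what a sequence variable such as @rest may be replaced by - zero or more
terms - and the standard unification and Herbrand-style arguments have to
be rechecked for that case.)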

----------------------------------------------------------------------------
Beckman Institute                          (217)244 1616 office
405 North Mathews Avenue                   (217)328 3947 or (415)855 9043 home
Urbana, IL. 61801                          (217)244 8371 fax

hayes@cs.stanford.edu  or Phayes@cs.uiuc.edu