Re: Frames are (almost) enough for EDI (Pat Hayes)
Message-id: <>
Date: Fri, 23 Sep 1994 14:48:38 +0000
To: (Paul van der Vet),
From: (Pat Hayes)
Subject: Re: Frames are (almost) enough for EDI
At 11:04 AM 9/22/94 +0200, Paul van der Vet wrote:

>.... there is, I believe, no such
>thing as "common sense knowledge" that you can understand in general
>terms, without paying attention to the precisely specified desired
>behaviour of the concrete artefact under construction.

Well, that is an interesting position to take. It is in opposition to the
presumptions of a large body of work in AI, which has assumed that there is
indeed such a thing as "common sense knowledge" which people share, and
utilise in many ways to think with and to help decipher what they say to
one another. Much AI research is predicated on the hypothesis that the
construction of a formalisation of such knowledge is an important
(engineering) task. We may be wrong, but my original note was intended only
to suggest a possible technical problem that we might be having. If this is
not your goal, then my comment was probably irrelevant to you.

>....... Discussions about the
>feasibility of artefacts are common in mature engineering disciplines;
>why not in AI? Because it isn't engineering? Then what is it?

It is part of cognitive science, I would say, just to give an answer.
Scientists and engineers have been having this debate in many areas for
decades now, and it doesn't get anywhere. Engineers - even mature
engineers - are always deeply suspicious of any grand goal that isn't to
build some particular thing, and scientists are always more interested in
finding something out than in getting some gadget to work. AI can span both
science and engineering.

> And why
>do we bother about making working systems at all?

To test hypotheses about what will work. And to make money, of course.

>..... Formal tools .... impose only very
>global restrictions on what you express; for instance, we would agree
>that any organisation is to be consistent. For the rest we have to
>think it out ourselves.

Of course: the logics don't solve the formalising problems for us
automatically. But (at the risk of sounding repetitive) let me emphasise
again my original point by taking up that 'consistent'. In ordinary
thinking we often seem able to accept a general proposition as being
consistent with several minor exceptions. This has been modelled by default
logics, nonmonotonic logics, probabilistic logics, etc.; but I've never
seen a really convincing account of it. Repeated use of any one of these
formalisms will gently warp the way you think about how to organise
knowledge: you will become sensitive to the order in which things are
asserted; or you will come to think of every universal quantifier as
basically probabilistic and then become worried about how to axiomatise
arithmetic; or you will start to routinely qualify things with 'unless
exceptional' predicates, or whatever. I don't know how to "think it out"
for myself in a completely neutral way, uninfluenced by the formal tool I
am using to write the organisation down with, and I suspect it can't be
done. (It may not even make sense, in fact.) Maybe these are only 'very
global' restrictions, but they may nevertheless (or even for just this
reason) also be very pervasive distortions of how we are able to organise
things.
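The order-sensitivity and the 'unless exceptional' style can both be seen
in a toy sketch. This is purely illustrative (the names and the flying-birds
example are my own, and it is not a faithful rendering of any particular
nonmonotonic logic): a default conclusion holds until an exception is
asserted, so what you can conclude depends on what has been said so far.

```python
# Toy sketch of an "unless exceptional" default rule (illustrative only;
# not a formal default or nonmonotonic logic).

exceptions = set()  # individuals so far asserted to be exceptional

def assert_exceptional(x):
    """New assertion: x is an exception to the default."""
    exceptions.add(x)

def flies(x):
    # Default rule: every bird flies, unless asserted exceptional.
    return x not in exceptions

print(flies("tweety"))        # True: the default applies
assert_exceptional("tweety")  # later we learn Tweety is, say, a penguin
print(flies("tweety"))        # False: the earlier conclusion is withdrawn
```

The second call retracts what the first call concluded, which is exactly
the nonmonotonic behaviour - and exactly the sensitivity to the order of
assertions - that classical logic does not exhibit.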

OK, no more from me on this topic. 


Beckman Institute                                    (217)244 1616 office
405 North Mathews Avenue        	   (217)328 3947 or (415)855 9043 home
Urbana, IL. 61801                                    (217)244 8371 fax