Re: Frames are (almost) enough for EDI
Date: Thu, 22 Sep 94 11:04:22 +0200
From: firstname.lastname@example.org (Paul van der Vet)
To: email@example.com, firstname.lastname@example.org
Subject: Re: Frames are (almost) enough for EDI
In-reply-to: Mail from 'email@example.com (Pat Hayes)'
dated: Wed, 21 Sep 1994 13:26:39 +0000
In a rejoinder to a previous message of mine, Pat Hayes writes:
> >Now the carpet example is perhaps not so clarifying because it deals
> >with common-sense knowledge. Common-sense knowledge doesn't give you
> >much to hold on to anyway, so even if you do inspect consequence sets
> >you often find yourself wondering whether the differences matter.
> Yes, exactly. But AI has been trying to codify such common-sense knowledge
> using logic (or some equivalently expressive notation) now for about 20
> years. It is widely thought to be an essential task in order to achieve
> intelligence: and certainly it seems that the knowledge that is needed in
> order to help language understanding or reasoning about any domain (other
> than very precise mathematical or scientific ones) is going to involve
> these concepts. Legal reasoning, for example, surely needs to understand
> the notions of 'in' and 'part of'. And the same kinds of difficulty arise
> even in such apparently well-organised domains as describing the managerial
> structure of an organisation. (Can someone be their own manager? It depends
> quite what you mean by 'manager'.)
My reply was partly motivated by the lack of attention to the
engineering aspects of the task. In fact, in Hayes's first posting
there is a remarkable contrast between issues being called unimportant
and their being capable of getting in the way later. If they can get
in the way, then surely they are important. It seems to me that the
"getting in the way" part reflects an engineering goal while the
"unimportant" part reflects some other, wider, and to some at least
more lofty goal. It pays to keep the two goals apart.
Let's stick to the engineering goal. I think we would agree that
solving issues such as the manager issue is vital to the task of
designing a useful artefact. The criterion for a satisfactory solution
is whether it gives the artefact the desired behaviour.
"Understanding" is a somewhat misleading term to use here. There
exists localised knowledge of a common sense-like variety whose
purpose is to make an artefact work. But there is, I believe, no such
thing as "common sense knowledge" that you can understand in general
terms, without paying attention to the precisely specified desired
behaviour of the concrete artefact under construction.
The argument that AI research has been seeking to codify common-sense
knowledge (in general terms?) for over 20 years is not decisive. The
history of engineering shows plenty of examples of envisaged artefacts
that turned out to be impossible in practice. Discussions about the
feasibility of artefacts are common in mature engineering disciplines;
why not in AI? Because it isn't engineering? Then what is it? And why
do we bother about making working systems at all?
> >..... you are by necessity solving
> >two problems in one stroke: (1) organising an otherwise unorganised
> >bunch of intuitions which might moreover differ from person to person;
> >and (2) expressing what you have found in first-order logic. I suggest
> >that in dealing with common-sense knowledge the first problem is the
> I agree, but I should add that the only way we have to do such organising,
> is to try to express the stuff in some logic-equivalent notation. Solving
> the second problem gives us the tools to solve the first problem. The point
> of my comment was to suggest that maybe part of our difficulties with the
> first problem might be because we are using the wrong kind of tools.
We thoroughly agree on the need for formal tools; indeed, without them
we would be utterly lost in such tasks. However, no matter which logic
we use, the real problem is finding or, more correctly, establishing
organisation. Formal tools help you express the organisation you've
imposed and trace its consequences. But they impose only very global
restrictions on what you express; for instance, we would agree that
any organisation must be consistent. For the rest we have to think it
out ourselves.
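The point can be made concrete with a toy sketch (in Python, purely
illustrative; all names and the consistency check are my invention, not
anything from the discussion): the same set of facts about a "manager"
relation is consistent under one axiomatisation and inconsistent under
another, so the formal tool only traces consequences of an organisation
we ourselves chose to impose.

```python
# Toy illustration: whether "someone can be their own manager" is
# settled not by the facts but by the axioms we impose on the relation.

def consistent(facts, axioms):
    """A fact set is consistent here iff it violates no axiom."""
    return all(axiom(facts) for axiom in axioms)

def irreflexive(facts):
    """Candidate axiom: nobody manages themselves."""
    return all(boss != worker for boss, worker in facts)

# The same facts, including a self-managing Alice:
facts = {("alice", "bob"), ("alice", "alice")}

strict_axioms = [irreflexive]  # one organisation of the intuitions
loose_axioms = []              # another, with no such restriction

print(consistent(facts, strict_axioms))  # False: self-management forbidden
print(consistent(facts, loose_axioms))   # True: the axioms permit it
```

The logic machinery is identical in both runs; only the imposed
organisation differs, which is the sense in which the thinking-out
remains our job.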
Paul van der Vet.
Paul van der Vet Phone +31 53 89 36 94 / 36 90
Knowledge-Based Systems Group Fax +31 53 33 96 05
Dept. of Computer Science Email firstname.lastname@example.org
University of Twente
P.O. Box 217
7500 AE Enschede