Subject: Intensions
To: Michael Genesereth <email@example.com>
Cc: firstname.lastname@example.org, email@example.com
Date: Tue, 18 Sep 90 09:39:32 PDT
From: Robert MacGregor <firstname.lastname@example.org>
On labelling propositions. Your first suggestion:
> First of all, we can axiomatize things like probability:
> (= (probability `(and ,$p ,$q)) (probability `(and ,$q ,$p)))
is less desirable, because it places the semantics on the
operator (in this case "probability"). Suppose I want to say
several different things about a material implication (e.g., what
its probability is, whether to forward-chain or backward-chain
on it, statistical information like how often it is applied
during chaining, etc.). I don't want to develop a separate
set of axioms for each different operator. Instead,
I approve of your second suggestion:
> Secondly, at the SB meeting, several people proposed a function
> (called, say, CONCEPT) that maps a sentence into its intension.
> With this, we could do what you want as follows
> (= (probability (concept '(black raven))) 0.3)
> ... we could even make these things operators
> so that the hated quote does not appear
> (cbelieves joe (black raven))
This latter form without the quote is the kind of thing I'm looking for.
> Of course, to do that we would need to agree on a method
> for quantifying in to formulas. Perhaps free variables are
> quantified outside the scope of the cbelieves. I would
> appreciate some suggestions here.
I agree that there is work to be done here. There are advantages
though. For example, if it works out, we may have an alternative to
backquote. Here is one of the tough nuts from Peter Norvig's examples:
"John thought the person at the party was Mary's friend, and Mary
thought the person was John's friend." One of Peter's translations was:
(bel john `(friend ',$p mary))
(bel mary `(friend ',$p john))
With "cbelieves" this might be phrased as
(and (at-party ?p)
(cbelieves john (friend ?p mary))
     (cbelieves mary (friend ?p john)))
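To make the quantifying-in proposal concrete, here is a minimal sketch (not from the original discussion) of instantiating a free variable that is quantified outside the belief contexts. The nested-Python-list encoding of the sentences and the helper name `instantiate` are my own assumptions, standing in for the s-expressions above:

```python
# A sketch (assumed representation): sentences are nested lists, and
# symbols beginning with '?' are free variables bound outside any
# belief operator, per the quantifying-in proposal above.

def instantiate(sentence, bindings):
    """Substitute bindings for free variables throughout a sentence,
    including inside belief contexts such as cbelieves."""
    if isinstance(sentence, str):
        return bindings.get(sentence, sentence)
    return [instantiate(part, bindings) for part in sentence]

# (and (at-party ?p)
#      (cbelieves john (friend ?p mary))
#      (cbelieves mary (friend ?p john)))
sentence = ["and",
            ["at-party", "?p"],
            ["cbelieves", "john", ["friend", "?p", "mary"]],
            ["cbelieves", "mary", ["friend", "?p", "john"]]]

instantiated = instantiate(sentence, {"?p": "pat"})
```

Since the substitution reaches inside both cbelieves forms, both believers end up with a belief about the same individual, which is the reading the Norvig example wants.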
> I have not figured out just what those intensions are, hoping
> that someone with more intuition on those things could ghelp
> me out.
Probably the major problem with an intensional semantics is deciding
when two intensions/concepts are equivalent. The general problem is, of
course, undecidable. KL-ONE style classifiers vary in their ability to
discover analytic equivalence between concepts. One might figure that
this problem is not relevant for KR systems that don't have classifiers.
However, this is not the case. Many systems compile axioms into internal
structures, and in doing so, exploit properties such as commutativity
(of logical AND, logical OR, equality, ...), associativity properties, etc.
Any system that applies some kind of canonicalization to input sentences is
working with at least a partially intensional semantics. Classifier-based
KR systems represent an extreme case of this kind of processing.
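As an illustration of the point about compiled-in commutativity, here is a small sketch (my own, with an assumed nested-list encoding of sentences) of the kind of canonicalization such systems perform. Sorting the arguments of commutative operators makes `(and p q)` and `(and q p)` compile to the same internal structure, so the system is, in effect, identifying the two intensions:

```python
# Commutative operators whose argument order the (hypothetical) compiler
# discards; the set is an assumption for this sketch.
COMMUTATIVE = {"and", "or", "="}

def canonicalize(sentence):
    """Recursively canonicalize a nested-list sentence, sorting the
    arguments of commutative operators into a fixed order."""
    if isinstance(sentence, str):
        return sentence
    op, *args = sentence
    args = [canonicalize(arg) for arg in args]
    if op in COMMUTATIVE:
        args = sorted(args, key=repr)
    return [op] + args

# (and p q) and (and q p) map to the same canonical form:
assert canonicalize(["and", "p", "q"]) == canonicalize(["and", "q", "p"])
```

A system doing even this much has committed to an intensional identification that goes beyond the literal syntax of its inputs, which is the sense in which its semantics is "partially intensional."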
Summing up, there is indeed a new problem here. However, the
alternative to solving this problem seems to involve mailing
axiomatizations representing the limitations/quirks of each KR system's
rule interpreter with each set of sentences to be transmitted via the
Interlingua, which is really no alternative at all.