Comments on Richard's proposal for definitions

Message-id: <>
Subject: Comments on Richard's proposal for definitions
Date: Mon, 06 Jan 92 10:07:07 PST
From: Robert MacGregor <>

> First of all, we are proposing to distinguish definitions (and
> inference rules) from sentences, so that a knowledge base will consist
> of a collection of definitions, rules, and sentences.  Definitions
> (and inference rules) therefore do not have a truth value, cannot be
> embedded inside of sentences, cannot appear as arguments to operators,
> cannot be derived, etc.

I think this is a very positive step forward.

> Analogously, "kappa" is a relation-forming operator defined as
> follows.  The semantic value of the term "(kappa (<indvar>*
> [<seqvar>]) <sentence>)" is the set of tuples associated with the
> relation that holds for every tuple of objects "matching" the variable
> list for which the sentence is satisfied when the variables in the
> variable list are assigned to the objects in the tuple.  That meaning
> is expressed with the following axiom schema:
>      (= (kappa ($v1 ... $vk [@w]) phi)
>         (setof (list $v1 ... $vk [@w]) phi))
> We now have the alternative, as suggested by Bob, of considering the
> domain assertion to be:
>      (= r (kappa ($v1 ... $vk [@w]) (and s1 ... sn)))

I think mathematical logic rather than Lisp should be the guide for
choosing the KIF meaning for the term "lambda".  Hence, I agree with
the spirit of what your "lambda" and "kappa" operators are doing,
but not with your choice of operators.  Further, I think of
"lambda" as something that is a primitive logical operator -- defining
it in terms of a set constructor seems to be a backwards way of
doing things (perhaps "setof" should be defined using "lambda").
Regarding the sequence variables -- I think KIF would be a lot cleaner
and more palatable if sequence variables were eliminated, so I have no
opinion on subtleties that depend on that particular aspect of the language.
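To make the "lambda as primitive" point concrete, here is a minimal illustrative sketch in Python (not KIF; all names are mine, not part of any proposal): take the predicate itself (a lambda) as the primitive notion, and derive the set constructor from it over a finite domain, rather than defining lambda in terms of a set constructor.

```python
# Sketch: "lambda" (a predicate) is primitive; "setof" (its extension)
# is derived from it, not the other way around.

def setof(pred, domain):
    """Extension of a unary predicate over a finite domain."""
    return {x for x in domain if pred(x)}

# The predicate is just a lambda -- the primitive notion.
even = lambda n: n % 2 == 0

# Its extension is computed, not primitive.
print(sorted(setof(even, range(10))))  # -> [0, 2, 4, 6, 8]
```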

The recent discussions on definitions do seem to be moving in what
I feel is the right direction.  However, regarding analytic truths and whatnot,
I think that efforts to combine everything that a definition
represents into a *single* statement are off the mark.  For
example, suppose I have:
   (defrelation R (?x) (and (A ?x) (B ?x)))
   (defrelation S (?x) (and (B ?x) (A ?x)))

Are the definitions of R and S the same? Answer: No.
Are the definitions of R and S equivalent? Answer: Yes.  What
do I mean by equivalent?  The following lambda expressions (my usage, not
Richard's)
   (lambda (?x) (and (A ?x) (B ?x))) and
   (lambda (?x) (and (B ?x) (A ?x)))
denote exactly the same predicate.  Thus, I would call the definitions
of two symbols R and S equivalent if they denote the same thing.  However,
the SOURCE of the definitions is not the same thing as their denotation,
and that distinction has been lost in the sentence
   (= R (lambda (?x) (and (A ?x) (B ?x)))).
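The distinction can be illustrated with a small Python sketch (purely illustrative; A, B, and the finite domain are my own stand-ins): the two lambda expressions with reordered conjuncts denote the same predicate, yet their source texts differ.

```python
# Two definitions whose conjuncts are written in different orders.
A = lambda x: x > 0
B = lambda x: x < 10

R = lambda x: A(x) and B(x)   # (lambda (?x) (and (A ?x) (B ?x)))
S = lambda x: B(x) and A(x)   # (lambda (?x) (and (B ?x) (A ?x)))

# Equivalent: they denote the same predicate on every object tested.
domain = range(-5, 20)
same_denotation = all(R(x) == S(x) for x in domain)
print(same_denotation)  # True

# Not the same: the SOURCE expressions differ.
source_R = "(and (A ?x) (B ?x))"
source_S = "(and (B ?x) (A ?x))"
print(source_R == source_S)  # False
```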
I propose inventing a binary relation DEFINITION that maps symbols to
the expressions that they define.  For example, my interpretation of
the declaration
   (defrelation R (?x) (and (A ?x) (B ?x)))
would be the two statements
   (definition 'R '(lambda (?x) (and (A ?x) (B ?x)))).
   (= R (lambda (?x) (and (A ?x) (B ?x)))).
with some kind of wrapper around the second statement indicating that
this is an analytic-truth, law, or whatever we want to call it.  Note: 
when one considers systems that support revision of definitions (e.g.,
LOOM), it is mandatory that something equivalent to the source of a definition
be available to the reasoner -- lambda expressions lose important information.
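A rough Python sketch of the proposed DEFINITION relation (hypothetical names throughout; this is my reading of the proposal, not an implementation of LOOM): record both the source expression and the current denotation of each defined symbol, so that the source survives revision.

```python
# DEFINITION maps a symbol to the expression that defines it;
# the denotation is kept separately, as the analytic equality would be.
definitions = {}  # symbol -> source expression (quoted, as in (definition 'R '...))
denotations = {}  # symbol -> the predicate the symbol currently denotes

def defrelation(name, source, pred):
    """Record both statements produced by a defrelation declaration."""
    definitions[name] = source   # (definition 'R '(lambda ...))
    denotations[name] = pred     # (= R (lambda ...))

defrelation("R", "(lambda (?x) (and (A ?x) (B ?x)))",
            lambda x: isinstance(x, int) and x > 0)

# Revision: because the source is retained, R can later be redefined
# and a reasoner can still inspect what the old and new sources say.
defrelation("R", "(lambda (?x) (A ?x))",
            lambda x: isinstance(x, int))

print(definitions["R"])  # the revised source expression
```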

What about primitive definitions?  I propose first reaching agreement on
non-primitive definitions, and then tackling primitiveness.

- Bob