Definitions and analytic truth
Message-id: <>
Date: Sat, 4 Jan 92 14:11:40 EST
Subject: Definitions and analytic truth
Some comments on the recent comments.

First, Len Schubert described some of the definitional capabilities
in the interlingua:

>   (defobject <name> <wff1> ... <wffn>)
>   (defunction <name> (<var1> ... <varn>) <wff>)
>   (defrelation <name> (<var1> ... <varn>) <wff1> ... <wffm>)
>   (defprimrelation <name> (<var1> ... <varn>) <wff1> ... <wffm>)
> (We might also use a single "definition" construct which subsumes all
> of these, essentially quoting their parts; there are pros and cons to
> this...) The first 3 presume "full" definitions (and only one definition
> may be given per name), but the last allows partial "definitions", and
> there may be any number per name, each one placing some constraint on
> the "defined" term. The example in the draft document handed out at
> one of the working group meetings was
>   (defprimrelation person ($x)
>      (mammal $x))

Aside:  I think that an overwhelming majority of the potential users
of the interlingua would prefer to say that Person is a subtype of
Mammal.  I can think of any number of arguments for making KIF a
typed logic, but I do not know why it is still untyped.
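To make the aside concrete, here is a minimal sketch (in Python, not KIF itself) of the difference between stating "(defprimrelation person ($x) (mammal $x))" as an axiom and declaring Person a subtype of Mammal in an explicit type hierarchy; the type names and the representation are illustrative assumptions.

```python
# Toy type hierarchy: a subtype link makes "every person is a mammal"
# a structural fact, queryable by transitive lookup, rather than one
# axiom among many.

subtypes = {}  # child type -> parent type

def declare_subtype(child, parent):
    """Record that every instance of `child` is an instance of `parent`."""
    subtypes[child] = parent

def is_subtype(a, b):
    """True if `a` is `b` or a (transitive) subtype of `b`."""
    while a is not None:
        if a == b:
            return True
        a = subtypes.get(a)
    return False

declare_subtype("person", "mammal")
declare_subtype("mammal", "animal")

print(is_subtype("person", "animal"))  # True, by transitivity
print(is_subtype("animal", "person"))  # False
```

The point of the aside is that this subtype reading is what most users intend when they write such a "definition."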

> In this way we can write down any number of "laws" about how terms are
> related, without fully defining them.
> I think a reasonable complaint here is that such laws aren't, and
> shouldn't be called, "definitions"; in fact, note that it can be rather
> arbitrary what name we insert at the head of defprimrelation. For
> instance, if we want to say, as a "law", that no whale is a fish
> (i.e., no fish is a whale!), is this a partial definition of "whale"
> or of "fish"? Still, the consensus in the interlingua group seemed
> to be that the use of partial definitions, with the partial definition
> attached to a particular name, was so entrenched in KR languages that
> it made sense to have such a construct in the interlingua. I should
> also point out that the "analytic-truth" predicate (below) in principle
> affords a means to express laws WITHOUT attaching them to particular
> names.

John McCarthy commented on this point:

> It seems to me that KIF definitions should be _conservative_ extensions
> of the language just as definitions are in logic.  One should not
> be able to sneak in a new assertion in the guise of a definition.

I agree.  It doesn't make sense to cite the "entrenched use" in the KR
community here, when there is a 30-year history of entrenched use of
type hierarchies that is being ignored.

Continuing with Len's comments:

> ... we (Rich and I) decidedly didn't affirm equivalence between
> definitions and analytic truths. Rather, we proposed such analytic
> truths as PART of the content of definitions (the "domain assertion"
> part, in our terminology); an entailment, if you like (this depends on
> viewing definitions as wffs, i.e., assertions, a view  which remains
> somewhat controversial). The notion of analytic truth was introduced
> to account for the intuitive distinction between contingent truth and
> truth-in-virtue-of-meaning, the latter having necessity-like force.

I would prefer to put it the other way around:  analytic truths include
definitions and laws.  Then the axioms could be clearly separated from
the definitions, which could remain conservative extensions, as in logic.

Most object-oriented systems have gotten into a bind by stuffing too
much into their definitions.  One system put the fact that a car driver
must be over 16 into the definition of CAR.  But then if you wanted to
ask about properties of TEENAGER, that fact was not accessible.  The
O-O style of packaging definitions (a.k.a. ravioli code) is convenient
for some purposes, but not for others.  The interlingua should allow
an O-O style of packaging, but should not require it.
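The packaging problem above can be sketched as follows (an assumed design, not any of the original systems): if laws live in a shared pool indexed by every term they mention, rather than inside one class definition, the driving-age fact is reachable from a query about CAR and equally from a query about DRIVER or TEENAGER.

```python
# Laws stored once, indexed by all terms they mention, so no single
# "definition" owns them. The example law and terms are illustrative.

laws = []  # each entry: (set of terms mentioned, statement)

def assert_law(terms, statement):
    laws.append((set(terms), statement))

def laws_about(term):
    """Every law that mentions the given term."""
    return [s for ts, s in laws if term in ts]

assert_law({"car", "driver"}, "the driver of a car must be over 16")
assert_law({"teenager", "person"}, "a teenager is a person aged 13-19")

print(laws_about("car"))     # finds the driving-age law
print(laws_about("driver"))  # finds the same law from the other side
```

An O-O style view could then be layered on top by grouping `laws_about(t)` per type, without forcing each law into exactly one package.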

> Bob [MacGregor] says
>> The traditional means for assigning a semantics for definitions
>> is to make reference to lambda expressions, e.g.,
>>    (defrelation A (?x) (B ?x))

>> is equated with something like
>>    (define A (lambda (?x) (B ?x)))
>> meaning that "A" is a constant that denotes a particular
>> lambda expression. Rather than inventing a new and controversial
>> denotation for definitions, why aren't we using something that's
>> already tried and true?
> I'm as big a fan of lambda as Bob (that's why I put it in my semantic
> net syntax in AIJ'76 and in Hwang's & my KR for natural language in
> KR'89). However, the suggestion that "(define A (lambda (?x) (B ?x)))
> just means that "A" is a constant that denotes a particular lambda
> expression" is problematic. Is the idea that "A" is just a name for
> the syntactic expression "(lambda (?x) (B ?x))"? In that case, we
> are talking about SUBSTITUTIONAL rather than ANALYTIC definitions,
> which were also included in the interlingua proposal as a macro-like
> facility. I think one can indeed get by with substitutional definitions
> wherever one might otherwise use defobject, defunction, or defrelation.

It's not the lambda expression that allows you to do substitutions, but
the equal sign.  The statement "The number of planets = 9" is not a
definition, but it still gives me the right to substitute "9" for "the
number of planets" in any context where that fact happens to be known.
However, if I have a statement about a different context, e.g. "Hegel
thought that the number of planets was prime," I am not justified in
inferring that Hegel thought that 9 was a prime number.
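The substitution point can be modeled in a few lines (a toy sketch, with an assumed term representation): equality licenses replacement in extensional contexts but must be blocked inside opaque contexts such as "Hegel thought that ...".

```python
# Substitution by known equalities, refusing to descend into an opaque
# "believes" context. The statement encoding is illustrative only.

equalities = {"the number of planets": "9"}

def substitute(expr):
    """Rewrite terms by known equalities, except inside opaque contexts."""
    if isinstance(expr, tuple) and expr[0] == "believes":
        return expr  # opaque: leave the believed content untouched
    if isinstance(expr, tuple):
        return tuple(substitute(e) for e in expr)
    return equalities.get(expr, expr)

transparent = ("prime", "the number of planets")
opaque = ("believes", "Hegel", ("prime", "the number of planets"))

print(substitute(transparent))  # ('prime', '9')
print(substitute(opaque))       # unchanged: no substitution under belief
```

This is exactly the license the equal sign gives and the license it withholds: "9" may replace "the number of planets" in the first statement but not inside Hegel's belief.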

I would also distinguish macro facilities from lambda expressions.
To me, macros are syntactic sugar that is outside the semantics of
the knowledge base.  But a statement of equality between a symbol and
a lambda expression can have a truth value in the same way as any other
statement.

> However, you can't reconstruct defprimrelation that way (and a lot of
> people seem to want that notion); and since substitutional definitions
> are metalinguistic statements, you can't express them and reason with
> them IN the object language, if that's what you want to do.

No, I could assert A = (lambda x)B(x), and later ask whether
A = (lambda y)C(y).  That assertion might be true in one context and
false in another, but it is not appropriate to ask about the truth of
a macro.
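The question "is A = (lambda y)C(y)?" treats the equation as an assertion with a truth value, which a macro lacks. A minimal sketch of one piece of that query, under an assumed term representation: comparing two lambda expressions up to renaming of the bound variable (alpha-equivalence), so that (lambda x)B(x) and (lambda y)B(y) count as the same relation.

```python
# Toy alpha-equivalence for terms of the form ("lambda", var, body),
# where body is a flat tuple of symbols. Representation is illustrative.

def alpha_equal(t1, t2):
    """Compare two one-variable lambda terms up to bound-variable renaming."""
    (_, v1, b1), (_, v2, b2) = t1, t2
    fresh = "_v"  # rename both bound variables to a common fresh name
    def rename(body, old, new):
        return tuple(new if e == old else e for e in body)
    return rename(b1, v1, fresh) == rename(b2, v2, fresh)

a = ("lambda", "x", ("B", "x"))
b = ("lambda", "y", ("B", "y"))
c = ("lambda", "y", ("C", "y"))

print(alpha_equal(a, b))  # True: same relation, renamed variable
print(alpha_equal(a, c))  # False: different body
```

Whether A = (lambda y)C(y) holds in a given context is then a further, contingent question about what A denotes there, not a syntactic one.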

> You can't express that IF the definition of a bachelor is an unmarried
> man, THEN Joe is a bachelor. Nor can you express the argument that since
> Joe is a man but not a bachelor, it follows that the definition of a
> bachelor cannot simply be that he is a man. Nor can you express that
> Joe thinks that (the definition of) a conifer is a generally cone-shaped
> tree, whereas in fact it is a cone-bearing tree or shrub. Etc.

Any English sentence that contains words like "definition", "sentence",
"grammar", etc., is a metalanguage statement.  English lets you switch
from object language to metalanguage by adding a few such words to your
vocabulary while using the same old syntax.  I think that the interlingua
should support a similar ability to switch from object to metalanguage.

> On the other hand, if we take the above suggestion to be that "A"
> and "(lambda (?x) (B ?x))" DENOTE the same thing, then this is just
>   A = (lambda (?x) (B ?x)),
> an object-language assertion (assuming we admit lambda into the object
> language). But this leads exactly to the same problems that Bob points
> out for an interpretation of definitions as "analytic-truth" assertions.
> E.g. (under standard semantics for lambda and equality), we can deduce
>   B = (lambda (?x) (A ?x))
> and hence (incorrectly) that (define B (lambda (?x) (A ?x))).
> In fact the equalities are equivalent to
>   (forall ?x ((A ?x) <=> (B ?x))).
> So we would STILL want to say that (define A (lambda (?x) (B ?x)))
> merely entails A = (lambda (?x) (B ?x)), but that in addition, the
> fact that it IS a definition, and its particular syntactic form, are
> significant. (Again I would use a normal-form function to reason
> about the truth of definitions.)

I would prefer to say that analytic truths include definitions, laws, and
anything provable from them.  So just proving an analytic truth that B is
equal to some lambda expression would not mean that that expression was
the definition of B.

> Finally, concerning John's further comments:
>> I also find the "analytic-truth" predicate very distasteful.  Presumably,
>> it means that the assertion is provable from some set of laws.  But in
>> that case, there is a missing argument -- namely, which laws?
>> There are some very serious philosophical arguments about the distinction
>> between analytic and synthetic (see for example, Quine's _Word and
>> Object_).  I think we should avoid using terms that are so highly charged
>> with philosophical controversy.  If we say, instead, that there are named
>> contexts, each with its own set of laws, then the "analytic-truth"
>> predicate could be replaced by the following predicates:
>>  1. islaw(c,p), which says that p is a law in the context c.
>>  2. necessary(c,p), which says that p is provable from the laws in c.
>>  3. possible(c,p), which says that p is consistent with the laws in c.
>> If you want to define A as a synonym for B(x) in the context c, you
>> could do so by an assertion of the following form:
>>     islaw(c, A = (lambda x)B(x)).
> Yes, there is a "missing argument" in the notion of analytic truth,
> in the sense that there is tacit reference to a set of definitions.
> This is clear from the truth conditions for definitions sketched above,
> which DO refer to a particular set of definitions. I think the
> suggestion to make this argument syntactically explicit is an
> interesting one, requiring further thought (from my perspective,
> anyway). The issue seems completely analogous to the issue of whether
> modal notions like "provable", "consistent to assume", "probable",
> "known", etc. -- modalities often introduced into logics (e.g.,
> nonmonotonic, probabilistic, or autoepistemic ones) -- should have
> explicit "knowledge base" or "context" arguments. (In many proposals,
> they don't.)

I would very much like to see contexts brought in as "first-class objects",
a la McCarthy, Guha, and others.
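McCarthy's three predicates, with contexts as first-class objects, can be sketched as follows. This is a hedged toy model: "provable" is approximated by a naive forward-chaining closure over rules of the form (p, q) read as "p implies q", and "consistent" by absence of a provable negation; real KIF provability is of course far richer, and the context name and facts are illustrative.

```python
# Contexts as first-class objects, each carrying its own laws.

contexts = {}  # context name -> {"facts": set, "rules": set of (p, q)}

def islaw(c, p):
    """p is an explicitly asserted law of context c."""
    return p in contexts[c]["facts"]

def closure(c):
    """Naive forward chaining: all facts derivable from c's laws."""
    facts = set(contexts[c]["facts"])
    changed = True
    while changed:
        changed = False
        for p, q in contexts[c]["rules"]:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

def necessary(c, p):
    """p is provable from the laws of c (toy approximation)."""
    return p in closure(c)

def possible(c, p):
    """p is consistent with c: its negation is not provable (toy check)."""
    return ("not " + p) not in closure(c)

contexts["zoology"] = {
    "facts": {"whale is mammal"},
    "rules": {("whale is mammal", "not whale is fish")},
}

print(necessary("zoology", "not whale is fish"))  # True
print(possible("zoology", "whale is fish"))       # False
```

Note how the earlier whale/fish "law" now lives in a named context rather than being attached, arbitrarily, to either "whale" or "fish".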

> As for the term "analytic truth" itself, it does not seem terribly
> out of joint with Quine's notion. I'm not particularly attached to the
> term, but I doubt that "necessary" is an uncontroversial substitute.
> That term, too, is loaded (since there are many forms of necessity --
> logical, physical, epistemic, deontic, etc.) Moreover, I take it that
> "necessary truths" are generally taken as a proper subclass of analytic
> truths (see Word and Object, p.66), and I've been told (perhaps
> incorrectly) that the term "analytic truth" was used by people like
> Bar-Hillel precisely to avoid confusion between truth-in-virtue-of-
> meaning with (the narrower notion of) truth as a matter of logical
> necessity. But I don't think anyone views the terminological issues
> as settled at this point. Maybe Quine's "truth by convention" would
> serve better? Just a thought.

The term is being used here in much the same way that Quine used it,
but he was severely criticizing it, since he did not believe that there
was a clear distinction between analytic and synthetic.  I agree with
Quine in not believing in global or eternal analytic truths.  But I would
be willing to go along with "analytic with respect to a given context."
Another term that might be useful here is "microtheory", which Guha
introduced into Cyc.  A microtheory would be a collection of definitions
and laws, which could be imported into a context.

Following is the continuation of John McCarthy's comments:

> Jussi Ketonen's EKL interactive theorem prover handles it as follows.
> Let foo be a new symbol and P{foo} any formula containing foo.
> Provided one can prove (exists foo)P{foo}, one can write P{foo} giving
> "by definition of foo" as the reason.  The easiest case of this is
> when P{foo} has the form foo = rhs, where rhs does not contain foo.
> Of course, P{foo} could also be (all x y z)(foo(x,y,z) iff rhs), where
> again rhs does not contain foo.
> The use of definition can make arguments shorter, but in first order logic
> and, I believe, the higher order logic of EKL, definitions don't allow
> any formula not containing the defined term to be proved that
> isn't provable without the definition.
> Since KIF allows nonmonotonic reasoning, the inference of (exists foo)P{foo}
> may be nonmonotonic also.
> This method of definition is essentially one of partial definition, since
> one is not required to infer that there is a unique  foo  satisfying
> P{foo}.
> We can also achieve the successive refinement of a definition by adding
> just a little syntactic sugar to the previous notion.  Suppose  foo
> has already been defined.  Then our previous method allows defining
> a new symbol  baz  by the formula  P{baz}  taking the form
> baz = foo  and P1{baz},  where  P1{baz}  gives the new desired
> properties.  We still have to infer  P{baz}.  No, this isn't correct,
> because that would require proving  P{baz},  where  foo  is
> any entity satisfying  P{foo},  whereas we want merely to have to
> show that there is an entity  foo  satisfying  P{foo} and P1{foo}.
> Doubtless this can be fixed up.

Having a type hierarchy would make it easy to fix this up.  You could
introduce FOO and BAZ as undefined types, with BAZ as a subtype of FOO.
Then P{FOO} would be introduced as a law for things of type FOO, and
P1{BAZ} would be a law for things of type BAZ.  By inheritance, you
would also have P{BAZ}, but you could not infer P1{FOO}.
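The inheritance pattern just described can be sketched directly (an assumed encoding, with FOO, BAZ, P, and P1 as placeholder names): laws attach to types and flow downward along subtype links, so BAZ inherits P from FOO, while P1 never propagates upward to FOO.

```python
# Laws attached to types, inherited downward along subtype links only.

supertype = {"BAZ": "FOO"}  # BAZ is a subtype of FOO
local_laws = {"FOO": {"P"}, "BAZ": {"P1"}}

def laws_of(t):
    """All laws holding of type t: its own plus those inherited from above."""
    laws = set()
    while t is not None:
        laws |= local_laws.get(t, set())
        t = supertype.get(t)
    return laws

print(laws_of("BAZ"))  # contains both P (inherited) and P1 (its own)
print(laws_of("FOO"))  # contains P only; P1 does not flow upward
```

This is the successive-refinement behavior wanted above, obtained structurally rather than by repairing the syntactic-sugar encoding.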

There is also a question of whether you need to prove existence.
You may want to define the type UNICORN without having to prove that
unicorns exist.  Consistency of the axioms might be sufficient.

But even consistency might be too strong a requirement, especially
since you may have to define a system and do a lot of exploration of
its consequences before you can conclude that it is consistent.  And
in the case of arithmetic, people have been using the system for
centuries without being certain that it is consistent.

John Sowa