Re: Ex-Con logic
Date: Tue, 22 Feb 1994 15:22:20 +0000
To: interlingua@ISI.EDU
Subject: Re:  Ex-Con logic
At  6:22 PM 2/21/94 -0500, wrote:
>As a reminder, my example was 
>   (E x)(E z)~((E v)(E w)~((~(P(x)&P(v)) & ~(P(z)&P(x))) & R(v,w,x)))
>   (E v)(E w)~((E x)(E y)~((~(P(v)&P(w)) & ~(P(y)&P(w))) & R(y,x,w)))
>(the second is obtained from the first by variable renaming x z v w
>                                                            w v y x
>and permutation of conjuncts).
>The observation that the 2 wffs have the "same" existential graph
>seems to me unhelpful, since all we can ever display, or compute with,
>are their tokens (embodiments, as you say). Those tokens stubbornly
>insist on having the properties -- like node identities, locations, 
>relative configuration of parts in the physical medium, etc. -- that 
>you have abstracted away. The problem is to match the tokens, not the 
>abstract structures they are tokens OF. 

But look. Suppose I were to insist that since graphs must be printed or
displayed, any perceptible 'token' graph will have its nodes and lines
displayed in some position on the screen; therefore, the problem of graph
matching is the problem of matching bitmapped pictures of graphs. This
would clearly be a perverse position to take, since graphs can be
represented as data structures which are independent of the details of
these graphical tokens. (Even if we were talking about an interface which
allowed people to draw graphs on a screen, so that the particular token was
indeed a bitmap, it still might be helpful to think in terms of graph data
structures which abstracted irrelevant geometric details.)
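To put the point in concrete terms, here is a toy sketch of my own (the representation is an ad hoc invention, not anything proposed in this discussion): two 'tokens' of the same labelled graph, drawn at different screen positions, compare equal once the geometry is forgotten.

```python
# Two 'tokens' of one labelled graph: the drawings differ, the graph doesn't.
token1 = {'pos': {'a': (10, 20), 'b': (300, 5)}, 'edges': [('a', 'b')]}
token2 = {'pos': {'a': (0, 0),   'b': (1, 1)},  'edges': [('b', 'a')]}

def abstract(g):
    """Forget geometry and edge orientation: keep only the node labels
    and the undirected adjacency structure."""
    return (frozenset(g['pos']), frozenset(frozenset(e) for e in g['edges']))

assert abstract(token1) == abstract(token2)
```

Matching the bitmaps would be hopeless; matching the abstracted structures is trivial here, and in general is ordinary graph isomorphism.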

>My challenge was to show this 
>is easy for propositions in John's sense, and this remains unanswered.

True. The issue is whether, and how, the 'graphical' nature of the
variable-binding structure (ie the fact that it does not fit naturally into
the usual recursive syntax) can be utilised. I confess I can't see how to
do this in any reliable way, but it might be worth trying.

Here's a way to draw the expressions which embodies this kind of abstraction:
                  _|_____________________________________  |
           ______|_|_____________|_________              | |
        | |      | |        |    |         |    |        | | |
     (E o o)~((E o o)~((~(P(])&P(])) & ~(P(])&P(]))) & R(],],]))

       _|______|_|________     |         |    |        | | |
      | |      | |        |    |         |    |        | | |
   (E o o)~((E o o)~((~(P(])&P(])) & ~(P(])&P(]))) & R(],],]))
Here, a quantifier takes a list of arguments, each of which is the root
(indicated here by o) of a 'harness-tree' whose tips are argument locations
(indicated here by ]). To test whether two such structures are
propositionally identical, match their bodies and check whether the induced
graph-matchings locate all the harness-roots in the same quantifier list. 
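For what it's worth, here is a sketch in modern terms of the variable-free representation this licenses, a linearised cousin of the harness idea rather than the graph-matching itself: replace each bound occurrence by the depth of its binder (de Bruijn style), so that renamings like x z v w -> w v y x simply vanish. The tuple encoding is my own ad hoc choice.

```python
def debruijn(term, env=()):
    """Replace bound-variable names by binder depth (de Bruijn indices),
    so that terms differing only by renaming become literally identical."""
    if term[0] == 'E':                    # ('E', name, body): binds one name
        _, name, body = term
        return ('E', debruijn(body, (name,) + env))
    if term[0] == 'var':                  # ('var', name): a bound occurrence
        return ('var', env.index(term[1]))
    # connectives and atoms: recurse into the arguments
    return (term[0],) + tuple(debruijn(t, env) for t in term[1:])

# (E x)(E z) R(x,z)  vs  (E v)(E w) R(v,w): identical once indexed
f1 = ('E', 'x', ('E', 'z', ('R', ('var', 'x'), ('var', 'z'))))
f2 = ('E', 'v', ('E', 'w', ('R', ('var', 'v'), ('var', 'w'))))
assert debruijn(f1) == debruijn(f2)
```

This disposes of renaming, but not of the permutation-of-conjuncts problem discussed next.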

It will be necessary to consider permutations of &, of course, and that
seems to me to be the real difficulty. Unfortunately it does not seem that
anything is going to be able to help with the worst case, since one can
always imagine the perverse example of a relation with n arguments, the
conjunction of n! atoms with all permutations of x1,..xn in their argument
places, and these n variables all universally bound. It is hard to see how
anything other than an exhaustive search is going to be able to detect this.
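Conjunct permutation taken on its own can at least be normalised away; here is a sketch of my own (again an ad hoc tuple encoding) that flattens nested &s and sorts the conjuncts into a canonical order. The n! blow-up survives only because, in examples like the one above, the choice of binder matching and the choice of conjunct matching constrain each other, which no such local normal form resolves.

```python
def ac_normal(term):
    """Canonical form under associativity/commutativity of '&':
    flatten nested conjunctions, then sort the conjuncts."""
    if not isinstance(term, tuple):
        return term                      # a variable or constant symbol
    if term[0] == '&':
        parts = []
        for sub in term[1:]:
            n = ac_normal(sub)
            # splice flattened sub-conjunctions into the parent
            parts.extend(n[1:] if n[0] == '&' else [n])
        return ('&',) + tuple(sorted(parts, key=repr))
    return (term[0],) + tuple(ac_normal(t) for t in term[1:])

# Same conjunction, different grouping and order: one normal form
a = ('&', ('P', 'x'), ('&', ('Q', 'y'), ('R', 'x', 'y')))
b = ('&', ('&', ('R', 'x', 'y'), ('P', 'x')), ('Q', 'y'))
assert ac_normal(a) == ac_normal(b)
```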

>Look, one can play the abstract identity game without graphs
>So now I can say that if only we were to change those ugly standard
>logical formulas above to conceptual formulas, they'd be the same!

Yes, I agree.

>> Also, the conceptual lexicon interacts via graph-grammar definitional 
>> expansions of types. The theory of this structure is necessarily 
>> intensely graph- theoretic, dealing with subgraph iso- and 
>> homo-morphisms and symmetries and cycles of graphs, which I feel 
>> answers Len's challenge re semantic nets.  (In Episodic Logic's 
>> notation, he is apostate.)
>My point, in my position paper in John's book, was that if you view
>expressions of a (standard) logical syntax as "vertices", and the
>relation between an expression and its immediate ordered constituents
>as labeled edges, then sets of logical formulas ARE graphs (DAGs),
>mathematically speaking. So to say "I'm using graphs rather than 
>formulas" is rather meaningless. 

I agree that as a slogan it's not worth much, and to use graph notation to
encode propositional connectives and relational assertions and so forth is
of relevance only for interface design. But I do think there is something
real in this discussion. It amounts to saying that quantifier binding
structure does not naturally 'fit' into the usual recursive way of parsing
syntax, and that bound variables are only a device, and perhaps a rather
artificial one, for making it fit. In your terms here, sets of logical
formulas are graphs, indeed: but there are two kinds of structure in them.
Much of their graph structure is essentially tree-like (maybe DAGs). But
it may be more natural to describe the quantifier-binding machinery not by
inserting 'bound variables' into that immediate-constituent tree, but
rather as a kind of auxiliary apparatus attached on the side. 

I agree, ordinary surface syntax is fine. It's all in the way you look at
it. (Actually, in the way you parse it.)


PS Here's one sign that this may be a worthwhile way to try thinking: I
haven't yet found any linguist who can tell me of a formalism that can
describe such a structure. Do you know of such a thing?

Beckman Institute                                    (217)244 1616 office
405 North Mathews Avenue        	   (217)328 3947 or (415)855 9043 home
Urbana, IL. 61801                                    (217)244 8371 fax  or