Jackendoff Ray: SEMANTIC INTERPRETATION IN  GENERATIVE  GRAMMAR  (MIT
     Press, 1972)

     A milestone of interpretive semantics within  Chomsky's  extended
     standard theory of generative grammar.

     The author shows that theta roles determine  to  some  extent  the
     well-formedness of anaphoric relations. Theta roles form a hierar-
     chy, and binding must respect it by placing the antecedent  of  an
     anaphor higher on the hierarchy than the anaphor itself.

Jackendoff Ray: X-BAR SYNTAX (MIT Press, 1977)

     A monumental study of the phrase structure of the English language  in
     the light of Chomsky's X-bar theory.

Jackendoff Ray: SEMANTICS AND COGNITION (MIT Press, 1983)

     Jackendoff develops conceptual structures to explain  language,  in  a
     fashion similar to Fodor's mentalese.

     The structure of meaning ought to be pursued on the same first princi-
     ples as phonology and syntax.

     The meaning of verbs can be reduced to a few spatiotemporal primitives,
     such as
     motion and location.

     The "extended standard theory" enhances Chomsky's standard  theory  by
     using  interpretation rules to extract the meaning of a sentence. Such
     rules apply to the  intermediate  syntactic  structures  used  in  the
     derivation of the phonetic representation.

Jackendoff Ray: CONSCIOUSNESS AND THE COMPUTATIONAL MIND (MIT Press,
     1987)

     Jackendoff believes in a hierarchy of levels of mental  representa-
     tion.

     The book restates Jackendoff's claim that phonology and syntax are key
     to  the structure of meaning, then extends the framework developed for
     language to vision and music (hinting at a possible  unification  with
     Marr's theory of vision).

     Each cognitive function exists at different levels  of  interpretation
     and cognitive functions generally interact at intermediary levels.

     Jackendoff refines and extends Fodor's idea of the modularity  of  the
     mind.

     Consciousness arises from a level of representation  which  is  inter-
     mediate between the sense-data and the form of thought.

Jackendoff Ray: SEMANTIC STRUCTURES (MIT Press, 1990)

     Jackendoff's conceptual semantics is applied to lexical and  syntactic
     expressions in English. Jackendoff proposes a formalism for  describ-
     ing lexical semantic facts and  expressing  semantic  generalizations.
     He  employs multi-dimensional representations analogous to those found
     in phonology.

Jackendoff Ray: LANGUAGES OF THE MIND (MIT Press, 1992)

     This collection of papers summarizes Jackendoff's formal theory of the
     nature of language and a modular approach to "mental anatomy",  and
     applies the same concepts to learning and common sense reasoning.

     There is a tight relationship between vision and language. A  lexical
     item contains the stereotypical image of the object or concept.  Know-
     ing the meaning of a word implies knowing what the object or  concept
     looks like.

Jackendoff Ray: PATTERNS IN THE MIND (Harvester Wheatsheaf, 1993)

     Following Chomsky, Jackendoff thinks that  the  human  brain  contains
     innate linguistic knowledge and that the same argument can be extended
     to all facets of human experience: all experience  is  constructed  by
     unconscious  genetically  determined  principles  that  operate in the
     brain.

     The experience of spoken language is constructed by the hearer's  men-
     tal grammar: speech per se is only a meaningless sound wave;  only  a
     hearer equipped with the proper device can make sense of it.

     These same conclusions can be applied to thought itself, i.e.  to  the
     task  of  building  concepts.  Concepts  are constructed by using some
     innate, genetically determined, machinery, a sort of "universal  gram-
     mar  of  concepts".   Language  is but one aspect of a broader charac-
     teristic of the human brain.

Jackson Frank: CONDITIONALS (Basil Blackwell, 1987)

     A collection of articles by David Lewis, Robert Stalnaker,  Grice  and
     Frank Jackson on the subject of conditionals. A theory of conditionals
     must offer an account of the truth conditions of a conditional  (under
     which conditions "if A then B" is true or false, or acceptable to some
     degree). The traditional view that a conditional is true if  and  only
     if the antecedent is false or the consequent is true is too  simplis-
     tic and allows conditionals such as "if Jones lives in London, then he
     lives  in Scotland" to be true (if he does not live in London or lives
     in Scotland) when it is obviously senseless.
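     The truth-functional account is easy to state in code. A minimal
     sketch (plain Python; the example names and values are ours, purely
     illustrative):

```python
def material_conditional(a, b):
    # The traditional truth-functional account: "if a then b" is true
    # whenever the antecedent is false or the consequent is true.
    return (not a) or b

# "If Jones lives in London, then he lives in Scotland" comes out true
# merely because the antecedent is false:
lives_in_london = False
lives_in_scotland = False
print(material_conditional(lives_in_london, lives_in_scotland))  # True
```

     The only falsifying case is a true antecedent with a false consequent,
     which is what makes the account too weak for ordinary conditionals.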

     Stalnaker and Lewis solve some of the problems of (subjunctive) condi-
     tionals ("if it were that A then it would be  that  B")  by  using
     possible-world semantics. Lewis also reviews Ernest Adams' thesis that
     the  assertability  of  (indicative)  conditionals  ("if A then B") is
     measured by the conditional probability of the  consequent  given  the
     antecedent.
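     Adams' thesis has a direct numerical reading. A toy sketch (plain
     Python; the die example is ours, not Adams'): assertability is
     estimated as the proportion of A-worlds that are also B-worlds.

```python
def assertability(worlds, antecedent, consequent):
    # Adams' thesis: the assertability of the indicative "if A then B"
    # is measured by the conditional probability P(B | A), estimated
    # here by counting equiprobable worlds.
    a_worlds = [w for w in worlds if antecedent(w)]
    return sum(1 for w in a_worlds if consequent(w)) / len(a_worlds)

# "If the die came up even, it came up greater than two":
rolls = [1, 2, 3, 4, 5, 6]
print(assertability(rolls, lambda n: n % 2 == 0, lambda n: n > 2))  # 2/3
```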

Jackson Frank: PERCEPTION (Cambridge University Press, 1977)

     The immediate objects of perception are mental. To perceive an  object
     is  to  be  in  a perceptual state as a causal result of the action of
     that object.

     On epiphenomenal qualia Jackson proposed a famous thought experiment:
     a neurophysiologist who knows everything about how  the  brain  per-
     ceives colors, but has never experienced them, still cannot know what
     it feels like to see a color.

     Color is not  a  property  of  material  things.  Sense-data  are  not
     material, they are mental.

Jauregui Jose: THE EMOTIONAL COMPUTER (Blackwell, 1995)

     This is the English translation of the 1990 book "El Ordenador  Cere-
     bral".

     Jauregui, like Wilson, views sociology as a branch  of  biology.   The
     same emotional system controls social, sexual and individual behavior.

     This emotional system originates from the neural organization  of  the
     brain: emotions are rational and predictable events. Jauregui believes
     that the brain is a computer, but introduces the novelty  of  emotions
     as the direct product of that computer's processing  activity.  It  is
     emotions, not reason, that direct and inform  the  daily  actions  of
     individuals. Jauregui deals with humans that feel pleasure  and  pain
     rather than with abstract problem solvers.

     Jauregui begins by separating the brain and the  self:  the  brain  is
     aware  of  what  is  going on in the digestive system of the body, but
     will inform the self only when some  correction/action  is  necessary.
     Normally,  an  individual is not aware of her digestive processes. Her
     brain is always informed, though. The  communication  channel  between
     the  brain  and  the self is made of emotions.  The brain can tune the
     importance of the message by controlling the  intensity  of  the  emo-
     tions.  Far  from  being  an irrational process, the emotional life is
     mathematically calculated to achieve exactly  the  level  of  response
     needed.   Feelings  are subjective and inaccessible, but they also are
     objective and precise.

     The self has no idea of the detailed process that was going on in  the
     body  and  of  the  reason  why  that  process must be corrected.  The
     brain's emotional system, on the other hand, is  a  sophisticated  and
     complex  information-processing  system.  The brain is a computer pro-
     grammed to inform the self (through emotions) of what must be done  to
     preserve  her  body  and  her society. It is through emotions that the
     brain informs the self of every single detail  in  the  body  that  is
     relevant  for  survival. There almost is no instant without an emotion
     that tells the individual to do something rather than something  else.
     "For  human  beings the reality that ultimately matters is the reality
     of their feelings".

     The self keeps a level of freedom: while it cannot suppress the  (emo-
     tional)  messages it receives from the brain, it can disobey them. The
     brain may increase the intensity of the message as the self  disobeys
     it, and a painful conflict may arise. The brain and the self are  not
     only separate, but they may fight each other.

     Only the self can be conscious and feel, but the brain has control  of
     both consciousness and feelings.

     If we view the brain as a computer, the hardware is made of the neural
     organization.  There  are  two  types  of software, though: bionatural
     (knowledge about  the  natural  world)  and  biocultural  (such  as  a
     language  or  a  religion).   A program has three main components: the
     sensory, the mental and the emotional systems. Any sensory  input  can
     be translated automatically by the brain into a mental (idea) or  emo-
     tional (feeling) message, and vice versa. Biocultural  and  bionatural
     programs exert emotional control over the body.

     Jauregi distinguishes five systems of communication: the natural  sys-
     tem (the sender is a natural thing, such as a tree), the cultural sys-
     tem (the sender is culture, something created by humans), the  somatic
     system (the sender is the individual's own body), the imaginary system
     (the sender is imagination) and  the  social  system  (the  sender  is
     another  individual).   The  human  brain  is  genetically equipped to
     receive and understand all five kinds of  messages.   What  ultimately
     matters is the emotional translation of sensory inputs.

Jaynes Julian: THE ORIGIN OF CONSCIOUSNESS IN THE BREAKDOWN OF THE
     BICAMERAL MIND (Houghton Mifflin, 1977)

     Jaynes makes a number of interesting  points  about  consciousness.
     Consciousness  is not necessary for concepts, learning, reason or even
     thinking.  Awareness of an action tends to follow,  not  precede,  the
     action.   Awareness  of  an action bears little or no influence on the
     outcome.  Before one utters a sentence, one is not conscious of  being
     about to utter those specific words.

     Consciousness is an operation rather than a thing. It is an  operation
     of analogy that transforms things of the real world into meanings in a
     metaphorical space. Consciousness is a metaphor-generated model of the
     world. Consciousness is based on language, therefore it appeared after
     the emergence of language. By reviewing historical documents  of  past
     civilizations, Jaynes tries to identify when and how consciousness was
     born. Causes include the advent of writing,  the  loss  of  belief  in
     gods, epics, and natural selection itself.

     Jaynes thinks that some social institutions and religions, psychologi-
     cal  phenomena  such as hypnosis and schizophrenia, and artistic prac-
     tices such as poetry and music are vestiges of  an  earlier  stage  of
     human consciousness.

Jeannerod Marc: THE COGNITIVE NEUROSCIENCE OF ACTION (Blackwell,
     1996)

     A survey of findings on the representations and processing  that  lead
     to action, from neurophysiological data to the role of mental imagery.

Johnson-Laird Philip: HUMAN AND MACHINE THINKING (Lawrence Erlbaum,
     1993)

     A theory of deduction, induction and creation.

Johnson-Laird Philip: THINKING (Cambridge Univ Press, 1977)

     A collection of articles that reviews the study  of  thinking  in  the
     aftermath of the conceptual revolution that forced the transition from
     behaviorism to information-processing. Contributions range from philo-
     sophy (Popper, Kuhn) to artificial intelligence (Minsky, Schank).

Johnson-Laird Philip: MENTAL MODELS (Harvard Univ Press, 1983)

     Johnson-Laird's representational theory assumes that  mind  represents
     and  processes  models  of the world. The mind solves problems without
     any need to use logical reasoning. A linguistic representation such as
     Fodor's is not necessary.

     A sentence is a procedure to build, modify, or extend a mental  model.
     The  mental  model  created  by  a discourse exhibits a structure that
     corresponds directly to the structure of the world  described  by  the
     discourse.

     To perform an inference on a problem the mind needs to build a  model
     of the situation described by its premises. Such a mental model  sim-
     plifies reality and allows the mind to find an "adequate" solution.

     Johnson-Laird draws on several phenomena to  prove  the  psychological
     inadequacy  of a mental logic.  People often make mistakes with deduc-
     tive inference because it is  not  a  natural  way  of  thinking.  The
     natural  way is to construct mental models of the premises: a model of
     discourse has a structure that corresponds directly to  the  structure
     of  the  state of affairs that the discourse describes.  How can chil-
     dren acquire inferential capabilities before they have any inferential
     capabilities?  Children  solve problems by building mental models that
     are more and more complex.

     Johnson-Laird admits three  types  of  representation:  "propositions"
     (which  represent  the  world  through  sequences of symbols), "mental
     models" (which are structurally analogous to the world)  and  "images"
     (which are perceptive correlates of models).

     Images are ways to approach models.  They  represent  the  perceivable
     features of the corresponding objects in the real world.

     Models, images and propositions are functionally and structurally dif-
     ferent.

     Linguistic  expressions  are  first  transformed  into   propositional
     representations.   The  semantics  of the mental language then creates
     correspondences  between  propositional  representations  and   mental
     models,  i.e.  propositional representations are interpreted in mental
     models.

     Turning to meaning and model-theoretic semantics,  Johnson-Laird  pro-
     poses  that  a mental model is a single representative sample from the
     set of models satisfying the assertion. Semantic properties of expres-
     sions are emergent properties of the truth conditions. Johnson-Laird's
     procedural semantics assumes that there are procedures that  construct
     models on the basis of the meaning of expressions.

     Johnson-Laird believes that consciousness is computable. The mind con-
     tains  a  high-level operating system and a hierarchy of parallel pro-
     cessors.  Conscious mind is due to a serial process of symbolic  mani-
     pulation  that  occurs at the higher level of the hierarchy of proces-
     sors (in the operating system), while unconscious mind  is  due  to  a
     parallel process of distributed symbolic representation.  Emotions are
     non-symbolic signals,  caused  by  cognitive  interpretations  of  the
     situation, that propagate within the hierarchy.

Johnson-Laird Philip: THE COMPUTER AND THE MIND (Harvard Univ Press,
     1988)

     An introduction to the themes and methods of cognitive science, with a
     review of production and connectionist architectures.  Long  chapters
     are devoted to speech, vision and language.  Johnson-Laird  also  in-
     troduces his theory of mental models and summarizes his theory of con-
     sciousness and emotions.

Johnson-Laird Philip & Byrne Ruth: DEDUCTION (Lawrence Erlbaum, 1991)

     The authors advance a comprehensive theory to explain  all  the  main
     varieties of deduction: propositional reasoning (which uses the connec-
     tives "and", "or" and "not"), relational reasoning (which  depends  on
     relations between entities), and quantificational reasoning (which uses
     quantifiers such as "any" and "some"), and justify it with  a  variety
     of psychological experiments.

     In  order  to  understand  discourse,  humans  construct  an  internal
     representation  of  the  state  of  affairs  that is described in that
     discourse. These mental models have the same structure as  human  con-
     ceptions  of  the situations they represent. Deduction does not depend
     on formal rules of inference but rather on a  search  for  alternative
     models  of the premises that would refute a putative conclusion.  Cen-
     tral to the theory is the principle that people use models  that  make
     explicit as little information as possible.  The  theory  also  makes
     sense of how people deal with conditionals.

     The theory explains phenomena such as: that modus ponens ("if  p  then
     q"  and  "p" then "q") is easier than modus tollens ("if p then q" and
     "not q" then "not p").

Josephson John & Josephson Susan: ABDUCTIVE INFERENCE (Cambridge
     University Press, 1993)

     Abduction (inference to the best  explanation,  i.e.  building  the
     hypothesis  that best accounts for the data) is ubiquitous in ordinary
     life as well as in scientific theory formation.  The book  presents  a
     dynasty of systems that explored abduction.  Intelligence is viewed as
     a cooperative community  of  knowledge-based  specialists  (performing
     "generic  tasks").   Knowledge  arises from experience by processes of
     abductive inference.

Jouvet Michel: LE SOMMEIL ET LE REVE (Jacob, 1992)

     Jouvet was the first to localize the trigger zone for  REM  sleep  and
     dreaming in the brain stem. In this book he provides a neurobiological
     and psychological analysis of sleep and dreaming.

     According to his findings, a dream  is  the  vehicle  employed  by  an
     organism  to cancel or archive the day's experiences on the basis of a
     genetic program. Dreaming is a process that absorbs a lot of energy.

     This theory would also solve the  dualism  between  hereditary  and
     acquired features. A hereditary component  is  activated  daily  to
     decide how new data must be acquired.

Kaku Michio: HYPERSPACE (Oxford University Press, 1994)

          A popular introduction to modern cosmology,  including  black
          holes, time travel, parallel universes and alien civilizations.

          The title refers to the fact that the universe may actually exist
          in dimensions beyond the commonly accepted four of spacetime. The
          laws of nature become simpler when  expressed  in  higher  dimen-
          sions.  In fact, all forces can be unified in the ten-dimensional
          hyperspace of superstring theory.  Kaku shows how the concept  of
          supergravity  was  derived from the intuitions of the old Kaluza-
          Klein theory, which first unified the two great  field  theories,
          light and gravity (Maxwell and Einstein).

Kandel Abraham: FUZZY MATHEMATICAL TECHNIQUES (Addison Wesley, 1986)

          A very technical and very well organized introduction to the con-
          cepts  and  theorems of fuzzy logic: fuzzy sets, theory of possi-
          bility, fuzzy functions (integration and  differentiation),  mul-
          tivalent logics, linguistic approximation and applications.

Kanerva Pentti: SPARSE DISTRIBUTED MEMORY (MIT Press, 1988)

          The sparse distributed memory is a model of long-term  memory  in
          which situations are encoded by patterns of features and episodes
          are encoded by sequences of them. Any pattern in a  sequence  can
          be  used  to  retrieve  the  entire sequence. Memories are stored
          based on features. The senses must extract the invariant features
          of objects to retrieve the corresponding memories. The motor sys-
          tem is also controlled by sequences of patterns in memory. A cen-
          tral  site,  the "focus", stores all the features that are needed
          to define the specific moment in time, to account for  subjective
          experience. The model is capable of learning.

          Most of the study is a computational analysis of the  feasibility
          of a very large address space whose units of address decoding are
          linear threshold functions (neurons).
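          The core mechanics can be sketched in a few lines (a toy recon-
          struction with NumPy; the sizes and parameters are illustrative,
          not Kanerva's): a fixed set of random "hard locations", writes
          that update counters at every location within a Hamming radius
          of the target address, and reads that pool and threshold those
          counters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 256, 500, 120          # address bits, hard locations, radius

addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard locations
counters = np.zeros((M, N), dtype=int)        # one counter per location per bit

def active(addr):
    # Locations whose address lies within Hamming distance RADIUS of addr.
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    # Distributed storage: every active location's counters are nudged.
    counters[active(addr)] += np.where(data == 1, 1, -1)

def read(addr):
    # Pool the counters of the active locations and threshold at zero.
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                             # autoassociative storage
noisy = pattern.copy()
noisy[rng.choice(N, size=20, replace=False)] ^= 1   # corrupt 20 of 256 bits
print(np.array_equal(read(noisy), pattern))         # recovery succeeds when the
                                                    # two active sets overlap
```

          Because storage is distributed over many locations, a noisy cue
          that is merely close to the original address still activates
          mostly the same locations and retrieves the stored pattern.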

Kaplan David: THEMES FROM KAPLAN (Oxford Univ Press, 1989)

          This book is a tribute to Kaplan by a number  of  thinkers  (Cas-
          taneda,  Church, Deutsch, etc), but also contains Kaplan's famous
          "Demonstratives" (1977).

          Indexicals include the personal pronouns, the demonstrative  pro-
          nouns,  some adverbs ("here", "now", "tomorrow"), etc, i.e. words
          whose referent depends on the context of use (whose meaning  pro-
          vides  a  rule which determines the referent in terms of the con-
          text).  The logic of demonstratives, based on first-order  predi-
          cate  logic,  is a theory of word meaning, not speaker's meaning,
          based on linguistic rules shared by all linguistic users.

          Indexicals are "directly referential",  i.e.  refer  directly  to
          individuals without the mediation of Fregean sense (unlike nonin-
          dexical  definite  descriptions,  which  denote  their   referent
          through   their  sense).   Kaplan's  indexicals  are  similar  to
          Kripke's "rigid designators",   expressions  that  designate  the
          same thing in every possible world in which they exist and desig-
          nate nothing  elsewhere. Indexicals  provide  directly  that  the
          referent  in  every  circumstance  is  fixed  to  be  the  actual
          referent. In Kaplan's case, though, the expression is  the  "dev-
          ice" of direct reference.

          Kaplan distinguishes between  the  "character"  of  a  linguistic
          expression  (its grammatical meaning, i.e. what the hearer learns
          when she learns the meaning of that expression) and its "content"
          in  a  context  (the  proposition,  the  primary bearer of truth-
          values, the object  of  thought).   Indexicals  have  a  context-
          sensitive character, nonindexicals have a fixed character.  Char-
          acters are functions that map contexts into contents.
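          The character/content distinction lends itself to a small func-
          tional sketch (plain Python; the context fields and example
          values are ours, purely illustrative, not Kaplan's formalism):

```python
from dataclasses import dataclass

@dataclass
class Context:
    # A context of use fixes the parameters an indexical is sensitive to.
    speaker: str
    place: str
    time: str

# The character of an indexical is modeled as a function from contexts
# of use to contents (here, simply the referent in that context).
CHARACTERS = {
    "I":    lambda c: c.speaker,
    "here": lambda c: c.place,
    "now":  lambda c: c.time,
}

c1 = Context(speaker="Kaplan", place="Los Angeles", time="1977")
c2 = Context(speaker="Kripke", place="Princeton", time="1980")

# One character, different contents in different contexts:
print(CHARACTERS["I"](c1))  # Kaplan
print(CHARACTERS["I"](c2))  # Kripke
```

          A nonindexical expression would correspond to a constant func-
          tion: the same content in every context.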

          The theory of  direct  reference  for  indexicals  includes:  the
          language  system  (to  which meanings and characters belong), the
          contexts of uses (through which referents are assigned to expres-
          sions) and the circumstances of evaluation (at which truth-values
          are allocated to sentential referents).

Karmiloff-Smith Annette: BEYOND MODULARITY (MIT Press, 1992)

          A developmental model is proposed  that  embraces  both  Piaget's
          constructivism  and  Fodor's  nativism, both innate capacities of
          the human mind and subsequent representational changes. Based  on
          a  number  of  experiments on children, Karmiloff-Smith  believes
          that initially children learn by instinct,  or  at  least  impli-
          citly;  then  their  thinking develops, by redescribing the world
          from an implicit form to more and more explicit  forms,  to  more
          and  more verbal knowledge.  She contends that there are separate
          cognitive categories, each with its  own  innate  structure;  but
          modularization  is  seen  as a product of the child's development
          and development proceeds along the same sequential steps for  all
          mental  activities.  Language  is just one of them.  The model is
          then applied to connectionist models of the mind.

Katz Jerrold: THE METAPHYSICS OF MEANING (MIT Press, 1990)

          A critique of naturalism,  particularly  Wittgenstein's  argument
          against  intensionalist  theories of meaning and Quine's argument
          for indeterminacy.  By examining Wittgenstein's own  critique  of
          pre-existing theories of meaning, Katz salvages a theory of mean-
          ing (the "proto-theory") which postulates underlying sense struc-
          ture  (just  like  Chomsky's  postulation of underlying syntactic
          structure) and constructs a decompositional semantics (i.e., pro-
          vides a preliminary theory of decompositional sense structure).

          Katz replaces Frege's referentially defined notion of sense  with
          a  notion  defined  in  terms  of  sense properties and relations
          internal to the grammar of the language, thereby accomplishing  a
          separation of sense structure and logical structure (a separation
          of grammatical meaning from reference and use).

          Katz thinks that the meaning of words  can  be  decomposed  into
          atoms of meaning that are universal across all languages.

          This  may  well  be  the   most   detailed   critique   ever   of
          Wittgenstein's thought.

Katz Jerrold: AN INTEGRATED THEORY OF LINGUISTIC DESCRIPTIONS
          (MIT Press, 1964)

          Two components are necessary for a theory of  semantics:  a  dic-
          tionary,  which  provides  for  every lexical item a phonological
          description, a syntactic  classification  ("grammatical  marker",
          e.g.  noun  or verb) and a specification of its possible distinct
          senses ("semantic marker", e.g. light as in color  and  light  as
          the  opposite  of heavy); and projection rules, which produce all
          valid interpretations of a sentence.
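          The two components can be mocked up in a few lines (plain
          Python; the entries and marker names are illustrative, not
          Katz's actual notation):

```python
# A toy dictionary: each lexical item lists senses, each carrying a
# grammatical marker and a tuple of semantic markers.
DICTIONARY = {
    "light": [
        {"grammatical": "adjective", "semantic": ("Color",)},
        {"grammatical": "adjective", "semantic": ("Weight", "Low")},
        {"grammatical": "noun",      "semantic": ("Phenomenon", "Visual")},
    ],
    "suitcase": [
        {"grammatical": "noun", "semantic": ("Object", "Container")},
    ],
}

def project(adjective, noun):
    # A crude projection rule: pair every adjectival sense of the
    # modifier with every nominal sense of the head.
    return [
        (a["semantic"], n["semantic"])
        for a in DICTIONARY[adjective] if a["grammatical"] == "adjective"
        for n in DICTIONARY[noun] if n["grammatical"] == "noun"
    ]

# "light suitcase" gets two readings (color vs. weight), mirroring the
# ambiguity the semantic markers are meant to capture:
print(len(project("light", "suitcase")))  # 2
```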

Katz Jerrold: THE PHILOSOPHY OF LANGUAGE (Harper & Row, 1966)

          According to Katz, a theory of language is a theory of linguistic
          universals  (features  that  all languages have in common).  Katz
          argues that the basic ontological categories are  those  semantic
          markers  that  are  implied  by  other semantic markers but never
          imply other markers themselves.

Katz Jerrold: SEMANTIC THEORY (Harper & Row, 1972)

          Two components are necessary for a theory of  semantics:  a  dic-
          tionary,  which  provides  for  every lexical item a phonological
          description, a syntactic  classification  ("grammatical  marker",
          e.g.  noun  or verb) and a specification of its possible distinct
          senses ("semantic marker", e.g. light as in color  and  light  as
          the  opposite  of heavy); and projection rules, which produce all
          valid interpretations of a sentence.

          "The logical form of a sentence is identical with its meaning  as
          determined  compositionally  from the senses of its lexical items
          and the  grammatical  relations  between  its  syntactic  consti-
          tuents."

Kaufmann Arnold & Gupta Madan: INTRODUCTION TO FUZZY ARITHMETIC
          (Van Nostrand Reinhold, 1985)

          A technical (and one of the most rigorous)  introduction  to  the
          properties  of  fuzzy  numbers.  A  fuzzy  number is viewed as an
          extension of an interval of confidences, once it is related to  a
          level  of  presumption.  The addition of fuzzy numbers and random
          data yields hybrid numbers, which transform a  measurement  of
          objective data into a valuation of a subjective value without any
          loss of information. Definitions are provided for derivatives  of
          functions of fuzzy numbers, fuzzy trigonometric functions, etc.
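          The interval-of-confidence view can be illustrated with trian-
          gular fuzzy numbers (a toy sketch in plain Python, not the
          book's notation): at each presumption level alpha the fuzzy
          number reduces to an interval, and addition adds endpoints.

```python
def alpha_cut(tri, alpha):
    # Interval of confidence of a triangular fuzzy number (a, m, b)
    # at presumption level alpha in [0, 1]: it narrows from the full
    # support [a, b] down to the single modal value m.
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

def add_intervals(x, y):
    # Interval addition: endpoints add.
    return (x[0] + y[0], x[1] + y[1])

A = (1.0, 2.0, 3.0)   # "about 2"
B = (4.0, 5.0, 7.0)   # "about 5"

for alpha in (0.0, 0.5, 1.0):
    print(alpha, add_intervals(alpha_cut(A, alpha), alpha_cut(B, alpha)))
# At alpha = 1 the sum collapses to the single point 2 + 5 = 7.
```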

Kauffman Stuart: THE ORIGINS OF ORDER (Oxford University Press, 1993)

          Darwin's vision of natural selection as a creator of order is not
          sufficient to explain all the spontaneous order exhibited by both
          the living and the inanimate universe. At every level of science the
          spontaneous  emergence  of order, or self-organization of complex
          systems, is a common theme.

          Living organisms happen to be mere accidents  in  this  universal
          process.  Natural selection and self-organization complement each
          other: they create complex systems poised  at  the  edge  between
          order  and  chaos,  which are fit to evolve in a complex environ-
          ment. The target of selection is a type of adaptive system at the
          edge  between  chaos and order. This is one of the three types of
          behaviors that  are  possible  for  large  networks  of  elements
          (besides  chaotic  and  ordered).  This  applies at all levels of
          organization, from living organisms to ecosystems.

          Kauffman's mathematical model involves "fitness  landscapes".   A
          fitness  landscape  is  a distribution of fitness values over the
          space of genotypes. Adaptive evolution can be  represented  as  a
          local  hill  climbing search converging via fitter mutants toward
          some local or  global  optimum.   Adaptive  evolution  occurs  on
          rugged  (multipeaked)  fitness landscapes.  The very structure of
          these landscapes implies that radiation and stasis  are  inherent
          features  of  adaptation.  The Cambrian explosion and the Permian
          extinction may be the natural consequences of inherent properties
          of rugged landscapes.
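          The flavor of adaptive walks on a rugged landscape can be con-
          veyed by a toy NK-style model (plain Python; the parameters
          and construction are illustrative, not Kauffman's exact formu-
          lation):

```python
import random

random.seed(1)
N, K = 12, 3   # genes; each gene's contribution depends on itself and K others

neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
table = [{} for _ in range(N)]   # cached random contribution tables

def fitness(genome):
    # Each gene contributes a value depending on its own state and the
    # states of K other genes; contributions are drawn lazily and cached,
    # producing a rugged ("multipeaked") landscape as K grows.
    total = 0.0
    for i in range(N):
        key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
        if key not in table[i]:
            table[i][key] = random.random()
        total += table[i][key]
    return total / N

def hill_climb(genome):
    # Adaptive walk: move to a fitter one-mutant neighbour until none
    # exists, i.e. until a local optimum is reached.
    while True:
        current = fitness(genome)
        for i in range(N):
            mutant = genome[:i] + [1 - genome[i]] + genome[i + 1:]
            if fitness(mutant) > current:
                genome = mutant
                break
        else:
            return genome, current

start = [random.randint(0, 1) for _ in range(N)]
peak, f = hill_climb(start)
print(round(f, 3))   # fitness of the local optimum reached
```

          The walk stops at a local peak that need not be the global one,
          which is exactly what makes rugged landscapes interesting.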

          Kauffman also advances his theory  of  how  life  may  have  ori-
          ginated.   When  a  system  of simple chemicals reaches a certain
          level  of  complexity,  it  undergoes  a  phase  transition.  The
          molecules  spontaneously  combine  to  yield  larger molecules of
          increasing complexity and catalytic  capability.  Such  autocata-
          lytic  chemical  processes  may  have  formed the basis for early
          life.  Life began complex, with a metabolic web which was capable
          of capturing energy sources.

          Arrays of interacting genes do not evolve randomly  but  converge
          toward  a  relatively  small number of patterns, or "attractors".
          This ordering principle may have played a larger  role  than  did
          natural selection in guiding the evolution of life.
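          The convergence to attractors can be watched in a toy random
          Boolean network (a sketch in plain Python; sizes and wiring
          are illustrative): with a finite state space and deterministic
          updating, every trajectory must fall into a cycle.

```python
import random

random.seed(2)
N, K = 8, 2   # N genes, each regulated by K inputs

inputs = [random.sample(range(N), K) for _ in range(N)]
# Each gene is assigned a random Boolean function of its K inputs,
# stored as a truth table with 2**K entries.
rules = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    # Synchronous update: every gene reads its regulators and applies
    # its Boolean rule.
    return tuple(
        rules[i][sum(state[inputs[i][b]] << b for b in range(K))]
        for i in range(N)
    )

# Iterate from a random initial state until a state repeats: the states
# between the two visits form an attractor cycle.
state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
print("attractor length:", t - seen[state])
```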

          Principles of self-organization also drive  the  genetic  program
          which  drives  morphogenesis.  A  few behaviors at the cell level
          (e.g., differentiation) are actually unavoidable consequences  of
          the properties of self-organization.  They are not the product of
          selection, but rather of  the  properties  of  the  systems  that
          selection  acts  upon. Laws of form complement selection.  In any
          event, the genetic program is not a sequence of instructions  but
          rather  a  regulatory  network  that behaves, again, like a self-
          organizing system.

          Kauffman is searching for the fundamental force that  counteracts
          the  universal  drift towards disorder required by the second law
          of thermodynamics.

Kauffman Stuart: AT HOME IN THE UNIVERSE (Oxford Univ Press, 1995)

          The whole is greater than its parts: life is not located  in  any
          of  the  parts of a living organism, but arises from the emergent
          properties of the whole they compose.  Such  emergent  properties
          are the result of a ubiquitous trend towards self-organization.

          Self-organizing principles are inherent in our universe and  life
          is a direct consequence of self-organization. Therefore, both the
          origin of life and  its  subsequent  evolution  were  inevitable.
          Kauffman  refutes  the theory that life started simple and became
          complex in favor of a scenario in which life started complex  and
          whole  due  to  a  property of some complex chemical systems, the
          self-sustaining process of autocatalytic metabolism. Life is  but
          a  phase  transition  that occurs when the system becomes complex
          enough.  Life is vastly more probable than traditionally assumed.

          The theme of science is order. Order can  come  from  equilibrium
          systems  and from non-equilibrium systems that are sustained by a
          constant source of matter/energy or (dually) by a persistent
          dissipation of matter/energy. In the latter systems, order is
          generated by the flux of matter/energy.  All living organisms (as
          well as systems such as the biosphere) are nonequilibrium ordered
          systems.

          Kauffman advocates a "theory of emergence" that deals with  none-
          quilibrium  ordered systems. Such a theory would explain why life
          emerged at all.

          Evolution is viewed as the traversing  of  a  fitness  landscape.
          Peaks  represent  optimal  fitness.  Populations wander driven by
          mutation, selection and  drift  across  the  landscape  in  their
          search  for peaks. It turns out that the best strategy for reach-
          ing the peaks occurs at the phase transition  between  order  and
          disorder  (the  "edge of chaos"). The same model applies to other
          biological phenomena and even nonbiological  phenomena,  and  may
          therefore represent a universal law of nature.
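          The adaptive walk described above can be sketched in miniature.
          This is an illustrative toy, not Kauffman's NK model: a bit-string
          genotype mutates one bit at a time over a fully random fitness
          landscape, and selection keeps only fitter variants, climbing
          toward a local peak. All names and settings are invented.

```python
import random

# Illustrative adaptive walk on a random fitness landscape (invented settings):
# a bit-string genotype mutates one bit at a time; selection keeps only
# fitter variants, so the walk climbs toward a local peak.
random.seed(1)
N = 12
fitness_table = {}   # each genotype gets a random fitness, assigned lazily

def fitness(genotype):
    return fitness_table.setdefault(genotype, random.random())

genotype = tuple(random.randint(0, 1) for _ in range(N))
start_fitness = fitness(genotype)
for _ in range(2000):
    i = random.randrange(N)
    mutant = genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
    if fitness(mutant) > fitness(genotype):
        genotype = mutant   # selection: only fitter mutants survive
```

          On a fully random landscape such a walk soon becomes trapped on a
          local peak, which is exactly the behavior Kauffman contrasts with
          search at the "edge of chaos".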

          Kauffman's view of life can be summarized as: autocatalytic  net-
          works  arise  spontaneously; natural selection brings them to the
          edge of chaos; a genetic regulatory mechanism accounts for  onto-
          geny. Natural selection is not the only source of order: there is
          also order for free.

          The main theme of Kauffman's research is  that  the  requirements
          for  order  to  emerge  are far easier than traditionally assumed
          ("order for free").

Kavanaugh Robert: EMOTION (Lawrence Erlbaum, 1996)

          An overview of studies on emotion.

Kaye Jonathan: PHONOLOGY (Lawrence Erlbaum, 1989)

          A cognitive approach to phonology. Besides reviewing the  history
          of  the  field  and  the recent developments (syllable structure,
          tones and nonlinear phonology,  harmony,  parametrized  systems),
          Kaye  advances  his  own theory that the function of phonological
          processes is to help process language in  a  fashion  similar  to
          punctuation  by  providing information about domain boundaries. A
          theory of markedness is also sketched to explain the  fact  that
          certain features condition other features.

Kearns Michael & Vazirani Umesh: INTRODUCTION TO COMPUTATIONAL LEARNING THEORY (MIT Press, 1994)

          A very technical survey of the main issues  of  learning  theory,
          built  around  Valiant's  "probably  approximately correct" model
          (1984), which defines learning in terms of the  predictive  power
          of  the hypothesis output by the learning algorithm. Notions such
          as the Vapnik & Chervonenkis dimension, a measure of  the  sample
          complexity of learning, and various extensions to Valiant's algo-
          rithm are presented.
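          The notion of "shattering" behind the Vapnik-Chervonenkis dimension
          can be sketched for a toy hypothesis class. This is an illustrative
          example (the class of 1-D threshold classifiers h_t(x) = (x >= t)
          and the helper name are invented here, not taken from the book):

```python
# Sketch of "shattering" for 1-D threshold classifiers h_t(x) = (x >= t):
# a point set is shattered when every possible labeling of the points is
# produced by some threshold. Labelings can only change at the points
# themselves, so checking thresholds at and around the points suffices.

def shatters(points):
    candidates = [min(points) - 1] + list(points) + [max(points) + 1]
    labelings = {tuple(x >= t for x in points) for t in candidates}
    return len(labelings) == 2 ** len(points)

shatters([3.0])        # → True: a single point can get either label
shatters([1.0, 2.0])   # → False: no threshold labels 1 positive, 2 negative
```

          Since one point is shattered but no two points are, the VC
          dimension of this class is 1, which in turn bounds its sample
          complexity in the PAC model.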

Keil Frank: SEMANTIC AND CONCEPTUAL DEVELOPMENT (Harvard Univ Press, 1979)

          Following Fred Sommers, Keil develops  a  formal  theory  of  the
          innate  constraints that guide and limit the acquisition of onto-
          logical knowledge (knowledge about the basic  categories  of  the
          world).   Two  terms  are of the same type if all predicates that
          span one of them also span the other one; and two predicates  are
          of  the same type if they span exactly the same sets of terms. No
          two terms have intersecting predicates.  No two  predicates  span
          intersecting  sets  of  terms  (the "M constraint").  Ontological
          knowledge is therefore organized in a rigid hierarchical fashion.
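          The M constraint can be stated as a small check on a toy ontology.
          The predicate spans below are invented for illustration; the rule
          itself is the one in the summary: the term sets spanned by any two
          predicates must be nested or disjoint, never partially overlapping.

```python
# Toy check of the "M constraint": for any two predicates, the sets of
# terms they span must be nested or disjoint. Partial overlap is
# forbidden, which forces a rigid tree-shaped hierarchy of categories.

def respects_m_constraint(spans):
    sets = list(spans.values())
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            a, b = sets[i], sets[j]
            if a & b and not (a <= b or b <= a):
                return False   # predicates span partially overlapping terms
    return True

respects_m_constraint({
    "is interesting": {"dog", "flower", "rock"},
    "is alive":       {"dog", "flower"},
    "barks":          {"dog"},
})                                           # → True (nested spans)

respects_m_constraint({
    "is alive": {"dog", "flower"},
    "barks":    {"dog", "rock"},
})                                           # → False (partial overlap)
```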

Keil Frank: CONCEPTS, KINDS AND COGNITIVE DEVELOPMENT (Cambridge University Press, 1989)

          Concepts are always related to other concepts. No concept can  be
          understood in isolation from all other concepts. Concepts are not
          simple sets of features.  Concepts  embody  "systematic  sets  of
          causal beliefs" about the world and contain implicit explanations
          about the world. Concepts are  embedded  in  theories  about  the
          world,  and  they  can  only be understood in the context of such
          theories.

          In contrast with stage-based developmental theories, Keil  argues
          for the continuity of cognition across development. Continuity is
          enforced by native constraints on developmental directions.

          Perceptual procedures through which objects are  categorized  are
          not part of the categories: an animal is a skunk if its mother is
          a skunk regardless of what it looks like.

          Keil refines Quine's ideas.  Natural kinds are not defined  by  a
          set of features or by a prototype: they derive their meaning from
          the causal structure that underlies  them  and  explains  their
          superficial features.  They are defined by a "causal homeostatic
          system", which tends to stability over time and thereby supports
          categorization.  Nominal kinds (e.g., "odd numbers") and artifacts
          (e.g., "cars") are similarly defined by  the  theories  they  are
          embedded  in, although such theories are qualitatively different.
          There is a continuum between pure nominal kinds and pure  natural
          kinds with increasing well-definedness as we move towards natural
          kinds. What develops over time is the awareness of the network of
          causal  relations  and  mechanisms  that  are  responsible  for a
          natural kind's essential  properties.  The  theory  explaining  a
          natural kind gets refined over the years.

Kelso Scott & Mandell Arnold: DYNAMIC PATTERNS IN COMPLEX SYSTEMS (World Scientific, 1988)

          Proceedings of a 1988 conference on self-organizing systems.

          Hermann Haken discusses the dualism between  pattern  recognition
          and pattern formation.

          Kelso  shows  that  the  brain  exhibits   processes   of   self-
          organization that obey nonlinear dynamics (multistability, abrupt
          phase transitions, crises and intermittency).  Human behavior is
          therefore also subject to nonlinear dynamics.

Kelso Scott: DYNAMIC PATTERNS (MIT Press, 1995)

          Kelso believes that all levels of behavior, from neural processes
          to  mind,  are governed by laws of self-organization. He explains
          human behavior from phenomena of  multistability,  phase  transi-
          tions, etc.

Kessel Frank: SELF AND CONSCIOUSNESS (Lawrence Erlbaum, 1993)

          A collection of essays on the subject, with contributions by Den-
          nett, Neisser and Gazzaniga.

Kim Jaegwon: SUPERVENIENCE AND MIND (Cambridge University Press, 1993)

          A collection of philosophical essays, particularly on  superveni-
          ence.

          The world has a structure: the existence of  an  object  and  its
          properties depend on, or are determined by, the existence and the
          properties of other objects. With Hume, "causation is the  cement
          of  the  universe".  Supervenience  is a type of relation between
          objects that occurs between their properties: if two  individuals
          are  alike  in  all  their physical properties, then they must be
          alike also in their  nonphysical  properties,  i.e.  the  set  of
          valuational  (nonphysical)  properties  supervenes  on the set of
          nonvaluational (physical) ones.

          "Supervenience" theory assumes that objects with the same  physi-
          cal  properties also exhibit the same mental properties. A causal
          relation between two states can be explained both in mental terms
          and in physical terms.  The mental and the physical interact only
          to guarantee consistency.  The mental supervenes on the physical,
          just  like  the  macroscopic  properties  of objects supervene on
          their microscopic structures.

          In general, supervenience is a relation between two sets of pro-
          perties  over  a single domain (e.g., mental and physical proper-
          ties over the domain of organisms).  Weak  supervenience  holds
          when, within a single world, individuals indiscernible with
          respect to one class of properties are also indiscernible with
          respect to the other class.  Strong supervenience requires this
          to hold across possible worlds: any two individuals, in any
          worlds, that share the same physical properties must share the
          same mental properties.  Global supervenience holds when worlds
          that are physically indiscernible are also mentally indiscernible.

          Kim is a physicalist (the world is a physical world  governed  by
          physical  laws) and a mental realist (mentality is a real feature
          of the world and has the power to cause events of the world). His
          goal  is  to  understand how the mind can "cause" anything in the
          physical world.

Kirkham Richard: THEORIES OF TRUTH (MIT Press, 1992)

          A philosophical (and probably unique) introduction to  a  variety
          of  modern theories of truth: Charles Peirce's pragmaticism, Wil-
          liam James' instrumentalism, Brand Blanshard's  coherence  theory
          (truth as a fully coherent set of beliefs), Russell's congruence
          theory and theory of types, Austin's correlation theory, and
          Tarski's correspondence theory.  Theories of justification (how
          to identify the properties of true statements by reference to
          which the truth of a statement can be judged) are treated as
          separate from theories of truth, as are theories of speech acts.
          The systems of Davidson, Dummett, Kripke and Prior are reviewed
          and criticized.

Kitchener Robert: PIAGET'S THEORY OF KNOWLEDGE (Yale University Press, 1986)

          One of the best introductions to genetic epistemology.

Kittay Eva: METAPHOR (Clarendon Press, 1987)

          Drawing from Black's interactionist theory,  and  its  vision  of
          metaphor's  dual content (literal and metaphorical, "vehicle" and
          "topic"), Kittay develops a theory of metaphor.

          Kittay's theory of metaphor is  based  on  her  own  "relational"
          theory  of  meaning,  which  is  inspired by Saussure's theory of
          signs.  The meaning of a word is determined by other  words  that
          are  related  to  it by the lexicon. Meaning is not an item but a
          field. A semantic field is a group of words that are semantically
          related  to  each other.  Language is context-dependent, and con-
          textual features are constitutive of meaning.

          Metaphor is a process that transfers semantic structures  between
          two  semantic  fields: structures of the first field create or
          reorganize a structure in the second field.

          The meaning of a word consists of all the literal senses of  that
          word.  A literal sense consists of a conceptual content, a set of
          conditions, or semantic combination rules  (permissible  semantic
          combinations   of  the  word,  analogous  to  Fodor's  selection-
          restriction rules) and a semantic field  indicator  (relation  of
          the  conceptual  content  to other concepts in a content domain).
          An interpretation of an utterance is any of the  senses  of  that
          utterance.   Projection  rules   combine  lower-level  units into
          higher-level units according to their semantic combination rules.
          A  first-order  interpretation  of an utterance is derived from a
          valid combination of the  first-order  meanings  of  its  consti-
          tuents.  Second-order interpretation is a function of first-order
          interpretation and expresses the intuitive fact that what has  to
          be  communicated  is  not  what  is  indicated by the utterance's
          literal meaning.

          Kittay outlines the formal conditions for recognizing  an  utter-
          ance as a metaphor. An explicit cue to the metaphorical nature of
          an  utterance  is  when  the  first-order  and  the  second-order
          interpretation   point   to   two   distinct   semantic   fields.
          Equivalently, an incongruity principle (incongruity between a
          focus and a frame) can be used to discriminate a metaphorical
          utterance.

          Metaphor can be interpreted as second-order meaning.

          The cognitive force of metaphor comes from a  reconceptualization
          of information about the world that has already been acquired but
          possibly not conceptualized.  Metaphor turns out to be one of the
          primary ways in which humans organize their experience.

          Metaphorical meaning is not reducible to literal meaning.

Klahr David: PRODUCTION SYSTEM MODELS OF LEARNING AND DEVELOPMENT (MIT Press, 1987)

          A set of articles that provide an overview of production  systems
          from  the  perspective of cognitive psychology and in the context
          of working computer programs.  Includes Pat Langley's "A  general
          theory  of  discrimination learning" (the PRISM project) and Paul
          Rosenbloom's "Learning by chunking" (the XAPS project).

Kleene Stephen: INTRODUCTION TO METAMATHEMATICS (North-Holland, 1964)

          Kleene's three-valued logic was conceived to accommodate undecided
          mathematical statements. The third truth value signals a state of
          partial ignorance. The undecided value is assigned to  any  well-
          formed formula that has at least one undecided component.
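          The connectives as the summary describes them (any undecided
          component makes the whole formula undecided) can be sketched
          directly. Using Python's None for the third truth value is an
          illustrative encoding, not Kleene's notation:

```python
# Three-valued connectives as described above: a formula with at least
# one undecided component is itself undecided. None stands in for the
# "undecided" truth value (an illustrative encoding).

def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is None or b is None:
        return None          # any undecided component leaves it undecided
    return a and b

def k_or(a, b):
    if a is None or b is None:
        return None
    return a or b

k_and(True, None)   # → None: undecided conjunct, undecided conjunction
k_not(None)         # → None: negating the undecided stays undecided
```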

Klopf Harry: THE HEDONISTIC NEURON (Hemisphere, 1982)

          Organisms actively seek stimulation. If homeostasis is the  seek-
          ing of a steady-state condition, "heterostasis" is the seeking of
          a maximum stimulation. All parts of the brain  are  independently
          seeking  positive  stimulation (or "pleasure") and avoiding nega-
          tive stimulation (or "pain").  All parts are goal-driven in that,
          when  responding  to  a  given  stimulus leads to "pleasure", the
          brain part will respond more frequently to that stimulus  in  the
          future; and vice versa.

          In his neural model cognition and emotion coexist and complement
          each  other.   Emotion provides the sense of what organisms need.
          Cognition provides the means for achieving those needs.
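          The goal-seeking rule described above can be sketched as a toy
          update. The stimulus names, step size and clamping are invented
          for illustration, not taken from Klopf's model:

```python
# Illustrative sketch of the goal-seeking rule in the summary: a brain
# part responds more frequently to a stimulus whose response brought
# "pleasure", and less frequently after "pain". Names and step size are
# invented; probabilities are clamped to [0, 1].

response_prob = {"light": 0.5, "tone": 0.5}

def experience(stimulus, outcome, step=0.1):
    # outcome: +1 for pleasure (positive stimulation), -1 for pain
    p = response_prob[stimulus] + step * outcome
    response_prob[stimulus] = min(1.0, max(0.0, p))

for _ in range(5):
    experience("light", +1)   # rewarded responses become more frequent
    experience("tone", -1)    # punished responses become less frequent
```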

Kodratoff Yves: INTRODUCTION TO MACHINE LEARNING (Morgan Kaufmann, 1988)

          A technical, Prolog-oriented textbook on  machine  learning  that
          starts  with  the  theoretical foundations of production systems,
          deals with truth maintenance and then surveys a number of  learn-
          ing methods: Mitchell's version spaces, explanation-based (deduc-
          tive) learning, analogical learning, clustering.

Koestler Arthur: THE GHOST IN THE MACHINE (Henry Regnery, 1967)

          Koestler  brings  together  a  wealth  of  biological,  physical,
          anthropological  and  philosophical arguments to construct a uni-
          fied theory of open hierarchical systems.

          Language has to do with a  hierarchic  process  of  spelling  out
          implicit ideas in explicit terms by means of rules and feedbacks.
          Organisms and societies also exhibit the same hierarchical
          structure.  Each intermediary entity ("holon") functions as a
          self-contained whole relative to its subordinates and as a
          dependent part to its superordinates. Each holon tends to persist
          and assert its pattern of activity.

          Wherever there is life,  it  must  be  hierarchically  organized.
          Life  exhibits  an integrative property (that manifests itself as
          symbiosis) that  enables  the  gradual  construction  of  complex
          hierarchies  out  of  simple  holons.   In  nature  there  are no
          separated, indivisible, self-contained units.  An "individual" is
          an  oxymoron.   An  organism  is  a  hierarchy of self-regulating
          holons (a  "holarchy")  that  work  in  coordination  with  their
          environment.  Holons  at  the higher levels of the hierarchy have
          progressively more degrees of freedom and  holons  at  the  lower
          levels of the hierarchy have progressively fewer degrees of
          freedom.  Moving up the hierarchy, we encounter more  and  more  com-
          plex, flexible and creative patterns of activity. Moving down the
          hierarchy behavior becomes more and more mechanized.

          A hierarchical process (which gradually reduces  the  percept  to
          its  fundamental  elements)  is  also  involved in perception and
          memorization.   A  dual  hierarchical  process  (which  gradually
          reconstructs the percept) is involved in recalling.

          Hierarchical processes of the same nature can  be  found  in  the
          development  of  the  embryo,  in the evolution of species and in
          consciousness itself (which should be analyzed not in the context
          of the mind/body dichotomy but in the context of a multi-levelled
          hierarchy and of degrees of consciousness).

          They all share common themes: a tendency towards  integration  (a
          force  that  is inherent in the concept of hierarchic order, even
          if it seems to challenge the second law of thermodynamics  as  it
          increases order), an openness at the top of the hierarchy (towards
          higher and higher levels of complexity) and  the  possibility  of
          infinite regress.

Kohonen Teuvo: ASSOCIATIVE MEMORY (Springer Verlag, 1977)

          The retrieval of information in memory occurs  via  associations.
          An associative memory is a system from which a set of information
          can be recalled by using any of its members. An adaptive associa-
          tive  network  is  viewed  as  a  reasonable model for biological
          memory.  Kohonen also argues for the biological  plausibility  of
          holographic  associative  memories.   For  each  model a thorough
          mathematical treatment is provided.

Kohonen Teuvo: SELF-ORGANIZATION AND ASSOCIATIVE MEMORY (Springer Verlag, 1984)

          A formal study of memory from a system theory's viewpoint.

          Kohonen built a psychologically-plausible model of how the  brain
          topographically represents the world, with nearby units respond-
          ing similarly.  His model is therefore capable of self-organizing
          into regions.

          Kohonen's connectionist architecture, inspired by Malsburg's stu-
          dies  on  self-organization  of  cells in the cerebral cortex, is
          able to perform unsupervised training, i.e. it learns  categories
          by itself.

          Instead of  using  Hebb's  learning,  Kohonen  assumes  that  the
          overall  synaptic  resources of a cell are approximately constant
          and what changes is the relative efficacies of the  synapses.   A
          neural network has learned a new concept when the weights of con-
          nections converge towards a  stable  configuration.   This  model
          exhibits  mathematical properties that set it apart: the layering
          of neurons plays a specific  role  (the  wider  the  intermediate
          layer,  the  faster  but  the  more  approximate  the  process of
          categorization).

          A variant of Hebb's law yields competitive behavior.

          Kohonen also reviews classical learning systems (Adaline, Percep-
          tron) and holographic memories.

Kohonen Teuvo: SELF-ORGANIZING MAPS (Springer Verlag, 1995)

          The Adaptive-Subspace Self Organizing Map (ASSOM) is an algorithm
          for neural networks that combines Learning Subspace Method (LSM),
          the first supervised  competitive-learning  algorithm  ever,  and
          Self Organizing Map (SOM), another algorithm invented by Kohonen,
          that maps patterns close to each other in the  input  space  onto
          contiguous  locations  in the output space (topology preserving).
          The new algorithm is capable of detecting invariant features.
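          The topology-preserving SOM update can be sketched in miniature.
          This is an illustrative toy (a 1-D map over scalar inputs, with
          invented learning rate, neighborhood radius and epoch count), not
          Kohonen's full algorithm:

```python
import random

# Miniature sketch of the SOM update rule (all parameters illustrative):
# the best-matching unit (BMU) and its map neighbors move toward each
# input, so that nearby map units come to respond to nearby inputs
# (topology preservation).
random.seed(0)
units = [random.random() for _ in range(10)]   # a 1-D map of 10 weights

def train(data, epochs=50, lr=0.3, radius=2):
    for _ in range(epochs):
        for x in data:
            bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
            for i in range(len(units)):
                d = abs(i - bmu)
                if d <= radius:
                    # units farther from the BMU are pulled less strongly
                    units[i] += lr * (1 - d / (radius + 1)) * (x - units[i])

train([random.random() for _ in range(100)])
```

          In the full algorithm the learning rate and neighborhood radius
          shrink over time, which is what lets the map first unfold and
          then fine-tune.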

Kolodner Janet & Riesbeck Christopher: EXPERIENCE, MEMORY, AND REASONING (Lawrence Erlbaum, 1986)

          An introduction to computational  theories  of  memory  that  are
          derived  from  the  conceptual dependency theory. Each article is
          written  by  an  expert  in  the  field.   Schank  writes   about
          explanation-based  learning.  Lebowitz  describes  his RESEARCHER
          project. Lytinen  discusses  his  word-based  parsing  technique.
          Riesbeck introduces his direct memory access parsing system.

Kolodner Janet: CASE-BASED REASONING (Morgan Kaufmann, 1993)

          A monumental summary of the discipline of case-based systems that
          also attempts to lay logical foundations for the field.  Emphasis
          is placed on the views of learning as a by-product of  reasoning,
          and  reasoning  as remembering; on the essential task of adapting
          old solutions to solve new problems (old  cases  to  explain  new
          situations).   Schank's  cognitive  model of dynamic memory (MOPs
          and the likes) is introduced at length. Some  of  the  historical
          systems  (CHEF,  CYRUS,   etc)  are discussed.  The book provides
          detailed techniques for  storing, indexing, retrieving,  matching
          and using cases.
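          The retrieve-and-adapt cycle at the heart of the discipline can be
          sketched as a toy. The case base, feature encoding, matching
          function and adaptation step below are all invented for
          illustration, not Kolodner's techniques:

```python
# Toy sketch of case-based reasoning: retrieve the stored case closest to
# the new problem, then adapt its old solution. Cases, the matching
# function and the adaptation step are invented illustrations.

cases = [
    ({"cuisine": "italian", "guests": 4}, "bake a 4-portion lasagna"),
    ({"cuisine": "mexican", "guests": 2}, "make 2 servings of tacos"),
]

def similarity(a, b):
    # count matching feature values (a crude matching function)
    return sum(1 for k in a if b.get(k) == a[k])

def solve(problem):
    # retrieval: the best-matching old case
    old_problem, old_solution = max(cases, key=lambda c: similarity(problem, c[0]))
    # adaptation: rewrite the old solution for the new number of guests
    return old_solution.replace(str(old_problem["guests"]), str(problem["guests"]))

solve({"cuisine": "italian", "guests": 6})   # → "bake a 6-portion lasagna"
```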

Kolodner Janet: RETRIEVAL AND ORGANIZATIONAL STRATEGIES IN CONCEPTUAL MEMORY (Lawrence Erlbaum, 1984)

          A description of the CYRUS system, which was  based  on  Schank's
          conceptual dependency theory.

Kosko Bart: NEURAL NETWORKS AND FUZZY SYSTEMS (Prentice Hall, 1992)

          A textbook on adaptive fuzzy systems that presents a unified view
          of  neural networks and fuzzy systems. Kosko presents neural net-
          works as stochastic gradient systems and fuzzy sets as points  in
          unit hypercubes.

          All the main learning algorithms for neural networks are reviewed
          and  formalized.  It is shown that neural computation is similar
          to statistics in that its goal is  to  approximate  the  function
          that relates a set of inputs to a set of outputs.

          In Kosko's formalization, a fuzzy set is a point in  the  unitary
          hypercube  equivalent  to  Zadeh's  universe  of discourse, and a
          non-fuzzy set is one of the vertices of such a  cube.  The  para-
          doxes of classical logic occur in the middle points of the hyper-
          cube.

          A fuzzy set's entropy (which could be thought of  as  its  "ambi-
          guity")  is  defined  by  the  number of violations of the law of
          non-contradiction compared with the number of violations  of  the
          excluded  middle. Entropy is zero when both laws hold and maximal
          at the center of the hypercube.   Alternatively,  a  fuzzy  set's
          entropy  can  be defined as a measure of how a set is a subset of
          itself.
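          The entropy ratio just described can be computed directly for a
          finite fuzzy set. A sketch, representing a fuzzy set as a list of
          membership degrees and using min/max sigma-counts for intersection
          and union:

```python
# Sketch of the fuzzy-entropy ratio: the overlap of a set with its
# complement (violations of non-contradiction) divided by their union
# (violations of the excluded middle), via min/max sigma-counts.

def fuzzy_entropy(memberships):
    overlap = sum(min(m, 1 - m) for m in memberships)   # A AND not-A
    union = sum(max(m, 1 - m) for m in memberships)     # A OR not-A
    return overlap / union

fuzzy_entropy([0.0, 1.0, 1.0])   # → 0.0: a crisp set has no ambiguity
fuzzy_entropy([0.5, 0.5])        # → 1.0: the center of the hypercube
```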

          A fuzzy system is a relationship between hypercubes, a mapping
          of fuzzy sets into families of fuzzy sets.

          Fuzzy associative memories map balls of fuzzy sets into balls  of
          fuzzy sets.

          Fuzzy logic, which can account for all results of the  theory  of
          probability,  better  represents the real world, without any need
          to assume the existence of randomness. For example, relative fre-
          quency is a measure of how a set is a subset of another set.

          Many of physics' laws are not reversible  because  if  they  were
          causality would  be violated (after a transition of state proba-
          bility turns into certainty and cannot be rebuilt  working  back-
          wards). If they were expressed as "ambiguity", rather than proba-
          bility, they would be reversible, as the ambiguity  of  an  event
          remains the same before and after the event occurred.

          The space of neural states (the set of all possible outputs of  a
          neural  net)  is identical to the fuzzy power set (the set of all
          fuzzy subsets of the set of neurons). A set of "n" neurons (whose
          signals  vary  continuously between zero and one) defines a family
          of n-dimensional fuzzy sets. That space is the unitary hypercube,
          the  set of all vectors of length "n" with coordinates in the
          unitary continuous interval (zero to one).

          Hopfield's nets tend to push the state of the system  towards one
          of  the  2  to  the  "n" vertices of the hypercube. This way they
          dynamically disambiguate fuzzy descriptions by  minimizing  their
          fuzzy entropy.

Kosko Bart: FUZZY THINKING (Hyperion, 1993)

          Fuzziness is pervasive in nature  ("everything  is  a  matter  of
          degree"), while science does not admit fuzziness.

          Even probability theory still assumes that properties are  crisp.
          And  probability  (according to Kosko's "subsethood" theorem) can
          be interpreted as a measure of how much the whole (the  space  of
          all  events)  is  contained in the part (the event).  Kosko shows
          how logical paradoxes such as Russell's  can  be  interpreted  as
          "half truths" in the context of fuzzy logic.  Heisenberg's uncer-
          tainty principle (the more a quantity is  accurately  determined,
          the less accurately a conjugate quantity can be determined, which
          holds for position and momentum, time and energy) can be  reduced
          to the Cauchy-Schwarz inequality (which is related to Pythagoras'
          theorem, which is in turn related to the subsethood theorem).
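          The subsethood measure can be sketched for finite fuzzy sets. A
          minimal illustration, representing each set as a list of
          membership degrees and measuring containment as the fraction of
          A's sigma-count that fits inside B:

```python
# Sketch of the subsethood measure: the degree to which fuzzy set A is
# contained in fuzzy set B, as the fraction of A's "size" (sigma-count)
# that fits inside B.

def subsethood(a, b):
    contained = sum(min(x, y) for x, y in zip(a, b))
    return contained / sum(a)

subsethood([0.2, 0.8], [1.0, 1.0])   # → 1.0: A lies entirely inside B
subsethood([1.0, 0.0], [0.5, 1.0])   # → 0.5: A is half contained in B
```

          Read this way, "probability" becomes the degree to which the whole
          space is a subset of the event, which is the inversion Kosko's
          subsethood theorem highlights.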

          Applications such as fuzzy associative memories,  adaptive  fuzzy
          systems and fuzzy cognitive maps are discussed at length.

          Kosko even discusses why the universe exists  (because  otherwise
          the fuzzy entropy theorem would exhibit a singularity) and specu-
          lates that the universe is information and maybe God  himself  is
          information.
          Too much autobiography and too many references to  eastern  reli-
          gion  try  to  make  the book more accessible but probably merely
          detract from the subject.

Kosslyn Stephen: IMAGE AND MIND (Harvard University Press, 1980)

          "Mental imagery" is seeing something in the absence of  any  sen-
          sory signal, such as the perception of a memory. Kosslyn analyzes
          what is seen when there is no such image in the brain, and why we
          need mental imagery at all.

          Based on numerous psychological  experiments,  Kosslyn  maintains
          that  mental  imagery is pictorial in character, i.e. that mental
          imagery involves scanning an internal picture-like entity. Mental
          images can be inspected and classified using pretty much the same
          processes used to inspect and classify visual perceptions.

          To explain the structure of mental imagery Kosslyn puts  forth  a
          representational  theory  of  the  mind of a "depictive" type, as
          opposed to Fodor's propositional theory and related  to  Johnson-
          Laird's  models.   Kosslyn  thinks that the mind can build visual
          representations, which are coded in parts of the brain, and which
          reflect   what  they  represent.   Such  representations  can  be
          inspected  by  the  mind  and  transformed  (rotated,   enlarged,
          reduced).

          There exist two levels of visual  representation:  a  "geometric"
          level,  which  allows  one  to mentally manipulate images, and an
          "algebraic" one, which allows one to "speak" about those images.

          Kosslyn thinks that mental imagery achieves two  goals:  retrieve
          properties  of  objects and predict what would happen if the body
          or the objects should move in a given way.  Reasoning  on  shapes
          and  dimensions is far faster when we employ mental images rather
          than concepts.

Kosslyn Stephen: GHOSTS IN THE MIND'S MACHINE (W. Norton, 1983)

          An introduction to Kosslyn's theory of  mental  imagery  oriented
          towards a computer implementation.

Kosslyn Stephen & Koenig Olivier: WET MIND (Free Press, 1992)

          An overview of cognitive neuroscience, i.e. of psychological stu-
          dies  based  on  the  principle  that "the mind is what the brain
          does", i.e. theories that describe  mental  events  by  means  of
          brain activities.

          Chapters  on  neural  computation,  vision,  language,  movement,
          memory.

Kosslyn Stephen: IMAGE AND BRAIN (MIT Press, 1994)

          This book revises and expands the  contents  and  conclusions  of
          "Image and Mind".

          Kosslyn's proposal for the resolution of the imagery debate is an
          interdisciplinary theory of high-level vision in which perception
          and representation are  inextricably  linked.  Visual  perception
          (visual  object  identification)  and visual mental imagery share
          common mechanisms.

          Visual processing is decomposed in a number of subsystems, each a
          neural  network:  visual  buffer (located in the occipital lobe),
          attention window (selects a pattern of  activity  in  the  visual
          buffer),  two  cortical visual systems, the ventral system (infe-
          rior temporal lobe, encodes object  properties)  and  the  dorsal
          system  (posterior  parietal  lobe,  encodes spatial properties),
          associative memory (which integrates the two classes  of  proper-
          ties),  information  lookup  subsystem  (dorsolateral  prefrontal
          cortex, accesses information about the most  relevant  object  in
          associative  memory),  attention  shifting  subsystems  (frontal,
          parietal and subcortical areas, directs the attention  window  to
          the  appropriate  location).   The  subsystems  may  overlap  and
          exchange feedback.  More detailed analyses of the visual recogni-
          tion process identify more specialized subsystems. The model is
          therefore gradually extended to take into account the full
          taxonomy of visual abilities.

          Mental imagery shares most of this processing  architecture  with
          high-level visual perception.

          During the course of the development of the theory, a  wealth  of
          psychological and neurophysiological findings is provided.

Koza John: GENETIC PROGRAMMING (MIT Press, 1992)

          One of the seminal books on "genetic"  programming  by  means  of
          natural selection.  The solution to a problem is found by geneti-
          cally breeding populations of computer programs.  A  computer  is
          therefore enabled to solve problems without being explicitly pro-
          grammed to solve them.  The process of finding a  solution  to  a
          problem is turned into the process of searching the space of com-
          puter programs for a highly fit individual  computer  program  to
          solve such a problem.
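
          The search-by-breeding loop can be sketched in a few lines of
          Python. This is a deliberately simplified, mutation-only variant
          (not Koza's implementation, which relies heavily on subtree
          crossover): programs are expression trees over {+, -, *} and the
          terminals x and 1, bred to approximate the target x*x + x + 1.

```python
import copy
import random

random.seed(0)

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}
TERMINALS = ['x', 1]

def random_tree(depth=3):
    # Grow a random expression tree of bounded depth.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Total error against the target function over sample points (0 is perfect).
    return sum(abs(evaluate(tree, x) - (x * x + x + 1)) for x in range(-5, 6))

def mutate(tree):
    # Replace a randomly chosen subtree with a freshly grown one.
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return [op, mutate(left), copy.deepcopy(right)]
    return [op, copy.deepcopy(left), mutate(right)]

def evolve(pop_size=200, generations=50):
    population = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        if fitness(population[0]) == 0:
            break                      # a perfectly fit program was found
        survivors = population[:pop_size // 4]
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return min(population, key=fitness)

best = evolve()
print(fitness(best))
```

          The fittest tree is itself a program: the answer to the problem
          is discovered, not hand-coded.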

Koza John: GENETIC PROGRAMMING II (MIT Press, 1994)

          Focuses on automatic function definition for the decomposition of
          complex problems.

Kripke Saul: NAMING AND NECESSITY (Harvard University Press, 1980)

          Kripke developed  a  model-theoretic  interpretation  of  various
          axiom sets for modal logic. Modality can be represented by resort-
          ing to the notion of possible worlds. In Kripke's  semantics  a
          property  is necessary if it is true in all worlds, a property is
          possible if there is at least a world in which it is true.
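
          These two clauses can be rendered directly in code. The following
          is a toy sketch (my own encoding, and a simplification: the
          accessibility relation between worlds is omitted, as it is in the
          summary above):

```python
# A model: a set of possible worlds plus a valuation saying in which
# worlds each atomic proposition is true.
worlds = {'w1', 'w2', 'w3'}
valuation = {
    'p': {'w1', 'w2', 'w3'},   # p holds in every world
    'q': {'w2'},               # q holds only in w2
}

def necessary(prop):
    # "Necessarily prop": prop is true in all worlds.
    return valuation[prop] == worlds

def possible(prop):
    # "Possibly prop": prop is true in at least one world.
    return bool(valuation[prop])

print(necessary('p'), possible('q'), necessary('q'))  # True True False
```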

          The extensional analysis of language cannot account for sentences
          that  are  very  common such as those that employ opaque contexts
          (to know, to believe, to  think)  and  those  that  employ  modal
          operators (all words that can be reduced to "it is possible that"
          and "it is necessary that").  These sentences are not extensional,
          meaning  that they do not satisfy Leibniz's law.  These sentences
          can be interpreted  in  Kripke's  model-theoretic  semantics.   A
          statement  that  is false in this universe can be true in another
          universe.  The truth values of a sentence are always relative  to
          a particular world.

          Tarski's theory is purely extensional (for each model  the  truth
          of  a predicate is determined by the list of objects for which it
          is true), whereas Kripke's modal logic is intensional. An extensional
          definition would actually be impossible, as the set of objects is
          infinite.

          Proper names and definite descriptions  are  designators.  Proper
          names  are  rigid  designators, i.e. in every possible world they
          designate the same object. Kripke (unlike Frege)  carefully  dis-
          tinguishes  the meaning of a designator and the way its reference
          is determined (which are both "sense" in  Frege).  Then  he  puts
          forth  his causal theory of naming: initially, the reference of a
          name is fixed by some operation (e.g., by description), then  the
          name  is  passed from link to link. A name is not identified by a
          set of unique properties satisfied by the referent:  the  speaker
          may have erroneous beliefs about those properties or they may not
          be unique. The name is passed to the speaker  by  tradition  from
          link  to link. Terms for natural kinds behave in a similar way to
          proper names.

          Kripke rejects the view that either proper or  common  nouns  are
          associated  with properties that serve to select their referents.
          Names are just "rigid designators". Both proper and common  names
          have a referent, but not a Fregean sense.  The property cannot
          determine the reference as the object might not  have  that  pro-
          perty  in  all  worlds.  For example, gold might not be yellow in
          all worlds.

          Kripke's causal theory of names assumes that names are linked  to
          their referents through a causal chain. A term applies directly
          to an object via a connection that was set in place by  the  ini-
          tial naming of the object.

          A nonrigid designator is a term that changes its referent across
          possible worlds. Mental states cannot be identical to physical
          states: since the terms for both are rigid designators, the iden-
          tity would have to be necessary, yet it appears contingent.

Kuipers Benjamin: QUALITATIVE REASONING (MIT Press, 1994)

          A unified theory of qualitative reasoning.

          Qualitative  reasoning  is  viewed  as  a  set  of  methods   for
          representing and reasoning with incomplete knowledge about physi-
          cal systems. A qualitative description of  a  system  allows  for
          common sense reasoning that overcomes the limitations of rigorous
          logic. Qualitative descriptions capture the essential aspects  of
          structure, function and behavior, at the expense of others. Since
          most phenomena that matter to  ordinary  people  depend  only  on
          those  essential aspects, qualitative descriptions are enough for
          moving about in the world.

          Kuipers presents his QSIM algorithm and representation for quali-
          tative  simulation.  His  model  deals  with partial knowledge of
          quantities (through landmark values  and  fuzzy  values)  and  of
          change  (by using discrete state graphs and qualitative differen-
          tial equations). A qualitative differential equation is a quadru-
          ple  of  variables, quantity spaces (one for each variable), con-
          straints (that apply to the variables) and transitions (rules  to
          define the domain boundaries).
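
          The quadruple can be made concrete with a schematic sketch (an
          assumed encoding, not Kuipers' own QSIM code) for a draining tub:
          the amount of water, its outflow, and the net flow that is the
          rate of change of the amount.

```python
# A qualitative differential equation as a quadruple: variables,
# quantity spaces, constraints, transitions.
qde = {
    'variables': ('amount', 'outflow', 'netflow'),
    'quantity_spaces': {                 # ordered landmarks for each variable
        'amount': ('0', 'FULL'),
        'outflow': ('0', 'MAX'),
        'netflow': ('-inf', '0'),
    },
    'constraints': [
        ('M+', 'amount', 'outflow'),     # monotonic: outflow grows with amount
        ('DERIV', 'amount', 'netflow'),  # netflow = d(amount)/dt
    ],
    'transitions': [],                   # boundary rules omitted in this sketch
}

# A qualitative state: (qualitative magnitude sign, direction) per variable.
state = {'amount': ('+', 'dec'), 'outflow': ('+', 'dec'), 'netflow': ('-', 'std')}

DIR_SIGN = {'inc': '+', 'std': '0', 'dec': '-'}

def satisfies_deriv(state, var, rate):
    # DERIV constraint: the direction of var must equal the sign of rate.
    return DIR_SIGN[state[var][1]] == state[rate][0]

print(satisfies_deriv(state, 'amount', 'netflow'))  # True
```

          Simulation then amounts to enumerating the successor states that
          satisfy all constraints, without ever solving the underlying
          differential equation numerically.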

          The framework prescribes a number of constraint propagation tech-
          niques,  including for higher-order derivatives and global dynam-
          ics.  First of all, it  is  necessary  to  build  a  model  which
          includes  all  the  elements  needed  for  simulating  the system
          (closed-world assumption). Then the model can be  simulated.  The
          ontological problem is solved by drawing from various  techniques
          (Forbus' qualitative process theory, Sussman's  device  modeling
          approach, de Kleer's "no function in structure").

Kulas Jack, Fetzer James & Rankin Terry: PHILOSOPHY, LANGUAGE AND ARTIFICIAL INTELLIGENCE (Kluwer, 1988)

          A collection  of  historical  articles  on  semantics,  including
          Davidson's  "Truth  and meaning" (1967), Grice's "Utterer's mean-
          ing" (1968), Hintikka's "Semantics for  propositional  attitudes"
          (1969),  Montague's  "The  proper  treatment of quantification in
          ordinary English"  (1973),  Gazdar's  "Phrase-structure  grammar"
          (1982), Stalnaker's "Possible worlds and situations".

          Kulas provides a historical introduction to the  field,  starting
          with Aristotle.

Kuppers Bernd-Olaf: INFORMATION AND THE ORIGIN OF LIFE (MIT Press, 1990)

          Kuppers thinks that all living phenomena, such as metabolism  and
          inheritance,  can  be  reduced  to  the interaction of biological
          macromolecules, i.e. to the laws of Physics and  Chemistry,  and,
          in  particular,  the  living  cell  originated from the iterative
          application of the same fundamental rules that govern all
          physical and chemical processes.

          The issue of the origin of life is reduced to the  issue  of  the
          origin  of  biological  information. Information is viewed in its
          different aspects: syntactic (as in information theory), semantic
          (function and meaning of information for an organism's survival),
          and pragmatic (following von Weizsäcker, "information  is  only
          that  which  produces information").  Following Manfred Eigen and
          in opposition to Jacques Monod,  Kuppers  favors  the  hypothesis
          that  the origin of life from inorganic matter is due to emergent
          processes of self-organization and evolution  of  macromolecules.
          Natural selection applies to the molecular level.

          Kuppers presents rigorous  mathematical  proofs  of  his  theory,
          often resorting to algorithmic information theory (e.g., Gregory
          Chaitin's quantitative determination of information in a struc-
          ture).

          Since evolution depends on the semantic  aspect  of  information,
          there  is no contradiction with the second law of thermodynamics,
          which is about the structural aspect of matter (i.e., the syntac-
          tic aspect of information).

          The origin of life is the origin of biological  information.  The
          origin  of  syntactic  information  relates to the prebiotic syn-
          thesis of  biological  macromolecules.  The  origin  of  semantic
          information relates to the self-organization of macromolecules.

          In the balance between law and chance, only the general direction
          of  evolution  is determined by natural law: the detailed path is
          mainly determined by  chance.   Natural  law  entails  biological
          structures, but does not specify which biological structures.

Jackendoff Ray: SEMANTIC STRUCTURES (MIT Press, 1990)

          Jackendoff's conceptual semantics is applied to lexical and  syn-
          tactic  expressions  in  English. Jackendoff proposes a formalism
          for describing lexical semantic  facts  and  expressing  semantic
          generalizations.   He  employs  multi-dimensional representations
          analogous to those found in phonology.

Jackendoff Ray: LANGUAGES OF THE MIND (MIT Press, 1992)

          This collection of papers summarizes Jackendoff's  formal  theory
          on  the nature of language and a modular approach to "mental ana-
          tomy", and applies the same concepts to learning and common sense
          reasoning.

          There is a tight relationship between vision and language. A lex-
          ical item contains the stereotypical image of the object or  con-
          cept. Knowing the meaning of a word implies knowing what the
          object or concept looks like.

Jackendoff Ray: PATTERNS IN THE MIND (Harvester Wheatsheaf, 1993)

          Following Chomsky, Jackendoff thinks that the  human  brain  con-
          tains  innate linguistic knowledge and that the same argument can
          be extended to all facets of human experience: all experience  is
          constructed by unconscious genetically determined principles that
          operate in the brain.

          The experience of spoken language is constructed by the  hearer's
          mental  grammar:  speech per se is only a meaningless sound wave,
          only a hearer equipped with the proper device can make  sense  of
          it.

          These same conclusions can be applied to thought itself, i.e.  to
          the  task of building concepts. Concepts are constructed by using
          some  innate,  genetically  determined,  machinery,  a  sort   of
          "universal grammar of concepts".  Language is but one aspect of a
          broader characteristic of the human brain.

Jackson Frank: CONDITIONALS (Basil Blackwell, 1987)

          A collection of articles by David Lewis, Robert Stalnaker,  Grice
          and  Frank  Jackson  on  the subject of conditionals. A theory of
          conditionals must offer an account of the truth conditions  of  a
          conditional  (under  which  conditions  "if  A then B" is true or
          false, or acceptable to some degree). The traditional view that a
          conditional is true if and only if the antecedent is false or the
          consequent is true is too simplistic  and  allows  conditionals
          such  as "if Jones lives in London, then he lives in Scotland" to
          be true (if he does not live in London or lives in Scotland) when
          it is obviously senseless.

          Stalnaker and Lewis solve some of the problems  of  (subjunctive)
          conditionals ("if it were that A then it would be  that  B")  by
          using possible-world semantics. Lewis also reviews Ernest  Adams'
          thesis that the assertability of (indicative) conditionals ("if A
          then B") is measured by the conditional probability of the conse-
          quent given the antecedent.

Jackson Frank: PERCEPTION (Cambridge University Press, 1977)

          The immediate objects of perception are mental.  To  perceive  an
          object  is  to be in a perceptual state as a causal result of the
          action of that object.

          On epiphenomenal qualia Jackson proposed  a  famous  thought
          experiment: a neurophysiologist who has never seen colors but
          knows everything about how the brain perceives them still cannot
          know what it feels like to see a color.

          Color is not a property of material things.  Sense-data  are  not
          material, they are mental.

Jauregui Jose: THE EMOTIONAL COMPUTER (Blackwell, 1995)

          This is the English translation of 1990's "El Ordenador Cerebral".

          Jauregui, like Wilson, views sociology as  a  branch  of  biology.
          The  same emotional system controls social, sexual and individual
          behavior.  Such  emotional  system  originates  from  the  neural
          organization of the brain: emotions  are rational and predictable
          events. Jauregui believes that the brain is a computer, but in-
          troduces the novelty of emotions as the direct product of that
          computer's processing activity. It is emotions, not reason, that
          direct and inform the daily actions of individuals. Jauregui
          deals with humans that feel pleasure and pain  rather  than  with
          abstract problem solvers.

          Jauregui begins by separating the brain and the self: the brain is
          aware  of  what  is going on in the digestive system of the body,
          but will inform the self  only  when  some  correction/action  is
          necessary.  Normally, an individual is not aware of her digestive
          processes. Her brain is always informed, though.  The  communica-
          tion  channel between the brain and the self is made of emotions.
          The brain can tune the importance of the message  by  controlling
          the  intensity of the emotions. Far from being an irrational pro-
          cess, the emotional life is mathematically calculated to  achieve
          exactly  the  level  of response needed.  Feelings are subjective
          and inaccessible, but they also are objective and precise.

          The self has no idea of the detailed process that was going on in
          the  body  and  of the reason why that process must be corrected.
          The brain's emotional system, on the other hand, is  a  sophisti-
          cated  and  complex information-processing system. The brain is a
          computer programmed to inform the self (through emotions) of what
          must  be done to preserve her body and her society. It is through
          emotions that the brain informs the self of every  single  detail
          in the body that is relevant for survival. There is almost no
          instant without an emotion that tells the individual to do  some-
          thing  rather than something else.  "For human beings the reality
          that ultimately matters is the reality of their feelings".

          The self keeps a level of freedom: while it cannot  suppress  the
          (emotional)  messages  it receives from the brain, it can disobey
          them. The brain may increase the intensity of the message as  the
          self disobeys it, and a painful conflict may arise. The brain and the
          self are not only separate, but they may fight each other.

          Only the self can be conscious and feel, but the brain  has  con-
          trol of both consciousness and feelings.

          If we view the brain as a computer, the hardware is made  of  the
          neural  organization.  There  are  two types of software, though:
          bionatural (knowledge about the natural  world)  and  biocultural
          (such  as  a  language  or a religion).  A program has three main
          components: the sensory, the mental and  the  emotional  systems.
          Any  sensory  input  can be translated automatically by the brain
          into a mental (idea) or emotional (feeling) message,  and  vice
          versa.  Biocultural and bionatural programs exert emotional con-
          trol over the body.

          Jauregui distinguishes five systems of communication: the  natural
          system  (the sender is a natural thing, such as a tree), the cul-
          tural  system  (the  sender  is  culture,  something  created  by
          humans),  the  somatic system (the sender is the individual's own
          body), the imaginary system (the sender is imagination)  and  the
          social  system  (the  sender  is  another individual).  The human
          brain is genetically equipped to receive and understand all  five
          kinds  of  messages.   What  ultimately  matters is the emotional
          translations of sensory inputs.

Jaynes Julian: THE ORIGIN OF CONSCIOUSNESS IN THE BREAKDOWN OF THE BICAMERAL MIND (Houghton Mifflin, 1977)

          Jaynes makes a number of interesting points about  consciousness.
          Consciousness  is not necessary for concepts, learning, reason or
          even thinking.  Awareness of an action tends to follow, not  pre-
          cede,  the  action.   Awareness  of  an action bears little or no
          influence on the outcome.  Before one utters a sentence,  one  is
          not conscious of being about to utter those specific words.

          Consciousness is an operation rather  than  a  thing.  It  is  an
          operation  of  analogy  that  transforms things of the real world
          into  meanings  in  a  metaphorical  space.  Consciousness  is  a
          metaphor-generated  model of the world. Consciousness is based on
          language, therefore it appeared after the emergence of  language.
          By  reviewing  historical documents of past civilizations, Jaynes
          tries to identify when and how  consciousness  was  born.  Causes
          include the advent of writing, the loss of belief in gods, epics,
          and natural selection itself.

          Jaynes  thinks  that  some  social  institutions  and  religions,
          psychological  phenomena  such as hypnosis and schizophrenia, and
          artistic practices such as poetry and music are  vestiges  of  an
          earlier stage of human consciousness.

Jeannerod Marc: THE COGNITIVE NEUROSCIENCE OF ACTION (Blackwell, 1996)

          A survey of findings on the representations and  processing  that
          lead  to action, from neurophysiological data to the role of men-
          tal imagery.

Johnson-Laird Philip: HUMAN AND MACHINE THINKING (Lawrence Erlbaum, 1993)

          A theory of deduction, induction and creation.

Johnson-Laird Philip: THINKING (Cambridge Univ Press, 1977)

          A collection of articles that reviews the study  of  thinking  in
          the  aftermath of the conceptual revolution that forced the tran-
          sition from behaviorism to information-processing.  Contributions
          range  from  philosophy (Popper, Kuhn) to artificial intelligence
          (Minsky, Schank).

Johnson-Laird Philip: MENTAL MODELS (Harvard Univ Press, 1983)

          Johnson-Laird's  representational  theory   assumes   that   mind
          represents  and  processes  models  of the world. The mind solves
          problems without any need to use logical reasoning. A  linguistic
          representation such as Fodor's is not necessary.

          A sentence is a procedure  to  build,  modify,  extend  a  mental
          model.  The mental model created by a discourse exhibits a struc-
          ture that corresponds directly to  the  structure  of  the  world
          described by the discourse.

          To perform an inference on a problem the mind needs to build  the
          situation described by its premises. Such mental model simplifies
          reality and allows the mind to find an "adequate" solution.

          Johnson-Laird draws on several phenomena to prove the psychologi-
          cal  inadequacy  of  a  mental logic.  People often make mistakes
          with deductive inference because it  is  not  a  natural  way  of
          thinking.  The  natural  way is to construct mental models of the
          premises: a model of discourse has a structure  that  corresponds
          directly  to  the  structure  of  the  state  of affairs that the
          discourse describes.  How can children acquire inferential  capa-
          bilities  before they have any inferential capabilities? Children
          solve problems by building mental models that are more  and  more
          complex.

          Johnson-Laird admits three  types  of  representation:  "proposi-
          tions"  (which represent the world through sequences of symbols),
          "mental models" (which are structurally analogous to  the  world)
          and "images" (which are perceptive correlates of models).

          Images are ways to approach models. They represent  the  perceiv-
          able features of the corresponding objects in the real world.

          Models, images and propositions are functionally and structurally
          different.

          Linguistic expressions are first transformed  into  propositional
          representations.   The  semantics  of  the  mental  language then
          creates correspondences between propositional representations and
          mental models, i.e. propositional representations are interpreted
          in mental models.

          Turning to meaning and model-theoretic  semantics,  Johnson-Laird
          proposes  that  a  mental model is a single representative sample
          from the set of models satisfying the assertion. Semantic proper-
          ties  of  expressions are emergent properties of the truth condi-
          tions. Johnson-Laird's procedural semantics  assumes  that  there
          are  procedures that construct models on the basis of the meaning
          of expressions.

          Johnson-Laird believes that consciousness is computable. The mind
          contains  a high-level operating system and a hierarchy of paral-
          lel processors.  Conscious mind is due to  a  serial  process  of
          symbolic  manipulation  that  occurs  at  the higher level of the
          hierarchy of processors (in the operating system), while  uncons-
          cious  mind  is due to a parallel process of distributed symbolic
          representation.  Emotions are  non-symbolic  signals,  caused  by
          cognitive interpretations of the situation, that propagate within
          the hierarchy.

Johnson-Laird Philip: THE COMPUTER AND THE MIND (Harvard Univ Press, 1988)

          An introduction to the themes and methods of cognitive science,
          with a review of production and connectionist architectures.
          Long chapters are devoted to speech, vision and language.
          Johnson-Laird also introduces his theory of mental models and
          resumes his theory of consciousness and emotions.

Johnson-Laird Philip & Byrne Ruth: DEDUCTION (Lawrence Erlbaum, 1991)

          The authors advance a comprehensive theory to explain all the
          main varieties of deduction: propositional reasoning (which uses
          the connectives "and", "or" and "not"), relational reasoning
          (which depends on relations between entities), and
          quantificational reasoning (which uses quantifiers such as "any"
          and "some"), and they justify it with a variety of psychological
          experiments.

          In order to understand discourse, humans  construct  an  internal
          representation  of the state of affairs that is described in that
          discourse. These mental models have the same structure  as  human
          conceptions  of the situations they represent. Deduction does not
          depend on formal rules of inference but rather on  a  search  for
          alternative  models  of the premises that would refute a putative
          conclusion.  Central to the theory is the principle  that  people
          use  models that make explicit as little information as possible.
          The theory also makes sense of how people deal with conditionals.

          The theory explains phenomena such as: that modus ponens  ("if  p
          then  q"  and  "p"  then "q") is easier than modus tollens ("if p
          then q" and "not q" then "not p").
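
          The refutation-by-counterexample idea can be sketched in a few
          lines of Python (my own toy encoding, not the authors' theory of
          mental models): a conclusion follows from the premises exactly
          when the search finds no model of the premises that refutes it.

```python
from itertools import product

def follows(premises, conclusion, atoms):
    # Enumerate all assignments of truth values to the atoms.
    for values in product([True, False], repeat=len(atoms)):
        model = dict(zip(atoms, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False   # found a counterexample model of the premises
    return True

# "if p then q" as a constraint on models.
implies = lambda m: (not m['p']) or m['q']

# Modus ponens: from "if p then q" and "p", infer "q".
assert follows([implies, lambda m: m['p']], lambda m: m['q'], ['p', 'q'])

# Modus tollens: from "if p then q" and "not q", infer "not p" --
# equally valid, though harder for people, as the theory predicts.
assert follows([implies, lambda m: not m['q']], lambda m: not m['p'], ['p', 'q'])
```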

Josephson John & Josephson Susan: ABDUCTIVE INFERENCE (Cambridge University Press, 1993)

          Abduction (inference to the best explanation, i.e.  building  the
          hypothesis  that  best  accounts  for  the data) is ubiquitous in
          ordinary life as well as in  scientific  theory  formation.   The
          book  presents  a  dynasty  of  systems  that explored abduction.
          Intelligence is viewed as a cooperative community  of  knowledge-
          based specialists (performing "generic tasks").  Knowledge arises
          from experience by processes of abductive inference.
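
          A minimal rendering of "inference to the best explanation" (an
          assumed formulation for illustration, not one of the book's
          systems): score each hypothesis by how much of the data it
          accounts for, penalized for what it predicts but was not
          observed.

```python
# Which observations each hypothesis would account for.
explains = {
    'flu':     {'fever', 'cough', 'aches'},
    'cold':    {'cough', 'sneezing'},
    'allergy': {'sneezing', 'itchy-eyes'},
}

def best_explanation(observations):
    def score(h):
        covered = len(explains[h] & observations)   # data accounted for
        spurious = len(explains[h] - observations)  # predicted but absent
        return covered - spurious
    return max(explains, key=score)

print(best_explanation({'fever', 'cough'}))  # 'flu'
```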

Jouvet Michel: LE SOMMEIL ET LE REVE (Jacob, 1992)

          Jouvet was the first to localize the trigger zone for  REM  sleep
          and dreaming in the brain stem. In this book he provides a neuro-
          biological and psychological analysis of sleep and dreaming.

          According to his findings, a dream is the vehicle employed by  an
          organism  to cancel or archive the day's experiences on the basis
          of a genetic program. Dreaming is a process that absorbs a lot of
          energy.

          This theory would also solve the dualism between  hereditary  and
          acquired features.  A hereditary component is activated daily to
          decide how new data must be acquired.

Laird John, Rosenbloom Paul & Newell Allen: UNIVERSAL SUBGOALING AND CHUNKING (Kluwer Academics, 1986)

          The book describes in detail an architecture (SOAR)  for  general
          intelligence.   The  universal  weak  method is an organizational
          framework whereby knowledge determines the weak methods  employed
          to  solve  the  problem, i.e.  knowledge controls the behavior of
          the rational agent.  Universal subgoaling  is  a  scheme  whereby
          goals  can be created automatically to deal with the difficulties
          that the rational agent encounters during problem solving.

          The engine of the architecture is driven by production rules that
          fire  in  parallel  and  represent task-dependent knowledge.  The
          architecture maintains a context which is  made  of  four  slots:
          goal,  problem space, state and operator.  A fixed set of produc-
          tion rules determines which objects have to become current,  i.e.
          fill  those  slots.  In other words, they determine the strategic
          choices to be made after each round of parallel processing.

          A model of practice is developed based on the concept  of  chunk-
          ing,  which  is  meant  to produce the power law of practice that
          characterizes the improvements in human performance during  prac-
          tice  at  a  given  skill. Rosenbloom describes the XAPS3 system,
          which was designed to model goal hierarchies and chunking.   Each
          task has a goal hierarchy. When a goal is successfully completed,
          a chunk that represents the results of the task is created. In the
          next instance of the goal, the system will not need to fully pro-
          cess it as the chunk already contains the solution.  The  process
          of  chunking  proceeds  bottom-up  in  the  goal  hierarchy.  The
          process of chunking eventually leads to a chunk for the top-level
          goal for every situation that it can encounter.
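
          The essential mechanism is result caching over a goal hierarchy.
          A toy illustration (my own sketch, not SOAR or XAPS3 code): once
          a goal has been solved, a chunk stores its result, so the next
          instance of that goal fires the chunk instead of re-processing.

```python
chunks = {}          # goal -> cached result (the chunks)
work_done = []       # records which goals required full processing

def solve(goal):
    if goal in chunks:
        return chunks[goal]       # chunk fires: no subgoaling needed
    work_done.append(goal)        # full (expensive) processing happens here
    result = f"result-of-{goal}"
    chunks[goal] = result         # create a chunk for this goal
    return result

solve('count-columns'); solve('count-columns'); solve('count-columns')
print(work_done)  # ['count-columns'] -- processed once, chunked thereafter
```

          Because chunks form bottom-up, repeated practice gradually
          replaces subgoal processing with direct retrieval, which is what
          yields the power-law speedup described above.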

Lakoff George: METAPHORS WE LIVE BY (Chicago Univ Press, 1980)

          Once metaphor is defined as the process of experiencing something
          in  terms  of something else, metaphor turns out to be pervasive,
          and not only in language but also in action and thought.

          The human conceptual  system  is  fundamentally  metaphorical  in
          nature.  Most concepts are understood in terms of other concepts.
          There is a continuum that extends  between  subcategorization  (a
          category  "is"  another  category  in  the  sense that a category
          belongs to  another  category)  and  metaphor  (a  category  "is"
          another category in the metaphorical sense).

          Metaphors are used to partially structure  daily  concepts.  They
          are  not  random,  but  rather form a coherent system that allows
          humans to conceptualize their experience. Metaphors create  simi-
          larities.

          Lakoff defines three types of metaphor: "orientational" (in which
          we  use  our  experience with spatial orientation), "ontological"
          (in which we use our experience with physical  objects),  "struc-
          tural"  (in  which  natural  types  are used to define other con-
          cepts). Each metaphor can be reduced to a  more  primitive  meta-
          phor.

          Conceptual metaphors transport properties from structures of  the
          physical  world to non-physical structures. Language was probably
          created to deal only with physical objects, and later extended to
          non-physical objects by means of metaphors.  The human conceptual
          system is shaped by positive feedback from the environment.

          Lakoff uses a theory of categories that draws from Wittgenstein's
          family  resemblance,  Eleanor Rosch's prototype-based categoriza-
          tion and Zadeh's fuzziness.

          Language comprehension always consists in comprehending something
          in terms of something else. All our concepts are metaphorical and
          are based on our physical experience.

          In accordance with Edward  Sapir  and  Benjamin  Whorf,  language
          reflects the conceptual system of the speaker.
          Metonymy differs from metaphor in that metaphor is a way to  con-
          ceive  something in terms of another thing, whereas metonymy is a
          way to use something to stand for something else (i.e.,  it  also
          has a referential function).

          Objective truth does not exist. Truth is  a  function  of  under-
          standing,  i.e.  a function of an individual's conceptual system,
          i.e. a function of coherence with such a system.

          Ritual is viewed as a crucial process in preserving and propagat-
          ing cultural metaphors.

Lakoff George: WOMEN, FIRE AND DANGEROUS THINGS (Univ of Chicago Press, 1987)

          Categorization is the main way that humans make  sense  of  their
          world.   The traditional view that categories are defined by com-
          mon properties of their members  is  being  replaced  by  Rosch's
          theory  of  prototypes.  Lakoff's  "experientialism" assumes that
          thought is embodied (grows out of bodily experience), is imagina-
          tive  (capable  of employing metaphor, metonymy and imagery to go
          beyond the literal representation of reality), is holistic (i.e.,
          is not atomistic), has an ecological structure (is more than just
          symbol manipulation).

          Lakoff reviews studies on categories (Wittgenstein, Berlin,  Bar-
          salou,  Kay, Rosch, Tversky) and summarizes the state of the art:
          categories are organized in a taxonomic hierarchy and  categories
          in  the middle are the most basic.  Knowledge is mainly organized
          at the basic level and is organized around part-whole  divisions.
          Lakoff  claims that linguistic categories are of the same type as
          other categories.

          In order to deal with categories, one needs cognitive  models  of
          four  kinds:  propositional models (which specify elements, their
          properties and  relations  among  them);  image-schematic  models
          (which  specify  schematic  images); metaphoric models (which map
          from a model in one domain to a model  in  another  domain);  and
          metonymic  models  (which  map an element of a model to another).
          The structure of  thought  is  characterized  by  such  cognitive
          models.  Categories  have  properties  that are determined by the
          bodily nature of the categorizer and that may be  the  result  of
          imaginative  processes  (metaphor,  metonymy,  imagery).  Thought
          makes use of symbolic structures which are  meaningful  to  begin
          with.   Language  is  characterized  by symbolic models that pair
          linguistic information with models in the conceptual system.

          Categorization is implemented  by  "idealized  cognitive  models"
          that provide the general principles on how to organize knowledge.

Lakoff George: MORE THAN COOL REASON (University of Chicago Press, 1989)

          While studying poetic metaphors, Lakoff emphasizes that  metaphor
          is  not  only  a  matter  of words, but a matter of thought, that
          metaphor is central to our understanding of  the  world  and  the
          self.  Poetry is simply the art of extending metaphors and there-
          fore the mind's power of grasping concepts.

Langton Christopher: ARTIFICIAL LIFE (Addison-Wesley, 1989)

          Proceedings of the first A-life workshop at the Santa Fe Institute.

          In Langton's own theory, living beings and cellular automata have
          in  common  the transfer and conservation of information.  Living
          organisms use information, besides matter and energy, in order to
          grow  and reproduce. In living systems the manipulation of infor-
          mation prevails over the manipulation of energy.

          Life depends on a balance of information: too little  information
          is  not enough to produce life, too much can actually be too dif-
          ficult to deal with. Life is due to a reasonable amount of infor-
          mation  that can move and be stored.  Life happens at the edge of
          chaos.

          Complexity is an inherent property of life. And life  is  a  pro-
          perty of the organization of matter.
          In order to build artificial life Langton defines a  "generalized
          genotype"  as  the  set of low-level rules serving as the genetic
          blueprint and the "generalized phenotype" as the  structure  that
          is created from those instructions.
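
          The GTYPE/PTYPE distinction can be illustrated with a
          one-dimensional cellular automaton (a toy model of my own, not
          one taken from the book): the rule table plays the role of the
          generalized genotype, while the global pattern that unfolds
          from it is the generalized phenotype.

```python
# A sketch (my own toy model, not one from the book) of Langton's
# distinction: the rule table of a one-dimensional binary cellular
# automaton is the "generalized genotype" (the low-level local rules),
# and the global pattern that unfolds from it is the "generalized
# phenotype".

def step(cells, rule):
    # 'rule' is a Wolfram-style number 0-255 encoding all 8 local rules
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

gtype = 110                        # the genetic blueprint: 8 local rules
ptype = [0] * 15 + [1] + [0] * 15  # initial state: a single live cell
for _ in range(8):                 # the phenotype emerges step by step
    ptype = step(ptype, gtype)
print(sum(ptype) > 1)              # a global structure grew from local rules
```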

Langton Christopher: ARTIFICIAL LIFE II (Addison-Wesley, 1992)

          Proceedings of the second A-life workshop at the Santa Fe Institute.

Langton Christopher: ARTIFICIAL LIFE (MIT Press, 1995)

          A collection of articles by various authors originally  published
          in the Artificial Life journal.

Larson Richard & Segal Gabriel: KNOWLEDGE OF MEANING (MIT Press, 1995)

          An  introduction  to  truth-theoretic   semantics   for   natural
          languages,  viewed  as part of cognitive psychology.  Unlike most
          semantic studies, which are based on Montague's  semantics,  this
          one is from Davidson's perspective.

Lashley Karl Spencer: BRAIN MECHANISMS AND INTELLIGENCE (Dover, 1963)

          This 1929 study set the standard  for  cognitive  neurophysiology
          and  psychology.  The 1963 reissue comes with a preface by Donald
          Hebb that puts Lashley's achievements in perspective.

          In Lashley's mnemonic distribution model each  mnemonic  function
          is not localized in a specific point of the mind, but distributed
          over the entire mind.

          Later Lashley also noted how the dualism between mind  and  brain
          resembles  the  one  between waves and particles. A memory in the
          brain behaves like a wave in an electromagnetic field.

Laszlo Ervin: INTRODUCTION TO SYSTEMS PHILOSOPHY (Gordon & Breach, 1972)

          Von Bertalanffy's general  systems  theory   lends  itself  to  a
          natural  wedding  of scientific information and philosophic mean-
          ing. General  systems  theory  consists  in  the  exploration  of
          "wholes",  which are characterized by such holistic properties as
          hierarchy, stability, teleology. Laszlo advocates a  return  from
          analytic to synthetic philosophy.

          Laszlo starts by offering his own take on a  "theory  of  natural
          systems"  (i.e., a theory of the invariants of organized complex-
          ity). At the center of his theory  is  the  concept  of  "ordered
          whole"  (a  non-summative  system subject to a set of constraints
          that define its structure and allow it to achieve adaptive  self-
          stabilization). Laszlo then adopts a variant of Ashby's principle
          of self-organization, according to  which  any  isolated  natural
          system  subject  to  constant  forces  is inevitably inhabited by
          "organisms" that  tend  towards  stationary  or  quasi-stationary
          non-equilibrium  states.  In  Laszlo's  view  the  combination of
          internal constraints and external forces  yields  adaptive  self-
          organization. Natural systems evolve towards increasingly adapted
          states,  corresponding  to  increasing  complexity  (or  negative
          entropy).

          Natural systems  sharing  an  environment  tend  to  organize  in
          hierarchies.   The  set  of such systems tends to become itself a
          system, its subsystems providing the constraints for the new sys-
          tem.
          Laszlo then offers rigorous foundations to deal  with  the  emer-
          gence  of  order  at the atomic ("micro-cybernetics"), organismic
          ("bio-cybernetics") and social levels ("socio-cybernetics").

          A systemic view also permits a formal analysis  of  a  particular
          class  of  natural  systems, that of cognitive systems. The mind,
          just like any other natural system, exhibits a holistic character,
          adaptive self-organization, and hierarchies, and can be studied
          with the same tools ("psycho-cybernetics").

          The basic building blocks of reality are therefore  natural  sys-
          tems.

Lavine Robert: NEUROPHYSIOLOGY (Collamore, 1983)

          A comprehensive introduction to the neuron, the structure of  the
          brain, senses and to higher cognitive functions.

Layzer David: COSMOGENESIS (Oxford University Press, 1990)

          Inspired by cosmology, Layzer deals with the paradox of  creation
          of  order by saying that, if entropy in the environment increases
          more than the entropy of the system, then the system becomes more
          ordered in that environment.  Entropy and order can both increase
          at the same time without violating the second law of
          thermodynamics.  This phenomenon can be described as follows: if
          a set of systems expands so quickly that the number of occupied
          states increases less rapidly than the number of available
          states (i.e., the phase space gets bigger), entropy and order
          can increase at the same time.

          Unlike Prigogine, Layzer does not need to assume that  an  energy
          flow  from the environment of a system can cause a local decrease
          in  entropy  within  the  system.   Entropy  and  order  increase
          together  because  the  realization  of structure lags behind the
          expansion of phase space.

          Drawing from Shannon's  theory  of  communication,  David  Layzer
          defines  information  as the difference between potential entropy
          (the largest possible value that the entropy can assume under the
          specified  conditions) and actual entropy.  As actual information
          increases, actual entropy decreases  (information  is  "negative"
          entropy  in  Shannon's theory).  Potential entropy is also poten-
          tial information: maximum entropy equals maximum information.
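
          Layzer's definition lends itself to a toy numeric example (the
          numbers below are illustrative assumptions, not Layzer's):
          information is the gap between potential entropy (the log of
          the number of available states) and actual entropy (the log of
          the number of occupied states), so both entropy and
          information can grow together when the phase space expands
          faster than it is occupied.

```python
import math

# A toy numeric illustration (the numbers are assumptions, not
# Layzer's): information = potential entropy - actual entropy, where
# potential entropy is the log of the number of available states and
# actual entropy is the log of the number of occupied states.

def entropy(n_states):
    return math.log(n_states)

available = occupied = 8           # equilibrium: zero information
for _ in range(5):
    available *= 4                 # the phase space expands quickly...
    occupied *= 2                  # ...while occupation lags behind

information = entropy(available) - entropy(occupied)
# actual entropy rose, yet information (order) rose with it
print(entropy(occupied) > entropy(8), information > 0)   # -> True True
```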

          In biological and astronomical systems the potential entropy  may
          increase  with time, thereby creating information if it increases
          faster than actual entropy. In particular, both  contraction  and
          expansion  of the universe from an initial state of thermodynamic
          equilibrium would generate potential entropy.  Genetic  variation
          always  generates  entropy  as information flows unidirectionally
          from the genotype to the phenotype: when it makes  the  distribu-
          tion  of genotypes more uniform in a genotype space, it generates
          entropy and destroys information; when it allows  the  population
          to populate previously uninhabited regions of the genotype space,
          it generates potential  entropy  without  necessarily  generating
          entropy.

          Layzer then proves that several evolutionary processes (mutation,
          differential  reproduction, gene duplication, differentiation and
          integration) generate biological information.  Natural selection
          itself always increases the proportion of relatively fit variants
          in a population and decreases the proportion of relatively
          unfit variants, therefore natural selection always generates bio-
          logical order.

          Layzer thinks that biological evolution  is  not  driven  by  the
          growth  of  entropy (as a counterweight to the loss of order), it
          is not (directly or indirectly) driven by the second law of ther-
          modynamics.  That  law  presupposes  certain initial and boundary
          conditions that are not present in biological systems.

          Influenced by Schmalhausen's theory that evolution is  a  process
          of  hierarchical construction, Layzer thinks that there is a sin-
          gle universal law governing processes that dissipate  order,  but
          order   is   also  generated  by  several  hierarchically  linked
          processes (including cosmic expansion and biological evolution).

Lazarus Richard: EMOTION AND ADAPTATION (Oxford Univ Press, 1991)

          Lazarus argues that the final goal of our emotions is to help the
          organism survive in the environment. His theory is a "relational"
          theory of emotions, in that it assumes that emotions  arise  from
          an  adaptational  situation of the individual in the environment.
          Emotions are reactions to attempted goals ("motivational  princi-
          ple"),  emotions  represent reactions to evaluations of relation-
          ships with the environment.   Stable  relationships  between  the
          individual and the environment result in recurrent emotional pat-
          terns in the individual.  Emotion is due to an evaluation of  the
          potential consequences of a situation.

          The development of the self is a fundamental event for  the  emo-
          tional life.  Emotions depend on an organizing principle in which
          the self is distinguished from the non-self, because  only  after
          that principle has become established can evaluations of benefits
          and harms be performed.  Differentiation of self and other  is  a
          fundamental property of living organisms (even plants use protein
          discrimination mechanisms, and most organisms could  not  survive
          without the ability to distinguish alien organisms).

          An emotion is a process in four  stages:  anticipation,  provoca-
          tion,  unfolding,  outcome.  Both biological and social variables
          contribute to this process, and this explains why emotions change
          through the various stages of life.

          Each type of emotion can be defined by a relational meaning which
          expresses the set of benefits and harms in a relationship between
          individual and environment and is constructed  by  a  process  of
          appraisal.  Each type of emotion is distinguished by a pattern of
          appraisal factors.  The relational meaning is about the  signifi-
          cance  of  the  event for the well-being of the individual.  Emo-
          tions express the personal meaning of an individual's experience.

          Lazarus, unlike Zajonc, emphasizes cognition in the  relationship
          between  emotion and cognition.  After all, appraisal is the fun-
          damental process for the occurrence of emotion.

Lazarus Richard & Lazarus Bernice: PASSION AND REASON (Oxford Univ Press, 1994)

          Lazarus reiterates his point that emotions  are  as  rational  as
          anything can be in a language accessible to anybody.

Ledoux Joseph & Hirst William: MIND AND BRAIN (Cambridge Univ Press, 1986)

          A collection of articles on  perception,  attention,  memory  and
          emotion  that  are organized as debates between psychologists and
          neurobiologists.

Lehnert Wendy: STRATEGIES FOR NATURAL LANGUAGE PROCESSING (Lawrence Erlbaum, 1982)

          A practical textbook on natural language processing in  the  con-
          ceptual  dependency  tradition.  Each  chapter  is  written by an
          authority  of  the  field.  Includes  Steven  Small's  word-based
          parser,  Gerald  DeJong's  FRUMP  system,  Wilensky's PAM system,
          Wendy Lehnert's plot units, Schank's MOPs.   Jerry  Hobbs  writes
          about  coherence  in discourse. Yorick Wilks discusses procedural
          semantics.

Leiser David & Gillieron Christiane: COGNITIVE SCIENCE AND GENETIC EPISTEMOLOGY (Plenum Press, 1989)

          The book analyzes the relations between procedures and structures
          from a Piagetian perspective and attempts to bridge a gap between
          cognitive psychology and artificial intelligence.

Lenat Douglas: BUILDING LARGE KNOWLEDGE-BASED SYSTEMS (Addison-Wesley, 1990)

          The book describes the CYC system, whose  goal  is  to  represent
          common  knowledge  (i.e.,  develop a global ontology) and perform
          common-sense reasoning (i.e., employ a set of  reasoning  methods
          as a set of first principles) on large knowledge bases.

          Units of knowledge for common sense are units of "reality by con-
          sensus":  all  the  things we know and we assume everybody knows;
          i.e., all that is implicit in our acts of communication. A  prin-
          ciple  of  economy  of communications states the need to minimize
          the acts of communication and maximize the  information  that  is
          transmitted.   World regularities belong to this tacitly accepted
          knowledge.

Lenneberg Eric: BIOLOGICAL FOUNDATIONS OF LANGUAGE (Wiley, 1967)

          Language should be studied  as  an  aspect  of  man's  biological
          nature,  in the same manner as anatomy. Chomsky's universal gram-
          mar is to be viewed as an underlying biological framework for the
          growth  of  language. Genetic predisposition, growth and develop-
          ment apply to language faculties just like to any other organ  of
          the  body.   Behavior  in  general  is  an  integral  part  of an
          organism's constitution.

          Language and speech are represented in the cortex and  also  seem
          to  be  hosted  in subcortical and midbrain structures. The large
          size of the human brain  is  probably  a  direct  consequence  of
          language  functions. Children start learning language when struc-
          tural changes in the brain make it possible.

          Animals organize the sensory world through a process of categori-
          zation.   They  exhibit propensities for responding to categories
          of stimuli.  In humans this  process  of  categorization  becomes
          "naming",  the  ability  to  assign a name to a category. Even in
          humans the process of categorization is  still  a  process  whose
          function is to enable similar response to different stimuli.  The
          meaning-bearing elements of language do not  stand  for  specific
          objects,  but for the act of categorization.  The basic cognitive
          mechanisms of semantics are processes of categorization.

LePore Ernest: NEW DIRECTIONS IN SEMANTICS (Academic Press, 1987)

          A collection  of  articles  on  semantics,  including  Hintikka's
          game-theoretical  semantics,  Gilbert  Harman's  conceptual  role
          semantics (the ultimate source of meaning is the functional  role
          that  symbols  play  in thought) and dual aspect semantics (which
          contain one theory relating language to the world and one  theory
          relating language to the mind).

Lesniewski Stanislaw: COLLECTED WORKS (Kluwer Academic, 1991)

          In the Thirties the Polish logician Lesniewski noted that in any
          language containing its own semantics, logical laws cannot hold
          consistently.  A contradiction can be avoided only by reconstructing
          the    object    language   through   hierarchical   levels,   or
          metalanguages. This is similar to Russell's conclusion that  some
          hierarchy  is  necessary for a system to be coherent.  Lesniewski
          developed a  hierarchy  of  categories  (a  grammar  of  semantic
          categories).   Lesniewski's  system  consists  of three axiomatic
          theories: protothetic (a  calculus  of  equivalent  propositional
          functions,  with a single axiom), ontology (a calculus of classes
          in terms of a theory of nominal predication, with a single axiom)
          and mereology (based on the part-whole relation, containing rules
          to avoid paradoxes).  Functorial categories can be generated from
          a set of basic categories (the propositions defined by the single
          axiom of protothetic and the nouns defined by the single axiom of
          ontology)  and are categories of functions from certain arguments
          to certain values.
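
          The generation of functorial categories from the basic ones
          can be sketched as follows (the slash notation is a common
          categorial-grammar convention, an assumption of mine, not
          Lesniewski's own symbolism): starting from propositions ("s")
          and nouns ("n"), a functorial category is built as a function
          type from argument categories to a value category.

```python
# A sketch of category formation (the slash notation is a common
# categorial-grammar convention, not Lesniewski's own symbolism):
# from the basic categories "s" (proposition) and "n" (noun),
# functorial categories are functions from arguments to a value.

def functor(value, *args):
    """Build the category of functions from 'args' to 'value'."""
    return value + "/(" + ",".join(args) + ")"

predicate = functor("s", "n")             # one noun -> proposition
connective = functor("s", "s", "s")       # two propositions -> proposition
modifier = functor(predicate, predicate)  # predicate -> predicate
print(predicate, connective)              # -> s/(n) s/(s,s)
```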

Levine Daniel: INTRODUCTION TO NEURAL AND COGNITIVE MODELING (Lawrence Erlbaum, 1991)

          A broad survey of cognitive science from a  neuroscientific  per-
          spective.   After  a historical outline (McCulloch-Pitts neurons,
          Hebb's law, Rosenblatt's perceptron, etc), Levine  details  algo-
          rithms  (and physiological justifications) for associative learn-
          ing, competition, conditioning,  categorization,  representation.
          All  the  main  connectionist models are surveyed.  The book pro-
          vides a detailed, technical compendium of data and ideas  in  the
          field.

Levinson Stephen: PRAGMATICS (Cambridge Univ Press, 1983)

          An excellent and relatively accessible introduction  to  pragmat-
          ics.

          Levinson surveys the issues of pragmatics, defined essentially as
          the  relationship  between  language  and context.  Approaches to
          indexicals or  deixis  (Fillmore,  Lyons,  Lakoff),  implicatures
          (Grice,  Gazdar),  presupposition  (Stalnaker, Karttunen), speech
          acts (Austin, Searle), and discourse analysis are dealt  with  at
          length.  This  is  the  best  introduction  to  the theories that
          emerged during the late Seventies.

Levy Steven: ARTIFICIAL LIFE (Pantheon, 1992)

          An introduction for the wider audience to the world of artificial
          life.   Includes  history  of  the  field  (from  Von  Neumann to
          viruses), biographies of its visionaries (Kauffman, Holland, Haw-
          kins,   Ray,   Brooks)  and  simplified  presentations  of  their
          theories.

Lewin Roger: COMPLEXITY (Macmillan, 1992)

          Complexity is presented as a discipline that can unify  the  laws
          of  physical, chemical, biological, social and economic phenomena
          through the simple principle that all things in nature are driven
          to  organize  themselves  into  patterns.  The  book,  written in
          conversational English, devotes much time to describing the
          protagonists of the field and relating interviews in a
          celebrity-centered fashion.

Lewis Clarence Irving: SYMBOLIC LOGIC (Mineola, 1932)

          In Lewis' modal logic a proposition is necessary if it is true in
          every  possible  world,  it is possible if it is true in at least
          one possible world.   "Necessity"  and  "possibility"  are  modal
          operators,  i.e.  they  operate  on logical expressions just like
          logical connectives.  The two modal operators are dual  (one  can
          be expressed in terms of the other), thereby reflecting the
          dualism of the two corresponding quantifiers (existential and
          universal).   A  modal logic is built by adding a few axioms con-
          taining the modal operators to the axioms of a non-modal logic.
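
          The two operators and their duality can be checked
          mechanically over a finite set of possible worlds (a toy model
          of mine, not one from Lewis's book):

```python
# A toy finite model (not from Lewis's book): "necessary" = true in
# every possible world, "possible" = true in at least one, and each
# operator is the dual of the other: possible(p) == not necessary(not p).

worlds = [
    {"p": True, "q": True},
    {"p": True, "q": False},
]

def necessary(prop):
    return all(w[prop] for w in worlds)

def possible(prop):
    return any(w[prop] for w in worlds)

print(necessary("p"), necessary("q"), possible("q"))   # -> True False True
# duality: <>q is equivalent to not-[](not-q)
print(possible("q") == (not all(not w["q"] for w in worlds)))   # -> True
```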

Lewis David K.: COUNTERFACTUALS (Harvard Univ Press, 1973)

          Lewis uses possible-world semantics in his theory of  counterfac-
          tuals.   Lewis  defines  a  pair of conditional operators ("if it
          were the case that, then it would be the case that"  and  "if  it
          were the case that, then it might be the case that"), each of
          which can be defined in terms of the other.  Counterfactuals are not
          strict  conditionals  (material conditionals preceded by a neces-
          sity operator), but  rather  "variably"  strict  conditionals  (a
          counterfactual  is  as strict as it must be to escape vacuity and
          no stricter).

          Lewis then defends possible worlds and claims that each  possible
          world is as "real" as ours.

          Lewis also compares his theory to Stalnaker's own, which is  also
          based on possible worlds.

Lewis David K.: PHILOSOPHICAL PAPERS (Oxford Press, 1983)

          The ultimate collection of Lewis' papers.

          In "An argument for the identity theory" Lewis argues that a men-
          tal state can be defined by a physical state, which is not neces-
          sarily the same for all species, and by  a  "causal  role",  that
          expresses behavior that such a state induces in the organism.

          In "Radical interpretation" he contends that intentional  ascrip-
          tion  (the  task of redescribing the information of an individual
          in intentional terms) is a kind of constraint-satisfaction  prob-
          lem:  the correct intentional ascription is the one that provides
          a best fit to the demands that the constraints impose.  The prob-
          lem  of  radical  interpretation  is  tackled by identifying four
          parts: the intentional system  (e.g.,  a  person);  the  system's
          attitudes  (beliefs  and  desires) as expressed in the observer's
          language;  the  system's  attitudes  (beliefs  and  desires)   as
          expressed  in  the  system's own language; and the system's mean-
          ings.

          The constraints are derived from six principles: the principle of
          charity  constrains the relation between a system and its beliefs
          expressed in the observer's language (the  system's  beliefs  are
          somehow  constrained by the observer's beliefs); the rationaliza-
          tion  principle  constrains  the  relation  between  the  beliefs
          expressed  in  the observer's language and the system (the system
          is a rational agent, his beliefs being what make  sense  for  its
          behavior);  the principle of truthfulness constrains the relation
          between the observer's beliefs and  the  system's  meanings;  the
          principle  of  generativity  constrains  the  meaning  in that it
          should assign truth conditions to the  system's  sentences  in  a
          reasonable  way; the manifestation principle constrains the rela-
          tion between the system and its beliefs (they must be  consistent
          with  its speech behavior); and the triangle principle constrains
          the relation between the system's meaning, its  beliefs  and  the
          observer's beliefs.
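
          The "best fit" idea can be sketched as a scoring loop (my own
          toy illustration, not Lewis's formalism): each candidate
          intentional ascription is scored by the soft constraints it
          satisfies, and the best-scoring candidate is the correct
          interpretation.

```python
# A toy illustration (mine, not Lewis's formalism) of intentional
# ascription as constraint satisfaction: candidate belief ascriptions
# are scored by how many soft constraints they satisfy; the correct
# ascription is the one that fits best.

candidates = [
    {"believes_rain": True,  "carries_umbrella": True},
    {"believes_rain": False, "carries_umbrella": True},
]

constraints = [
    # rationalization: behavior should make sense given the beliefs
    lambda a: a["carries_umbrella"] == a["believes_rain"],
    # charity: the system's beliefs should resemble the observer's
    lambda a: a["believes_rain"],
]

def score(ascription):
    return sum(1 for c in constraints if c(ascription))

best = max(candidates, key=score)
print(best["believes_rain"], score(best))   # -> True 2
```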

Lewis David K.: ON THE PLURALITY OF WORLDS (Basil Blackwell, 1986)

          Lewis advocates an indexical theory of actuality. Every  possible
          world  is  actual  from its own point of view, and every possible
          world is merely possible from the point of view of other  worlds.
          Worlds  are never causally related to other worlds. The isolation
          of possible worlds constitutes their being merely possible  rela-
          tive to each other.

          A proposition is a function from possible worlds to truth-values.
          Each world provides a truth value for a proposition.
           Lewontin Richard:  THE  GENETIC  BASIS  OF  EVOLUTIONARY  CHANGE
          (Columbia University Press, 1974)

          It is not yet clear which percentage of  evolutionary  change  is
          due  to  natural  selection  and  which  is due to random events.
          Modern evolutionary genetics stems from the merging of two tradi-
          tions, the Darwinian and the Mendelian, both of which take varia-
          tion as the crucial aspect of life. The  Darwinian  view  can  be
          summarized  as  "evolution is the conversion of variation between
          individuals into variation between  populations  and  species  in
          time  and  space".  The paradox is that Mendelian theory dictates
          the frequencies of genotypes as the appropriate genetic  descrip-
          tion  of  a population, whereas variation is much more important.
          "What we can measure is uninteresting and what we are  interested
          in is unmeasurable".  Most theories of genetic variation in popu-
          lations (allelic variation) are also theories of  natural  selec-
          tion.  Variation and selection turn out to be dual aspects of the
          same problem.

          Even worse is the  situation  with  respect  to  "the  origin  of
          species",  i.e.   theories  of  the genetic changes that occur in
          species formation.  Geographic isolation (or, better,  ecological
          divergence)  is  recognized as the preliminary stage, causing the
          appearance of genetic differences sufficient to restrict severely
          the  genetic exchange with other populations (reproductive isola-
          tion). The second stage occurs  when  isolated  populations  come
          into  contact  and  the  third stage starts when the newly formed
          species continue to develop independently.

          Lewontin reviews evidence in favor of each theory. His
          conclusion, regarding the genome as the unit of selection, is
          that "context and interaction are of essence".

Lewontin Richard: HUMAN DIVERSITY (W.H.Freeman, 1981)

          Each organism is the subject of continuous development throughout
          its  life  and such development is driven by mutually interacting
          genes and environment.  Genes per se cannot determine the  pheno-
          type, capacity or tendencies.

          The organism is both the subject and  the  object  of  evolution.
          Organisms  construct  environments  that  are  the conditions for
          their own further evolution and for the evolution of nature
          itself towards new environments.  Organism and environment
          mutually specify each other.

Leyton Michael: SYMMETRY, CAUSALITY, MIND (MIT Press, 1992)

          Leyton's idea is that shape is used by the mind  to  recover  the
          past. Shape is time. Shape equals the history that created it.

          By studying the  psychological  relationship  between  shape  and
          time,  Leyton  offers  a  working model of how perception is con-
          verted into memory.

          There is a relationship between perceived asymmetry, inferred
          history and environmental energy.  The energy of a system
          corresponds to a memory of the causal interactions that
          transferred energy to the system.  Shape, or asymmetry, is a
          memory of the energy transferred to an object in causal
          interactions.

          All vision is the recovery of the past: vision  simply  "unlocks"
          time  from  the image. In general, perceptual representations are
          representations of stimuli in terms of causal histories. This  is
          also true of cognitive representations.

          Any cognitive representation is the description of a stimulus  as
          a  state  in a history that causally explains the stimulus to the
          organism.  A cognitive system is a system that creates and  mani-
          pulates causal explanations.

Li Ming & Vitanyi Paul: AN INTRODUCTION TO KOLMOGOROV COMPLEXITY AND ITS APPLICATIONS (Springer-Verlag, New York, 1997)

          Written by two experts in the field, this is the only comprehen-
          sive and unified treatment of the central ideas of Kolmogorov
          complexity (the theory dealing with the quantity of information
          in individual objects) and their applications. The field is also
          known variously as "algorithmic information", "algorithmic
          entropy", "Kolmogorov-Chaitin complexity", "descriptional com-
          plexity", "shortest program length" and "algorithmic randomness".
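
          The Kolmogorov complexity of an object is uncomputable, but the
          length of any compressed encoding of it gives a computable upper
          bound. A minimal Python sketch (zlib chosen purely for illustra-
          tion; this is not code from the book):

```python
import random
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Length of a zlib-compressed description of `data`: a computable
    upper bound (up to a constant for the decompressor) on the length
    of the shortest program that outputs `data`."""
    return len(zlib.compress(data, 9))

# A highly regular string admits a short description...
regular = b"ab" * 500
# ...while a pseudo-random string of the same length does not.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(1000))
```

          With these inputs the bound for `regular` is far below both its
          own length and the bound for `noisy`.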

Lieberman Philip: THE BIOLOGY AND EVOLUTION OF LANGUAGE (Harvard Univ Press, 1984)

          Language is found to be a by-product of the neural processes that
          underlie cognition in general (unlike Chomsky's vision of
          separate "language organs"). The only language-specific processes
          are essentially those that contribute to speech, and they evolved
          from processes that are common to many animals. Speech, not syn-
          tax, is the fulcrum of language.

Lieberman Philip: UNIQUELY HUMAN (Harvard Univ Press, 1992)

          Human language is a  relatively  recent  evolutionary  innovation
          that  came  about when speech and syntax were added to older com-
          munication systems. The function  of  speech  and  syntax  is  to
          enhance the speed of communication: speech allows humans to over-
          come the limitations of the mammalian auditory system and  syntax
          allows them to overcome the limits of memory.

          Two principles are recalled. Natural selection acts on individu-
          als, each of which varies: species that successfully change and
          adapt are able to maintain a stock of varied traits coded in the
          genes of the individuals who make up their population. The
          "mosaic" principle states that parts of the body of an organism
          are governed by independent genes: there are no central genes
          that control the overall assembly of the body.

          Given these principles, two phenomena can be explained: a  series
          of  small,  gradual  changes  in  structure can lead to an abrupt
          change in the behavior of the organism; and an abrupt  change  in
          behavior  may  cause  an abrupt change in morphology which causes
          the formation of a new species (at "functional branch-points").

          The structure of the brain reflects its evolutionary history. The
          brain  consists of a set of specialized circuits with independent
          evolutionary  histories.   Unlike  modular   theories   such   as
          Chomsky's  and Fodor's, Lieberman's "circuit model" (derived from
          Norman Geschwind's connectionist model) assumes  that  the  brain
          bases  for  language  are  mostly  language-specific  and  mostly
          located in the newest part of  the  brain,  the  neocortex.   The
          brain  consists  of  many specialized units that work together in
          different circuits (the same unit can work in many circuits). The
          overall circuitry reflects the evolutionary history of the brain,
          with units that adapted to serve a different purpose  from  their
          original one. Therefore, rapid vocal communication is responsible
          for the evolution of the human brain.

          The theory is supported by a wealth of anthropological and neuro-
          physiological data (particularly on Broca's and Wernicke's
          aphasias).

          Besides language, the other unique trait of the human  race  (and
          therefore  of  the  human  brain)  is  moral  code, in particular
          altruism. This is  also  a  relatively  recent  development,  and
          presupposes language and cognition.

Lightfoot David: THE LANGUAGE LOTTERY (MIT Press, 1982)

          The book is basically an introduction to  Chomsky's  theories  of
          language  with  an  emphasis  on  biological  aspects.  First and
          foremost, Lightfoot examines how children can  learn  a  language
          without  significant  instruction  and  despite  a  deficiency of
          experiential data. The  only  rational  explanation  is  that  an
          innate structure, a "universal grammar", guides the learning pro-
          cess.

          Lightfoot  applies  Gould's  theory  of  evolutionary  change  to
          linguistics: language changes gradually but every now and then is
          subject to catastrophic revisions.

Lockwood Michael: MIND, BRAIN AND THE QUANTUM (Basil Blackwell, 1989)

          Drawing from quantum mechanics and from Bertrand  Russell's  idea
          that  consciousness  provides  a kind of "window" onto the brain,
          Lockwood offers a theory of consciousness as a process of percep-
          tion of brain states.

          By using special relativity (mental states must be in space given
          that they are in time), he leans towards the identity theory.
          Then Lockwood interprets the role of the observer in quantum
          mechanics as the role of consciousness in the physical world (as
          opposed to a simple interference with the system being observed).

          Lockwood thinks that our sensations are intrinsic  attributes  of
          physical  states  of  the brain. Consciousness scans the brain to
          look for sensations. It does not create them, it just seeks them.

          Each observable attribute (e.g., each sensation)  corresponds  to
          an observable of the brain. The external world is a physical sys-
          tem in which a set of compatible observables  is  defined,  whose
          state  is  therefore  defined  by  a  sum  of eigenstates of such
          observables (i.e., by a sum of perspectives).

          Lockwood mentions David Deutsch's "Quantum theory, the Church-
          Turing principle and the universal quantum computer" (1985),
          which generalizes Turing's ideas and defines a "quantum" machine
          in which Turing states can be linear combinations of states. The
          behavior of a quantum machine is a linear combination of the
          behavior of several Turing machines. A quantum machine can only
          compute recursive functions, just like a Turing machine, but it
          turns out to be much faster in solving problems that exhibit
          some level of parallelism. In a sense a quantum computer is
          capable of decomposing a problem and delegating the subproblems
          to copies of itself in other universes.
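
          The speed-up can be made concrete with Deutsch's own toy prob-
          lem: deciding with a single oracle query whether a one-bit func-
          tion f is constant or balanced, something a classical machine
          needs two queries for. A plain-Python sketch (illustrative only;
          the function names are my own, not from the book):

```python
from math import sqrt

# Two-qubit state as four amplitudes; index = 2*x + y.

def hadamard(state, qubit):
    """Apply a Hadamard gate to one qubit, putting it into a linear
    combination of its 0 and 1 states."""
    new = [0.0] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        bit = x if qubit == 0 else y
        for b in (0, 1):
            j = ((b << 1) | y) if qubit == 0 else ((x << 1) | b)
            sign = -1.0 if (bit == 1 and b == 1) else 1.0
            new[j] += sign * amp / sqrt(2)
    return new

def oracle(state, f):
    """One query to f, as the reversible map |x,y> -> |x, y XOR f(x)>.
    Applied to a superposition, it acts on both inputs of f at once."""
    new = [0.0] * 4
    for i, amp in enumerate(state):
        x, y = i >> 1, i & 1
        new[(x << 1) | (y ^ f(x))] += amp
    return new

def deutsch(f):
    """Deutsch's algorithm: a single oracle query decides whether f is
    constant or balanced; classically two queries are needed."""
    state = [0.0, 1.0, 0.0, 0.0]        # start in |01>
    state = hadamard(state, 0)
    state = hadamard(state, 1)
    state = oracle(state, f)
    state = hadamard(state, 0)
    prob_one = state[2] ** 2 + state[3] ** 2   # first qubit reads 1
    return "balanced" if prob_one > 0.5 else "constant"
```

          The single oracle call is applied to a superposition of both
          inputs, which is exactly the "linear combination of Turing
          machines" described above.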

Luger George: COMPUTATION AND INTELLIGENCE (MIT Press, 1995)

          Seminal papers  by  Turing,  Minsky,  McCarthy,  Newell,  Schank,
          Brooks.

Luger George: COGNITIVE SCIENCE (Academic Press, 1993)

          An introduction to the field.

Lukaszewicz Witold: NON-MONOTONIC REASONING (Ellis Horwood, 1990)

          A formal survey of mathematical theories for nonmonotonic reason-
          ing.

          After an introduction to monotonic logic (first and second
          order), the book delves into nonmonotonic logics: Sussman's
          MICRO-PLANNER, Doyle's and de Kleer's truth maintenance systems,
          McCarthy's circumscription, McDermott and Doyle's modal logic,
          Moore's autoepistemic logic, Reiter's default logic and the
          closed world assumption.
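
          The closed world assumption is the simplest of these formalisms
          to state: whatever is not provable from the knowledge base is
          assumed false. A toy Python sketch (my own illustration, not
          from the book) showing why the resulting inference is
          nonmonotonic, i.e. why new facts can retract old conclusions:

```python
def cwa_entails(facts, literal):
    """Closed world assumption over ground facts: a positive literal
    holds iff it is in the knowledge base; a negation ("not", p)
    holds iff p is NOT in the knowledge base."""
    if literal[0] == "not":
        return literal[1] not in facts
    return literal in facts

kb = {("flies", "tweety")}
# Not provable, hence assumed false under the CWA:
assumed = cwa_entails(kb, ("not", ("flies", "opus")))       # True
# Adding a fact retracts the earlier negative conclusion:
retracted = cwa_entails(kb | {("flies", "opus")},
                        ("not", ("flies", "opus")))         # False
```

          In monotonic logic the set of conclusions can only grow as
          axioms are added; here it shrinks.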

Lycan William: LOGICAL FORM IN NATURAL LANGUAGE (MIT Press, 1984)

          Lycan's theory of linguistic meaning rests on  truth  conditions.
          All  other  aspects of semantics (verification conditions, use in
          language games, illocutionary force, etc) are derived  from  that
          notion.  A  sentence  is meaningful in virtue of being true under
          certain conditions and not others.

          This is consistent with Davidson's program of assigning  meanings
          to  sentences  of  natural languages by associating the sentences
          with truth-theoretically interpreted formulas of a logical system
          (their "logical form"). Lycan basically refines Davidson's
          metatheory. Instead of assigning only a pair of arguments to the
          truth predicate, Lycan defines truth as a pentadic relation among
          the sentence, its reading (the logical form), context (truth is
          relative to a context of time and speaker, as specified by some
          assignment functions), degree (languages are inherently vague,
          and sentences normally contain fuzzy terms and hedges), and
          idiolect (the truth of a sentence is relative to the language of
          which it is a grammatical string).

          Lycan  argues  that  pragmatics  (implicatures,  presuppositions)
          should  be  kept  separate from semantics. Context determines the
          interpretation of a sentence at several levels: it singles a log-
          ical  form out of a set of potential candidates; it completes its
          proposition by binding all free variables; it provides  a  secon-
          dary  meaning (e.g., implicatures); it clarifies lexical presump-
          tions; and it determines the illocutionary force.

          Lycan defends truth-condition semantics against the  most  common
          attacks,  in  particular  against Quine's theory of indeterminacy
          and Dummett's antirealism.

          Lycan finally presents a cognitive architecture based on a ver-
          sion of homuncular functionalism.

Lycan William: CONSCIOUSNESS (MIT Press, 1987)

          Lycan reviews behaviorist and dualist theories of the mind,  then
          focuses  on  Dennett's  homuncular  functionalism  and defends it
          against its critics.

          Lycan thinks  that,  besides  the  low  level  of  physiochemical
          processes  and  the  high  level  of  psychofunctional processes,
          Nature is organized in a number  of  hierarchical  levels  (suba-
          tomic,  atomic,  molecular, cellular, biological, psychological).
          And each level is both physical  and  functional:  physical  with
          respect  to  its  immediately  higher  level  and functional with
          respect to its immediately lower level.  Going from lower  levels
          to higher levels we obtain a physical, structural description of
          nature (atoms make molecules that make cells that make organs
          that make bodies...). Backwards we obtain a functional descrip-
          tion (the behavior of something is explained by the behavior of
          its parts).

          The  aggregative  ontology  ("bottom-up")  and   the   structured
          epistemology  ("top-down") of Nature are dual aspects of the same
          thing.  The apparent irreducibility of the mental is due  to  the
          irreducibility of the various levels.

Lycan William: MIND AND COGNITION (MIT Press, 1990)

          A massive collection of articles on theories of the mind.  Homun-
          cular  functionalism is championed by Dennett and Lycan. Elimina-
          tivism is presented by Churchland  and  Feyerabend.  Language  of
          thought (Fodor), folk psychology (Stich), qualia (Block) are also
          discussed.

Lycan William: MODALITY AND MEANING (Kluwer Academic, 1994)

          Lycan presents a theory  of  possible  individuals  and  possible
          worlds  in which a world is viewed as a structured set of proper-
          ties. A number of philosophical puzzles are examined from a  very
          technical perspective.

Lycan William: CONSCIOUSNESS AND EXPERIENCE (MIT Press, 1996)

          A continuation of "Consciousness".

Lyons John: SEMANTICS (Cambridge Univ Press, 1977)

          A discussion of semantics within the framework of semiotics, i.e.
          taking language as a semiotic system.

          Lyons discusses behaviorist semantics, logical semantics  (model-
          theoretic  and  truth-conditional semantics, reference, sense and
          naming)  and  structuralist  semantics  (in  particular  semantic
          fields and componential analysis).

          The second volume is more specifically linguistic,  dealing  with
          grammar, deixis, illocutionary force, modality.

MacCormac Earl: A COGNITIVE THEORY OF METAPHOR (MIT Press, 1985)

          A unified theory of metaphor, with implications for  meaning  and
          truth.

          MacCormac rejects the tension theory (which locates  the  differ-
          ence  between  metaphor and analogy in the emotional tension gen-
          erated by  the  juxtaposition  of  anomalous  referents),  Monroe
          Beardsley's  controversion  theory (which locates that difference
          in the falsity produced by a literal reading of  the  identifica-
          tion of the two referents) and the deviance theory (which locates
          that difference in the ungrammaticality of the  juxtaposition  of
          two  referents).   A  metaphor is recognized as a metaphor on the
          basis of the semantic anomaly produced by  the  juxtaposition  of
          referents.  Metaphor  is  distinct  from  ordinary  language  (as
          opposed to the view that all language is metaphorical).

          MacCormac  modifies  Black's  interactionist  theory  and  adopts
          Wheelwright's   classification   of  "epiphors"  (metaphors  that
          express the existence of  something)  and  "diaphors"  (metaphors
          that imply the possibility of something). Diaphor and epiphor
          measure the likeness and the dissimilarity of attributes of the
          referents. A diaphor can become an epiphor (when the object is
          found to really exist)  and  an  epiphor  can  become  a  literal
          expression  (when  the term has been used for so long that people
          have forgotten its origin).

          Metaphor is a process that exists at  three  levels:  a  language
          process  (from  ordinary  language  to diaphor to epiphor back to
          ordinary  language);  a  semantic  and  syntactic  process   (its
          linguistic  explanation); and a cognitive process (to acquire new
          knowledge).  Therefore a theory of metaphor requires  three  lev-
          els:   a  surface or literal level, a semantic level and a cogni-
          tive level.

          The semantics of metaphor is then formalized  using  mathematical
          tools.  "Partial" truths of metaphorical language are represented
          by fuzzy values: the meaning of a sentence can belong to several
          concepts with different degrees of membership. The paradigm is
          one of language as a hierarchical network in n-dimensional  space
          with  each  of  the  nodes of the network a fuzzy set (defining a
          semantic marker).  When  unlikely  markers  are  juxtaposed,  the
          degrees  of  membership  of  one semantic marker in the fuzzy set
          representing the other semantic marker  can  be  expressed  in  a
          four-valued logic (so that a metaphor is not only true or false).

          MacCormac also sketches the theory that metaphors are speech acts
          in  Austin's  sense. Metaphors both possess meaning and carry out
          actions. An account of their meaning must include an  account  of
          their locutionary and perlocutionary forces.

          Finally, the third component of a theory of meaning for metaphors
          (besides  the semantic and speech act components) is the cultural
          context.

          The meaning of metaphors results from the semantical  aspects  of
          communication, culture and cognition.

          MacCormac claims that, as cognitive processes, metaphors  mediate
          between  culture and the mind, influencing both cultural and bio-
          logical evolution.

MacLean Paul: THE TRIUNE BRAIN IN EVOLUTION (Plenum Press, 1990)

          From the study of stimulation and lesion of different brain
          areas, MacLean developed his theory of the "triune brain": the
          human brain is divided into three layers that correspond to
          three different stages of evolution. The oldest one, the "rep-
          tilian brain", is a midbrain reticular formation that has
          changed little from reptiles to mammals to humans and is respon-
          sible for species-specific (instinctive) behavior. The limbic
          system is the old mammalian brain, responsible for emotions that
          are functional to the survival of the individual and of the
          species. The cerebral cortex is the new mammalian brain, respon-
          sible for higher cognitive functions such as language and
          reasoning.

MacNamara John & Reyes Gonzalo: THE LOGICAL FOUNDATIONS OF COGNITION (Oxford University Press, 1994)

          A collection of papers that try to bridge  logic  and  cognition.
          The  editors  believe that the most basic properties of cognitive
          psychology show  up  as  the  universal  properties  of  category
          theory.  Category  theory  is  better  suited than set theory for
          representing basic intentional capabilities such as to refer,  to
          count  and  to  learn.  Category  theory generalizes set theory's
          notions of set and function into the  "universal"  properties  of
          object  and morphism.  Logic becomes the study of what is univer-
          sal.

          The editors reach the conclusion that "there is no purely physio-
          logical  explanation for the acquisition of intentional skills or
          the existence of intentional states." As a corollary, there  must
          exist  unlearned  (innate) "logical resources" (e.g., membership,
          typed equality, reference to symbols), sort of universals of  the
          human mind.

          Most papers revolve  around  Reyes'  seminal  contribution  to  a
          semantic  theory.   Kinds  are  interpretations  of common nouns.
          Reference to an individual by means of a proper noun  involves  a
          kind  (e.g.,  reference to the name of a person involves the kind
          "person").  Therefore any reference to an individual  involves  a
          kind.  Kinds  are  modally  constant  (don't  decay in time), but
          predicates (properties) of kinds may change. All  predicates  are
          typed by kinds.

Maes Pattie: DESIGNING AUTONOMOUS AGENTS (MIT Press, 1990)

          A collection of articles on action-oriented systems  (as  opposed
          to  knowledge-based  systems),  which are based on a tighter cou-
          pling between perception and action, a distributed  architecture,
          dynamic interaction with the environment. Cognitive faculties are
          viewed as "emergent functionalities", properties that arise  from
          the  interaction  of the system with the environment. The proper-
          ties of the environment determine the behavior of the  system  as
          much  as  the  system's own properties. A system is made of auto-
          nomous specialized subsystems and the  overall  behavior  is  the
          result of the intended behaviors of all the subsystems.

          Rodney Brooks introduces his situated agents.

          Maes models action selection through behavior networks, which
          exhibit planning capabilities halfway between traditional goal-
          oriented planning and situated action.

Mamdani E.H. & Gaines B.R.: FUZZY REASONING (Academic Press, 1981)

          Each chapter is written by  an  authority  in  the  field.  Zadeh
          introduces  PRUF,  a  meaning representation language for natural
          languages that considers the intrinsic imprecision  of  languages
          as possibilistic rather than probabilistic in nature. Goguen pro-
          vides some mathematical foundations for the theory of fuzzy
          sets, leading from a few axioms (in the language of category
          theory) to the definition of operations on fuzzy sets that are
          parallel to those for ordinary sets (which, on the other hand,
          cannot be categorical).

          Some applications to linguistics, expert systems and  controllers
          are also discussed.
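
          The operations parallel to those of ordinary sets can be illus-
          trated with the standard max/min definitions (a textbook
          construction, not code from the book):

```python
def f_union(a, b):
    """Union of fuzzy sets: membership is the max of the memberships."""
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def f_intersect(a, b):
    """Intersection: membership is the min of the memberships."""
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def f_complement(a):
    """Complement: membership is 1 minus the membership."""
    return {x: 1.0 - m for x, m in a.items()}

tall = {"ann": 0.9, "bob": 0.4}
heavy = {"ann": 0.2, "bob": 0.7}
```

          With crisp (0/1) memberships these reduce to the ordinary set
          operations; with intermediate degrees, unlike a crisp set, a
          fuzzy set can overlap its own complement.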

Mandelbrot Benoit: THE FRACTAL GEOMETRY OF NATURE (W.H.Freeman, 1982)

          This is the book (a revision of 1977's "Fractals") that made
          fractals popular. Mandelbrot emphasizes the inability of clas-
          sical geometry to model the shapes of the real world, in partic-
          ular the complexity of natural patterns. Natural patterns (such
          as coastlines) are of infinite length. In order to provide meas-
          ures, Mandelbrot resorts to Felix Hausdorff's fractal dimension
          (a fraction that exceeds one, even though a curve's dimension
          should intuitively be one).
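
          For exactly self-similar curves the fractal dimension reduces to
          the similarity dimension D = log(N) / log(1/r), where the figure
          is made of N copies of itself scaled down by r. A quick check in
          Python (the Koch curve, a coastline-like pattern: 4 copies at
          scale 1/3):

```python
from math import log

def similarity_dimension(copies, scale):
    """D = log(N) / log(1/r): the dimension of a self-similar figure
    made of N copies of itself, each scaled down by factor r."""
    return log(copies) / log(1.0 / scale)

koch = similarity_dimension(4, 1 / 3)   # about 1.26: a fraction > 1
line = similarity_dimension(2, 1 / 2)   # exactly 1: an ordinary curve
```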

          Scaling and nonscaling  fractals,  self-mapping  fractals,  Brown
          fractals,   trema   fractals  are  introduced  along  with  their
          mathematical properties.

          Mandelbrot describes applications to coastlines, galaxy clus-
          ters, the physics of turbulence, the cosmological principle, and
          so forth, and discusses the relationship to artificial life
          (organic-looking nonlinear fractals) and chaos theory (nonlinear
          fractals that play the role of attractors for dynamic systems).

Mandler George: MIND AND BODY (Norton, 1984)

          An expanded and revised edition of  "Mind  and  Emotion"  (1975),
          which  first analyzed the relationship between cognition and emo-
          tion.

          After a generous history and survey of research  on  emotions  in
          cognitive  psychology,  Mandler offers his view on mind and cons-
          ciousness: the mind is a  general  information-processing  system
          that  employs  schemas  as  basic  cognitive  structures. Schemas
          represent environmental regularities.

          Mandler emphasizes  the  constructive  nature  of  consciousness:
          "consciousness  is a construction of phenomenal experience out of
          one or more of the available preconscious  schemas,"   a  process
          driven  by the most abstract schema relevant to the current goals
          of the individual.  One of the functions of consciousness  is  to
          enable  the  individual  to evaluate environmental conditions and
          action alternatives.

          Emotions are constructed out of autonomic arousal (arousal  of  a
          part of the nervous system called autonomic nervous system, which
          determines the intensity of the emotion) and evaluative cognition
          (meaning  analysis, which determines the quality of the emotion).
          Therefore, emotion is a product of  schemas,  arousal  and  cons-
          ciousness.  The function of emotions is to provide the individual
          with an optimal sense of the world, with the most general picture
          of  the  world  that  is consistent with current needs, goals and
          situations.

Marcus Mitchell: A THEORY OF SYNTACTIC RECOGNITION FOR NATURAL LANGUAGE (MIT Press, 1980)

          This book describes the famous Marcus parser.

Marek Wiktor & Truszczynski Miroslav: NON-MONOTONIC LOGIC (Springer Verlag, 1991)

          A rigorous, monumental work on the  foundations  of  nonmonotonic
          logic,  based  on  nonmonotonic  rules  of  proof  (defaults), or
          context-dependent  derivation  (the  context   determines   which
          derivation  rule  is valid). First-order default theories such as
          Reiter's and  modal  nonmonotonic  logics  such  as  Doyle's  and
          McDermott's  are  given  extensive treatments, while second-order
          logics such as McCarthy's circumscription are merely mentioned.

Margalef Ramon: PERSPECTIVES IN ECOLOGICAL THEORY (Univ of Chicago Press, 1968)

          In this study of the ecosystem as a cybernetic system, a number
          of biological quantities are given mathematical definitions.

          A basic property of nature is that any exchange of information
          between two systems increases the difference of information
          between them: the less organized system gives energy to the more
          organized one, and in parallel information is destroyed in the
          less organized system and created in the more organized one. The
          less organized system feeds the more organized.

          An ecosystem is a system controlled by the second law of  thermo-
          dynamics.

          A measure of ecological efficiency is given by  the  energy  flow
          per  unit  biomass (the primary production of  the system divided
          by the total biomass).
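
          In symbols, the measure is just the P/B ratio (the figures below
          are hypothetical, for illustration only):

```python
def ecological_efficiency(primary_production, biomass):
    """Energy flow per unit biomass: primary production P of the
    system divided by its total biomass B (the P/B ratio)."""
    return primary_production / biomass

# Hypothetical figures (not from the book): a maturing system
# maintains more biomass per unit of energy flow.
young = ecological_efficiency(primary_production=50.0, biomass=100.0)
mature = ecological_efficiency(primary_production=30.0, biomass=600.0)
```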

          Succession (the occupation of a territory by organisms) is a
          self-organizing process that develops a biological system in
          which the production of entropy per unit of information is
          minimized. This process consists of substituting biological com-
          ponents of the system with other biological components so as to
          preserve the same or more information at the same or lower ener-
          getic cost. Paradoxically, the system seeks to gain information
          from the environment only to use such information to block any
          further assimilation of information. During succession there is
          a trend towards increase in biomass, complexity, stratification,
          and diversity. The more entropy/energy-efficient systems are
          those that are best fit to survive. Therefore, succession is to
          ecology what evolution is to biology.

Marr David: VISION (MIT Press, 1982)

          Marr thinks that the vision system employs innate information to
          decipher the ambiguous signals that it perceives from the world.

          Processing of perceptual data is performed by "modules", each
          specialized in some function, which are controlled by a central
          module. In a fashion similar to Chomsky's and Fodor's views, the
          brain contains semantic representations (in particular a gram-
          mar) that are innate and universal (biological in nature), in
          the form of modules that are automatically activated; all con-
          cepts can be decomposed into such semantic representations. The
          processing of such semantic representations is purely syntactic.

          The physical signal sent by the world is received (in the form
          of physical energy) by transducers, which transform it into a
          symbol (in the form of a neural code) and pass it on to the
          input modules, which extract information and send it to the cen-
          tral module in charge of higher cognitive tasks.

          Each module corresponds to neural subsystems in the brain. The
          central module exhibits the property of being "isotropic" (able
          to build hypotheses based on any other available function) and
          "Quinian" (the degree of confirmation assigned to a hypothesis
          is conditioned by the entire system of beliefs).

          The visual system is decomposed into a number of independent
          subsystems. Such subsystems provide a representation of the
          visual scene at three different levels of abstraction: the "pri-
          mal sketch", a symbolic representation of the meaningful
          features of the image (anything causing sudden discontinuities
          in light intensity, such as boundaries, contours, textures); the
          "2.5-dimensional sketch", a representation centered on the
          visual system of the observer (e.g., it describes the surround-
          ing surfaces and their properties, mainly distances and orienta-
          tion), computed by a set of modules specialized in parameters of
          motion, shape, color, etc.; and finally the three-dimensional
          representation, which is centered on the object and is computed
          by Ullman's correspondence rules.

          Marr thinks that a system can be analyzed at three levels:
          the computational level (which mathematical function the system
          must compute, i.e. an account of human competence), the algo-
          rithmic level (which algorithm must be used, i.e. an account of
          human performance) and the physical level (which mechanism must
          implement the algorithm). Cognitive science should investigate
          the mind at the computational level.

Martin James: A COMPUTATIONAL MODEL OF METAPHOR INTERPRETATION (Academic Press, 1990)

          Martin does not believe that the process of comprehending a meta-
          phor is a process of reasoning by analogy. A metaphor is simply a
          linguistic convention within a linguistic community, an "abbrevi-
          ation" for a concept that would otherwise require too many words.
          There is no need for transfer of properties from one  concept  to
          another.

          A number of Lakoff-style primitive classes  of  metaphors  (meta-
          phors  that  are  part  of the knowledge of language) are used to
          build  all  the  others.   A  metaphor  is  therefore  built  and
          comprehended just like any other lexical entity.

Martin-Lof Per: INTUITIONISTIC TYPE THEORY (Bibliopolis, 1984)

          The theory of types is an application of intuitionistic logic. It
          provides  a  framework in which to implement the tasks of program
          specification, program  construction  and  program  verification.
          Expressions are built up from variables and constants by applica-
          tion and functional abstraction. The meaning of an expression  is
          provided  by  a  rule of computation. The mechanical procedure of
          computing the value of an expression  is  its  "evaluation".  The
          statement  "a  is  an  element of A" can be understood as "a is a
          proof of proposition A" or "a is a program for  the  solution  of
          A".  The  specification of a program is a type definition and the
          program itself can be derived formally as a proof.
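
          The propositions-as-types reading can be stated in a couple of
          lines of Lean (an illustration of the Curry-Howard idea, not
          notation from the book): the specification is a type, and a
          program inhabiting that type is at once a proof of the
          corresponding proposition.

```lean
-- "a is an element of A" read as "a is a proof of proposition A".
-- The specification "from a pair A × B one can obtain an A" is a type;
-- the program below is, formally, its proof.
def first {A B : Type} : A × B → A := fun p => p.1

-- The same term, read at the level of propositions, proves A ∧ B → A.
theorem and_left {A B : Prop} : A ∧ B → A := fun h => h.left
```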

Mason Stephen: CHEMICAL EVOLUTION (Clarendon Press, 1991)

          Mason attempts an explanation of the origin of the elements,
          molecules and living systems. His theory is close to the work of
          Julius Rebek and Stanley Miller, who are trying to create
          molecules that behave like living organisms.

Matthews Robert: LEARNABILITY AND LINGUISTIC THEORY (Kluwer Academic, 1989)

          A collection of papers that cover the relations between learning
          theory and natural language, from Gold's "identification in the
          limit" to Osherson's proof that the class of natural languages
          is finite.

Maturana Humberto: AUTOPOIESIS AND COGNITION (Reidel, 1980)

          The book contains two landmark essays: "Biology of cognition" and
          "Autopoiesis".

          Maturana argues that the relation with the environment molds  the
          configuration  of  a cognitive system. Autopoiesis is the process
          by which an organism can continuously reorganize its own
          structure. Adaptation consists in regenerating the organism's
          structure so that its relationship to the environment remains
          constant. An organism is therefore a structure capable of
          responding to the environment, and the stimulus is the part of
          the environment that is absorbed by the structure.

          Living systems are units of interaction. They only  exist  in  an
          environment.   They  cannot  be understood independently of their
          environment. They exhibit exergonic  metabolism,  which  provides
          energy  for endergonic synthesis of polymers, i.e. for growth and
          replication. The circular organization of living  organisms  con-
          stitutes  a homeostatic system whose function is to maintain this
          very same circular organization. It is such circular organization
          that  makes  a  living  system a unit of interaction. At the same
          time it is this circular organization  that  helps  maintain  the
          organism's  identity  through  its interactions with the environ-
          ment. Due to this circular organization, a  living  system  is  a
          self-referring system.

          Cognition is biological in the sense that the cognitive domain of
          an  organism is defined by its interactions with the environment.
          A living system operates as an inductive system and in a
          predictive manner: its organization reflects regularities in
          the environment ("what happened once will occur again").
          Living systems are cognitive systems. Living is a process of
          cognition. The
          internal state of a living organism is  changed  by  a  cognitive
          interaction in a way relevant to the circularity of its organiza-
          tion (to its identity). The nervous system enables a broader  way
          to interact and eventually self-consciousness.

          Maturana  assumes  that  intelligent   behavior   originates   in
          extremely  simple  processes: the living cell is nothing special,
          but many living cells one next to the other become a complex sys-
          tem thanks to autopoiesis.

          An autopoietic system is a network of transformation and
          destruction processes whose components interact to continuously
          regenerate the network. An autopoietic system holds constant
          its organization (its identity). Autopoiesis generates a
          structural coupling with the environment: the structure of the
          nervous system
          of  an organism generates patterns of activity that are triggered
          by perturbations from the environment and that contribute to  the
          continuing autopoiesis of the organism.  Autopoiesis is necessary
          and sufficient to characterize a living system.

          All living systems are cognitive systems. Cognition is simply
          the process by which an organism maintains itself by acting in
          the environment.

          Language is connotative and not denotative. Its  function  is  to
          orient  the  organism  within  its  cognitive  domain.   Maturana
          extends the term "linguistic" to any mutually generated domain of
          interactions (any "consensual domain").

          Maturana assumes that multi-cellular organisms are born when  two
          or  more  autopoietic  units  engage in an interaction that takes
          place more often than any of the interactions of each  unit  with
          the rest of the environment (a "structural coupling"). Inert
          elements become macromolecules, and macromolecules become organic
          cells,  and  so  on  towards  cellular  organisms and intelligent
          beings.

          The structures that are effectively built  are  those  that  make
          sense in the environment.

          Cognition is a purely biological phenomenon. Organisms do not use
          any  representational  structures, but their intelligent behavior
          is due only to the continuous change in their nervous system as
          induced  by perception.  Intelligence is action. Memory is not an
          abstract entity but simply the ability to recreate  the  behavior
          that  best couples with a recurring situation within the environ-
          ment.

          Even human society as a whole operates as a homeostatic system.

Maturana Humberto & Varela Francisco: THE TREE OF KNOWLEDGE (Shambhala, 1992)

          A popular introduction to Maturana's biology of  cognition,  cen-
          tered  around  the  concept  that  action and cognition cannot be
          separated: "all doing is knowing and all knowing  is  doing".   A
          living  organism  is  defined  by  the fact that its organization
          makes it continually self-producing (autopoietic), i.e. not  only
          autonomous  but  also  self-referring ("the being and doing of an
          autopoietic system are inseparable"). Life's origin is not a
          mystery: at some point in its history the Earth presented
          conditions that made the formation of autopoietic systems
          almost inevitable. The whole process of life depends not on the
          components
          of a living organism, but on  its  organization.  Autopoiesis  is
          about organization, not about the nature of the components.

          Basic concepts are defined: replication as a  process  that  gen-
          erates  unities  of  the  same class, copy as a process that gen-
          erates an identical unity, reproduction as a  process  that  gen-
          erates  two unities of the same class, ontogeny as the history of
          structural change in a unity  that  preserves  its  organization.
          Since ontogeny always happens in an environment, the organism has
          to use the environment as a medium to  realize  its  autopoiesis.
          There  occurs  a  "structural  coupling"  between a unity and its
          environment.

          Evolution is a natural drift, a consequence of  the  conservation
          of  autopoiesis  and adaptation. There is no need for an external
          guiding force to direct evolution. All that is needed is conservation
          of identity and capacity for reproduction.

          The nervous system enables the living organism to expand the  set
          of possible internal states and to expand the possible ways of
          structural coupling.

          When two or more living organisms interact recurrently, they gen-
          erate  a  social coupling. Language emerges from such social cou-
          pling. Language is a necessary condition for  self-consciousness.
          Consciousness therefore belongs to the realm of social coupling.

Mayr Ernst: POPULATION, SPECIES AND EVOLUTION (Harvard Univ Press, 1970)

          Mayr surveys the history of  evolutionary  theories  and  current
          evolutionary research. The modern synthesis can be summarized as:
          evolution is due to "the production of variation and the  sorting
          of variants by natural selection".

          Mayr focuses on the biological properties  of  species  and  then
          deals  with  population  variation  and genetics ("phenotypes are
          produced by genotypes  interacting  with  the  environment",  and
          genotypes  are  produced by the recombination of genes of a local
          population).

          Mayr focuses on variation ("the study of variation is  the  study
          of populations"). All populations contain enough genetic
          variation to fuel evolutionary change. Variation in turn poses
          problems for adaptation and speciation. Mayr explains the
          genetics of speciation by downplaying the role of geographic
          isolation and emphasizing the genetic reconstruction of
          populations.

          The species are the units of evolution. Speciation is the  method
          by which evolution advances.

          The structure of an organism necessarily reflects its  evolution-
          ary history.

Maynard Smith John: EVOLUTIONARY GENETICS (Oxford University Press, 1989)

          Maynard Smith also faced the paradox of the origin of life.  Even
          if one assumes that self-replicating entities came to exist spon-
          taneously, growing a body was almost impossible: long strings  of
          RNA  need enzyme assistants in order to replicate, but specifying
          these assistants requires long  strings  of  RNA...   One  cannot
          exist without the other already existing.

Maynard Smith John: THEORY OF EVOLUTION (Cambridge University Press, 1993)

          Maynard Smith is a Darwinian who tried to define progress in
          evolution as "an increase of information transmitted between
          generations". Maynard Smith believes that evolution was
          completely random: if played again, evolution might lead to
          completely different beings.

Mayr Ernst: THE GROWTH OF BIOLOGICAL THOUGHT (Harvard Univ Press, 1982)

          A monumental history of evolutionary biology, from Aristotle to
          Darwin, from Mendel to DNA.

Mayr Ernst: TOWARDS A NEW PHILOSOPHY OF BIOLOGY (Harvard Univ Press, 1988)

          In this collection of essays Mayr tackles biological themes from
          a philosophical standpoint.  Mayr debates extraterrestrial intel-
          ligent life, speciation, punctuated equilibria, etc.

          Mayr reiterates that the genes are not the units of evolution.

McClelland James & Rumelhart David: PARALLEL DISTRIBUTED PROCESSING vol. 2 (MIT Press, 1986)

          The second volume of the seminal  collection  of  articles  deals
          with   psychological   processes  (thought,  learning,   reading,
          speech) and biological mechanisms  (plausible  models  of  neural
          behavior) in the light of connectionism.

          Paul Smolensky attempts to bridge the symbolic level of cognitive
          science and the subsymbolic level of neurosciences.

McGinn Colin: THE PROBLEM OF CONSCIOUSNESS (Oxford Univ Press, 1991)

          Consciousness cannot be explained by humans, but only by an
          external being, because consciousness falls outside the
          "cognitive closure" of the human organism.

McNeill David: HAND AND MIND (Univ of Chicago Press, 1992)

          Following Adam Kendon, McNeill presents a unified theory of
          speech  and gestures, according to which gestures are an integral
          part of language.

          Gestures directly transfer mental images to visible  forms,  con-
          veying  ideas  that language cannot always express. Gestures con-
          tribute directly to the semantics  and  pragmatics  of  language.
          Gestures  transform  mental images into visual form and therefore
          express more than spoken language  can  express;  and,  symmetri-
          cally,  they build in the listener's mind mental images that spo-
          ken language alone could not build.  Gestures complement words in
          that  they  represent the individual's personal context and words
          carry this context to the level of  social  conventions.   Unlike
          words,   gestures   are  synthetic,  noncombinatorial  and  never
          hierarchical: they present meaning complexities without  undergo-
          ing the kind of (linear and hierarchical) decomposition that spo-
          ken language undergoes.  Gestures provide a holistic and  imagis-
          tic  kind of representation, while speech provides a analytic and
          linguistic representation. Speech  and  gesture  arise  from  the
          interaction (dialectic) of imagistic and linguistic mental opera-
          tions through a process of self-organization.

          The book offers a classification  of  gestures  (including  meta-
          phoric gestures) and a narrative theory of gestures.

McNeill David: PSYCHOLINGUISTICS (Harper & Row, 1987)

          The main thesis is that Saussure's linguistic paradigm  (language
          as  a system of static contrasts on the social level, i.e. langue
          vs parole, signifier vs signified, synchronic vs diachronic, syn-
          tagmatic  vs  paradigmatic,  linguistic value vs intrinsic value)
          and Vygotsky's psychological paradigm (language as a dynamic pro-
          cess  on  the  individual level) can be reconciled by positioning
          them at different points on the speech  developmental  time  axis
          (the  time it takes to think and build the sentence, not the time
          it takes to utter it).

          The book contains a clear introduction to Saussure's linguistics.
          McNeill's  methodology  relies  on gesture ("gestures are part of
          the sentence") as an additional source of evidence.

          McNeill also offers his own theory of spontaneous speech  genera-
          tion:  inner  speech symbols self-activate in appropriate concep-
          tual situations and generate speech. He recognizes two  fundamen-
          tal types of thinking, and assumes that during linguistic actions
          imagistic thinking is unpacked by syntactic thinking.

          Linguistic actions create self-aware consciousness: an individual
          becomes  self-conscious by mentally simulating social experience.
          Individual consciousness is social.

Mead George Herbert: MIND, SELF AND SOCIETY (Univ of Chicago Press, 1934)

          Mind and consciousness are products of socialization  among  bio-
          logical  organisms.  Language provides the medium for their emer-
          gence. The mind is therefore socially constructed.  Society  con-
          stitutes  an  individual  as  much  as the individual constitutes
          society.

          The mind emerges through a  process  of  internalization  of  the
          social  process of communication: reflecting to oneself the reac-
          tion of other individuals to one's gestures. The minded  organism
          is  capable  of  being an object of communication to itself. Mead
          focuses on the role of gestures, which signal the existence of  a
          symbol (and a meaning) that is being communicated (i.e., recalled
          in the other individual), and therefore constitute  the  building
          blocks  of language.  "A symbol is the stimulus whose response is
          given in advance".  Meaning is defined by  the  relation  between
          the  gesture  and the subsequent behavior of an organism as indi-
          cated to another organism by  that  gesture.   The  mechanism  of
          meaning  is  therefore present in the social act before the cons-
          ciousness of it emerges.

          Consciousness is not in the brain, but in the world. It refers to
          both the organism and the environment, and cannot be located sim-
          ply in either.  What is in the brain is the process by which  the
          self gains and loses consciousness (analogous to pulling down and
          raising a window shade).

          Mead draws a distinction between the "me" (the self of  which  we
          are mostly aware) and the "I" (the self that is unpredictable).

Metzinger Thomas: CONSCIOUS EXPERIENCE (Springer Verlag, 1996)

          A collection of papers on the problem of consciousness. Contribu-
          tions by Michael Tye, William Lycan, Daniel Dennett.

Michalski Ryszard, Carbonell Jaime & Mitchell Tom: MACHINE LEARNING I (Morgan Kaufman, 1983)

          A collection of seminal papers  on  machine  learning,  including
          Michalski's "A theory and methodology of inductive learning" (his
          "Star" methodology, i.e.  learning as a  heuristic  search  in  a
          space of symbolic descriptions driven by an incremental
          process of specialization and generalization) and "Conceptual
          clustering", Carbonell's "Learning by analogy", and Mitchell's
          "Learning by experimentation" (his "version spaces" technique,
          where a version space is the partially ordered set of all
          plausible descriptions of the concept, and an incremental
          process of refinement narrows the space down to a single
          description).
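          Mitchell's version-space idea can be sketched in a few lines of
          Python (a toy, extensional rendering: the space is enumerated
          outright rather than maintained via specific/general boundaries
          as in Mitchell's algorithm; the attributes and values are
          invented for illustration).

```python
from itertools import product

def version_space(domains, examples):
    """All conjunctive hypotheses (a value or '?' wildcard per
    attribute) consistent with the labeled examples; each new
    example prunes the partially ordered space."""
    hypotheses = product(*[vals + ['?'] for vals in domains])
    def covers(h, x):
        return all(hv == '?' or hv == xv for hv, xv in zip(h, x))
    return [h for h in hypotheses
            if all(covers(h, x) == label for x, label in examples)]

# Two attributes: sky in {sunny, rainy}, temp in {warm, cold}.
domains = [['sunny', 'rainy'], ['warm', 'cold']]
examples = [(('sunny', 'warm'), True),   # positive example
            (('rainy', 'cold'), False)]  # negative example
vs = version_space(domains, examples)
# The two examples narrow the nine possible hypotheses down to
# those requiring 'sunny', requiring 'warm', or requiring both.
```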

          Doug Lenat surveys his projects of  learning  by  discovery  (AM,
          Eurisko), Langley reports on BACON.

Michalski Ryszard, Carbonell Jaime & Mitchell Tom: MACHINE LEARNING II (Morgan Kaufman, 1986)

          A second set of articles on machine learning  research.  Includes
          reports  from  Patrick  Winston,  Thomas Dietterich, Paul Utgoff,
          Ross Quinlan, Michael Lebowitz, Yves Kodratoff, Gerald DeJong.

          Included are contributions  from  cognitive  architectures  (Paul
          Rosenbloom, John Anderson), qualitative physics (Kenneth Forbus),
          genetic algorithms (John Holland).

          Carbonell's analogical reasoning includes "transformational"
          reasoning (which transfers properties from one situation to
          another) and "derivational" reasoning (which derives the
          properties of one situation from another).

Michalski Ryszard & Kodratoff Yves: MACHINE LEARNING III (Morgan Kaufman, 1990)

          A third set of articles that reports on new developments from the
          main protagonists of the field.

Michalski Ryszard: MACHINE LEARNING IV (Morgan Kaufman, 1994)

          New developments in machine learning, with a  section  on  theory
          revision.

Miller George Armitage & Johnson-Laird Philip: LANGUAGE AND PERCEPTION (Cambridge Univ Press, 1976)

          This monumental book, from a wealth of psychological
          investigations of a number of perceptual phenomena, attempts a
          psychological study of the lexical component of language.

          "Sense" has two meanings, one perceptual and the  other  linguis-
          tic. The relation between perceptual and linguistic structures is
          mediated by a complex conceptual system: percepts and  words  are
          just  channels  to enter and exit this complex system. Labels are
          learned not by pure  association,  but  through  an  attentional-
          judgmental  abstraction  of  perception. We don't learn automatic
          links between percepts and words, we learn rules relating percep-
          tual  judgments  to  assertible utterances.  The relation between
          perception and language consists in learning metalinguistic rules
          that  specify  how  perceptual judgments can be used to verify or
          falsify sentences. The meaning of a sentence is the way of  veri-
          fying it.

          In Johnson-Laird's procedural semantics, a word's meaning is  the
          set  of conceptual elements that can contribute to build a mental
          procedure necessary to comprehend  any  sentence  including  that
          word.  Those  elements depend on the relations between the entity
          referred to by that word and any other entity it can be related to.
          Rather  than  atoms  of  meanings,  we are faced with "fields" of
          meaning, each including a number of concepts that are related  to
          each other.  The representation of the mental lexicon handles the
          intensional relations between words  and  their  being  organized
          into semantic fields.

          Along the way, the authors review hundreds of cognitive  theories
          about memory, perception and language.

Millikan Ruth: LANGUAGE, THOUGHT AND OTHER BIOLOGICAL CATEGORIES (MIT Press, 1987)

          Millikan aims for a general theory of "proper functions" that can
          be  applied  to  body  organs, instinctive behaviors and language
          devices (all elements used in verbal communication, from words to
          intonation).  Such proper functions explain the survival of those
          entities, in particular of language devices, and therefore eluci-
          date  what they "do".  Proper functions are related with the his-
          tory of a thing, with what it was designed to do.  Language  dev-
          ices  survive  because  they  establish  a symbiotic relationship
          between speakers and hearers. A proper  function  is  a  function
          that stabilizes the relationship between a speaker and a hearer
          with respect to a language device.

          Speaker meaning and sentence meaning are related, but neither can
          be used as a base for defining the other.

          Millikan then develops a general theory of  signs  and  thoughts.
          Intentionality is a natural phenomenon: intentions are members of
          proper-function categories  (i.e.,  biological  categories)  that
          have been acquired through an evolutionary process for their sur-
          vival value.  The intentionality of  language  can  be  described
          without  reference  to the speaker's intentions.  Representations
          are a special class of intentional devices,  which  include  sen-
          tences  and  thoughts:  when  they perform their proper function,
          their referents are identified. Beliefs are representations.

          Meaning has three parts: the proper function, Fregean  sense  and
          intension.

Millikan Ruth: WHAT IS BEHAVIOR? (MIT Press, 1991)

          Millikan, inspired by Dawkins, believes  that,  when  determining
          the  function of a biological "system", the "system" must include
          more than just the organism, something that  extends  beyond  its
          skin.  Furthermore,  the  system  often  needs the cooperation of
          other systems: the immune  system  can  only  operate  if  it  is
          attacked by viruses.

Mines Robert: ADULT COGNITIVE DEVELOPMENT (Praeger, 1986)

          A collection of essays on the subject, including Patricia Arlin's
          seminal "Problem finding" and Karen Kitchener's "Reflective
          judgment model".

          Arlin studies the cognitive developmental  process  that  enables
          creativity  in  art  and  science, or the emergence of postformal
          operational thinking that follows Piaget's traditional stages  in
          the young adult.

          Kitchener assumes that an adult keeps developing his or her  cog-
          nitive  faculties  and  therefore  refining the way decisions are
          taken in complex situations. Cognitive development continues  for
          the entire lifetime.

Minsky Marvin: SEMANTIC INFORMATION PROCESSING (MIT Press, 1968)

          A collection of articles  about  seminal,  historical  artificial
          intelligence systems, including Bertram Raphael's SIR for natural
          language understanding and Daniel Bobrow's STUDENT. Also includes
          John  McCarthy's 1958 article on "Programs with common sense" and
          Ross Quillian's 1966 paper on "Semantic memory".

          McCarthy proposes to build a  program  that  reasons  deductively
          from  a  body  of  knowledge until it concludes that some actions
          ought to be performed; then it adds the results of the actions to
          its  body  of  knowledge;  and  repeats its cycle.  McCarthy also
          sketches for the first time his situation calculus  to  represent
          temporally limited events as "situations".

          Quillian defines a semantic network as a relational directed
          acyclic graph in which nodes represent entities and arcs
          represent binary relations between entities.
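          Quillian's graph, and the property inheritance it supports, can
          be sketched in a few lines of Python (the nodes and edge labels
          below are illustrative, not Quillian's own notation).

```python
# A minimal semantic network: nodes are entities, labeled arcs
# are binary relations; 'isa' arcs link a concept to a more
# general one, so properties can be inherited.
edges = {
    ('canary', 'isa'): 'bird',
    ('bird', 'isa'):   'animal',
    ('bird', 'can'):   'fly',
    ('animal', 'can'): 'breathe',
}

def lookup(node, relation):
    """Follow 'isa' arcs upward until the relation is found,
    inheriting the property from the nearest general concept."""
    while node is not None:
        if (node, relation) in edges:
            return edges[(node, relation)]
        node = edges.get((node, 'isa'))
    return None

# lookup('canary', 'can') walks canary -> bird and finds 'fly'.
```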

Minsky Marvin: THE SOCIETY OF MIND (Simon & Schuster, 1985)

          The book summarizes all of the cognitive ideas  of  Minsky,  from
          frames to K-lines.

          In a similar vein to Dennett's homunculi, the cognitive architec-
          ture  of the society of mind assumes that intelligent behavior is
          due to the non-intelligent behavior of  a  very  high  number  of
          agents  organized  in a bureaucratic hierarchy.  The set of their
          elementary actions and their communications can produce more  and
          more complex behavior.

          Minsky assumes that a data structure called  "K-Line"  (Knowledge
          Line)  records  the  current  activity  (all the agents currently
          active) when a perception or problem solving task takes place and
          that the memory of that event or problem is a process of rebuild-
          ing what was active (the agents that were active) in the mind  at
          that  time.  Agents are not all attached the same way to K-lines.
          Strong connections are made at a certain  level  of  detail,  the
          "level-band",  weaker  connections  are  made at higher and lower
          levels. Weakly activated features correspond  to  assumptions  by
          default,  which  stay  active  only  as long as there are no con-
          flicts.  K-lines connect to K-lines and eventually form societies
          of their own.
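          The K-line mechanism described above can be caricatured in a
          few lines of Python (a loose sketch, not Minsky's formulation:
          level-bands, weak links and defaults are ignored, and the agent
          names are invented).

```python
class KLine:
    """Records which agents were active during an event; recalling
    the event means reactivating that same set of agents."""
    def __init__(self, active_agents):
        self.agents = frozenset(active_agents)

    def recall(self, mind):
        mind.active = set(self.agents)  # rebuild the old mental state

class Mind:
    def __init__(self):
        self.active = set()   # agents currently active
        self.memory = []      # one K-line per remembered event

    def experience(self, agents):
        self.active = set(agents)
        self.memory.append(KLine(self.active))

mind = Mind()
mind.experience({'grasp', 'look', 'red-detector'})
mind.experience({'listen', 'word-finder'})
mind.memory[0].recall(mind)   # "remembering" the first event
```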

          Minsky defines a frame as a packet of information that helps
          recognize and understand a scene, represent stereotypical
          situations and find shortcuts to ordinary problems.

          Memory is a network of frames, one relative to  each  known  con-
          cept.   Each  perception  selects  a  frame (i.e., classifies the
          current situation in a category) which then must  be  adapted  to
          that perception; and that is equivalent to interpreting the
          situation and deciding which action must be performed.
          Reasoning is adapting a frame to a situation. Knowledge imposes
          coherence on experience.

          The frame offers computational  advantages  (because  it  focuses
          reasoning  on the information that is relevant to the situation),
          and is biologically plausible (it does not separate cognitive
          phenomena such as perception, recognition, reasoning, understand-
          ing and memory).

          A frame is the description of a category by means of a prototypi-
          cal  member (i.e., its properties) and a list of actions that can
          be performed on any member of the category. Any other  member  of
          the  category can be described by a similar frame that customizes
          some properties of the prototype.  A prototype is simply a set of
          default properties. Default values express a lack of information,
          which can be remedied by new information (unlike  with  classical
          logic, which is monotonic).
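          The prototype-plus-defaults idea can be sketched in Python (an
          illustrative toy, not Minsky's notation; the frame names and
          slots are invented).

```python
class Frame:
    """A frame: default properties inherited from a prototype,
    which a more specific frame can non-monotonically override."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:          # own (overriding) value
            return self.slots[slot]
        return self.parent.get(slot) if self.parent else None

# A prototype is simply a set of default properties...
bird = Frame('bird', wings=2, flies=True)
# ...and new information overrides a default, unlike in
# classical (monotonic) logic.
penguin = Frame('penguin', parent=bird, flies=False)
```

Here `penguin.get('wings')` falls back to the prototype's default,
while `penguin.get('flies')` returns the overriding value.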

          A frame provides multiple representations of an object: taxonomic
          (conjunctions of classification rules), descriptive (conjunction
          of propositions of the default values) and functional (a proposi-
          tion on the admissible predicates).

Minsky Marvin: PERCEPTRONS: AN INTRODUCTION TO COMPUTATIONAL GEOMETRY (MIT Press, 1969)

          The book that delivered a near-fatal blow to research on neural
          networks by mathematically exposing the limitations of
          perceptrons.
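          The flavor of that limitation can be shown with a brute-force
          check in Python (a sketch, not the book's proof): a single
          linear threshold unit can compute AND but no choice of weights
          computes XOR, because XOR is not linearly separable.

```python
from itertools import product

def separable(truth_table, grid):
    """Brute-force search for weights and a threshold such that
    w1*x1 + w2*x2 >= t reproduces the given truth table."""
    for w1, w2, t in product(grid, repeat=3):
        if all((w1 * x1 + w2 * x2 >= t) == out
               for (x1, x2), out in truth_table.items()):
            return True
    return False

grid = [-2, -1, 0, 1, 2]
AND = {(0, 0): False, (0, 1): False, (1, 0): False, (1, 1): True}
XOR = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}
# AND succeeds (e.g. w1=w2=1, t=2); XOR fails for every choice,
# which is the kind of limitation the book establishes formally.
```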

Minsky Marvin: COMPUTATION (Prentice-Hall, 1967)

          Minsky built a computational connectionist theory on top of the
          McCulloch-Pitts binary neuron.

Mitchell Melanie: ANALOGY-MAKING AS PERCEPTION (MIT Press, 1993)

          The book describes the program built by the  author  and  Douglas
          Hofstadter, COPYCAT.

          Analogy making is viewed as a perceptual process, rather  than  a
          purely reasoning process. The interaction of perceptions and con-
          cepts gives rise to analogies.

          The computer model entails a large number of parallel processors,
          halfway between connectionist and symbolic systems. Concepts
          and perceptions are not well-defined entities, but dynamic
          processes
          that  arise  from  such  a  configuration.  The system employs an
          innovative stochastic search to find solutions.

Mitchell Melanie: INTRODUCTION TO GENETIC ALGORITHMS (MIT Press, 1996)

          A brief survey of the field.

Monod Jacques: CHANCE AND NECESSITY (Knopf, 1971)

          English edition of "Le hasard et la nécessité".

          Based on probabilities, Monod analyzes the  interplay  of  chance
          and natural selection in the evolution of life and concludes that
          life was born by accident; then Darwin's natural  selection  made
          it   evolve.   The  origin  of  biological  information  is  also
          inherently determined by chance: there is  no  causal  connection
          between  the  syntactic  (genetic)  information  and the semantic
          (phenotypic) information that results from it, as the former  and
          its effects are completely independent of one another.

          Monod deals with the paradox that  a  mono-dimensional  structure
          like  the  genome can specify the function of a three-dimensional
          structure like the body: the function of a protein is underspeci-
          fied in the code, but the environment of the protein determines a
          unique interpretation.

Montague Richard: FORMAL PHILOSOPHY (Yale University Press, 1974)

          Montague developed an  intensional  logic  that  employs  a  type
          hierarchy,  higher-order  quantification,  lambda abstraction for
          all types, tenses and modal operators. Its model theory is  based
          on coordinate semantics.

          Reality consists of two truth values, a set of entities, a set of
          possible  worlds and a set of points in time. A function space is
          constructed inductively from these elementary objects.

          The sense of an expression is supposed to  determine  its  refer-
          ence. The intensional logic makes explicit the mechanism by which
          this can happen.  The logic  determines  the  possible  sorts  of
          functions from possible indices (sets of worlds, times, speakers,
          etc)  to  their  denotations  (or  extensions).  These  functions
          represent the sense of the expression.

          In other words, sentences denote extensions in the real world.
          The denotation is compositional, meaning that a subpart of the
          intension extends or delimits the extension denoted by another.

          A name denotes the infinite set of properties of its referent.
          Common  nouns,  adjectives  and intransitive verbs denote sets of
          individual concepts  and  their  intensions  are  the  properties
          necessarily shared by all those individuals.

          Montague's semantics is truth conditional (to know the meaning of
          a  sentence is to know what the world must be for the sentence to
          be true, the meaning of a sentence is the set of its truth condi-
          tions),  model  theoretic  (a  way  to  carry  out the program of
          truth-conditional semantics that involves building models of  the
          world  which yield interpretations of the language) and uses pos-
          sible worlds (the meaning of a sentence depends not just  on  the
          world  as  it  is  but on the world as it might be, i.e. on other
          possible worlds).

          Montague used his intensional logic to derive a semantic,  model-
          theoretic interpretation of a fragment of the English language:
          through a rigorously mechanical process, a  sentence  of  natural
          language  is  translated  into  an  expression of the intensional
          logic and the model-theoretic interpretation of  this  expression
          serves as the interpretation of the sentence.

          Montague realized that categorial grammars provide  a  unity  of
          syntactic and semantic analyses.

          Rather than defining a semantic interpretation directly on
          syntactic structures, Montague provides the semantic interpretation of
          a sentence by showing how to translate it into formulas of inten-
          sional  logic  and  how to interpret semantically all formulas of
          that logic.  Montague assigns a set of basic expressions to  each
          category  and  then defines 17 syntactic rules to combine them to
          form complex phrases. An analysis tree shows  graphically  how  a
          meaningful  expression is constructed from basic expressions. The
          tree shows all applications of syntactic rules down to the  level
          of  basic  expressions.  The translation from natural language to
          intensional logic is then performed by  employing  a  set  of  17
          translation rules that correspond to the syntactic rules. Syntac-
          tic structure determines semantic interpretation.  The  semantics
          of  the  intensional logic is given as a possible-world semantics
          relative to moments of time:  "points  of  reference"  (pairs  of
          worlds and moments) determine the extensions of expressions whose
          meanings are intensions.
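          The compositional machinery can be illustrated with a toy
          sketch (an illustration only, not Montague's actual fragment
          or his 17 rules; the words and the model are invented): word
          meanings are lambda functions, each syntactic rule pairs with
          a translation rule that composes them, and truth is evaluated
          against a small model.

```python
# Toy Montague-style fragment (invented example, not the actual system).

# Model: a set of entities and the extension of one intransitive verb.
entities = {"john", "mary"}
runs = {"john"}                        # extension of "run"

# A proper name denotes the set of properties of its referent,
# represented here as a function from properties to truth values.
def john(prop):
    return prop("john")

def mary(prop):
    return prop("mary")

# An intransitive verb denotes a property: a function from entities
# to truth values.
def run(entity):
    return entity in runs

# Syntactic rule "NP + VP -> S" pairs with the translation rule
# "apply the subject's denotation to the verb's denotation".
def sentence(subject, verb):
    return subject(verb)

print(sentence(john, run))             # True  ("John runs")
print(sentence(mary, run))             # False ("Mary runs")
```

          The "type-raised" treatment of names as functions over
          properties mirrors the claim above that a name denotes the
          set of properties of its referent.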

          Montague believes  there  should  be  no  theoretical  difference
          between natural languages and artificial languages of logicians.

          A universal grammar is a mathematical framework capable  of  sub-
          suming  a description of any system that might be considered as a
          language.

Moore Robert: LOGIC AND KNOWLEDGE REPRESENTATION (CSLI, 1995)

          Collects Moore's  writings  on  "knowledge  and  action",  belief
          theory and autoepistemic logic.

Morowitz Harold: ENERGY FLOW IN BIOLOGY (Academic Press, 1968)

          The thesis of the book is that the flow of energy through a  sys-
          tem acts to organize the system. The apparent paradox between the
          second law of thermodynamics (the universe tends towards increas-
          ing  disorder)  and  biological  evolution  (life  tends  towards
          increasing organization) is solved by realizing that thermodynam-
          ics  applies  to systems that are approaching equilibrium (either
          adiabatic, i.e. isolated, or isothermal), whereas natural systems
          are  usually  subject  to flows of energy/matter to or from other
          systems. Steady-state systems (where the inflow and  the  outflow
          balance  each  other) are particular cases of nonequilibrium sys-
          tems.

          Schrodinger's vision that "living organisms  feed  upon  negative
          entropy"  (they  attract  negative entropy in order to compensate
          for the entropy increase they create by living) can  be  restated
          as:  the existence of a living organism depends on increasing the
          entropy of the rest of the universe.
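          The restated claim amounts to simple entropy bookkeeping for
          an open system, sketched below with invented numbers: the
          organism's internal entropy may decrease only if the entropy
          it exports to the surroundings compensates, so the total
          never decreases.

```python
# Entropy bookkeeping for an open system (numbers invented,
# arbitrary units): the organism orders itself at the expense of
# its surroundings.
dS_organism = -5.0       # internal entropy decreases (more order)
dS_environment = +8.0    # entropy exported as heat and waste
dS_universe = dS_organism + dS_environment

assert dS_universe >= 0  # the second law holds for the total system
print(dS_universe)       # 3.0
```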

Morowitz Harold: FOUNDATIONS OF BIOENERGETICS (Academic Press, 1978)

          A classic textbook that introduces thermodynamic concepts
          (energy, temperature, entropy and information) and the laws of
          statistical mechanics, and then applies them to biological
          systems and to phenomena such as solar radiation.
          Nonequilibrium (irreversible) thermodynamics is introduced.

          Morowitz's theorem states that the flow of energy through a  sys-
          tem  leads  to  cycling in that system. The flux of energy is the
          organizing factor in a dissipative system. When energy flows
          into a system from a source at a higher kinetic temperature,
          the upper energy levels of the system become occupied and take
          a finite time to decay into thermal modes. During this period
          energy is stored at a higher free energy than in the
          equilibrium state. Systems of complex
          structures  can  store large amounts of energy and achieve a high
          amount of internal order.

          Therefore, a dissipative system develops an internal order with a
          stored  free  energy that is stable, has a lower internal entropy
          and resides some distance from thermostatic equilibrium. Further-
          more, a dissipative system selects stable states with the largest
          possible stored energy.

          The cyclic nature of dissipative  systems  can  be  seen  in  the
          periodic  attractors.  Their cyclic nature allows them to develop
          stability and structure within themselves.

Morowitz Harold: ENTROPY AND THE MAGIC FLUTE (Oxford University Press, 1993)

          A collection of short and very entertaining essays on  intriguing
          topics.

Morris C.W.: FOUNDATIONS OF THE THEORY OF SIGNS (University Of Chicago Press, 1938)

          Morris revised Peirce's theory of signs and introduced the modern
          terminology.

Myers Terry: REASONING AND DISCOURSE PROCESSING (Academic Press, 1986)

          A collection of papers on discourse structure and analysis.

          Includes Johnson-Laird's "Reasoning without logic", a critique of
          mental  logic, and Wilson's and Sperber's "Inference and implica-
          ture in utterance interpretation", on their theory of relevance.

Nagel Thomas: MORTAL QUESTIONS (Cambridge Univ Press, 1979)

          Contains the famous "What is it like to be a bat": we  can  learn
          all  about  the  brain  mechanism of a bat's sonar system without
          having the slightest idea of what it is like to  have  the  sonar
          experiences of a bat.

Nagel Thomas: THE VIEW FROM NOWHERE (Oxford Univ Press, 1986)

          "We can conceive of things only as they appear to us and never as
          they  are  in themselves." We can only experience how it feels to
          be ourselves. We can never experience how it feels  to  be  some-
          thing else, for the simple reason that we are not something else.
          As Nagel wrote in a famous paper, we  can  learn  all  about  the
          brain  mechanisms  of  a  bat's  sonar  system without having the
          slightest idea of what it is like to have the  sonar  experiences
          of a bat.

          Nagel stresses the possibility that the human brain may be inade-
          quate to fully understand the world and therefore itself.

          Therefore we can never be sure that our viewpoint is an objective
          viewpoint. We can only be sure that our viewpoint is definitely a
          very subjective viewpoint. At the same time we should  not  limit
          ourselves to objective reality, as the subjectivity of conscious-
          ness is part of reality. A fully  objective  view  of  the  world
          would omit the fact that, of all things, the world contains "me",
          my self, and it is through this self that I  observe  the  world.
          Demonstrative  words  such  as  "I",  "here"  and "now" cannot be
          reduced to objective reality.

          Nagel thinks that identity comes from the (physical) brain, since
          it  is  the  only  part of a body that a person would not survive
          without. Brain transplants would  be  inherently  different  from
          heart  or  kidney  transplants. If a region of the brain does not
          perform its function properly  and  a  region  can  be  found  in
          another  brain  that would perform that function properly, can we
          transplant it and still have the  same  person?  Nagel  does  not
          answer this question.

          Consciousness cannot be "counted":  schizophrenic  patients  have
          neither one nor two consciousnesses. Brain hemispheres cannot
          compete, even when they have been separated. They have been pro-
          grammed to work in tandem.

Neale Stephen: DESCRIPTIONS (MIT Press, 1990)

          Neale's semantic theory is based on Bertrand Russell's theory
          of descriptions.

Neisser Ulric: COGNITION AND REALITY (Freeman, 1975)

          Cognition is defined as the skill of dealing with knowledge. Such
          knowledge  comes  from  the  environment.  Rather  than trying to
          understand which information  the  mind  can  process,  attention
          should  be paid to which information is available in the environ-
          ment. The mind developed to cope with that information.

          Neisser, in partial agreement with Gibson, presents  an  alterna-
          tive  approach  to  perception  which  is  based on an ecological
          orientation. Organisms pick up information from the  environment.
          Neisser  differs from Gibson in that he argues in favor of direc-
          tionality of exploration by the organism:  the  organism  is  not
          completely  passive  in the hands of the environment, but somehow
          it has a cognitive apparatus that directs its search for informa-
          tion.

          Schemata (analogous to Selz's and  Bartlett's  schemata)  account
          for  how  the  organism can gather the available information from
          the environment.  Between perception and action  there  exists  a
          direct relation. The schema accounts for adaptive behavior while
          preserving the pre-eminence of cognitive processes. The organism
          selects  information  from the environment based on its anticipa-
          tory schemata. "We can see only what we know how to look for". At
          every instant the organism constructs anticipations of information
          that enable it to pick it up  when  it  becomes  available.  Once
          picked up, information may in turn result in a change of the ori-
          ginal schema, to direct further exploration of  the  environment.
          Perception  is  therefore  a  perennial  cycle,  from schemata to
          action (schemata direct action) to information (action picks  up
          information)  to schemata (information modifies schemata).  Sche-
          mata are part of the nervous system.

          Perception of meaning also depends on schematic control of infor-
          mation  pickup.   Perception  is not about classifying objects in
          categories.

          The cyclical theory of perception  also  explains  how  the  mind
          "filters"  the  huge  amount of information that would exceed its
          capacity.

          An orienting schema of  the  nearby  environment,  or  "cognitive
          map", guides the organism around the environment. A cognitive map
          contains schemata of the objects in the environment  and  spatial
          relations between the objects.

          Even mental imagery is reduced to  perceptual  anticipations  for
          picking  up  useful  information.  Images are plans for obtaining
          information.

          Perception and cognition transform  the  perceiver:  an  organism
          "is" the cognitive acts it engages in.

Neisser Ulric: MEMORY OBSERVED (Freeman, 1982)

          A collection of historical papers on psychological research about
          memory.  Included are Freud, Luria, Stern, Bateson, etc.

Neisser Ulric: CONCEPTS AND CONCEPTUAL DEVELOPMENT (Cambridge University Press, 1987)

          Neisser identifies five kinds of self-knowledge:  the  ecological
          self  (situated  in  the  environment),  the "interpersonal self"
          (situated in the society of selves), both  based  on  perception,
          the private self, the conceptual self and the narrative self.

Neisser Ulric: CONCEPTS RECONSIDERED (Cambridge Univ Press, 19##)

          A collection of papers, including Barsalou's 1987 "The  instabil-
          ity  of  graded  structures",  which proved that concepts are not
          stable structures (concepts are built on the fly, given the  con-
          text,  and each instance can be quite different from the previous
          one).

Neisser Ulric: THE REMEMBERING SELF (Cambridge University Press, 1994)

          A collection of articles on the "narrative self", the  fact  that
          human  beings  remember  what happened to them.  Remembering is a
          skill that must be learned. Thus, the remembering self must  have
          a development of its own.

          Jerome Bruner believes in a multiplicity of narratives.  There is
          not  a single, static remembered self. What we remember is influ-
          enced by social and cultural factors.  Self-narratives don't even
          depend so much on memory as on thinking.  "Self is  a perpetually
          rewritten story".

Neisser Ulric: THE PERCEIVED SELF (Cambridge Univ Press, 1994)

          A collection of articles from distinguished authors on the  "eco-
          logical  self"  (situated  in the environment) and the "interper-
          sonal self" (situated in the society of selves).

Nelson Raymond: LOGIC OF MIND (Kluwer Academic, 1989)

          The book presents a comprehensive and  ambitious  theory  of  the
          mind as a computational system made of rules that are embodied in
          the nervous system.

          Nerve networks, grammars and cognitive systems are all reduced to
          automata  (systems  of  computational  rules). Nelson defends the
          view that computers and minds are the  same  type  of  automaton,
          especially against the misapplication of Godel's theorem.

          A theory of belief and action is also  developed  using  abstract
          Turing  machines  and  automata  models.   Reference derives from
          intentionality, and intentions can be reduced to ways of  comput-
          ing  (based  on  the  idea  of  partial recursive functions). The
          intentional features of the mind can therefore  be  explained  in
          mathematical  terms.   Nelson  builds  a "logic of acceptance" to
          deal with perception: a stimulus pattern is perceived  if  it  is
          accepted  by  the  perceiver  as  a  given type, and this process
          depends on the perceiver's expectations (i.e., perceptual belief
          is fulfilled expectation). Desire is then defined in terms of
          belief and action. Both belief and desire are  therefore  reduced
          to  mathematical  quantities, and ultimately to automata computa-
          tion.
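          The reduction of perception to automata computation can be
          sketched with a finite automaton (the machine and the stimulus
          alphabet below are invented for illustration, not Nelson's
          formalism): a pattern is "perceived" if the automaton, which
          encodes the perceiver's expectation, accepts it.

```python
# A finite automaton as a "logic of acceptance": a stimulus pattern
# is perceived iff the machine encoding the expectation accepts it.
def accepts(pattern, transitions, start, accepting):
    state = start
    for symbol in pattern:
        state = transitions.get((state, symbol))
        if state is None:          # expectation violated: not perceived
            return False
    return state in accepting

# Expectation: an alternating light/dark stimulus starting with light.
transitions = {("q0", "L"): "q1", ("q1", "D"): "q0"}
print(accepts("LDLD", transitions, "q0", {"q0"}))   # True:  perceived
print(accepts("LLDD", transitions, "q0", {"q0"}))   # False: rejected
```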

          A theory of truth for  a  language  corresponding  to  perceptual
          belief and a theory of meaning (based on recursive functions) are
          worked out.

Newell Allen & Simon Herbert: HUMAN PROBLEM SOLVING (Prentice-Hall, 1972)

          A gigantic study on human behavior from  the  point  of  view  of
          information  processing  and  one  of the milestones of cognitive
          science.

          By conducting experiments, the  authors  concluded  that  problem
          solving  involves a mental search through a problem space of pos-
          sible solutions in which each step is guided by rules  of  thumb,
          or heuristics.

          A problem space consists of a set of knowledge states, a  set  of
          operators  on  knowledge  states,  the initial state, the desired
          final state. Problem solving takes place by search in the problem
          space  until  the  desired knowledge state is achieved. Knowledge
          about the environment is fundamental  in  order  to  guarantee  a
          highly selective search through the problem space.
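          The problem-space idea can be sketched in a few lines (a toy
          numeric puzzle invented here, not code from the book): states
          are numbers, operators transform states, and a heuristic
          (distance to the goal) guides a best-first search.

```python
# Toy problem space in the Newell-Simon sense: knowledge states,
# operators, initial and goal states, heuristic-guided search.
import heapq

initial, goal = 2, 11
operators = [lambda n: n + 3, lambda n: n * 2]   # state operators

def heuristic(state):                # rule of thumb: distance to goal
    return abs(goal - state)

def solve():
    frontier = [(heuristic(initial), initial, [initial])]
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen or state > 100:         # prune revisits/overshoots
            continue
        seen.add(state)
        for op in operators:
            nxt = op(state)
            heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None

print(solve())                       # → [2, 5, 8, 11]
```

          The heuristic makes the search highly selective: states that
          look closer to the goal are expanded first, as the paragraph
          above describes.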

          As an example, the Logic Theorist is a  heuristics-based  problem
          solver  whose task is to find proofs for theorems in the proposi-
          tional calculus.  The General Problem  Solver  is  an  even  more
          ambitious program.

          The cognitive model is one in which human intelligence is due  to
          a  set  of  production rules controlling behavior and to internal
          information processing.  Both  the  mind  and  the  computer  are
          physical-symbol systems.

Newell Allen & Rosenbloom Paul: THE SOAR PAPERS (MIT Press, 1993)

          A collection of papers  on  the  unified  cognitive  architecture
          developed  over  a  decade  by  Rosenbloom,  John Laird and Allen
          Newell that attempts  to  explain  how  a  cognitive  system  can
          improve its skills through experience.

          The universal weak method is an organizational framework  whereby
          knowledge determines the weak methods employed to solve the prob-
          lem, i.e.  knowledge controls the behavior of the rational agent.
          Universal  subgoaling  is  a  scheme whereby goals can be created
          automatically to deal with the  difficulties  that  the  rational
          agent encounters during problem solving.

          The engine of the architecture is driven by production rules that
          fire  in  parallel  and  represent task-dependent knowledge.  The
          architecture maintains a context which is  made  of  four  slots:
          goal,  problem space, state and operator.  A fixed set of produc-
          tion rules determines which objects have to become current,  i.e.
          fill  those  slots.  In other words, they determine the strategic
          choices to be made after each round of parallel processing.
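          The recognize-act cycle over the four context slots can be
          sketched as follows (the slot names come from the text; the
          productions themselves are invented examples, not actual Soar
          rules):

```python
# Simplified recognize-act cycle: rules match the context in
# parallel, then their proposals fill the four slots.
context = {"goal": "get-coffee", "problem-space": None,
           "state": "at-desk", "operator": None}

# Each rule: (condition on the context, (slot, proposed value)).
rules = [
    (lambda c: c["goal"] == "get-coffee" and c["problem-space"] is None,
     ("problem-space", "navigation")),
    (lambda c: c["problem-space"] == "navigation" and c["operator"] is None,
     ("operator", "walk-to-kitchen")),
    (lambda c: c["operator"] == "walk-to-kitchen",
     ("state", "at-kitchen")),
]

for _ in range(3):                       # rounds of parallel matching
    proposals = [p for cond, p in rules if cond(context)]
    for slot, value in proposals:        # decision: fill the slots
        context[slot] = value

print(context["state"])                  # → at-kitchen
```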

Newell Allen: UNIFIED THEORIES OF COGNITION (Harvard Univ Press, 1990)

          Newell divides cognition into several levels. The  program  level
          represents  and manipulates the world in the form of symbols. The
          knowledge level is built on top of the symbolic level and is  the
          level  of rational agents: an agent has a body of knowledge, some
          goals to achieve and some actions that it can perform. An agent's
          behavior  is  determined  by  the "principle of rationality": the
          agent performs those actions that, on the basis of the  knowledge
          it has, bring it closer to the goals.

          General intelligent behavior requires  symbol-level  systems  and
          knowledge-level systems.

          Newell then broadens his division of cognitive levels by  includ-
          ing physical and biological states. The whole band can be divided
          into four bands: neural, cognitive, rational and social. The cog-
          nitive  band  can  be divided based on the response times: at the
          memory level the response time (the time required to retrieve the
          referent  of a symbol) is about ten milliseconds; at the decision
          level the response time is 100 milliseconds (the time required to
          manipulate knowledge); at the compositional level it is one
          second (the time required to build actions); and at the
          execution level, ten seconds (the time required to perform the
          action).

          In the rational band the system appears as a  goal-driven  organ-
          ism,  capable  of processing knowledge and of exhibiting adaptive
          behavior.

          Newell surveys a  number  of  cognitive  theories  and  cognitive
          architectures, particularly SOAR, which is offered as a candidate
          for a unified theory of cognition.

Nicolis Gregoire & Prigogine Ilya: SELF-ORGANIZATION IN NONEQUILIBRIUM SYSTEMS (Wiley, 1977)

          A milestone and monumental work that redefined the way scientists
          approach  natural phenomena and brought self-organizing processes
          to the forefront of the study of complex systems such as biologi-
          cal and social ones.

          The book introduces nonequilibrium thermodynamics, which leads to
          bifurcation   theory   and   to   the   stochastic   approach  to
          fluctuations.

          Under special circumstances the distance from equilibrium and the
          nonlinearity  of  a  system  become sources of order, driving the
          system to ordered configurations (or  "dissipative  structures").
          In  dissipative  structures  nonequilibrium  becomes  a source of
          order.

          The multiplicity of solutions in nonlinear systems can be  inter-
          preted  as  a process of gradual "emancipation" from the environ-
          ment.

          A stunning number and variety  of  fields  of  application,  from
          chemistry  to  sociology.  In  this  framework the most difficult
          problems of biology, from  morphogenesis  to  evolution,  find  a
          natural  model.  A thermodynamics of evolution and even equations
          for ecosystems are proposed.

Nicolis Gregoire & Prigogine Ilya: EXPLORING COMPLEXITY (W.H. Freeman, 1989)

          An introduction to the theory of dynamical systems. After provid-
          ing  examples  of self-organization in chemical, cosmological and
          biological systems, systems  are  partitioned  into  conservative
          systems  (which  are  governed  by  conservation laws for energy,
          translational momentum and angular momentum,  and  give  rise  to
          reversible processes) and dissipative systems (which give rise to
          irreversible processes). Equilibrium  states  and  nonequilibrium
          constraints  are  defined  operationally,  with  the  emphasis on
          fluxes between a system and the environment. A system subject to
          the  action of a nonequilibrium constraint becomes susceptible to
          change as localized tendencies to deviate  from  equilibrium  are
          amplified, thus becoming sources of innovation and
          diversification. The potentialities of nonlinearity are
          dormant at equilibrium but are revealed by nonequilibrium:
          multiple solutions
          appear and therefore diversification of behavior  becomes  possi-
          ble.  Dissipative  structures  emerge under nonequilibrium condi-
          tions.  Therefore, nonlinear systems driven away from equilibrium
          can generate instabilities that lead to bifurcations and symmetry
          breaking beyond bifurcation. The methodology of phase  spaces  is
          introduced  to study nonlinear nonequilibrium systems, leading to
          formal definitions of limit cycles,  attractors,  fractals,  etc.
          Catastrophe  and  chaos  theories are viewed as special cases.  A
          model  of  bifurcation  and  evolution  is   worked   out.    The
          relationship   between   stochastic  and  deterministic  behavior
          (between chance and necessity) is analyzed, as well as the origin
          of irreversibility.
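          The qualitative point about bifurcation (a control parameter
          crossing a threshold multiplies the stable solutions) can be
          seen in the logistic map, a standard textbook example not
          taken from this book: below the first period-doubling
          bifurcation trajectories settle on one value, above it they
          oscillate between two.

```python
# Long-run behavior of the logistic map x -> r*x*(1-x): the number
# of distinct values in the attractor changes at the bifurcation.
def attractor(r, x=0.5, transient=1000, keep=4):
    for _ in range(transient):           # discard transient behavior
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):                # sample the settled orbit
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return sorted(set(orbit))

print(len(attractor(2.8)))   # 1: a single fixed point
print(len(attractor(3.2)))   # 2: a period-two cycle
```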

Nicolis Gregoire: INTRODUCTION TO NONLINEAR SCIENCE (Cambridge University Press, 1995)

          The ultimate textbook on nonlinear methods for describing complex
          systems.   From  an interdisciplinary introduction, the book goes
          on to introduce in a rigorous manner the vocabulary and tools  of
          invariant manifolds, attractors, fractals, stability, bifurcation
          analysis, normal forms, chaos, Lyapunov exponents, entropies.

Nilsson Nils: THE MATHEMATICAL FOUNDATIONS OF LEARNING MACHINES (Morgan Kaufmann, 1990)

          A revised edition of his seminal 1965 "Learning Machines".

Nilsson Nils: PRINCIPLES OF ARTIFICIAL INTELLIGENCE (Tioga, 1980)

          One of the most popular textbooks of artificial intelligence. The
          focus  is  on  production  systems  (heuristic search algorithms,
          resolution and unification, planning systems) with a  brief  men-
          tion of semantic networks.

Nunberg Geoffrey: THE PRAGMATICS OF REFERENCE (Indiana Univ Linguistic Club, 1978)

          There is a fundamental ambiguity in all terms: a term always
          has a potentially infinite number of referents, depending
          on the context.  Nunberg argues that a term cannot have  a  stan-
          dard  referent,  but  its  referents  can be derived one from the
          other through a number of elementary functions  (such  as  "owner
          of"  or  "location  of")  which can be recursively applied in any
          combination.
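          The derivation of referents by recursively applicable
          elementary functions can be sketched as follows (the data and
          the "ham sandwich" deferred-reference example are
          illustrative; the dictionaries below stand in for functions
          such as "owner of" or "location of"):

```python
# "The ham sandwich left without paying": the term's referent is
# derived from its literal referent via an elementary function.
orderer_of = {"ham-sandwich": "customer-7"}    # who ordered it
location_of = {"customer-7": "table-3"}        # where they are

def derive(referent, *functions):
    # apply elementary referring functions, in any combination
    for f in functions:
        referent = f[referent]
    return referent

print(derive("ham-sandwich", orderer_of))               # customer-7
print(derive("ham-sandwich", orderer_of, location_of))  # table-3
```

          Which chain of functions a listener actually applies is what
          the four principles mentioned above are meant to decide.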

          Four principles determine which functions a listener is going  to
          employ to derive the most appropriate referent.

          A term is used in a "normal" way when it is consistent  with  the
          conventions of the linguistic community.

          A metaphor is a discourse in which  the  speaker  a)  employs  an
          expression  E  to  refer  to  F in context C even if there exists
          another expression to refer to F that the speaker knows is
          easier to understand; b) knows that employing E is not rational
          but expects the listener to realize this and that he is aware  of
          it;  c) acts according to a cooperative principle and expects the
          listener to be aware of it. Metaphors are not the exclusive
          province of poets. Quite the opposite: people who are not very
          fluent in the language tend to use metaphors more often.