Dalenoort G.J.: THE PARADIGM OF SELF-ORGANIZATION (Gordon & Breach, 1989)

          A collection of articles from experts in various disciplines that
          all  deal  with  autonomous  systems. Topics include cybernetics,
          evolution,  complexity,  morphogenesis,  self-reference.   Csanyi
          offers  a  general  theory  of  evolution based on a "replicative
          model" of self-organization.

Dalenoort G.J.: THE PARADIGM OF SELF-ORGANIZATION II (Gordon & Breach, 1994)

          The new collection includes articles on learning,  the  arrow  of
          time, cellular automata, cognition, etc.

Damasio Antonio: DESCARTES' ERROR (G.P. Putnam's Sons, 1995)

          Damasio is trying to build a neurobiology of rationality. In this
          book  he  provides  a neurophysiological analysis of memory, emo-
          tions and consciousness.

          The book has three  themes.   1.  Human  reason  depends  on  the
          interaction  among  several brain systems rather than on a single
          brain centre.  2. Feelings  are  views  of  the  body's  internal
          organs.  Feelings  are  percepts and they are as cognitive as any
          other percept.  3.  The  mind  is  about  the  body:  the  neural
          processes  that  are  experienced  as  the  mind  are  about  the
          representation of the body in the brain. The mental requires  the
          existence  of  a body for more than mere support: the mind is not
          a phenomenon of the brain alone. The mind derives from the entire
          organism as a whole.  The mind reflects two types of interaction:
          between the body and the brain, and between them and the environ-
          ment.

          Damasio formulates the somatic-marker hypothesis: a special class
          of  feelings,  acquired  by  experience, express predicted future
          outcomes of known situations and help the mind make decisions.

          The neural basis for the self resides in the continuous reac-
          tivation of 1. the individual's past experience (which provides
          the individual's sense of identity) and 2. a representation of
          the individual's body (which provides the individual's sense of a
          whole). The self is continuously reconstructed. This is a purely
          non-verbal process: language is not a prerequisite for conscious-
          ness. Nonetheless, language is the source of the  "I",  a  second
          order  narrative  capacity.  Damasio's "embodied mind" is closely
          related to Edelman's "self imbued with value".

          Damasio's theory of convergence  zones  (not  presented  in  this
          book)  is  tackling  the  issue  of consciousness.  When an image
          enters the brain via the visual cortex, it is channelled  through
          "convergence  zones"  in  the  brain until it is identified. Each
          convergence zone handles a category of objects  (faces,  animals,
          trees, etc): a convergence zone does not store permanent memories
          of words and concepts but helps  reconstructing  them.  Once  the
          image has been identified, an acoustical pattern corresponding to
          the image is constructed by another area of the brain. Finally an
          articulatory  pattern  is  constructed  so that the word that the
          image represents can be spoken.  There  are  about  twenty  known
          categories   that   the   brain   uses   to  organize  knowledge:
          fruits/vegetables, plants, animals, body parts, colors,  numbers,
          letters,  nouns,  verbs, proper names, faces, facial expressions,
          emotions, sounds.  "Convergence  zones"  are  indexes  that  draw
          information  from  other  areas of the brain. The memory of some-
          thing is stored in bits at the back of the brain (near the  gate-
          ways  of the senses): features are recognized and combined and an
          index of these features is formed  and  stored.  When  the  brain
          needs  to  bring back the memory of something, it will follow the
          instructions in that index, recover all  the  features  and  link
          them  to  other  associated  categories.   As information is pro-
          cessed, moving from station to station through  the  brain,  each
          station creates new connections reaching back to the earlier lev-
          els of processing. These connections always allow the brain to
          work  in reverse.  Convergence zones may be common to all indivi-
          duals or  different  from  individual  to  individual,  based  on
          experience.
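
          A minimal sketch of this indexing idea, assuming purely illustra-
          tive names and data structures (this is not Damasio's own formal-
          ism): features are stored separately, a convergence zone keeps
          only an index of them, and recall reconstructs the memory by
          following the index back to the features.

            # Toy sketch of a "convergence zone" as an index over stored
            # features; names and data are illustrative assumptions.
            feature_store = {}       # fragments near the sensory "gateways"
            convergence_zones = {}   # category -> {item: [feature keys]}

            def perceive(category, item, features):
                """Store the features and build an index entry for them."""
                keys = []
                for i, f in enumerate(features):
                    key = (item, i)
                    feature_store[key] = f
                    keys.append(key)
                convergence_zones.setdefault(category, {})[item] = keys

            def recall(category, item):
                """Rebuild a memory by following the index to the features."""
                keys = convergence_zones[category][item]
                return [feature_store[k] for k in keys]

            perceive("faces", "grandmother", ["grey hair", "glasses", "smile"])
            print(recall("faces", "grandmother"))
            # ['grey hair', 'glasses', 'smile']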

          Emotions are the brain's interpretation of reactions  to  changes
          in  the  world.   Emotional  memories involving fear can never be
          erased. The prefrontal cortex, amygdala and right cerebral cortex
          form a system for reasoning that gives rise to emotions and feel-
          ings.  The prefrontal cortex and the amygdala  process  a  visual
          stimulus  by comparing it with previous experience and generate a
          response that is transmitted both to the body and to the back  of
          the brain.

          Convergence zones are organized in a hierarchy: lower convergence
          zones  pass information to higher convergence zones.  Lower zones
          select relevant details from sensorial information and send  sum-
          maries  to  higher zones, which successively refine and integrate
          the information.  In order to be conscious of something a  higher
          convergence  zone  must retrieve from the lower convergence zones
          all the sensory fragments that are  related  to  that  something.
          Therefore, consciousness occurs when the higher convergence zones
          fire signals back to lower convergence zones.

Davidson Donald: INQUIRIES INTO TRUTH AND INTERPRETATION (Clarendon Press, 1984)

          Davidson is the main proponent of "truth-conditional  semantics",
          which  asserts  the  central  place in the theory of meaning of a
          theory of truth.

          With his "anomalous monism", Davidson promotes the  token  theory
          of  identity:  the same instance of a mental state may correspond
          to different neural states at different times.   Given  a  mental
          state,  it  is  not  possible to relate it to a specific physical
          state.  The same event may be both mental and physical, but there
          is no relationship between the two descriptions.  There cannot be
          any relationship between the  psychological  vocabulary  and  the
          neurophysiological vocabulary.

          Davidson's theory of the mind  rests  on  three  principles.   At
          least  some  mental events interact causally with physical events
          (causal interaction).  Events related as cause  and  effect  fall
          under  strict  deterministic  law  (the  nomological character of
          causality).  There are no strict deterministic laws  under  which
          mental  events  can  be predicted and explained (the anomalism of
          the mind).  The physical and the  mental  realms  have  essential
          features which are somehow mutually incompatible. There can be no
          laws connecting the mental with the physical. Therefore there can
          be no theory connecting psychology and neurophysiology.

          Davidson's conception of the mind is based  on  the  intentional.
          Propositional  attitudes  constitute  the basic vocabulary of the
          mind. Laws of the mind would then be laws expressed in  terms  of
          intentional expressions.

          Davidson thinks that rationality (interpreting agents in terms of
          beliefs  and desires) provides the sole criterion for psychologi-
          cal judgement.  His view of the mental is holistic: the  attribu-
          tion  of  any  mental  state  to a person requires that the total
          system of  propositional  attitudes  be  maximally  coherent  and
          rational.

          Tarski simply replaced the  universal  and  intuitive  notion  of
          "truth"  with an infinite series of rules which define truth in a
          language relative to truth in another language.   Davidson  would
          rather  assume  that  the concept of "truth" need not be defined,
          since it is already known to everybody. Then he can use the
          correspondence theory of truth to define meaning: the meaning of a
          sentence is defined as what would be the case if the sentence were true.

          The task for a theory of meaning is then to  generate  all  meta-
          sentences  (or  "T-sentences")  for all sentences in the language
          through a recursive  procedure.  This  account  of  meaning  only
          relies on truth conditions.
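
          A toy sketch of such a recursive procedure, assuming an invented
          two-sentence vocabulary and a hypothetical true_in function (this
          is not Davidson's or Tarski's actual formal apparatus): truth
          conditions are computed recursively and each sentence is paired
          with its T-sentence.

            # Toy recursive truth definition for a tiny propositional
            # language; the vocabulary and the "world" are assumptions.
            world = {"snow is white": True, "grass is red": False}

            def true_in(sentence, world):
                """Recursively compute the truth value of a sentence."""
                if isinstance(sentence, str):          # atomic sentence
                    return world[sentence]
                op = sentence[0]
                if op == "not":
                    return not true_in(sentence[1], world)
                if op == "and":
                    return (true_in(sentence[1], world)
                            and true_in(sentence[2], world))
                raise ValueError("unknown operator")

            def t_sentence(sentence):
                """Generate the meta-language T-sentence for a sentence."""
                return f'"{sentence}" is true if and only if {sentence}'

            print(t_sentence("snow is white"))
            print(true_in(("and", "snow is white", ("not", "grass is red")),
                          world))                      # True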

          A sentence is meaningful in virtue of being  true  under  certain
          conditions  and not others.  To know the meaning of a sentence is
          to know the conditions under which the sentence would be true.  A
          theory  of  a  language must be able to assign a meaning to every
          possible sentence of the  language.  Just  like  Chomsky  had  to
          include a recursive procedure in order to explain the speaker's
          unlimited ability to recognize sentences of the language, so
          Davidson has to include a recursive procedure in order to explain
          the speaker's unlimited ability to understand sentences of the
          language.

          Natural languages exhibit an additional  difficulty  over  formal
          languages:  they  contain  deictic elements (demonstratives, per-
          sonal pronouns, tenses) which cause truth values to fluctuate with
          time and speaker. Davidson therefore proposes to employ a pair of
          arguments for his truth predicate, one specifying the speaker and
          one specifying the point in time.

          Language transmits information.  The  speaker  and  the  listener
          share  a fundamental principle to make such transmission as effi-
          cient as possible.  Such "principle of charity" asserts that  the
          interpretation  to  be  chosen is the one in which the speaker is
          saying the highest number of true statements.  During the conver-
          sation  the  listener  tries  to build an interpretation in which
          each sentence of the speaker is coupled with  a  truth-equivalent
          sentence.

Davies Paul: GOD AND THE NEW PHYSICS (Penguin, 1982)

          The book surveys the mysteries of the universe, life, mind, cons-
          ciousness,  particle  physics  by  updating  the  debate  to  the
          theories of non-linear dynamics and self-organization.

Davies Paul: ABOUT TIME (Touchstone, 1995)

          A popular introduction to relativistic and quantum time,  roaming
          from  big  bang to black holes, speculating on time reversal  and
          tachyons.

Davies Paul: THE MIND OF GOD (Touchstone, 1993)

          Davies reviews quantum cosmological theories of the universe  and
          recent  mathematical  advances  to prove that there is still room
          for a God.  A wealth of philosophical and scientific notions  are
          mixed,  related  and  compared.  Davies  investigates whether the
          universe can create itself, the relationship between the world of
          Mathematics and the physical world, Artificial Intelligence, etc.

Davis Ernest: REPRESENTATION OF COMMON-SENSE KNOWLEDGE (Morgan Kaufmann, 1990)

          A comprehensive  and  well-organized  survey  of  research  areas
          related to common sense.

          Common sense is a key factor in acting in the real world. Common
          sense  encompasses  both reasoning methods and knowledge that are
          obvious to humans but that are quite distinct from the  tools  of
          classical mathematics.

          To prepare adequate logical  theories  for  dealing  with  common
          sense, Davis introduces the notation of first-order logic. Essen-
          tial to reproducing the power of ordinary language  is the use of
          operators on sentences. Operators on sentences that apply only to
          a limited class of sentences, commute with  the  quantifiers  and
          the  boolean  operators,  are  referentially  transparent and are
          closed under inference, are "extensional  operators"  (e.g.,  the
          temporal  operator).  Another  class of operators on sentences is
          that of modal operators  (possible  and  necessary),  which  obey
          their  own set of axioms. The meaning of a modal logic is defined
          in terms of possible-world semantics.

          Classical logic needs also to be extended with plausible  reason-
          ing:  degrees  of belief, default rules, inference in the face of
          absence of information, inference about vague quantities, analog-
          ical reasoning, induction and so forth.  A crucial tool for plau-
          sible reasoning is non-monotonic logic, which  allows  inferences
          to  be  made  provisionally  and,  if necessary, withdrawn at any
          time.  Next, the domain of inference must be somehow closed,  and
          this can be done in a number of ways: the closed-world assumption
          (all relations relevant to the problem are mentioned in the prob-
          lem statement), circumscription (extends the closed-world assump-
          tion to non-ground formulas as well, i.e.  assumes  that  as  few
          objects  as  possible have a given property), default theory (all
          members of a class have all the properties characteristic of  the
          class  if  it  is  not otherwise specified). Uncertainties can be
          represented with probability theory.
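
          A minimal sketch of two of these devices, with invented facts and
          predicates: the closed-world assumption (whatever is not stated
          is taken to be false) and a default rule whose conclusion can be
          withdrawn when new information arrives.

            # Toy closed-world assumption and default rule; the facts are
            # illustrative assumptions, not taken from Davis' book.
            facts = {("bird", "tweety"), ("bird", "opus"),
                     ("penguin", "opus")}

            def holds(pred, x):
                """Closed-world assumption: unstated facts count as false."""
                return (pred, x) in facts

            def flies(x):
                """Default: birds fly unless they are known to be penguins."""
                return holds("bird", x) and not holds("penguin", x)

            print(flies("tweety"))   # True  (the default applies)
            print(flies("opus"))     # False (blocked by the exception)

            # A new fact retracts the earlier, provisional conclusion:
            facts.add(("penguin", "tweety"))
            print(flies("tweety"))   # now False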

          Common sense domains to be dealt with include:  physical  quanti-
          ties  (whose  values  can  be ordered, that  can be subdivided in
          partially ordered intervals, that can be assigned signs based  on
          their derivatives, whose relations can be expressed in the form of
          transition networks, whose behavior can be expressed in the  form
          of qualitative differential equations); time (whose operators can
          be either introduced  in  a  world  of  discrete,  self-contained
          situations  and  events  or as part of a modal logic); space (and
          the related concepts of distance, containment, overlapping, boun-
          daries); physics (according to both DeKleer's component model and
          Forbus'  qualitative  process  theory);  propositional  attitudes
          (specifically  the  relationship  between  belief and knowledge);
          actions (planning systems); and socializing (speech acts).

          Davis does not discuss how common sense is  learned  and  whether
          some common sense is innate.

          Davis thinks that a first-order logic can be endowed with  axioms
          that reflect the laws of the physical world. Davis' world is made
          of a finite set of solid objects that move in space  and  do  not
          overlap.  Each  object has three properties: a mass distribution,
          an elasticity  coefficient  and  a  friction  coefficient.  Davis
          defines an ontology which includes terms such as quantity, dis-
          tance, objects, and so forth. The theory suggests that an ade-
          quate representation of the physical world needs to employ
          Euclidean geometry, an ontology of spatio-temporal properties and
          a set of axioms about what is going on in the world.

Davis Randall & Lenat Douglas: KNOWLEDGE-BASED SYSTEMS IN ARTIFICIAL INTELLIGENCE (McGraw-Hill, 1982)

          The first part is devoted to the system AM,  that  was  built  to
          study  and simulate discovery of heuristics in solving mathemati-
          cal problems.  The second part describes TEIRESIAS, a  system  to
          acquire and maintain large knowledge bases.

Davis Steven: CONNECTIONISM (Oxford University Press, 1992)

          An introduction to the field with emphasis on how  higher  cogni-
          tive tasks can be explained by lower connectionist models.

Davis Steven: PRAGMATICS (Oxford University Press, 1991)

          An ambitious collection of seminal papers on speech acts  (Grice,
          Kripke,  Searle),  indexicals (Kaplan's logic of demonstratives),
          implicature and relevance (Grice's "Logic and conversation", Wil-
          son  &  Sperber's  "Inference  and  implicature"), presupposition
          (Lewis, Stalnaker), metaphor (Davidson, Searle,  Sperber  &  Wil-
          son).

          Robyn Carston advances a proposal to  distinguish  two  kinds  of
          semantics:  a  linguistic semantics (a theory of utterance) and a
          truth-conditional semantics (a theory of propositions).  Linguis-
          tics  semantics  provides  the  input  to  pragmatics and the two
          together provide the input to truth-conditional semantics.

          Kent Bach views linguistic communication as an  inferential  pro-
          cess and presents a theory of speech acts.

          John Searle attempts to explain "indirect speech acts"  in  terms
          of his theory of speech acts and metaphor as "speaker's utterance
          meaning" (a set of principles allow the  hearer  to  compute  the
          possible meanings).

Dawkins Richard: THE SELFISH GENE (Oxford Univ Press, 1976)

          This is one of the books that introduced new methods of  thinking
          about  life, behavior and evolution. Dawkins argues that the gene
          is the fundamental unit of evolution. Genes drive  evolution  and
          genes drive behavior.

          Darwin's assumption that natural selection favors those individu-
          als  best  fitted  to  survive  and reproduce can be restated as:
          natural selection favors those genes that replicate through  many
          generations.

          The level at which selection occurs is not that of the individual
          organism,  but  that of particular stretches of genetic material.
          Organisms are merely the  means  that  genes  use  to  perpetuate
          copies of themselves.

          The universe is dominated by stable structures. And one  particu-
          lar stable structure is a molecule that makes copies of itself.

          Dawkins proves with a number of examples at all levels that self-
          ishness is pervasive in nature.

          Dawkins also introduces the concept of "memes", the analogue of
          genes  for  cultural  transmission. A meme is an idea that repli-
          cates itself from mind to mind, such as a slogan or a refrain  or
          a proverb.  Memes behave in a very similar way to genes.

Dawkins Richard: THE EXTENDED PHENOTYPE (OUP, 1982)

          The main claim of the book is  that  the  gene  is  the  unit  of
          natural  selection.   Genes  are  selected  by  their  phenotypic
          effects. Such phenotypic effects are not limited to the individual
          organism, but reach out to an "extended" phenotype, consisting of
          the world the organism interacts with. Genes  ensure  their  sur-
          vival by means of phenotypic effects on the world.

          The organism alone does not have biological relevance. What makes
          sense  is  an open system made of the organism and its neighbors.
          For example, a cobweb is still part of the spider. The control of
          an  organism  is never complete inside and null outside: there is
          rather a continuum of degrees of control, which allows partiality
          of  control inside (e.g., parasites operate on the nervous system
          of their hosts) and an extension of control outside  (as  in  the
          cobweb).  The  genome of a cell can be viewed as a representation
          of the environment inside the cell.

          Conversely, within the boundaries of an  organism  there  can  be
          more than one psychology (as in the case of schizophrenics).

          The same arguments apply to memes, which are nonbiological repli-
          cators. The extended phenotype of a meme is defined by phenotypic
          effects such as words, music, images, gestures, fashion, ...

          Throughout the book, Dawkins downplays the importance  of  single
          organisms  and  emphasizes the "extended phenotype" which extends
          as far as its control reaches out.

          The book is mainly written for biologists  and  debates  numerous
          alternative theories.

Dawkins Richard: THE BLIND WATCHMAKER (Norton, 1987)

          A very accessible introduction to a variety of topics  in  evolu-
          tionary biology.

          The theme of the book is the paradox of natural selection, which on
          one hand proceeds in a blind and purposeless way and on the
          other hand produces the illusion of more and more complex design.

          Dawkins compares biological systems and artificial  systems:  the
          theory of radar and the theory of bats' echolocation
          developed in parallel, each unaware of the other's results,
          and eventually formulated the same computational model.

          Complex organisms came to be by gradual,  cumulative  transforma-
          tions  from simple beginnings.  Dawkins emphasizes that darwinism
          is not a theory of random chance.  Order is created by the "cumu-
          lative" property of selection.

          Dawkins  speculates  on  the  processes  that  originated   life,
          reiterates  his  view  that genes are selected by virtue of their
          interaction with the environment (including other genes),  proves
          that  punctuated  equilibrium  is  consistent with darwinism, and
          compares differing darwinist theories.

Dawkins Richard: RIVER OUT OF EDEN (Basic, 1995)

          This is an introduction for  the  general  audience  to  Dawkins'
          ideas and to modern evolutionary theories.

          Within his own theory of the genes' struggle and competition  for
          survival, Dawkins tries to answer philosophical questions such as
          how life began and why we are alive at all. Nature's excesses
          and  cruelties  are explained by the need of genes to survive and
          reproduce.  Suffering,  pain  and  fear  to  the  most   horrible
          extremes, are part of this game.

DeDuve Christian: VITAL DUST (Basic, 1995)

          A detailed and fascinating history of life on the earth,  how  it
          "emerged"  and how it developed, from the first catalysts of life
          all the way up to the mind.

Delahaye Jean-Paul: FORMAL METHODS IN ARTIFICIAL INTELLIGENCE (Halsted, 1987)

          An introduction to recursive functions, Church's  thesis,  Lambda
          calculus,  first-order  predicate  calculus, resolution, unifica-
          tion; all the logical tools needed to understand the Prolog  pro-
          gramming language.

Depew David & Weber Bruce: DARWINISM EVOLVING (MIT Press, 1994)

          A competent, comprehensive and exhaustive history and  survey  of
          evolutionary theories from Darwin to Gould and Lewontin.

          The first half of this book  is  a  history  of  darwinism.   The
          second  part  deals with Galton, Mendel, Fisher, Wright up to the
          modern day synthesis. The third part starts with the discovery of
          the DNA and ends with modern models of evolution.

          The book shows that the idea of natural selection  has  undergone
          three stages of development, parallel to developments in the phy-
          sical sciences: the deterministic dynamics of Isaac  Newton,  the
          stochastic  dynamics  of  Clerk Maxwell and Ludwig Boltzmann, and
          now the dynamics of complex systems. If initially Darwin's theory
          could be related to Newton's physics in that it assumed an exter-
          nal force (natural selection) causing change in living  organisms
          (just  like  Newton  posited  an external force, gravity, causing
          change in the motion of astronomical objects), with the invention
          of  population  genetics  by  Ronald  Fisher and others darwinism
          became stochastic (the thermodynamic  model  of  genetic  natural
          selection, in which fitness is maximized like entropy), just what
          physics had become with Boltzmann's theory of gases.

          Population genetics showed  that  Darwin's  theory  (that  change
          occurred  by the natural selection of many minute variations) and
          Mendel's theory (that change occurred suddenly, by mutation) were
          complementary: changes occur in the frequencies of genes.

          The authors point to the dynamics of complex systems, and specif-
          ically  to the idea of self-organization, as the next step in the
          study of evolution.

DeMey Marc: THE COGNITIVE PARADIGM (Univ of Chicago Press, 1982)

          Philosophical reflections on the emergence of  a  new  scientific
          revolution, the cognitive paradigm.

Dennett Daniel: CONTENT AND CONSCIOUSNESS (Routledge, 1969)

          The distinction between mental and physical is ambiguous. The dis-
          tinction  between psychological language and scientific language,
          on the other hand, corresponds to the distinction between  inten-
          tional  sentences  and  extensional sentences. In order to reduce
          the mind to the body,  one  must  reduce  intentionality  to  the
          extensional.

          An extensional reduction of  intentional  sentences  is  possible
          with  internal  events  serving  as the conditions of ascription.
          There could be a system  of  internal  states  whose  extensional
          description provides also an intentional description. The problem
          is whether it makes sense to ascribe content to neural states.

          Dennett distinguishes between consciousness (conscious  of,  non-
          intentional sense) and awareness (aware that, intentional sense).
          Consciousness is then merely awareness of the contents of  inter-
          nal states.

          Knowledge does not divide into independent  parts  and  therefore
          cannot  be listed and therefore people can't really say what they
          know.

Dennett Daniel: THE INTENTIONAL STANCE (MIT Press, 1987)

          Dennett's theory of intentionality is based on the folk  concepts
          of belief, desire, intention and expectation.

          In order to explain and predict the behavior of a system one  can
          employ  three  strategies:  a "physical stance", which infers the
          behavior from the physical structure and the laws of  Physics;  a
          "design  stance", which infers the behavior from the function for
          which it was designed (we know when an alarm clock will go off even
          if  we  don't  know  the internal structure of the clock); and an
          "intentional stance", which infers the behavior from the  beliefs
          and desires that the system must exhibit to be rational.

          The "intentional stance" is the set of beliefs and desires of  an
          organism  that  allow  an observer to predict its actions. Belief
          and desires are not internal  states  of  the  mind  which  cause
          behavior,  but  simply  tools  which  are  useful  to predict the
          behavior. No system is really intentional.

          The process that defines how beliefs and desires are shaped,  and
          how they affect the organism's behavior, has biological roots. If
          an organism survived  natural  selection,  the  majority  of  its
          beliefs  are  true  and  the way the organism employs them is the
          most "rational" (beliefs are used to satisfy its desires).

          From a biological standpoint, the intentional stance defines  the
          relationship  between an organism and its environment. The organ-
          ism continuously reflects its environment, as the organization of
          its  system  implicitly contains a representation of the environ-
          ment.

          Intentional states are not internal states  of  the  system,  but
          descriptions  of  the  relationship  between  the  system and its
          environment.  An intentional state is not separate from the  oth-
          ers,  but,  holistically,  it  makes  sense only to deal with the
          cognitive state of an organism as a whole, and with its relation-
          ship as a whole with the environment.  The propositional attitude
          is defined  by a "notional attitude", which is independent of the
          real world, and a component which depends on the real world.

          A notional attitude is defined in a "notional world". An  agent's
          notional  worlds  are the worlds in which all the agent's beliefs
          are true and all the agent's desires are feasible. My doppelganger
          on Putnam's twin Earth and I have the same notional world,
          but different propositional attitudes (because  we  live  in  two
          different environments).

          Intentionality defines an organism as a function of  its  beliefs
          and desires, which are products of natural selection. The more an
          agent's notional worlds stray from the real world, the less
          the agent is capable of adapting to it.

          What creates beliefs and desires is the  biological  function  of
          cognitive  mechanisms.  Beliefs  must be true and desires must be
          feasible to be useful to survival.

          However, Brentano's thesis (that the intentional  is  irreducible
          to the physical) is true, because strictly speaking there are no
          such things as beliefs and desires.

          Dennett's theory allows for an interpretation within an  ecologi-
          cal  context,  in agreement with Gibson's and Neisser's theories;
          within an ethological context (cognitive profile of  a  species);
          and within a phylogenetic context (how an organism evolved to
          adapt continuously to its environment).

Dennett Daniel: CONSCIOUSNESS EXPLAINED (Little & Brown, 1991)

          Dennett's ambition is an empirical theory of the mind. By extend-
          ing  the  Cartesian  Theatre  (the  idea that there is a centered
          locus in the brain that directs consciousness)  with  a  multiple
          draft  model  (in which all varieties of perceptions and thoughts
          are accomplished by parallel, multitrack brain  processes),  Den-
          nett  offers  an  explanation  of  how the brain represents time,
          anchored around the principle that "probing  precipitates  narra-
          tives"  (people  are not always conscious of what is happening to
          them).  Consciousness is spread around the  brain  and  in  time.
          Consciousness   is   nonlocalized  and  nonlinear.   Despite  the
          apparent unity and continuity of  our  experience,  consciousness
          does  not  involve  the  existence  of a single central self, but
          arises from the interaction with the environment.

          Consciousness exists because it helps survive and it evolved from
          non-consciousness to reasoning and then to memes.  Dennett thinks
          that qualia, and conscious states in general, don't exist.  Cons-
          ciousness  is a collection of memes. The brain is a computer that
          collects memes.

          The mind must be reduced to a set of  cognitive  functions.  Each
          function  must  be  reduced to simpler cognitive problems. And so
          forth, each time reducing the intelligence needed  to  solve  the
          problem,  until  we reach a level at which problems can be solved
          with no more intelligence than the one that can  be  found  in  a
          machine.  At each level the behavior  of a system is given by the
          interaction of a set of interconnected components  ("homunculi").
          Each component's behavior is itself defined by a set of intercon-
          nected components.

          Having relied massively on Artificial Intelligence ideas, Dennett
          also takes aim at Searle's Chinese Room thought experiment and
          attacks each of its three premises.

Dennett Daniel: DARWIN'S DANGEROUS IDEA (Simon & Schuster, 1995)

          Dennett offers a personal view of Darwin's contribution  to  Sci-
          ence.   Darwin's "dangerous idea" is that design can emerge spon-
          taneously via an algorithmic process, since evolution by  natural
          selection can be viewed as an algorithmic process. A mindless and
          mechanical (and relatively simple)  process  is  responsible  for
          creating  the  complex systems of life. Design is created at each
          run of the algorithm and conserved as the starting point for  the
          next  run.  Complex design such as exhibited in living systems is
          therefore the product of a process of  "accumulation  of  design"
          carried out over time.

          Dennett emphasizes that what appears as a very  intelligent  pro-
          cess  is in reality made of many tiny stupid steps (he proposed a
          similar explanation for the intelligence of the human mind).

          Dennett defends the Baldwin effect, originally proposed by James
          Mark Baldwin in 1896: that species capable of "reinforced learning" evolve
          faster.  Unlike Lamarck, who thought organisms  can  pass  on  to
          their offspring acquired characteristics, Baldwin thought that
          organisms can pass on their capacity to acquire  certain  charac-
          teristics.

          The actual genomes that have ever existed are  obviously  just  a
          tiny  percentage  of  all  the genomes that could possibly exist.
          Biological possibility can be reduced to a search (the  "tree  of
          life") in the space of all possible genomes (the "design" space).
          An  organism  is  more  or  less  biologically  possible  if  the
          corresponding genome can be more or less easily accessed from one
          of the existing genomes. There may be local  and  universal  con-
          straints  (biological  laws) that limit the possible routes, just
          like there are physical laws that limit which objects can  exist.
          For  example,  laws  of  form may  constrain the relation between
          genotype and phenotype.

          Similar considerations apply to human artifacts,  from  books  to
          religions,  from  languages  to  Dawkins'  memes.   They are also
          indirectly artifacts of the  same  process  that  created  living
          organisms.  Therefore  one can conceive of a unified design space
          that is navigated by both biological and human creativities.

          Dennett  discusses  how  life  can  have  created  itself.  Self-
          replicators  are  too complex to have occurred by coincidence. He
          resorts to hypotheses advanced by Cairns-Smith and Eigen.

          Dennett emphasizes that the code reader is as  important  as  the
          code:  there are infinite ways that the instructions contained in
          the DNA could be implemented, and the "decoder" determines  which
          one will actually be chosen. The message is ambiguous and it can
          be disambiguated only by the specific decoder that was  meant  to
          decode  it. From this observation Dennett concludes that the code
          and the decoder must have evolved together. In  general,  Dennett
          argues  that "any functioning structure carries information about
          the environment in which its function works".

          Biology is just another form of engineering.

          Dennett strenuously defends "adaptationism" against Gould's and
          Lewontin's critique (a famous 1979 paper) and argues that it must
          form the core of evolutionary biology.

          The human species  is  unique  in  that  it  relies  on  cultural
          transmission  of  information, and such process is carried out by
          Dawkins' memes, the units of cultural  evolution.  The  mind  was
          created  when  the brain was invaded by memes: memes have created
          the mind, not the other way around.  Consciousness is therefore a
          collection of memes that is implemented in the brain as a sort of
          software in a machine that evolved in nature. Dennett in practice
          denies  the  existence of truly conscious states.  Meaning itself
          is an emergent product of the meaningless algorithm that  carries
          out evolution.

          Dennett defends Artificial Intelligence from  Penrose's  critique
          (based    on    Godel's   incompleteness   theorem).   Artificial
          Intelligence actually fits very well in this  scenario  of  algo-
          rithmic (physical and mental) evolution.

DeSousa Ronald: THE RATIONALITY OF EMOTION (MIT Press, 1987)

          A study on emotions from a biological (rather than psychological)
          perspective.

          Emotions are not irrational behavior. They play the same role  as
          perceptions: they contribute to create beliefs and desires.  Emo-
          tions are perceptions that play a role in  beliefs  and  desires.
          Emotions  are  learned  like  a language. Their semantics derives
          from the paradigm scenarios in terms  of  which  they  have  been
          learned.   The  intentionality of emotions leads to a classifica-
          tion of objects of emotion.  Emotions are also defined  by  their
          relation  to  time (e.g., an event lasting for years cannot count
          as a surprise).

          Emotions and reason are not antagonists.  Reason and emotion  are
          complementary  cognitive  skills.  DeSousa  talks of "axiological
          rationality".  Emotions control the crucial  factor  of  salience
          and  can  therefore restrict the combinatorial possibilities that
          reason has to face (thereby avoiding the frame problem).
           Deutsch, J. Anthony: THE STRUCTURAL BASIS OF  BEHAVIOR  (Univer-
          sity of Chicago Press, 1960)

          Deutsch's studies on the rat's behavior  reached  the  conclusion
          that  rats  make  purely topological maps of their environment. A
          map contains a representation of points in  the  environment  and
          connections between such representations. A point in the
          environment is recognized by comparing its sensory representation
          with  the  representations in memory until a corresponding one is
          found. Once a representation is found, the connections relate  it
          to  other representations.  The pattern of connectivity in memory
          reflects the topology of points in the environment. The cognitive
          maps  simply  specify  the possible "routes", they do not specify
          which one to take. The specific "motivation" of  the  rat  deter-
          mines  which route is selected. A motivation  spreads through the
          network as a signal that decreases from node to node: nodes
          farther from the node first hit by the motivational signal
          receive only a very weak signal. Action is determined by the
          motivational gradient on the map.
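
          A toy sketch of that mechanism, assuming an invented maze graph
          and decay factor (not Deutsch's own formalism): the motivational
          signal enters at the goal node, weakens at every connection, and
          the animal moves toward the neighbour carrying the stronger
          signal.

            # Motivational signal spreading over a topological map and
            # decaying at each link; the maze and decay are assumptions.
            from collections import deque

            maze = {"start": ["A", "B"], "A": ["start", "goal"],
                    "B": ["start", "C"], "C": ["B", "goal"],
                    "goal": ["A", "C"]}

            def spread(graph, source, decay=0.5):
                """Breadth-first spread of a signal that decays per link."""
                signal = {source: 1.0}
                queue = deque([source])
                while queue:
                    node = queue.popleft()
                    for nxt in graph[node]:
                        if nxt not in signal:
                            signal[nxt] = signal[node] * decay
                            queue.append(nxt)
                return signal

            signal = spread(maze, "goal")    # motivation hits the goal node
            position, path = "start", ["start"]
            while position != "goal":
                # follow the ascending motivational gradient
                position = max(maze[position], key=signal.get)
                path.append(position)
            print(path)                      # ['start', 'A', 'goal']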

DeVega Manuel et al: MODELS OF VISUOSPATIAL COGNITION (Oxford Univ Press, 1996)

          A  survey  of  theories  on  visual-spatial  processing,   mental
          imagery,  visual and propositional representations, etc.

Donald Merlin: ORIGINS OF THE MODERN MIND (Harvard Univ Press, 1991)

          The book presents a theory of how  the  mind  (symbolic  thought)
          arose  from  a  nonsymbolic  form of intelligence through gradual
          absorption of new representational systems.

Donaldson Margaret: CHILDREN'S MINDS (Norton, 1972)

          A classic of developmental psychology that expanded Piaget's
          theory of different stages of mental development.

Donaldson Margaret: HUMAN MINDS (Penguin Press, 1992)

          Donaldson provides a unifying vision of post-Piaget developmental
          psychology (i.e., the growth of intellectual competence) by view-
          ing the child's mental  development  as  an  organically  growing
          neural network shaped by the child's intentions.  Piaget's stages
          were  defined  by  the  ability  to  perform  mental  operations.
          Donaldson's stages are defined by the child's focus of attention.
          The first stage (first eight months of life), the  "point  mode",
          is limited to things that the child can perceive directly ("here"
          and "now").  The  second  stage,  the  "line  mode",  expands  to
          embrace  the  concepts  of  past and future ("there" and "then").
          The third stage, the "construct mode" (second year  of  life)  is
          one of concern about the nature of things ("anywhere and at any
          time").  The fourth stage is the "transcendent  mode",  when  the
          child starts using its imagination ("nowhere").

          The second part of the book delves into cultural history with far
          less success.

Dougherty Ray: NATURAL LANGUAGE COMPUTING (Lawrence Erlbaum, 1994)

          Prolog implementations of English, French and German grammars.

Dowty David: WORD MEANING AND MONTAGUE GRAMMAR (Reidel, 1979)

          Drawing from the Aristotelian classification of state, activity
          and  eventuality,  Dowty  thinks  that  the modal operators "do",
          "become" and "cause" can be  the  foundations  for  building  the
          meaning  of  any other verb.  A thematic role is a set of proper-
          ties that are common to all roles that belong  to  that  thematic
          role.  A thematic role being also a relationship that ties a term
          with an event or a state, a  Lambda  calculus  can  be  built  on
          thematic roles. Thematic roles are actually cognitive structures
          that favor the acquisition of language.  See Fillmore, Schank and
          Jackendoff.
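
          A toy sketch of this kind of decomposition, with invented helper
          functions (a sketch of the idea, not Dowty's actual formalism):
          the verb "kill" is built out of "do", "become" and "cause".

            # Decomposing a verb into the operators "do", "become", "cause";
            # the representation is an illustrative assumption.
            def do(agent, activity):
                return ("DO", agent, activity)

            def become(state):
                return ("BECOME", state)

            def cause(event, result):
                return ("CAUSE", event, result)

            def kill(x, y):
                """x does something that causes y to become not alive."""
                return cause(do(x, "act"), become(("NOT", ("alive", y))))

            print(kill("Brutus", "Caesar"))
            # ('CAUSE', ('DO', 'Brutus', 'act'),
            #  ('BECOME', ('NOT', ('alive', 'Caesar'))))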

Dowty David: INTRODUCTION TO MONTAGUE SEMANTICS (Reidel, 1981)

          One of the best books to understand Montague's thinking and prac-
          tice.   His  intensional language is incrementally built starting
          from  truth-conditional,   model-theoretic   and   possible-world
          approaches to semantics, then introducing variables, quantifiers,
          tense, modality and lambda calculus.  The concepts underlying his
          program  for  a   "universal grammar" are also greatly simplified
          and explained.
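
          A small sketch of the possible-world treatment of intensions,
          with invented worlds and terms (a toy model, not Montague's full
          system): an intension is a function from possible worlds to
          extensions, so two terms can agree in one world yet differ in
          intension.

            # Intension as a function from possible worlds to extensions;
            # the worlds and terms are illustrative assumptions.
            worlds = {
                "w1": {"morning_star": "venus", "evening_star": "venus"},
                "w2": {"morning_star": "venus", "evening_star": "mars"},
            }

            def intension(term):
                """Map each possible world to the term's extension in it."""
                return lambda w: worlds[w][term]

            morning = intension("morning_star")
            evening = intension("evening_star")
            print(morning("w1") == evening("w1"))    # True: same extension
            print(all(morning(w) == evening(w) for w in worlds))
            # False: the two intensions differ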

Drescher Gary: MADE-UP MINDS (MIT Press, 1991)

          Drescher's "schema mechanism" is a  computational  implementation
          of Piaget's theory of early child development. Concepts are built
          through a stepwise process of synthesis and abstraction.

Dretske Fred: KNOWLEDGE AND THE FLOW OF INFORMATION (MIT Press, 1981)

          Dretske is inspired by Shannon's and Weaver's theory of  informa-
          tion.   In order to extend what is a purely "quantitative" theory
          (dealing with the amount of information present in the state of a
          system  and  the  amount  of  information  which is received in a
          transmission between two  systems)  into  a  semantic  theory  of
          information,  Dretske  distinguishes  information from meaning (a
          signal may lack meaning but it certainly carries information) and
          then  relates  information,  knowledge  and  belief: knowledge as
          information-caused belief (an agent knows that something is  true
          if  having  that information causes one to believe that it is the
          case).

          A state carries information about another to the degree  that  it
          is  lawfully  dependent on that other state. The lawful relation-
          ship between a cause and its effect accounts for the effect being
          about  the  cause. Intentionality is not unique of mental states,
          but quite ubiquitous in physical systems (for example, a thermom-
          eter). Mental intentional states are somewhat limited compared to
          physical systems' intentional states,  as  they  miss  a  lot  of
          information that physical systems would not miss. In a sense, the
          mind distorts the information that is available in  the  environ-
          ment.

          A state transports information about another state to the  extent
          that  it  depends  on  that state. Intentionality is reduced to a
          cause-effect relationship:  each  effect  refers  to  its  cause.
          There  are  systems outside the human mind which are intentional.
          Having contents is not unique to the human mind, but having  some
          contents  may be.  What is unique is the transition from analogi-
          cal information (as presented by sensors) to digital  information
          (the  cognitive  representation).   Intentionality is "caused" by
          the  information  perceived  by  the  sensors.   Coherently  with
          Gibson's  and  Neisser's theories, information is in the environ-
          ment and cognitive agents simply absorb it, thereby creating men-
          tal states.

          The difference between sensory processes and cognitive  processes
          is  reduced to the difference between analog processing and digi-
          tal processing.

          Perceptual systems are designed to maintain a stable  correlation
          between percept and the perceived world.

          Intelligence is a function of the total capacity  of  information
          processing.

          A belief is a semantic structure whose content determines what is
          believed.   Beliefs require concepts and concepts imply the capa-
          city for holding beliefs.  A perceptual act creates a belief  out
          of  a  concept.  A  concept has both a backward-looking, informa-
          tional aspect, and a  forward-looking,  functional  aspect.  What
          concepts a system possesses is determined by the kind of informa-
          tion to which its internal states are sensitive.

          Similarly to Fodor ("narrow content" and  "broad  content"  of  a
          mental  representation)  and Putnam (self-contained psychological
          states such as pain versus world-related states such as "X  loves
          Y"),  Dretske  too  has  a two-factor theory of mental states: an
          "indicator" (the "information", the causal relation  to  external
          states)  and  a  second  factor  which expresses the dependencies
          between the internal states in a fashion reflecting the  external
          world.

Dretske Fred: EXPLAINING BEHAVIOR (MIT Press, 1988)

          The term "behavior" is used in many different ways to  mean  dif-
          ferent  things. The behavior of an animal is commonly taken to be
          the actions it performs more or less by instinct  or  by  nature.
          This is not necessarily "voluntary" behavior. The fact that women
          menstruate is part of "female behavior", but it is not
          voluntary.   Behavior  is pervasive in nature, and cannot be res-
          tricted to animals: plants exhibit behavior too.  Behavior is the
          production  of  some  external  effect  by  some  internal cause.
          Behavior is a complex causal  process  wherein  certain  internal
          conditions   produce   certain  external  movements.   First  and
          foremost, behavior is a process. A process is caused  by  both  a
          triggering  cause (the reason why it occurs now) and a structural
          cause (the reason why the process is the way it is).  This  holds
          both  for human behavior and the behavior of machines (a thermos-
          tat switches on a furnace both because the temperature fell below
          a  threshold and because it has been designed to turn on furnaces
          under certain conditions).

          The explanation of purposive behavior in terms of intentions  and
          beliefs  is  not  contradictory with a physical account of neural
          and muscular  activity.   Generally,  humans  are  interested  in
          structural  behavior, which in plants and animals has been deter-
          mined by natural evolution and in  machines  has  been  built  by
          humans.

          The elements of a representational system have a content  defined
          by  what  it  is their function to indicate (Grice's "non-natural
          meaning"). Dretske distinguishes three types of  representational
          systems:  Type  I  have elements (symbols) that show no intrinsic
          power of representation (includes maps, codes, etc); Type II have
          elements  (signs) that are causally related to what they indicate
          (includes gauges); Type III (or natural) have their own intrinsic
          indicator  functions  (unlike Type I and Type II, in which humans
          are the source of the functions) and therefore a natural power of
          representation.

          Dretske separates the reference  of  a  representation  from  the
          object  that  is  causally  responsible for the representation (a
          gauge carries information about the item it is connected to,  not
          about which item it is that it is connected to)

          In discussing the causal role of meaning, Dretske finds that the
          intentional idiom of beliefs, desires, knowledge and intentions can
          equally be applied to primitive organisms that have a system of
          internal structures whose relevance to the explanation of behavior
          resides in what they indicate: they mean something, and they mean
          something "to" the organism of which they are part.

Dretske Fred: NATURALIZING THE MIND (MIT Press, 1995)

          Five lectures on consciousness, revolving around the thesis  that
          all  mental  facts  are representational facts, which are in turn
          facts about informational functions. What one thinks and feels is
          determined by history and by the environment.

          "Sense  experience  is  the  primary  locus  of   consciousness".
          Phenomenal  experience  dominates  mental  life.   The phenomenal
          aspects of perceptual experience are one and the same as external
          real-world  properties that experience represents objects as hav-
          ing.  Introspection is reduced to knowledge of internal facts via
          an  awareness  of external objects. Sensations (seeing, smelling,
          etc) are perceptual forms of consciousness.

          Dretske provides an evolutionary account of  sensory  representa-
          tion  and ultimately of awareness. Animals that are conscious  of
          objects and events can do things in the environment that  uncons-
          cious animals cannot do.

Dretske Fred: SEEING AND KNOWING (University of Chicago Press, 1969)

          Dretske believes that  there  are  two  fundamental  versions  of
          vision:  a  non-epistemic seeing, that requires no belief in what
          is being seen, and an epistemic seeing, which requires  believing
          in  what is being seen. The object of the non-epistemic vision is
          still  a  well-defined  object,  otherwise  people  who  have  no
          knowledge  of  an  object  (or  have different beliefs about that
          object, such as an expert and a novice) would end up seeing  dif-
          ferent  things when they look at it. In the epistemic mode, noth-
          ing can be seen without first acquiring some  true  belief  about
          what  is  seen.  This  second way of seeing is subjective and may
          vary considerably among individuals with different knowledge  and
          beliefs.  Within  epistemic seeing, a difference is drawn between
          primary epistemic seeing (an object is identified  in  virtue  of
          how  it looks) and secondary epistemic seeing (an object is iden-
          tified not in virtue of the way it looks but in virtue of the way
          other  objects look with respect to it).  A detailed mathematical
          account of both ways of seeing is worked out.

Dreyfus Hubert: WHAT COMPUTERS CAN'T DO (Harper & Row, 1979)

          The second edition of the book that started  the  anti-artificial
          intelligence movement.

          Inspired by Husserl's phenomenology (intelligence as  a  context-
          determined,   goal-directed   activity),   Dreyfus   thinks  that
          comprehension can never  do  without  the  context  in  which  it
          occurs.  The  information in the environment is fundamental for a
          being's intelligence. Dreyfus reviews ten years of  research  and
          failures in artificial intelligence and proves that the four fun-
          damental assumptions, biological (that the brain must operate  as
          a  symbolic  processor), psychological (that the mind must obey a
          heuristic program), epistemological (that there must be a  theory
          of  practical  activity)  grounds, and ontological (that the data
          necessary for intelligent behavior must  be  discrete,  expliciti
          and determinate), are not plausible.

          Dreyfus emphasizes the role of the body in  intelligent  behavior
          and  that human experience is intelligible only when organized in
          terms of a situation (as a function of human needs).

          The introduction to the second edition takes on  Minsky's  frames
          and Schank's scripts, two novelties that apparently meet Husserl's
          criteria for intelligence (in that they perform search for  anti-
          cipated  facts). But they too assume that the context is a set of
          rigidly defined situations, while in reality the  context  cannot
          be separated from the rest of our everyday lives.

Dreyfus Hubert & Dreyfus Stuart: MIND OVER MACHINE (Free Press, 1985)

          A sobering critique of the  foundations  of  artificial  intelli-
          gence, and more specifically of symbolic problem solvers.

          Dreyfus claims that only novices behave like expert systems.  The
          expert has synthesized experience into an unconscious behavior that
          reacts instantaneously to a complex situation. What the expert
          knows cannot be decomposed in rules.

          The foundation of Dreyfus' argument is that minds do  not  use  a
          theory  about  the  everyday  world  because  there  is no set of
          context-free primitives  of  understanding.  Human  knowledge  is
          skilled   "know-how",  as  opposed  to  expert  systems'  logical
          representations, or "know-that".

Dubois Didier & Prade Henri: POSSIBILITY THEORY (Plenum Press, 1988)

          The English translation of the original 1985 French text.

          Possibility theory (formulated by Zadeh in 1977) developed  as  a
          branch  of  the  theory  of  fuzzy sets  to deal with the lexical
          elasticity of ordinary language (i.e.,  the  fuzziness  of  words
          such as "small" and "many"), and other forms of uncertainty which
          are not probabilistic  in  nature.  The  subject  of  possibility
          theory is the possible (not probable) values of a variable.

          Imprecision is related to the value of an attribute of an object.
          Uncertainty is related to the confidence in that value (probable,
          possible, plausible, etc).  Possibility theory is both  a  theory
          of imprecision (represented by fuzzy sets) and a theory of uncer-
          tainty. The uncertainty of an event is described by a pair of
          degrees: the degree of possibility of the event and the degree
          of possibility of the contrary event. The definition can be
          dually stated in terms of necessity, the necessity of an event
          being one minus the possibility of the contrary event.

          When the degrees of possibility can only take the value zero  and
          one,  the  calculus  of  possibility  is  identical  to  interval
          analysis, in which imprecision is represented as sets of possible
          values. With continuous degrees of possibility those sets become
          fuzzy sets.

          The book introduces the mathematical tools of fuzzy logic.

          Possibility logic (a logic of partial  ignorance)  extends  modal
          logic by assigning a degree of possibility and a degree of neces-
          sity to each axiom.

          Its basic axioms are that: 1. grade of possibility is one  for  a
          proposition  that is true in any interpretation and is zero for a
          proposition that is false in any interpretation; 2. grade of pos-
          sibility of a disjunction of propositions is the maximum grade of
          the two. When the grade of necessity of  a  proposition  is  one,
          the  proposition is true. When the grade of possibility of a pro-
          position is zero, the proposition is false.  When  the  grade  of
          necessity is zero, or the grade of possibility is one, nothing is
          known about the truth of the proposition.
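
          In symbols (using the standard notation of possibility theory,
          an assumption of this summary rather than a quotation from the
          book: Pi for possibility, N for necessity), the definitions and
          axioms above read:

              \Pi(\top) = 1, \qquad \Pi(\bot) = 0
              \Pi(p \lor q) = \max(\Pi(p), \Pi(q))
              N(p) = 1 - \Pi(\lnot p)
              N(p) = 1 \Rightarrow p \text{ is true}, \qquad
                \Pi(p) = 0 \Rightarrow p \text{ is false}
              N(p) = 0 \text{ or } \Pi(p) = 1 \Rightarrow
                \text{the truth of } p \text{ is unknown}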

          Possibility logic has a graded notion of possibility and neces-
          sity, whereas in modal logic they are all-or-nothing concepts.
          Possibility logic admits only one set of axioms, while modal
          logic admits many.
           Dubois Didier, Prade Henri & Yager  Ronald:  READINGS  IN  FUZZY
          SETS (Morgan Kaufmann, 1993)

          All the historical papers from Lotfi Zadeh's 1965 "Fuzzy sets" to
          Bart Kosko's "Adaptive inference in fuzzy knowledge systems". The
          editors provide an intriguing survey of  the  prehistory  of  the
          field,  reaching  back  to  Max Black and Karl Menger's "ensemble
          flou". They also compare fuzzy logic with competing  theories  of
          uncertainty, such as interval analysis and probabilities.

          A few articles cover the foundations of fuzzy set theory.  Dubois
          and Prade discuss fuzzy numbers (fuzzy sets in the real line) and
          possibility theory.  Many articles cover applications to  process
          control and decision analysis.
           Duchan Judith, Bruder Gail & Hewitt Lynne: DEIXIS IN NARRATIVE
          (Lawrence Erlbaum, 1995)

          Based on an interdisciplinary research program, the authors argue
          in  favor  of   a  representational system (the "deictic center")
          which readers construct when trying to understand a text by using
          available  knowledge.  The deictic center contains temporal, spa-
          tial and character information.
           Dummett Michael: ELEMENTS  OF  INTUITIONISM  (Oxford  University
          Press, 1977)

          A general introduction to intuitionism.  Intuitionism  prescribes
          that  all  proofs  of  theorems  must be constructive.  Only con-
          structible objects are legitimate. The meaning of a statement
          resides  not in its truth conditions but in the means of proof or
          verification.
           Dummett Michael: TRUTH AND OTHER ENIGMAS  (Harvard  Univ  Press,
          1978)

          The book collects many papers written by Dummett on various  sub-
          jects.

          Dummett's theory of meaning is a variant of intuitionistic logic:
          a  statement  can  be  said to be true only when it can be proven
          true in a finite time (it can be "effectively  decided",  similar
          to "intuitionistic justified").  In deciding truth one thing that
          is required is understanding. A theory of meaning must make
          explicit what it is to know. A theory of meaning is an account
          of how language is used. A theory of meaning is a theory of
          understanding.

          Dummett criticizes holism because it cannot explain how an  indi-
          vidual  can  learn  language.  If  the meaning of a sentence only
          exists in relationship to the entire system of sentences  in  the
          language,  it  would  never be possible to learn it. For the same
          reason it is not possible to understand the meaning of a  theory,
          if  its  meaning  is given by the entire theory and not by single
          components.

Dummett Michael: SEAS OF LANGUAGE (Clarendon, 1993)

          A collection of many articles about philosophy of  language  from
          the point of view of Dummett's theory of meaning.

Dyson Freeman: INFINITE IN ALL DIRECTIONS (Harper & Row, 1988)

          A physicist's speculations on the origins  of  life  and  on  the
          relationship between science and faith.

          Life is metabolism  and  replication,  and  they  are  separable.
          Therefore  it  is  possible  that  life  began  twice, first with
          creatures capable of metabolism and then with  creatures  capable
          of  replication.   Dyson argues that the fundamental characteris-
          tics of life  must  be  homeostasis  (rather  than  replication),
          diversity  (rather  than  uniformity),  the cell (rather than the
          gene). The origins of life must be consistent with life's macros-
          copic features: looseness of structure and tolerance of errors.

          Dyson also speculates on the  connection  between  cosmology  and
          biology.   Inspired  by  Jamal  Islam,  who calculated how matter
          would evolve in universes which expand forever, Dyson  calculates
          mathematically  what  life  is  and how it will evolve.  A closed
          universe is doomed to collapse and life with it. Since a system's
          entropy  is  a measure of the number of alternative states of the
          system, the complexity of a living  organism  should  be  propor-
          tional to the negative of its entropy. Dyson even computed the
          entropy of a human being (the rate at which humans dissipate
          energy, divided by the body's temperature and multiplied by the
          duration of a unit of consciousness): about 10 to the 23rd.
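
          A rough dimensional reading of that estimate (this reconstruc-
          tion, and the figures in it, are assumptions and not taken from
          the book): with P the body's rate of energy dissipation, T its
          temperature, k_B Boltzmann's constant and \Delta t the duration
          of a moment of consciousness,

              Q \approx \frac{P\,\Delta t}{k_B T}
                \approx \frac{200\ \mathrm{W} \times 1\ \mathrm{s}}
                             {(1.4\times 10^{-23}\ \mathrm{J/K})
                              (300\ \mathrm{K})}
                \approx 5\times 10^{22} \sim 10^{23}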

Eccles John: EVOLUTION OF THE BRAIN (Routledge, 1991)

          The book offers a history of human evolution, of the evolution of
          the hominid brain, of the evolution of speech production, of evo-
          lution of visual skills, of evolution of learning and  memory.  A
          key role is assigned to the limbic system and, in general, to the
          latest evolutionary additions to the human  brain,  the  cerebral
          neocortex.

          Then Eccles delves into a study of the  evolution  of  conscious-
          ness.   Drawing  from Margenau, Eccles argues that the mind-brain
          interaction is  analogous  to  a  probability  field  of  quantum
          mechanics.  Mental  "energy" can cause neural events by a process
          analogous to the way a probability field causes action. He  calls
          "psychon"  the mental unit that transmit mental intentions to the
          neural units.

          From a  detailed  analysis  of  the  cerebral  neocortex,  Eccles
          derives  that cerebral asymmetry is a fundamental property of the
          human brain, that the self is unique to the left hemisphere,  and
          that  the  neo-neocortex  is the site of gnostic functions. Cons-
          ciousness resides in a psychological world that transcends the
          physical. The soul is an entity separate from the body, and is
          created by God.

           Eccles John: THE SELF AND ITS BRAIN (Springer, 1994)

          The anti-materialist view of this book  focuses  on  a  spiritual
          self that is capable of controlling the material brain and bring-
          ing about voluntary movement.

Edelman Gerald: NEURAL DARWINISM (Basic, 1987)

          Gerald Edelman is possibly the main  contributor  to  the  selec-
          tional  theory of the immune system. When the body is attacked by
          a virus, it produces specially adapted protein molecules, antibo-
          dies,  that  attach  themselves to the invaders and destroy them.
          Those antibodies are created by the thousands "before"  the  body
          is  attacked by anything. An invasion results in a rapid increase
          in the rate of production of the one antibody  that  matches  the
          intruder.  Edelman  is  now applying the same concept to a selec-
          tional theory for brain development, thereby introducing  popula-
          tion thinking to neurobiology.

          Before birth the genetic instructions in  each  organism  provide
          general  constraints  for  neural development, but cannot specify
          the exact location and configuration of each  cell.  After  birth
          innate  "values",  i.e.   adaptive  cues  (such  as  "looking for
          food"),  generate  behavior  and  therefore  feedback  from   the
          environment, which in turn helps "select" the neural configura-
          tions that are more suitable for survival. During  this  on-going
          process  of  "learning"  the  brain develops categories by selec-
          tively strengthening  or  weakening  connections  between  neural
          groups.  Experience  "selects" one configuration of neural groups
          out of all the configurations that are possible.
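
          A toy sketch of this selectional idea (this is not Edelman's
          model; the groups, the "value" function and all numbers below
          are invented for illustration): a fixed population of randomly
          wired groups competes to respond to stimuli, and the groups
          whose responses best satisfy an innate value signal are
          strengthened, so that experience selects among pre-existing
          configurations rather than instructing new ones.

            import random

            random.seed(0)
            N_GROUPS, N_INPUTS = 20, 5

            # Primary repertoire: randomly wired groups, fixed in
            # number; experience only re-weights them.
            groups = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
                      for _ in range(N_GROUPS)]
            strengths = [1.0] * N_GROUPS

            def value(response):
                # Innate adaptive cue: prefer strong positive responses.
                return max(0.0, response)

            def respond(group, stimulus):
                return sum(w * s for w, s in zip(group, stimulus))

            for epoch in range(200):
                stim = [random.uniform(0, 1) for _ in range(N_INPUTS)]
                resp = [strengths[i] * respond(g, stim)
                        for i, g in enumerate(groups)]
                winner = max(range(N_GROUPS), key=lambda i: resp[i])
                # Selection: amplify the winner in proportion to the
                # value signal; unused groups slowly decay.
                strengths[winner] += 0.1 * value(resp[winner])
                strengths = [s * 0.995 for s in strengths]

            best = sorted(range(N_GROUPS), key=lambda i: -strengths[i])
            print("most selected groups:", best[:3])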

          The functioning of the brain can be explained as resulting from a
          morphological selection of neural groups. Neural groups "compete"
          to respond to environmental stimuli. Each brain is therefore dif-
          ferent,  depending  on  the stimuli that it encounters during its
          development.

          Adhesion molecules determine  the  initial  structure  of  neural
          groups, the "primary repertoire". Behavior determines the secon-
          dary repertoire. Repertoires are organized in "maps", each map
          having  a  specific neural function. A map is a set of neurons in
          the brain that has a number of links to a set of  receptor  cells
          or other maps.

          Maps communicate through parallel  bidirectional  channels,  i.e.
          the  "reentrant" signaling.  Reentry is not just feedback because
          there can be many  parallel  pathways  operating  simultaneously.
          The  process of reentrant signaling allows a perceptual categori-
          zation of the world, i.e. to  relate  independent  stimuli.  This
          ability enables higher level functions such as memory.

          In Edelman's view brain processes are dynamic and stochastic.

          The brain is not an "instructional" system  but  a  "selectional"
          system.  It  evolves  not by changes in a constant set of neurons
          but by selection of the most valuable neural groups among those
          that have existed since birth. The elementary unit of this process
          is not the single neuron, but the neural group.

Edelman Gerald: TOPOBIOLOGY (Basic, 1988)

          The title refers to location-dependent development of body cells:
          how  can  a cell know where in the body it is supposed to grow in
          order  to  generate  the  shape  and  function  of  the   animal?
          Edelman's  molecular  embryology claims that development is based
          on topobiological events (division, movement, death and so  forth
          of  cells,  which  are regulated by cell-adhesion and  substrate-
          adhesion molecules on the surface of the cell).   A  cell's  com-
          petence is due essentially to its location.

          Animate systems exhibit  three  properties  that  allow  them  to
          exist: heredity, variation in their hereditary material, competi-
          tion as  the  environment  changes.  Animate  systems  are  self-
          replicating  systems,  whose  genetic code undergoes mutation and
          whose variant individuals undergo natural selection.

          Characteristic of animate systems is development,  in  particular
          morphogenesis,  the  emergence  of form during embryonic develop-
          ment. Roughly the same cell types appear in  different  parts  of
          the  body.  The difference in position and shape results from the
          interaction of a number of driving forces (namely cell  division,
          cell  motion and cell death), which determine the number of cells
          in a particular region, and  regulatory  processes  (namely  cell
          adhesion  and cell differentiation), which determine the interac-
          tion among cells.

          Evolution can be viewed as a process of phenotypic transformation
          resulting   largely   from   genetically   mediated   change   in
          developmental dynamics that is itself altered  throughout  phylo-
          geny.

          Edelman then analyzes in detail what he considers the molecular
          mechanisms of epigenesis.

          Development is under genetic control,  but  developmental  events
          are  nonetheless epigenetic and topobiologically controlled. Pat-
          tern, and not mere  cell  differentiation,  is  the  evolutionary
          basis of morphogenesis. The cell surface, not its core, plays the
          fundamental role in this process,  because  it  mediates  signals
          from  other  cells and links with other surfaces to form tissues.
          A sequence of interactions between certain special types of genes
          via epigenetic signal paths provides the basis of pattern by con-
          trolling temporal  sequences  of  mitosis,  movement,  death  and
          further signaling.

          In order to explain how  this  process  can  be  reconciled  with
          extensive changes in animal form in relatively short evolutionary
          time periods, Edelman points to the  nonlinear  relation  between
          genetics, development and evolution.

Edelman Gerald: THE REMEMBERED PRESENT (Basic, 1989)

          Edelman's biological theory  of  consciousness  begins  with  his
          theory of how higher-level cognitive functions emerge: from reen-
          trant processes.  Consciousness arises from  the  interaction  of
          two  parts  of  the neural system that differ in their anatomical
          structure and  evolutionary  history:  the  one  responsible  for
          categorizing  (external  stimuli)  and  the  one  responsible for
          "instinctive" behavior (homeostatic  control  of  behavior).   At
          this level concepts are not absolute, but can be remembered.

          "Primary consciousness" (being aware  of  things  in  the  world)
          therefore  arises  from "reentrant loops" that interconnect "per-
          ceptual categorization" and "value-laden" memory.  Primary  cons-
          ciousness  has  an  evolutionary  reason  to  be,  since it helps
          abstract and organize complex changes in the environment.

          In order to have higher consciousness the brain must also be able
          to  make  the  distinction  between  the self and the rest of the
          world and to order events in time.  A higher-level  consciousness
          (being  aware  of  itself), unique to humans, is then possible if
          the brain  is  capable  of:  perceptual  categorization,  memory,
          learning and self-nonself discrimination.

          Edelman thinks that two parts of the nervous system differ  radi-
          cally  in  their  evolution,  organization and function. And that
          consciousness emerges as the product of  an  ongoing  categorical
          comparison  of the workings of those two kinds of nervous system.
          The part that is crucial to consciousness has evolved to be dedi-
          cated to adaptive, homeostatic and endocrine functions related to
          the individual's immediate needs  for  survival.  Such  functions
          therefore  reflect  evolutionarily selected values that have con-
          tributed to fitness. Regions that are  assigned  to  define  self
          within  a species include the amygdala, the hippocampus, the lim-
          bic system, the hypothalamus. Regions that operate to define non-
          self include the cortex, the thalamus and the cerebellum.

          From an evolutionary point of view, the milestone moment was when
          a  category-value  link emerged, because then the basis for cons-
          ciousness was laid.

          Edelman then provides a detailed neurophysiological model of  how
          memory works, in particular how time and space (and successions
          within them) can be represented by brain organs.

          Edelman thinks that concept formation preceded language. Concepts
          are  driven  by the perceptual system and stored in memory.  With
          the advent of language concepts become absolute,  independent  of
          time.  The brain structures that are responsible for concept for-
          mation are those that can categorize, discriminate and  recombine
          patterns  of  activity  in  different  kinds  of global mappings.
          Language was enabled by the  evolutionary  emergence  of  special
          anatomy:  the acquisition of phonological capacities provided the
          means first for semantics and then for syntax to arise by linking
          the  preexisting  conceptual  learning  with the emerging lexical
          learning.

Edelman Gerald: BRIGHT AIR BRILLIANT FIRE (Basic, 1992)

          The book summarizes Edelman's theory of  neural  development  and
          consciousness formation.  In practice, Edelman extends an account
          of the  development  of  perceptual  categories  into  a  general
          account of consciousness.

          The reentry mechanism between maps yields a  process  of  "global
          mapping"  that leads to the creation of perceptual categories and
          generalization.  Edelman distinguishes between primary conscious-
          ness  (imagery  and  sensations)  and  higher-order consciousness
          (language   and   self-consciousness).    Primary   consciousness
          requires  memory  (a process of both storing and recategorizing),
          value (a way to rank stimuli and eventually to learn),  discrimi-
          nation of the self from the non-self, a way to represent chronol-
          ogy, and global reentrant pathways connecting  all  these  struc-
          tures. Higher-order consciousness requires, in addition, the
          ability to distinguish the self from the non-self and to order
          events in time, a capability that in humans rests on language.

          Edelman thinks that science cannot solve the  problem  of  qualia
          because no two people will have the same qualia.
           Eigen Manfred & Schuster Peter: THE HYPERCYCLE (Springer Verlag,
          1979)

          The origin of life from  inorganic  matter  is  due  to  emergent
          processes of self-organization.

          Hypercycles are a class of nonlinear reaction networks  that  can
          originate  spontaneously  within  the  population  of  a  species
          through natural selection and naturally evolve to higher complex-
          ity  by allowing for the coherent evolution of a set of function-
          ally coupled self-replicating entities.  Natural selection itself
          is inevitable: given a set of self-reproducing entities that feed
          on a common and  limited  source  of  energetic/material  supply,
          natural selection will spontaneously appear.

          A hypercycle is based on  nonlinear  autocatalysis  (reproduction
          cycles  which  are  linked  by  cyclic catalysis, i.e. by another
          autocatalysis). A hypercycle is therefore the next  higher  level
          in the hierarchy of autocatalytic systems.

          The second part of the book analyses the behavior and  mathemati-
          cal properties of hypercycles.
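
          A minimal numerical sketch of an elementary hypercycle in its
          standard replicator form (the equations are the usual textbook
          formulation; the rate constants and step sizes below are arbi-
          trary choices for illustration): each species is catalyzed by
          its predecessor in the cycle, and a common dilution flux keeps
          the total concentration constant.

            import random

            def hypercycle_step(x, k, dt=0.001):
                # Euler step of dx_i/dt = x_i*(k_i*x_{i-1} - phi),
                # with phi = sum_j k_j*x_j*x_{j-1}, so that sum(x)
                # stays equal to one.
                n = len(x)
                growth = [k[i] * x[i - 1] for i in range(n)]
                phi = sum(x[i] * growth[i] for i in range(n))
                return [x[i] + dt * x[i] * (growth[i] - phi)
                        for i in range(n)]

            random.seed(1)
            k = [1.0, 1.2, 0.8, 1.1]           # catalytic constants
            x = [random.uniform(0.1, 1.0) for _ in k]
            total = sum(x)
            x = [xi / total for xi in x]        # start on the simplex

            for _ in range(200000):
                x = hypercycle_step(x, k)

            print("concentrations:", [round(xi, 3) for xi in x])
            print("total (should stay near 1):", round(sum(x), 3))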

          The model explains the simultaneous unity (due to the  use  of  a
          universal  genetic  code)  and  diversity  (due to the "trial and
          error" approach of natural selection)  in  evolution.  This  dual
          process  started  even  before  life  was  created.  Evolution of
          species was preceded by an analogous stepwise process of  molecu-
          lar evolution.

          Systems can be classified in four groups according to their  sta-
          bility   with   respect  to  fluctuations:  stable  systems  (the
          fluctuations are self-regulating), indifferent systems (the fluc-
          tuations have no effect), unstable systems (self-amplification of
          the fluctuations) and  variable  systems  (the  system  can  show
          either  regulation,  indifference  or  amplification  of fluctua-
          tions).  Only the last type (indifference towards a broad  mutant
          spectrum,  stability towards selective advantages and instability
          towards unfavorable configurations) is suitable for generation of
          biological  information.  Selection is a mathematical consequence
          of the dynamics of self-reproducing systems of this kind.

          Eigen's experiments with RNA proved that  under  suitable  condi-
          tions a solution of nucleotides gives rise spontaneously to a
          molecule that replicates, mutates and competes with  its  progeny
          for survival.  The replication of RNA appears to be the fundamen-
          tal event around which the  rest  of  biology  developed.   First
          genes  were created, then proteins, then cells. Cells simply pro-
          vide physical cohesion.
           Eigen Manfred: STEPS  TOWARDS  LIFE  (Oxford  University  Press,
          1992)

          By employing his "hypercycle" technique, Eigen speculates how
          living cells and bodies may have come to be, starting from molec-
          ular tools: cells first learned to  self-replicate  and  then  to
          surround themselves with protective membranes.

          Eigen also uncovers a feedback mechanism inherent in natural
          selection that favors (or accelerates the search for) superior
          mutants, turning what would be a steady rate of improvement into
          an exponential one. This explains the apparently impossibly fast
          rate of adaptation by viruses. The feedback arises because the
          "wild type" of a genotype (the pure genotype) is always sur-
          rounded by a cloud of almost identical variants, and this
          accelerates the emergence of superior mutants.
           Ekman Paul & Davidson Richard: THE  NATURE  OF  EMOTION  (Oxford
          Univ Press, 1994)

          A series of articles on emotion from psychologists.

Engelmore Robert: BLACKBOARD SYSTEMS (Academic Press, 1988)

          All the historical papers on the subject, from Barbara Hayes-Roth
          to  Nii.   Opportunistic  planning  was first used in the HEARSAY
          system in the mid Seventies, then formalized in 1979 by Frederick
          and Barbara Hayes-Roth ("A cognitive model of planning").

          Hayes-Roth's opportunistic and  incremental  model  of  reasoning
          contemplates  many  independent  agents  cooperating  to find the
          solution to a problem.  Each specialized agent  is  triggered  by
          information  written  by  other  agents  on a blackboard and each
          agent can in turn write information  for  other  agents  on  that
          blackboard.

          The system keeps two agendas, one for the actions it "wishes"  to
          perform (those that at least one agent needs to continue its rea-
          soning) and one for the actions  that  it  "can"  perform  (those
          whose  preconditions  have been satisfied). By matching necessary
          and possible actions  the  system  determines  which  agents  are
          active at any time.  The computational advantage of this model of
          inference is that only actions that are relevant to the  solution
          of the problem are taken into consideration.
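
          A toy version of this control loop (the agents, their trigger
          conditions and the data are invented for illustration, and only
          the agenda of actions that "can" be performed is modelled):
          each specialist fires only when the entries it needs have
          appeared on the blackboard, and posts new entries for the
          others to react to.

            # Each agent: (name, needed entries, action); an action
            # reads the blackboard and returns new entries to post.
            blackboard = {"signal": [3, 1, 4, 1, 5]}

            def segment(bb):
                return {"segments": [x for x in bb["signal"] if x > 2]}

            def label(bb):
                return {"labels": ["high" if x > 3 else "mid"
                                   for x in bb["segments"]]}

            def summarize(bb):
                return {"summary":
                        "%d labelled segments" % len(bb["labels"])}

            agents = [
                ("segmenter", {"signal"}, segment),
                ("labeller", {"segments"}, label),
                ("summarizer", {"segments", "labels"}, summarize),
            ]

            done = set()
            while True:
                # Agenda of runnable agents: preconditions satisfied
                # by what is currently on the blackboard.
                agenda = [a for a in agents if a[0] not in done
                          and a[1].issubset(blackboard)]
                if not agenda:
                    break
                name, _, action = agenda[0]
                blackboard.update(action(blackboard))
                done.add(name)

            print(blackboard["summary"])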
           Epstein Richard: SEMANTIC FOUNDATIONS OF LOGIC (Kluwer Academic,
          1990)

          A general introduction to the most popular varieties of  proposi-
          tional logics. Epstein sets out to define his "relatedness
          logic", a logic which takes into account the  subject  matter  of
          propositions,  and  "dependency logic", which, similarly, focuses
          on the referential content of a proposition.

          A broad coverage of modal logics (and Kripke's semantics), intui-
          tionism   (Brouwer's   manifestos,  Heyting's  formalization  and
          Kripke's  semantics),  many-valued  logics  (Lukasiewicz,   Post,
          Kleene) is also provided.
           Epstein Richard: SEMANTIC FOUNDATIONS OF LOGIC: PREDICATE  LOGIC
          (Kluwer Academic, 1994)

          A vast, technical introduction  to  predicate  logic,  semantics,
          identity, quantifiers, descriptive names, functions, second-order
          logic.
           Estes William: CLASSIFICATION AND COGNITION  (Oxford  University
          Press, 1994)

          Estes offers a psychological theory of memory organization  based
          on categorization. Estes distinguishes classification (partition-
          ing a set of objects into a set of groups) from categorization
          (partitioning in which each category also implies a set of
          properties for its members). After a historical overview, Estes
          advances his core
          model,  a  combination  of  an  array  framework (in which memory
          interfaces with perception by means of a mechanism based on simi-
          larity  and  in  which  the association between memory and action
          varies according to a learning mechanism) and  the  product  rule
          (by  which similarity of two patterns is computed as a product of
          the differences between each pair of  corresponding  features  of
          the  two  patterns).   Basically,  Estes  adopts  both a storage-
          retrieval model and an adaptive network model,  thereby  marrying
          cognitive psychology and connectionism.
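
          A sketch of the product rule in its usual exemplar-model form
          (the similarity parameter, the feature vectors and the memory
          array below are assumptions for illustration, not Estes' own
          formulation or data): each mismatching feature multiplies the
          overall similarity by a fixed factor s < 1, so every additional
          difference shrinks similarity multiplicatively.

            def product_similarity(a, b, s=0.3):
                # Product rule: multiply 1.0 for a matching feature
                # and s (0 < s < 1) for a mismatching one.
                sim = 1.0
                for fa, fb in zip(a, b):
                    sim *= 1.0 if fa == fb else s
                return sim

            # Memory array: stored exemplars per category; a probe is
            # categorized by its summed similarity to each category.
            memory = {
                "A": [("red", "round", "small"),
                      ("red", "round", "large")],
                "B": [("blue", "square", "small")],
            }
            probe = ("red", "square", "small")

            scores = {c: sum(product_similarity(probe, e) for e in ex)
                      for c, ex in memory.items()}
            print(scores, "->", max(scores, key=scores.get))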

          A system of categorization based  on  the  product  rule  differs
          considerably from prototype-based systems such as Rosch's.

          Estes' model is based on empirical data and provides  a  rigorous
          mathematical formulation.
           Eysenck Michael: PRINCIPLES OF  COGNITIVE  PSYCHOLOGY  (Lawrence
          Erlbaum, 1993)

          A short introduction to the field.

Fauconnier Gilles: MENTAL SPACES (MIT Press, 1994)

          A revised edition of the 1985 cognitive linguistics classic  that
          described  how  discourse constructs mental spaces. Mental spaces
          are domains that are built by the hearer  as  she  listens  to  a
          speech.  They  are interconnected and consist of elements, roles,
          strategies and relations between them.   Fauconnier  applies  the
          theory to presuppositions and counterfactuals.

Feigenbaum Edward: COMPUTERS AND THOUGHT (MIT Press, 1995)

          A collection  of  articles  by  Turing,  Newell,  Simon,  Minsky,
          Feigenbaum, etc.

           Feigl Herbert: THE MENTAL AND THE PHYSICAL  (Univ  of  Minnesota
          Press, 1967)

          In this 1957 essay Feigl argues in favor of  the  class  identity
          theory of the mind.  Physical and mental terms may have different
          senses but identical referents: mental states may  refer  exactly
          to  the same states as do physical states, even if they  describe
          the states in a completely different manner.  Mental  idioms  and
          physical  idioms  are  different descriptions of the same states.
          Mental states and physical states have  the  same  extension  but
          different intension: they describe the same states, but in a dif-
          ferent way.

          In the postscript to the second edition Feigl rejected his origi-
          nal theory and opted for eliminativism: there is no evidence of a
          relation between mental and physical states, and only the  physi-
          cal  (neuroscientific)  language should be employed in discussing
          people's feelings.

Fetzer James: ASPECTS OF ARTIFICIAL INTELLIGENCE (Kluwer, 1988)

          A collection of philosophical articles on  machine  intelligence,
          notably  Fetzer's own introduction to the theory of semiotic sys-
          tems. Newell's and Simon's hypothesis of the  mind  as  a  symbol
          processing  system  can  be extended by considering the mind as a
          semiotic system, i.e. a sign-processing system. Fetzer thinks
          that  symbol systems simulate mental processes that semiotic sys-
          tems replicate.

Fetzer James: ARTIFICIAL INTELLIGENCE (Kluwer, 1990)

          Fetzer thinks that the standard model of Artificial Intelligence,
          that  views  minds as symbol processing systems, is fundamentally
          flawed, because minds are semiotic systems. Fetzer introduces
          the theory of semiotic systems. The notions of semantic net-
          works, frames, scripts are reviewed in the philosophical  context
          of a theory of knowledge, belief and action.

Fetzer James: EPISTEMOLOGY AND COGNITION (Kluwer, 1991)

          A collection of philosophical papers (mainly critiques) that deal
          with  Fodor's  computational  theory  of the mind, connectionism,
          scripts, frames and so forth.
           Fiesler Emile & Beale Russell: HANDBOOK  OF  NEURAL  COMPUTATION
          (Oxford Univ Press, 1996)

          The ultimate  handbook for professional neural network designers.
          It includes applications to Biology, Medicine, Economics, etc.

Finke Ronald: PRINCIPLES OF MENTAL IMAGERY (MIT Press, 1989)

          A survey of psychological findings about mental imagery. Finke
          identifies five principles of equivalence between a mental
          image and the perceived object: the principle of implicit
          encoding (information about the properties of an object can be
          retrieved from its mental image), the principle of spatial
          equivalence  (parts  of a mental image are arranged in a way that
          corresponds to the way that the parts of the physical object  are
          arranged),  the  principle  of  perceptual  equivalence  (similar
          processes are activated in the brain when the  objects  are  ima-
          gined  as  when they are perceived), the principle of transforma-
          tional  equivalence   (imagined  transformations   and   physical
          transformations  are  governed  by  the same laws of motion), the
          principle of structural equivalence (the mental imagery  exhibits
          structural  features  corresponding  to  those  of  the perceived
          object such that the relations between the object's parts can  be
          both preserved and interpreted).

Finke Ronald: CREATIVE IMAGERY (Lawrence Erlbaum, 1990)

          A book devoted to the psychological phenomenon  that  people  can
          detect  emergent  patterns in imagery even if they were not aware
          of them when the image was formed.  Most  of  these  recognitions
          occur only when people inspect their images.

Finke Ronald: CREATIVE COGNITION (MIT Press, 1992)

          A study of creativity in terms of  the  cognitive  processes  and
          structures that make it possible. The model includes a genera-
          tive phase, in which mental representations (or "preinventive"
          structures) that promote creative discovery are formed, and an
          exploratory phase, in which they are interpreted in meaningful
          ways.

Finke Ronald: CHAOTIC COGNITION (Lawrence Erlbaum, 1996)

          Chaotic thinking is the process by  which  the  individual  copes
          with a world full of unpredictability, changes and uncertainties.
           Fisher Ronald Aylmer: THE GENETICAL THEORY OF NATURAL  SELECTION
          (Dover, 1929)

          Seminal work that highlighted how  genes  from  the  parents  are
          reshuffled  in  each  new  generation.  Fisher used sophisticated
          mathematics in dealing with evolution, thereby providing a scien-
          tific account of how a distribution of genes in a population will
          change as a result of natural selection.

          Fisher erred in thinking about the evolution of the single  gene,
          neglecting  the influence of all the other genes, and in assuming
          that evolution was a process of achieving stable equilibrium.

Flanagan Owen: CONSCIOUSNESS RECONSIDERED (MIT Press, 1992)

          Flanagan's book is an introduction to the issues concerning cons-
          ciousness:  qualia,  self-consciousness,  memory,  sensations and
          multiple personalities disorders. It does not provide a model  to
          explain  what  consciousness  arises  from,  but  it examines the
          phenomena that may lead to such an explanation. Consciousness  is
          considered  as a natural phenomenon that can be explained by sci-
          ence.

Flanagan Owen: THE SCIENCE OF THE MIND (MIT Press, 1991)

          Descartes' dualism violates  the  principle  of  conservation  of
          energy.   William  James'  work  is  the first formulation of the
          naturalistic position in the philosophy of mind:  the  mental  is
          physical, although it cannot be explained by mechanical laws, and
          it has an evolutionary purpose; consciousness is not  an  entity,
          but   a   function.   Flanagan  reviews  Freud's  psychoanalysis,
          Skinner's behaviorism, Piaget's and Kohlberg's theories of cogni-
          tive development, the main themes of Cognitive Science and Artif-
          icial Intelligence, and Wilson's sociobiology.

          Consciousness is a heterogeneous set of processes which have in
          common the property of being felt. Flanagan does not believe in
          "one" consciousness, but in a group of "conscious" phenomena.
          Some of the processes of our body are unconscious and not per-
          ceived (the heartbeat), some are unconscious but perceived by
          other  processes  (sensors), and some are conscious, perceived by
          themselves.

Flanagan Owen: SELF EXPRESSION (Oxford Univ Press, 1996)

          A series of essays on subjects related to consciousness,  dreams,
          and psychological disorders.
           Flood  Raymond  &  Lockwood  Michael:  NATURE  OF  TIME   (Basil
          Blackwell, 1986)

          A collection of essays about the arrow of time  (time's  inherent
          directionality, in spite of the apparent symmetry of the fun-
          damental laws of nature) and the  second  law  of  thermodynamics
          (the only law of nature which is not symmetric).

          Penrose's "Big bangs, black holes and time's  arrow"  deals  with
          the apparent contradiction of increasing entropy in a universe
          that started in a state of maximum entropy (thermal equilibrium
          at the big bang) and in a universe whose fundamental laws
          are all symmetric.

          Paul Davies relates the direction of time to the quantum collapse
          of  the  wave  function.  Davies also suggests that the mind-body
          problem may be related  to  quantum  mechanics'  dualism  between
          waves  and particles, as the mind's role (of information encoding
          and  processing) is similar to the wave's role.

          Dummett's "Causal loops" refutes all arguments against the possi-
          bility that we can influence our past.

Fodor Jerry: LANGUAGE OF THOUGHT (Crowell, 1975)

          Fodor's computational theory of the mind views the mind as a spe-
          cial symbolic processor. Propositional attitudes can be explained
          by assigning a symbolic memory to each possible  attitude  (hope,
          desire, fear, etc) and each symbol to one of the possible
          propositions. A proposition in an attitude constitutes a proposi-
          tional attitude. Each symbol is a "mental representation" and the
          mind is endowed with a set of rules to operate on such  represen-
          tations. Cognitive life is the rule-governed transformation of
          those representations.
          Mental representations constitute a language  of  thought,  "men-
          talese".

          Evidence of an internal language in the mind comes from  rational
          behavior  (the ability to compute the consequences of an action),
          concept learning (the ability to form and verify hypotheses)  and
          perception  (the  ability  to  recognize  an object or an event).
          These phenomena would not be possible if the agent was  not  able
          to represent to itself the elements of the problem.

          Such language cannot be one of the languages we speak because the
          very  ability  to  speak  requires  the  existence of an internal
          language of representation.

          But the language of thought exhibits features that are shared  by
          human  languages: productivity (ability of understanding and pro-
          ducing propositions from  an  infinite  set  by  using  recursive
          operations  over  finite  resources),  systematicity  (a physical
          relation between mental representations so  that  one  can  yield
          others),  coherence  (ability  to make syntactically and semanti-
          cally plausible inferences).

          The mind processes symbols without  knowing  what  those  symbols
          mean,  in a purely syntactic fashion. Behavior is due only to the
          internal structures of the mind.

          All knowledge is represented syntactically.

Fodor Jerry: REPRESENTATIONS (MIT Press, 1981)

          A collection of  philosophical  essays  on  the  representational
          theory of the mind.

          Fodor looks for an explanation of how propositional attitudes can
          have  semantic properties.  Propositional attitudes are relations
          (between an agent and a state of the world). Among the relata are
          mental   representations.  Mental  representations  are  symbols,
          endowed with both syntactic and semantic properties. They possess
          their causal role in virtue of their syntactic properties. Propo-
          sitional attitudes inherit their  semantic  properties  from  the
          mental representations that function as their objects.

Fodor Jerry: MODULARITY OF THE MIND (MIT Press, 1983)

          Fodor advances a theory of the mind that revives Gall's view of
          vertical faculties. Cognitive faculties can be divided into ver-
          tical faculties (which are domain-specific, genetically deter-
          mined, computationally autonomous and associated with distinct
          neural structures) and horizontal faculties. Modular cognitive
          systems are vertical faculties: Fodor distinguishes the modular
          systems of input analysis from the central systems that subserve
          the fixation of belief, which are not modular.

Fodor Jerry: A THEORY OF CONTENT (MIT Press, 1990)

          A collection of papers on Fodor's theory of mental content.

          Fodor speculates that there exist two types  of  meaning.   Fodor
          discriminates  between  "narrow content" and "broad content" of a
          mental representation: the former is a  semantic  representation,
          is purely mental and does not depend on anything else; the latter
          is a function that yields the referent in every  possible  world,
          and depends on the external world.

          Meaning is the ordered set of narrow and broad contents.   Narrow
          content is a conceptual role. As in Sellars, a role is a purely
          syntactic property, like the roles that occur in formal systems.

          Fodor claims that there is no type  identity  but  only  instance
          identity. Mental instances that constitute a mental class can be
          realized by neural events which do not form a neural class.

Fodor Jerry & Lepore Ernest: HOLISM (Basil Blackwell, 1992)

          The book is a critical survey  of  the  theory  that  only  whole
          languages  or  whole belief systems really have meanings; and the
          meanings of smaller units are  merely  derivative.  Each  chapter
          attacks the thinking of an influential philosopher: Quine, David-
          son, Lewis, Dennett, Block and Churchland.

          Fodor's "rational fixation" of  beliefs  is  a  non-demonstrative
          process that employs analogy and induction.

Fodor Jerry: THE ELM AND THE EXPERT (MIT Press, 1994)

          A lively introduction to the issues of the mental language plus
          a reply to the critiques of his theory. His opponents' claim that
          referential semantics cannot provide a robust theory of inten-
          tional explanation is rebutted by positing that psychological
          laws are intentional, psychological processes are   computational
          and   the  semantic  properties  of  mental  representations  are
          referential (semantics is purely informational).
           Forbus Kenneth & DeKleer Johan: BUILDING  PROBLEM  SOLVERS  (MIT
          Press, 1993)

          A textbook that focuses on truth maintenance systems.

Forrest Stephanie: EMERGENT COMPUTATION (MIT Press, 1991)

          A collection of papers on the topic of emergent computation.
          Most papers assume that physical systems exist that can support
          computation, and analyze under which conditions computational
          processes may arise spontaneously.

          Emergent computation is to standard computation what nonlinear
          systems are to linear systems: it deals with systems whose parts
          interact in a nontrivial way.

          Chris Langton presents his theory of computation at the edge of
          chaos:  physical  systems achieve the prerequisites for the emer-
          gence of computation (i.e., transmission, storage,  modification)
          in  the vicinity of a phase transition. Specifically, information
          becomes an important factor in the dynamics of cellular  automata
          in  the  vicinity  of  the  phase transition between periodic and
          chaotic behavior. In that neighborhood, information can propagate
          over  long distances without decaying appreciably, thereby allow-
          ing for long-range correlation in  behavior  (ordered  configura-
          tions  do  not  allow  for  information  to propagate at all, and
          disordered configurations cause information to quickly decay into
          random  noise).  This conclusion is consistent with Von Neumann's
          findings.   A  fundamental  connection  is  therefore   displayed
          between computation and phase transition.
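
          A minimal sketch of Langton's lambda parameterization as
          described here (the number of states, the radius and the three
          sample values of lambda are arbitrary choices for illustra-
          tion): a random transition table is built so that a fraction
          lambda of the neighborhood configurations map to a non-
          quiescent state; low lambda gives frozen behavior, high lambda
          gives disorder, and the interesting dynamics lie in between.

            import random

            K, R = 4, 1               # states, neighborhood radius
            WIDTH, STEPS = 200, 200

            def random_rule(lam, rng):
                # A fraction lam of neighborhood configurations map
                # to a non-quiescent (non-zero) state.
                table = {}
                for code in range(K ** (2 * R + 1)):
                    if rng.random() < lam:
                        table[code] = rng.randint(1, K - 1)
                    else:
                        table[code] = 0
                return table

            def step(cells, table):
                n, out = len(cells), []
                for i in range(n):
                    code = 0
                    for j in range(i - R, i + R + 1):
                        code = code * K + cells[j % n]
                    out.append(table[code])
                return out

            rng = random.Random(1)
            for lam in (0.10, 0.45, 0.90):
                table = random_rule(lam, rng)
                cells = [rng.randrange(K) for _ in range(WIDTH)]
                for _ in range(STEPS):
                    cells = step(cells, table)
                active = sum(c != 0 for c in cells) / WIDTH
                print("lambda %.2f -> active fraction %.2f"
                      % (lam, active))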

          Kauffman  debates  orderly  dynamics  and  frozen  components  as
          requirements  for  the  evolvability of complex systems.  He also
          notes how nonlinear dynamical systems  which  interact  with  the
          external  world  classify  and  know  their  world  through their
          attractors.

          Holland, as well as Forrest, looks  at  emergent  computation  in
          classifier  systems.   Hillis  proves  that co-evolving parasites
          help improve evolution.

          A number of papers deal with connectionism.  Daniel Greening sur-
          veys  a  variety  of  parallel  simulated  annealing  techniques.
          Churchland views explanatory understanding,  perceptual  recogni-
          tion  and abductive inference as different instances of prototype
          activation.

Franklin Stan: ARTIFICIAL MINDS (MIT Press, 1995)

          An excellent interdisciplinary survey of artificial intelligence,
          cognitive   science,   artificial  life,  neurobiology.  Franklin
          presents recent theories of the mind by Chalmers,  Sloman,  Grif-
          fin, Minsky, Ornstein; describes the SOAR cognitive architecture,
          Brooks'  subsumption   architectures,   Brustoloni's   autonomous
          agents, Drescher's schemata, Kanerva's sparse distributed memory,
          Edelman's neural darwinism, Maturana's autopoiesis; discusses
          Dreyfus'  and  Penrose's  critiques  of  artificial intelligence;
          introduces the theory of dynamic systems.
           Frost Richard: INTRODUCTION TO KNOWLEDGE BASED SYSTEMS  (MacMil-
          lan, 1986)

          A comprehensive introduction to how we can build systems that are
          capable of storing and processing complex pieces of knowledge.
          Notions and techniques from database technology, formal logic,
          expert systems research and advances in natural language process-
          ing (each of which is discussed at length in a rigorous manner)
          are linked to yield the foundations of a complete and unified
          theory of knowledge representation.

          Frost covers many-sorted logics, non-monotonic logic, many-valued
          logics  (including  fuzzy logic), modal logics (alethic, deontic,
          epistemic), the main variants of temporal logic,  the  theory  of
          types,  Montague's  intensional logic and theories of uncertainty
          (probability, possibility, plausibility).

          Then Frost delves into knowledge representation techniques:  pro-
          duction rules, semantic networks, frames and scripts, and for-
          malizes the types of inference that they enable.

          Functional languages are described, with emphasis on the lambda
          calculus.

          Throughout the book a rigorous mathematical notation is employed.

Gallistel C.R.: THE ORGANIZATION OF ACTION (Erlbaum, 1980)

          The nature of intelligence lies in  the  organization  principles
          that  enable  living  organisms to make rapid adjustments of pat-
          terns of action in response to the environment.  No  movement  in
          nature  is random, it always serves the purpose of "adapting" the
          state of the system to the external  conditions.  No  matter  how
          intelligent  a  living  being's action appears to be, that action
          satisfies the same general principle. The  reason  human  actions
          look more complex than the actions of inanimate matter is
          because of the complexity of  the  human  machine,  i.e.  of  the
          brain's  neural  circuitry.  The subtleties of goal, intent, pur-
          pose are but consequences of the hierarchical synthesis of inter-
          mediate units.

          The  elementary  units  of  behavior  (reflex,  oscillator,  ser-
          vomechanism, i.e. external stimulus to internal signal to muscle
          contraction) are "catalyzed" by units at the higher levels of the
          system.  Gallistel  describes  the  interaction  principles  that
          govern the units of behavior (reciprocal facilitation, reciprocal
          inhibition, chaining, superimposition, acceleration/deceleration,
          corollary discharge, etc).  The goal is to explain how an  action
          that looks like a whole can be decomposed into many coordinated
          lower-level units.

          Drawing from Paul Weiss' concept of a central program,  Gallistel
          assumes  that  units are organized in a hierarchy that allows for
          competition and antagonism.  A  central  program  is  a  unit  of
          behavior that is activated as a whole. A central program "selec-
          tively potentiates" subsets of lower-level units according to
          their  relevance  to the current goal. The principles that deter-
          mine the "selective potentiation" of lower-level  units  are  the
          same that govern the properties of elementary units.

          Drawing from Deutsch's theory of learning, which  prescribes  how
          representations  of the world determine action, Gallistel defines
          cognition as the representation of the world stored in memory.

          Gallistel therefore argues in favor  of  innate  knowledge,  i.e.
          universal principles of behavior.

          The book contains reprints of  historical  papers  (Sherrington's
          study of the reflex, Von Holst's oscillators, Wilson's on coordi-
          nation, Fraenkel's analysis of geotaxis) and a wealth of  experi-
          mental data.
           Galton Antony: TEMPORAL LOGICS (Academic Press, 1987)

          Six essays from authoritative researchers in the field of tem-
          poral logic.  Galton provides an overview of both the first-order
          (Davidson, McDermott, Allen, Kowalski) and  the  modal  (Prior's)
          approaches.   Sadri  discusses  in  detail Kowalski's calculus of
          events, Lee's logic of time and events, Allen's  temporal  logic.
          Galton presents his logic of occurrence.

          In his logic of aspect an event-radical is a complete  expression
          that is neither a proposition nor a name, but it denotes an event
          type. Occurrences are event tokens: each single occurrence of  an
          event type is an occurrence.  Aspect operators (perfect, progres-
          sive and prospective) are applied to event-radicals to yield pro-
          positions.   Such  operators  express the occurrence of events in
          time. The logic of occurrence is the logic of such operators.

Galton Antony: THE LOGIC OF ASPECT (Clarendon Press, 1984)

          "Aspect" refers to  the fact that every verb has two  forms,  the
          imperfective (used to describe an action in progress) and perfec-
          tive (used to describe a completed action).  Aspect is related to
          tense:  aspect  determines how tense has to be interpreted (e.g.,
          perfective aspect is incompatible with present tense).

          Prior worked out a logic of tenses. Galton extends that logic  by
          introducing  a  distinction between events (which are perfective)
          and states (imperfective).  States "obtain" in  moments,  whereas
          events "occur" in intervals.  Aspects are treated like operators.
          Prior's two temporal operators are still  applied  to  states  to
          obtain  new  states  but  two new operators transform events into
          states and two more transform states into events.
           Gamut L.T.F.: LOGIC, LANGUAGE AND MEANING  (University  of  Chi-
          cago, 1990)

          J. van Benthem, J. Groenendijk, D. de Jongh, M. Stokhof and H. Ver-
          kuyl provide a broad introduction to the standard and intensional
          logics, pragmatics and Montague's grammar.

Gardner Howard: MIND'S NEW SCIENCE (Basic, 1985)

          A history of cognitive research, that spans  cybernetics,  neuro-
          physiology   (Lashley,  Hebb),  philosophy  of  the  mind  (Ryle,
          Wittgenstein,  Austin),  psychology   (Miller,   James,   Kohler,
          Bartlett,  Piaget), artificial intelligence, linguistics, anthro-
          pology, biology (Gibson, Marr).

Gardner Howard: FRAMES OF MIND (Basic, 1983)

          Gardner argues that there  is  no  single,  unified,  indivisible
          intelligence,  but  rather a set of independent intellectual com-
          petences. Gardner finds the biological  foundations  of  intelli-
          gence in the plasticity of the neural system during development.
           Garnham Alan & Oakhill Jane: THINKING AND REASONING  (Blackwell,
          1994)

          A cognitive  psychology  approach  to  inference  (deduction  and
          induction),  creativity,  common  sense  and  the  development of
          cognition.

Gazdar Gerald: PRAGMATICS (Academic Press, 1979)

          Pragmatics  studies aspects of meaning that cannot  be  accounted
          for by reference to truth conditions. Pragmatics deals with mean-
          ing minus truth conditions, or meaning minus  semantics.  In  his
          approach  to  the  field  Gazdar  employs a formalist methodology
          analogous to the one applied by Montague to semantics.

          Gazdar offers a critique of the  theory  of  illocutionary  force
          based  on the performative hypothesis (that the deep structure of
          every sentence contains a performative verb).

          After recapitulating Grice's treatment of implicatures and four
          maxims,  Gazdar  proposes  to replace the quality maxim with "say
          only that which you know", so that implicatures due to the  maxim
          of  quality  (both  scalar implicatures and clausal implicatures)
          can be treated  as  Hintikka's  epistemic  implications,  thereby
          solving  the  "projection  problem" (how the presuppositions of a
          sentence are determined by those of its components).

          After a reasoned critique of existing treatments  of  presupposi-
          tion  (Hausser,  Katz,  Langendoen, Stalnaker, Karttunen), Gazdar
          offers his definition, drawing from Hamblin's "commitment  store"
          model  of  dialogue  and Bar-Hillel's view of an utterance as the
          pair of a sentence and a context.   Gazdar  offers  an  inductive
          definition  of context (a set of propositions constrained only by
          consistency) and uses Stalnaker's pragmatic  definition  of  sen-
          tence meaning.
           Gazdar Gerald: GENERALIZED PHRASE STRUCTURE GRAMMAR (MIT  Press,
          1985)

          Gazdar abandons  the  transformational  component  and  the  deep
          structure of Chomsky's model of grammar and focuses on rules that
          analyze syntactic trees rather than generate them. The rules
          translate natural language sentences into an intensional logic
          which is a variant of the lambda calculus.

          Gazdar's grammar describes only context-free languages and  exhi-
          bits  mathematical properties that, unlike Chomsky's grammar, can
          be scientifically tested and falsified. A  phrase-structure  rule
          is not a generative rule but a condition of compliance for a syn-
          tactic tree. The semantic interpretation of a sentence is derived
          directly from its syntactic representation.

          Gazdar defines 43 rules  of  grammar  each  providing  a  phrase-
          structure  rule and a semantic-translation rule that shows how to
          build an intensional-logic expression from the  intensional-logic
          expressions  of  the  constituents  of the phrase-structure rule.
          Gazdar employs meta-rules to produce  new  rules  (and  therefore
          derived categories) from the existing rules.
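
          A toy illustration of pairing a phrase-structure rule with a
          semantic-translation rule (the two rules, the lexicon and the
          first-order translations below are invented for illustration
          and are far simpler than Gazdar's intensional-logic rules):
          each rule licenses a local tree and states how the mother's
          meaning is built from the daughters' meanings.

            # Rule: (mother, daughters) -> translation, which builds
            # the mother's semantics from the daughters' semantics.
            rules = {
                ("S", ("NP", "VP")): lambda np, vp: vp(np),
                ("VP", ("V", "NP")):
                    lambda v, np: (lambda subj: v(subj, np)),
            }

            lexicon = {
                "john": ("NP", "john"),
                "mary": ("NP", "mary"),
                "likes": ("V",
                          lambda s, o: "likes(%s,%s)" % (s, o)),
            }

            def interpret(tree):
                # A tree is a word or (category, [subtrees]);
                # returns (category, semantics).
                if isinstance(tree, str):
                    return lexicon[tree]
                cat, subtrees = tree
                ds = [interpret(t) for t in subtrees]
                rule = rules[(cat, tuple(d[0] for d in ds))]
                return cat, rule(*[d[1] for d in ds])

            tree = ("S", ["john", ("VP", ["likes", "mary"])])
            print(interpret(tree)[1])     # -> likes(john,mary)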
           Gazzaniga Michael &  LeDoux  Joseph:  INTEGRATED  MIND  (Plenum,
          1978)

          Based on the results of  split  brain  experiments,  the  authors
          present a theory that what is transferred between the hemispheres
          is neural codes to maintain an informational balance and  provide
          for mental unity.

          The authors criticize the view that the two hemispheres are highly
          specialized units, and reduce lateralization to the lateraliza-
          tion of linguistic skills.

Gazzaniga Michael: SOCIAL BRAIN (Basic, 1985)

          Humans are more of a sociological entity than  a  single  unified
          psychological entity. The human brain is social.

          Gazzaniga's model  of  the  brain  (and  the  mind)  is  modular:
          independent  units  work  in  parallel.  A  special module is the
          "interpreter" of behavior, which makes sense ex-post of even  the
          most  capricious  acts. Beliefs are created by the interaction of
          the interpreter with the other modules.  Gazzaniga looks for evi-
          dence  of  his theory in neurophysiology, archaeology and anthro-
          pology.

Gazzaniga Michael: NATURE'S MIND (Basic, 1992)

          Gazzaniga emphasizes that innate  factors  play  a  key  role  in
          determining  human  behavior.  Following Edelman, brains are born
          with a vast number of pre-wired circuits,  but  most  often  they
          offer  many  alternative  options  for  development.   Experience
          determines which of these pre-existing brain circuits  are  used.
          Many possible connections can be made, but only some are selected
          by experience.
           Genesereth Michael & Nilsson Nils: LOGICAL FOUNDATIONS OF ARTIF-
          ICIAL INTELLIGENCE (Morgan Kaufman, 1987)

          A textbook on Artificial Intelligence that covers production sys-
          tems  (predicate  calculus, deduction, resolution), some nonmono-
          tonic logics (closed-world assumption,  circumscription,  default
          theory),  inductive  learning, probabilistic reasoning, logics of
          belief, and planning. The last  chapter  attempts  to  define  an
          intelligent agent at three levels: a tropistic agent, that simply
          reacts to the environment; a hysteretic agent, that has an inter-
          nal state; and a knowledge-level agent, whose internal state is
          basically determined by a production system.

Gell-Mann Murray: THE QUARK AND THE JAGUAR (W.H.Freeman, 1994)

          A book on complexity (i.e., nonlinearity) that  tries  to  bridge
          the  simple (e.g., elementary particles) and the complex (e.g., a
          living organism).

          "It is not simple to define simple". Gellman defines  it  as  the
          absence of complexity.

          According to superstring theory subatomic particles are compacti-
          fied hyperdimensional space (matter having originated when six of
          the original dimensions of space collapsed into superstrings).

          Gell-Mann provides a modern account of quantum mechanics, based on
          Richard  Feynman's view of many alternative possible histories of
          the universe as a direct consequence of chance.  The  probabilis-
          tic  nature of quantum mechanics allows the universe to unfold in
          an infinite number of ways. The second law of thermodynamics per-
          mits  the  temporary  growth  of  order  in  relatively isolated,
          energy-driven systems.

          Complex adaptive systems behave in accordance with the second law
          of  thermodynamics.  Biological  evolution  is a complex adaptive
          system that complies with that law once the  entire  environment,
          and  not  only  the  single organism, is taken into account. Once
          complex  adaptive  systems  establish  themselves  they   operate
          through  a  cycle  that  involves  variable schemata, randomness,
          phenotypic consequences and feedback of  selection  pressures  to
          the competition among schemata.

          Living organisms dwell "on the edge of chaos",  as  they  exhibit
          order  and  chaos at the same time, and they must exhibit both in
          order to survive.  Living organisms are complex adaptive  systems
          that  retrieve  information  from  the  world, find regularities,
          compress them into a schema  to represent the world, predict  the
          evolution of the world and prescribe behavior for themselves. The
          schema may undergo variants that compete with one another.  Their
          competition  is  regulated  by feedback from the real world under
          the form of selection  pressure.   Disorder  is  useful  for  the
          development  of new behavior patterns that enable the organism to
          cope with a changing environment.
           Gibson James Jerome: THE SENSES CONSIDERED AS PERCEPTUAL SYSTEMS
          (Houghton Mifflin, 1966)
          Gibson originated "ecological realism", the view that meaning  is
          located  in the interaction of living things and the environment.
          Perceiving is a process of picking up information that is  avail-
          able  in  the  environment.  Perception is a constant process and
          consists in detecting the invariants. The function of  the  brain
          is  to orient the organs of perception for seeking and extracting
          information from the continuous energy flow of the environment.
          Perception  cannot be separated from the environment in which the
          perceptive system evolved  and  from  the  information  which  is
          present  in  that environment.  There is much more information in
          the world and less in the head than  was  traditionally  assumed.
          The environment must be viewed as a source of stimulation.

          Conscious sensation and perception are two different  things  and
          they  are  often  independent.  Perceptual systems are sources of
          information.  Sensations are sources of conscious qualities.  The
          inflow of information does not always coincide with the inflow of
          sensations. Therefore, a study of sensations is not  very  useful
          to a study of perceptions.

          Perceptual organs are not passive. They can orient themselves  to
          pick  up  information,  to "resonate" with the information in the
          environment. Gibson goes to great lengths to explain the details
          of their functioning.
           Gibson James  Jerome:  THE  ECOLOGICAL  APPROACH  TO  PERCEPTION
          (Houghton Mifflin, 1979)

          According to Gibson the correct context for a theory of action is
          not the abstract space of objects and their relationships but the
          real world of shapes and colors as it is presented by the senses.
          Perception  and action are not separate processes. Organisms move
          in the world using all the information that is available in it.

          Information originates from the interaction between the  organism
          and its environment.

          An "affordance" measures conjunctions between the characteristics
          of the organism and the environment. All the potential uses of an
          object constitute the activities it affords (e.g. a  pen  affords
          writing).  Such uses are directly perceivable.
           Ginsberg Matthew: READINGS IN NONMONOTONIC LOGIC  (Morgan  Kauf-
          mann, 1987)

          Ginsberg introduces the problems that led to the  development  of
          nonmonotonic  logics,  first and foremost property inheritance by
          default (such as "Tweety is a  bird"  implies  that  "Tweety  can
          fly").  Then  he  classifies  formal  approaches  to nonmonotonic
          inference:  proof-based  approaches  (Raymond  Reiter's   default
          logic),  modal  approaches  (Drew McDermott's nonmonotonic logic,
          Robert Moore's autoepistemic logic) and  minimization  approaches
          (John McCarthy's circumscription).

          Reiter's "A logic for default reasoning"  (1980)  introduces  the
          "closed  world axiom" (what is not true is false), or negation as
          failure to derive (if a ground atomic formula  cannot  be  proved
          using  the  premises,  then  assume the formula's negation), then
          defines his default logic with the following inference rule:  "if
          A is true and is consistent that B is true, then assume that B is
          also true" (or "if a premise if true,  then  the  consequence  is
          also true unless a condition contradicts what is known").

          McDermott's formulation of modal  logic  (1980)  is  based  on  a
          coherence  operator ("P is coherent with what is known" if P can-
          not be proven false by what is known).

          Moore's "Semantical consideration of nonmonotonic  logic"  (1985)
          removed  some  of the problems with McDermott's with his "autoep-
          istemic logic",  based  on  the  notion  of  belief  (related  to
          McDermott's  coherence), which extends Hintikka's epistemic modal
          logic to incorporate action.  The logic models the beliefs of  an
          agent  reflecting  upon  his  own beliefs.  Moore also provides a
          possible-world semantics for his logic.

          In 1987 Kurt Konolige proved that autoepistemic and default logic
          are formally identical.

          Yoav Shoham (1987) argues that  all  approaches  to  nonmonotonic
          reasoning  can be reduced to a partial ordering on the set of all
          models for a theory.

          Ginsberg argues that a variety of approaches to nonmonotonic rea-
          soning can be unified by resorting to multi-valued logics.

          Jon Doyle's "Truth  Maintenance  System"  (1979)  was  the  first
          effective  computational  frameworks  for  default reasoning.  It
          consists of a problem solver that draws inferences and  a  system
          that  records  those  inferences (or "justifications").  It main-
          tains beliefs and justifications for beliefs.   It  ensures  that
          the  database is free of contradictions by identifying and adding
          justifications  to  remove  contradictions  whenever   they   are
          discovered (dependency-directed backtracking).

          Johan de Kleer (1986) improved the concept with his "assumption-
          based" TMS, which labels each proved proposition with the sets of
          premises needed to derive it (its "context").

          In 1986 McDermott introduced the  "temporal  projection  problem"
          (which occurs when trying to infer which facts are true once a
          sequence of events has occurred) and proved that none of the
          nonmonotonic  approaches  can  deal  with  it.   The logic should
          select not the minimal models, but  the  chronologically  minimal
          models.

          Shoham's "Chronological ignorance" (1986) formalizes the idea  of
          chronological minimization by temporally ordering the conflicting
          extensions that underlie it and preferring the later ones  (those
          in which abnormality occurs as late as possible).

          By employing possible worlds, Ginsberg's "Reasoning about action"
          (1987)  solves the frame, ramification and qualification problems
          and circumvents the temporal projection problem. As in Richard
          Fikes' STRIPS, a single model of the world is updated when
          actions are performed by constructing the nearest  world  to  the
          current  one  in which the consequences of the actions under con-
          sideration hold.  The nearest  world  is  found  by  constructing
          proofs  of  the  negation  of  the  explicit  consequences of the
          expected action and by removing a premise in each proof from  the
          current world.

          The book contains McCarthy's "Some  philosophical  problems  from
          the standpoint of Artificial Intelligence" (1969), "Epistemologi-
          cal problems of Artificial Intelligence" (1977), and
          "Circumscription" (1980).

          The first article is the one that introduced situation calculus
          and  the  frame  problem. The second identifies the qualification
          problem and the third one details his theory of circumscription.

          According to  McCarthy,  knowledge  representation  must  satisfy
          three  fundamental  requirements:  ontological (must allow one to
          describe the  relevant  facts),  epistemological  (allow  one  to
          express  the relevant knowledge) and heuristic (allow one to per-
          form the relevant  inference).  Artificial  Intelligence  can  be
          defined as the discipline that studies what can be represented in
          a formal manner  (epistemology)  and  computed  in  an  efficient
          manner  (heuristic).   The  language  of  logic  satisfies  those
          requirements: it allows us to express everything we know  and  it
          allows  us  to make computations on what is expressed by it. Each
          set of knowledge is in fact a mathematical theory.

          McCarthy's  situation  calculus  represents  temporally   limited
          events  as "situations" (snapshots of the world at a given time),
          by associating a situation of the world (set of  facts  that  are
          true)  to  each moment in time.  Actions and events are functions
          from states to states.  An interval of  time  is  a  sequence  of
          situations, a "chronicle" of the world.  The history of the world
          is a partially ordered sequence of states and actions.  The  pro-
          perty of states is permanence, the property of actions is change.
          Each situation is expressed in a formula of first-order predicate
          logic.   Causal relations between two situations can then be com-
          puted. A state is expressed by means of logical expressions that
          relate objects in that state. An action is expressed by a func-
          tion that relates each state to another state.
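
          The core idea can be sketched in a few lines of Python
          (illustrative only, not McCarthy's logical axiomatization): a
          situation is a set of facts that hold at a moment, and an action
          is a function from one situation to the next; the facts and the
          "move" action below are invented examples.

          # Situations as frozensets of fact tuples; actions as functions
          # from situation to situation.
          S0 = frozenset({("at", "robot", "room1"),
                          ("door_open", "room1", "room2")})

          def move(agent, src, dst):
              def action(s):
                  if ("at", agent, src) in s and ("door_open", src, dst) in s:
                      return (s - {("at", agent, src)}) | {("at", agent, dst)}
                  return s      # preconditions not met: situation unchanged
              return action

          S1 = move("robot", "room1", "room2")(S0)
          print(("at", "robot", "room2") in S1)    # True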

          McCarthy's "frame problem" states that  it  is  not  possible  to
          represent  what does not change in the universe as a result of an
          action, as that is an infinite set of facts.  Complementary para-
          doxes  are  the  "ramification  problem"  (infinite things change
          because one can go into greater and greater  detail  of  descrip-
          tion)  and  the  "qualification problem" (the number of precondi-
          tions to an action is also infinite).

          Circumscription deals with default inference by minimizing abnor-
          mality:  an  axiom  that  states what is abnormal is added to the
          theory of what is known (predicate circumscription).  The objects
          that  can be shown to have a certain property, from what is known
          of the world, are all the objects that satisfy that property (or,
          the  only  individuals  for  which  that property holds are those
          individuals for which it must hold).  This definition involves  a
          second-order  quantifier.  This is analogous to Frege's method of
          forming the second-order definition of a set of  axioms:  such  a
          definition  allows  both the derivation of the original recursive
          axioms and an induction scheme stating that nothing  else  satis-
          fies those axioms.
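
          In the usual second-order notation (a standard rendering, not a
          quotation from McCarthy's paper) the circumscription of a
          predicate P in a theory A(P) is written

              Circ[A;P] = A(P) \wedge
                \forall p [ (A(p) \wedge \forall x (p(x) \rightarrow P(x)))
                            \rightarrow \forall x (P(x) \rightarrow p(x)) ]

          i.e. P satisfies the theory and no strictly smaller predicate p
          does, which is the formal rendering of "the only individuals for
          which the property holds are those for which it must hold".
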
           Glass Leon & Mackey Michael: FROM  CLOCKS  TO  CHAOS  (Princeton
          University Press, 1988)

          The authors propose nonlinear models for dynamic processes
          occurring in body organs (biological oscillators).
           Gleick James: CHAOS (Viking, 1987)
          The best seller  that  made  chaos  theory  fashionable.  Besides
          exposing  the  theory  in ordinary language, and highlighting its
          applications to many different disciplines, it  provides  a  pic-
          turesque  chronicle  of the field.  Chaos theory is about finding
          regularities in the irregular behaviors of nature,  i.e.  in  the
          behavior  of  nonlinear  systems. Chaotic systems are a subset of
          nonlinear systems in which small changes  in  initial  conditions
          yield big changes in behavior.
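
          The standard toy illustration of this sensitivity is the logis-
          tic map x -> r*x*(1-x) in its chaotic regime: two trajectories
          that start almost identically soon diverge completely. A sketch
          (the parameter r = 4.0 and the 1e-9 offset are arbitrary
          choices):

          # Sensitivity to initial conditions in the logistic map.
          r = 4.0
          x, y = 0.2, 0.2 + 1e-9
          for step in range(60):
              x, y = r * x * (1 - x), r * y * (1 - y)
              if step % 10 == 9:
                  print(step + 1, round(x, 6), round(y, 6),
                        round(abs(x - y), 6))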

Glezer Vadim: VISION AND MIND (Lawrence Erlbaum, 1995)

          Starting from a description  of  how  the  visual  system  works,
          Glezer  develops  a  detailed neural theory of how categories are
          formed from sensory inputs  through  functional  organization  of
          neural structures.

          The mind somehow models the external world. Comprehension of  the
          world  is  achieved  through functional modules in the neocortex,
          whose first task is to segment the sensory input. This is done in
          a  way  consistent  with  Gabor's  quantum  theory of information
          (indeterminacy between the description of signals in space and in
          spatial-frequency  domains,  i.e. duality between space and spec-
          trum). The harmonics of each module have different properties  (a
          Fourier analysis is provided).

          An invariant representation of the object is localized in the
          left hemisphere, i.e. the left hemisphere uses a classification
          approach for recognition, whereas the right hemisphere uses a
          structural  approach.  Invariance emerges as a property of neural
          nets and Hebbian learning (an algorithm  for  the  production  of
          invariant representations of an image is provided).
           Gluck Mark & Rumelhart  David:  NEUROSCIENCE  AND  CONNECTIONIST
          THEORY (Lawrence Erlbaum, 1990)
          A collection of articles on how brain regions can be  modeled  to
          account for function, complexity and power.
           Goddard Cliff & Wierzbicka Anna: SEMANTIC AND LEXICAL UNIVERSALS
          (Benjamins, 1994)

          A collection of papers on the theme of Leibniz's universal alpha-
          bet  of  thought, the set of semantic and lexical universals that
          are supposed to be common to all languages.  At  the  end  Wierz-
          bicka gives a critical account of all the primitives (37 of them)
          that have been identified.  "Canonical" sentences are those  con-
          structed out of such primitives.

Goldberg David: GENETIC ALGORITHMS (Addison Wesley, 1989)

          This is the book that explained what Holland's theories were  all
          about.  Goldberg defines genetic algorithms as "search algorithms
          based on the mechanics of natural selection  and  natural  genet-
          ics".  Unlike  most optimization methods, that work from a single
          point in the decision space and employ  a  transition  method  to
          determine  the next point, genetic algorithms work from an entire
          "population" of points simultaneously, trying many directions  in
          parallel  and  employing  a  combination  of several genetically-
          inspired methods to determine the next population of points.

          Goldberg focuses on efficiency issues and possible applications.

          Simple algorithms such as reproduction (that  copies  chromosomes
          according  to  a fitness function), crossover (that switches seg-
          ments of two chromosomes) and mutation are discussed, as well  as
          more   complex  algorithms  such  as  dominance  (a  genotype-to-
          phenotype mapping), diploidy (pairs of chromosomes) and  abeyance
          (shielded  against overselection); inversion (the primary natural
          mechanism for recoding a problem, by switching two  points  of  a
          chromosome); and many micro-operators.
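
          A minimal genetic algorithm in the spirit of the reproduction,
          crossover and mutation operators described above (a toy sketch:
          the "count the 1-bits" fitness function, population size and
          rates are invented, and none of the more complex operators are
          included):

          # Fitness-proportionate reproduction, one-point crossover,
          # bit-flip mutation on binary chromosomes.
          import random

          def fitness(chrom):
              return sum(chrom)                     # toy "OneMax" problem

          def select(pop):
              # roulette-wheel (fitness-proportionate) selection
              return random.choices(pop,
                                    weights=[fitness(c) + 1 for c in pop])[0]

          def crossover(a, b):
              point = random.randint(1, len(a) - 1)  # one-point crossover
              return a[:point] + b[point:]

          def mutate(chrom, rate=0.01):
              return [bit ^ 1 if random.random() < rate else bit
                      for bit in chrom]

          pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
          for generation in range(50):
              pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]
          print(max(fitness(c) for c in pop))        # approaches 20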

          A classifier system is a machine learning system that learns
          syntactic rules (or "classifiers") to guide its performance in
          the environment. A classifier system consists of three main  com-
          ponents:  a  production  system,  a  credit  system  (such as the
          "bucket brigade") and a genetic algorithm.  Goldberg verifies  an
          important  properties  of  Holland's  classifiers:  the  trend to
          create "standard hierarchies", in which  a  general  rule  covers
          normal  situations  but  many  exception rules take over in those
          situations where the default rule would not work.

Goldfield Eugene: EMERGENT FORMS (Oxford Univ Press, 1995)

          The book offers a theory of how functional acts (such as  eating,
          walking,  smiling)  emerge during infancy: through the assembling
          of a variety of biodynamic devices. The theory can be extended to
          cognition and language.
           Goodwin Brian:  HOW  THE  LEOPARD  CHANGED  ITS  SPOTS  (Charles
          Scribner, 1994)

          The organism, and not the gene, should be the focus of  attention
          for  evolutionary biologists. Goodwin argues in favor of a theory
          of morphogenesis as a process that is inherently ordered.  Genes'
          instructions are constrained by a principle of order.
           Gould Stephen Jay: ONTOGENY AND PHYLOGENY (Harvard University
          Press, 1977)

          Gould reviews the debate on "recapitulation", the idea that onto-
          geny  (individual  development)  recapitulates phylogeny (species
          development) and advances a theory that views "heterochrony"
          (changes in developmental timing that produce parallels
          between ontogeny and phylogeny) as evolutionarily crucial. Retarda-
          tion  (delayed growth and development), for example, has probably
          been fundamental for the evolution of humans, by prolonging  into
          later  life rapid brain growth and therefore an increase in cere-
          bralization.

Gould Stephen Jay: EVER SINCE DARWIN (Deutsch, 1978)

          An  accessible  introduction  to  darwinism,  neo-darwinism   and
          Gould's  own theory of punctuated equilibria (changes appear sud-
          denly in lineages) and non-repeatability of evolution (if  evolu-
          tion had to happen again, it would not repeat itself).

          The paleontological  record  shows  no  steady  progress  in  the
          development  of  higher  organisms. Evolution seems to proceed in
          bursts.

          Consciousness  is  probably  the  latest  burst  of  evolutionary
          development.

          Gould also touches on theories of the earth  and  the  nature  of
          science.

Gould Stephen Jay: WONDERFUL LIFE (Norton, 1989)

          A popular introduction to the significance of the findings of the
          Burgess  Shale.  Gould advances intriguing hypotheses: any replay
          of the tape of life would yield a different, unpredictable evolu-
          tionary  history, but still a meaningful one. Evolution is not in
          the hands of determinism and not in the hands of randomness,  but
          in the hands of contingency.  In the case of the creatures of the
          Burgess Shale, survival was so unlikely that chance events may
          well have shaped evolution more than fitness. Humans exist
          because of a lucky chain of events that led to them, but they
          might just as well never have appeared.

Gould Stephen Jay: FULL HOUSE (Random House, 1996)

          Gould shatters stereotypes about evolution by claiming that  bac-
          teria  represent  the  dominant form of life.  Gould restates his
          point that evolution does not proceed towards complexity but ran-
          domly  produces  variety.   Progress  is purely accidental.  With
          Darwin, there is only variation, not progress.  Gould  reiterates
          his theory of punctuated equilibrium: mostly nothing happens,
          but when it happens, it happens quickly. Consciousness evolved
          only once in all the experiments life performed on Earth (whereas
          eyes evolved several dozen times, and  wings  even  more  often).
          Consciousness  is  therefore  unlikely  to occur, and human cons-
          ciousness must be considered a sheer accident.  If  the  tape  of
          life  were  played  back  again,  it is unlikely that a conscious
          being would emerge.  On the other hand life may be more  probable
          than  it  appears  to  be: it happened on the Earth as soon as it
          could happen.
           Graf Peter & Masson Michael: IMPLICIT MEMORY (Lawrence  Erlbaum,
          1993)

          A technical introduction to the field of  explicit  and  implicit
          memory.   Each  chapter  is  written  by  an expert in the field.
          Implicit memories are those in which experiences  influence  per-
          formance in the absence of specific intention to recollect them.
           Graubard Stephen: THE ARTIFICIAL INTELLIGENCE DEBATE (MIT Press,
          1988)

          A collection of more or less philosophical articles on the feasi-
          bility of Artificial Intelligence. Hubert and Stuart Dreyfus draw
          from Heidegger and Wittgenstein to affirm their conception of a
          holistic intelligence that cannot be broken down into knowledge
          representation systems or neural  networks,  of  an  intelligence
          that is driven by intentions which reflect the environment.  Put-
          nam even downplays the historical importance of Artificial Intel-
          ligence.

          The book also contains introductions to current research in  con-
          nectionist models, machine vision, etc.

Green David: COGNITIVE SCIENCE (Blackwell, 1996)

          A textbook on cognitive science that covers the history  and  the
          main topics of this discipline in a conversational style.

Green Georgia: PRAGMATICS (Lawrence Erlbaum, 1989)

          Pragmatics is defined as the study of  understanding  intentional
          human action.  Therefore it must deal with belief, goal, plan and
          act.  Green surveys indexical and anaphoric expressions  (expres-
          sions  whose  reference  cannot be determined without taking into
          account the context, such as pronouns and  demonstratives,  whose
          interpretation  requires  inferences about the speaker's intended
          referent), sense and reference (Frege's distinction of  extension
          and intension, Kripke's and Putnam's causal theory of names,
          Kripke's distinction of rigid designators and non-rigid  designa-
          tors  in  the  context  of possible worlds, and Montague's inten-
          sional logic in which the sense of an expression is supposed to
          determine its reference).

          Green deals at length with illocutionary force (what action an
          utterance  is  performing) and presupposition (the facts that are
          taken for granted), two linguistic phenomena without  which  many
          utterances  could not be evaluated.  Grice's maxims are presented
          as the basis to assess the coherence of a discourse.  Metaphor is
          treated   as   something  different  from  implicature  (indirect
          speech), but similar in terms of  the  strategies  that  must  be
          employed to understand it.

Greene Robert: HUMAN MEMORY (Lawrence Erlbaum, 1992)

          A comprehensive survey of cognitive models from  the  perspective
          of cognitive psychology.

Gregory Richard: OXFORD COMPANION TO THE MIND (Oxford, 1987)

          A monumental, detailed, accurate reference book.  An alphabetical
          dictionary  of  mental  phenomena  and  brain  anatomy,  spanning
          psychology, philosophy and neurophysiology.  Each entry is  writ-
          ten  by  experts in the field and reviews the state of the art on
          the subject.
           Grice H. Paul: STUDIES IN THE WAY OF WORDS (Harvard Univ  Press,
          1989)
          Grice thinks that language is based  on  a  form  of  cooperation
          among  the  speakers.  People  always choose the speech acts that
          achieve the goal with minimum cost and highest efficiency.

          Grice was influential in  emphasizing  the  linguistic  interplay
          between  the  speaker,  who  wants  to be understood and cause an
          action, and the listener.  This goes beyond syntax and semantics.
          A  sentence  has a timeless meaning, but also an occasional mean-
          ing: what the speaker meant to  achieve  when  s/he  uttered  it.
          Language  has  meaning  to  the extent that some conventions hold
          within the  linguistic  community.  Those  conventions  help  the
          speaker  achieve his/her goal. The participants of a conversation
          cooperate in saying only what makes sense in that circumstance.

          The significance of an utterance includes both what is said  (the
          explicit)  and what is implicated (the implicit). Grice therefore
          distinguishes the proposition expressed from the proposition
          implied, or the "implicature". Implicatures exhibit proper-
          ties of cancellability (the implicature can  be  removed  without
          creating  a  contradiction) and calculability (an implicature can
          always be derived by reasoning  under  the  assumption  that  the
          speaker  is  observing  pragmatic  principles).  A particularized
          implicature is one that is such in virtue of the context.  A gen-
          eralized implicature is independent of the context.

          Grice's four maxims summarize those conventions. They help the
          speaker say more than s/he literally says, through implicatures
          which can be implied by the utterance. Conventional implicatures
          are determined by linguistic constructions in the utterance.
          Conversational implicatures follow from maxims of truthfulness,
          informativeness, relevance and clarity that speakers are assumed
          to observe. Conversational implicatures can be discovered through
          an inferential process: the hearer can deduce that the speaker
          meant something besides what he said from the fact that what he
          said led the hearer to believe something and the speaker did not
          do anything to stop him from thinking it.

          The maxims are: provide as much information as needed in the con-
          text,  but  no more than needed (quantity), tell true information
          (quality), say only things  that  are  relevant  to  the  context
          (relation), avoid ambiguity as much as possible (manner).

Griffin Donald: ANIMAL THINKING (Harvard University Press, 1984)

          A study of animals' minds. Griffin  claims  that  smaller  brains
          have a greater need to think because they can store less infor-
          mation. The only way they can cope with their environment is by
          thinking more.

Grishman Ralph: COMPUTATIONAL LINGUISTICS (Cambridge, 1986)

          An  introduction  to  the  field  (syntax,  semantics,  discourse
          analysis  and language generation) that provides detailed discus-
          sions of various parsing techniques, a brief discussion  on  ana-
          phora resolution, a survey of frames and scripts.
           Grossberg Stephen: NEURAL NETWORKS AND NATURAL INTELLIGENCE (MIT
          Press, 1988)

          The book explores a number of phenomena and proposes a  potential
          explanation in terms of neural dynamics.

          Grossberg's connectionist model, consistent with Hebb's  law  and
          Pavlov's  conditioning,  reduces  a  cognitive state to a dynamic
          state of "adaptive resonance", expressed by  a  non-linear,  non-
          stable and non-local algorithm.

          The essential element of the cognitive system is the long-lasting
          state of adaptive resonance reached when the feedback matches the
          input pattern.  Inspired by Helmholtz, Grossberg thinks  that  we
          perceive  the  sensory  data  only  when  a  consensus is reached
          between what the data are and what we expect them  to  be,  given
          what we already know. This competitive negotiation is reached
          through a feedback process.  Grossberg's theory of memory is non-
          linear, nonlocal and nonstationary.
           Grosz Barbara: READINGS IN NATURAL LANGUAGE  PROCESSING  (Morgan
          Kaufman, 1986)

          Focusing mainly on  discourse  analysis  and  historical  natural
          language  systems, it contains seminal papers by Perrault, Hobbs,
          Grosz, Wilensky.

          Perrault touches  on  mathematical  properties  of  some  popular
          linguistic   formalisms:   transformational   grammars,   Jerrold
          Kaplan's and Joan Bresnan's lexical-functional  grammars,  Gerald
          Gazdar's  generalized  phrase-structure  grammar and Joshi's tree
          adjunction grammar.

          Woods's 1970  paper  describes  a  parser  for  non  context-free
          languages: augmented transition networks, an extension of recur-
          sive transition networks (directed graphs with labeled  arcs  and
          nodes)  that  have  registers  for storing partial parse trees or
          flagging features and conditions for testing registers to  deter-
          mine  how  to proceed. ATN's are equal in capacity to transforma-
          tional grammars.

          In 1980 Fernando Pereira and David Warren introduced a  formalism
          (definite  clause  grammar)  to  define  grammars  based on Horn-
          clauses: grammar rules are written as logical  formulas.  Parsing
          is a form of theorem proving.

          Coming to semantics, Drew McDermott's "No notation without  deno-
          tation"  played the role of a manifesto for systematic semantics:
          it is not only important that a system be correct, it is also
          important that it can be understood.

          Grosz thinks that dialogues exhibit a structure  much  like  sen-
          tences  and  this  structure affects the use of referring expres-
          sions. This "intentional" structure is characterized by a  global
          focus  of  attention and a number of immediate foci of attention.
          Grosz analyzes the relationship between focus  of  attention  and
          referring  expressions.  Grosz defines a focus space as that sub-
          set of the speaker's total  knowledge  which  is  relevant  to  a
          discourse segment. Several such spaces may be relevant at a time.

          A number of papers cover  the  integration  of  natural  language
          understanding  and  planning  techniques (although Richard Fikes'
          STRIPS is not mentioned).  Raymond Perrault  and  his  associates
          model  speech acts as operators and intentions as plans. Wilensky
          uses plan recognition for story understanding.

          Historical natural language processing systems from the Seventies
          reviewed  here  include William Woods' LUNAR, Daniel Bobrow's GUS
          and Terry Winograd's SHRDLU.
           Gupta Anil & Belnap Nuel: THE  REVISION  THEORY  OF  TRUTH  (MIT
          Press, 1993)

          According to Gupta's revision theory of truth, originally  formu-
          lated  in  the early 80's, truth is a circular concept. Therefore
          all paradoxes that arise from  circular  reasoning  in  classical
          logic fall into normality in Gupta's theory of truth.

          In Gupta's "revisionist theory of truth" truth is refined step by
          step.  In order to determine all the sentences of a language that
          are true when that language includes a truth predicate (a  predi-
          cate that refers to truth), one needs to know what the extension
          of "true" is, while that extension is precisely what one is
          trying to determine. The solution is to assume an initial
          extension of "true" and then gradually revise it.
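
          In the standard presentation the revision step is an operator
          over a ground model M and a hypothesis H about the extension of
          "true" (this is the usual textbook formulation, not a quotation
          from the book):

              \tau_M(H) = \{ \phi : M + H \models \phi \}

          and a revision sequence is obtained by iterating \tau_M from an
          arbitrary initial hypothesis.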

Haken Hermann: SYNERGETICS (Springer-Verlag, 1977)

          Synergetics is a theory of pattern formation in complex  systems.
          It  tries  to  explain  structures  that develop spontaneously in
          nature.

          Since order emerges out of chaos, and chaos is not well  defined,
          synergetics  employs  probabilities (to describe uncertainty) and
          information (to describe approximation). Entropy becomes  a  cen-
          tral concept, relating physics to information theory.

          Synergetics revolves around the concepts of: compression  of  the
          degrees of freedom of a complex system into dynamic patterns that
          can be expressed as a collective variable; behavioral  attractors
          of changing stabilities; and the appearance of new forms as
          nonequilibrium phase transitions.

          Systems at instability points are driven by a slaving  principle:
          long-lasting  quantities  can  enslave  short-lasting  quantities
          (i.e., they can act as order parameters).  Close to  instability,
          stable  motions  (or  "modes") are enslaved by unstable modes and
          can be ignored, thereby reducing the degrees of  freedom  of  the
          system.  The  macroscopic behavior of the system is determined by
          the unstable modes. The dynamic equations of the  system  reflect
          the  interplay  between  stochastic  forces ("chance") and deter-
          ministic forces ("necessity").

          Synergetics deals  with  self-organization,  how  collections  of
          parts  can  produce  structures. Synergetics therefore applies to
          systems driven far from equilibrium, where the  classic  concepts
          of  thermodynamics  are  no longer adequate. Order can arise from
          chaos and can be maintained by flows of energy/matter.

          Applications to Physics, Chemistry, Sociology and Biology (popula-
          tion  dynamics,  evolution,  morphogenesis)  are  discussed. Com-
          pletely different systems exhibit surprising  analogies  as  they
          pass through an instability.

          Biological systems are unique in that they exhibit an interplay
          between  structure  and function.  Function is embodied in struc-
          ture, function is latent in form.

          Synergetics belongs to  the  class  of  mathematical  disciplines
          (including Von Bertalanffy's general systems theory and
          Prigogine's nonequilibrium thermodynamics)  that  are  trying  to
          extend science to dynamic systems.

Hamblin Charles: IMPERATIVES (Basil Blackwell, 1987)

          The book describes Hamblin's action-state semantics  for  dealing
          with  imperatives. The theory provides for a time scale, distinc-
          tion between actions and states, physical and  mental  causation,
          agency and action-reduction, and intensionality.
           Hamilton Terrell: PROCESS AND PATTERN IN EVOLUTION (MacMillan,
          1967)

          Mutation, recombination, selection and isolation are the  driving
          forces  of  evolution.  Natural selection results in differential
          reproduction, i.e. in adaptation of populations, i.e.  in  evolu-
          tionary change. The phenotype of an organism is the result of the
          conflict between different selection forces.  The  individual  is
          the unit of natural selection, gene substitution is the unit pro-
          cess in adaptation, and the species is the major unit  of  evolu-
          tion.

          Hamilton thinks  that  evolution  is  accelerated  by  parasites.
          Organisms adopted sexual reproduction in order to cope with inva-
          sions of parasites. Life is a symbiotic process which necessi-
          tates competitors.
           Hampson  Peter   &   Morris   Peter:   UNDERSTANDING   COGNITION
          (Blackwell, 1995)

          An introduction to  the  main  topics  of  cognitive  psychology:
          memory, vision, language, attention. Three paradigms for studying
          cognition are discussed: artificial intelligence, cognitive  sci-
          ence and connectionism.
           Hanson Norwood: PATTERNS OF  DISCOVERY  (Cambridge  Univ  Press,
          1958)

          We see what we know. In order to see what another person sees  we
          first need to learn what he knows. As we learn new knowledge, the
          world as we perceive it changes.
           Harris MaryDee:  INTRODUCTION  TO  NATURAL  LANGUAGE  PROCESSING
          (Prentice Hall, 1985)

          An excellent textbook on how to process natural language  with  a
          computer.  It  starts  with  a  historic  review, from Chomsky to
          Fillmore's  case  grammar  and  generative  semantics.  The  main
          chapters  address  transformational  generative  grammar  (phrase
          marker, transformational rules, etc); transition networks (recur-
          sive  and  augmented);  case grammar; semantic networks; Schank's
          conceptual   dependency;   knowledge   representation   (scripts,
          frames).
           Hassoun Mohamad: FUNDAMENTALS OF ARTIFICIAL NEURAL NETWORKS (MIT
          Press, 1995)

          A textbook on neural networks that begins with linear threshold
          gates and expands from their computational properties to the
          most popular supervised and unsupervised learning rules. A
          neural network is defined as a parallel computational model
          comprised of densely interconnected adaptive processing units in
          which learning by example replaces programming. Neural learning
          is viewed
          mathematically as a search/approximation method. Extensive treat-
          ment is provided of adaptive multilayer networks.  The book makes
          an effort to provide a unified and logical summary of the field.

Hassoun Mohamad: ASSOCIATIVE NEURAL MEMORIES (Oxford, 1993)

          Articles by James Anderson, Pentti Kanerva and Amir Dembo, plus
          many contributions from Japanese researchers.

Haugeland John: ARTIFICIAL INTELLIGENCE (MIT Press, 1985)

          An introduction to the field, that begins  with  an  overview  of
          modern  science and explains the basic concepts for a broad audi-
          ence.
           Hayes-Roth Frederick: BUILDING EXPERT SYSTEMS  (Addison  Wesley,
          1983)

Haykin Simon: NEURAL NETWORKS (Macmillan, 1994)

          One of the most comprehensive and updated surveys of neural  net-
          work algorithms.

Hebb Donald: ESSAY ON MIND (Lawrence Erlbaum, 1980)

          Hebb's cell-assemblies theory holds that repeated exposure to a
          sensory stimulation will result in an assembly. Thought processes
          consist of an activity of such cell-assemblies. There is an inti-
          mate relationship between learning and perception: perception in
          the early stages is a consequence of a primitive learning pro-
          cess, but later learning becomes a function of perception and of
          the cognitive structures that originate from perception.

          Hebb also makes a few philosophical comments.   The  word  "cons-
          cious"  is  used both for denoting the state of a human being and
          for denoting a type of mental activity.  The idea of the self and
          the idea of the other overlap, and this explains the existence of
          empathy.

Hebb Donald: THE ORGANIZATION OF BEHAVIOR (John Wiley, 1949)

          Hebb's hypothesis is that the basis for neural development lies in
          a selective strengthening or inhibition of synapses between neu-
          rons. Synapses that get used are reinforced, while synapses that
          are not used are inhibited. This dual process molds the structure
          of the brain in a darwinian fashion.  Metabolic change  therefore
          occurs in the brain all the time.  These synaptic changes are the
          basis for all learning and memory.
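
          The rule that later modelers distilled from this hypothesis is
          usually written as a weight change proportional to the correla-
          tion of pre- and post-synaptic activity. A minimal sketch (the
          learning rate and the toy stimulus patterns are arbitrary):

          # Hebbian update: w[i][j] += eta * x[i] * x[j].
          # Repeatedly presented patterns strengthen the synapses that
          # connect co-active units.
          eta = 0.1
          patterns = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]   # toy stimuli
          n = len(patterns[0])
          w = [[0.0] * n for _ in range(n)]
          for x in patterns:
              for i in range(n):
                  for j in range(n):
                      if i != j:
                          w[i][j] += eta * x[i] * x[j]
          print(w)   # units 0 and 1, often co-active, end up strongly linked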

          Besides advancing the learning rule for synaptic modification,
          the book defines the notion of the brain as a connectionist dev-
          ice and the notion that, within the brain, regions of intercon-
          nected, self-reinforcing subnets of neurons (or "cell assemblies")
          form and persist for long periods of time.

          The brain is an evolutionary system:  genes  determine  only  its
          initial  configuration,  experience  molds the brain according to
          darwinian principles of selection.

          The selective strengthening of the synapses causes the brain to
          organize  itself into cell assemblies, each assembly representing
          a fragment of a concept, each assembly overlapping others so that
          concepts are naturally linked into larger concepts. Each resonat-
          ing cell assembly behaves like a rule: triggered by an event, it
          will fire for a while at a higher rate.

          Psychological conditioning is ubiquitous in animals because it is
          a property of individual neurons.

Hecht-Nielsen Robert: NEUROCOMPUTING (Addison-Wesley, 1989)

          A textbook on neural networks  (parallel,  distributed,  adaptive
          information  processing  systems),  from  a pragmatic, industrial
          viewpoint.  All the  most  popular  learning  laws  are  examined
          extensively.

          Hecht-Nielsen uses Kolmogorov's theorem to demonstrate that for
          every continuous function there exists a three-layer neural net
          which can compute its values.
           Heil John: PERCEPTION AND COGNITION (Univ of  California  Press,
          1983)

          Heil attempts to reconcile Gibson's theory  of  perception,  that
          perception  is largely a process of gathering of information from
          the environment, with a cognitive account of perception. Percep-
          tion  is a link between beliefs and events or objects. In the end
          perception is the acquisition of beliefs by way  of  the  senses.
          Concepts  are  simply  skills that enable the perceiving agent to
          acquire beliefs. Having  beliefs  does  not  necessarily  require
          language.   Having  beliefs does not necessarily require internal
          representations or computational capabilities.

          The class of perceptual objects for a perceiving agent is  deter-
          mined  by  1.  the  agent's sensory system (which is sensitive to
          some environmental stimuli and not others,  and  even  for  those
          stimuli  it is tuned to detect only some high-order features) and
          2. the agent's set of concepts, or perceptual beliefs.

          Heil has modified Dretske's theory by assuming, with  Kant,  that
          the transition from analog to digital is made possible by con-
          cepts that are innate in the agent.

Herbert Nick: ELEMENTAL MIND (Dutton, 1993)

          Herbert thinks that  consciousness  is  a  pervasive  process  in
          nature.  Mind  is  as  fundamental a component of the universe as
          elementary particles and forces.  Mind can be detected  by  three
          quantum  features:  randomness,  thinglessness  (objects  acquire
          attributes only once they are  observed)  and  interconnectedness
          (John  Bell's  discovery  that once two particles have interacted
          they remain connected). Herbert delves into psychological, parap-
          sychological  and even mystic phenomena that are supposed to cor-
          roborate his hypotheses.  Herbert  reviews  models  of  awareness
          based on quantum effects.
           Hertz John, Krogh Anders & Palmer Richard: INTRODUCTION  TO  THE
          THEORY OF NEURAL COMPUTATION (Addison-Wesley, 1990)

          A textbook on neural networks that starts with the Hopfield model
          and  then  covers  perceptrons,  multi-layer  networks, Boltzmann
          machines,  unsupervised learning (adaptive  resonance,  Kohonen).
          It provides a very modern exposition of the computational con-
          cepts.

Hewitt Carl: TOWARDS OPEN INFORMATION SCIENCE (MIT Press, 1990)

          Hewitt has developed a semantics of intelligent  communities.   A
          system is "open" when the outcome of its actions can be predicted
          and at any time it can absorb new information  from  the  outside
          world.   Distributed intelligent systems are a particular type of
          open systems that can interact.  The  dynamics  of  such  systems
          depends  on  the balance between two factors: self-reliance, i.e.
          the ability to act based only on local resources, and interdepen-
          dency,  the  need  to  find resources elsewhere.  That translates
          into the dualism of "committment" (the action that  a  system  is
          determined  to  perform)  and  "cooperation" (the set of mutually
          dependent roles among systems).

          The main property of such systems is their "deductive indecisive-
          ness": since many agents compete for the same resources in paral-
          lel, the state of the world at any  time  is  indeterminate.  The
          distributed system can only exhibit "global coherence".

Heyting Arend: INTUITIONISM (North Holland, 1956)

          A classic textbook  for  intuitionism.   Intuitionism  prescribes
          that all proofs of theorems must be constructive. Only con-
          structible objects are legitimate. The meaning of a statement
          resides  not in its truth conditions but in the means of proof or
          verification.

Hintikka Jaakko: KNOWLEDGE AND BELIEF (Cornell Univ Press, 1962)

          A very technical epistemic and doxastic theory. Hintikka sets  up
          a  formal  system  and  shows  its applications to the use of the
          verbs "know" and "believe".

Hintikka Jaakko: THE INTENTIONS OF INTENTIONALITY (Reidel, 1975)

          A collection of articles, including "Objects of knowledge", which
          defines the principles of his logic of attitudes.

          Propositional attitudes can be interpreted using possible  worlds
          and  an  "alternativeness" relation. Alternatives are relative to
          an attitude, an agent and the world in which the agent  has  that
          attitude.  The  sentence  "a  believes  that  p" can be therefore
          interpreted as "a believes that p is true in a world if and  only
          if p is true in all the alternatives to that world".

          Following Gibson's biological theory, Hintikka argues  that  per-
          ception  is  intentional  because  it is informational. Possible-
          world semantics is advanced as a promising candidate for  a  gen-
          eral theory of intentionality.

Hintikka Jaakko: THE GAME OF LANGUAGE (Reidel, 1983)

          Hintikka proposed his "game-theoretical semantics" as an alterna-
          tive to compositional semantics. The semantic interpretation of a
          sentence is conceived of as a game between two agents. The seman-
          tics searches truth through a process of falsification and verif-
          ication. The truth of an expression is determined through  a  set
          of  domain-dependent  rules  which  define  a  "game" between two
          agents: one agent is trying to validate the expression, the other
          one  is trying to refute it.  The expression is true if the truth
          agent  wins.    Unlike   Dummett's   verificationist   semantics,
          Hintikka's is still a "truth-conditional" semantics.

          The existence of a winning strategy  for  either  player  can  be
          expressed  in  the form of a higher-order sentence. This sentence
          asserts the existence of the  relevant  Skolem  functions.  Game-
          theoretical  semantics  is therefore a translation of first-order
          languages into higher-order languages. Game-theoretical semantics
          can  be easily extended to intensional logic as a successive step
          to possible-world semantics. The transition to natural  languages
          is  performed  by substituting proper names for entire quantifier
          phrases. In natural languages the application of  game  rules  is
          governed by second-order principles.
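
          The evaluation game can be sketched for a finite domain (an
          illustrative toy, not Hintikka's actual rule system; the domain
          and the formula are invented): the verifier chooses a witness
          for an existential quantifier, the falsifier chooses a challenge
          for a universal one, and the sentence is true exactly when the
          verifier has a winning strategy, which is what the any/all calls
          below encode (and what the Skolem functions make explicit).

          # Game-theoretic evaluation over a finite domain.
          DOMAIN = {1, 2, 3}

          def verify(formula, env=None):
              env = env or {}
              op = formula[0]
              if op == "atom":                 # ("atom", test-on-environment)
                  return formula[1](env)
              if op == "exists":               # verifier picks a witness
                  return any(verify(formula[2], {**env, formula[1]: d})
                             for d in DOMAIN)
              if op == "forall":               # falsifier picks a challenge
                  return all(verify(formula[2], {**env, formula[1]: d})
                             for d in DOMAIN)

          # "for every x there is a y with y > x" fails on this finite domain
          f = ("forall", "x",
               ("exists", "y", ("atom", lambda e: e["y"] > e["x"])))
          print(verify(f))   # False: no witness beats x = 3
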
           Hintikka Jaakko: LOGIC OF EPISTEMOLOGY (Kluwer Academics, 1989)
          A collection of articles on the  (limitations  of)  semantics  of
          possible worlds and epistemic logic (logic of knowledge).
           Hintikka Jaakko & Sandu Gabriel: ON THE METHODOLOGY OF  LINGUIS-
          TICS (Blackwell, 1990)

          Hintikka presents a case study for his "game-theoretic semantics"
          by applying it to the treatment of coreference.

Hintikka Jaakko: ASPECTS OF METAPHOR (Kluwer Academics, 1994)

          A collection of papers on metaphor, including  Bipin  Indurkhya's
          argument  for  an  interaction  theory of cognition and metaphor,
          Noel  Carroll's  presentation  of  visual  metaphors   and   Eric
          Steinhart's  model  for  generating  metaphors  in the context of
          semantic fields.
           Hinton Geoffrey & Anderson James: PARALLEL MODELS OF ASSOCIATIVE
          MEMORY (Lawrence Erlbaum, 1989)

          A selection of readings on parallel associative memory.

          D. Willshaw's "Holography, associative memory and inductive  gen-
          eralization" notes similarities between neural networks and holo-
          grams (for instance, that information is not localized but spread
          over the entire system).

Hirst William: MAKING OF COGNITIVE SCIENCE (Cambridge, 1988)

          A collection of essays in honor of George Miller.

           Hobbs Jerry & Moore Robert: FORMAL THEORIES OF  THE  COMMONSENSE
          WORLD (Ablex Publishing, 1985)

          A collection of seminal papers on commonsense reasoning,  includ-
          ing  the  official version of Pat Hayes' "The naive physics mani-
          festo".

          Pat Hayes' "The naive physics manifesto" defines "measure  space"
          for  each quantity (length, weight, date, temperature) as a space
          in which an ordering relationship holds. Measurement  spaces  are
          usually conceived as discrete spaces, even if the quantities they
          measure are in theory continuous. In common use things like birth
          dates,  temperatures,  distances,  heights and weights are always
          rounded.  Unlike McCarthy's situations, Hayes' "histories"  (con-
          nected  pieces  of  space-time) have a restricted spatial extent,
          thereby  avoiding  some  of  the  inconveniences  of  situations.
          Hayes'  logistic approach was very influential in formalizing and
          axiomatizing common sense knowledge.

          The elementary unit of measure for common sense is not the point,
          but  the  interval.  Which  interval  makes  sense depends on the
          domain: history is satisfied with  years  (and  sometimes  centu-
          ries), but birth dates require the day and track-and-field races
          need tenths of seconds.

          The relationships between  intervals  differ  from  relationships
          between points.  Two intervals can partially overlap. An interval
          can be open or  closed.   Points  require  Physics'  differential
          equations, but intervals can be handled with a logic of time that
          deals with their ordering relationship.

          The book includes de Kleer's "A qualitative physics based on con-
          fluences",  Robert  Moore's  "A  formal  theory  of knowledge and
          action" and James Allen's "A model of naive temporal reasoning".

          Allen's  representation  of  time  is  based  on  intervals,  not
          instants.  Intervals  may  be  related in several ways: one being
          before, after or equal to another.
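
          A sketch of the interval-based view (only a handful of Allen's
          thirteen relations are shown, and the intervals are arbitrary
          number pairs rather than a full temporal ontology):

          # Classify how two intervals (start, end) relate.
          def relation(a, b):
              if a[1] < b[0]:
                  return "before"
              if b[1] < a[0]:
                  return "after"
              if a == b:
                  return "equal"
              if a[1] == b[0]:
                  return "meets"
              if b[0] < a[0] and a[1] < b[1]:
                  return "during"
              return "overlaps (or another of Allen's relations)"

          print(relation((1, 3), (4, 6)))   # before
          print(relation((2, 4), (1, 6)))   # during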

Hobson J. Allan: THE DREAMING BRAIN (Basic, 1989)

          From the five  cardinal  features  of  dreams  (intense  emotion,
          illogical  content,  sensory  impression,  uncritical acceptance,
          difficulty of recalling) and their similarities  to  mental  ill-
          ness,  Hobson  derives  a  theory of dreams as a theory of mental
          illness. Hobson thinks that dreams need not be interpreted: their
          meaning is  transparent.

          Hobson builds a model of the  brain-mind  which  specifies  which
          brain  cells and molecules trigger REM sleep and dreaming and the
          dynamics  of  their  interaction.    His   "activation-synthesis"
          hypothesis  (periodic  activation  of the brain by the brain stem
          and synthesis provided by the forebrain) assumes that dreams  are
          meaningful:  the mind makes a synthetic effort to provide meaning
          to the signals that are  generated  internally  (during  a  dream
          memory  is  even "hypermnesic", i.e. is intensified).  Wishes are
          not the cause of the dreaming process,  although,  once  dreaming
          has been started by the brain stem, wishes may be incorporated in
          the dream.  Dreams are generated by internal signals.

          The function of dreams is speculated to be that of deriving  cru-
          cial action patterns from the genetic program of the individual.

          Hobson therefore interprets dreams in the realm of  neurophysiol-
          ogy.
           Hofstadter  Douglas:  FLUID  CONCEPTS  AND  CREATIVE   ANALOGIES
          (Basic, 1995)

          With this book Hofstadter goes as far as to propose a cognitive
          model for the mind, or at least to reject existing cognitive
          models. The book also describes Copycat, the software that was
          built to implement its analogy-making strategies.

Hofstadter Douglas & Dennett Daniel: THE MIND'S I (Bantam, 1982)

          A collection of articles from philosophers, mathematicians and
          novelists, surrounded by Hofstadter's own reflections on the
          themes of mind and consciousness.

Hofstadter Douglas: GODEL ESCHER BACH (Vintage, 1980)

          A bold synthesis of mathematics, art and music, and a  collection
          of intriguing thought experiments with recursion, self-reference,
          decision theory, artificial intelligence and  genetics  presented
          in a very elegant and creative manner.

          Consciousness could be caused by "strange loops", an  interaction
          between levels in which the top level and the bottom level influ-
          ence each other.

Holland John et al: INDUCTION (MIT Press, 1986)

          A study of induction (perceived as  "how  knowledge  is  modified
          through its use") built around a rule-based framework.  Induction
          is directed by problem-solving activity  and  based  on  feedback
          about the value of its predictions.  Learned categories are iden-
          tified by clusters of rules.  Induction involves two  fundamental
          processes: a process to revise parameters of existing rules and a
          process to generate new  rules.  Both  processes  are  guided  by
          knowledge about the domain.

          Classifier systems are message-passing variants of production
          systems. A classifier system learns syntactic rules (or
          "classifiers") that guide its performance in the environment. A
          classifier system consists of three main components: a production
          system, a credit-assignment system (such as the "bucket brigade")
          and a genetic algorithm to generate new rules.
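
          A minimal sketch of this loop (not code from the book), assuming
          classifiers are ternary strings over {0,1,#} and the environment
          supplies a reward function:

          # Minimal sketch (hypothetical encoding): a classifier-system
          # cycle with a production system, simplified bucket-brigade
          # credit assignment, and a genetic-algorithm step.
          import random

          class Classifier:
              def __init__(self, condition, action, strength=10.0):
                  self.condition = condition  # string over {0,1,#}; '#' is a wildcard
                  self.action = action        # string over {0,1}
                  self.strength = strength    # adjusted by credit assignment

              def matches(self, message):
                  return all(c in ('#', m) for c, m in zip(self.condition, message))

          def step(rules, message, reward, bid_ratio=0.1):
              """One cycle: match, bid, act, and assign credit."""
              matching = [r for r in rules if r.matches(message)]
              if not matching:
                  return message
              winner = max(matching, key=lambda r: r.strength * random.random())
              winner.strength -= bid_ratio * winner.strength  # pays its bid
              winner.strength += reward(winner.action)        # payoff from environment
              return winner.action

          def breed(rules):
              """Genetic algorithm: cross two strong rules, replace the weakest."""
              a, b = sorted(rules, key=lambda r: r.strength)[-2:]
              cut = random.randrange(1, len(a.condition))
              child = Classifier(a.condition[:cut] + b.condition[cut:], a.action)
              weakest = min(rules, key=lambda r: r.strength)
              rules[rules.index(weakest)] = child

          In a full bucket brigade the bid would be passed back to the rule
          that posted the matched message, chaining credit through the
          sequence of rules that led to the reward.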

          Analogical reasoning is considered a special case of induction.
           Holland John Henry: ADAPTATION IN NATURAL AND ARTIFICIAL SYSTEMS
          (MIT Press, 1992)

          Revised edition of the  seminal  1975  book  that  generated  the
          momentum  for  the  study of complex adaptive systems and genetic
          algorithms.

          Holland had the intuition that the best way to solve a problem is
          to mimic what biological organisms do to solve their problem of
          survival: to evolve (through natural selection) and to reproduce
          (through genetic recombination).

          Genetic algorithms recursively apply a series of biologically-
          inspired operators to a population of potential solutions to a
          given problem. Each application of the operators generates a new
          population of solutions that should approximate the best solution
          better and better.

          What evolves is not the single individual but the population as a
          whole.

          Genetic algorithms are actually a further  refinement  of  search
          methods  within  problem  spaces.  Genetic algorithms improve the
          search by incorporating the criterion of "competition".

          A fitness function computes how "fit" an individual is. The
          selection process starts from a random population of individuals.
          For each individual of the population the fitness function
          provides a numeric value for how far the solution is from the
          ideal solution. The probability of selection for that individual
          is made proportional to its "fitness". On the basis of such
          fitness values a subset of the population is selected. This
          subset is allowed to reproduce itself through the biologically-
          inspired operators of crossover, mutation and inversion.

          Each individual (each point in the space of solutions) is
          represented as a string of symbols. Each genetic operator
          performs an operation on the sequence or content of the symbols.
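
          As an illustration (not Holland's own code), here is a minimal
          genetic-algorithm sketch over bit strings with fitness-
          proportional selection, single-point crossover and mutation; the
          toy fitness function (counting 1-bits) is a hypothetical stand-in
          for a real objective:

          # Minimal sketch (toy problem): a genetic algorithm over bit
          # strings with fitness-proportional selection, crossover and
          # mutation.
          import random

          def fitness(individual):
              """Toy objective: the number of 1-bits in the string."""
              return sum(individual)

          def select(population):
              """Fitness-proportional ("roulette wheel") selection."""
              weights = [fitness(ind) + 1e-9 for ind in population]
              return random.choices(population, weights=weights, k=1)[0]

          def crossover(a, b):
              """Single-point crossover of two parent strings."""
              cut = random.randrange(1, len(a))
              return a[:cut] + b[cut:]

          def mutate(individual, rate=0.01):
              """Flip each bit with a small probability."""
              return [bit ^ 1 if random.random() < rate else bit
                      for bit in individual]

          def evolve(pop_size=50, length=20, generations=100):
              population = [[random.randint(0, 1) for _ in range(length)]
                            for _ in range(pop_size)]
              for _ in range(generations):
                  population = [mutate(crossover(select(population),
                                                 select(population)))
                                for _ in range(pop_size)]
              return max(population, key=fitness)

          print(evolve())   # tends toward the all-ones string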

          Holland's classifier system (which learns new rules to optimize
          its performance) was the first practical application of genetic
          algorithms. Its emphasis on competition and cooperation, on
          feedback and reinforcement, rather than on pre-programmed rules,
          set it apart from knowledge-based models of intelligence.

Holland John: HIDDEN ORDER (Addison Wesley, 1995)

          Holland focuses on "complex adaptive systems". Such  systems  are
          governed  by  principles of anticipation and feedback. Based on a
          model of the world, an adaptive system anticipates what is  going
          to  happen.  Models  are  improved  based  on  feedback  from the
          environment.

          Complex adaptive systems are ubiquitous in nature. They include
          brains, ecosystems and even economies. They share a number of
          features: each of these systems is a network of agents acting in
          parallel and interacting; the behavior of the system arises from
          cooperation and competition among its agents; each of these
          systems has many levels of organization, with agents at each
          level serving as building blocks for agents at a higher level;
          such systems are capable of rearranging their structure based on
          their experience; they are capable of anticipating the future by
          means of innate models of the world; and new opportunities for
          new types of agents are continuously being created within the
          system.

          All complex adaptive systems share four properties (aggregation,
          nonlinearity, flows, diversity) and three mechanisms
          (categorization by tagging, anticipation through internal models,
          decomposition into building blocks).

          Holland also reviews his own framework for representing  adaptive
          agents,  consisting  of  a  performance  system  (to describe the
          system's skills), a credit-assignment algorithm  (to  reward  the
          fittest  rules) and a rule-discovery algorithm (to generate plau-
          sible hypotheses). His new visual model is called  ECHO,  and  it
          "echoes" the creation of complex structures by natural selection.
          ECHO operates on a network of sites,  each  containing  resources
          and  agents.   Each  agent's  structure  is  defined  in terms of
          strings of resources, each string being a chromosome. Each  chro-
          mosome contains three tags (offense, defense and adhesion), three
          conditions (exchange, mating and  replication),  and  a  list  of
          resource transformations. Tags and conditions determine what hap-
          pens when two agents interact.
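
          As an illustration (not Holland's implementation), here is a
          minimal data-structure sketch of an Echo-style agent as described
          above; the field names follow the text, while the matching rule
          is a hypothetical stand-in:

          # Minimal sketch (hypothetical matching rule): the structure of
          # an Echo-style agent -- a chromosome of resource strings
          # carrying tags, conditions and resource transformations.
          from dataclasses import dataclass, field

          @dataclass
          class Chromosome:
              offense_tag: str            # matched against others' defense tags
              defense_tag: str
              adhesion_tag: str           # governs whether agents stick together
              exchange_condition: str     # when to trade resources
              mating_condition: str       # when to recombine chromosomes
              replication_condition: str  # when to copy itself
              transformations: list = field(default_factory=list)

          @dataclass
          class Agent:
              chromosome: Chromosome
              reservoir: dict = field(default_factory=dict)  # collected resources

          def can_interact(a: Agent, b: Agent) -> bool:
              """Toy rule: interact when one agent's offense tag is a prefix
              of the other's defense tag (a stand-in for Echo's matching)."""
              return (b.chromosome.defense_tag.startswith(a.chromosome.offense_tag)
                      or a.chromosome.defense_tag.startswith(b.chromosome.offense_tag))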
           Humphrey Nicholas: CONSCIOUSNESS REGAINED  (Oxford  Univ  Press,
          1983)

          Humphrey thinks that the function of consciousness is that of
          social interaction with other consciousnesses. Consciousness
          gives every human a privileged picture of her own self as a model
          for what it is like to be another human.

          Consciousness provides humans with an explanatory model of  their
          own  behavior.   Psychological skills are a biologically adaptive
          trait in human beings: the best psychologists are the  best  sur-
          vivors.  The  best  psychologists  are  those who have the widest
          range of personal experience.
           Humphrey Nicholas: A HISTORY OF  THE  MIND  (Simon  &  Schuster,
          1993)

          A study of the evolution of consciousness from simple  matter  to
          thought, emotions and self-consciousness.

          Humphrey claims that to be conscious is to  feel  sensations,  as
          opposed  to  perceptions. Sensations are to be found at the boun-
          dary between the organism and the world and at  the  boundary  of
          past  and  future.  One  "senses"  a  circle of light hitting the
          retina; one "perceives" the sun in the sky. One can  have  sensa-
          tions about perceptions and perceptions about sensations. Animals
          have developed two ways of representing the  interaction  between
          the  body  and  the  world:  affect-laden  sensations and affect-
          neutral perceptions.

          Sensation and perception  are  separate  and  parallel  forms  of
          representation.    Consciousness  is  about  sensation.  Humphrey
          develops a theory of sensations, feelings and actions.  The  last
          stage  of  the  evolutionary  journey is a "sensory reverberating
          feedback loop" within the brain. Then consciousness arises.
           Hutchinson George Evelyn: AN INTRODUCTION TO POPULATION  ECOLOGY
          (Yale University Press, 1978)

          Hutchinson reviews the field of population  dynamics,  introduces
          formal definitions for quantities such as "ecological niche" ("an
          N-dimensional hypervolume within which  environmental  conditions
          at every point permit an organism to live") and derives nonlinear
          analyses of populations.

          The whole theory is based on two postulates: the principle of
          biogenesis (every living organism has originated from at least
          one parent of like kind to itself, "omne vivum ex vivo"); and the
          postulate of an upper limit (there is an upper limit to the
          number of beings that can utilize a given finite space). They are
          both reflected in Verhulst's "logistic", a mathematical model of
          a continuously growing population with an upper limit. A number
          of variants of the original logistic exist, mainly to take into
          account factors such as competition and coexistence.
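
          As an illustration (not from the book), a minimal sketch of
          Verhulst's logistic growth, dN/dt = r*N*(1 - N/K), where r is the
          intrinsic growth rate and K the upper limit (carrying capacity):

          # Minimal sketch: logistic growth integrated with an Euler step;
          # the population N approaches the upper limit K.
          def logistic_growth(n0, r, k, dt=0.1, steps=1000):
              n = n0
              trajectory = [n]
              for _ in range(steps):
                  n += r * n * (1 - n / k) * dt
                  trajectory.append(n)
              return trajectory

          # Example: a population of 10, growth rate 0.5, upper limit 1000.
          print(round(logistic_growth(10, 0.5, 1000)[-1]))   # close to 1000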

          Any self-sustaining biological community must include one
          population of photosynthetic plants at its lowest level.
          Herbivores feed on this level and form a new level, on which
          primary carnivores feed and form a new level, on which secondary
          carnivores feed, etc. Each level is smaller (not only in number
          but also in biomass) than the lower one, thereby originating a
          pyramidal structure.