Oatley Keith & Jenkins Jennifer: UNDERSTANDING EMOTIONS (Blackwell, 1996)

          Drawing from a  multitude  of  sources  (psychology,  philosophy,
          anthropology,  biology,  neurophysiology,  sociology),  the  book
          highlights the nature, development,  function  and  structure  of
          emotions.

Ornstein Robert: THE MIND AND THE BRAIN (Martinus Nijhoff, 1972)

          A critique of the identity theory  of  the  mind  and  the  brain
          (J.J.C. Smart and U.T. Place in particular) and a presentation of
          a "multi-aspect" theory of the mind: the mental has an  experien-
          tial  (the  experience  of  feeling  a  feeling),  a  neural (the
          corresponding brain processes), a behavioral (the related action)
          and a verbal (the related utterance) aspect.

Ornstein Robert: MULTIMIND (Houghton Mifflin, 1986)

          The human mind is viewed as  many  small  minds,  each  operating
          independently and specialized in one task. The body contains many
          centres of control. The lower level ones  developed  millions  of
          years  ago  for  basic survival activities, and humans share them
          with other animals.  The most recent ones (e.g., the cortex) deal
          with decisions, language, reasoning. The brain cannot be examined
          as a single whole.

          The goal of the mind is to simplify, to reduce the complexity  of
          the  external  world  to  what is useful for the body.  Minds are
          attracted by four types of events: recent events, unusual events,
          relevant  events,  events  that  can be compared to other events.
          When information is meaningful, it gets organized  (i.e.  simpli-
          fied).

Ornstein Robert: EVOLUTION OF CONSCIOUSNESS (Prentice Hall, 1991)

          The mind is an adaptive system that has been shaped by the world.
          It  is the way it is because the world is the way it is. Ornstein
          retraces the (presumed) evolutionary steps of the  bodily  organs
          that  now  make  up  the  mind.  Then  he  retraces how the brain
          develops, according to neural darwinism.

          Human minds are initially endowed  with  many  possible  ways  of
          evolving  (e.g.,  with  the capability for learning many possible
          languages), but only some are pursued and the  other  skills  are
          lost during growth. The mind could potentially adapt to many dif-
          ferent environments, but will actually adapt only to the ones  it
          is exposed to.

          The large cortex developed into specialized cerebral
          hemispheres, which function as autonomous centres of action.
          Different regions of the mind behave independently of  conscious-
          ness  (sometimes  consciousness  realizes  what  has been decided
          after it has already happened).

          The mind understands the world  through  two  processes,  one  of
          information gathering and one of interpretation. The same process
          of interpretation is used for memories, dreams  and  new  experi-
          ences.

          The self is only a part of the mind,  and  not  always  connected
          with  the  rest of it. The self shares the mind with other minds.
          Minds take hold of consciousness depending on the  needs  of  the
          moment. Each mind tends to stay in place for as long as possible,
          with its own memories and goals. The self rarely notices what  is
          going on. Continuity of the  mind is an illusion.  We are not the
          same person all the time.   Different  selves  within  the  brain
          fight for control over the next action.

          The mind is now capable of "conscious evolution" and this  should
          be used for ethical purposes.

Ortony Andrew: METAPHOR AND THOUGHT (Cambridge Univ Press, 1979)

A collection of studies on metaphor.

          Michael Reddy's "conduit metaphor" deals with the idea  that  the
          mind  contains  thoughts  that can be treated like objects. Reddy
          thinks it is wrong. The transfer of thought is not a  determinis-
          tic,  mechanical  process. It is an interactive, cooperative pro-
          cess.

          Searle distinguishes metaphor from indirect speech acts: in
          the latter the speaker intends to convey both the sentence
          meaning and the indirect meaning, whereas in the former the
          speaker only intends to convey the indirect meaning. Searle
          also distinguishes the sentence meaning from the speaker
          meaning: interpreting a metaphor has to do with figuring out
          how to relate them.

Ortony Andrew, Clore Gerald & Collins Allan: THE COGNITIVE STRUCTURE OF EMOTIONS (Cambridge Univ Press, 1988)

          Emotion is not defined anywhere, but it is implicitly assumed  to
          comply  with  the  traditional  view of consisting of arousal and
          appraisal, i.e.  of being triggered by eliciting conditions.  The
          authors  group  emotions  in  emotion types, and emotion types in
          emotion groups. Emotions in an emotion type are elicited  by  the
          same  situation  and  their  intensity  is determined by the same
          variables. Emotion types in the same group have eliciting  condi-
          tions that are structurally related.

          An ontology of the world is provided in terms of  events,  agents
          and  objects.  Emotions  are valenced reactions to either (conse-
          quences of) events, (actions of) agents or (aspects of) objects.

          There are no basic emotions and no compositional rules  to  build
          complex emotions.

          The intensity of an emotion depends on variables that  depend  on
          the  appraisal  of  the stimulus. The appraisal system is in turn
          goal-driven. Factors that affect  intensity  include:  proximity,
          unexpectedness, arousal.

Osherson Daniel: AN INVITATION TO COGNITIVE SCIENCE (MIT Press, 1995)

          Second edition of  the  three-book  introduction  to  the  field:
          language, visual cognition and thinking. Each chapter is an essay
          by an expert.

Oyama Susan: ONTOGENY OF INFORMATION (Cambridge University Press, 1985)

          Oyama offers an alternative to the nature-nurture dualism
          (inherited vs acquired characters).

          The western tradition assumes that form preexists its  appearance
          in  bodies  and minds (e.g., as genetic code). Information is the
          modern source of form: ubiquitous in the environment as  well  as
          in  the  genome.  Development  is  traditionally explained as the
          parallel  process  of  translating  information  in  the   genome
          (nature)  and  acquiring  information  from the environment (nur-
          ture). This view has deep cultural roots,  but  is  nothing  more
          than myth.  Information regulates development.

          Oyama's viewpoint is that information (e.g., from the genome)  is
          itself  generated,  it  develops.  Information itself undergoes a
          developmental process, and therefore ontogenesis should apply  to
          information.

          In order to analyze what developmental information in the chromo-
          somes  means, Oyama focuses on three phenomena common to all life
          processes, and manifested in the genes,  i.e.  constancy,  change
          and variability.

          Oyama argues that organismic form cannot be transmitted in  genes
          or  contained  in  the  environment, and cannot be partitioned by
          degrees of coding: it is constructed in developmental  processes.
          Information  in  the genes and information in the environment are
          not biologically relevant until they  participate  in  phenotypic
          processes.   Form  emerges  through  a history of interactions at
          many  hierarchical  levels.   Constancy  across  generations  and
          within  a  population  is  due  to  interaction between genes and
          environment. Chromosomal form is but one of  the  "interactants".
          There is no vehicle of constancy.  Constancy is constructed.

          Change is  a  natural  consequence  of  matter  being  inherently
          reactive.  There  is  no  need  for  an external force to explain
          change. Ontogenetic change is the product of  interacting  influ-
          ences, some inside the organism's borders and some outside.

          An organism inherits its environment as much as it inherits
          its genotype. It inherits not only a competence, but also the
          stimuli that make that competence significant.

          Form is the result of interactive construction, not  the  outcome
          of a preexisting plan.

          The distinction between  inherited  and  acquired  characters  is
          replaced by the notion of developmental systems, which allow for
          multiple development pathways.  Control  of  development  and  of
          behavior emerges through interaction.

          Oyama carries on a thorough critique of the gene as a  "ghost  in
          the  biological  machine",  as the set of instructions for living
          beings.

Paivio Allan: IMAGERY AND VERBAL PROCESSES (Holt, Rinehart and Winston, 1971)

          Paivio was the first to posit that the mind  must  use  two  dif-
          ferent  types  of  representation, a verbal one and a visual one,
          corresponding to the brain's two main perceptive systems.

Parfit Derek: REASONS AND PERSONS (Oxford Univ Press, 1985)

          Parfit deals with matters  of  identity  and  consciousness.  His
          famous  thought  experiment  asks what happens to a person who is
          destroyed by a scanner in London and rebuilt cell by cell in  New
          York by a replicator that has received infinitely detailed infor-
          mation from the scanner about the  state  of  each  single  cell,
          including  all of the person's memories.  Is the person still the
          same person? Or did the person die in London? What makes a
          person that person: bodily or psychological continuity? If a
          person's matter is replaced cell by cell with equivalent cells is
          the  person  still  the same person?  If a person's psychological
          state (memory, beliefs, emotions and everything) is replaced with
          an  equivalent  psychological  state is the person still the same
          person?  The question eventually asks what is "a life": is  it  a
          continuum  of bodily states, whereby one grows from a child to an
          adult, or is it a continuum of psychological states? Or both?  Or
          none?   Parfit  thinks  that  psychological  continuity  is  what
          matters.

Parkin Alan: EXPLORATIONS IN COGNITIVE NEUROPSYCHOLOGY (Blackwell, 1996)

          A survey of the field, from  the  split  brain  to  connectionist
          models.

Pawlak Zdzislaw: ROUGH SETS (Kluwer Academic, 1991)

          Rough sets are sets that are defined in terms of lower and
          upper approximations. Rough sets are useful in classifying
          imprecise, uncertain or incomplete knowledge. The
          approximation space is a classification of the domain into
          disjoint categories. The lower approximation is a description
          of the objects that are known with certainty to belong to the
          set. The upper approximation is a description of the objects
          that possibly belong to the set.
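
          The two approximations can be sketched in a few lines of
          code. This is a minimal illustration, not Pawlak's notation;
          the universe, the colour attribute and the target set are all
          invented for the example.

          ```python
          # A minimal sketch of Pawlak's lower and upper approximations.
          # The universe, the colour attribute and the target set are
          # all invented for illustration.

          def approximations(universe, target, key):
              # Partition the universe into indiscernibility classes:
              # objects with the same attribute value cannot be told apart.
              classes = {}
              for x in universe:
                  classes.setdefault(key(x), set()).add(x)
              lower, upper = set(), set()
              for cls in classes.values():
                  if cls <= target:        # class lies entirely inside the set
                      lower |= cls
                  if cls & target:         # class overlaps the set
                      upper |= cls
              return lower, upper

          universe = {1, 2, 3, 4, 5, 6}
          colour = {1: "red", 2: "red", 3: "blue", 4: "blue", 5: "green", 6: "green"}
          target = {1, 2, 3}               # the imprecise concept to describe

          lower, upper = approximations(universe, target, colour.get)
          print(sorted(lower))             # [1, 2]: certainly in the set
          print(sorted(upper))             # [1, 2, 3, 4]: possibly in the set
          ```

          Object 3 falls in the upper but not the lower approximation
          because it is indiscernible from object 4, which lies outside
          the target set.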

Peacocke Christopher: A STUDY OF CONCEPTS (MIT Press, 1992)

The book details Peacocke's own theory of concepts.

Peak David & Frame Michael: CHAOS UNDER CONTROL (W.H.Freeman, 1994)

          A textbook for beginners on complexity, with a good  introduction
          to fractals.

Pearl Judea: HEURISTICS (Addison Wesley, 1984)

          A well-organized and comprehensive technical textbook on  heuris-
          tic  methods for problem solving: hill climbing, best first algo-
          rithms, and so forth. The second part is an analysis  of  perfor-
          mance, the third part is devoted to game playing.

Pearl Judea: PROBABILISTIC REASONING IN INTELLIGENT SYSTEMS (Morgan Kaufmann, 1988)

          A property of information is that it is relevant to some
          other piece of information. Relevance's dual property is
          dependence: if a piece of information is relevant to another,
          then the latter is dependent on the former. Irrelevance can
          be formalized as "conditional independence", and Pearl
          provides an axiomatic formulation of "conditional
          independence".

          Pearl's causal networks (a graphical notation for conditional
          dependence) are directed acyclic graphs in which nodes
          represent variables (which can take any value) and arcs
          express dependencies among them. By using Bayes' inversion
          formula, the conditional probabilities of the nodes of the
          graph can be computed as information becomes available.

          A causal net is isotropic, i.e. it can be used to perform  infer-
          ences  in both ways, top-down (to "predict" an event) and bottom-
          up (to "diagnose" an event).
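
          Both directions of inference can be shown on a toy two-node
          network; the network and all its probabilities below are
          invented for illustration, not taken from Pearl's book.

          ```python
          # A toy two-node causal network Disease -> Symptom, showing both
          # directions of inference. All probabilities are invented for
          # illustration.

          p_disease = 0.01                 # prior P(D)
          p_s_given_d = 0.9                # P(S | D)
          p_s_given_not_d = 0.05           # P(S | not D)

          # Top-down ("predict"): marginal probability of the symptom.
          p_symptom = p_s_given_d * p_disease + p_s_given_not_d * (1 - p_disease)

          # Bottom-up ("diagnose"): Bayes' inversion formula P(D | S).
          p_d_given_s = p_s_given_d * p_disease / p_symptom

          print(round(p_symptom, 4))       # 0.0585
          print(round(p_d_given_s, 4))     # 0.1538
          ```

          The same two conditional probability tables thus serve both
          to predict the symptom from the disease and to diagnose the
          disease from the symptom.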

          Pearl thinks that experience is transformed into causal
          models so that the mind can make decisions.

          Pearl's belief function measures how close a  proposition  is  to
          necessity,  as  opposed to classic probability which measures how
          close a proposition is to truth.

Peirce Charles: COLLECTED PAPERS (Harvard Univ Press, 1931)

          Peirce devised a graphical notation to express logical
          relationships, as an alternative to Peano's linear notation.
          Peirce then defined
          a set of operations to  manipulate  such  graphs  which  conserve
          truth  (equivalent  to  inference  rules).  Peirce's "existential
          graphs" can represent first-order  predicate  logic  as  well  as
          modal logic (through colored contexts) and higher-order logics.

          The theory of signs was originally developed  by  Charles  Peirce
          and then revised by C.W. Morris.

          A sign is something that stands for something else. Syntax is the
          study  of the relations that signs bear to other signs. Semantics
          is the study of the relations that signs bear to what they  stand
          for. Pragmatics is the study of the relations that signs bear
          to their users. Icons are signs that work
          by  virtue  of  a  relation of resemblance to what they stand for
          (e.g., photographs).  Indices are signs that work by virtue of  a
          relation of cause or effect with what they stand for (e.g., dark
          clouds suggest rain). Symbols are signs that work by virtue of  a
          conventional  association to what they stand for (numbers, nouns,
          etc). For all three categories  of  signs,  types  are  kinds  of
          things, tokens are their instances.

Penfield Wilder: MYSTERY OF THE MIND (Princeton Univ Press, 1975)

          Penfield showed that memory is distributed in the brain.

Penrose Roger: THE EMPEROR'S NEW MIND (Oxford Univ Press, 1989)

          Penrose reviews the historical debate for and against
          Artificial Intelligence, from Turing's test to Searle's
          Chinese room experiment, and provides economical and clear
          explanations of the mathematical tools involved, from Turing
          machines to lambda calculus.

          Following John Lucas, Penrose argues that Godel's theorem
          establishes the preeminence of the human mind over the
          machine: some mathematical operations are not computable, yet
          the human mind can deal with them (at least to prove that
          they are not computable). Therefore Artificial Intelligence
          is impossible.

          Then he surveys scientific theories, from Euclid's geometry
          to Einstein's relativity. A long introduction to quantum
          theory brings Penrose to argue its inadequacy to deal with
          macroscopic phenomena.

          "Central to our feelings of awareness is  the  sensation  of  the
          progression  of  time".  Penrose  looks for the origin of time in
          cosmological models and in the second law of thermodynamics.

          After a short introduction to neuroscience,  Penrose  hints  that
          consciousness could be a quantum phenomenon.

Penrose Roger: SHADOWS OF THE MIND (Oxford University Press, 1994)

          Another attack on artificial intelligence, which recycles many of
          the  points  of  his previous book (Godel's theorem as proof that
          Artificial Intelligence is impossible, quantum mechanics  as  the
          foundation for a theory of consciousness).

          Penrose thinks that there is a  separate  mental  world  that  is
          grounded  in  the  physical world. There is also another separate
          world, that of abstract objects.

          Neurons are too big to account for consciousness. Inside  neurons
          there is a cytoskeleton, the structure that holds cells together,
          whose microtubules control the function of  synapses.  Conscious-
          ness is a manifestation of the quantum cytoskeletal state and its
          interplay between quantum and classical levels of activity.

Pereira Fernando & Grosz Barbara: NATURAL LANGUAGE PROCESSING (MIT Press, 1994)

          A collection of articles from the journal Artificial
          Intelligence.

Piaget Jean: EQUILIBRATION OF COGNITIVE STRUCTURES (University of Chicago Press, 1985)

          Piaget's theory of knowledge (or genetic epistemology):
          knowledge is constructed by each individual through her
          interaction with the environment; knowledge is a developing
          relationship between the individual and her environment.
          Knowledge is not simply absorbed, but it is also organized,
          for the purpose of adaptation. Knowledge develops through a
          process of self-organization
          based on feedback from the environment.  The goal is to  reach  a
          sequence  of  progressive states of equilibrium through a process
          of  "equilibration".  Development  is  viewed  as  a  progressive
          equilibration  leading from a lesser to a higher state of equili-
          brium, i.e.  as a progressive increase in equilibrium.  The  pas-
          sage  from one equilibrium state to the next is driven by matura-
          tion (physiological growth of hereditary structures),  experience
          and social transmission, besides equilibration.

          At different ages (developmental stages) the child exhibits
          different knowledge structures. The sensory-motor period
          includes: a stage of hereditary reflexes, a stage of acquired
          adaptations (one to four months), a stage of circular
          reactions (four to eight months), a stage of intentional
          behavior (eight to twelve months), a stage of directed
          groping (twelve to eighteen months), and a stage of symbolic
          representation (eighteen to twenty-four months). Through
          these stages the individual develops from a biological
          organism to a social one.

          At this point the child is beginning to symbolize.  Thoughts  are
          actions  that  take  place  in  the  mind,  and Piaget calls them
          "operations". At this point cognitive  development  begins.  From
          concrete  operations  (seven-twelve  years) the child moves on to
          formal operations (twelve-fifteen years) and  eventually  to  the
          hypothetico-deductive  thinking  of  adults.  The construction of
          stages proceeds according to a law of temporal displacement, i.e.
          it is a nonlinear process of continuous reconstruction of the
          constructions of earlier stages (relearning) at a higher level.
          This reconstruction provokes reflective abstraction, or reorgani-
          zation of knowledge at a higher level.

          Cognitive structures are forms of equilibrium between the indivi-
          dual  and  the environment. At each stage of development the pro-
          cess of equilibration is repeated. At each stage  of  equilibrium
          there  is  an  urge  toward adaptation based on feedback from the
          environment. At every cycle the structures  of  thought  ("struc-
          tures d'ensemble") become more sophisticated.  Progress is driven
          by the need to find solutions to current problems and to
          anticipate possible ones.

          In the process of cognitive development a number of events occur:
          decentration  (the  individual becomes less and less egocentric),
          internalization (of action),  temporal  displacement,  reflective
          abstraction,  and  awareness.  Following Claparede, Piaget thinks
          that as long as the individual can cope with the environment  she
          does not develop self-consciousness.

Pinker Steven: THE LANGUAGE INSTINCT (William Morrow, 1994)

          A comprehensive study on how children learn language.   First  of
          all  Pinker  demolishes a number of common-sense beliefs, such as
          that children learn language by  imitating  adults.  Then  Pinker
          explains  his  findings  by positing that children are "wired" to
          pay attention to certain kinds of phrases  and  to  perform  some
          operations with words.  All languages share common features, sug-
          gesting that natural selection favored certain  syntactic  struc-
          tures.   Pinker  is  an  orthodox  disciple  of  Chomsky and even
          believes in the thesis that the human mind is made  of  "modules"
          (fifteen  of  them),  organs  that account for instincts that all
          humans share. Pinker also discusses language's place in darwinian
          evolution.

Polya George: MATHEMATICS AND PLAUSIBLE REASONING (Princeton Univ Press, 1954)

          A multi-volume survey of plausible reasoning.  Plausible  reason-
          ing  is what supports and yields human knowledge of the world, as
          opposed to demonstrative reasoning, which is incapable of  yield-
          ing new knowledge.

          Volume one is devoted to induction and analogy.

Popper Karl: THE LOGIC OF SCIENTIFIC DISCOVERY (Hutchinson, 1959)

          Popper challenged logical positivism's inductivist model of
          theory formation. Criticizing any inductive form of reasoning
          that attempts to derive a general proposition from specific
          instances, Popper proposes to focus on demonstrating that
          hypotheses are false. The scientific process should be one of
          conjectures and refutations.

Popper Karl & Eccles John: THE SELF AND ITS BRAIN (Springer-Verlag, 1977)

          Popper's  interactionism  is  "tri-alist":  abstract  objects  of
          mathematics, scientific theories and art products are examples of
          activities that belong to neither the mental world nor the physi-
          cal  world.  Mind plays the role of intermediary between the ima-
          ginary world (World 3) and the real world (World 1). The mind
          is basically an operator that relates abstract objects and
          physical ones.

          Since the mental world and the physical world are distinct,  men-
          tal states cannot be physical states.

          "Downward causation" operates from World 3 to World 1.

          Natural selection does not apply to World 3 and the  mind  as  it
          does  to  World  1.   In  World 3 and the mind the application of
          trial and error does not entail the violent elimination  of  some
          of  the  individuals  which  are the objects of the test. Natural
          selection transcended itself when it brought about the emergence
          of mind and of World 3.

          Along the way Popper also offers a comprehensive introduction  to
          the mind-body problem (Aristotle, Descartes, Leibniz, etc).

          Eccles provides a comprehensive view of neural processes underly-
          ing  various  cognitive functions. Then he advances his theory of
          the self-conscious mind and the brain: the mind is an independent
          entity  that  exercises a controlling role upon the neural events
          of the brain by virtue of its interaction  across  the  interface
          between  World  1  and  World 2. The mind is always searching for
          brain events that are interesting for its goals.

Popper Karl: KNOWLEDGE AND THE BODY-MIND PROBLEM (Routledge, 1994)

          In these lectures Popper recapitulates his theory of the mind.

          Popper distinguishes objective knowledge ("I know that  water  is
          liquid") from subjective knowledge ("I know that I am wrong").

          Popper posits the existence of a first world (the world of physi-
          cal  bodies),  a  second world (the world of mental states) and a
          third world (the world of products of the mind). The second world
          communicates with both the others.

          Objective knowledge belongs to the third world. The  third  world
          evolves  through  the  growth  of  objective knowledge. Objective
          knowledge confers  a  degree  of  autonomy  to  the  third  world
          (numbers  are  created  by  the  mind, but then mathematical laws
          determine what happens to them, regardless of the mind).   Popper
          derives  biological  phenomena of survival and evolution from the
          same formula that determines the growth and evolution  of  objec-
          tive knowledge (basically, trial and error).

          Consciousness emerged evolutionarily with the faculty of
          language, and it emerges during individual growth with the
          faculty of language. Therefore it must be related to the
          brain region that deals with speech.

Port Robert & Van Gelder Timothy: MIND AS MOTION (MIT Press, 1995)

          A collection of papers on the dynamical approach to cognition.

Posner Michael: FOUNDATIONS OF COGNITIVE SCIENCE (MIT Press, 1989)

          A monumental, comprehensive introduction to the field by a number
          of  distinguished  authors. Includes chapters on cognitive archi-
          tectures (such as ACT and SOAR),  connectionism,  model-theoretic
          semantics,  neurophysiology,  discourse,  mental  models, vision,
          memory, action, etc.

Pribram Karl: LANGUAGES OF THE BRAIN (Prentice Hall, 1971)

          Pribram's holonomic model of memory is  based  on  the  hologram.
          Memory is distributed in the brain. Memories do not disappear all
          of a sudden, but  slowly  fade  away.  This  is  consistent  with
          Penfield's experiments.

          A sensory perception is transformed into a "brain wave", a
          pattern of electrical activation that propagates through the
          brain just like a wavefront in a liquid. This crossing of the
          brain provides the interpretation of the sensory perception
          in the form of a "memory wave", which in turn crosses the
          brain. The various waves that travel through the brain can
          interfere. The interference of a memory wave and a visual
          wave generates a structure that resembles a hologram.

Pribram Karl & Eccles John: RETHINKING NEURAL NETWORKS (Lawrence Erlbaum, 1993)

          Proceedings of a conference on neurodynamics that focused on
          the nature of brain processes. Includes R.L. Dawes' "Advances
          in the theory of quantum neurodynamics".

Pribram Karl: BRAIN AND PERCEPTION (Lawrence Erlbaum, 1990)

          A collection of lectures that review Pribram's  holographic  (or,
          better, "holonomic") theory of the brain.

          The theory employs Fourier transformations to deal with the dual-
          ism  between  spacetime  and spectrum, and Gabor's phase space to
          embed spacetime and spectrum.   All  perceptions  (and  not  only
          colors  or  sounds) can be analyzed into their component frequen-
          cies of oscillation and therefore treated  by  Fourier  analysis.
          Dirac's "least action principle" (which favors the least expendi-
          ture of energy) constrains trajectories in such a space.  Gabor's
          uncertainty principle sets a limit on the precision with
          which both frequency and spacetime location can be
          concurrently determined (the fundamental minimum is Gabor's
          "quantum of information").
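
          The claim that perceptual patterns can be analyzed into
          component frequencies can be illustrated with a toy numpy
          sketch; this is ordinary Fourier analysis, not Pribram's own
          formalism, and the signal and sampling rate are invented.

          ```python
          # A toy illustration of Fourier decomposition: a signal
          # (standing in for a perceptual pattern) is analyzed into its
          # component frequencies of oscillation. The signal and the
          # sampling rate are invented for illustration.
          import numpy as np

          rate = 100                       # samples per second
          t = np.arange(rate) / rate       # one second of "perception"
          signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

          spectrum = np.fft.rfft(signal)   # spacetime -> spectrum
          freqs = np.fft.rfftfreq(len(signal), 1 / rate)

          # The two component frequencies dominate the spectrum.
          peaks = sorted(float(freqs[i]) for i in np.argsort(np.abs(spectrum))[-2:])
          print(peaks)                     # [5.0, 12.0]
          ```

          The spectrum recovers exactly the two oscillations that were
          superposed in the signal, which is the sense in which the
          spectral description and the spacetime description carry the
          same information.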

          A rigorous description of transformations  leading  from  percep-
          tions  to  feature extraction is provided for a variety of visual
          and cognitive activities.   Processes  local  to  specific  brain
          regions are studied in neurophysiological detail.

          Pribram expresses a few  innovative  viewpoints  along  the  way.
          Both distributed and localized functions characterize brain func-
          tions.  Structure and process are two aspects of the same entity,
          distinguished  only  by the scale of observation (from a distance
          an entity looks like a structure, but close enough it is  a  pro-
          cess).   The  formalism of quantum theory applies to the modeling
          of brain functions such as vision (brain microprocesses and  phy-
          sical microprocesses can be described by the same formalism).

Pribram Karl: ORIGINS (Lawrence Erlbaum, 1994)

          Proceedings of a conference on neurodynamics that focused on
          the origin and evolution of order. Contributions by Prigogine
          ("mind and matter: beyond the cartesian dualism"), P.J.
          Werbos ("self-organization"), J. Gyr ("psychophysics"), C.
          Game ("non-equilibrium thermodynamics and the brain"), and
          neurophysiological models.

Prigogine Ilya: INTRODUCTION TO THERMODYNAMICS OF IRREVERSIBLE PROCESSES (Interscience Publishers, 1961)

          Prigogine introduced the minimum entropy principle (stable  near-
          equilibrium  dissipative  systems  minimize their rate of entropy
          production) to characterize living organisms.

Prigogine Ilya: NON-EQUILIBRIUM STATISTICAL MECHANICS (Interscience Publishers, 1962)

Prigogine Ilya: FROM BEING TO BECOMING (W.H.Freeman, 1980)

          Living organisms function as dissipative  structures,  structures
          that  form as patterns in the energy flow and that have the capa-
          city  for  self-organization  in  the   face   of   environmental
          fluctuations.

          Dissipative systems maintain their structure by the  continuous
          dissipation of energy.

Prigogine Ilya & Stengers Isabelle: ORDER OUT OF CHAOS (Bantam, 1984)

          This is the English edition of  "La  Nouvelle  Alliance"  (1979).
          Prigogine  analyzes the history of science and scientific thought
          and derives a new vision of the world.

          Classical science (and quantum mechanics) describes the world  as
          a static and reversible system that undergoes no evolution, whose
          information is constant in time. On the other hand,  the  second
          law of thermodynamics describes the world as evolving from order
          to disorder, while biological evolution is  about  the  complex
          emerging from the simple (structure, i.e.  order,  arises  from
          disorder). Irreversible processes are an essential  part  of  the
          universe.   Conditions far from equilibrium foster phenomena such
          as life that classical physics does not cover.

          Prigogine focuses on the peculiar properties exhibited by systems
          far from equilibrium.

          Non-equilibrium conditions favor the spontaneous  development  of
          self-organizing  systems  (i.e.,  dissipative  structures), which
          maintain their internal organization, regardless of  the  general
          increase in entropy, by expelling matter  and  energy  into  the
          environment. Most of Nature is made of  dissipative  systems,  of
          systems  subject  to fluxes of energy and/or matter.  Dissipative
          systems conserve their identity thanks to  the  interaction  with
          the external world.

          The concept of organization is  deeply  rooted  in  the  physical
          universe.

          Prigogine considers living organisms as dissipative structures in
          states of non-equilibrium. In a system that is  not  in  equili-
          brium, the total variation of entropy is the sum of the  entropy
          produced internally and the entropy exchanged with the  external
          world. The former is always positive, but the latter can be nega-
          tive; therefore total entropy can decrease.
          An organism "lives" because it absorbs energy from  the  external
          world  and  processes  it  to generate an internal state of lower
          entropy.  An organism "lives" as long as it can avoid falling
          into the equilibrium state.
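
          The decomposition just described is usually written as an
          entropy balance; a minimal sketch in Prigogine's customary
          notation (dS total change, d_iS internal production, d_eS
          exchange with the environment):

```latex
% Entropy balance of an open (dissipative) system:
% total change = internal production + exchange with the environment
dS = d_iS + d_eS , \qquad d_iS \ge 0
% d_eS has no definite sign; total entropy decreases whenever the
% negative entropy flowing in outweighs the internal production:
dS < 0 \iff d_eS < -\, d_iS
```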

          Probability and irreversibility are closely  related.  Boltzmann
          had already proved that entropy grows because probability grows.

Prior Arthur: PAST, PRESENT AND FUTURE (Clarendon Press, 1967)

          Prior's temporal logic assumes that  the  temporal  reference  is
          negligible  and  therefore  provides  an underlying theory for an
          instant-based ontology (as opposed to the  interval-based  ontol-
          ogy) of Time.

          Time is formalized by means of modal operators that express  pro-
          perties  such  as "always" and "sometimes", "before" and "after",
          "while" and "when".  Prior's theory finds a logical correspondent
          to  many past and future tenses by reducing them to two fundamen-
          tal modal operators, one for the past and  one  for  the  future.
          Nonetheless,  Prior  cannot  represent "since" and "until", which
          can easily be expressed by classical logic.

Prior Arthur: WORLDS, TIMES, AND SELVES (Duckworth, 1977)

          Prior investigates structural analogies between modal logic  (the
          formal  study  of  necessity  and possibility) and quantification
          theory (the formal study of universality and existentiality)  and
          develops  a  modal  system  Q  with  an operator Q that picks out
          instants, worlds, or selves, as the case may be.

Purves Dale: NEURAL ACTIVITY AND THE GROWTH OF THE BRAIN (Cambridge
          Univ Press, 1994)

          Brain cells are in a continual state of flux, creating  and  des-
          troying synapses all the time. Neural activity caused by external
          stimuli is responsible for the continual growth of the brain, and
          for sculpting a unique brain anatomy in every individual based on
          the individual's experience.

Putnam Hilary: MIND, LANGUAGE AND REALITY (Cambridge Univ Press, 1975)

          The same mental state may be implemented  by  different  physical
          states.

          Putnam imagines a world called "Twin Earth" exactly like Earth in
          every  respect  except  that  the stuff which appears and behaves
          like water, and is actually called "water", on Twin  Earth  is  a
          chemical compound XYZ. If an Earth inhabitant and  a  Twin  Earth
          inhabitant, identical in all respects, think about "water",  they
          are thinking about two different things,  while  their  mental
          states are absolutely identical. Therefore  the  content  of  a
          concept depends on the context ("externalism").  Meanings  are
          not only in the mind; they also depend on the objects  that  the
          mind is connected to.

          Putnam classifies mental states based  on  their  function,  i.e.
          their causal roles within the mental system, regardless of  their
          physical structure.  Putnam originally suggested that the psycho-
          logical  state of an individual be identified with the state of a
          Turing machine. A psychological state would cause  other  psycho-
          logical states according to the machine's operations.  Belief and
          desire correspond to formulas stored  in  two  registers  of  the
          machine.   Appropriate  algorithms process those contents to pro-
          duce action.
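
          Putnam's machine functionalism can be caricatured in a few
          lines of code. This is only an illustrative sketch, not
          Putnam's formalism: the class, its two "registers" and the
          crude practical-syllogism rule are all invented here.

```python
# Toy illustration (invented here) of Putnam's early proposal: belief
# and desire are formulas held in two "registers" of a machine, and an
# algorithm processes those contents to produce action.

class ToyMachine:
    def __init__(self):
        self.belief = None   # register: a (condition, action) formula
        self.desire = None   # register: a goal formula

    def step(self):
        # Crude practical syllogism: if the machine desires X and
        # believes that doing A brings about X, it performs A.
        if self.belief and self.desire:
            condition, action = self.belief
            if condition == self.desire:
                return action
        return None

m = ToyMachine()
m.desire = "quench thirst"
m.belief = ("quench thirst", "drink water")
print(m.step())   # prints: drink water
```

          The point of the caricature is only that the same transition
          table could be realized by any physical substrate.
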

Putnam Hilary: REASON, TRUTH AND HISTORY (Cambridge Univ Press, 1981)

          Putnam argues that meaning does not reside in  the  relationship
          between symbols and the world. Model-theoretic semantics fails as
          a theory of meaning.

          The definition of truth depends on the meaning of  the  words  of
          the  language and each definition of truth should list all condi-
          tions that meaning depends on (including the definition of  truth
          which is being defined).

Putnam Hilary: REPRESENTATION AND REALITY (MIT Press, 1988)

          Putnam abandons his functionalist theory of the brain.

          Mind cannot comprehend itself. An automaton  cannot  explain  its
          own  behavior.   The same mental state may be implemented by dif-
          ferent computational (functional) states, therefore mental states
          cannot be computer programs.

          Explanation and  prediction  of  intentional  phenomena  such  as
          belief and desire belong to the realm of interpretation: concepts
          do not exist in the mind, but are the output of  interpretation.
          Interpretation  can  be  "normative",  when it employs Davidson's
          principle of charity or  Dennett's  principle  of  "rationality",
          which  state that an organism behaves as it should given the cir-
          cumstances (most of its beliefs are  true,  it  believes  in  the
          implications  of  its  beliefs,  no  two  beliefs contradict each
          other, and so on); or "projective" (Stich), when  it  attributes
          to  an  organism  the  propositional attitudes that we would have
          were we in its situation.

          Meaning exhibits an identity through time but not in its  essence
          (such as momentum, which is a different thing for Newton and Ein-
          stein but expresses the same concept). An  individual's  concepts
          are  not  scientific  and  depend on the environment. Most people
          know what gold is, and still they cannot explain what it  is  and
          even need a jeweler to assess whether something is really gold or
          a fake. Still, if some day we found out that Chemistry has  erred
          in  counting  the  electrons  of the atom of gold, this would not
          change what it is.  The meaning of the word  "gold"  is  not  its
          scientific  definition,  but  the social meaning that a community
          has given it.  It is not true that every individual has  in  his
          mind  all  the  knowledge  needed to understand the referent of a
          word. There is a division of linguistic labor among human  beings
          and the referent of a word is fixed by their  cooperation.  Mean-
          ing is not in the mind.

Pylyshyn Zenon: COMPUTATION AND COGNITION (MIT Press, 1984)

          Zenon Pylyshyn believes in  a  variant  of  Fodor's  language  of
          thought.   He  has  applied  that  theory to the debate on mental
          imagery,  his  "descriptionalism"  being   opposed   to   Stephen
          Kosslyn's pictorialism.

          Pylyshyn recognizes three levels  of  description  for  cognitive
          tasks:  the knowledge level (which explains actions of the system
          as functions of what it knows and its goals), the symbolic  level
          (which  codifies  the  semantic  content of the knowledge and the
          goals), and the physical level. Unlike  Marr,  Pylyshyn  believes
          that  all  three  levels  must be studied to understand cognitive
          functions.

          The primitive operations of the mind's cognitive architecture can
          be  recognized because they are those defined solely by the biol-
          ogy of the brain, i.e. those that cannot be altered by any  other
          cognitive activity, those that are "cognitively impenetrable".

          Images are simply the product of the manipulations  of  knowledge
          encoded in the form of propositions.

Quine Willard: WORD AND OBJECT (MIT Press, 1960)

          Quine criticized the distinction between analytic  and  synthetic
          and  advanced an indeterminacy principle to distinguish the logi-
          cal from the extra-logical vocabulary: the only  vocabulary  that
          counts  as  logical  is  the  one  that  is free of translational
          indeterminacy.

Quine Willard: FROM A LOGICAL POINT OF VIEW (Harper & Row, 1961)

          Contains the famous "Two Dogmas Of Empiricism",  a  manifesto  of
          holism.   Influenced  by  Pierre Duhem's argument that hypotheses
          cannot be tested in isolation from the whole theoretical  network
          in which they figure, Quine thinks that a hypothesis is  verified
          true or false only relative to background assumptions.  There  is
          no  certain  way to determine what has to be changed in a theory,
          any hypothesis can be retained as true or discarded as  false  by
          performing  appropriate  adjustments  in  the  overall network of
          assumptions. No sentence has special  epistemic  properties  that
          safeguard  it from revision. Science is but self-conscious common
          sense.

Quine Willard: THE WEB OF BELIEF (Random House, 1978)

          The structure of concepts is determined  by  the  positions  that
          their  constituents  occupy in the "web of belief" of the indivi-
          dual. The child's concepts are based on the notion of similarity,
          and they slowly evolve to acquire a more theoretical structure.

Quine Willard: ONTOLOGICAL RELATIVITY (Columbia Univ Press, 1969)

          The truth of a statement cannot be assessed as a function of  the
          meaning of its words. Words do not have an absolute meaning. They
          have a meaning only with respect to the other words they are con-
          nected to in the sentences that we assume to be true. Their mean-
          ing can even change in time.

          Quine's underdetermination theory  originates  in  the  sciences.
          For  every  empirical  datum  there  can be an infinite number of
          theories that explain it.  Science simply picks  the  combination
          of  hypotheses  that  seems  more  plausible.  When a  hypothesis
          fails, the scientist can always modify the  other  hypotheses  to
          make it hold.

          Language is a special case. An empirical datum is a discourse and
          a  theory is its meaning. There are infinite interpretations of a
          discourse depending on the context. A single word has no meaning,
          its  referent  is  "inscrutable".  The meaning of language is not
          even in the mind of the  speaker.  It  is  a  natural  phenomenon
          related to the world of that speaker.

          A translation depends on the manual of translation that has  been
          chosen.

          Like verificationists, Quine thinks that the meaning of a  state-
          ment is the method that can verify it empirically.  Like holists,
          Quine thinks that the unit of meaning is given by science in  its
          entirety,  i.e.   verification  of  a  statement  within a theory
          depends on the set of all other statements of the  theory.   Each
          statement  in  a theory partially determines the meaning of every
          other statement in the same theory.

          This is a variant of  Brentano's  "irreducibility"  thesis,  that
          mental  states  cannot  be  reduced to physical states. But Quine
          believes that intentional phenomena should be  purged  from  sci-
          ence.

Quinn Naomi & Holland Dorothy: CULTURAL MODELS IN LANGUAGE AND THOUGHT
          (Cambridge Univ Press, 1987)

          Physical objects, because they exhibit spatial properties,  allow
          us  to  build mental models. The only way to build a mental model
          for a non-physical object is to transfer the model of a  physical
          object through a metaphor.

Raiffa Howard: DECISION ANALYSIS (Addison-Wesley, 1968)

          Raiffa all but founded the discipline  of  decision  analysis  by
          borrowing from Von Neumann's concept of  utility  (weighing  the
          pros and cons of a decision).  Its goal is to  define  what  con-
          stitutes a "good" decision, regardless of whether its result  is
          "good" or not.

Ramsey William, Stich Stephen & Rumelhart David: PHILOSOPHY AND
          CONNECTIONIST THEORY (Lawrence Erlbaum, 1991)

          Philosophical articles on the  connectionist  model  by  Margaret
          Boden, Daniel Dennett, William Lycan, etc.

Ramsey Allan: FORMAL METHODS IN ARTIFICIAL INTELLIGENCE (Cambridge
          University Press, 1988)

          Modal, temporal,  non-monotonic  logic  and  an  introduction  to
          Montague's semantics.

Ranta Aarne: TYPE-THEORETICAL GRAMMAR (Oxford Univ Press, 1995)

          Ranta   applies   Martin-Lof's   type   theory   to   linguistics
          (quantification, anaphora, temporal reference).

Ray Thomas: EVOLUTION AND OPTIMIZATION OF DIGITAL ORGANISMS
          (manuscript, 1992)

          Ray's goal  is  to  create  "creatures"  (sequences  of  computer
          instructions)  that  can compete for memory space. Unlike Dawkins
          and Holland, Ray does not employ a function to measure how  "fit"
          a  creature  is:  it  is  life  itself  that determines  how  fit
          the creature is.

Reilly Ronan & Sharkey Noel: CONNECTIONIST APPROACHES TO NATURAL
          LANGUAGE PROCESSING (Lawrence Erlbaum, 1992)

          An introduction and survey of connectionist natural language pro-
          cessing.

Riesbeck Christopher & Schank Roger: INSIDE CASE-BASED REASONING
          (Lawrence Erlbaum, 1989)

          The definitive introduction  to  case-based  reasoning:  scripts,
          cases,  the episodic model of memory.

          Case-based reasoning is a form of analogical reasoning  in  which
          an  episodic  memory  archives generalizations of all known cases
          and each new case spawns the search for a case that  is  similar.
          The  new  case  is  interpreted based on any similar cases and is
          used to further refine the generalizations.   Interpretation  is
          expectation-driven, based on what happened in previous cases.
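
          The retrieval step can be sketched in a few lines. This is a
          toy illustration (the stored cases and the feature-overlap
          measure are invented here), not Riesbeck and Schank's actual
          algorithms:

```python
# Toy case retrieval: a new case spawns a search of episodic memory
# for the most similar stored case, which then drives expectations.
# The cases and the similarity measure are invented for illustration.

def similarity(a, b):
    """Number of feature-value pairs the two cases share."""
    return sum(1 for k in a if k in b and a[k] == b[k])

def retrieve(memory, new_case):
    """Return the stored case most similar to the new one."""
    return max(memory, key=lambda case: similarity(case, new_case))

memory = [
    {"place": "restaurant", "goal": "eat", "outcome": "paid bill"},
    {"place": "cinema", "goal": "watch film", "outcome": "bought ticket"},
]
new = {"place": "restaurant", "goal": "eat"}
print(retrieve(memory, new)["outcome"])   # prints: paid bill
```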

          Episodic memory contains  examples   of  solutions,  rather  than
          solutions.

          The book surveys the most important  systems  built  by  Schank's
          team,  thereby  touching  on MOPs and more advanced memory struc-
          tures.

Richards Ivor: THE PHILOSOPHY OF RHETORIC (Oxford Univ Press, 1936)

Rips Lance: THE PSYCHOLOGY OF PROOF (MIT Press, 1994)

          Rips claims that deductive reasoning is central to  intelligence,
          explains how it depends on the ability to construct mental proofs
          by linking memory units to the conclusions they warrant, and pro-
          poses  a  unified  theory  of  natural  deductive  reasoning that
          requires only two  cognitive  skills:  the  ability  to  generate
          assumptions and the ability to generate subgoals.  He then shows
          its relations to natural  logics,  nonmonotonic  reasoning  and
          defeasible reasoning.

          When the mind has to categorize an object a variety  of  inferen-
          tial processes occur, not limited to prototype recognition.

Roediger Henry: VARIETIES OF MEMORY AND CONSCIOUSNESS (Lawrence
          Erlbaum, 1989)

          A collection of essays in honour of Endel Tulving.

          John Anderson provides a "rational analysis  of  memory":  memory
          must  behave  as an optimal solution to the information-retrieval
          problem.

          Daniel Schacter, the originator of the concepts of  implicit  and
          explicit memory, presents a cognitive architecture (DICE) to deal
          with awareness.  Explicit memory refers to memory with awareness,
          while  implicit memory refers to the recall of experience to per-
          form a task without any intentional remembering.

          Donald Broadbent highlights results of his tests: there  seem  to
          be  several  codes  for representing information; the information
          can pass from one code to another; such declarative knowledge  is
          distinct  from  procedural  knowledge,  which is used to memorize
          skills for which too much information would  be  required;  pro-
          cedural knowledge may be turned into declarative knowledge.

Rosch Eleanor: COGNITION AND CATEGORIZATION (Erlbaum, 1978)

          A collection of articles on categorization: Brent Berlin's  "Eth-
          nobiological  classification",  Kosslyn's  "Imagery  and internal
          representation" and Rosch's own "Principles of categorization".

          Rosch's principles are that 1. the task of category systems is to
          provide  maximum information with the least cognitive effort; and
          2. the perceived world comes as structured information.

          Concepts promote a cognitive economy by  partitioning  the  world
          into  classes.   Concepts  allow the mind to substantially reduce
          the amount of information to be remembered and processed.

          A concept is represented through a prototype which expresses  its
          most  significant  properties.  A  prototype  has cultural roots.
          Membership of an individual in a category is then  determined  by
          the  perceived  distance  of resemblance of the individual to the
          prototype of the category.

          There exists a level of  abstraction  at  which  the  most  basic
          category  cuts  are   made (cue validity is maximized), the basic
          level.  Superordinate  categories  are  more  abstract  and  more
          comprehensive.  Subordinate categories are less abstract and less
          comprehensive. Categories are related in a hierarchical organiza-
          tion of language to describe the world. The most fundamental per-
          ception and description of the  world  occurs  at  the  level  of
          natural (basic) categories.

          Categories occur in systems, and such systems include contrasting
          categories.   At  the  basic  level categories are maximally dis-
          tinct, i.e. they maximize  perceived  similarity  among  category
          members  and  minimize  perceived similarities across contrasting
          categories. Cue validity is the conditional probability  that  an
          object  falls  in a particular category given a specific feature.
          Category cue validity is the sum of all the individual cue  vali-
          dities  of  the  features associated with a category. The highest
          cue validity occurs at the basic level. The lowest cue validities
          occur for superordinate categories.
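
          Since cue validity is a conditional probability, it is easy
          to compute from observations. A minimal sketch (the toy
          objects and features below are invented here):

```python
# Cue validity: P(category | feature), estimated from a list of
# (features, category) observations; category cue validity is the sum
# of the cue validities of the category's features. Toy data invented.

def cue_validity(objects, feature, category):
    """Conditional probability of the category given the feature."""
    with_feature = [cat for feats, cat in objects if feature in feats]
    if not with_feature:
        return 0.0
    return with_feature.count(category) / len(with_feature)

def category_cue_validity(objects, features, category):
    """Sum of the individual cue validities of the features."""
    return sum(cue_validity(objects, f, category) for f in features)

objects = [
    ({"wings", "beak"}, "bird"),
    ({"wings", "beak"}, "bird"),
    ({"wings"}, "insect"),
]
print(cue_validity(objects, "beak", "bird"))    # prints: 1.0
print(cue_validity(objects, "wings", "bird"))   # ~0.667: wings shared
```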

          Later Rosch will repudiate her theory of  prototypes:  categories
          are not mutually exclusive (an object can belong to more than one
          category to different degrees), i.e. they are fundamentally ambi-
          guous.

Rose Steven Peter Russell: THE CONSCIOUS BRAIN (Harmondsworth, 1976)

          The mental and the neural are simply two aspects of the same phy-
          sical  state.   Mind neither causes a physical state of the brain
          nor is caused by it.

Rose Steven Peter Russell: THE MAKING OF MEMORY (Bantam, 1992)

          A broad review of the mind-brain debate.

Rosenberg Alexander: SOCIOBIOLOGY AND THE PREEMPTION OF SOCIAL SCIENCE
          (Johns Hopkins Univ Press, 1980)

          Rosenberg  thinks  that  logical  anomalies  of  the  intentional
          language  constitute  a good reason to omit intentional phenomena
          from science.

Rosenblatt Frank: PRINCIPLES OF NEURODYNAMICS (Spartan, 1962)

          Rosenblatt extended the model of the binary neuron to connections
          with  continuous  values which change dynamically.  His  "per-
          ceptron" can be trained with a finite number of iterations  of  a
          training procedure.
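
          The training procedure can be sketched as the classic
          perceptron learning rule in miniature (the learning rate,
          epoch count and AND data set are chosen here purely for
          illustration):

```python
# Minimal perceptron training sketch: weights on continuous-valued
# connections are adjusted over a finite number of passes until the
# unit classifies every example correctly (guaranteed for linearly
# separable data). Parameters and data are chosen for illustration.

def train_perceptron(samples, epochs=20, rate=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0 or +1
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            b += rate * err
    return w, b

# Logical AND is linearly separable, so the procedure converges.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND])
# prints: [0, 0, 0, 1]
```
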

Rosenfeld Israel: THE STRANGE, THE FAMILIAR AND FORGOTTEN (Vintage, 1995)

          An account of consciousness based on clinical  cases.  Memory  is
          impossible  without consciousness. Memory is not simply a storage
          mechanism, it  is  a  continuing  brain  activity.  Consciousness
          arises  from the "dynamic interrelations of the past, the present
          and the body image" (the ability of the brain  to  relate  sensa-
          tions to specific areas of the body).

Rosenthal David: MATERIALISM AND THE MIND-BODY PROBLEM (Prentice-Hall,
          1971)

          Contains J.J. Smart's 1959 paper  in  defense  of   the  identity
          theory of the mind.

Rosenthal David: NATURE OF MIND (Oxford University Press, 1991)

          A monumental reader on the mind,  collecting  some  of  the  most
          influential  papers  ever  written  on the subject, starting with
          Descartes and  ending  with  Searle's  Chinese  room  experiment.
          Armstrong,  Smart,  Putnam, Lewis, Block, Kripke, Davidson, Kim,
          Quine, Chisholm, Fodor, Dennett, Dretske, Sellars, Nagel, Searle,
          Churchland,  Stich  and  many others are represented with some of
          their writings.

Rucker Rudy: INFINITY AND THE MIND (Birkhauser, 1982)

          A colossal excursion into the topic of  infinity:  history  of
          the concept, transfinite numbers, Godel's theorem,  self-refer-
          ence, etc. Many paradoxes illustrate the main discussions.

Rumelhart David & McClelland James: PARALLEL DISTRIBUTED PROCESSING
          VOL. 1 (MIT Press, 1986)

          Parallel Distributed Processing, or PDP, is a  class  of  general
          processing  systems  that  employ a number of interacting proces-
          sors, whereby processing is done in parallel by all  the  proces-
          sors  and  control  is  distributed  over  all  the  processors.
          Rumelhart, McClelland and Hinton propose a general framework in a
          formal way and introduce a number of variants, from simple linear
          models to thermodynamic models. The axiom of their  framework  is
          that  all  the  knowledge  of  the  system  is in the connections
          between the processors. Items can be represented by activity in a
          single  unit or by patterns of activity over a set of units. Dis-
          tributed representations are efficient for tasks  of  generaliza-
          tion, recognition, etc.

          This approach is better suited for pattern matching tasks such as
          visual recognition and language understanding.

          Neurocomputation is a form of "parallel distributed  processing":
          a  neural net is a non-linear directed graph in which  each  pro-
          cessing element (each node) receives signals from  other  nodes
          and emits a signal towards other nodes, and each  connection  be-
          tween nodes has a weight that can vary in time.

          A concept is represented not by a symbol stored  at  some  memory
          location, but by an equilibrium state defined over a dynamic net-
          work of locally interacting units.  Each unit encodes one of  the
          many  features  relevant to recognizing the concept, and the con-
          nections between units are excitatory or inhibitory  inasmuch  as
          the  corresponding features are mutually supportive or contradic-
          tory.  A given unit can contribute to the definition of many con-
          cepts.

          Competitive learning,  the  Boltzmann  machine,  the  generalized
          delta  rule  and  Smolensky's  "harmony"  theory are discussed at
          length.

          Smolensky, by formalizing the notion  of  "schema",  developed  a
          theory  of  dynamical  systems  that perform cognitive tasks at a
          subsymbolic level.  The task of a schema-based perceptual  system
          can  be  viewed  as  the completion of the partial description of
          static states of an environment.  Knowledge is  encoded  as  con-
          straints among a set of perceptual features.  The constraints and
          features evolve gradually with experience.  Schemata are  collec-
          tions  of knowledge atoms that become active in order to maximize
          "harmony". The cognitive  system  is  an  engine  for  activating
          coherent assemblies of atoms and drawing inferences that are con-
          sistent with the knowledge represented by  the  activated  atoms.
          The  harmony function measures the self-consistency of a possible
          state of the cognitive system.

          The harmony function obeys a law that resembles simulated anneal-
          ing (just like the Boltzmann machine): the best completion is found
          by lowering the temperature to zero. Smolensky  borrows  concepts
          and  techniques  from  thermal  physics  for building his harmony
          theory.
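
          Harmony maximization by annealing can be caricatured in a
          few lines. The sketch below is only loosely inspired by
          harmony theory; the three-unit network, the weights and the
          cooling schedule are invented here.

```python
import math
import random

# Toy harmony-style completion: binary units joined by excitatory (+)
# and inhibitory (-) weights settle, as the temperature falls, into a
# self-consistent assembly of active units. Network and schedule are
# invented for illustration.

def harmony(state, weights):
    """Self-consistency: sum of weights over pairs of co-active units."""
    return sum(w * state[i] * state[j] for (i, j), w in weights.items())

def anneal(state, weights, temps, rng):
    for t in temps:
        for i in range(len(state)):
            flipped = list(state)
            flipped[i] = 1 - flipped[i]
            gain = harmony(flipped, weights) - harmony(state, weights)
            # Harmony-raising flips are always taken; harmony-lowering
            # flips are taken with a probability that vanishes as t -> 0.
            if gain > 0 or rng.random() < math.exp(gain / t):
                state = flipped
    return state

# Units 0 and 1 support each other; unit 2 contradicts both, so the
# most harmonious completion activates 0 and 1 and shuts off 2.
weights = {(0, 1): 1.0, (0, 2): -1.0, (1, 2): -1.0}
schedule = [2.0] * 5 + [1.0] * 5 + [0.5] * 5 + [0.1] * 10
final = anneal([0, 0, 1], weights, schedule, random.Random(0))
print(final, harmony(final, weights))
```
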

Ruspini Enrique, Bonissone Piero, Pedrycz Witold: HANDBOOK OF FUZZY
          COMPUTATION (Oxford Univ Press, 1997)

          The ultimate handbook for professional fuzzy programmers.

Russell Bertrand: AN INQUIRY INTO MEANING AND TRUTH (Penguin, 1962)

          In this 1940 essay, consciousness is a window into the brain that
          allows  us  to  have direct knowledge of matter.  Russell defines
          as "propositional attitudes" those sentences  that  express  the
          subject's attitude towards a proposition. They express a  mental
          state.

          Matter is endowed with qualities that are directly accessible  to
          the mind, that are in a causal relationship with the mind.

          The set of all sets that are not members of themselves leads to a
          contradiction: if it is a member of itself, then it is not,  and
          vice versa. The popular version is the barber paradox: a  barber
          shaves all and only those who do not shave themselves; who shaves
          the barber?

Russell Stuart Jonathan & Norvig Peter: ARTIFICIAL INTELLIGENCE
          (Prentice Hall, 1995)

          A comprehensive textbook that organizes the field around the  no-
          tion of intelligent agents.

Ryle Gilbert: THE CONCEPT OF MIND (Hutchinson, 1949)

          Ryle set himself to prove that Descartes' view of the  "ghost  in
          the  machine" is absurd. The relation of body and mind is a false
          problem. Behaviorism is an alternative to the  traditional  views
          of dualism and materialism.

          Mental and physical vocabularies have nothing in common. The men-
          tal  vocabulary does not refer to the structure of something, but
          simply to the way somebody behaves or will behave.

          A concept must always be referred to the set of  concepts  within
          which it is applicable.

          Ryle emphasizes the difference between knowing "how" and  knowing
          "that".   Propositional  knowledge  represents only one aspect of
          human intelligence.

Sadock Jerrold: TOWARD A LINGUISTIC THEORY OF SPEECH ACTS (Academic
          Press, 1974)

          Following John Ross, who first proposed the performative analysis
          of a sentence (i.e., explicitly identifying the performative for-
          mula defining the  illocutionary  force  of  a  sentence),  the
          abstract-performative  theory  posits  that  the highest semantic
          clause of the semantic representation of a sentence  provides:  a
          subject referring to the speaker, an indirect object referring to
          the addressee and a verb referring to a performative verb.  Illo-
          cutionary  force is then that aspect of a sentence's meaning that
          corresponds to the highest clause in its semantic representation,
          i.e. a performative formula.

          Sadock distinguishes semantic sense from interpreted sense (mean-
          ing  from  use) on the basis of three groups of formal properties
          (cooccurrence properties, paraphrase properties, grammatical pro-
          perties).

          Illocutionary force is part of the pragmatic meaning  of  a  sen-
          tence.  Sadock suggests that illocutionary acts are special cases
          of perlocutionary acts, because they too have an  effect  that  is
          posterior to the speech act.

Salthe Stanley: EVOLVING HIERARCHICAL SYSTEMS (Columbia University
          Press, 1985)

          A novel take (from a philosophical  perspective)  on  traditional
          issues  of  evolutionary  theory. By combining the metaphysics of
          Justus Buchler and Michael Conrad's "statistical state model"  of
          the  evolutionary  process,  Salthe  develops  an ontology of the
          world, a formal theory of hierarchies and a model of  the  evolu-
          tion of the world.

          The world is viewed as a determinate machine  of  unlimited  com-
          plexity.  Within  complexity  discontinuities  arise.  The  basic
          structure of this world must allow for complexity that  is  spon-
          taneously  stable and that can be broken down in things separated
          by boundaries. A possible solution is a  hierarchical  structure,
          which  is  also  implied  by  Buchler's  principle of ordinality:
          nature (i.e., our representation of the world) is a hierarchy  of
          entities existing at different levels of organization.  Hierarch-
          ical structure is a  consequence  of  complexity.   Entities  are
          defined  by  four  criteria: boundaries, scale, integration, con-
          tinuity: an entity has size, is limited by boundaries,  and  con-
          sists of an integrated system which varies continuously in time.

          Salthe develops  a  formal  theory  of  hierarchical  structures.
          Entities at different levels interact through mutual constraints,
          each constraint carrying information for the  level  it  operates
          upon.  A process can be described by a triad of contiguous levels:
          the one it occurs at, its context (Bunge's environment)  and  its
          causes  (Bunge's  structure).  In general, a lower level provides
          initiating conditions for a process and an upper  level  provides
          boundary    conditions.     Representing   a   dynamical   system
          hierarchically requires a triadic structure.
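
          Salthe's triadic scheme lends itself to a small data-structure
          sketch. The level names and scales below are illustrative inven-
          tions, not Salthe's own notation:

```python
# A minimal sketch of Salthe's triadic representation of a process:
# the focal level where it occurs, the upper level supplying boundary
# conditions, and the lower level supplying initiating conditions.
# All names and scale values here are illustrative.
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    scale: float          # characteristic spatial scale (metres)

@dataclass
class Triad:
    lower: Level          # provides initiating conditions
    focal: Level          # where the process occurs
    upper: Level          # provides boundary conditions

    def well_formed(self) -> bool:
        # the three levels must be contiguous in scale: lower < focal < upper
        return self.lower.scale < self.focal.scale < self.upper.scale

cell = Triad(Level("molecule", 1e-9), Level("cell", 1e-5), Level("organism", 1.0))
print(cell.well_formed())  # True
```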

          Aggregation occurs consequent upon differentiation.  Differentia-
          tion  interpolates  levels  between  the original two and the new
          entities aggregate in such a way as to affect  the  structure  of
          the  upper  levels:  every  time  a new level emerges, the entire
          hierarchy must reorganize itself.

          Salthe also recalls Pattee's view of complexity as the result  of
          interactions  between physical and symbolic systems, where a phy-
          sical system is dependent on the rates at which  processes  occur
          and  a  symbolic system is not. Symbolic systems frequently serve
          as constraints within which physical systems  operate,  and  fre-
          quently  appear  as  products of the activity of physical systems
          (e.g., the genome in a cell).  A physical system is complex  when
          a part of it functions as a symbolic system (as a representation,
          and therefore as an observer) for another part of it.

          These abstract principles are then applied to organic evolution.

          Over time nature generates entities  of  gradually  more  limited
          scope  and more precise form and behavior. This process populates
          the hierarchy of  intermediate  levels  of  organization  as  the
          hierarchy  spontaneously  reorganizes itself.  This model applies
          to all open systems, whether organisms or ecosystems or planets.

Salthe Stanley: DEVELOPMENT AND EVOLUTION (MIT Press, 1993)

          By applying principles  of  complex  systems  to  biological  and
          social  phenomena,  Salthe  attempts  to  reformulate  biology on
          development rather  than  on  evolution.  Salthe's  postmodernist
          strategy is to foster the deconstruction of the rationalist trad-
          ition in science and show that an  alternative  exists  based  on
          Aristotle's and Hegel's thinking. Salthe makes use of theoretical
          tools from semiotics and information science.

          His approach is non-darwinian to the extent that development, and
          not  evolution,  is the fundamental process in self-organization.
          Evolution is merely the result of a margin of error.  His  theory
          rests  on  a  bold fusion of hierarchy theory, information theory
          and semiotics. Salthe is looking for a grand  theory  of  nature,
          which turns out to be essentially a theory of change, which turns
          out to be essentially a theory of emergence.

Savage Leonard: THE FOUNDATIONS OF STATISTICS (John Wiley, 1954)

          Savage was a subjectivist, thinking that probability of an  event
          is  not  merely the frequency with which that event occurs, but a
          measure of the degree to which someone believes it  will  happen.
          Savage  devised  a set of rational axioms for a person's prefer-
          ences.

Schacter Daniel & Tulving Endel: MEMORY SYSTEMS (MIT Press, 1994)

          A collection of essays. The editors provide a history  of  memory
          theories  and  survey the contemporary field. They also offer new
          criteria for defining a memory system  and  identify  five  major
          systems:  a  procedural system (nondeclarative, implicit), a per-
          ceptual system (ditto), a semantic  system  (declarative,  impli-
          cit), the episodic system (explicit) and the working memory (also
          explicit).

          Alan Baddeley describes working memory as the  interface  between
          memory and cognition.

          A few essays deal with the role of the hippocampus.

Schank Roger: CONCEPTUAL INFORMATION PROCESSING (North Holland, 1975)

          A number of primitive actions can be used  to  form  all  complex
          actions.   Each  action  entails  roles  which  are common to all
          languages.  Therefore a verb can be represented in terms of  more
          primitive concepts.  Schank's "conceptual dependency" draws ideas
          from Fillmore and Katz.

          Conceptual  dependency  parsing  reveals  things  that  are   not
          explicit  in  the surface form of the utterance: additional roles
          and additional  relations.   They  are  filled  in  through  the
          system's  knowledge  of  lexical  semantics and domain heuristics
          that help infer what is true in the domain.   Any  two  sentences
          that  share the same meaning will have exactly the same represen-
          tation in conceptual dependency, regardless of how much  is  left
          implicit by each one.
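
          The claim that any two synonymous sentences receive exactly the
          same representation can be illustrated with a toy sketch. ATRANS
          is Schank's primitive for transfer of possession; the miniature
          "parser" and its vocabulary are invented for the example and bear
          no resemblance to a real conceptual dependency parser:

```python
# Sketch: two sentences that "share the same meaning" reduce to one
# conceptual-dependency structure built from a primitive act (ATRANS).
# The toy parser below handles exactly these two surface forms.
def parse(sentence: str) -> dict:
    words = sentence.lower().rstrip(".").split()
    if "gave" in words:                    # "John gave Mary a book"
        actor, _, recipient, _, obj = words
    elif "received" in words:              # "Mary received a book from John"
        recipient, _, _, obj, _, actor = words
    # roles common to all languages: actor, object, donor, recipient
    return {"act": "ATRANS", "actor": actor, "object": obj,
            "from": actor, "to": recipient}

s1 = parse("John gave Mary a book")
s2 = parse("Mary received a book from John")
print(s1 == s2)  # True: identical representation, different surface forms
```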

Schank Roger: DYNAMIC MEMORY (Cambridge Univ Press, 1982)

          Dynamic memory is a type of memory that  can  grow  on  its  own,
          based  on  experience. A script is a generalization of a class of
          situations. If a situation falls into the context  of  a  script,
          then  an expectation is created by the script, based on what hap-
          pened in all previous situations. If  the  expectation  fails  to
          materialize,  then  a new memory must be created.  This memory is
          structured according to an "explanation" of  the  failure.   Gen-
          eralizations are created from two identical expectation failures.
          Memories are driven by expectation failures, by  the  attempt  to
          explain  each  failure  and  learning  from that experience.  New
          experiences are stored only if they fail to conform to the expec-
          tations.   Remembering  is  closely  related to understanding and
          learning.

          A scene is a general description of a setting and a goal in  that
          setting.  A script is a particular instantiation of a scene (many
          scripts can be attached to one scene).   A  "memory  organization
          packet"  is a structure that keeps information about how memories
          are linked in frequently occurring combinations. A MOP is both  a
          storing  structure and a processing structure. A MOP is basically
          an ordered set of scenes directed towards a goal.  A MOP is  more
          general  than  a  script in that it can contain information about
          many settings (including many scripts). A "thematic  organization
          packet" is an even higher-level structure that stores information
          independent of any setting.
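
          The expectation-failure cycle described above can be sketched as
          follows. The class and method names are invented; the restaurant
          script is the standard example:

```python
# Sketch of expectation-failure-driven memory: a script predicts the
# usual event sequence; an episode is stored only when it violates the
# expectation, indexed by an "explanation" of the failure.
class Script:
    def __init__(self, name, expected_events):
        self.name = name
        self.expected = expected_events
        self.failures = {}                  # explanation -> episodes

    def process(self, episode, explanation=None):
        if episode == self.expected:
            return False                    # conforms: nothing new stored
        self.failures.setdefault(explanation, []).append(episode)
        return True                         # expectation failure stored

    def generalizations(self):
        # two identical expectation failures yield a generalization
        return [expl for expl, eps in self.failures.items()
                if len(eps) >= 2 and eps[0] == eps[1]]

restaurant = Script("restaurant", ["enter", "order", "eat", "pay", "leave"])
restaurant.process(["enter", "order", "eat", "pay", "leave"])  # conforms
restaurant.process(["enter", "order", "leave"], "waiter never came")
restaurant.process(["enter", "order", "leave"], "waiter never came")
print(restaurant.generalizations())   # ['waiter never came']
```

          Note that the conforming episode leaves no trace, exactly as in
          the text: new experiences are stored only when they fail to match.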

Schank Roger: SCRIPTS, PLANS, GOALS, AND UNDERSTANDING (Lawrence Erlbaum, 1977)

          A script  is  a  social  variant  of  Minsky's  frame.  A  script
          represents stereotypical knowledge of situations as a sequence of
          actions and a set of roles. Once the situation is recognized, the
          script  prescribes  the  actions  that are sensible and the roles
          that are likely to be played. The  script  helps  understand  the
          situation  and predicts what will happen. A script performs anti-
          cipatory reasoning.

          Scripts originate as units in the "event memory". The  comprehen-
          sion  of  an  event contributes to reorganize the abstractions of
          past events so that their scripts become more and more  efficient
          recognizing  that  type  of event.  This type of gradual learning
          depends on similarities between  events.  From  the  similarities
          scripts and roles are abstracted.

          Memory is syntactic (episodic) and dynamic (adaptive). Memory has
          the  passive  function  of remembering and the active function of
          predicting.  The comprehension of the world and  its  categoriza-
          tion proceed together.

Schank Roger: TELL ME A STORY (Scribner, 1990)

          Ultimately, knowledge (and intelligence itself) is stories.  Cog-
          nitive  skills emerge from discourse-related functions: conversa-
          tion is reminding and storytelling is understanding (and in  par-
          ticular  generalizing). The stories that are told differ from the
          stories that are in memory: in the process of being told, a story
          undergoes  changes  to reflect the intentions of the speaker. The
          mechanism is similar to script-driven reasoning: understanding  a
          story  entails  finding  a  story  in memory that matches the new
          story and enhancing the old story with details from the new  one.
          Underlying  the  mechanism  is  a  process of "indexing" based on
          identifying five factors: theme, goal, plan, result  and  lesson.
          Memory  actually  contains  only  "gists" of stories, that can be
          turned into stories by a number of operations (distillation, com-
          bination,   elaboration,   creation,   captioning,   adaptation).
          Knowledge is embodied in stories and cognition is carried out  in
          terms of stories that are already known.
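
          The five-factor indexing scheme can be sketched as a simple
          retrieval loop. The stories and the matching score below are
          invented illustrations, not Schank's own examples:

```python
# Sketch of Schank's five-factor indexing: stories in memory are indexed
# by theme, goal, plan, result and lesson; understanding a new story
# means finding the best-matching old one.
INDEX_KEYS = ("theme", "goal", "plan", "result", "lesson")

memory = [
    {"title": "The tortoise and the hare",
     "theme": "rivalry", "goal": "win race", "plan": "steady effort",
     "result": "victory", "lesson": "persistence beats talent"},
    {"title": "Icarus",
     "theme": "ambition", "goal": "escape", "plan": "fly",
     "result": "failure", "lesson": "heed limits"},
]

def remind(new_story):
    # score each remembered story by how many index factors it shares
    def score(old):
        return sum(new_story.get(k) == old[k] for k in INDEX_KEYS)
    return max(memory, key=score)

told = {"theme": "rivalry", "goal": "win race",
        "plan": "cram at the last minute", "result": "defeat",
        "lesson": "persistence beats talent"}
print(remind(told)["title"])   # The tortoise and the hare
```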

Schank Roger: THE COGNITIVE COMPUTER (Addison-Wesley, 1984)

          An accessible introduction to Schank's theory of natural language
          understanding,  conceptual  dependency,  scripts, and some of the
          early programs of his school (MARGIE, SAM, POLITICS, FRUMP,  IPP,
          BORIS, CYRUS).

Schrodinger Erwin: WHAT IS LIFE (Cambridge Univ Press, 1944)

          This is the book that popularized the idea that biological organ-
          ization is created and maintained at the expense of thermodynamic
          order, thereby promoting the development of nonequilibrium  ther-
          modynamics.

          Life displays two fundamental processes: creating order from order
          (the progeny has the same order as the parent) and creating order
          from disorder (as every living system  does  at  every  metabolic
          step).   Living  systems  seem  to defy the second law of thermo-
          dynamics. In reality they live in a world of   energy  flux  that
          does  not conform to the closed-world assumptions of thermodynam-
          ics. An organism stays alive in its  highly  organized  state  by
          absorbing  energy  from the environment and processing it to pro-
          duce a lower entropy state within itself.

Schmalhausen Ivan: FACTORS OF EVOLUTION (Blakiston, 1949)

          By reviewing a wealth of biological data,  Schmalhausen  advanced
          the  theory that evolution is a process of hierarchical construc-
          tion: differentiation yields increasing specialization and diver-
          sification  of  parts,  while  integration yields the creation of
          more stable and integrated forms of  organization  (specifically,
          the  formation  of new aggregates in which the structure and func-
          tion of parts are subordinated to and regulated by the  structure
          and function of the whole). For this to happen, genetic variation
          cannot be completely random but must be regulated  by  a  genetic
          system  of  genetic systems (analogous to Waddington's "canaliza-
          tion" process).

          The forces of natural selection can be divided  into  mobile  and
          stabilizing. The former reshapes the  individual  to  continuously
          cope with the environment. The latter preserves the structure and
          function  of  organization  by producing new forms of ontogenesis
          which are less vulnerable to the action of  the  environment.  At
          the  beginning, life was at the mercy of accidental environmental
          changes. Over evolutionary time, organisms became more  and  more
          independent  of their environment, controlling their own function
          and structure (emergence of internal regulating mechanisms  coun-
          teracting  the  action  of  the environment).  Finally, organisms
          became able to determine their relationship with the environment.
          This  progression is due to the growing importance of the role of
          stabilizing selection.

Schwartz Eric: COMPUTATIONAL NEUROSCIENCE (MIT Press, 1990)

A collection of papers on the subject.

           Schwarz Norbert: COGNITION AND COMMUNICATION (Lawrence  Erlbaum,
          1996)

          The authors review applications of Paul Grice's "logic of conver-
          sation" to a variety of cases.

Scott Alwyn: STAIRWAY TO THE MIND (Springer, 1995)

          A hierarchical view  of  mental  organization  (a  "stairway"  of
          steps,  each  one emerging from the previous one) is used to pro-
          pose a new theory of consciousness. Underlying the account  is  a
          fundamental  reliance on nonlinear dynamics to explain the nature
          of biological organisms and the brain.  In his model  materialism
          and dualism can coexist.

Searle John: SPEECH ACTS (Cambridge Univ Press, 1969)

          A theory of the conditions that preside over the genesis of speech
          acts.  Searle classifies such acts in several categories, includ-
          ing "directive acts", "assertive  acts",  "permissive  acts"  and
          "prohibitive  acts".   Only  assertive  acts  can be treated with
          classical logic.

          Illocutionary acts, acts performed by a speaker when she utters a
          sentence  with  certain  intentions (e.g., statements, questions,
          commands, promises), are the minimal units  of  human  communica-
          tion.   An  illocutionary  act consists of an illocutionary force
          (e.g., statement, question, command, promise) and a propositional
          content.   There  is no major difference between locutionary acts
          and illocutionary acts.

          The meaning of some sentences rests in the set of social  conven-
          tions  (analogous to Grice's conversational maxims) that made the
          speaker choose those sentences to achieve his or her goal.

          The illocutionary force  of  sentences  is  what  determines  the
          semantics of language.

Searle John: EXPRESSION AND MEANING (Cambridge Univ Press, 1979)

          Searle attempts to explain intentionality  within his  theory  of
          speech  acts.  The causal relationship between a mental state and
          the world is due to a speech act or a perception.  Language  does
          not  have true intentionality, it inherits it from the underlying
          mental states.  Intentionality is a biological property.

Searle John: INTENTIONALITY (Cambridge University Press, 1983)

          Searle grounds the notion of meaning of a speech act in a general
          theory of the mind and action.

Searle John: MIND, BRAINS AND SCIENCE (BBC Publications, 1984)

          Searle is an outspoken critic of the functionalist  view  of  the
          mind.   After  the invention of the computer a number of thinkers
          from various disciplines (Herbert Simon, Alan Newell, Noam  Chom-
          sky,  Hilary  Putnam, Jerry Fodor) have adopted a cognitive model
          based on the relationship between the hardware and  the  software
          of  a  computer. Thinking is reduced to the execution of an algo-
          rithm in the brain.

          Searle objects with the "Chinese room" thought experiment (origi-
          nally published in 1980): a conscious person who, without knowing
          Chinese, was told all the rules  on  how  to  manipulate  Chinese
          characters  in  order  to  put together sentences intelligible to
          Chinese-speaking people would still not "know" Chinese, no matter
          how  well that person performed.  Programs are syntactical, minds
          have a semantics, syntax is not the same as semantics.

          Searle's Chinese room argument can be summarized as follows: com-
          puter programs are syntactical; minds have a semantics; syntax is
          not by itself sufficient for semantics.

Searle John: FOUNDATIONS OF ILLOCUTIONARY LOGIC (Cambridge Univ Press, 1985)

          A formal presentation of the logical foundations of speech acts.

          Illocutionary logic is the formal theory  of  illocutionary  acts
          that  attempts to formalize the logical properties of illocution-
          ary forces. Illocutionary force is defined by  seven  properties:
          illocutionary point (what the point is of performing that type of
          illocutionary act), degree of strength  of  the  point,  mode  of
          achievement  (set  of  conditions under which the point has to be
          achieved), propositional content conditions,  preparatory  condi-
          tions,  sincerity conditions and degree of strength of sincerity
          conditions. Two illocutionary forces are identical if  they  have
          identical properties.

          Notions such as illocutionary commitment (by performing an  illo-
          cutionary  act the speaker commits herself to another illocution)
          and illocutionary compatibility are introduced.

          Five primitive illocutionary forces  are  recognized  (assertive,
          commissive,  directive,  declarative,  expressive)  and  a set of
          operations on the properties of a force are defined to obtain all
          other illocutionary forces from the primitive ones: adding condi-
          tions (whether propositional content, preparatory or  sincerity),
          increasing or decreasing the degrees of strength, restricting the
          mode of achievement.

          An axiomatical propositional illocutionary logic and its  general
          laws (of transitivity, identity, foundation) are defined.

Searle John: THE REDISCOVERY OF THE MIND (MIT Press, 1992)

          Searle's critique of theories of the mind is based  on  the  lack
          of  a  good  theory  of  consciousness. There is no mind without
          consciousness, and there can be no theory of the mind  without  a
          theory  of  consciousness.  All paradoxes of functionalist models
          arise from having neglected consciousness.

          Conscious mental states and processes are fundamentally different
          from  anything else in nature because they are "subjective". They
          are not equally accessible  to  all  observers.  They  cannot  be
          reduced  to  more  elementary  units.   Searle  believes that the
          objective properties of the brain cause the subjective ones, i.e.
          that  consciousness is a biological phenomenon, though conscious-
          ness can't be reduced to physical states in the brain.

          This is not "property dualism" because Searle  rejects  the  idea
          that  the universe can be partitioned into physical and mental
          properties: things such as "ungrammatical sentences, my ability to
          ski,  the  government and points scored in football games" cannot
          be easily categorized as mental  or  physical.   The  traditional
          mental vs physical dichotomy is pointless.

          Brain processes cause consciousness but consciousness is itself a
          feature of the brain (non-event causation).

          Searle believes that human consciousness arises from  the  matter
          of  the brain, but does not exclude that consciousness could also
          arise from other types of matter. Consciousness is not accessible
          to  empirical  tests, therefore we will never know what has cons-
          ciousness and what does not.

          Searle's critique of functionalism is still the  same:  physical
          processes do not perform computations; they can merely be "inter-
          preted" as computations. The moment they are  "interpreted"  they
          are  no  longer physical but become mental. This goes back to his
          famous objection that whatever a computer is computing, the  com-
          puter
          does  not "know" that it is computing it: only a mind can look at
          it and tell what it is.

          In any event, Chomsky's grammatical rules, Fodor's mentalese  and
          so forth are supposed to be not accessible to consciousness, i.e.
          unconscious mental states, which is a contradiction in terms,  as
          only consciousness can turn a physical state into a mental state.
          The essence of the mind is consciousness,  all  mental  phenomena
          are actually or potentially conscious.

          Searle's main position can be summarized as: consciousness  is  a
          physical property of the brain and it is irreducible to any other
          physical property.

Sellars Wilfrid: SCIENCE, PERCEPTION AND REALITY (Humanities Press, 1963)

          Intentional states are  physical  states.  Physical  states  have
          semantic  properties, similar to those owned by linguistic terms:
          an individual thinks P if there is a  state  in  his  brain  that
          carries  the semantic content P.  There is an analogy between the
          functional roles that the physical states of the  brain  play  in
          the  behavior  of  the  individual and the inferential roles that
          corresponding linguistic terms  play  in  linguistic  inferences.
          The  semantics  of  intentionality  is  related to the language's
          semantics.

          Among nature's ultimate constituents must be  the  senses,  which
          account  for  the  quality  of things. Each property of an object
          must be present in its constituents, and that includes the sensa-
          tions that the object creates in us.

Selz Otto: ZUR PSYCHOLOGIE DES PRODUKTIVEN DENKENS UND DES IRRTUMS (###, 1922)

          To  solve  a  problem  means  to  recognize  that  the  situation
          represented  by the problem is described by a schema and fill the
          gaps in the  schema.   Given  a  problem,  the  cognitive  system
          searches the long-term memory for a schema that can represent it.
          Given the right schema, information in excess contains the  solu-
          tion.

          A schema is a network of concepts that organize past  experience.
          Representation  of  present  experience  is  a partially complete
          schema. By comparing the two representations one can infer  some-
          thing relative to the present situation.

          Thanks to the schema's anticipatory nature, to solve a problem is
          equivalent  to  comprehend it, and comprehending ultimately means
          reducing the current situation to a past situation.
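
          Selz's schema completion can be sketched as retrieval plus gap-
          filling. The slot names and the example schemas are invented
          illustrations:

```python
# Sketch of Selz's idea: a problem is a partially filled schema; to
# solve it, retrieve a matching schema from long-term memory and read
# the solution off the filled gap.
schemas = [
    {"relation": "capital-of", "country": "France", "city": "Paris"},
    {"relation": "capital-of", "country": "Italy", "city": "Rome"},
]

def solve(problem):
    # the problem's known slots select a schema; its unknown slot (None)
    # is the gap whose filler is the solution
    gap = next(k for k, v in problem.items() if v is None)
    for schema in schemas:
        if all(schema.get(k) == v for k, v in problem.items() if v is not None):
            return schema[gap]
    return None

print(solve({"relation": "capital-of", "country": "Italy", "city": None}))
# Rome
```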

Shackle George: DECISION, ORDER AND TIME (Cambridge Univ Press, 1961)

          A theory of possibility, as an improvement over probabilities.

Shafer Glenn: A MATHEMATICAL THEORY OF EVIDENCE (Princeton Univ Press, 1976)

          This book summarizes Dempster-Shafer's theory  of  evidence  that
          refines  Bayes'  theory  of  probabilities.  The theory of belief
          functions relies on two principles: the  principle  of  inferring
          degrees  of belief for one question from subjective probabilities
          for a related question; and Dempster's rule  on  how  to  combine
          degrees of belief which are based on independent evidence.

          In 1968 Arthur Dempster and Glenn Shafer  ("A  generalization  of
          Bayesian  inference")  extended Bayes' theory of probabilities by
          introducing a "belief function" which operates on all subsets  of
          events (not just the single events).  In the throw of a die,  the
          elementary events are only six, but the number of subsets is  2^6
          = 64 (all the combinations of faces, from  the  empty  set  up  to
          the full set of six).  The sum of the probabilities  assigned  to
          all subsets is one, but the sum of the probabilities of the  sin-
          gle events is generally less than one.

          Therefore, Dempster-Shafer's theory allows one to assign a proba-
          bility to a group of events, even if the probability of each sin-
          gle event is not  known.   Indirectly,  Dempster-Shafer's  theory
          also  allows  one to represent "ignorance", as the state in which
          the belief of an event is not known (while the belief of a set it
          belongs  to is known).  Dempster-Shafer's theory does not require
          a complete probabilistic model of the domain.
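
          The subset machinery can be sketched directly. This is a minimal
          illustration of belief and plausibility over a die's frame of
          discernment; Dempster's combination rule is omitted:

```python
# Sketch of a Dempster-Shafer belief function, with subsets of the
# frame of discernment represented as frozensets. Mass can be assigned
# to a set of events without committing to its members - that is how
# ignorance is expressed.
def belief(mass, A):
    # Bel(A) = sum of the masses of all subsets of A
    return sum(m for S, m in mass.items() if S <= A)

def plausibility(mass, A):
    # Pl(A) = sum of the masses of all subsets intersecting A
    return sum(m for S, m in mass.items() if S & A)

# 0.6 committed to {1,2} jointly, 0.4 left on the whole frame (ignorance)
frame = frozenset({1, 2, 3, 4, 5, 6})
mass = {frozenset({1, 2}): 0.6, frame: 0.4}

print(belief(mass, frozenset({1})))        # 0.0 - no mass on {1} itself
print(belief(mass, frozenset({1, 2})))     # 0.6
print(plausibility(mass, frozenset({1})))  # 1.0 - nothing rules 1 out
```

          Note how the mass on {1, 2} contributes to Bel({1, 2}) but not to
          Bel({1}): that gap between belief and plausibility is precisely
          the represented ignorance.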

          An advantage of evidence over probabilities is  its  ability
          to narrow the hypothesis set with the accumulation of evidence.

          Shafer, in accordance with Tversky's experiments, thinks that the
          way we assign probabilities to an event is a mental experiment to
          build an imaginary situation and the result we obtain depends  on
          the  process  of  construction.   People do not have preferences,
          people build them.

Shafer Glenn & Pearl Judea: READINGS IN UNCERTAIN REASONING (Morgan Kaufmann, 1990)

          Collects seminal papers by Shafer, Pearl,  Leonard  Savage,  Amos
          Tversky, Richard Cox and David Touretzky on the  view  that  pro-
          babilities are degrees of belief.

          A section is devoted to Decision Analysis: a historical  overview
          by  Shafer, an introduction by Warner North, an article on influ-
          ence diagrams by Ross Shachter.

          A section is devoted to Artificial  Intelligence  techniques  for
          reasoning under uncertainty, with articles by Paul Cohen and Rod-
          ney Brooks, and, of course, articles on MYCIN.

          A section deals with belief functions (Dempster-Shafer's theory).
          Only one article touches on fuzzy logic.

Shannon Claude & Weaver Warren: THE MATHEMATICAL THEORY OF COMMUNICATION (Univ of Illinois Press, 1949)

          Shannon freed Boltzmann's definition of entropy from its  thermo-
          dynamic context and applied it to information theory.

          The quality of a message as it is transmitted from the source  to
          the  destination  is  a  function  of channel capacity and noise.
          Noise is a random process that can be described in terms of  sta-
          tistical probabilities.

          Entropy is the statistical state of knowledge about  a  question:
          the  entropy of a question is related to the probability assigned
          to all the possible answers to that question. Information is  the
          difference between two entropies.
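
          The last point can be illustrated numerically, using the standard
          definition of Shannon entropy over the probabilities of the
          possible answers:

```python
# Sketch: entropy of a question as a function of the probabilities of
# its possible answers, and information as the difference between two
# entropies (before and after a message is received).
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

before = [0.25, 0.25, 0.25, 0.25]   # four equally likely answers
after  = [1.0, 0.0, 0.0, 0.0]       # the message settles the question
print(entropy(before))                   # 2.0 bits
print(entropy(before) - entropy(after))  # information gained: 2.0 bits
```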

Shapiro Stuart Charles: ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE (John Wiley, 1992)

          The new edition of the most comprehensive book  on  the  subject.
          Each  section  provides comprehensive, detailed information on an
          artificial intelligence topic.

Shastri Lokendra: SEMANTIC NETWORKS, AN EVIDENTIAL FORMULATION AND ITS CONNECTIONIST REALIZATION (Morgan Kaufmann, 1988)

          Shastri's connectionist semantic memory  relates  concepts  of  a
          semantic network to neurons of a neural network.

Shepard Roger & Cooper Lynn: MENTAL IMAGES (MIT Press, 1986)

A collection of articles on cognitive models of vision.

          Shepard  thinks  that  species  survived  natural  selection   by
          developing innate structures to operate in their environment.

Shoham Yoav: REASONING ABOUT CHANGE (MIT Press, 1988)

Mainly a textbook on temporal logics.

          Shoham's preference logic, based on conditional logic, prescribes
          how  to  select  the best interpretation from a partially ordered
          set of interpretations according to  a  criterion  of  minimality
          (minimize  changes that may occur). Preference logic's expressive
          power is higher than that of any other non-monotonic logic.
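
          The minimality criterion can be sketched as follows. The example
          and all names are invented; this illustrates only the selection
          criterion, not Shoham's logic itself:

```python
# Sketch of selection by minimality: among the interpretations
# consistent with a new observation, prefer the one that minimizes
# change with respect to the previous state.
previous = {"alive": True, "loaded": True}

candidates = [
    {"alive": True,  "loaded": True},    # nothing changed
    {"alive": False, "loaded": True},    # one change
    {"alive": False, "loaded": False},   # two changes
]

def changes(interpretation):
    # number of propositions whose truth value changed
    return sum(interpretation[k] != previous[k] for k in previous)

# new observation rules out the first candidate: the victim is not alive
consistent = [i for i in candidates if i["alive"] is False]
best = min(consistent, key=changes)
print(best)   # {'alive': False, 'loaded': True} - the minimal change
```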

Simon Herbert Alexander: MODELS OF THOUGHT (Yale University Press, 1979)

          Articles from the beginning of artificial intelligence, including
          Edward Feigenbaum's Sixties work.

Simon Herbert Alexander: THE SCIENCES OF THE ARTIFICIAL (MIT Press, 1969)

          Both the computer and the mind belong to the category of physical
          symbol  systems. These systems process symbols to achieve a goal.
          Simon states the principle that a physical symbol system has  the
          necessary  and sufficient means for intelligent behavior.  A phy-
          sical symbol system  is  quite  simple:  the  complexity  of  its
          behavior  is  due  to the complexity of the environment it has to
          cope with.  Adaptation to the environment is the very reason  and
          purpose of their existence.

          No complex system can survive unless it is organized as a hierar-
          chy of subsystems. The entire universe must be hierarchical, oth-
          erwise it would not exist.

Simpson Patrick: ARTIFICIAL NEURAL SYSTEMS (Pergamon, 1990)

          A short, but nonetheless very technical, introduction  to  neural
          networks that covers all the main learning algorithms.

Sirag Saul-Paul: HYPERSPACE CRYSTALLOGRAPHY (World Science, 1996)

          Saul-Paul Sirag's hyperspace contains  many  physical  dimensions
          and  many  mental  dimensions (time is one of the dimensions they
          have in common).

Sloman Aaron: THE COMPUTER REVOLUTION IN PHILOSOPHY (Harvester Press, 1978)

          Any agent that is intelligent but limited, and that must act in a
          complex environment where unlimited resources would be needed  to
          make decisions, must be endowed with  mechanisms  that  cause
          emotions.  Emotions are therefore the result of
          constraints by the environment on the action of  the  intelligent
          being.

          Sloman explores the relation between emotional states and  cogni-
          tive states.

Smith Edward E.: CATEGORIES AND CONCEPTS (Harvard University Press, 1981)

          In Douglas Medin's "A two-stage model of  category  construction"
          the mind builds categories around a primary feature: a simple and
          efficient criterion that divides the universe into  objects  that
          satisfy it and objects that do not.
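
          The primary-feature stage can be sketched as a one-criterion
          split. The objects and the feature are invented illustrations:

```python
# Sketch of category construction by a primary feature: a single
# simple criterion splits the universe into objects that satisfy it
# and objects that do not.
universe = [
    {"name": "sparrow", "flies": True},
    {"name": "penguin", "flies": False},
    {"name": "bat",     "flies": True},
    {"name": "whale",   "flies": False},
]

def split(objects, primary_feature):
    satisfy = [o["name"] for o in objects if o[primary_feature]]
    rest    = [o["name"] for o in objects if not o[primary_feature]]
    return satisfy, rest

print(split(universe, "flies"))
# (['sparrow', 'bat'], ['penguin', 'whale'])
```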

Solso Robert & Massaro Dominic: THE SCIENCE OF THE MIND (Oxford University Press, 1995)

          A collection of essays from leading psychologists, including Lak-
          off, Sternberg, Sperry, Kosslyn.

Sombe Lea: REASONING UNDER INCOMPLETE INFORMATION (Wiley, 1990)

          A number of different logics for common sense reasoning are  sur-
          veyed:  nonmonotonic  logics,  probabilistic  logic, fuzzy logic,
          analogical reasoning, revision theory.

Sowa John: CONCEPTUAL STRUCTURES (Addison-Wesley, 1984)

          Sowa surveys a number of philosophical and psychological theories
          (in  particular,  Selz's  schemata)  to justify his idea that the
          process of perception generates a structure called a  "conceptual
          graph",  describing the way percepts are assembled together. Con-
          ceptual relations describe the role that each percept plays.

          Conceptual graphs, based on Peirce's existential graphs (a  graph
          notation  for  logic),  are  a  system  of logic for representing
          natural language semantics.

          Conceptual graphs are finite, connected, bipartite graphs (bipar-
          tite because they contain both concepts and conceptual relations,
          boxes and circles).  Some concepts (concrete concepts) are  asso-
          ciated  with  percepts  for experiencing the world and with motor
          mechanisms for acting upon it. Some concepts are associated  with
          the items of language. A concept has both a type and a referent.

          A hierarchy of concept types defines  the  relationships  between
          concepts  at  different  levels of generality. The type hierarchy
          includes both natural types (e.g., "gold") and role types  (e.g.,
          "precious  stone"),  forms  a  lattice  and represents intensions
          (senses).

          Formation rules ("copy", "restrict", "join" and "simplify")  con-
          stitute  a generative grammar for conceptual structures just like
          production rules constitute a generative  grammar  for  syntactic
          structures.  All deductions on conceptual graphs involve a combi-
          nation of them.
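          A minimal sketch of how such graphs and two of the formation
          rules ("restrict" and "join") might be encoded. The type
          hierarchy, class names and relation labels below are this
          example's assumptions, not Sowa's own notation:

```python
# Illustrative sketch of conceptual graphs; all names here are
# assumptions of this example, not Sowa's notation.

# A toy type hierarchy: subtype -> immediate supertype.
SUPERTYPE = {"Cat": "Animal", "Animal": "Entity"}

def is_subtype(t, u):
    """True if type t equals u or lies below it in the hierarchy."""
    while t is not None:
        if t == u:
            return True
        t = SUPERTYPE.get(t)
    return False

class Concept:
    """A box node, e.g. [Cat: Felix] (a type plus an optional referent)."""
    def __init__(self, ctype, referent=None):
        self.ctype, self.referent = ctype, referent

class Graph:
    """Bipartite graph: relation label -> tuple of Concept arguments."""
    def __init__(self, concepts, relations):
        self.concepts, self.relations = concepts, relations

def restrict(concept, subtype):
    """Formation rule: specialize a concept's type to one of its subtypes."""
    assert is_subtype(subtype, concept.ctype)
    concept.ctype = subtype

def join(g1, g2, c1, c2):
    """Formation rule: merge g2 into g1 on two identical concepts."""
    assert c1.ctype == c2.ctype and c1.referent == c2.referent
    relations = dict(g1.relations)
    for label, args in g2.relations.items():
        relations[label] = tuple(c1 if a is c2 else a for a in args)
    concepts = g1.concepts + [c for c in g2.concepts if c is not c2]
    return Graph(concepts, relations)

# [Sit] -> (AGNT) -> [Animal: Felix], then restricted to [Cat: Felix]
felix, sit = Concept("Animal", "Felix"), Concept("Sit")
g = Graph([felix, sit], {"AGNT": (sit, felix)})
restrict(felix, "Cat")
```

          Repeated joins build larger connected graphs out of smaller
          ones, which is how deductions on conceptual graphs proceed.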

          Sowa defines generalization and specialization,  abstraction  and
          definition  (through  lambda abstraction), aggregation (for plur-
          als) and individuation.

          Conceptual graphs can be translated to predicate calculus  formu-
          las, except those that have context-dependent features.

          Schemata incorporate domain-specific knowledge.  A  concept  type
          may be linked to any number of schemata, each schema representing
          a perspective on one way its concept type may be used. A  concept
          type may also be linked to a prototype.

          A discourse context is represented by a concept with one or  more
          conceptual  graphs  nested inside the referent field. There is an
          isomorphism between Peirce's contexts, Kamp's contexts and Sowa's
          contexts,  Kamp's  rules for resolving discourse referents can be
          used in conceptual graphs as well.

          Conceptual graphs can distinguish extensional models of the world
          from  intensional  propositions  on the world. The interpretation
          function relates the graphs of the formulas to the graphs of  the
          models.

          Tarski's model theory can be adapted to graphical representations
          by seeing each node as an object and each arc as a relation.

Sowa John: PRINCIPLES OF SEMANTIC NETWORKS (Morgan Kaufmann, 1991)

          A collection of six articles on semantic networks.  William Woods
          discusses subsumption and taxonomy.  Lenhart Schubert sees seman-
          tic networks as a notational variant of  logic.   Stuart  Shapiro
          believes  that semantic networks go beyond logic in that they can
          also deal with  "subconscious"  reasoning  through  the  implicit
          links between nodes.  Brachman presents a successor to the KL-ONE
          language.

Sperber Dan & Wilson Deirdre: RELEVANCE, COMMUNICATION AND COGNITION (Blackwell, 1995)

          The second edition of the classic 1986 text.

          Relevance constrains a discourse's coherence and  enables  its
          understanding.

          Relevance is a relation between a proposition and a set  of  con-
          textual  assumptions:  a  proposition is relevant in a context if
          and only if it has at least one contextual  implication  in  that
          context.   The contextual implications of a proposition in a con-
          text are all the propositions that can be deduced from the  union
          of the proposition with the context.
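          A toy encoding of this definition, using forward chaining over
          propositional rules as a stand-in for deduction. The rule
          format and the example atoms are assumptions of this sketch,
          not Sperber and Wilson's formalism:

```python
# Stand-in "deduction": forward chaining over rules whose antecedents
# (a set of atoms) license a conclusion atom. This rule format is an
# assumption of this sketch, not Sperber and Wilson's formalism.

def closure(facts, rules):
    """All atoms derivable from the facts by applying the rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            if antecedents <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def contextual_implications(prop, context, rules):
    """Deducible from the proposition joined with the context,
    but from neither the context nor the proposition alone."""
    together = closure(context | {prop}, rules)
    return together - closure(context, rules) - closure({prop}, rules)

def is_relevant(prop, context, rules):
    """Relevant iff there is at least one contextual implication."""
    return bool(contextual_implications(prop, context, rules))

rules = [(frozenset({"rain", "no_umbrella"}), "wet")]
context = {"rain"}
# "no_umbrella" combines with the context to yield "wet": relevant.
# "sunny" licenses no new deduction: irrelevant in this context.
```

          Note that this models only the first of the three ways
          relevance can arise discussed below (new implications), not
          contradiction or strengthening of assumptions.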

          Relevance is achieved when  the  addition  of  a  sentence  to  a
          discourse  modifies the context in a manner which is not trivial,
          i.e. which is not only the sum of the context plus the  new  sen-
          tence plus all its implications. A universal goal of communica-
          tion is that the hearer aims to acquire relevant  information;
          another is that the speaker tries to  make  his  utterance  as
          relevant as possible.  Understanding  an  utterance
          consists  then  in  finding an interpretation which is consistent
          with the principle of relevance. The principle of relevance holds
          that any act of ostensive communication also includes a guarantee
          of its own optimal relevance. This principle is proven to subsume
          Grice's maxims.

          Relevance can arise in three ways: interaction  with  assumptions
          which  yields  new  assumptions,  contradiction  of an assumption
          which removes it, additional evidence  for  an  assumption  which
          strengthens the confidence in it.

          Implicatures are  either  contextual  assumptions  or  contextual
          implications  that the hearer must grasp to recognize the speaker
          as observing the principle of relevance. Utterance  comprehension
          is reduced to a process of hypothesis formation and confirmation:
          the best hypothesis about the speaker's intentions  and  expecta-
          tions is the one that best satisfies the principle of relevance.

          The nondemonstrative inference processes involved in the  deriva-
          tion of implicatures consist in 1. detecting the implicated prem-
          ises (through a nondeductive process of hypothesis formation  and
          confirmation),  and  2.  deducing  the  implicated conclusions
          from the implicated premises and the proposition expressed by the
          utterance.

Stalnaker Robert: INQUIRY (MIT Press, 1984)

          A study on the process of acquiring and  changing  beliefs  about
          the world.

          Stalnaker believes that possible worlds are not concrete  worlds,
          but  simply  ways the world might be. A proposition is a function
          from possible worlds to truth-values. Each world provides a truth
          value for a proposition.
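          A brief sketch of that definition: a proposition as a function
          from worlds to truth values, equivalently the set of worlds at
          which it is true. The toy worlds and predicates below are this
          example's assumptions:

```python
# Toy worlds represented as dictionaries of atomic facts; these are
# this sketch's assumptions, used only to make the definition concrete.
worlds = [
    {"raining": True,  "cold": True},
    {"raining": True,  "cold": False},
    {"raining": False, "cold": False},
]

# A proposition is a function from worlds to truth values...
raining = lambda w: w["raining"]
cold = lambda w: w["cold"]

# ...equivalently, the set of worlds at which it is true.
def truth_set(p):
    return {i for i, w in enumerate(worlds) if p(w)}

# Entailment then falls out: p entails q iff q holds wherever p does.
def entails(p, q):
    return all(q(w) for w in worlds if p(w))
```

          On this picture, believing a proposition amounts to ruling out
          the worlds at which it is false.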

Stefik Mark: AN INTRODUCTION TO KNOWLEDGE SYSTEMS (Morgan Kaufmann, 1995)

          Monumental, but not particularly innovative.

Sterelny Kim: THE REPRESENTATIONAL THEORY OF MIND (Basil Blackwell, 1991)

          The book defends  the  functionalist  theory  of  the  mind,  and
          specifically Fodor's "language of thought" hypothesis.  Marr's
          theory of vision and  Fodor's  modular  model  of  the  mind  are
          explained. Eliminativism and connectionism are also examined.

Sternberg Robert J.: HANDBOOK OF HUMAN INTELLIGENCE (Cambridge, 1982)

          A colossal reference book compiled by experts in various  psycho-
          logical  and  biological  fields.  Hundreds  of cognitive models,
          experiments and studies are surveyed.

Sternberg Robert J.: WISDOM (Cambridge University Press, 1990)

          A collection of psychological essays on the subject of wisdom.

Sternberg Robert J.: METAPHORS OF MIND (Cambridge Univ Press, 1990)

          A survey of psychological theories of intelligence,  from  Galton
          and Binet to Spearman and Thorndike. Cognitive science is briefly
          mentioned, as well as the biological perspective (Luria,  Sperry,
          Gazzaniga). A chapter is devoted to Piaget's genetic epistemology
          and its successors.

Stich Stephen: FROM FOLK PSYCHOLOGY TO COGNITIVE SCIENCE (MIT Press, 1983)

          Stich's theory of commonsense reasoning is based on a purely syn-
          tactic  approach.   But, unlike Fodor, Stich does not require the
          objects upon which these syntactic operations are performed to be
          representations (endowed with content).

          Stich assumes  that  cognitive  states  correspond  to  syntactic
          states  in such a way that causal relationships between syntactic
          states (or between syntactic  states  and  stimuli  and  actions)
          correspond  to syntactic relationships of corresponding syntactic
          objects. Stich's "autonomy  principle"  states  that  differences
          between  organisms that cannot be reduced to differences in their
          internal states are not relevant for a psychological theory.  The
          only  environmental factors that should be taken into account are
          those that cause differences in the internal states.

Stich Stephen: THE FRAGMENTATION OF REASON (MIT Press, 19##)

          Further thoughts on Stich's theory of commonsense reasoning.

Stich Stephen: DECONSTRUCTING THE MIND (Oxford Univ Press, 1996)

          An attack against eliminativist philosophy and simulation  theory
          (the main alternative to folk psychology).

Stillings Neil: COGNITIVE SCIENCE (MIT Press, 1995)

          The second edition of the  comprehensive  textbook  on  cognitive
          theories of the mind adds new sections on connectionist models.