OLI Psychology is not your typical course. Our goal is for you to work through the course materials online on your own time and in the way that is most efficient given your prior knowledge.
While you will have more flexibility than you do in a traditional course, you will also have more responsibility for your own learning. You will need to:
Each unit in this course has features designed to support you as an independent learner, including:
Explanatory content: This is the informational “meat” of every unit. It consists of short passages of text with information, images, explanations, and short videos.
Learn By Doing activities: Learn By Doing activities give you the chance to practice the concept that you are learning, with hints and feedback to guide you if you struggle.
Did I Get This? activities: Did I Get This? activities are your chance to do a quick "self-check" and assess your own understanding of the material before doing a graded activity.
When starting an online course, most people neglect planning, opting instead to jump in and begin working. While this might seem efficient (after all, who wants to spend time planning when they could be doing?), it can ultimately be inefficient. In fact, one of the characteristics that distinguishes experts from novices is that experts spend far more time planning their approach to a task and less time actually completing it, while novices do the reverse, rushing through the planning stage and spending far more time overall.
In this course, we want to help you work as efficiently and effectively as possible, given what you already know. Some of you have already taken a psychology course, and are already familiar with many of the concepts. You may not need to work through all of the activities in the course; just enough to make sure that you've "got it." For others, this is your first exposure to psychology, and you will want to do more of the activities, since you are learning these concepts for the first time.
Improving your planning skills as you work through the material in the course will help you to become a more strategic and thoughtful learner and will enable you to more effectively plan your approach to assignments, exams and projects in other courses.
This idea of planning your approach to the course before you start is called metacognition.
Metacognition involves five distinct skills:
These five skills are applied over and over again in a cycle—within the same course as well as from one course to another:
You get an assignment and ask yourself: “What exactly does this assignment involve and what have I learned in this course that is relevant to it?”
You are exercising metacognitive skills (1) and (2) by assessing the task and evaluating your strengths and weaknesses in relation to it.
If you think about what steps you need to take to complete the assignment and determine when it is reasonable to begin, you are exercising skill (3) by planning.
If you start in on your plan and realize that you are working more slowly than you anticipated, you are putting skill (4) to work by applying a strategy and monitoring your performance.
Finally, if you reflect on your performance in relation to your timeframe for the task, and discover an equally effective but more efficient way to work, you are engaged in skill (5): reflecting and adjusting your approach as needed.
Metacognition is not rocket science. In some respects, it is fairly ordinary and intuitive. Yet you’d be surprised how often people lack strong metacognitive skills, and you’d be amazed by how much weak metacognitive skills can undermine performance.
Now take the opportunity to practice the concepts you've been learning by doing these two Learn By Doing activities. Read each of the scenarios below and identify which metacognitive skill the student is struggling with. If you need help, remember that you can ask for a hint.
You've now read through the explanatory content in this unit, and you've had a chance to practice the concepts. Take a moment to reflect on your understanding. Do you feel like you are "getting it"? Use these next two activities to find out.
Strong metacognitive skills are essential for independent learning, so use the experience of monitoring your own learning in OLI Psychology as an opportunity to hone these skills for other classes and tasks.
This Introduction to Psychology course was developed as part of the Community College Open Learning Initiative. Using an open textbook from Flatworld Knowledge as a foundation, Carnegie Mellon University's Open Learning Initiative has built an online learning environment designed to enact instruction for psychology students.
The Open Learning Initiative (OLI) is a grant-funded group at Carnegie Mellon University, offering innovative online courses to anyone who wants to learn or teach. Our aim is to create high-quality courses and contribute original research to improve learning and transform higher education by:
Flatworld Knowledge is a college textbook publishing company on a mission. By using technology and innovative business models to lower costs, Flatworld is increasing access and personalizing learning for college students and faculty worldwide. Text, graphics and video in this course are built on materials by Flatworld Knowledge, made available under a CC-BY-NC-SA license. Interested in a companion text for this course? Flatworld provides access to the original textbook online and makes digital and print copies of the original textbook available at a low cost.
Welcome to the world of psychology. This course will introduce you to some of the most important ideas, people, and research methods from the field of psychology. You probably already know about some people, perhaps Sigmund Freud or B. F. Skinner, who contributed to our understanding of human thought and behavior, and you may have learned about important ideas, such as personality testing or methods of psychotherapy. This course will give you the opportunity to refine and organize the knowledge you bring to the class, and we hope that you will learn about theories, phenomena, and research results that give you new insight into the human condition.
This first module is your opportunity to explore the field of psychology for a while before moving into material that will be assessed and tracked. Let’s start with an obvious question:
The word psychology is based on two words from the Greek language: psyche, which means “life” or, in a more restricted sense, “mind” or “spirit,” and logia, which is the source for the current meaning, “the study of…”
Whatever the origin of the word, over the years, philosophers, scientists, and other interested people have debated about the “proper” subject matter for psychology. Should we focus on actual behavior, which we can observe and even measure, or on the mind, which includes the rich inner experience we all have of the world and of our own thoughts and motives, or on the brain, which is the centerpiece of the physical systems that make thought and behavior possible? And psychology is not just a bunch of fancy theories. Psychology is a vibrant, growing field because psychologists’ ideas and skills are used every day in thousands of ways to solve real-world problems.
To start your introduction to psychology, first survey the range of topics you will be studying in this course. The section on “What do psychologists study?” reviews the scope of topics covered in this course and allows you to see some of the general themes the various units develop. When you are finished with your survey of topics, the section on “What do psychologists do?” gives you some sense of the scope of psychological work and professional fields where psychological training is essential.
Your work in the rest of this module will also introduce you to one of the essential features of this course: Learning by Doing. Research and experience tell us that active involvement in learning is far more effective than mere passive reading or listening. Learning by Doing doesn’t need to be complicated or difficult. It simply requires that you get out of automatic mode and think a bit about the ideas you are encountering. We will get you to participate a bit in your introduction to the online materials and to the field of psychology, so you can Learn by Doing.
You will also encounter another type of activity: Did I Get This? These brief quizzes allow you to determine if you are on track in your understanding of the material. Take the Did I Get This? quiz after each section of the module. If you do well, you might decide that you have mastered the material enough to go on. However, keep in mind that you, not the quiz, should decide if you are ready. The quiz is just there to help you.
Click on one of the general topic boxes below. Choose the topic that seems to capture the general theme of the units. If you don’t choose the best answer, you will be given the opportunity to make another choice after receiving feedback.
Before you leave a particular section of a module, you will usually have the opportunity to check your knowledge. The activities called Did I Get This? are brief quizzes about the material you have just been studying. Use them to monitor your understanding of the material before moving on.
You now know that psychology is a big field and psychologists are interested in a great variety of issues. In order to introduce you to so much material, we will often have to focus on specific issues or problems and on an experiment or theory that addresses that issue or problem. Much of this research is conducted in university laboratories. This may give you the impression that psychologists only work in universities, conducting experiments with undergraduate psychology students. It is true that a lot of research takes place in university labs, but the majority of psychologists work outside of the university setting.
The next Learn by Doing will give you a chance to consider different ways that people with training in psychology use their skills. In the second module of this introductory unit, we will explore the various areas of psychology more systematically, so this is just a first look at the scope of the work psychologists do.
Your task is to categorize the work of each of our seven psychologists as best fitting basic research, mental health, or applied psychology. In the real world, a single individual might work in two or all three of these areas, but, for the sake of our exercise, find the best fit for the description. Answer by clicking on the appropriate box below.
In the next module of this introductory unit you will learn a bit about the history of psychology and some of the major issues that influence psychological thinking.
In this module we review some of the important philosophical questions that psychologists attempt to answer, the evolution of psychology from ancient philosophy and how psychology became a science. You will learn about the early schools (or approaches) of psychological inquiry and some of the important contributors to each of these early schools of psychology. You will also learn how some of these early schools influenced the newer contemporary perspective of psychology.
The approaches that psychologists originally used to assess the issues that interested them have changed dramatically over the history of psychology. Perhaps most importantly, the field has moved steadily from speculation about the mind and behavior toward a more objective and scientific approach as the technology available to study human behavior has improved. There has also been an increasing influx of women into the field. Although most early psychologists were men, now most psychologists, including the presidents of the most important psychological organizations, are women.
Although psychology has changed dramatically over its history, several questions that psychologists address have remained constant and we will discuss them both here and in the units and modules to come:
Directions: Read each scenario and answer the questions about how each situation might be viewed by a psychologist.
Scenario 1: Alex and Julie, his girlfriend, are having a discussion about aggression in men and women. Julie thinks that males are much more aggressive than females because males have more physical fights and get into more trouble with the law than females do. Alex does not agree with Julie and tells her that even though males get into more physical fights, he thinks that females are much more aggressive than males because females engage more in gossip, social exclusion, and spreading of malicious rumors than men do. Is Alex’s or Julie’s thinking about aggression correct?
Scenario 2: Your genetics give you certain physical traits and cognitive capabilities. You may be great at math but may never be an artist. Sometimes in life, we have to accept the traits and abilities that we have been given, even though we may wish we were different.
Scenario 3: Raul is having a discussion with his mother about his father, Tomas, and his brother, Hector. Tomas, the father, is an alcoholic, and Raul expresses to his mother that he is concerned about his brother, Hector, who is also beginning to drink a lot. Raul does not want his brother to turn out like his father. Raul’s mother tells him that there have been several men in the family who are alcoholics, such as his grandfather and two uncles. She says that it runs in the family, that his brother, Hector, can’t help himself, and that he will also be an alcoholic. Raul responds that he thinks they could stop drinking if they wanted to, despite the history of alcoholism in the family. “After all, look at me. I don’t drink and my friends don’t either. I don’t think I will become an alcoholic because I have friends who know how to control themselves."
The earliest psychologists that we know about are the Greek philosophers Plato (428–347 BC) and Aristotle (384–322 BC). These philosophers asked many of the same questions that today’s psychologists ask; for instance, they questioned the distinctions between nature and nurture and between mind and body. For example, Plato argued on the nature side, believing that certain kinds of knowledge are innate or inborn, whereas Aristotle was more on the nurture side, believing that each child is born as an “empty slate” (in Latin a tabula rasa) and that knowledge is primarily acquired through sensory learning and experiences.
European philosophers continued to ask these fundamental questions during the Renaissance. For instance, the French philosopher, René Descartes (1596–1650) influenced the belief that the mind (the mental aspects of life) and body (the physical aspects of life) were separate entities. He argued that the mind controls the body through the pineal gland in the brain (an idea that made some sense at the time but was later proved incorrect). This relationship between the mind and body is known as the mind-body dualism in which the mind is fundamentally different from the mechanical body, so much so that we have free will to choose the behaviors that we engage in. Descartes also believed in the existence of innate natural abilities (nature).
Another European philosopher, Englishman John Locke (1632–1704), is known for his viewpoint of empiricism, the belief that the newborn’s mind is a “blank slate” and that the accumulation of experiences molds the person into who he or she becomes.
The fundamental problem that these philosophers faced was that they had few methods for collecting data and testing their ideas. Most philosophers didn’t conduct any research on these questions, because they didn’t yet know how to do it and they weren’t sure it was even possible to objectively study human experience. Eventually, however, philosophers began to argue for the experimental study of human behavior.
Gradually in the mid-1800s, the scientific field of psychology gained its independence from philosophy when researchers developed laboratories to examine and test human sensations and perceptions using scientific methods. The first two prominent research psychologists were the German psychologist Wilhelm Wundt (1832–1920), who developed the first psychology laboratory in Leipzig, Germany in 1879, and the American psychologist William James (1842–1910), who founded an American psychology laboratory at Harvard University.
[Table: The Early Schools of Psychology, grouped into schools that are no longer active and schools that are still active and have advanced beyond their early ideas]
Wundt’s research in his laboratory in Leipzig focused on the nature of consciousness itself. Wundt and his students believed that it was possible to analyze the basic elements of the mind and to classify our conscious experiences scientifically. This focus developed into the field known as structuralism, a school of psychology whose goal was to identify the basic elements or “structures” of psychological experience. Its goal was to create a “periodic table” of the “elements of sensations,” similar to the periodic table of elements that had recently been created in chemistry.
Structuralists used the method of introspection in an attempt to create a map of the elements of consciousness. Introspection involves asking research participants to describe exactly what they experience as they work on mental tasks, such as viewing colors, reading a page in a book, or performing a math problem. A participant who is reading a book might report, for instance, that he saw some black and colored straight and curved marks on a white background. In other studies the structuralists used newly invented reaction time instruments to systematically assess not only what the participants were thinking but how long it took them to do so. Wundt discovered that it took people longer to report what sound they had just heard than to simply respond that they had heard the sound. These studies marked the first time researchers realized that there is a difference between the sensation of a stimulus and the perception of that stimulus, and the idea of using reaction times to study mental events has now become a mainstay of cognitive psychology.
Perhaps the best known of the structuralists was Edward Bradford Titchener (1867–1927). Titchener was a student of Wundt who came to the United States in the late 1800s and founded a laboratory at Cornell University. In his research using introspection, Titchener and his students claimed to have identified more than 40,000 sensations, including those relating to vision, hearing, and taste.
An important aspect of the structuralist approach was that it was rigorous and scientific. The research marked the beginning of psychology as a science, because it demonstrated that mental events could be quantified. But the structuralists also discovered the limitations of introspection. Even highly trained research participants were often unable to report on their subjective experiences. When the participants were asked to do simple math problems, they could easily do them, but they could not easily answer how they did them. Thus the structuralists were the first to realize the importance of unconscious processes—that many important aspects of human psychology occur outside our conscious awareness and that psychologists cannot expect research participants to be able to accurately report on all of their experiences. Introspection was eventually abandoned because it was not a reliable method for understanding psychological processes.
In contrast to structuralism, which attempted to understand the nature of consciousness, the goal of William James and the other members of the school of functionalism was to understand why animals and humans have developed the particular psychological aspects that they currently possess. For James, one’s thinking was relevant only to one’s behavior. As he put it in his psychology textbook, “My thinking is first and last and always for the sake of my doing.”
James and the other members of the functionalist school were influenced by Charles Darwin’s (1809–1882) theory of natural selection, which proposed that the physical characteristics of animals and humans evolved because they were useful, or functional. The functionalists believed that Darwin’s theory applied to psychological characteristics too. Just as some animals have developed strong muscles to allow them to run fast, the human brain, so functionalists thought, must have adapted to serve a particular function in human experience.
Although functionalism no longer exists as a school of psychology, its basic principles have been absorbed into psychology and continue to influence it in many ways. The work of the functionalists has developed into the field of evolutionary psychology, a contemporary perspective of psychology that applies the Darwinian theory of natural selection to human and animal behavior. You learn more about the perspective of evolutionary psychology in the next section of this module.
Perhaps the school of psychology that is most familiar to the general public is the psychodynamic approach to understanding behavior, which was championed by Sigmund Freud (1856–1939) and his followers. Psychodynamic psychology is an approach to understanding human behavior that focuses on the role of unconscious thoughts, feelings, and memories. Freud developed his theories about behavior through extensive analysis of the patients that he treated in his private clinical practice. Freud believed that many of the problems that his patients experienced, including anxiety, depression, and sexual dysfunction, were the result of the effects of painful childhood experiences that the person could no longer remember.
Freud’s ideas were extended by other psychologists whom he influenced, including Erik Erikson (1902–1994). These and others who follow the psychodynamic approach believe that it is possible to help the patient if the unconscious drives can be remembered, particularly through a deep and thorough exploration of the person’s early sexual experiences and current sexual desires. These explorations are revealed through talk therapy and dream analysis, in a process called psychoanalysis.
The founders of the school of psychodynamics were primarily practitioners who worked with individuals to help them understand and confront their psychological symptoms. Although they did not conduct much research on their ideas, and although later, more sophisticated tests of their theories have not always supported their proposals, psychodynamics has nevertheless had substantial impact on the perspective of clinical psychology and, indeed, on thinking about human behavior more generally. The importance of the unconscious in human behavior, the idea that early childhood experiences are critical, and the concept of therapy as a way of improving human lives are all ideas that are derived from the psychodynamic approach and that remain central to psychology.
Although they differed in approach, both structuralism and functionalism were essentially studies of the mind. The psychologists associated with the school of behaviorism, on the other hand, were reacting in part to the difficulties psychologists encountered when they tried to use introspection to understand behavior. Behaviorism is a school of psychology that is based on the premise that it is not possible to objectively study the mind, and therefore that psychologists should limit their attention to the study of behavior itself. Behaviorists believe that the human mind is a “black box” into which stimuli are sent and from which responses are received. They argue that there is no point in trying to determine what happens in the box because we can successfully predict behavior without knowing what happens inside the mind. Furthermore, behaviorists believe that it is possible to develop laws of learning that can explain all behaviors.
The first behaviorist was the American psychologist John B. Watson (1878–1958). Watson was influenced in large part by the work of the Russian physiologist Ivan Pavlov (1849–1936), who had discovered that dogs would salivate at the sound of a tone that had previously been associated with the presentation of food. Watson and other behaviorists began to use these ideas to explain how events that people and animals experienced in their environment (stimuli) could produce specific behaviors (responses). For instance, in Pavlov’s research the stimulus (either the food or, after learning, the tone) would produce the response of salivation in the dogs.
In his research Watson found that systematically exposing a child to fearful stimuli in the presence of objects that did not themselves elicit fear could lead the child to respond with a fearful behavior to the presence of the stimulus. In the best known of his studies, an 8-month-old boy named Little Albert was used as the subject. Here is a summary of the findings:
The baby was placed in the middle of a room; a white laboratory rat was placed near him and he was allowed to play with it. The child showed no fear of the rat. In later trials, the researchers made a loud sound behind Albert’s back by striking a steel bar with a hammer whenever the baby touched the rat. The child cried when he heard the noise. After several such pairings of the two stimuli, the child was again shown the rat. Now, however, he cried and tried to move away from the rat. In line with the behaviorist approach, Little Albert had learned to associate the white rat with the loud noise, resulting in crying.
The most famous behaviorist was Burrhus Frederic (B. F.) Skinner (1904–1990), who expanded the principles of behaviorism and also brought them to the attention of the public at large. Skinner used the ideas of stimulus and response, along with the application of rewards or reinforcements, to train pigeons and other animals. He used the general principles of behaviorism to develop theories about how best to teach children and how to create societies that were peaceful and productive. Skinner even developed a method for studying thoughts and feelings using the behaviorist approach.
The behaviorists made substantial contributions to psychology by identifying the principles of learning. Although the behaviorists were incorrect in their belief that it was not possible to measure thoughts and feelings, their work provided new insights that helped further our understanding of the nature-nurture and mind-body debates. The ideas of behaviorism are fundamental to psychology and have been developed to help us better understand the role of prior experiences in a variety of areas of psychology.
During the first half of the twentieth century, evidence emerged that learning was not as simple as the behaviorists had described it. Several psychologists studied how people think, learn, and remember, and this approach became known as cognitive psychology, a field of psychology that studies mental processes, including perception, thinking, memory, and judgment. The German psychologist Hermann Ebbinghaus (1850–1909) showed how memory could be studied and understood using basic scientific principles. The English psychologist Frederic Bartlett also looked at memory but focused more on how our memories can be distorted by our beliefs and expectations.
The two individuals from this time who arguably made the strongest impact on contemporary cognitive psychology were two great students of child development: the Swiss psychologist Jean Piaget (1896–1980) and the Russian psychologist Lev Vygotsky (1896–1934).
Jean Piaget was a prolific writer, a brilliant systematizer, and a creative observer of children. Using interviews and situations he contrived, he studied the thinking and reasoning of children from their earliest days into adolescence. He is best known for his theory that tracks the development of children’s thinking into a series of four major stages, each with several substages. Within each stage, Piaget pointed to behaviors and responses to questions that revealed how the developing child understands the world.

One of Piaget’s critical insights was that children are not deficient adults, so when they do something or make a judgment that, in an adult, might seem to be a mistake, we should not assume that it is a mistake from the child’s perspective. Instead, the child may be using the knowledge and reasoning that are completely appropriate at his or her particular age to make sense of the world. For example, Piaget found that children often believe that other people know or can see whatever they know or can see. So, if you show a young child a scene containing several dolls, where a particular doll is visible to the child but blocked from your view by a dollhouse, the child will simply assume that you can see the blocked doll. Why? Because he or she can see it. Piaget called this thinking egocentrism, by which he meant that the child’s thinking is centered in his or her own view of the world (not that the child is selfish). If an adult made this error, we would find it odd. But it is quite natural for the child, because prior to about 4 years of age, children do not understand that different minds (theirs and yours) can know different things. Egocentric thinking is normal and healthy for a two-year-old (though not for a 20-year-old).
During the same years that Piaget was interviewing children and trying to chart the course of development, Russian psychologist Lev Vygotsky was struck by the rich social interactions that shaped and even guided cognitive development. Like Piaget, Vygotsky observed children playing with one another, and he saw how children guide each other to learn social rules and, through those, to improve self-regulation of behavior and thoughts.
Vygotsky’s best known contribution was his analysis of the interactions of children and parents that lead to the development of more and more sophisticated thinking. He suggested that the effective parent or teacher is one who helps the child reach beyond his or her current level of thinking by creating supports, which Vygotsky’s followers called scaffolding. For example, if a teacher wants the child to learn the difference between a square and a triangle, she might allow the child to play with cardboard cutouts of the shapes, and help the child count the number of sides and angles on each. This assisted exploration is a scaffold—a set of supports for the child who is actively doing something—that can help the child do things and explore in ways that would not be likely or even possible alone.
Both Piaget and Vygotsky emphasized the mental development of the child and gave later psychologists a rich set of theoretical ideas as well as observable phenomena to serve as a foundation for the science of the mind that blossomed in the middle and late 20th century and is the core of 21st century psychology.
By the 1950s, a clear contrast existed between psychologists who favored behaviorism, which focused exclusively on behavior shaped by the environment, and those who favored psychodynamic psychology, which focused on unconscious mental processes to explain behavior. Many psychodynamic therapists became disillusioned with the results of their therapy and began to propose new ways of thinking about behavior, arguing that human behavior, unlike that of animals, was not innately uncivilized as Freud, James, and Skinner believed. Instead of focusing on what went wrong in people’s lives, as the psychodynamic psychologists did, they asked what made a person “good.” From these ideas a new approach to psychology emerged called humanism, an early school of psychology which emphasized that each person is inherently good, has free will to make decisions, and is motivated to learn and improve to become a healthy, effectively functioning individual. Abraham Maslow and Carl Rogers are credited with developing the humanistic approach.
Abraham Maslow (1908–1970) proposed that we all share a basic, broad need to develop our unique human potential, which he called the drive for self-actualization. Before we can achieve self-actualization, several more basic needs must first be met, beginning with physiological needs such as hunger, thirst, and the maintenance of other internal states of the body. As the lower-level needs are satisfied, we are motivated to pursue higher-order needs such as safety, belonging and love, and self-esteem, until we ultimately reach self-actualization. Maslow’s hierarchy of needs represents this internal motivation to strive toward self-actualization: the realization of one’s unique human potential and the ability to lead a positive and fulfilling life.
Carl Rogers (1902–1987), originally a psychodynamic therapist, developed a new approach to therapy which he called client-centered therapy. This approach viewed the person not as a patient but as a client whose status was more nearly equal to the therapist’s. Rogers believed that the client, like every person, should be respected and valued for his or her unique abilities and potential, and that each person has the free will and the capacity to make conscious decisions in pursuit of his or her highest potential.
While the humanistic school of psychology has been criticized as more of a philosophical approach because it lacked rigorous experimental investigation, it has influenced current thinking on personality theories and psychotherapy methods. Furthermore, the foundations of the early school of humanism evolved into the contemporary perspective of positive psychology, the scientific study of optimal human functioning.
Psychologist Abraham Maslow introduced the concept of a hierarchy of needs, which suggests that people are motivated to fulfill basic needs before moving on to other, more advanced needs. Consider how this may influence our development, motivation, and accomplishments. Choose which level of needs would best explain the scenario below.
As you may have noticed, each of the six early schools of psychology attempted to answer psychological questions with a single approach. While some attempted to build a grand theory around their approach (and some did not even try), no one school was successful. By the mid-20th century, the field of psychology was still a very young science, but it was gaining diverse attention and popularity. Psychologists began to study mental processes and behavior from their own specific points of interest, and these specific viewpoints became known as perspectives from which to investigate a psychological topic.
Today, contemporary psychology reflects several major perspectives, such as biological/neuroscience, cognitive, behavioral, social, developmental, clinical, and individual differences/personality. This is not a complete list of perspectives, and your instructor may introduce others. What’s important to know is that today psychologists agree that there is no one specific perspective with which to study psychology; rather, any given topic can be approached from a variety of perspectives. For example, how an infant learns language can be studied from all of the different perspectives, each providing information from a different viewpoint about the child’s learning. Also, as perspectives become more specific, we see that they are interconnected with one another, meaning that it is difficult to study any topic in human thought or behavior from just one perspective without considering the complex influence of information from other perspectives.
|Contemporary Perspectives of Psychology|
Behavioral neuroscience studies the links among the brain, mind, and behavior. This perspective used to be called psychobiological psychology, which studied the biological roots of behavior, such as brain structure and brain activity. But thanks to advances in our ability to view the intricate workings of the brain, called neuroimaging, the name behavioral neuroscience is now used for this broad discipline. Neuroimaging is the use of various techniques to provide pictures of the structures and functions of the living brain. As you read about the following contemporary psychological perspectives, you will see how interconnected they are, largely due to neuroimaging techniques.
For example, neuroimaging techniques are used to study brain functions in learning, emotions, social behavior, and mental illness, each of which has its own specialty perspective (see the descriptions of these perspectives below). The perspectives of behavioral neuroscience and biological psychology are also closely interconnected in that neuroimaging techniques, such as electrical brain recordings, enable biological psychologists to study the structure and functions of the brain. Another example is behavioral genetics, the study of how genes influence cognition, physical development, and behavior.
Another related perspective is evolutionary psychology, which supports the idea that the brain and body are products of evolution and that inheritance plays an important role in shaping thought and behavior. This perspective developed from the functionalists’ basic assumption that many human psychological systems, including memory, emotion, and personality, serve key adaptive functions called fitness characteristics. Evolutionary psychologists theorize that fitness characteristics have helped humans to survive and reproduce over many generations at a higher rate than species that lack those characteristics. Fitter organisms pass on their genes more successfully to later generations, making the characteristics that produce fitness more likely to become part of the organism’s nature than characteristics that do not. For example, evolutionary theory attempts to explain many different behaviors, including romantic attraction, jealousy, stereotypes and prejudice, and psychological disorders. The evolutionary perspective is important to psychology because it provides logical explanations for why we have many of our psychological characteristics.
Closely related to behavioral neuroscience, the perspective of biological psychology focuses on the connections between bodily systems, such as the nervous and endocrine systems, and chemicals, such as hormones, and their relationships to behavior and thought. Biological research on the chemicals produced in the body and brain has helped psychologists to better understand psychological disorders such as depression and anxiety, as well as the effects of stress on hormones and behavior.
Cognitive psychology is the study of how we think, process information and solve problems, how we learn and remember, and how we acquire and use language. Cognitive psychology is interconnected with other perspectives that study language, problem solving, memory, intelligence, education, human development, social psychology, and clinical psychology.
Starting in the 1950s, psychologists developed a rich and technically complex set of ideas to understand human thought processes, initially inspired by the same insights and advances in information technology that produced the computer, cell phone, and internet. As technology advanced, so did cognitive psychology. We are now able to see the brain in action using neuroimaging techniques such as functional magnetic resonance imaging (fMRI). These images are used to diagnose brain disease and injury, but they also allow researchers to view information processing as it occurs in the brain, because processing increases metabolism in the involved area of the brain, which then shows up on the scan. We discuss the use of neuroimaging techniques in many areas of psychology in the units to follow.
The field of social psychology is the study of how social situations and cultures in which people live influence their thinking, feelings and behavior. Social psychologists are particularly concerned with how people perceive themselves and others, and how people influence each other’s behavior. For instance, social psychologists have found that we are attracted to others who are similar to us in terms of attitudes and interests. We develop our own beliefs and attitudes by comparing our opinions to those of others and we frequently change our beliefs and behaviors to be similar to people we care about.
Social psychologists are also interested in how our beliefs, attitudes, and behaviors are influenced by our culture. Cultures influence every aspect of our lives. For example, fundamental differences in thinking, feeling, and behaving exist between people of Western cultures (such as the United States, Canada, Western Europe, Australia, and New Zealand) and East Asian cultures (such as China, Japan, Taiwan, Korea, India, and Southeast Asia). Western cultures are primarily oriented toward individualism, which values the self and one’s independence from others, sometimes at the expense of others. East Asian cultures, on the other hand, are oriented toward interdependence, or collectivism, which focuses on developing harmonious social relationships with others, group togetherness and connectedness, and duty and responsibility to one’s family and other groups.
As our world becomes more global, sociocultural research will become more interconnected with the research of other psychological perspectives such as biological, cognitive, personality, developmental and clinical.
Developmental psychology is the study of the development of a human being from conception until death. This perspective emphasizes all of the transformations and consistencies of human life. Three major domains of human life are researched across the lifespan: cognitive, physical, and socioemotional. The cognitive domain refers to all of the mental processes that a person uses to obtain knowledge or think about the environment. The physical domain refers to all the growth and changes that occur in a person’s body and the genetic, nutritional, and health factors that affect that growth and change. The socioemotional domain includes the development of emotions, temperament, and social skills. Developmentalists study how individuals change or remain the same over time in each of these three domains. It is easy to see how developmental psychology is interconnected with all of the other major contemporary perspectives because of its overlapping and all-encompassing scope.
Clinical psychology focuses on the diagnosis and treatment of mental, emotional, and behavioral disorders and on ways to promote psychological health. This field evolved from the early psychodynamic and humanistic schools of psychology. While the clinical psychology perspective emphasizes treating individuals so that they may lead fulfilling and productive lives, clinical psychologists also conduct research to discover the origins of mental and behavioral disorders and effective treatment methods. The clinical psychology perspective is closely interconnected with behavioral neuroscience and biological psychology.
Personality psychology is the study of the differences and uniqueness of people and the influences on a person’s personality. Researchers in this field study whether personality traits change as we age or stay the same, something that developmental psychologists also study. Researchers interested in personality also study how environmental influences such as traumatic events affect personality.
Instructions: Imagine that you are a psychologist and you want to investigate specific behaviors of a person with Alzheimer’s disease. You have a team of psychologists representing several contemporary perspectives in psychology to help you explore the origin, symptoms, prevalence, influences, and causes of this brain disease, as well as its impact on family members who care for a relative with Alzheimer’s disease. Read the following scenario about a person who had Alzheimer’s disease.
Alzheimer’s disease (AD), the most common type of dementia, is a steadily progressive brain disorder that damages and destroys brain cells. Eventually Alzheimer’s disease progresses to the point where the person requires full nursing care. Ronald Reagan, who was president of the United States from 1981 to 1989, announced in 1994 that he had Alzheimer’s disease. He died 10 years later at age 93. Despite extensive research, psychologists still have many unanswered questions about this fatal disease.
Read each set of questions. While some of these sets of questions could be researched by different psychological perspectives, try to determine which psychological perspective would most likely want to provide answers for each set of questions. Your team of psychologists represents the following perspectives, and only one perspective is the correct answer for each set of questions.
As you can see, psychologists from all different contemporary perspectives can contribute to the scientific knowledge of Alzheimer’s disease, and for that matter, any kind of research pertaining to humans and animals.
Psychology is not one discipline but rather a collection of many subdisciplines that all share at least some common perspectives that work together to exchange knowledge to form a coherent discipline. Because the field of psychology is so broad, students may wonder which areas are most suitable for their interests and which types of careers might be available to them. The following figure will help you consider the answers to these questions. Click on any of the labeled, blue circles to learn more about each discipline.
You can learn more about these different subdisciplines of psychology and the careers associated with them by visiting the American Psychological Association (APA) website.
Step 1: Go to the APA website.
On this APA Home webpage, notice the various types of information.
Step 2: Find the box titled “Quick Links” on the APA Homepage.
Click on the link titled Divisions.
Step 3: On the APA site, search for the topic “Undergraduate Education.” Find the “Psychology as a Career” webpage to learn about what employers need from an employee, and then answer the following questions.
Now search for the topic “Careers in Psychology” on the APA website. Here you can read interesting information about the field of psychology. This section provides a long list of subfields in psychology that psychologists specialize in. Read about some of the interesting job tasks that psychologists perform in some of the subfields, and then complete the following statements by identifying the subfield that corresponds with its job tasks.
Psychologists aren’t the only people who seek to understand human behavior and solve social problems. Philosophers, religious leaders, and politicians, among others, also strive to provide explanations for human behavior. But psychologists believe that research is the best tool for understanding human beings and their relationships with others. Rather than accepting the claim of a philosopher that people do (or do not) have free will, a psychologist would collect data to empirically test whether or not people are able to actively control their own behavior. Rather than accepting a politician’s contention that creating (or abandoning) a new center for mental health will improve the lives of individuals in the inner city, a psychologist would empirically assess the effects of receiving mental health treatment on the quality of life of the recipients. The statements made by psychologists are based on empirical study: the systematic, objective observation, measurement, and experimental analysis of data, which yields verifiable evidence.
In this unit you will learn how psychologists develop and test their research ideas; how they measure the thoughts, feelings, and behavior of individuals; and how they analyze and interpret the data they collect. To really understand psychology, you must also understand how and why the research you are reading about was conducted and what the collected data mean. Learning about the principles and practices of psychological research will allow you to critically read, interpret, and evaluate research.
In addition to helping you learn the material in this course, the ability to interpret and conduct research is also useful in many of the careers that you might choose. For instance, advertising and marketing researchers study how to make advertising more effective, health and medical researchers study the impact of behaviors such as drug use and smoking on illness, and computer scientists study how people interact with computers. Furthermore, even if you are not planning a career as a researcher, jobs in almost any area of social, medical, or mental health science require that a worker be informed about psychological research.
Psychologists study behavior of both humans and animals, and the main purpose of this research is to help us understand people and to improve the quality of human lives. The results of psychological research are relevant to problems such as learning and memory, homelessness, psychological disorders, family instability, and aggressive behavior and violence. Psychological research is used in a range of important areas, from public policy to driver safety. It guides court rulings with respect to racism and sexism as in the 1954 case of Brown v. Board of Education, as well as court procedure, in the use of lie detectors during criminal trials, for example. Psychological research helps us understand how driver behavior affects safety such as the effects of texting while driving, which methods of educating children are most effective, how to best detect deception, and the causes of terrorism.
Some psychological research is basic research. Basic research is research that answers fundamental questions about behavior. For instance, bio-psychologists study how nerves conduct impulses from the receptors in the skin to the brain, and cognitive psychologists investigate how different types of studying influence memory for pictures and words. There is no particular reason to examine such things except to acquire a better knowledge of how these processes occur. Applied research is research that investigates issues that have implications for everyday life and provides solutions to everyday problems. Applied research has been conducted to study, among many other things, the most effective methods for reducing depression, the types of advertising campaigns that serve to reduce drug and alcohol abuse, the key predictors of managerial success in business, and the indicators of effective government programs, such as Head Start.
Basic research and applied research inform each other, and advances in science occur more rapidly when each type of research is conducted. For instance, although research concerning the role of practice on memory for lists of words is basic in orientation, the results could potentially be applied to help children learn to read. Correspondingly, psychologist-practitioners who wish to reduce the spread of AIDS or to promote volunteering frequently base their programs on the results of basic research. This basic AIDS or volunteering research is then applied to help change people’s attitudes and behaviors.
One goal of research is to organize information into meaningful statements that can be applied in many situations.
A theory is an integrated set of principles that explains and predicts many, but not all, observed relationships within a given domain of inquiry. One example of an important theory in psychology is the stage theory of cognitive development proposed by the Swiss psychologist Jean Piaget. The theory states that children pass through a series of cognitive stages as they grow, each of which must be mastered in succession before movement to the next cognitive stage can occur. This is an extremely useful theory in human development because it can be applied to many different content areas and can be tested in many different ways.
Good theories have four important characteristics. A good theory is:
Piaget’s stage theory of cognitive development meets all four characteristics of a good theory. First, it is general in that it can account for developmental changes in behavior across a wide variety of domains, and second, it does so parsimoniously—by hypothesizing a simple set of cognitive stages. Third, the stage theory of cognitive development has been applied not only to learning about cognitive skills but also to the study of children’s moral and gender development. And finally, the stage theory of cognitive development is falsifiable because the stages of cognitive reasoning can be measured and because if research discovers, for instance, that children learn new tasks before they have reached the cognitive stage hypothesized to be required for that task, then the theory will be shown to be incorrect.
No single theory is able to account for all behavior in all cases. Rather, theories are each limited in that they make accurate predictions in some situations or for some people but not in other situations or for other people. As a result, there is a constant exchange between theory and data: Existing theories are modified on the basis of collected data, and the new modified theories then make new predictions that are tested by new data, and so forth. When a better theory is found, it will replace the old one. This is part of the accumulation of scientific knowledge as a result of research.
When psychologists have a question that they want to research, it usually comes from a theory based on others’ research reported in scientific journals. Recall that a theory is based on principles that are general and can be applied to many situations or relationships. Therefore, when a scientist has a research question to study, the question must be stated as a research hypothesis: a precise statement of the presumed relationship among specific parts of a theory. A research hypothesis is a specific and falsifiable prediction about the relationship between or among two or more variables, where a variable is any attribute that can assume different values among different people or across different times or places.
The research hypothesis states the existence of a relationship between the variables of interest and the specific direction of that relationship. For instance, the research hypothesis “Using marijuana will reduce learning” predicts that there is a relationship between a variable “using marijuana” and another variable called “learning.” Similarly, in the research hypothesis “Participating in psychotherapy will reduce anxiety,” the variables that are expected to be related are “participating in psychotherapy” and “level of anxiety.”
When stated in an abstract manner, the ideas that form the basis of a research hypothesis are known as conceptual variables. Sometimes conceptual variables are rather simple—for instance, age, gender, or weight. In other cases they represent more complex ideas, such as anxiety, cognitive development, learning, self-esteem, or sexism.
The first step in testing a research hypothesis involves turning the conceptual variables into measured variables, which are variables consisting of numbers that represent the conceptual variables. For instance, the conceptual variable “participating in psychotherapy” could be represented as the measured variable “number of psychotherapy hours the patient has accrued” and the conceptual variable “using marijuana” could be assessed by having the research participants rate, on a scale from 1 to 10, how often they use marijuana or by administering a blood test that measures the presence of the chemicals in marijuana.
Psychologists use the term operational definition to refer to a precise statement of how a conceptual variable is turned into a measured variable. The following table lists some potential operational definitions for conceptual variables that have been used in psychological research. As you read through this list, note that in contrast to the abstract conceptual variables, the operational definitions are measurable and very specific. This specificity is important for two reasons. First, more specific definitions mean that there is less danger that the collected data will be misunderstood by others. Second, specific definitions will enable future researchers to replicate the research.
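For readers who think in code, the relationship between conceptual variables and operational definitions can be pictured as a simple lookup table. This is only an illustrative sketch: the entries below are invented examples in the spirit of the table above, not an actual research instrument.

```python
# Hypothetical sketch: an operational definition turns an abstract
# conceptual variable into a specific, measurable variable.
# All entries here are invented for illustration.
operational_definitions = {
    "participating in psychotherapy": "number of psychotherapy hours accrued",
    "using marijuana": "self-rated frequency of use on a scale from 1 to 10",
    "learning": "score on a 50-item recall test",  # invented measure
}

def operationalize(conceptual_variable):
    """Return the measured variable for a conceptual variable, if one is defined."""
    measure = operational_definitions.get(conceptual_variable)
    if measure is None:
        # Without an operational definition, the variable cannot be measured,
        # so a hypothesis stated in terms of it is not yet testable.
        return None
    return measure

print(operationalize("using marijuana"))
```

The point of the sketch is the lookup itself: a hypothesis only becomes testable once every conceptual variable it mentions has a specific, measurable counterpart.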
|Examples of Some Conceptual Variables Defined as Operational Definitions for Psychological Research|
One of the keys to developing a well-designed research study is to precisely define the conceptual variables found in a hypothesis. When the conceptual variables in a hypothesis are given operational definitions, the hypothesis becomes testable. In this activity, read each of the following statements and answer its accompanying question to:
Now to make sure that you can identify the characteristics of a hypothesis and distinguish its conceptual variables from operational definitions used in a research study, choose an answer that correctly completes each of the following statements.
All scientists (whether they are physicists, chemists, biologists, sociologists, or psychologists) are engaged in the basic processes of collecting data and drawing conclusions about those data. The methods used by scientists have developed over many years and provide a common framework for developing, organizing, and sharing information. The scientific method is the set of assumptions, rules, and procedures scientists use to conduct research.
In addition to requiring that science be empirical, the scientific method demands that the procedures used be objective, or free from the personal bias or emotions of the scientist. The scientific method prescribes how scientists collect and analyze data, how they draw conclusions from data, and how they share data with others. These rules increase objectivity by placing data under the scrutiny of other scientists and even the public at large. Because data are reported objectively, other scientists know exactly how the scientist collected and analyzed the data. This means that they do not have to rely only on the scientist’s own interpretation of the data; they may draw their own, potentially different, conclusions.
In the following activity, you learn about a model presenting a five-step process of scientific research in psychology. A researcher or a small group of researchers formulate a research question and state a hypothesis, conduct a study designed to answer the question, analyze the resulting data, draw conclusions about the answer to the question, and then publish the results so that they become part of the research literature found in scientific journals.
Because the research literature is one of the primary sources of new research questions, this process can be thought of as a cycle. New research leads to new questions and new hypotheses, which lead to new research, and so on. This model also indicates that research questions can originate outside of this cycle either with informal observations or with practical problems that need to be solved. But even in these cases, the researcher begins by checking the research literature to see if the question had already been answered and to refine it based on what previous research had already found.
All scientists use the scientific method, a set of basic processes performed in the same order for conducting research. Using the following diagram of the scientific method, label each of the research steps in the correct order that scientists use to conduct scientific studies.
Now imagine that you are a research psychologist and you want to conduct a study to find out if there are any negative effects of talking on a cell phone while driving a car. What would you do first to begin your study? How would you know if your study might provide any new information? How would you go about conducting your study? What would you do after you have completed your study? These are questions that every researcher must answer in order to properly conduct a scientific study. As the researcher for your study on cell phone usage while driving, you will need to answer all of these questions.
The research by Mehl and his colleagues is described nicely by this model. Their question—whether women are more talkative than men—was suggested to them both by people’s stereotypes and by published claims about the relative talkativeness of women and men. When they checked the research literature, however, they found that this question had not been adequately addressed in scientific studies. They conducted a careful empirical study, analyzed the results (finding very little difference between women and men), and published their work so that it became part of the research literature. The publication of their article is not the end of the story, however, because their work suggests many new questions about the reliability of the result, such as potential cultural differences, that will likely be further researched by them or other researchers.
Most new research is designed to replicate—that is, to repeat, add to, or modify—previous research findings. The process of repeating previous research, which forms the basis of all scientific inquiry, is known as replication. The scientific method therefore results in an accumulation of scientific knowledge through the reporting of research and the addition to and modifications of previous reported findings that are then replicated by other researchers.
One of the questions that all scientists must address concerns the ethics of their research. Research in psychology may cause some stress, harm, or inconvenience for the people who participate in that research. For instance, researchers may require introductory psychology students to participate in research projects and then deceive these students, at least temporarily, about the nature of the research. Psychologists may induce stress, anxiety, or negative moods in their participants, expose them to weak electrical shocks, or convince them to behave in ways that violate their moral standards. And researchers may sometimes use animals in their research, potentially harming them in the process.
Decisions about whether research is ethical are made using established ethical codes and standards developed by scientific organizations, such as the American Psychological Association, and the federal government, such as the U.S. Department of Health and Human Services (DHHS). However, there is no way to know ahead of time what the effects of a given procedure will be on every person or animal who participates, or what benefit to society the research is likely to produce. What is ethical is defined by the current state of thinking within society, and thus perceived costs and benefits change over time. The DHHS regulations require that all universities receiving funds from the department set up an Institutional Review Board (IRB) to determine whether proposed research meets department regulations. The Institutional Review Board is a committee of at least five members whose goal is to determine the cost-benefit ratio of research conducted within an institution. The IRB approves the procedures of all the research conducted at the institution before the research can begin. The board may suggest modifications to the procedures, or (in rare cases) it may inform the scientist that the research violates DHHS guidelines and thus cannot be conducted at all.
The following table presents some of the most important factors that psychologists take into consideration when designing their research using people.
|Characteristics of an Ethical Research Project Using Human Participants|
The most direct ethical concern of the scientist is to prevent harm to the research participants. One example is the well-known research, conducted in 1961 by Stanley Milgram, which investigated obedience to authority. Participants were induced by an experimenter to administer electric shocks to another person so that Milgram could study the extent to which they would obey the demands of an authority figure. Most participants evidenced high levels of stress resulting from the psychological conflict they experienced between engaging in aggressive and dangerous behavior and following the instructions of the experimenter. Studies such as those by Milgram are no longer conducted because the scientific community is now much more sensitized to the potential of such procedures to create emotional discomfort or harm.
Another goal of ethical research is to guarantee that participants have free choice regarding whether they wish to participate in research. Students in psychology classes may be allowed, or even required, to participate in research, but they are also always given an option to choose a different study to be in, or to perform other activities instead. And once an experiment begins, the research participant is always free to leave the experiment if he or she wishes to. Concerns about free choice also occur in institutional settings, such as in schools, hospitals, corporations, and prisons, when individuals are required by the institutions to take certain tests or when employees are told or asked to participate in research.
Researchers must also protect the privacy of the research participants. In some cases data can be kept anonymous by not having the respondents put any identifying information on their questionnaires. In other cases the data cannot be anonymous because the researcher needs to keep track of which respondent contributed the data. In this case one technique is to have each participant use a unique code number to identify his or her data, such as the last four digits of the student ID number. In this way the researcher can keep track of which person completed which questionnaire, but no one will be able to connect the data with the individual who contributed them.
Perhaps the most widespread ethical concern to the participants in behavioral research is the extent to which researchers employ deception. Deception occurs whenever research participants are not completely and fully informed about the nature of the research project before participating in it. Deception may occur in an active way, such as when the researcher tells the participants that he or she is studying learning when in fact the experiment really concerns obedience to authority. In other cases the deception is more passive, such as when participants are not told about the hypothesis being studied or the potential use of the data being collected.
Some researchers have argued that no deception should ever be used in any research.  They argue that participants should always be told the complete truth about the nature of the research they are in, and that when participants are deceived there will be negative consequences, such as the possibility that participants may arrive at other studies already expecting to be deceived. Other psychologists defend the use of deception on the grounds that it is needed to get participants to act naturally and to enable the study of psychological phenomena that might not otherwise get investigated. They argue that it would be impossible to study topics such as altruism, aggression, obedience, and stereotyping without using deception because if participants were informed ahead of time what the study involved, this knowledge would certainly change their behavior. The codes of ethics of the American Psychological Association and other organizations allow researchers to use deception, but these codes also require them to explicitly consider how their research might be conducted without the use of deception.
Nevertheless, an important tool for ensuring that research is ethical is the use of a written informed consent form. Informed consent, conducted before a participant begins a research session, is designed to explain the research procedures and inform the participant of his or her rights during the investigation. An informed consent form explains as much as possible about the true nature of the study, particularly everything that might be expected to influence willingness to participate, but it may in some cases withhold some information that allows the study to work.
Finally, participating in research has the potential for producing long-term changes in the research participants. Therefore, all participants should be fully debriefed immediately after their participation. The debriefing is a procedure designed to fully explain the purposes and procedures of the research and remove any harmful aftereffects of participation.
Instructions: View the video clip that describes the Stanley Milgram experiment on obedience and watch for any ethical violations made by the researchers. Then, using the information provided in the table above, titled "Characteristics of an Ethical Research Project Using Human Participants," answer the following questions to determine which ethical violations occurred in the Milgram experiment on obedience.
Note: In the following questions, the two types of participants used in Stanley Milgram’s study are the “participant-punisher” who administers electric shocks and the “participant-learner” who repeats the paired-word combinations.
Because animals make up an important part of the natural world, and because some research cannot be conducted using humans, animals are also participants in psychological research. Most psychological research using animals is now conducted with rats, mice, and birds; the use of other animals in research is declining. As with ethical decisions involving human participants, basic principles have been developed to help researchers make informed decisions about such research. The following table summarizes the APA Guidelines on Humane Care and Use of Animals in Research.
|APA Guidelines on Humane Care and Use of Animals in Research|
Because the use of animals in research involves a personal value, people naturally disagree about this practice. Although many people accept the value of such research, a minority of people, including animal-rights activists, believes that it is ethically wrong to conduct research on animals. This argument is based on the assumption that because animals are living creatures just as humans are, no harm should ever be done to them.
Most scientists, however, reject this view. They argue that such beliefs ignore the potential benefits that have come, and continue to come, from research with animals. For instance, drugs that can reduce the incidence of cancer or AIDS may first be tested on animals, and surgery that can save human lives may first be practiced on animals. Research on animals has also led to a better understanding of the physiological causes of depression, phobias, and stress, among other illnesses. In contrast to animal-rights activists, then, scientists believe that because there are many benefits that accrue from animal research, such research can and should continue as long as the humane treatment of the animals used in the research is guaranteed.
Determine whether each of the following scenarios complies with or violates the ethical and humane care and use of animals as outlined by the American Psychological Association. Then select the appropriate guideline that applies to each scenario. To help you with this activity, you may want to review the APA Guidelines on Humane Care and Use of Animals in Research presented earlier.
Imagine you are on the Animal Care and Use Committee at your college. It is part of your responsibility to evaluate and either approve or reject research proposals of faculty members who want to use animals for research or instructional purposes. The two proposals that follow are based on real experiments and describe each study's goals and benefits, as well as any discomfort or injury to the animals used. You must either approve or disapprove each research proposal based on the information provided. There is no need to suggest improvements or experimental design changes. Indicate why you decided upon the course of action that you did for each proposal.
Professor Smith is a psychobiologist working on the frontiers of a new and exciting research area of neuroscience, brain grafting. Research has shown that neural tissue can be removed from the brains of monkey fetuses and implanted into the brains of monkeys that have suffered brain damage. The neurons seem to make the proper connections and are sometimes effective in improving performance in brain-damaged animals. These experiments offer important animal models for human degenerative diseases such as Parkinson’s and Alzheimer’s. Dr. Smith wants to transplant tissue from fetal monkey brains into the entorhinal cortex of adult monkeys; this is the area of the human brain that is involved with Alzheimer’s disease.
The experiment will use 20 adult rhesus monkeys. First, the monkeys will be subjected to brain lesioning. The procedure will involve anesthetizing the animals, opening their skulls, and making lesions using a surgical instrument. After they recover, the monkeys will be tested on a learning task to make sure their memory is impaired. Three months later, half of the animals will be given transplant surgery. Tissue taken from the cortex of the monkey fetuses will be implanted into the area of the brain damage. Control animals will be subjected to a placebo surgery, and all animals will be allowed to recover for 2 months. They will then learn a task to test the hypothesis that the animals having brain grafts will show better memory than the control group.
Dr. Smith argues that this research is in the exploratory stages and can only be done using animals. She further states that in 10 years, over 2 million Americans will have Alzheimer’s disease and that her research could lead to a treatment for the devastating memory loss that Alzheimer’s victims suffer. 
The Psychology Department is requesting permission from your committee to use 10 rats per semester for demonstration experiments in a physiological psychology course. The students will work in groups of three; each group will be given a rat. The students will first perform surgery on the rats. Each animal will be anesthetized. Following standard surgical procedures, an incision will be made in the scalp and two holes drilled in the animal’s skull. Electrodes will be lowered into the brain to create lesions on each side. The animals will then be allowed to recover. Several weeks later, the effects of destroying this part of the animal’s brain will be tested in a shuttle avoidance task in which the animals will learn when to cross over an electrified grid.
The instructor acknowledges that the procedure is a common demonstration and that no new scientific information will be gained from the experiment. He argues, however, that students taking a course in physiological psychology must have the opportunity to engage in small animal surgery and to see firsthand the effects of brain lesions. 
Psychologists agree that if their ideas and theories about human behavior are to be taken seriously, they must be backed up by data collected through research. A psychologist’s research goals determine which of three research approaches is used. These approaches, summarized in the table below, are known as research designs. A research design is the specific method a researcher uses to collect, analyze, and interpret data. Psychologists use three major types of research designs in their research, and each provides an essential avenue for scientific investigation.
Each of the three research designs varies according to its strengths and limitations, and it is important to understand how each differs.
|Characteristics of the Three Research Designs|
Descriptive research is designed to create a snapshot of the current thoughts, feelings, or behavior of individuals. There are three types of descriptive research: case studies, surveys, and naturalistic observation.
Sometimes the data in a descriptive research project is based on only a small set of individuals, often only one person or a single small group. These research designs are known as case studies—descriptive records of one or more individuals’ experiences and behavior. Sometimes case studies involve ordinary individuals, as when developmental psychologist Jean Piaget used his observation of his own children to develop his stage theory of cognitive development. More frequently, case studies are conducted on individuals who have unusual or abnormal experiences or characteristics or who find themselves in particularly difficult or stressful situations. The assumption is that we can learn something about human nature by carefully studying individuals who are socially marginal, experiencing unusual situations, or going through a difficult phase in their lives.
Sigmund Freud was a master of using the psychological difficulties of individuals to draw conclusions about basic psychological processes. Freud wrote case studies of some of his most interesting patients and used these careful examinations to develop his important theories of personality. One classic example is Freud’s description of “Little Hans,” a child whose fear of horses the psychoanalyst interpreted in terms of repressed sexual impulses and the Oedipus complex.
The second type of descriptive research is the survey—a measure administered through either a face-to-face or telephone interview, or a written or computer-generated questionnaire—to get a picture of the beliefs or behaviors of a sample of people of interest. The people chosen to participate in the research, called a sample, are selected to be representative of all the people that the researcher wishes to know about, called the population. In election polls, for instance, a sample is taken from the population of all “likely voters” in the upcoming elections.
The results of surveys may sometimes be rather mundane, such as “nine out of ten doctors prefer Tymenocin,” or “the median income in Montgomery County is $36,712.” Yet other times (particularly in discussions of social behavior), the results can be shocking: “more than 40,000 people are killed by gunfire in the United States every year,” or “more than 60% of women between the ages of 50 and 60 suffer from depression.” Descriptive research is frequently used by psychologists to get an estimate of the prevalence (or incidence) of psychological disorders.
The third type of descriptive research—known as naturalistic observation—is research based on the observation of everyday events occurring in the natural environment of people or animals. For instance, a developmental psychologist who watches children on a playground and describes how they interact is conducting descriptive research. Another example of naturalistic research is a bio-psychologist who observes animals in their natural habitats.
A famous example of this type of research is the work of Dr. Jane Goodall at her primate research facility in Gombe Stream National Park, Tanzania (on the continent of Africa). Goodall and her staff have observed and recorded the social interactions and family life of the Kasakela chimpanzee community for over 50 years. Their work is considered groundbreaking and has revealed aspects of chimpanzee life that might otherwise have gone undiscovered. For instance, Goodall described human-like socializing behaviors: members of the group would show affection and encouragement to one another. She discovered that the chimpanzees were toolmakers and users—stripping leaves from twigs and poking the twig into termite holes to retrieve a meal. Goodall also described the carnivorous side of chimpanzees, reporting that hunting groups from the chimpanzee community would stalk, isolate, and kill smaller primates for food and then divide their kill for distribution to other group members.
Two parts of Dr. Goodall’s research methods illustrate the disadvantages of naturalistic observational research. First, she decided to name the chimpanzees she studied instead of following the scientific convention of numbering subjects. Numbering is thought to promote objective observation, free of attachment and bias on the part of the observer. Dr. Goodall identified members of her chimpanzee community by name, and discussed their behavior in terms of emotion, personality, intelligence, and family relationships; she was criticized by some for becoming overly involved and thus more subjective in her interpretations. This is known as observer bias, which happens when the individual observing behavior is influenced by their own experiences, expectations, or knowledge about the purpose of the observation or study.
Second, the Gombe research team utilized feeding stations to attract the animals for observation, thus potentially altering the natural feeding patterns and behaviors of the troop. By interacting with the chimpanzees in this meaningful way, the observers may have promoted artificial competition and increased aggression among them. This is called the observer effect. Indeed, the observer effect (interference with or modification of the subject’s behaviors by the process of observation) can lead to a distorted picture of a natural phenomenon, thus defeating the point of “naturalistic” observational research. It is difficult to know how influential the presence of a stranger can be in an established social situation or for the subject being observed.
In many observational studies, particularly those conducted with children, the observers are hidden away from the subjects. Some researchers use two-way mirrors, others use hidden cameras with monitors located in a separate room. Subjects can also be recorded on video from several angles as they interact socially or within their environment; the video recordings can then be observed and data recorded at a later time. A major advantage to this method is the ability to have two or more observers observe and record the behavior, followed by calculating a score for interrater reliability. This score can estimate how much agreement there is between the two observers about what the subjects were doing. This type of test can also identify observer bias.
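As a simple illustration of how such a score might be computed (using hypothetical codings invented for demonstration, not data from any actual study), interrater reliability can be expressed as the proportion of observation intervals on which the two observers agree:

```python
def percent_agreement(observer_a, observer_b):
    """Interrater reliability as the proportion of intervals coded identically."""
    if len(observer_a) != len(observer_b):
        raise ValueError("Both observers must code the same number of intervals")
    matches = sum(a == b for a, b in zip(observer_a, observer_b))
    return matches / len(observer_a)

# Two observers code ten intervals of a child's play as aggressive (1) or not (0):
obs_a = [0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
obs_b = [0, 1, 0, 0, 0, 1, 0, 1, 1, 0]
print(percent_agreement(obs_a, obs_b))  # prints 0.8 (agreement on 8 of 10 intervals)
```

A low agreement score would signal that the observation categories are ambiguous or that one observer's expectations are coloring what he or she records.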
An advantage of descriptive research is that it attempts to capture the complexity of everyday behavior. Specifically, case studies provide detailed information about a single person or a small group of people, and surveys capture the thoughts or reported behaviors of a large population of people. Naturalistic observation, meanwhile, objectively records the behavior of people or animals as it naturally occurs. In sum, descriptive research is used to provide a relatively complete understanding of what is currently happening.
Despite these advantages, descriptive research has a distinct disadvantage in that, although it allows us to get an idea of what is currently happening, it is usually limited to static pictures. Although descriptions of particular experiences may be interesting, they are not always transferable to other individuals in other situations, nor do they tell us exactly why specific behaviors or events occurred. For instance, descriptions of individuals who have suffered a stressful event, such as a war or an earthquake, can be used to understand the individuals’ reactions to the event but cannot tell us anything about the long-term effects of the stress. And because there is no comparison group that did not experience the stressful situation, we cannot know what these individuals would be like if they hadn’t had the stressful experience.
In contrast to descriptive research, which is designed primarily to provide static pictures, correlational research involves measuring the relationship between or among two or more relevant variables. For instance, the variables of height and weight are systematically related (correlated) because taller people generally weigh more than shorter people. In the same way, study time and memory errors are also related, because the more time a person is given to study a list of words, the fewer errors he or she will make. When there are two variables in the research design, one of them is called the predictor variable and the other the outcome variable.
One way of organizing the data from a correlational study with two variables is to graph the values of each of the measured variables using a scatter plot. As you can see in the figure below, a scatter plot is a visual image of the relationship between two variables. A point is plotted for each individual at the intersection of his or her scores for the two variables. When the association between the variables on the scatter plot can be easily approximated with a straight line, as in parts (a) and (b), the variables are said to have a linear relationship.
When the straight line indicates that individuals who have above-average values for one variable also tend to have above-average values for the other variable, as in part (a), the relationship is said to be positive linear. Examples of positive linear relationships include those between height and weight, between education and income, and between age and mathematical abilities in children. In each case people who score higher on one of the variables also tend to score higher on the other variable. Negative linear relationships, in contrast, as shown in part (b), occur when above-average values for one variable tend to be associated with below-average values for the other variable. Examples of negative linear relationships include those between the age of a child and the number of diapers the child uses, and between practice on and errors made on a learning task. In these cases, people who score higher on one of the variables tend to score lower on the other variable.
Relationships between variables that cannot be described with a straight line are known as nonlinear relationships. Part (c) shows a common pattern in which the distribution of the points is essentially random. In this case there is no relationship at all between the two variables, and they are said to be independent.
The most common statistical measure of the strength of linear relationships among variables is the Pearson correlation coefficient, which is symbolized by the letter r. The value of the correlation coefficient ranges from r = –1.00 to r = +1.00. The direction of the linear relationship is indicated by the sign of the correlation coefficient. Positive values of r (such as r = .54 or r = .67) indicate that the relationship is positive linear (i.e., the pattern of the dots on the scatter plot runs from the lower left to the upper right), whereas negative values of r (such as r = –.30 or r = –.72) indicate negative linear relationships (i.e., the dots run from the upper left to the lower right). The strength of the linear relationship is indexed by the distance of the correlation coefficient from zero (its absolute value). For instance, r = –.54 is a stronger relationship than r = .30, and r = .72 is a stronger relationship than r = –.57.
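As a brief illustration of how the coefficient works, the sketch below computes r for a small set of hypothetical study-time and memory-error scores (the data are invented for demonstration). The coefficient is the covariance of the paired scores divided by the product of their standard deviations:

```python
import math

def pearson_r(xs, ys):
    """Compute the Pearson correlation coefficient for paired scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term: do above-average x values pair with above-average y values?
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical study time (minutes) and memory errors for five participants:
study_time = [10, 20, 30, 40, 50]
errors = [9, 7, 6, 4, 2]

# More study time pairs with fewer errors, so r is strongly negative.
print(round(pearson_r(study_time, errors), 2))  # prints -0.99
```

Swapping the error scores so that they rise with study time would flip the sign of r without changing its magnitude, which is exactly the distinction between direction and strength described above.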
An important limitation of correlational research designs is that they cannot be used to draw conclusions about the causal relationships among the measured variables. Consider, for instance, a researcher who has hypothesized that viewing violent behavior will cause increased aggressive play in children. He has collected, from a sample of fourth-grade children, the data on how many violent television shows each child views during the week. He has also collected the data on how aggressively each child plays on the school playground. From his collected data, the researcher discovers a positive correlation between the two measured variables.
Although this positive correlation appears to support the researcher’s hypothesis, it cannot be taken to indicate that viewing violent television causes aggressive behavior. Although the researcher is tempted to assume that viewing violent television causes aggressive play, there are other possibilities.
It may be possible that the causal direction is exactly opposite from what has been hypothesized. Perhaps children who behave aggressively at school develop residual excitement that leads them to want to watch violent television shows at home:
Although this possibility may seem less likely, there is no way to rule out the possibility of such reverse causation on the basis of this observed correlation. It is also possible that both causal directions are operating and that the two variables cause each other:
Another possible explanation for the observed correlation is that it has been produced by the presence of a common-causal variable (also known as a third variable). A common-causal variable is a variable that is not part of the research hypothesis but that causes both the predictor and the outcome variable and thus produces the observed correlation between them. In our example, a potential common-causal variable is the discipline style of the children’s parents. Parents who use a harsh and punitive discipline style may produce children who like to watch violent television and who behave aggressively in comparison to children whose parents use less harsh discipline:
In this case, television viewing and aggressive play would be positively correlated (as indicated by the curved arrow between them), even though neither one caused the other but they were both caused by the discipline style of the parents (the straight arrows). When the predictor and outcome variables are both caused by a common-causal variable, the observed relationship between them is said to be spurious. A spurious relationship is a relationship between two variables in which a common-causal variable produces and “explains away” the relationship. If effects of the common-causal variable were taken away, or controlled for, the relationship between the predictor and outcome variables would disappear. In the example, the relationship between aggression and television viewing might be spurious because by controlling for the effect of the parents’ disciplining style, the relationship between television viewing and aggressive behavior might go away.
Common-causal variables in correlational research designs can sometimes be thought of as “mystery” variables. For instance, some variables have not been measured or their presence and identity are unknown to the researcher. Since it is not possible to measure every variable that could cause both the predictor and outcome variables, the existence of an unknown common-causal variable is always a possibility. For this reason, we are left with the basic limitation of correlational research: Correlation does not demonstrate causation. It is important that when you read about correlational research projects, you keep in mind the possibility of spurious relationships and be sure to interpret the findings appropriately. Although correlational research is sometimes reported as demonstrating causality without any mention being made of the possibility of reverse causation or common-causal variables, informed consumers of research, like you, are aware of these interpretational problems.
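To make the idea of a spurious relationship concrete, the following sketch simulates a hypothetical common-causal variable (parental discipline style) that drives both television viewing and aggressive play. The two outcome variables end up positively correlated even though neither causes the other; all numbers are invented for demonstration:

```python
import math
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(1)

# Hypothetical common-causal variable: harshness of parental discipline.
discipline = [random.gauss(0, 1) for _ in range(1000)]

# Neither outcome causes the other; each is driven by discipline plus noise.
tv_viewing = [d + random.gauss(0, 1) for d in discipline]
aggression = [d + random.gauss(0, 1) for d in discipline]

# The two outcomes are positively correlated (typically around .5 here)
# despite having no direct causal link.
print(round(corr(tv_viewing, aggression), 2))
```

Because the simulation builds in no arrow between tv_viewing and aggression, the positive r it reports is entirely the work of the third variable, which is what "controlling for" the common cause would reveal in a real data set.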
In summary, correlational research designs have both strengths and limitations. One strength is that they can be used when experimental research is not possible because the predictor variables cannot be manipulated. Correlational designs also have the advantage of allowing the researcher to study behavior as it occurs in everyday life. And we can also use correlational designs to make predictions—for instance, to predict from the scores on their battery of tests the success of job trainees during a training session. But we cannot use such correlational information to determine whether the training caused better job performance. For that, researchers rely on experiments.
Instructions: Each of the following examples describes an actual correlational study. Your task is to decide what the results look like. Was there a positive correlation, a negative correlation, or no correlation?
All of the research reported here is loosely based on actual studies. Because most published research is more complicated than the descriptions given here, we have simplified—but hopefully not seriously distorted—the results of actual research.
Research Study 1
Researchers found that the constant exposure to mass media (television, magazines, websites) depicting atypically thin, glamorous female models (the thin-ideal body) may be linked to body image disturbance in women. The general finding was that the more women were exposed to pictures and articles about thin females perceived as ideal, the lower their satisfaction was with their own bodies.
Research Study 2
This research study found that kindergarten and elementary school children who were better at rhymes and hearing the sounds of individual letters before they started to read later learned to read words more quickly than children who were not as good with making and distinguishing elementary sounds of language.
Research Study 3
Ninth-grade students and teachers were surveyed to determine the level of bullying that the students experienced. The researchers were given permission to access scores of the students on several standardized tests, with topics including algebra, earth science, and world history. The researchers found that the more bullying a student experienced, the lower the student’s grades on the standardized tests.
Research Study 4
At one time, people used to assume that poor reading abilities were caused by low intelligence. The most thoroughly studied kind of reading problem is called dyslexia, a learning disability which appears in elementary school readers as difficulty learning to recognize individual words. Dyslexia can vary in seriousness from mild forms to profound levels. In one study, researchers assessed a large number of kindergarten and first-grade children for signs of dyslexia, and they also measured the children’s IQ using a standardized IQ measure. They found no relationship between IQ and seriousness of dyslexia.
Research Study 5
Researchers from the United Kingdom analyzed results of a survey of more than 5,000 young people ages 10 to 15. They used a variety of indicators to rate how healthy a lifestyle each person led, using factors like eating and drinking habits, smoking and drug use, and participation in sports and other activities. In addition, they used responses to several questions to rate each person on his or her level of happiness. They found that healthier habits were strongly related to how happy these young adolescents reported themselves to be.
True experiments are the only reliable method scientists have for inferring causal relationships between two variables of interest: Does one thing cause another? So, the goal of an experimental research design is to provide more definitive conclusions about the causal relationships among the variables in the research hypothesis than is available from correlational designs.
In an experimental research design, the variables of interest are called the independent variable (or variables) and the dependent variable. The independent variable in an experiment is the causing variable that is created (manipulated) by the experimenter. The dependent variable in an experiment is a measured variable that is expected to be influenced by the experimental manipulation. The research hypothesis suggests that the manipulated independent variable or variables will cause changes in the measured dependent variables. We can diagram the research hypothesis by using an arrow that points in one direction. This demonstrates the expected direction of causality:
An example of the use of independent and dependent variables in an experiment is the effect of witnessing aggression on children’s aggressive behaviors, certainly an important developmental question given the influence of television and video games today. In a classic study conducted by Albert Bandura in 1961, children who first watched an adult behave violently toward a Bobo doll (an inflatable clown weighted with sand at the base) in a play room were more likely to show the same aggressive behaviors than children who, before entering the play room, had watched a passive adult or no adult at all. The independent variable manipulated by the experimenter was viewing violent behavior with the Bobo doll. The dependent variable, or the measure of behavior, was whether a child alone in the play room expressed aggression by hitting a Bobo doll. The operational definition of this dependent variable was the number of hits, kicks, and other displays of aggression the child inflicted on the Bobo doll. The design of the experiment is shown in the following figure.
Consider an experiment conducted by Anderson and Dill.  The study was designed to test the hypothesis that viewing violent video games would increase aggressive behavior. In this research, male and female undergraduates from Iowa State University were given a chance to play with either a violent video game (Wolfenstein 3D) or a nonviolent video game (Myst). During the experimental session, the participants played their assigned video games for 15 minutes. Then, after the play, each participant played a competitive game with an opponent in which the participant could deliver blasts of white noise through the earphones of the opponent. The operational definition of the dependent variable (aggressive behavior) was the level and duration of noise delivered to the opponent. The design of the experiment is shown in the figure below.
Anderson and Dill first randomly assigned about 100 participants to each of their two groups (Group A and Group B). Because they used random assignment to conditions, they could be confident that, before the experimental manipulation occurred, the students in Group A were, on average, equivalent to the students in Group B on every possible variable, including variables that are likely to be related to aggression, such as parental discipline style, peer relationships, hormone levels, diet—and in fact everything else.
Then, after they had created initial equivalence, Anderson and Dill created the experimental manipulation—they had the participants in Group A play the violent game and the participants in Group B play the nonviolent game. Then they compared the dependent variable (the white noise blasts) between the two groups, finding that the students who had viewed the violent video game gave significantly longer noise blasts than did the students who had played the nonviolent game.
Anderson and Dill had from the outset created initial equivalence between the groups. This initial equivalence allowed them to observe differences in the white noise levels between the two groups after the experimental manipulation, leading to the conclusion that it was the independent variable (and not some other variable) that caused these differences. The idea is that the only thing that was different between the students in the two groups was the video game they had played.
Experimental designs have two major strengths. First, they guarantee that the independent variable occurs prior to the measurement of the dependent variable, which eliminates the possibility of reverse causation. Second, the influence of common-causal variables is controlled, and thus eliminated, by creating initial equivalence among the participants in each of the experimental conditions before the manipulation occurs.
The most common method of creating equivalence among the experimental conditions is through random assignment to conditions, a procedure in which the condition that each participant is assigned to is determined through a random process, such as drawing numbers out of an envelope or using a random number table.
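The logic of random assignment can be sketched in a few lines of Python. This is only an illustration of the procedure described above, not the method any particular researcher used; the participant IDs and condition names are hypothetical.

```python
import random

def randomly_assign(participants, conditions=("violent game", "nonviolent game")):
    """Shuffle the participants, then deal them into conditions round-robin,
    so each person's condition is determined purely by chance."""
    pool = list(participants)
    random.shuffle(pool)  # every ordering of participants is equally likely
    groups = {condition: [] for condition in conditions}
    for i, person in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

participants = [f"P{n:03d}" for n in range(1, 201)]  # 200 hypothetical volunteers
groups = randomly_assign(participants)
print(len(groups["violent game"]), len(groups["nonviolent game"]))  # → 100 100
```

Because only chance decides who ends up in which group, any pre-existing differences among participants (diet, hormone levels, parenting, and so on) are, on average, spread evenly across the conditions.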
Despite the advantage of determining causation, experiments do have limitations. One is that they are often conducted in laboratory situations rather than in the everyday lives of people. Therefore, we do not know whether results that we find in a laboratory setting will necessarily hold up in everyday life. Second, and more important, some of the most interesting and key social variables cannot be experimentally manipulated. If we want to study the influence of the size of a mob on the destructiveness of its behavior, or to compare the personality characteristics of people who join suicide cults with those of people who do not, these relationships must be assessed using a quasi-experimental design, because it may simply not be possible to randomly assign subjects to groups or manipulate the variables of interest.
Considering the phenomenon of aggression, certainly some would argue that spanking is a demonstration of aggression to which some children are exposed. Does spanking increase aggression in children? To study this question with an experimental design, one would need to recruit families into the study, randomly divide them into “spanking” and “non-spanking” groups, and compare the aggression in the children from the two groups. However, it would not be ethical or even possible to compel parents to spank their children in order to manipulate the independent variable. Thus, the strategy for this type of research would be a quasi-experimental design, which compares two groups that already exist in the population: in this case, families who spank their children and those who do not. However, this nonrandom design eliminates the possibility of finding a causal relationship, because we could never be sure there wasn’t a third variable contributing to differences in aggression. For instance, perhaps the families who do not spank their children also watch a significant amount of violence on television, while the spanking families watch much less television in general. These uncontrolled habits can nonetheless influence the analysis, particularly if no difference in aggression is found. Why? If spanking increases aggression and so does violent television, the groups might be about equal in aggression but for completely different reasons. Without random assignment and control of extraneous variables, it is not possible to establish causality. Thus, quasi-experiments are used to describe relationships between existing groups. We call these pre-existing variables quasi-independent variables, and we call studies that use quasi-independent variables instead of randomly assigned true independent variables quasi-experiments.
In this activity, you will view a series of short videos about experimental designs. For each video, you will answer several questions:
Study 1: Reducing Stress
Background: Social Psychologist Mark Baldwin has studied the relationship between stressful environments and feelings of anxiety. He wondered if stressful work might lead people to expect and even search for negative messages, such as angry facial expressions, from other people. This can be a problem, because these negative messages then add to the stress which in turn increases anxiety. In this video, Dr. Baldwin shows a creative way to help people break away from this stress-induced tendency to search for negative messages.
Summary of the experiment:
Telemarketers were randomly assigned to one of two tasks. Some of them looked for happy faces among sets of non-smiling faces. This version of the independent variable was the “treatment condition” because the experimenter believed that this task would be the one that would reduce stress levels. The other task was to search for flowers with five petals. This was the control condition, because the experimenter believed that this task would have little if any beneficial effect on stress.
Because telemarketers were randomly assigned to one of the two tasks, we can call this a TRUE experiment. If they had been assigned based on personal characteristics or preferences or some other nonrandom basis, we would call this a quasi-experiment.
The primary dependent variable in this study was stress hormone level. Stress hormones are a biological indicator of the stress that the person is feeling, so level of the hormone is a valuable measure of actual stress experienced, and it does not depend on people’s personal statements about their feelings of stress, which might be distorted by various other factors. The researchers also looked at telemarketing sales, though the idea is that lower stress leads to better performance, so telemarketing sales are only indirectly affected by the task.
The research question was this: Does the specific task—looking for smiling faces versus flowers—affect stress hormone levels and telemarketing sales performance?
The results were clear. Stress hormones were 17% lower in the group that looked for smiling faces when compared to the control group. The only known difference between these groups was the task itself, so—tentatively—we can conclude that the task of looking for smiling faces caused the people in that group to have lower stress levels than similar telemarketers in the other group.
The results also showed that telemarketing sales in the treatment group (smiling faces) were 68% higher than sales for those in the control group (flowers). Here, too, we can tentatively conclude that the task of looking for smiling faces caused the people in that group to have higher telemarketing sales than similar telemarketers in the other group.
Study 2: Red for Romance?
Background: Psychologist Daniella Niesta is a researcher who is interested in factors that influence our motivation. Some factors that influence our behavior and thoughts can be unconscious and subtle. Dr. Niesta believes that colors often have meaning, and that red, particularly in a context of romantic attraction, carries a strong subconscious meaning for men. If she is correct, then colors associated with a romantic context should influence men’s feelings of attraction and interest.
Summary of the experiment:
Undergraduate males were randomly assigned to one of two conditions. The men in one group saw a picture of a woman wearing a red blouse and the men in the other condition saw the same picture, except that the blouse had been digitally altered to be blue. The color of the shirt was the independent variable.
The men then rated the woman on the question: “How attractive do you think this person is?” They answered on a 9-point scale, where 1 was labeled “not at all” and 9 was labeled “extremely.” They were also asked other related questions and answered these on a similar scale.
The results showed that the woman was rated as more attractive on the average when she was depicted in a red blouse than when she was in a blue blouse. Other related questions showed a similar higher level of attraction for the woman when the shirt was red.
Dr. Niesta also looked at other colors: green, gray, and white. The same pattern of results emerged when red was compared with each of these other colors. Men found the woman more attractive when she was wearing red.
Study 3: Genes and Sugar Consumption
Background: Dr. Ahmed El-Sohemy is a professor of nutritional science at the University of Toronto. In this study, he was interested in differences in people’s ability to regulate their own food intake, particularly their consumption of sugar. He divided people according to which version of a particular gene they had and studied their eating behavior. Watch the video for more details.
Summary of the experiment:
Previous research had indicated to Dr. El-Sohemy that the GLUT2 gene might be involved in regulation of sugar consumption. He recruited volunteers and, using blood samples, divided his volunteers into two groups according to the variation on the GLUT2 gene each person had. The gene with its two variations (alleles) was the quasi-independent variable.
He then had each person fill out an extensive questionnaire on their eating behaviors. For the study reported here, sugar consumption based on these reports was the dependent variable.
The researcher found that people with different versions of the gene consumed substantially different amounts of sugar. The researcher speculated that this gene might be associated with sensitivity to sugar so it could influence how much we eat before we feel we have had enough.
A researcher is interested in the effects of author prestige on the persuasiveness of a message. He recruits 50 college students to be in his study and randomly assigns 25 to be in the “high prestige” group and 25 to be in the “low prestige” group. All of the students read the same document about the importance of improving mental health services at the college. But the 25 students in the “high prestige” group read on the document that the author is the chairperson of the Psychology Department, while the 25 students in the “low prestige” group read on the document that the author is a psychology undergraduate student writing as part of a class assignment. After reading the article, everyone was asked to indicate how much they agreed with the idea that psychological services should be improved at the college.
A therapist develops a new approach to treating depression using exercise and diet along with regular counseling sessions. For all of her new clients who agree to be in her test of the therapy approach, she randomly assigns half to receive her new form of treatment and the other half to receive the traditional form of treatment that she has offered for many years. She uses a respected, standardized measure of depression before the beginning of therapy and then again after 3 months of treatment. She uses the difference between these before and after measures to indicate the change in level of depression.
A researcher is interested to see if older adults (70 to 80 years old) have more trouble with multitasking than middle-age adults (40 to 50 years old). Participants were seated in a driving simulator that allowed them to drive a simulated car through a variety of situations. They were asked to carry on a hands-free cellphone conversation with an experimenter located in a different room during the driving test. The experimenters measured driving abilities, including avoidance of problems, speed, braking effectiveness, and other related behaviors.
Good research is valid research. When research is valid, the conclusions drawn by the researcher are legitimate. For instance, if a researcher concludes that participating in psychotherapy reduces anxiety, or that taller people are smarter than shorter people, the research is valid only if the therapy really works or if taller people really are smarter. Unfortunately, there are many threats to the validity of research, and these threats may sometimes lead to unwarranted conclusions. Often, and despite researchers’ best intentions, some of the research reported on websites as well as in newspapers, magazines, and even scientific journals is invalid. Validity is not an all-or-nothing proposition, which means that some research is more valid than other research. Only by understanding the potential threats to validity will you be able to make knowledgeable decisions about the conclusions that can or cannot be drawn from a research project. Here we discuss two of these major types of threats to the validity of research: internal and external validity.
Two Threats to the Validity of Research
Internal validity refers to the extent to which we can trust the conclusions that have been drawn about the causal relationship between the independent and dependent variables. Internal validity applies primarily to experimental research designs, in which the researcher hopes to conclude that the independent variable has caused the dependent variable. Internal validity is maximized when the research is free from the presence of confounding variables—variables other than the independent variable on which the participants in one experimental condition differ systematically from those in other conditions.
Consider an experiment in which a researcher tested the hypothesis that drinking alcohol makes members of the opposite sex look more attractive. Participants older than 21 years of age were randomly assigned either to drink orange juice mixed with vodka or to drink orange juice alone. To eliminate the need for deception, the participants were told whether or not their drinks contained vodka. After enough time had passed for the alcohol to take effect, the participants were asked to rate the attractiveness of pictures of members of the opposite sex. The results of the experiment showed that, as predicted, the participants who drank the vodka rated the photos as significantly more attractive.
If you think about this experiment for a minute, it may occur to you that although the researcher wanted to draw the conclusion that the alcohol caused the differences in perceived attractiveness, the expectation of having consumed alcohol is confounded with the presence of alcohol. That is, the people who drank alcohol also knew they drank alcohol, and those who did not drink alcohol knew they did not. It is possible that simply knowing that they were drinking alcohol, rather than the effect of the alcohol itself, may have caused the differences as shown in the following figure. One solution to the problem of potential expectancy effects is to tell both groups that they are drinking orange juice and vodka but really give alcohol to only half of the participants (it is possible to do this because vodka has very little smell or taste). If differences in perceived attractiveness are found, the experimenter could then confidently attribute them to the alcohol rather than to the expectancies about having consumed alcohol.
Another threat to internal validity can occur when the experimenter knows the research hypothesis and also knows which experimental condition the participants are in. The outcome is the potential for experimenter bias, a situation in which the experimenter subtly treats the research participants in the various experimental conditions differently, resulting in an invalid confirmation of the research hypothesis. In one study demonstrating experimenter bias, Rosenthal and Fode asked twelve students to test a research hypothesis concerning maze learning in rats. Although it was not initially revealed to them, the students were actually the participants in an experiment. Six of the students were randomly told that the rats they would be testing had been bred to be highly intelligent, whereas the other six students were led to believe that the rats had been bred to be unintelligent. In reality there were no differences between the rats given to the two groups of students. When the students returned with their data, a startling result emerged. The rats run by students who expected them to be intelligent showed significantly better maze learning than the rats run by students who expected them to be unintelligent. Somehow the students’ expectations influenced their data. They evidently did something different when they tested the rats, perhaps subtly changing how they timed the maze running or how they treated the rats. And this experimenter bias probably occurred entirely outside their awareness.
To avoid experimenter bias, researchers frequently run experiments in which the researchers are blind to condition. This means that although the experimenters know the research hypotheses, they do not know which conditions the participants are assigned to. Experimenter bias cannot occur if the researcher is blind to condition. In a double-blind experiment, both the researcher and the research participants are blind to condition. For instance, in a double-blind trial of a drug, the researcher does not know whether the drug being given is the real drug or the ineffective placebo, and the patients also do not know which they are getting. Double-blind experiments eliminate the potential for experimenter effects and at the same time eliminate participant expectancy effects.
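One practical way to keep an experimenter blind to condition is to have a third party replace the real condition names with opaque codes before data collection begins. The sketch below illustrates that idea; the participant IDs, condition names, and function are hypothetical, not part of any study described above.

```python
import random

def blind_conditions(assignments):
    """Replace real condition names with opaque codes.
    `assignments` maps participant -> real condition (e.g., "drug"/"placebo").
    Returns (blinded, key): the experimenter works only from `blinded`;
    `key` is held by a third party until data collection is finished."""
    conditions = sorted(set(assignments.values()))
    codes = [f"Condition {chr(65 + i)}" for i in range(len(conditions))]
    random.shuffle(codes)  # so the code letter itself reveals nothing
    key = dict(zip(conditions, codes))
    blinded = {person: key[cond] for person, cond in assignments.items()}
    return blinded, key

assignments = {"P001": "drug", "P002": "placebo", "P003": "drug"}
blinded, key = blind_conditions(assignments)
# The experimenter now sees only "Condition A"/"Condition B" labels.
```

In a double-blind drug trial, the same principle extends to the participants: the pills themselves carry only the coded labels, so neither side knows who received the placebo until the key is unsealed.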
While internal validity refers to conclusions drawn about events that occurred within the experiment, external validity refers to the extent to which the results of a research design can be generalized beyond the specific way the original experiment was conducted. Generalization is the extent to which relationships among conceptual variables can be demonstrated in a wide variety of people and a wide variety of manipulated or measured variables.
Psychologists who use college students as participants in their research may be concerned about generalization, wondering if their research will generalize to people who are not college students. And researchers who study the behaviors of employees in one company may wonder whether the same findings would translate to other companies. Whenever there is reason to suspect that a result found for one sample of participants would not hold up for another sample, then research may be conducted with these other populations to test for generalization.
Recently, many psychologists have been interested in testing hypotheses about the extent to which a result will replicate across people from different cultures. For instance, a researcher might test whether the effects on aggression of viewing violent video games are the same for Japanese children as they are for American children by showing violent and nonviolent films to a sample of both Japanese and American schoolchildren. If the results are the same in both cultures, then we say that the results have generalized, but if they are different, then we have learned a limiting condition of the effect.
Unless the researcher has a specific reason to believe that generalization will not hold, it is appropriate to assume that a result found in one population (even if that population is college students) will generalize to other populations. Because the investigator can never demonstrate that the research results generalize to all populations, it is not expected that the researcher will attempt to do so. Rather, the burden of proof rests on those who claim that a result will not generalize.
Because any single test of a research hypothesis will always be limited in terms of what it can show, important advances in science are never the result of a single research project. Advances occur through the accumulation of knowledge that comes from many different tests of the same theory or research hypothesis. These tests are conducted by different researchers using different research designs, participants, and operationalizations of the independent and dependent variables. The process of repeating previous research, which forms the basis of all scientific inquiry, is known as replication.
Situation 1: A researcher wants to know if creativity can be taught. She designs a curriculum for teaching creative drawing to elementary school children. Then (with the permission of parents and the school) she randomly assigns 30 students to participate in several weekly sessions of creativity training. Another 30 randomly chosen students participate in a weekly session where they can draw, but they receive no creativity instruction. At the end of the six weeks of instruction, she has each child draw a picture. Five local school art teachers, who are also friends of hers, serve as judges. Each picture has a label showing whether the child was in the “creative training” group or the “no creative training” group. The art teacher-judges rate each picture on a 10-point scale, where 10 means “very high in creativity” and 1 means “very low in creativity.” The results were that the children in the “creative training” group received an average rating of 8.5 and the children in the “no creative training” group received an average rating of 4.0. Based on these results, the researcher claimed that her creativity training curriculum succeeded in teaching students to be more creative.
Situation 2: A researcher at Harvard University is interested in how much people enjoy film documentaries. He recruits 40 students enrolled in the documentary filmmaking program at Harvard. He has each person watch 5 recently produced documentaries about poverty, pollution, and European monetary policy. The students then rated each documentary on several questions related to enjoyment (e.g., How much did you enjoy this movie? Would you take a date to see this movie?). He also had the students watch and rate 5 recently produced Hollywood action movies. The students rated the documentaries as more enjoyable than the Hollywood action movies. Based on this, the researcher states that movie producers should move from the “dying art form” of action movies to the “new wave” of important issue documentaries because people now prefer documentaries.
Situation 3: The following descriptions are from two actual research studies. Read both studies and answer the following questions about the validity of Experiment 1.
Experiment 1: In 1950, the Pepsi Cola Corporation, now PepsiCo, Inc., conducted the “Pepsi Challenge” by randomly assigning individuals to taste either Pepsi or Coca-Cola. The researchers labeled the cups with only an “S” for Pepsi or an “L” for Coca-Cola and asked the participants to rate how much they liked the beverage. The research showed that participants overwhelmingly preferred cup S over cup L, and the researchers concluded that Pepsi was preferred to Coca-Cola.
Experiment 2: In 1983, independent researchers modified the 1950s study in which randomly assigned participants tasted cola from two cups, one marked L and the other marked S. The same product (either Pepsi or Coca-Cola) was placed in both cups. Just as in the 1950s study, the participants overwhelmingly reported that cup S contained the better-tasting product regardless of whether cup S contained Pepsi or Coca-Cola.
The researchers then extended their study by conducting another experiment in which participants were asked their preference for either Pepsi or Coca-Cola. The participants drank from a Pepsi bottle (which contained Coke) and from a Coke bottle (which contained Pepsi). The results indicated that the participants were significantly influenced by the visible label of the product they preferred and not by taste differences between the two products. The researchers concluded that a taste comparison of colas should avoid using any type of labels, even presumably neutral ones like letters of the alphabet, since such labels may have more powerful influences on product comparisons than taste differences.
In 1986 Anne Adams was working as a cell biologist at the University of Toronto in Ontario, Canada. She took a leave of absence from her work to care for a sick child, and while she was away, she completely changed her interests, dropping biology entirely and turning her attention to art. In 1994 she completed her painting Unravelling Boléro, a translation of Maurice Ravel’s famous orchestral piece, Boléro, onto canvas. As you can see in the following image, this artwork is filled with themes of repetition. Each bar of music is represented by a lacy vertical figure, with the height representing volume, the shape representing note quality, and the color representing the music’s pitch. Like Ravel’s music (see the following video), which is a hypnotic melody consisting of two melodial themes repeated eight times over 340 musical bars, the theme in the painting repeats and builds, leading to a dramatic change in color from blue to orange and pink, a representation of Boléro’s sudden and dramatic climax.
Shortly after finishing the painting, Adams began to experience behavioral problems, including increased difficulty speaking. Neuroimages of Adams’s brain taken during this time show that regions in the front part of her brain, which are normally associated with language processing, had begun to deteriorate, while at the same time, regions of the brain responsible for the integration of information from the five senses were unusually well developed. The deterioration of the frontal cortex is a symptom of frontotemporal dementia, a disease associated with changes in artistic and musical tastes and skills as well as with an increase in repetitive behaviors.
What Adams did not know as she worked on her painting was that her brain may have been undergoing the same changes that Ravel’s had undergone 66 years earlier. In fact, it appears that Ravel may have suffered from the same neurological disorder. Ravel composed Boléro at age 53, when he himself was beginning to show behavioral symptoms that interfered with his ability to move and speak. Scientists have concluded, on the basis of an analysis of his written notes and letters, that Ravel was also experiencing the effects of frontotemporal dementia.  If Adams and Ravel were both affected by the same disease, it could explain why they both became fascinated with the repetitive aspects of their arts, and it would present a remarkable example of the influence of our brains on behavior.
Every behavior begins with biology. Our behaviors, as well as our thoughts and feelings, are produced by the actions of our brains, nerves, muscles, and glands. In this unit, we begin our journey into the world of psychology by considering the biological makeup of the human being, including the most remarkable of human organs—the brain. We consider the structure of the brain and the methods psychologists use to study the brain and to understand how it works. Let’s begin by looking at neurons, which are nerve cells involved with all information processing in your brain.
A neuron is a cell in the nervous system whose function it is to receive and transmit information. Amazingly, your nervous system is composed of more than 100 billion neurons!
As you can see in the following figure, neurons consist of three major parts: a cell body, or soma, which contains the nucleus of the cell and keeps the cell alive; a branching, treelike fiber known as the dendrite, which collects information from other cells and sends the information to the soma; and a long, segmented fiber known as the axon, which transmits information away from the cell body toward other neurons or to the muscles and glands.
Some neurons have hundreds or even thousands of dendrites, and these dendrites may be branched to allow the cell to receive information from thousands of other cells. The axons are also specialized, and some, such as those that send messages from the spinal cord to the muscles in the hands or feet, may be very long—even up to several feet in length. To improve the speed of their communication, and to keep their electrical charges from shorting out with other neurons, axons are often surrounded by a myelin sheath. The myelin sheath is a layer of fatty tissue surrounding the axon of a neuron that both acts as an insulator and allows faster transmission of the electrical signal. Axons branch out toward their ends, and at the tip of each branch is a terminal button.
The nervous system operates using an electrochemical process (see the following video). An electrical charge moves through the neuron, and chemicals are used to transmit information between neurons. Within the neuron, when a signal is received by the dendrites, it is transmitted to the soma in the form of an electrical signal, and if the signal is strong enough, it may then be passed to the axon and then to the terminal buttons. If the signal reaches the terminal buttons, they are signaled to emit chemicals known as neurotransmitters, which communicate with other neurons across the spaces between the cells, known as synapses. You will be learning more about synapses and why they are important later on in this module.
The electrical signal moves through the neuron as a result of changes in the electrical charge of the axon. Normally, the axon remains in the resting potential, a state in which the interior of the neuron contains a greater number of negatively charged ions than does the area outside the cell. When the segment of the axon closest to the cell body is stimulated by an electrical signal from the dendrites, and that signal is strong enough to pass a certain level, or threshold, the cell membrane in this first segment opens its gates, allowing positively charged sodium ions that were previously kept out to enter. This change in electrical charge that occurs in a neuron when a nerve impulse is transmitted is known as the action potential. Once the action potential occurs, the number of positive ions exceeds the number of negative ions in this segment, and the segment temporarily becomes positively charged.
As you can see in the following figure, the axon is segmented by a series of breaks between the sausage-like segments of the myelin sheath. Each of these gaps is a node of Ranvier. The electrical charge moves down the axon from segment to segment, in a set of small jumps, moving from node to node. When the action potential occurs in the first segment of the axon, it quickly creates a similar change in the next segment, which then stimulates the next segment, and so forth, as the positive electrical impulse continues all the way down to the end of the axon. As each new segment becomes positive, the membrane in the prior segment closes up again, and the segment returns to its negative resting potential. In this way, the action potential is transmitted along the axon toward the terminal buttons. The entire response along the length of the axon is very fast—it can happen up to 1,000 times each second.
An important aspect of the action potential is that it operates in an all-or-nothing manner. What this means is that the neuron either fires completely, such that the action potential moves all the way down the axon, or it does not fire at all. Thus, a neuron can send a stronger signal to the neurons down the line by firing more often, but not by firing more strongly. Furthermore, the neuron is prevented from repeated firing by the presence of a refractory period—a brief time after the firing of the axon in which the axon cannot fire again because the neuron has not yet returned to its resting potential.
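The all-or-nothing principle can be sketched in a short simulation. In this toy model (the time constants and threshold are made-up illustrative values, not physiological measurements), every spike is identical; a stronger stimulus changes only how often the neuron fires, and the refractory period caps the maximum firing rate.

```python
def spike_times(stimulus_strength, duration_ms=100.0, threshold=1.0,
                refractory_ms=1.0):
    """Toy all-or-nothing neuron: charge builds at a rate set by the
    stimulus; each time it crosses threshold the neuron fires a
    full-size spike, resets, and waits out a refractory period."""
    spikes = []
    t = 0.0
    charge = 0.0
    dt = 0.01  # milliseconds per simulation step
    while t < duration_ms:
        charge += stimulus_strength * dt
        if charge >= threshold:
            spikes.append(round(t, 2))
            charge = 0.0
            t += refractory_ms  # cannot fire again during this window
        t += dt
    return spikes

weak = spike_times(stimulus_strength=0.05)
strong = spike_times(stimulus_strength=0.5)
# A stronger stimulus produces more spikes, not bigger ones.
print(len(weak), len(strong))
```

However strong the stimulus, the refractory period limits how many spikes can fit into the 100-millisecond window, mirroring the limit on real neurons' firing rates.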
Not only do neural signals travel via electrical charges within the neuron, but they also travel via chemical transmission between the neurons. As we just learned, neurons are separated by junction areas known as synapses, areas where the terminal buttons at the end of the axon of one neuron nearly, but don’t quite, touch the dendrites of another. The synapses provide a remarkable function because they allow each axon to communicate with many dendrites in neighboring cells. Because a neuron may have synaptic connections with thousands of other neurons, the communication links among the neurons in the nervous system allow for a highly sophisticated communication system.
When the electrical impulse from the action potential reaches the end of the axon, it signals the terminal buttons to release neurotransmitters into the synapse. A neurotransmitter is a chemical that relays signals across the synapses between neurons. Neurotransmitters travel across the synaptic space between the terminal button of one neuron and the dendrites of other neurons, where they bind to the dendrites in the neighboring neurons. Furthermore, different terminal buttons release different neurotransmitters, and different dendrites are particularly sensitive to different neurotransmitters. The dendrites admit the neurotransmitters only if they are the right shape to fit in the receptor sites on the receiving neuron. For this reason, the receptor sites and neurotransmitters are often compared to a lock and key, as shown in the following figure.
To explore the process of neurotransmission, watch this animation.
When neurotransmitters are accepted by the receptors on the receiving neurons, their effect may be either excitatory (i.e., they make the cell more likely to fire) or inhibitory (i.e., they make the cell less likely to fire). Furthermore, if the receiving neuron is able to accept more than one neurotransmitter, it is influenced by the excitatory and inhibitory processes of each. If the excitatory effects of the neurotransmitters are greater than the inhibitory influences of the neurotransmitters, the neuron moves closer to its firing threshold, and if it reaches the threshold, the action potential and the process of transferring information through the neuron begins.
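The summation just described amounts to a simple rule: add up the excitatory inputs, subtract the inhibitory ones, and fire only if the net total reaches the threshold. A minimal sketch (the input values and threshold below are invented for illustration):

```python
def neuron_fires(excitatory_inputs, inhibitory_inputs, threshold=1.0):
    """Return True if the net input pushes the neuron past threshold.
    Excitatory inputs move the neuron toward firing; inhibitory
    inputs move it away."""
    net_input = sum(excitatory_inputs) - sum(inhibitory_inputs)
    return net_input >= threshold

# Excitation outweighs inhibition enough to reach threshold: fires.
print(neuron_fires([0.6, 0.5, 0.4], [0.3]))   # True
# Stronger inhibition keeps the same excitation below threshold: silent.
print(neuron_fires([0.6, 0.5, 0.4], [0.8]))   # False
```

Note that the same excitatory inputs can either trigger or fail to trigger firing depending on the inhibitory activity present at the same time, which is the key point of the paragraph above.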
Neurotransmitters that are not accepted by the receptor sites must be removed from the synapse in order for the next potential stimulation of the neuron to happen. This process occurs in part through the breaking down of the neurotransmitters by enzymes, and in part through reuptake, a process in which neurotransmitters that are in the synapse are reabsorbed into the transmitting terminal buttons, ready to again be released after the neuron fires.
Watch the animation and then complete the following exercises.
More than 100 chemical substances produced in the body have been identified as neurotransmitters, and these substances have a wide and profound effect on emotion, cognition, and behavior. Neurotransmitters regulate our appetite, memory, and emotions, as well as our muscle action and movement. And as you can see in the following table, some neurotransmitters are also associated with psychological and physical diseases.
Drugs that we might ingest—either for medical reasons or recreationally—can act like neurotransmitters to influence our thoughts, feelings, and behavior. An agonist is a drug that has chemical properties similar to a particular neurotransmitter and thus mimics the effects of the neurotransmitter. When an agonist is ingested, it binds to the receptor sites in the dendrites to excite the neuron, acting as if more of the neurotransmitter had been present. As an example, cocaine is an agonist for the neurotransmitter dopamine. Because dopamine produces feelings of pleasure when it is released by neurons, cocaine creates similar feelings when it is ingested. An antagonist is a drug that reduces or stops the normal effects of a neurotransmitter. When an antagonist is ingested, it binds to the receptor sites in the dendrite, thereby blocking the neurotransmitter. As an example, the poison curare is an antagonist for the neurotransmitter acetylcholine. When the poison enters the brain, it binds to the dendrites, stops communication among the neurons, and usually causes death. Still other drugs work by blocking the reuptake of the neurotransmitter itself—when reuptake is reduced by the drug, more neurotransmitter remains in the synapse, increasing its action.
Table: The Major Neurotransmitters and Their Functions
If you were someone who understood brain anatomy and were to look at the brain of an animal that you had never seen before, you would nevertheless be able to deduce the likely capacities of the animal, because the brains of all animals are much alike in overall form. In each animal the brain is layered, and the basic structures of the brain are similar. The innermost structures of the brain—the parts nearest the spinal cord—are the oldest part of the brain, and these areas carry out the same functions they did for our distant ancestors. The “old brain” regulates basic survival functions, such as breathing, moving, resting, and feeding, and creates our experiences of emotion. Mammals, including humans, have developed further brain layers that provide more advanced functions—for instance, better memory, more sophisticated social interactions, and more complex emotional experiences. Humans have a large and highly developed outer layer known as the cerebral cortex, which makes us particularly adept at these processes.
The brain stem is the oldest and innermost region of the brain. It controls the most basic functions of life, including breathing, attention, and motor responses. The brain stem begins where the spinal cord enters the skull and forms the medulla, the area of the brain stem that controls heart rate and breathing. In many cases, the medulla alone is sufficient to maintain life—animals that have their brains severed above the medulla are still able to eat, breathe, and even move. The spherical shape above the medulla is the pons, a structure in the brain stem that helps control the movements of the body, playing a particularly important role in balance and walking. The pons is also important in sleeping, waking, dreaming, and arousal.
Running through the medulla and the pons is a long, narrow network of neurons known as the reticular formation. The job of the reticular formation is to filter out some of the stimuli that are coming into the brain from the spinal cord and to relay the remainder of the signals to other areas of the brain. The reticular formation also plays important roles in walking, eating, sexual activity, and sleeping. When electrical stimulation is applied to the reticular formation of an animal, it immediately becomes fully awake, and when the reticular formation is severed from the higher brain regions, the animal falls into a deep coma.
Two structures near the brain stem are also vital for basic survival functions. The thalamus is the egg-shaped structure sitting just above the brain stem that applies still more filtering to the sensory information coming from the spinal cord and through the reticular formation, and it relays some of these remaining signals to the higher brain levels.  The thalamus also receives some of the higher brain’s replies, forwarding them to the medulla and the cerebellum. The thalamus is also important in sleep because it shuts off incoming signals from the senses, allowing us to rest.
The cerebellum (literally, “little brain”) consists of two wrinkled ovals behind the brain stem. It functions to coordinate voluntary movement. People who have damage to the cerebellum have difficulty walking, keeping their balance, and holding their hands steady. Consuming alcohol influences the cerebellum, which is why people who are drunk have difficulty walking in a straight line. Also, the cerebellum contributes to emotional responses, helps us discriminate between different sounds and textures, and is important in learning. 
Whereas the primary function of the brain stem is to regulate the most basic aspects of life, including motor functions, the limbic system is largely responsible for memory and emotions, including our responses to reward and punishment. The limbic system is a set of distinct and important brain structures located beneath and around the thalamus. Limbic system structures interact with the rest of the brain in complex ways, and they are extremely important for memory and control of emotional responses. They include the amygdala, the hypothalamus, and the hippocampus, among other structures.
The amygdala consists of two almond-shaped clusters (amygdala comes from the Latin word for almond) and is primarily responsible for regulating our perceptions of and reactions to aggression and fear. The amygdala has connections to other bodily systems related to fear, including the sympathetic nervous system (which we will see later is important in fear responses), facial responses (which perceive and express emotions), the processing of smells, and the release of neurotransmitters related to stress and aggression.  In a 1939 study, Klüver and Bucy  damaged the amygdala of an aggressive rhesus monkey. They found that the once angry animal immediately became passive and no longer responded to fearful situations with aggressive behavior. Electrical stimulation of the amygdala in other animals also influences aggression. In addition to helping us experience fear, the amygdala helps us learn from situations that create fear. When we experience events that are dangerous, the amygdala stimulates the brain to remember the details of the situation so that we learn to avoid it in the future. 
Located just under the thalamus (hence its name), the hypothalamus is a brain structure that contains a number of small areas that perform a variety of functions. Through its many interactions with other parts of the brain, the hypothalamus helps regulate body temperature, hunger, thirst, and sex drive and responds to the satisfaction of these needs by creating feelings of pleasure.
The hippocampus consists of two “horns” that curve back from the amygdala. The hippocampus is important in storing information in long-term memory. If the hippocampus is seriously damaged on both sides of the brain, a person may be unable to store new long-term memories, living instead in a strange world where everything he or she experiences just fades away, even while older memories from the time before the damage are untouched.
All animals have adapted to their environments by developing abilities that help them survive. Some animals have hard shells, others run extremely fast, and some have acute hearing. Human beings do not have any of these particular characteristics, but we do have one big advantage over other animals—we are very, very smart.
You might think we should be able to determine the intelligence of an animal by looking at the ratio of the animal’s brain weight to the weight of its entire body. But brain size is not a measure of intelligence. The elephant’s brain is one-thousandth of its body weight, but the whale’s brain is only one-ten-thousandth of its body weight. On the other hand, although the human brain is one-sixtieth of its body weight, the mouse’s brain represents one fortieth of its body weight. Despite these comparisons, elephants do not seem 10 times smarter than whales, and humans definitely seem smarter than mice.
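The ratios in this paragraph are easy to check directly. The quick calculation below (using the fractions as stated in the text, not measured weights) shows why the brain-to-body ratio is a poor index of intelligence:

```python
# Brain weight as a fraction of body weight, as given in the text.
brain_to_body = {
    "elephant": 1 / 1_000,
    "whale": 1 / 10_000,
    "human": 1 / 60,
    "mouse": 1 / 40,
}

# By this measure the elephant scores 10 times higher than the whale...
print(round(brain_to_body["elephant"] / brain_to_body["whale"]))  # 10
# ...and the mouse scores higher than the human, which clearly does
# not track intelligence.
print(brain_to_body["mouse"] > brain_to_body["human"])  # True
```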
The key to the advanced intelligence of humans is not found in the size of our brains. What sets humans apart from other animals is our larger cerebral cortex—the outer barklike layer of our brain that allows us to so successfully use language, acquire complex skills, create tools, and live in social groups. In humans, the cerebral cortex is wrinkled and folded rather than smooth, as it is in most other animals. This creates a much greater surface area and size, and allows increased capacities for learning, remembering, and thinking. The folding of the cerebral cortex is called corticalization.
Although the cortex is only about one tenth of an inch thick, it makes up more than 80% of the brain’s weight. The cortex contains about 20 billion nerve cells and 300 trillion synaptic connections.  Supporting all these neurons are billions more glial cells (glia), cells that surround and link to the neurons, protecting them, providing them with nutrients, and absorbing unused neurotransmitters. The glia come in different forms and have different functions. For instance, the myelin sheath surrounding the axon of many neurons is a type of glial cell. The glia are essential partners of neurons, without which the neurons could not survive or function. 
The cerebral cortex is divided into two hemispheres, and each hemisphere is divided into four lobes, each separated by folds known as fissures. If we look at the cortex starting at the front of the brain and moving over the top, we see first the frontal lobe (behind the forehead), which is responsible primarily for thinking, planning, memory, and judgment. Following the frontal lobe is the parietal lobe, which extends from the middle to the back of the skull and is responsible primarily for processing information about touch. Then comes the occipital lobe, at the very back of the skull, which processes visual information. Finally, in front of the occipital lobe (pretty much between the ears) is the temporal lobe, responsible primarily for hearing and language.
When the German physicians Gustav Fritsch and Eduard Hitzig (1870/2009) applied mild electrical stimulation to different parts of a dog’s cortex, they discovered that they could make different parts of the dog’s body move. They also discovered an important and unexpected principle of brain activity: stimulating the right side of the brain produced movement in the left side of the dog’s body, and conversely, stimulating the left brain affected the right side of the body. This finding follows from a general principle about how the brain is structured, called contralateral control. The brain is wired such that in most cases the left hemisphere receives sensations from and controls the right side of the body, and vice versa.
Fritsch and Hitzig also found that the movement that followed the brain stimulation occurred only when they stimulated a specific arch-shaped region that runs across the top of the brain from ear to ear, at the rear of the frontal lobe just in front of the parietal lobe. Fritsch and Hitzig had discovered the motor cortex, the part of the cortex that controls and executes movements of the body by sending signals to the cerebellum and the spinal cord. More recently, researchers have mapped the motor cortex even more fully by providing mild electrical stimulation to different areas of the motor cortex in fully conscious participants while observing their bodily responses (because the brain has no pain receptors, these participants feel no pain). As you can see in the following figure, this research has revealed that the motor cortex is specialized for providing control over the body: the parts of the body that require more precise and finer movements, such as the face and hands, are allotted the greatest amount of cortical space.
Just as the motor cortex sends messages out to specific parts of the body, the somatosensory cortex, an area just behind and parallel to the motor cortex at the front of the parietal lobe, receives information from the skin’s sensory receptors and the movements of different body parts. Again, the more sensitive the body region, the more area is dedicated to it in the somatosensory cortex. Our sensitive lips, for example, occupy a large area in the somatosensory cortex, as do our fingers and genitals.
Other areas of the cortex process other types of sensory information. The visual cortex is the area located in the occipital lobe (at the very back of the brain) that processes visual information. If you were stimulated in the visual cortex, you would see flashes of light or color, and perhaps you have had the experience of “seeing stars” when you were hit in or fell on the back of your head. The temporal lobe, located on the lower side of each hemisphere, contains the auditory cortex, which is responsible for hearing and language. The temporal lobe also processes some visual information, providing us with the ability to name the objects around us. 
As you can see in the preceding figure, the motor and sensory areas of the cortex account for a relatively small part of the total cortex. The remainder of the cortex is made up of association areas in which sensory and motor information are combined and associated with our stored knowledge. These association areas are responsible for most of the things that make human beings seem human—the higher mental functions, such as learning, thinking, planning, judging, moral reflecting, figuring, and spatial reasoning.
The control of some bodily functions, such as movement, vision, and hearing, is performed in specific areas of the cortex, and if an area is damaged, the individual will likely lose the ability to perform the corresponding function. For instance, if an infant suffers damage to facial recognition areas in the temporal lobe, it is likely that he or she will never be able to recognize faces.  However, the brain is not divided in an entirely rigid way. The brain’s neurons have a remarkable capacity to reorganize and extend themselves to carry out particular functions in response to the needs of the organism and to repair damage. As a result, the brain constantly creates new neural communication routes and rewires existing ones. Neuroplasticity is the brain’s ability to change its structure and function in response to experience or damage. Neuroplasticity enables us to learn and remember new things and adjust to new experiences.
Our brains are the most “plastic” when we are young children, as it is during this time that we learn the most about our environment. And neuroplasticity continues to be observed even in adults.  The principles of neuroplasticity help us understand how our brains develop to reflect our experiences. For instance, accomplished musicians have a larger auditory cortex compared with the general population  and also require less neural activity to play their instruments than do novices.  These observations reflect the changes in the brain that follow our experiences.
Plasticity is also observed when damage occurs to the brain or to parts of the body that are represented in the motor and sensory cortexes. When a tumor in the left hemisphere of the brain impairs language, the right hemisphere begins to compensate to help the person recover the ability to speak.  And if a person loses a finger, the area of the sensory cortex that previously received information from the missing finger begins to receive input from adjacent fingers, causing the remaining digits to become more sensitive to touch. 
Although neurons cannot repair or regenerate themselves as skin and blood vessels can, new evidence suggests that the brain can engage in neurogenesis, the forming of new neurons.  These new neurons originate deep in the brain and may then migrate to other brain areas where they form new connections with other neurons.  This leaves open the possibility that someday scientists might be able to “rebuild” damaged brains by creating drugs that help grow neurons.
We learned that the left hemisphere of the brain primarily senses and controls the motor movements on the right side of the body, and vice versa. This fact provides an interesting way to study brain lateralization—the idea that the left and the right hemispheres of the brain are specialized to perform different functions. Gazzaniga, Bogen, and Sperry  studied a patient, known as W. J., who had undergone an operation to relieve severe seizures. In this surgery, the region that normally connects the two halves of the brain and supports communication between the hemispheres, known as the corpus callosum, is severed. As a result, the patient essentially becomes a person with two separate brains. Because the left and right hemispheres are separated, each hemisphere develops a mind of its own, with its own sensations, concepts, and motivations. 
In their research, Gazzaniga and his colleagues tested the ability of W. J. to recognize and respond to objects and written passages that were presented to only the left or to only the right brain hemispheres. The researchers had W. J. look straight ahead and then flashed, for a fraction of a second, a picture of a geometric shape to the left of where he was looking. By doing so, they assured that—because the two hemispheres had been separated—the image of the shape was experienced only in the right brain hemisphere (remember that sensory input from the left side of the body is sent to the right side of the brain). Gazzaniga and his colleagues found that W. J. was able to identify what he had been shown when he was asked to pick the object from a series of shapes, using his left hand, but that he could not do so when the object was shown in the right visual field. Conversely, W. J. could easily read written material presented in the right visual field (and thus experienced in the left hemisphere) but not when it was presented in the left visual field.
The information presented on the left side of our field of vision is transmitted to the right brain hemisphere, and vice versa. In split-brain patients, the severed corpus callosum does not permit information to be transferred between hemispheres, which allows researchers to learn about the functions of each hemisphere.
This research, and many other studies following it, demonstrated that the two brain hemispheres specialize in different abilities. In most people, the ability to speak, write, and understand language is located in the left hemisphere. This is why W. J. could read passages that were presented on the right side and thus transmitted to the left hemisphere, but could not read passages that were only experienced in the right brain hemisphere. The left hemisphere is also better at math and at judging time and rhythm. It is also superior in coordinating the order of complex movements—for example, lip movements needed for speech. The right hemisphere has only limited verbal abilities, and yet it excels in perceptual skills. The right hemisphere is able to recognize objects, including faces, patterns, and melodies, and it can put a puzzle together or draw a picture. This is why W. J. could pick out the image when he saw it on the left, but not the right, visual field.
Although Gazzaniga’s research demonstrated that the brain is in fact lateralized, such that the two hemispheres specialize in different activities, this does not mean that when people behave in a certain way or perform a certain activity they are using only one hemisphere of their brains at a time. That would be drastically oversimplifying the concept of brain differences. We normally use both hemispheres at the same time, and the difference between the abilities of the two hemispheres is not absolute. 
One problem in understanding the brain is that it is difficult to get a good picture of what is going on inside it. But a variety of empirical methods allow scientists to look at brains in action, and the means by which to study the brain have improved dramatically in recent years with the development of new neuroimaging techniques. In this section, we consider the various techniques that psychologists use to learn about the brain. Each technique has some advantages, and when we put them together, we begin to get a relatively good picture of how the brain functions and which brain structures control which activities.
Perhaps the most immediate approach to visualizing and understanding the structure of the brain is to directly analyze the brains of human cadavers. When Albert Einstein died in 1955, his brain was removed and stored for later analysis. Researcher Marian Diamond later analyzed a section of Einstein’s cortex to investigate its characteristics. Diamond was interested in the role of glia, and she hypothesized that the ratio of glial cells to neurons was an important determinant of intelligence. To test this hypothesis, she compared the ratio of glia to neurons in Einstein’s brain with the ratio in the preserved brains of 11 more “ordinary” men. However, Diamond was able to find support for only part of her research hypothesis. Although she found that Einstein’s brain had relatively more glia in all the areas she studied than did the control group, the difference was statistically significant in only one of the areas she tested. Diamond admitted that a limitation of her study was that she had only one Einstein to compare with 11 ordinary men.
An advantage of the cadaver approach is that the brains can be fully studied, but an obvious disadvantage is that the brains are no longer active. In other cases, however, we can study living brains. The brains of living human beings may be damaged, for instance, as a result of strokes, falls, automobile accidents, gunshots, or tumors. These damages are called lesions. In rare circumstances, brain lesions may be created intentionally through surgery, for example, to remove brain tumors or (as in split-brain patients) to reduce the effects of epilepsy. Psychologists also sometimes intentionally create lesions in animals to study the effects on their behavior. In so doing, they hope to be able to draw inferences about the likely functions of human brains from the effects of the lesions in animals.
Lesions allow the scientist to observe any loss of brain function that may occur. For instance, when an individual suffers a stroke, a blood clot deprives part of the brain of oxygen, killing the neurons in the area and rendering that area unable to process information. In some cases, the result of the stroke is a specific lack of ability. For instance, if the stroke influences the occipital lobe, then vision may suffer, and if the stroke influences the areas associated with language or speech, these functions will suffer. In fact, our earliest understanding of the specific areas involved in speech and language was gained by studying patients who had experienced strokes.
It is now known that a good part of our social decision-making abilities are located in the frontal lobe, and at least some of this understanding comes from lesion studies. For instance, consider the well-known case of Phineas Gage, a 25-year-old railroad worker who, as a result of an explosion in 1848, had an iron rod driven into his right cheek and out through the top of his skull, causing major damage to his frontal lobe.  Remarkably, Gage was able to return to work after the wounds healed, but he no longer seemed to be the same person to those who knew him. The amiable, soft-spoken Gage had become irritable, rude, irresponsible, and dishonest. Although there are questions about the interpretation of this case study,  it did provide early evidence that the frontal lobe is involved in personality, emotion, inhibitory control, and goal-setting abilities.
In addition to lesion studies, researchers rely on a variety of neuroimaging methods that can record electrical activity in the brain, visualize blood flow and areas of brain activity in real time, provide cross-sectional images, and even produce computer-generated three-dimensional composites of the brain.
The single-unit recording method, in which a thin microelectrode is surgically inserted in or near an individual neuron, is used primarily with animals. The microelectrode records electrical responses or activity of the specific neuron. Research using this method has found, for instance, that specific neurons, known as feature detectors, in the visual cortex detect movement, lines, edges, and even faces. 
A less invasive electrical method that is used on humans is the electroencephalograph (EEG). The EEG is an instrument that records the electrical activity produced by the brain’s neurons through the use of electrodes placed on the surface of the research participant’s head. An EEG can show whether a person is asleep, awake, or anesthetized, because the brain wave patterns are known to differ during each state. EEGs can also track the waves that are produced when a person is reading, writing, or speaking, and they are useful for understanding brain abnormalities, such as epilepsy. A particular advantage of EEG is that the participant can move around while the recordings are being taken, which is useful when measuring brain activity in children, who often have difficulty keeping still. Furthermore, by following electrical impulses across the surface of the brain, researchers can observe changes over very short time periods (milliseconds).
Although the EEG can provide information about the general patterns of electrical activity within the brain, and although the EEG allows the researcher to see these changes quickly as they occur in real time, the electrodes must be placed on the surface of the skull, and each electrode measures brain waves from large areas of the brain. As a result, EEGs do not provide a very clear picture of the structure of the brain.
But other methods exist to provide more specific brain images. The positron emission tomography (PET) scan is an invasive imaging technique that provides color-coded images of brain activity by tracking a radioactively tagged compound, such as glucose, oxygen, or a drug, that has been injected into a person’s bloodstream. The person lies in a PET scanner and performs a mental task, such as recalling a list of words or solving an arithmetic problem, while the scanner tracks where the tagged compound accumulates: brain regions that are stimulated by the person’s activity become more metabolically active and take up more of the compound. A computer analyzes the data, producing color-coded images of the brain’s activity. A PET scan can determine levels of activity when a person is given a task that requires hearing, seeing, speaking, or thinking.
Functional magnetic resonance imaging (fMRI) is a type of brain scan that uses a magnetic field to create images of brain activity in each brain area. The patient lies on a bed in a large cylindrical structure containing a very strong magnet. Neurons that are firing use more oxygen than neurons that are not firing, and the need for oxygen increases blood flow to the area. The fMRI detects the amount of blood flow in each brain region and thus is an indicator of neural activity.
Very clear and detailed pictures of brain structures can be produced via fMRI. Often, the images take the form of cross-sectional “slices” that are obtained as the magnetic field is passed across the brain. The images of these slices are taken repeatedly and are superimposed on images of the brain structure itself to show how activity changes in different brain structures over time. When the research participant is asked to engage in tasks (e.g., playing a game with another person), the images can show which parts of the brain are associated with which types of tasks. Another advantage of the fMRI is that it is noninvasive. The research participant simply enters the machine and the scans begin.
Although the scanners are expensive, the advantages of fMRI are substantial, and the machines are now available in many university and hospital settings. fMRI is now the most commonly used method of learning about brain structure and function.
A new approach that is being more frequently implemented to understand brain function, transcranial magnetic stimulation (TMS), may turn out to be the most useful of all. TMS is a procedure in which magnetic pulses are applied to the brains of living persons with the goal of temporarily and safely deactivating a small brain region. In TMS studies, the research participant is first scanned in an fMRI machine to determine the exact location of the brain area to be tested. Then the magnetic stimulation is applied to the brain before or while the participant works on a cognitive task, and the effects of the stimulation on performance are assessed. If the participant’s ability to perform the task is influenced by the presence of the stimulation, then the researchers can conclude that this particular area of the brain is important to carrying out the task.
The primary advantage of TMS is that it allows the researcher to draw causal conclusions about the influence of brain structures on thoughts, feelings, and behaviors. When the TMS pulses are applied, the brain region becomes less active, and this deactivation is expected to influence the research participant’s responses. Current research has used TMS to study the brain areas responsible for emotion and cognition and their roles in how people perceive intention and approach moral reasoning. TMS is also used as a treatment for a variety of conditions, including migraine, Parkinson disease, and major depressive disorder.
Imagine that you are a brain scientist. In the following scenarios, select the best method to learn more about the person’s brain and the presenting problem.
Recall what you learned about the amygdala in the previous module. The amygdala is a small, almond-shaped group of nuclei found at the base of the temporal lobe. Research has shown that the amygdala performs a primary role in the processing and memory of emotional reactions.
Using an fMRI machine, researchers assessed differences in the amygdala activity of participants when they viewed either expressions on a human face (figure on the left) or geometric shapes (figure on the right). While each participant was in the fMRI machine, he or she was told to identify either the two identical faces or the two identical shapes within a trio. This simple cognitive task was used to keep the participants’ gaze and attention on either the faces or the shapes so that the fMRI machine could record the activity of the amygdala in both conditions (faces or shapes). Then the activity level of the amygdala (its activation or lack of activation) was compared between the two conditions.
Study the fMRI image above. Using what you have learned about the amygdala and how the fMRI records activation in the brain, answer the following two questions.
Neuroimaging techniques have important implications for understanding human behavior, including people's responses to others. Naomi Eisenberger and her colleagues tested the hypothesis that people who were excluded by others would report emotional distress and that images of their brains would show they experienced pain in the same part of the brain where physical pain is normally experienced. The experiment involved 13 participants. Each was placed into an fMRI brain imaging machine and told that he or she would be playing a computerized ball-tossing game called Cyberball with two other players who were also in fMRI machines (the two opponents did not actually exist, and their responses were controlled by the computer).
Each participant was measured under three different conditions. In the first part of the experiment, the participants were told that due to technical difficulties, the link to the other two scanners could not yet be made, and until the problem was fixed, they could not engage in, but only watch, the game play. This allowed the researchers to take a baseline fMRI reading. Then, during a second inclusion scan, the participants played the game, supposedly with two other players. During this time, the other players threw the ball to the participants. In the third, exclusion, scan, however, the participants initially received seven throws from the other two players but were then excluded from the game because the two players stopped throwing the ball to the participants for the remainder of the scan (45 throws).
The results of the analyses showed that activity in two areas of the frontal lobe was significantly greater during the exclusion scan than during the inclusion scan. Because these brain regions are known from prior research to be active for individuals who are experiencing physical pain, the results suggest that the physiological brain responses associated with being socially excluded by others are similar to brain responses experienced upon physical injury.
Further research has documented that people react to being excluded in a variety of situations with a variety of emotions and behaviors. People who feel they are excluded, and even those who observe other people being excluded, not only experience pain but feel worse about themselves and their relationships with people more generally, and they may work harder to try to restore their connections with others.
Now that we have considered how individual neurons operate and the roles of the different brain areas, it is time to ask how the body manages to “put it all together.” How do the complex activities in the various parts of the brain, the simple all-or-nothing firings of billions of interconnected neurons, and the various chemical systems within the body, work together to allow the body to respond to the social environment and engage in everyday behaviors? In this section, we will see that the complexities of human behavior are accomplished through the joint actions of electrical and chemical processes in the nervous system and the endocrine system.
The nervous system, the electrical information highway of the body, is made up of nerves—bundles of interconnected neurons that fire in synchrony to carry messages. The central nervous system (CNS), made up of the brain and spinal cord, is the major controller of the body’s functions, charged with interpreting sensory information and responding to it with its own directives. The CNS interprets information coming in from the senses, formulates an appropriate reaction, and sends responses to the appropriate system to respond accordingly. Everything we see, hear, smell, touch, and taste is conveyed to us from our sensory organs as neural impulses, and each of the commands that the brain sends to the body, both consciously and unconsciously, travels through this system as well.
Nerves are differentiated according to their function. A sensory neuron carries information from the sensory receptors, whereas a motor neuron transmits information to the muscles and glands. An interneuron, which is by far the most common type of neuron, is located primarily within the CNS and is responsible for communication among neurons. Interneurons allow the brain to combine the multiple sources of available information to create a coherent picture of the sensory information being conveyed.
The spinal cord is the long, thin, tubular bundle of nerves and supporting cells that extends down from the brain. It is the central pathway of information for the body. Within the spinal cord, ascending tracts of sensory neurons relay sensory information from the sense organs to the brain while descending tracts of motor neurons relay motor commands back to the body. When a quicker-than-usual response is required, the spinal cord can do its own processing, bypassing the brain altogether. A reflex is an involuntary and nearly instantaneous movement in response to a stimulus. Reflexes are triggered when sensory information is powerful enough to reach a given threshold and the interneurons in the spinal cord act to send a message back through the motor neurons without relaying the information to the brain, as shown in the following figure. When you touch a hot stove and immediately pull your hand back, or when you fumble your cell phone and instinctively reach to catch it before it falls, reflexes in your spinal cord order the appropriate responses before your brain even knows what is happening.
If the central nervous system is the command center of the body, the peripheral nervous system (PNS) represents the front line. The PNS links the CNS to the body’s sense receptors, muscles, and glands. As you can see in the following figure, the PNS is divided into two subsystems, one controlling internal responses and one controlling external responses.
The autonomic nervous system (ANS) is the division of the PNS that governs the internal activities of the human body, including heart rate, breathing, digestion, salivation, perspiration, urination, and sexual arousal. Many of the actions of the ANS, such as heart rate and digestion, are automatic and out of our conscious control, but others, such as breathing and sexual activity, can be controlled and influenced by conscious processes.
The somatic nervous system (SNS) is the division of the PNS that controls the external aspects of the body, including the skeletal muscles, skin, and sense organs. The somatic nervous system consists primarily of motor nerves responsible for sending brain signals for muscle contraction.
The autonomic nervous system itself can be further subdivided into the sympathetic and parasympathetic systems (see figure below). The sympathetic division of the ANS is involved in preparing the body for rapid action in response to stress from threats or emergencies by activating the organs and glands in the endocrine system. When the sympathetic nervous system recognizes danger or a threat, the heart beats faster, breathing accelerates, and lungs and bronchial tubes expand. These physiological responses increase the amount of oxygen to the brain and muscles to prepare your body for defense. In other sympathetic nervous system responses, your pupils dilate to increase your field of vision, salivation stops and your mouth becomes dry, digestion stops in your stomach and intestines, and you begin to sweat due to your body’s use of more energy and heat. These bodily changes collectively represent the fight-or-flight response, which prepares you to either fight or flee from a perceived danger.
The parasympathetic division of the ANS tends to calm the body by slowing the heart and breathing and by allowing the body to recover from the activities that the sympathetic system causes. The parasympathetic nervous system acts more slowly than the sympathetic nervous system as it calms the activated organs and glands of the endocrine system, eventually returning your body to a normal state, called homeostasis.
Our everyday activities are also controlled by the interaction between the sympathetic and parasympathetic nervous systems. For example, when we get out of bed in the morning, we would experience a sharp drop in blood pressure if it were not for the action of the sympathetic system, which automatically increases blood flow through the body. Similarly, after we eat a big meal, the parasympathetic system automatically sends more blood to the stomach and intestines, allowing us to efficiently digest the food. And perhaps you’ve had the experience of not being at all hungry before a stressful event, such as a sports game or an exam (when the sympathetic division was primarily in action), but suddenly finding yourself starved afterward, as the parasympathetic system takes over. The two systems work together to maintain vital bodily functions, resulting in homeostasis, the natural balance in the body’s systems.
As you have seen, the nervous system is divided structurally into the central nervous system and the peripheral nervous system. The PNS is further divided into subdivisions, each having a particular function in the nervous system to help regulate the body. In the following activity, you will learn the function of each of the nervous system divisions by matching a specific descriptive function with each structure.
Instructions: Read the two scenarios below and answer the questions for each scenario.
Scenario 1: Susan, a college freshman, is taking college algebra. She never liked math and fears she will probably not do well in this first math course. She stays up all night studying for the first exam, and the next morning, she enters the classroom to take the test. As she sits down and takes out her pencils, she feels nervous; she begins to sweat, her stomach is upset, and her heart begins to race.
Scenario 2: As the exam is passed out, Susan takes several deep breaths and closes her eyes. She visualizes herself confidently taking the exam and focuses on her breathing and heart rate. She feels her heart and breathing slow down, and she feels calm and able to focus on answering the questions on the exam.
The nervous system is designed to protect us from danger through its interpretation of and reactions to stimuli. But a primary function of the sympathetic and parasympathetic nervous systems is to interact with the endocrine system, which secretes chemical messengers called hormones that influence our emotions and behaviors.
The endocrine system is made up of glands, which are groups of cells that secrete hormones into the bloodstream. When the hormones released by a gland arrive at receptor tissues or other glands, the receiving tissues may trigger the release of other hormones, resulting in a complex chemical chain reaction. The endocrine system works together with the nervous system to influence many aspects of human behavior, including growth, reproduction, and metabolism. The endocrine system also plays a vital role in emotions. Because the sex glands differ in men (the testes) and women (the ovaries), the hormones these glands secrete explain some of the observed behavioral differences between men and women. The major glands in the endocrine system are shown in the figure above.
The secretion of hormones is regulated by the hypothalamus of the brain. The hypothalamus is the main link between the nervous system and the endocrine system and directs the release of hormones by its interactions with the pituitary gland, which is next to and highly interconnected with the hypothalamus. Review the module "Neurons: The Building Blocks of the Nervous System" for more information on the hypothalamus. The pituitary gland, a pea-sized gland, is responsible for controlling the body’s growth, but it also has many other influences that make it of primary importance to regulating behavior. The pituitary secretes hormones that influence our responses to pain as well as hormones that signal the ovaries and testes to make sex hormones. The pituitary gland also controls ovulation and the menstrual cycle in women. Because the pituitary has such an important influence on other glands, it is sometimes known as the “master gland.”
Other glands in the endocrine system include the pancreas, which secretes hormones designed to keep the body supplied with fuel to produce and maintain stores of energy; and the pineal gland, located in the middle of the brain, which secretes melatonin, a hormone that helps regulate the wake-sleep cycle.
The body has two triangular adrenal glands, one on top of each kidney. The adrenal glands produce hormones that regulate salt and water balance in the body, and they are involved in metabolism, the immune system, and sexual development and function. The most important function of the adrenal glands is to secrete the hormones epinephrine (also known as adrenaline) and norepinephrine (also known as noradrenaline) when we are excited, threatened, or stressed. Epinephrine and norepinephrine reinforce the effects of the sympathetic division of the autonomic nervous system, causing increased heart and lung activity, dilation of the pupils, and increases in blood sugar, which give the body a surge of energy to respond to a threat. The activity and role of the adrenal glands in response to stress provides an excellent example of the close relationship and interdependency of the nervous and endocrine systems. A quick-acting nervous system is essential for immediate activation of the adrenal glands, while the endocrine system mobilizes the body for action.
At this point, you can begin to see the important role the hormones play in behavior. But the hormones we reviewed in this section represent only a subset of the many influences that hormones have on our behaviors. In the upcoming units, we consider the important roles that hormones play in many other behaviors, including sleeping, sexual activity, and helping and harming others.
Instructions: In the following vignette, you will apply what you have learned about how the electrical components of the nervous system and the chemical components of the endocrine system work together to influence our behavior. Read the vignette and choose the best answers to complete the sentences describing the correct interaction of the nervous system and endocrine system.
Larry and Claire are hiking on a trail in the Rocky Mountains. As they walk, the trail becomes less distinguishable and is overgrown with brush. Suddenly, a man holding an axe jumps in front of them. This scares both of them; their hearts begin to pump faster and their breathing increases. They begin running in the opposite direction to get away from the man.
As Larry and Claire begin to run, they hear the man calling them. He yells, “Wait! I didn’t mean to scare you. I am a forest ranger, trying to clear part of this trail. Please don’t run away.” Larry and Claire stop running and turn around to look at the man. They notice that he is dressed in a typical forest ranger uniform and see his identification badge. Not feeling threatened any longer, both Larry and Claire begin to feel “calmed down” and walk back toward the forest ranger to resume their hike on the trail.
On September 6, 2007, the Asia-Pacific Economic Cooperation (APEC) leaders’ summit was being held in downtown Sydney, Australia. World leaders, including then U.S. president, George W. Bush, were attending the summit. Many roads in the area were closed for security reasons, and police presence was high.
As a prank, eight members of the Australian television satire The Chaser’s War on Everything assembled a false motorcade made up of two black four-wheel-drive vehicles, a black sedan, two motorcycles, body guards, and chauffeurs (see the video below). Group member Chas Licciardello was in one of the cars disguised as Osama bin Laden. The motorcade drove through Sydney’s central business district and entered the security zone of the meeting. The motorcade was waved on by police, through two checkpoints, until the Chaser group decided it had taken the gag far enough and stopped outside the InterContinental Hotel where former President Bush was staying. Licciardello stepped out onto the street and complained, in character as bin Laden, about not being invited to the APEC Summit. Only at this time did the police belatedly check the identity of the group members, finally arresting them.
Afterward, the group testified that it had made little effort to disguise the motorcade as anything other than a prank. The group’s only realistic attempt to fool police was its Canadian flag–marked vehicles. Other than that, the group used obviously fake credentials, and its security passes were printed with “JOKE,” “Insecurity,” and “It’s pretty obvious this isn’t a real pass,” all clearly visible to any police officer who took the trouble to look closely as the motorcade passed. The required APEC 2007 Official Vehicle stickers had the name of the group’s show printed on them, and this text: “This dude likes trees and poetry and certain types of carnivorous plants excite him.” In addition, a few of the “bodyguards” were carrying camcorders, and one of the motorcyclists was dressed in jeans, both details that should have alerted police that something was amiss.
The Chaser pranksters later explained the primary reason for the stunt. They wanted to make a statement about the fact that bin Laden, a world leader, had not been invited to an APEC Summit where issues of terror were being discussed. The secondary motive was to test the event’s security. The show’s lawyers approved the stunt, under the assumption that the motorcade would be stopped at the APEC meeting.
The senses provide our brains with information about the outside world and about our own internal world. Even single-celled organisms have ways to detect features of their environment, and they typically have the ability to use this information either to find nutrients or to avoid danger. For more complex organisms, certainly for humans, many sources of information about the external and internal world are necessary to allow us to survive and thrive. The systems we have throughout our bodies that allow us to detect information and transform energy into neural impulses are called the senses or sensory systems.
Detection of food or danger is generally not enough to permit an organism to respond effectively for survival. The world is full of complex stimuli that must be responded to in different ways. Organisms generally use both genetically transmitted knowledge and knowledge derived from experience to organize and interpret incoming sensory information. This process of organization and interpretation is what we refer to as perception.
In this unit we discuss the strengths and limitations of these capacities, focusing on both sensation—awareness resulting from the stimulation of a sense organ, and perception—the organization and interpretation of sensations. Sensation and perception work seamlessly together to allow us to experience the world through our eyes, ears, nose, tongue, and skin, but also to combine what we are currently learning from the environment with what we already know about it to make judgments and to choose appropriate behaviors.
The study of sensation and perception is exceedingly important for our everyday lives because the knowledge generated by psychologists is used in so many ways to help so many people. Psychologists work closely with mechanical and electrical engineers, with experts in defense and military contracting, and with clinical, health, and sports psychologists to help them apply this knowledge to their everyday practices. The research is used to help us understand and better prepare people to cope with such diverse events as driving cars, flying planes, creating robots, and managing pain.
We begin the unit with a focus on the six senses of seeing, hearing, smelling, touching, tasting, and monitoring the body’s positions, also called proprioception. We will see that sensation is sometimes relatively direct, in the sense that the wide variety of stimuli around us inform and guide our behaviors quickly and accurately, but nevertheless is always the result of at least some interpretation. We do not directly experience stimuli, but rather we experience those stimuli as they are created by our senses. Each sense accomplishes the basic process of transduction—the conversion of stimuli detected by receptor cells to electrical impulses that are then transported to the brain—in different but related ways.
Each of your sense organs is a specialized system for detecting energy in the external environment and initiating neural messages—action potentials—to send information to the brain about the strength and other characteristics of the detected stimulus. For example, the eyes detect photons (individual units of light) and photosensitive (light sensitive) cells in the back of the eye react to the photons by sending an action potential down a series of neurons all the way to the occipital cortex in the back of the brain. Each of the senses has a specific place in the brain where information from that particular sense is processed. Very often, the information from these sense-specific brain areas is then sent to other parts of the brain for further analysis and integration with information from other senses. The result is your experience of a rich and constantly changing multisensory world, full of sights, sounds, smells, tastes, and textures.
After we have reviewed the basic processes of sensation, we will turn to the topic of perception, focusing on how the brain’s processing of sensory experience allows us to process and organize our experience of the outside world. However, the perceptual system does more than pass on information to the brain. Perception involves interpretation and even distortion of the sensory input. Perceptual illusions, discussed at the end of the unit, allow scientists to explore the various ways that the brain goes beyond the information that it receives.
Odd as it may seem, there is disagreement about the exact number of senses that we have. No one questions the fact that seeing uses the visual sensory system and hearing uses the auditory sensory system. There is some disagreement, however, about how to categorize the skin senses, which detect pressure, heat, and pain, and the body senses, which tell our brains about body position. For our purposes, we discuss these senses:
Transduction is the process of turning energy detected around us into nerve impulses. Remember from the brain unit that a nerve impulse is called an action potential, so the result of transduction is always an action potential along a nerve going to the brain. Even though some action potentials start in the retina of the eye and other action potentials start in the cochlea in the inner ear, all action potentials are the same. At the neural level, there is no difference between an action potential coming from the eye or the ear or any other sensory system. What makes sensory experiences different from one another is not the sense organ or the action potential coming from a sense organ. Sensory experiences differ based on which brain area interprets the incoming message.
In the list below, for each of the senses, pick the type of signal that goes along the nerve from the sensory receptor on the body to the brain.
Each of our senses is specialized to detect a certain kind of energy and then to send a message to the brain in the form of action potentials in nerves that run from the sense receptor to specific parts of the brain. Let’s consider what kind of energy or information each sense receptor picks up:
Now let’s do an exercise to see if this all makes sense (no pun intended!).
There are three major types of transduction:
There is no difference between an action potential coming from one sense (e.g., the eye) and an action potential coming from a different sense (e.g., the ear). The way your brain knows if it is processing visual information or sound is by the location that receives the signal. If the action potential ends in the occipital lobe, your brain experiences it as visual information. If the action potential ends in the temporal lobe, then the brain interprets it as sound information.
In Unit 4, Brains, Bodies, and Behavior, you learned that different parts of the brain serve different functions. Let’s see where in the brain each of the senses sends its messages. First, all of the senses—except the sense of smell—send action potentials to the THALAMUS, in the middle of the brain, deep under the cortex. Then the different senses go to different parts of the brain.
For this exercise, put together the information you just learned about the pathway from the sensory system to the brain.
To explore the various senses further, go to this website of the BBC (British Broadcasting Corporation) and click on each of the senses listed.
For each sensory system, determine
Humans possess powerful sensory capacities that allow us to sense the kaleidoscope of sights, sounds, smells, and tastes that surround us. Our eyes detect light energy and our ears pick up sound waves. Our skin senses touch, pressure, heat, and cold. Our tongues react to the molecules of the foods we eat, and our noses detect scents in the air. The human perceptual system is wired for accuracy, and people are exceedingly good at making use of the wide variety of information available to them.
In many ways our senses are quite remarkable. The human eye can detect the equivalent of a single candle flame burning 30 miles away and can distinguish among more than 300,000 different colors. The human ear can detect sounds as low as 20 hertz (vibrations per second) and as high as 20,000 hertz, and it can hear the tick of a clock about 20 feet away in a quiet room. We can taste a teaspoon of sugar dissolved in 2 gallons of water, and we are able to smell one drop of perfume diffused in a three-room apartment. We can feel the wing of a bee on our cheek dropped from 1 centimeter above. 
Although there is much that we do sense, there is even more that we do not. Dogs, bats, whales, and some rodents all have much better hearing than we do, and many animals have a far richer sense of smell. Birds are able to see the ultraviolet light that we cannot (see the figure below) and can also sense the pull of the earth’s magnetic field. Cats have an extremely sensitive and sophisticated sense of touch, and they are able to navigate in complete darkness using their whiskers. The fact that different organisms have different sensations is part of their evolutionary adaptation. Each species is adapted to sense the things that are most important to it, while remaining blissfully unaware of the things that don’t matter.
Psychophysics is the branch of psychology that studies the effects of physical stimuli on sensory perceptions and mental states. The field of psychophysics was founded by the German psychologist Gustav Fechner (1801–1887), who was the first to study the relationship between the strength of a stimulus and a person’s ability to detect the stimulus.
The measurement techniques developed by Fechner and his colleagues are designed in part to help determine the limits of human sensation. One important criterion is the ability to detect very faint stimuli. The absolute threshold of a sensation is the minimum intensity of a stimulus that allows an organism to just barely detect it, conventionally defined as the intensity detected 50% of the time.
In a typical psychophysics experiment, an individual is presented with a series of trials in which a signal is sometimes presented and sometimes not, or in which two stimuli are presented that are either the same or different. Imagine, for instance, that you were asked to take a hearing test. On each of the trials, your task is to indicate either yes if you heard a sound or no if you did not. The signals are purposefully made to be very faint, making accurate judgments difficult.
The problem for you is that the very faint signals create uncertainty. Because our ears are constantly sending background information to the brain, you will sometimes think that you heard a sound when no sound was made, and you will sometimes fail to detect a sound that was made. You must determine whether the neural activity that you are experiencing is due to the background noise alone or is the result of a signal within the noise.
The responses you give on the hearing test can be analyzed using signal detection analysis. Signal detection analysis is a technique used to determine the ability of the perceiver to separate true signals from background noise.   As you can see in the figure below, each judgment trial creates four possible outcomes: A hit occurs when you, as the listener, correctly say yes when there was a sound. A false alarm occurs when you respond yes to no signal. In the other two cases, you respond no—either a miss (saying no when there was a signal) or a correct rejection (saying no when there was in fact no signal).
The analysis of the data from a psychophysics experiment creates two measures. One measure, known as sensitivity, refers to the true ability of the individual to detect the presence or absence of signals. People who have better hearing will have higher sensitivity than will those with poorer hearing. The other measure, response bias, refers to a behavioral tendency to respond “yes” to the trials, which is independent of sensitivity.
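The two measures can be made concrete with a small calculation. The sketch below uses the standard signal-detection formulas, in which hit and false-alarm rates are converted to z-scores: sensitivity (d′) is the difference between the two z-scores, and the criterion (a measure of response bias) is their negated average. The trial counts are invented for illustration.

```python
from statistics import NormalDist

def signal_detection(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (criterion c)
    from the four outcomes of a yes/no detection task."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z = NormalDist().inv_cdf  # convert a proportion to a z-score
    # (rates of exactly 0 or 1 would need a correction in practice)
    d_prime = z(hit_rate) - z(fa_rate)            # higher = better detection
    criterion = -(z(hit_rate) + z(fa_rate)) / 2   # negative = lenient, "yes"-prone
    return d_prime, criterion

# Hypothetical counts for a lenient listener: many hits, but many false alarms.
d, c = signal_detection(hits=45, misses=5, false_alarms=30, correct_rejections=20)
```

Because this listener says "yes" readily, the criterion comes out negative (a lenient bias), while d′ reflects true detection ability independent of that bias.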
Imagine for instance that rather than taking a hearing test, you are a soldier on guard duty, and your job is to detect the very faint sound of the breaking of a branch that indicates that an enemy is nearby. You can see that in this case making a false alarm by alerting the other soldiers to the sound might not be as costly as a miss (a failure to report the sound), which could be deadly. Therefore, you might well adopt a very lenient response bias in which whenever you are at all unsure, you send a warning signal. In this case, your responses may not be very accurate (your sensitivity may be low because you are making many false alarms) and yet the extreme response bias can save lives.
Another application of signal detection occurs when medical technicians study body images for the presence of cancerous tumors. Again, a miss (in which the technician incorrectly determines that there is no tumor) can be very costly, but false alarms (referring patients who do not have tumors to further testing) also have costs. The ultimate decisions that the technicians make are based on the quality of the signal (clarity of the image), their experience and training (the ability to recognize certain shapes and textures of tumors), and their best guesses about the relative costs of misses versus false alarms.
Signal detection analysis is often used to study absolute threshold—the minimum intensity at which a sensory system can detect a stimulus. In this demonstration, we use signal detection analysis, but not with absolute threshold, because absolute threshold is difficult to measure under uncontrolled conditions. Instead, you will simply search for a target within a field of distracting objects that make it hard to find.
You will see a set of blue crosses on a white background. The stimulus will be present for only 1 second and then it disappears. You must decide if there is a single blue L among the crosses. For example, here is a screen you might see for 1 second:
In this case, there is an L-shape in the figure, so you would click on the YES button. In case you don’t see it, here is where it is:
On other trials, there will be no L-shape, so you click on the NO button. Here is an example of a screen with no L-shape:
This demonstration takes about a minute—there are 12 trials in total. After each trial, you will learn if you were correct or incorrect in your decision. Then you will see your results in a signal detection report.
When you are ready to begin, click the Start button.
Although we have focused to this point on the absolute threshold, a second important criterion concerns the ability to assess differences between stimuli. The difference threshold (or just noticeable difference [JND]) refers to the change in a stimulus that can just barely be detected by the organism. The German physiologist Ernst Weber (1795–1878) made an important discovery about the JND: that the ability to detect differences depends not so much on the size of the difference but on the size of the difference in relationship to the absolute size of the stimulus. Weber’s law maintains that the just noticeable difference of a stimulus is a constant proportion of the original intensity of the stimulus. As an example, if you have a cup of coffee that has only a very little bit of sugar in it (say, 1 teaspoon), adding another teaspoon of sugar will make a big difference in taste. But if you added that same teaspoon to a cup of coffee that already had 5 teaspoons of sugar in it, then you probably wouldn’t taste the difference as much (in fact, according to Weber’s law, you would have to add 5 more teaspoons to make the same difference in taste).
One interesting application of Weber’s law is in our everyday shopping behavior. Our tendency to perceive cost differences between products is dependent not only on the amount of money we will spend or save but also on the amount of money saved relative to the price of the purchase. I would venture to say that if you were about to buy a soda or candy bar in a convenience store and the price of the items ranged from $1 to $3, you would think that the $3 item cost a lot more than the $1 item. But now imagine that you were comparing two music systems, one that cost $397 and one that cost $399. Probably you would think that the cost of the two systems was about the same even though buying the cheaper one would still save you $2.
Weber’s law states that our ability to detect the difference between two stimuli is proportional to the magnitude of the stimuli. This may sound difficult, but consider this example. Imagine that you have a 1-pound weight in one hand. I put a 2-pound weight in your other hand. Do you think you could tell the difference? Probably so. These weights are light (low magnitude) so a difference of 1 pound is very easily detected. Now I put a 50-pound weight in one hand and a 51-pound weight in the other hand. Now do you think you could tell the difference? Probably not. When the weight is heavy (high magnitude), the 1-pound difference is not so easily detected.
Weber’s law focuses on one of the oldest variables in psychology, the JND. These letters stand for just noticeable difference, which is the smallest difference between two stimuli that you can reliably detect. Using this term, Weber’s law says that the size of the JND will increase as the magnitude of the stimulus increases. In the weight example, the JND when you have a 50-pound weight in your hand is much greater (2 pounds? 5 pounds? 10 pounds?) than when you have a 1-pound weight in your hand (1 pound? ½ pound? ¼ pound?).
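Weber’s law can be written as JND = k × I, where I is the intensity of the standard stimulus and k is the Weber fraction for that sense. A minimal sketch, using an illustrative (not measured) fraction of 2%, shows why the same 1-pound change is obvious against a 1-pound weight but undetectable against a 50-pound weight:

```python
# Weber's law: JND = k * I.  The 2% fraction below is an
# illustrative value for demonstration, not a measured constant.

def jnd(intensity, weber_fraction=0.02):
    """Smallest detectable change for a stimulus of the given intensity."""
    return weber_fraction * intensity

light_weight = jnd(1.0)    # 1-lb standard: a tiny change is noticeable
heavy_weight = jnd(50.0)   # 50-lb standard: the change must be 50x larger
```

Whatever the true Weber fraction for lifted weights, the proportional rule is the point: the detectable change scales with the size of the standard.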
In this activity, we will try to determine your JND for some visual stimuli. For example, look at the two colored circles below. Your task is to decide if they are exactly the same or if they are different from one another. If they are the same, click the SAME button. If they are different, click the DIFFERENT button. As soon as you finish a judgment, you will see the next trial. You will see a total of 24 pairs to judge.
Whereas other animals rely primarily on hearing, smell, or touch to understand the world around them, human beings rely in large part on vision. A large part of our cerebral cortex is devoted to seeing, and we have substantial visual skills. Seeing begins when light falls on the eyes, initiating the process of transduction. Once this visual information reaches the visual cortex, it is processed by a variety of neurons that detect colors, shapes, and motion, and that create meaningful perceptions out of the incoming stimuli.
The air around us is filled with a sea of electromagnetic energy—pulses of energy waves that can carry information from place to place. As you can see in the figure below, electromagnetic waves vary in their wavelength—the distance between one wave peak and the next wave peak, with the shortest gamma waves being only a fraction of a millimeter in length and the longest radio waves being hundreds of kilometers long. Humans are blind to almost all of this energy—our eyes detect only the range from about 400 to 700 billionths of a meter, the part of the electromagnetic spectrum known as the visible spectrum.
As you can see in the above figure, light enters the eye through the cornea, a clear covering that protects the eye and begins to focus the incoming light. The light then passes through the pupil, a small opening in the center of the eye. The pupil is surrounded by the iris, the colored part of the eye that controls the size of the pupil by constricting or dilating in response to light intensity. When we enter a dark movie theater on a sunny day, for instance, muscles in the iris open the pupil and allow more light to enter. Complete adaptation to the dark may take up to 20 minutes.
Behind the pupil is the lens, a structure that focuses the incoming light on the retina, the layer of tissue at the back of the eye that contains photoreceptor cells. As our eyes move from near objects to distant objects, a process known as accommodation occurs. Accommodation is the process of changing the curvature of the lens to keep the light entering the eye focused on the retina. Rays from the top of the image strike the bottom of the retina, and vice versa, and rays from the left side of the image strike the right part of the retina, and vice versa, causing the image on the retina to be upside down and backward. Furthermore, the image projected on the retina is flat, and yet our final perception of the image will be three dimensional.
Accommodation is not always perfect, and in some cases the light hitting the retina is a bit out of focus. As you can see in the figure below, when the focus is in front of the retina, we say that the person is nearsighted, and when the focus is behind the retina we say that the person is farsighted. Eyeglasses and contact lenses correct this problem by adding another lens in front of the eye. Laser eye surgery corrects the problem by reshaping the eye's cornea, while another type of surgery involves replacing the eye's own lens.
The retina contains layers of neurons specialized to respond to light (see the figure below). As light falls on the retina, it first activates receptor cells known as rods and cones. The activation of these cells then spreads to the bipolar cells and then to the ganglion cells, which gather together and converge, like the strands of a rope, forming the optic nerve. The optic nerve is a collection of millions of ganglion neurons that sends vast amounts of visual information, via the thalamus, to the brain. Because the retina and the optic nerve are active processors and analyzers of visual information, it is not inappropriate to think of these structures as an extension of the brain itself.
Rods are visual neurons that specialize in detecting black, white, and gray colors. There are about 120 million rods in each eye. The rods do not provide a lot of detail about the images we see, but because they are highly sensitive to shorter-waved (darker) and weak light, they help us see in dim light, for instance, at night. Because the rods are located primarily around the edges of the retina, they are particularly active in peripheral vision (when you need to see something at night, try looking away from what you want to see). Cones are visual neurons that are specialized in detecting fine detail and colors. The 5 million or so cones in each eye enable us to see in color, but they operate best in bright light. The cones are located primarily in and around the fovea, which is the central point of the retina.
To demonstrate the difference between rods and cones in attention to detail, choose a word in this text and focus on it. Do you notice that the words a few inches to the side seem more blurred? This is because the word you are focusing on strikes the detail-oriented cones, while the words surrounding it strike the less-detail-oriented rods, which are located on the periphery.
As you can see in the figure below, the sensory information received by the retina is relayed through the thalamus to corresponding areas in the visual cortex, which is located in the occipital lobe at the back of the brain. (Hint: You can remember that the occipital lobe processes vision because it starts with the letter O, which is round like an eye.) Although the principle of contralateral control might lead you to expect that the left eye would send information to the right brain hemisphere and vice versa, nature is smarter than that. In fact, the left and right eyes each send information to both the left and the right hemispheres, and the visual cortex processes each of the cues separately and in parallel. This is an adaptational advantage to an organism that loses sight in one eye, because even if only one eye is functional, both hemispheres will still receive input from it.
Trace the path of visual information through the visual pathway.
The visual cortex is made up of specialized neurons that turn the sensations they receive from the optic nerve into meaningful images. Because there are no photoreceptor cells at the place where the optic nerve leaves the retina, a hole or blind spot in our vision is created (see the figure below). When both of our eyes are open, we don’t experience a problem because our eyes are constantly moving, and one eye makes up for what the other eye misses. But the visual system is also designed to deal with this problem if only one eye is open—the visual cortex simply fills in the small hole in our vision with similar patterns from the surrounding areas, and we never notice the difference. The ability of the visual system to cope with the blind spot is another example of how sensation and perception work together to create meaningful experience.
You can get an idea of the extent of your blind spot (the place where the optic nerve leaves the retina) by trying this demonstration. Close your left eye and stare with your right eye at the cross in the diagram. You should be able to see the elephant image to the right (don’t look at it, just notice that it is there). If you can’t see the elephant, move closer or farther away until you can. Now slowly move so that you are closer to the image while you keep looking at the cross. At one distance (probably a foot or so), the elephant will completely disappear from view because its image has fallen on the blind spot.
Perception is created in part through the simultaneous action of thousands of feature detector neurons—specialized neurons, located in the visual cortex, that respond to the strength, angles, shapes, edges, and movements of a visual stimulus.   The feature detectors work in parallel, each performing a specialized function. When faced with a red square, for instance, the parallel line feature detectors, the horizontal line feature detectors, and the red color feature detectors all become activated. This activation is then passed on to other parts of the visual cortex where other neurons compare the information supplied by the feature detectors with images stored in memory. Suddenly, in a flash of recognition, the many neurons fire together, creating the single image of the red square that we experience. 
Some feature detectors are tuned to selectively respond to particularly important objects, for instance, faces, smiles, and other parts of the body.   When researchers disrupted face recognition areas of the cortex using the magnetic pulses of transcranial magnetic stimulation (TMS), people were temporarily unable to recognize faces, and yet they were still able to recognize houses.  
It has been estimated that the human visual system can detect and discriminate among 7 million color variations,  but these variations are all created by the combinations of the three primary colors: red, green, and blue. The shade of a color, known as hue, is conveyed by the wavelength of the light that enters the eye (we see shorter wavelengths as more blue and longer wavelengths as more red), and we detect brightness from the intensity or height of the wave (bigger or more intense waves are perceived as brighter).
In his important research on color vision, Hermann von Helmholtz (1821–1894) theorized that color is perceived because the cones in the retina come in three types. One type of cone reacts primarily to blue light (short wavelengths), another reacts primarily to green light (medium wavelengths), and a third reacts primarily to red light (long wavelengths). The visual cortex then detects and compares the strength of the signals from each of the three types of cones, creating the experience of color. According to this Young-Helmholtz trichromatic color theory, what color we see depends on the mix of the signals from the three types of cones. If the brain is receiving primarily red and blue signals, for instance, it perceives purple; if it is receiving primarily red and green signals, it perceives yellow; and if it is receiving messages from all three types of cones, it perceives white.
The different functions of the three types of cones are apparent in people who experience colorblindness—the inability to detect green and/or red colors. About 1 in 50 people, mostly men, lack functioning in the red- or green-sensitive cones, leaving them able to experience only one or two colors.
The trichromatic color theory cannot explain all of human vision, however. For one, although the color purple does appear to us as a mixing of red and blue, yellow does not appear to be a mix of red and green. And people with colorblindness, who cannot see either green or red, nevertheless can still see yellow. An alternative approach to the Young-Helmholtz theory, known as the opponent-process color theory, proposes that we analyze sensory information not in terms of three colors but rather in three sets of “opponent colors”: red-green, yellow-blue, and white-black. Evidence for the opponent-process theory comes from the fact that some neurons in the retina and in the visual cortex are excited by one color (e.g., red) but inhibited by another color (e.g., green) as shown in the following figure.
One example of opponent processing occurs in the experience of an afterimage. If you stare at the flag on the left side of the figure below for about 30 seconds (the longer you look, the better the effect), and then move your eyes to the blank area to the right of it, you will see the afterimage. When we stare at the green stripes, our green receptors habituate and begin to process less strongly, whereas the red receptors remain at full strength. When we switch our gaze, we see primarily the red part of the opponent process. Similar processes create blue after yellow and white after black.
Watch the video below on aftereffect.
The tricolor and the opponent-process mechanisms work together to produce color vision. When light rays enter the eye, the red, blue, and green cones on the retina respond in different degrees, and send different strength signals of red, blue, and green through the optic nerve. The color signals are then processed both by the ganglion cells and by the neurons in the visual cortex. 
One of the important processes required in vision is the perception of form. German psychologists in the 1930s and 1940s, including Max Wertheimer (1880–1943), Kurt Koffka (1886–1941), and Wolfgang Köhler (1887–1967), argued that we create forms out of their component sensations based on the idea of the gestalt, a meaningfully organized whole. The idea of the gestalt is that the “whole is more than the sum of its parts.” Some examples of how gestalt principles lead us to see more than what is actually there are summarized in the following table.
Depth perception is the ability to perceive three-dimensional space and to accurately judge distance. Without depth perception, we would be unable to drive a car, thread a needle, or simply navigate our way around the supermarket.  Research has found that depth perception is in part based on innate capacities and in part learned through experience. 
Psychologists Eleanor Gibson and Richard Walk  tested the ability to perceive depth in 6- to 14-month-old infants by placing them on a visual cliff, a mechanism that gives the perception of a dangerous drop-off, in which infants can be safely tested for their perception of depth (see video below). The infants were placed on one side of the “cliff” while their mothers called to them from the other side. Gibson and Walk found that most infants either crawled away from the cliff or remained on the board and cried; they wanted to go to their mothers, but they perceived a chasm that they instinctively would not cross. Further research has found that even very young children who cannot yet crawl are fearful of heights.  On the other hand, studies have also found that infants improve their hand-eye coordination as they learn to better grasp objects and as they gain more experience in crawling, indicating that depth perception is also learned. 
Here is a video showing some of the babies in the original study. Notice how the mothers helped the experimenter to learn what the babies would and would not do.
Depth perception is the result of our use of depth cues, messages from our bodies and the external environment that supply us with information about space and distance. Binocular depth cues are depth cues that require the coordination of both eyes and arise from retinal disparity—the slight difference between the images on the two retinas created by the space between our eyes. Because the eyes view the world from slightly different positions, the images projected on each eye differ slightly from each other. The visual cortex automatically merges the two images into one, enabling us to perceive depth. Three-dimensional movies make use of retinal disparity by using 3-D glasses that the viewer wears to create a different image on each eye. The perceptual system quickly, easily, and unconsciously turns the disparity into three dimensions.
An important binocular depth cue is convergence, the inward turning of our eyes that is required to focus on objects that are less than about 50 feet away from us. The visual cortex uses the size of the convergence angle between the eyes to judge the object’s distance. You will be able to feel your eyes converging if you slowly bring a finger closer to your nose while continuing to focus on it. When you close one eye, you no longer feel the tension—convergence is a binocular depth cue that requires both eyes to work.
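The convergence angle the visual cortex uses can be estimated with simple geometry. The sketch below assumes a pupil separation of about 6 centimeters (an illustrative average) and an object straight ahead; the angle is twice the arctangent of half the eye separation over the viewing distance:

```python
import math

EYE_SEPARATION_M = 0.06  # roughly the distance between the two pupils (assumed)

def convergence_angle_deg(distance_m):
    """Angle (in degrees) through which the eyes turn inward
    to fixate an object straight ahead at the given distance."""
    return math.degrees(2 * math.atan((EYE_SEPARATION_M / 2) / distance_m))

near = convergence_angle_deg(0.25)   # reading distance: a large angle
far = convergence_angle_deg(15.0)    # about 50 feet: eyes nearly parallel
```

At reading distance the eyes converge through more than 10 degrees, while at 50 feet the angle is a fraction of a degree—which is why convergence stops being a useful depth cue beyond that range.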
The visual system also uses accommodation to help determine depth. As the lens changes its curvature to focus on distant or close objects, information relayed from the muscles attached to the lens helps us determine an object’s distance. Accommodation is only effective at short viewing distances, however, so while it comes in handy when threading a needle or tying shoelaces, it is far less effective when driving or playing sports.
Although the best cues to depth occur when both eyes work together, we are able to see depth even with one eye closed. Monocular depth cues are depth cues that help us perceive depth using only one eye.  Some of the most important are summarized in the following table.
Creative artists have taken advantage of the cues that the brain uses to perceive motion, starting in the early days of motion pictures and continuing to the present with modern computerized visual effects. The general phenomenon is called apparent motion. One example of apparent motion can be seen if two bright circles, one on the left of the screen and the other on the right of the screen, are flashed on and off in quick succession. At the right speed, your brain creates a blur that seems to move back and forth between the two circles. This is called the phi phenomenon. A similar, but different, phenomenon occurs if a series of circles are flashed on and off in sequence, though the flashing occurs more slowly than in the phi phenomenon. The circle appears to move from one location to the next, though the connecting blur associated with the phi phenomenon is not present. This slower-sequence illusion is known as the beta effect.
It is not necessary to use circles; any visual shape can produce apparent motion. Motion pictures use a sequence of still images, each similar to but slightly different from the one before, to create the experience of smooth movement. At the frame speeds of modern motion pictures, the phi phenomenon is the best explanation for our experience of smooth and natural movement. However, as visual artists discovered more than a century ago, even at slower change rates, the beta effect can produce the experience of a moving image.
Like vision and all the other senses, hearing begins with transduction. Sound waves collected by our ears are converted to neural impulses, which are sent to the brain where they are integrated with past experience and interpreted as the sounds we experience. The human ear is sensitive to a wide range of sounds, ranging from the faint tick of a clock in a nearby room to the roar of a rock band at a nightclub, and we have the ability to detect very small variations in sound. But the ear is particularly sensitive to sounds in the same frequency as the human voice. A mother can pick out her child’s voice from a host of others, and when we pick up the phone we quickly recognize a familiar voice. In a fraction of a second, our auditory system receives the sound waves, transmits them to the auditory cortex, compares them to stored knowledge of other voices, and identifies the caller.
Just as the eye detects light waves, the ear detects sound waves. Vibrating objects (such as the human vocal cords or guitar strings) cause air molecules to bump into each other and produce sound waves, which travel from their source as peaks and valleys much like the ripples that expand outward when a stone is tossed into a pond. Unlike light waves, which can travel in a vacuum, sound waves are carried within mediums such as air, water, or metal, and it is the changes in pressure associated with these mediums that the ear detects.
As with light waves, we detect both the wavelength and the amplitude of sound waves. The frequency of a sound wave—the number of waves that arrive per second—determines our perception of pitch, the perceived frequency of a sound. Longer sound waves have lower frequency and produce a lower pitch, whereas shorter waves have higher frequency and a higher pitch.
The amplitude, or height of the sound wave, determines how much energy it contains and is perceived as loudness (the degree of sound volume). Larger waves are perceived as louder. Loudness is measured using the unit of relative loudness known as the decibel. Zero decibels represents the absolute threshold for human hearing, below which we cannot hear a sound. Each increase of 10 decibels represents a tenfold increase in the loudness of the sound. This means that 20 decibels is 10 times louder than 10 decibels, and 30 decibels is 100 times louder (10 × 10) than 10 decibels. The sound of a typical conversation (about 60 decibels) is 1,000 times louder than the sound of a faint whisper (30 decibels), whereas the sound of a jackhammer (130 decibels) is 10 billion times louder than the whisper.
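The decibel arithmetic above can be checked directly: if each 10-decibel step is a tenfold increase, then the loudness ratio between two sounds is 10 raised to the decibel difference divided by 10.

```python
def loudness_ratio(db_a, db_b):
    """How many times 'louder' sound A is than sound B,
    given that each 10-dB step is a tenfold increase."""
    return 10 ** ((db_a - db_b) / 10)

conversation_vs_whisper = loudness_ratio(60, 30)   # 1,000 times louder
jackhammer_vs_whisper = loudness_ratio(130, 30)    # 10 billion times louder
```

The 30-decibel gap between a whisper and a conversation thus works out to 10 × 10 × 10 = 1,000, matching the figures in the text.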
Audition begins in the pinna, or auricle, the external and visible part of the ear, which is shaped like a funnel to draw in sound waves and guide them into the auditory canal. At the end of the canal, the sound waves strike the tightly stretched, highly sensitive membrane known as the tympanic membrane (or eardrum), which vibrates with the waves. The resulting vibrations are relayed into the middle ear through three tiny bones, known as the ossicles—the hammer (or malleus), anvil (or incus), and stirrup (or stapes)—to the cochlea, a snail-shaped liquid-filled tube in the inner ear. The vibrations cause the oval window, the membrane covering the opening of the cochlea, to vibrate, disturbing the fluid inside the cochlea.
The movements of the fluid in the cochlea bend the hair cells of the inner ear, much in the same way that a gust of wind bends over wheat stalks in a field. The movements of the hair cells trigger nerve impulses in the attached neurons, which are sent to the auditory nerve and then to the auditory cortex in the brain. The cochlea contains about 16,000 hair cells, each of which holds a bundle of fibers known as cilia on its tip. The cilia are so sensitive that they can detect a movement that pushes them the width of a single atom. To put things in perspective, cilia swaying at the width of an atom is equivalent to the tip of the Eiffel Tower swaying by half an inch. 
Although loudness is directly determined by the number of hair cells that are vibrating, two different mechanisms are used to detect pitch. The frequency theory of hearing proposes that whatever the pitch of a sound wave, nerve impulses of a corresponding frequency will be sent to the auditory nerve. For example, a tone measuring 600 hertz will be transduced into 600 nerve impulses a second. This theory has a problem with high-pitched sounds, however, because the neurons cannot fire fast enough. About 1,000 nerve impulses a second is the maximum firing rate for the fastest neurons. To reach the necessary speed for higher frequency (pitch) sounds, the neurons work together in a sort of volley system in which different neurons fire in sequence, allowing us to detect sounds up to about 4,000 hertz.
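The volley arithmetic is simple: with a ceiling of about 1,000 impulses per second per neuron, the number of neurons that must take turns firing grows with the frequency of the sound. A minimal sketch, using the figures from the text:

```python
import math

MAX_FIRING_RATE = 1000  # approximate impulses per second for the fastest neurons

def neurons_needed(frequency_hz):
    """Neurons that must fire in alternation (a 'volley') for the
    combined impulse rate to match a sound's frequency."""
    return math.ceil(frequency_hz / MAX_FIRING_RATE)
```

A 600-hertz tone needs only a single neuron, while a 4,000-hertz tone—near the upper limit of the volley system—requires four neurons firing in sequence.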
Not only is frequency important, but location is critical as well. The cochlea relays information about the specific area, or place, in the cochlea that is most activated by the incoming sound. The place theory of hearing proposes that sounds of different frequencies set off waves in the cochlea that peak at different locations along the tube that makes up the cochlea. Higher tones peak at areas closest to the opening of the cochlea (near the oval window). Lower tones peak at areas near the narrow tip of the cochlea, at the opposite end. Pitch is therefore determined in part by the area of the cochlea where the wave of energy reaches its maximum point.
Just as having two eyes in slightly different positions allows us to perceive depth, so the fact that the ears are placed on either side of the head enables us to benefit from stereophonic, or three-dimensional, hearing. If a sound occurs on your left side, the left ear will receive the sound slightly sooner than the right ear, and the sound it receives will be more intense, allowing you to quickly determine the location of the sound. Although the distance between our two ears is only about 6 inches, and sound waves travel at 750 miles an hour, the time and intensity differences are easily detected.  When a sound is equidistant from both ears, such as when it is directly in front, behind, beneath or overhead, we have more difficulty pinpointing its location. It is for this reason that dogs (and people, too) tend to cock their heads when trying to pinpoint a sound, so that the ears receive slightly different signals.
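The timing difference the paragraph describes is easy to estimate from its own figures: roughly 6 inches (0.15 m) between the ears and sound traveling at about 750 miles per hour (roughly 335 m/s). For a sound directly to one side, the far ear receives it later by at most:

```python
EAR_SEPARATION_M = 0.15      # about 6 inches between the ears
SPEED_OF_SOUND_M_S = 335.0   # roughly 750 miles per hour

# Maximum interaural time difference: a sound directly to one side
# reaches the far ear this much later than the near ear.
max_itd_seconds = EAR_SEPARATION_M / SPEED_OF_SOUND_M_S
```

The result is under half a millisecond—yet the auditory system resolves delays this small, which is what makes stereophonic localization possible.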
More than 31 million Americans suffer from some kind of hearing impairment.  Conductive hearing loss is caused by physical damage to the ear (such as to the eardrums or ossicles) that reduces the ability of the ear to transfer vibrations from the outer ear to the inner ear. Sensorineural hearing loss, which is caused by damage to the cilia or to the auditory nerve, is less common overall but frequently occurs with age.  The cilia are extremely fragile, and by the time we are 65 years old, we will have lost 40% of them, particularly those that respond to high-pitched sounds. 
Prolonged exposure to loud sounds will eventually create sensorineural hearing loss as the cilia are damaged by the noise. People who constantly operate noisy machinery without using appropriate ear protection are at high risk of hearing loss, as are people who listen to loud music on their headphones or who engage in noisy hobbies, such as hunting or motorcycling. Sounds that are 85 decibels or more can cause damage to your hearing, particularly if you are exposed to them repeatedly. Sounds of more than 130 decibels are dangerous even if you are exposed to them infrequently. People who experience tinnitus (a ringing or a buzzing sensation) after being exposed to loud sounds have very likely experienced some damage to their cilia. Taking precautions when being exposed to loud sound is important, as cilia do not grow back.
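The gap between those two thresholds is larger than the numbers suggest, because the decibel scale is logarithmic: each 10 dB step represents a tenfold increase in sound intensity. A small sketch comparing the 85 dB and 130 dB figures above:

```python
# Decibels are logarithmic: every 10 dB step multiplies sound
# intensity by 10, so intensity ratio = 10 ** (dB difference / 10).
def intensity_ratio(db_a, db_b):
    """How many times more intense a db_b sound is than a db_a sound."""
    return 10 ** ((db_b - db_a) / 10)

print(intensity_ratio(85, 130))  # ~31623: a 130 dB sound carries over
                                 # 30,000 times the intensity of 85 dB
```

This is why 130 dB sounds are dangerous even with brief exposure, while 85 dB sounds mainly threaten hearing through repeated exposure.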
While conductive hearing loss can often be improved through hearing aids that amplify the sound, such aids are of little help with sensorineural hearing loss. But if the auditory nerve is still intact, a cochlear implant may be used. A cochlear implant is a device made up of a series of electrodes that are placed inside the cochlea. The device serves to bypass the hair cells by stimulating the auditory nerve cells directly. The latest implants utilize place theory, enabling different spots on the implant to respond to different levels of pitch. The cochlear implant can help children who would otherwise be deaf to hear, and if the device is implanted early enough, these children can frequently learn to speak, often as well as hearing children do.
Although vision and hearing are by far the most important, human sensation is rounded out by four other senses, each of which provides an essential avenue to a better understanding of and response to the world around us. These other senses are touch, taste, smell, and our sense of body position and movement (proprioception).
Taste is important not only because it allows us to enjoy the food we eat, but, even more crucially, because it leads us toward foods that provide energy (sugar, for instance) and away from foods that could be harmful. Many children are picky eaters for a reason—they are biologically predisposed to be very careful about what they eat. Together with the sense of smell, taste helps us maintain appetite, assess potential dangers (such as the odor of a gas leak or a burning house), and avoid eating poisonous or spoiled food.
Our ability to taste begins at the taste receptors on the tongue. The tongue detects six different taste sensations: sweet, salty, sour, bitter, piquant (spicy), and umami (savory). Umami is a savory taste associated with meats, cheeses, soy, seaweed, and mushrooms, and is particularly prominent in monosodium glutamate (MSG), a popular flavor enhancer.
Our tongues are covered with taste buds, which are designed to sense chemicals in the mouth. Most taste buds are located in the top outer edges of the tongue, but there are also receptors at the back of the tongue as well as on the walls of the mouth and at the back of the throat. As we chew food, it dissolves and enters the taste buds, triggering nerve impulses that are transmitted to the brain. Human tongues are covered with 2,000 to 10,000 taste buds, and each bud contains between 50 and 100 taste receptor cells. Taste buds are activated very quickly; a salty or sweet taste that touches a taste bud for even one tenth of a second will trigger a neural impulse. On average, taste buds live for about 5 days, after which new taste buds are created to replace them. As we get older, however, the rate of creation decreases, making us less sensitive to taste. This change helps explain why some foods that seem so unpleasant in childhood are more enjoyable in adulthood.
The area of the sensory cortex that responds to taste is in a very similar location to the area that responds to smell, a fact that helps explain why the sense of smell also contributes to our experience of the things we eat. You may remember having had difficulty tasting food when you had a bad cold, and if you block your nose and taste slices of raw potato, apple, and parsnip, you will not be able to taste the differences between them. Our experience of texture in a food (the way we feel it on our tongues) also influences how we taste it.
As we breathe in air through our nostrils, we inhale airborne chemical molecules, which are detected by the 10 million to 20 million receptor cells embedded in the olfactory membrane of the upper nasal passage. The olfactory receptor cells are topped with tentacle-like protrusions that contain receptor proteins. When an odor receptor is stimulated, the membrane sends neural messages up the olfactory nerve to the brain.
We have approximately 1,000 types of odor receptor cells,  and it is estimated that we can detect 10,000 different odors.  The receptors come in many different shapes and respond selectively to different smells. Like a lock and key, different chemical molecules “fit” into different receptor cells, and odors are detected according to their influence on a combination of receptor cells. Just as the 10 digits from 0 to 9 can combine in many different ways to produce an endless array of phone numbers, odor molecules bind to different combinations of receptors, and these combinations are decoded in the olfactory cortex. Women tend to have a more acute sense of smell than men. The sense of smell peaks in early adulthood and then begins a slow decline. By ages 60 to 70, the sense of smell has become sharply diminished.
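The phone-number analogy can be made concrete with a little counting. As a deliberately simplified sketch (real odor codes are not limited to fixed-size combinations), suppose an odor activated exactly k of the roughly 1,000 receptor types:

```python
from math import comb

# Count how many distinct odors could be coded if each odor activated
# exactly k of the ~1,000 receptor types (a simplified coding scheme).
n_receptor_types = 1000
for k in (1, 2, 3):
    print(k, comb(n_receptor_types, k))
# Even the 3-receptor combinations alone (166,167,000) dwarf the
# ~10,000 odors we are estimated to detect.
```

Combinatorial coding is why a modest inventory of receptor types can distinguish far more smells than there are receptors.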
The sense of touch is essential to human development. Infants thrive when they are cuddled and attended to, but not if they are deprived of human contact.    Touch communicates warmth, caring, and support, and is an essential part of the enjoyment we gain from our social interactions with close others.  
The skin, the largest organ in the body, is the sensory organ for touch. The skin contains a variety of nerve endings, combinations of which respond to particular types of pressures and temperatures. When you touch different parts of the body, you will find that some areas are more ticklish, whereas other areas respond more to pain, cold, or heat.
The thousands of nerve endings in the skin respond to four basic sensations: pressure, hot, cold, and pain. Only the sensation of pressure has its own specialized receptors; other sensations are created by combinations of these four. For instance:
Once the nerve endings receive sensory information, such as pain or heat, the information travels along sensory pathways to the central nervous system. Although some sensations may stop at the spinal cord, resulting in a reflex response, most continue on toward the brain for interpretation. The brain then processes the information and directs your body to respond based on it (e.g., your brain recognizes the sensation of pain when a mosquito bites you and sends the message to your muscles to slap your arm).
The skin is important not only in providing information about touch and temperature but also in proprioception—the ability to sense the position and movement of our body parts. Proprioception is accomplished by specialized neurons located in the skin, joints, bones, ears, and tendons, which send messages about the compression and the contraction of muscles throughout the body. Without this feedback from our bones and muscles, we would be unable to play sports, walk, or even stand upright. In fact, learning any new motor skill involves your proprioceptive sense. Imagine trying to hit a baseball if you had to watch your feet, too. How would you be able to keep your eye on the ball coming at you? Fortunately, proprioception is so automatic that it normally operates without our even noticing it.
The ability to keep track of where the body is moving is also provided by the vestibular system, a set of liquid-filled areas in the inner ear that monitors the head’s position and movement, maintaining the body’s balance. As you can see in the figure below, the vestibular system includes the semicircular canals and the vestibular sacs. These sacs connect the canals with the cochlea. The semicircular canals sense the rotational movements of the body and the vestibular sacs sense linear accelerations. The vestibular system sends signals to the neural structures that control eye movement and to the muscles that keep the body upright.
We do not enjoy it, but the experience of pain is how the body informs us that we are in danger. The burn when we touch a hot radiator and the sharp stab when we step on a nail lead us to change our behavior, preventing further damage to our bodies. People who cannot experience pain are in serious danger of damage from wounds that others with pain would quickly notice and attend to.
The gate-control theory of pain proposes that pain is determined by the operation of two types of nerve fibers in the spinal cord. One set of smaller nerve fibers carries pain from the body to the brain, whereas a second set of larger fibers is designed to stop or start (as a gate would) the flow of pain.  It is for this reason that massaging an area where you feel pain may help alleviate it—the massage activates the large nerve fibers that block the pain signals of the small nerve fibers. 
Experiencing pain is a lot more complicated than simply responding to neural messages, however. It is also a matter of perception. We feel pain less when we are busy focusing on a challenging activity, which can help explain why sports players may feel their injuries only after the game. We also feel less pain when we are distracted by humor. Our experience of pain is also affected by our expectations and moods. If you were having a bad day and feeling frustrated, for example, stubbing your toe may feel especially painful. Thankfully, our bodies have a way of dealing with pain, soothing it through the brain’s release of endorphins, which are natural hormonal painkillers. The release of endorphins can explain the euphoria experienced while running a marathon.
The eyes, ears, nose, tongue, and skin sense the world around us, and in some cases perform preliminary information processing on the incoming data. But by and large, we do not experience sensation—we experience the outcome of perception, the total package that the brain puts together for us from the pieces it receives through our senses. When we look out the window at a view of the countryside, or when we look at the face of a good friend, we don’t just see a jumble of colors and shapes—we see, instead, an image of a countryside or an image of a friend.
Our perception is also influenced by psychological factors such as our belief system and our expectations. For example, every so often, we may hear of people spotting UFOs in the sky, only to discover later that they were hot air balloons or military aircraft. Similarly, some people believe that they have seen images of divinity in unexpected places, such as the 10-year-old who saw the Virgin Mary on his grilled cheese sandwich. Our beliefs, expectations, and culture therefore influence how we perceive sensory information.
This meaning-making involves the automatic operation of a variety of essential perceptual processes. One of these is sensory interaction: the working together of different senses to create experience. Sensory interaction is involved when taste, smell, and texture combine to create the flavor we experience in food. It is also involved when we enjoy a movie because of the way the images and the music work together.
Although you might think that we understand speech only through our sense of hearing, it turns out that the visual aspect of speech is also important. One example of sensory interaction is shown in the McGurk effect—an error in perception that occurs when we misperceive sounds because the audio and visual parts of the speech are mismatched.
The McGurk effect is a great example of sensory interaction, the mixing of information from more than one sensory system. The McGurk effect is an example of a phenomenon that does not occur in the natural world, but it still tells us about normal, natural information processing. In the natural world, what a person says and how that person’s lips move as he is talking are completely linked. You cannot move your lips one way and say something that doesn’t conform to the way your lips move. The closest thing to an exception is ventriloquism, where a person talks without moving his lips.
The McGurk effect is possible because we can use computer-based video editing to create an artificial experience. A psychologist can film a person saying various things, and then separate and recombine sound segments and visual segments. To keep the experience simple, the McGurk effect is usually created by having a person say simple nonsense sounds, like /GA/ or /BA/.
Watch the video below. Be sure you follow the instructions of the narrator, watching the screen when you are instructed to do that and not watching the screen when you are instructed to close your eyes. As instructed, write down what you actually hear in each condition. After you have completed this exercise, we will discuss the results.
See the video below on the McGurk effect:
To fully appreciate the McGurk effect, notice how you make the /BA/ sound. Try it. You close your lips and then open them to say /BA/. So you close and then open your air passage at the front of your mouth, at the lips. Now make the /GA/ sound. Notice that you close the air passage at the back of your mouth by touching the middle of your tongue to the back of your mouth. So /BA/ is made by closing the front of the air passage and /GA/ is made by closing the back of the air passage.
And /DA/? Notice that you make the /DA/ sound by putting the tip of your tongue on the top of your mouth near the middle of the air passage. What you hear is a compromise: Your ears HEAR a sound made by closing the front of the mouth and SEE a sound made by closing the back of your mouth, so your brain—your perceptual system—splits the difference and PERCEIVES a sound made by closing the middle of the mouth.
What does the McGurk effect tell us about normal processing? This automatic combination of visual and auditory information is not something that your brain throws together to make psychologists happy. The McGurk effect tells us that we always combine visual and auditory information when we are conversing with another person. Of course, in normal life, both senses tell us the same story—the mouth is making the movements that really do match the sounds. But this integration of sound and sight may be very useful in a noisy environment or if the person speaking doesn’t speak clearly. Our use of visual information allows us to automatically fill in the gaps, allowing us to perceive what a person says more clearly than our auditory system could have done alone.
Other examples of sensory interaction include the experience of nausea that can occur when the sensory information being received from the eyes and the body does not match information from the vestibular system  and synesthesia—an experience in which one sensation (e.g., hearing a sound) creates experiences in another (e.g., vision). Most people do not experience synesthesia, but those who do link their perceptions in unusual ways, for instance, by experiencing color when they taste a particular food or by hearing sounds when they see certain objects. 
Synesthesia is a phenomenon that has received a lot of attention in recent years. It occurs when sensory input stimulates both normal sensory experiences as well as sensory experiences either from a different sense or another dimension of that same sense. An example of the first type is sound-color synesthesia: Sounds lead to the perception of those sounds, but also to visual experiences of colors, almost like fireworks. An example of the second type is number-color synesthesia: Looking at numbers not only leads to the visual experience of those numbers, but the numbers are experienced in particular colors, so, for instance, 9 might be red and 5 might be yellow, and so on.
Watch the two videos below and then we will analyze synesthesia more fully.
See the video below on number-color synesthesia:
See the video below on color-sound synesthesia:
Another important perceptual process is selective attention—the ability to focus on some sensory inputs while tuning out others. View the videos in the exercise below. You will see how surprisingly little we notice about things happening right in front of our faces. Perhaps the process of selective attention can help you see why the security guards completely missed the fact that the Chaser group’s motorcade was a fake—they focused on some aspects of the situation, such as the color of the cars and the fact that they were there at all, and completely ignored others (the details of the security information).
See the video below on selective attention:
In the video below, the researcher who created the gorilla video and used it in his research, Dan Simons, explains more about his work.
Next, watch the video below with British illusionist Derren Brown demonstrating how little we actually see.
Now watch this British safe driving advertisement:
Selective attention also allows us to focus on a single talker at a party while ignoring other conversations that are occurring around us.  Without this automatic selective attention, we’d be unable to focus on the single conversation we want to hear. However, selective attention is not complete; we also at the same time monitor what is happening in the channels we are not focusing on. Perhaps you have had the experience of being at a party and talking to someone in one part of the room, when suddenly you hear your name being mentioned by someone in another part of the room. This cocktail party phenomenon shows us that, although selective attention is limiting what we process, we are nevertheless at the same time doing a lot of unconscious monitoring of the world around us—you did not know you were attending to the background sounds of the party, but evidently you were.
One of the major problems in perception is to ensure that we always perceive the same object in the same way, despite the fact that the sensations it creates on our receptors change dramatically. The ability to perceive a stimulus as constant despite changes in sensation is known as perceptual constancy. Consider our image of a door as it swings. When it is closed, we see it as rectangular, but when it is open, we see only its edge and it appears as a line. But we never perceive the door as changing shape as it swings—perceptual mechanisms take care of the problem for us by allowing us to see a constant shape.
The figure below illustrates this point. In natural circumstances, it never occurs to us that a door is changing shapes as it opens in front of our eyes. As the figure shows, the shape that hits the retina undergoes a significant transformation, but our perceptual system simply interprets it as a rectangular door that is changing in its orientation to us.
The visual system also corrects for color constancy. Imagine that you are wearing blue jeans and a bright white t-shirt. When you are outdoors, both colors will be at their brightest, but you will still perceive the white t-shirt as bright and the blue jeans as darker. When you go indoors, the light shining on the clothes will be significantly dimmer, but you will still perceive the t-shirt as bright. This is because we put colors in context and see that, compared to its surroundings, the white t-shirt reflects the most light.  In the same way, a green leaf on a cloudy day may reflect the same wavelength of light as a brown tree branch does on a sunny day. Nevertheless, we still perceive the leaf as green and the branch as brown.
To see an example of color constancy, look at the picture of the hot air balloon below. If you were looking at it in a natural setting, not thinking about your psychology class, you would simply think of the main colors: green, red, white, and blue. However, our camera allows us to take a picture frozen in time to analyze. What we can easily see is that the four colors are extremely variable. Look at the color patches to the right. They are all taken from the three bands of color (green, white, and blue) in the lower half of the picture. Notice how the actual hues vary considerably, although we would simply perceive them as the same color if we were not concentrating on their variability. Color constancy is produced by your perceptual system, taking sensory input and compensating for changes in lighting.
A third kind of constancy is size constancy. Our eyes see objects that differ radically in size, but our perceptual system compensates for this. In the classroom scene below, notice the huge difference in actual size of people in the picture as you go from front to back. This is what the retina detects. But your perceptual system compensates for these differences, using the knowledge that all of the people are approximately the same size.
Our perceptual system compensates for distance, using depth cues from the space around objects to help it make adjustments. The figure below shows how physical context influences our perceptual system. On the left, the monster in front seems smaller than the one in the back because our perceptual system adjusts the perceived size to take the distance into the tunnel into account. On the right, we remove the tunnel, and the two monsters are easily seen as being identical in size.
Instructions: The image below illustrates the results of our perceptual system’s attempt to make sense of a complicated pattern—a pattern designed to fool the perceptual system. This is an example of color constancy.
Click and drag to move the rows of the figure (marked a through f) around to help you answer these questions.
Instructions: The picture below is another illustration of our perceptual system’s attempt to compensate for a complex world. In this case, it appears that some squares are in a shadow. Rather than assuming that those squares are actually different from the rest of the checkerboard, the perceptual system compensates for the shadow.
Click and drag to move squares A and B around to help you answer these questions.
Although our perception is very accurate, it is not perfect. Illusions occur when the perceptual processes that normally help us correctly perceive the world around us are fooled by a particular situation so that we see something that does not exist or that is incorrect. We will look at some perceptual illusions in the exercises below to see what we can learn from them.
Instructions: For the figures below, adjust the length of the center bar on the top figure so it looks to you to be the same length as the middle bar in the lower figure. Simply rely on what your perceptual system tells you looks equal. Don’t try to compensate for any illusions or use a straightedge to help you make the judgment. Just use your visual system.
The figures you just worked with form the Mueller-Lyer illusion (see also the figure below). The middle line on the figure with the arrows pointing in ( >--< ) looks longer to most people than the middle line on the one with the arrows pointing out (<-->) when they are both actually the same length.
Many psychologists believe that this illusion is, in part, the result of the way we interpret the angled lines at the ends of the figure. Based on our experience with rooms (see figure below), when we see lines pointing out away from the center line (building on the left), we tend to interpret that as meaning that the center line is close to us and the angled lines are going away into the distance. But when the angled lines on the end point in toward the center line (room interior on the right), we interpret that as meaning that the center line is far away and the angled lines are coming toward us. Our perceptual system then compensates for these assumptions and gives us the experience of perceiving the “more distant line” (the one on the right) as being longer than it really is.
The moon illusion refers to the fact that the moon is perceived to be about 50% larger when it is near the horizon than when it is seen overhead, despite the fact that both moons are the same size and cast the same size retinal image. The monocular depth cues of position and aerial perspective create the illusion that things that are lower and more hazy are farther away. The skyline of the horizon (trees, clouds, outlines of buildings) also gives a cue that the moon is far away, compared to a moon at its zenith. If we look at a horizon moon through a tube of rolled up paper, taking away the surrounding horizon cues, the moon will immediately appear smaller.
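The claim that both moons cast the same size retinal image follows from the moon’s angular size, which is essentially constant wherever the moon sits in the sky. A sketch of the calculation using standard astronomical values (the diameter and distance figures are not from the text):

```python
import math

# Angular size of the moon: the visual angle its disk subtends at the
# eye. Standard values: diameter ~3,474 km, mean distance ~384,400 km.
diameter_km, distance_km = 3474, 384400
angle_deg = math.degrees(2 * math.atan(diameter_km / (2 * distance_km)))
print(f"Moon's angular size: {angle_deg:.2f} degrees")  # about 0.52 degrees
```

About half a degree at the horizon and half a degree overhead: the 50% difference we perceive is entirely the work of the perceptual system, not the retina.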
The moon always looks larger on the horizon than when it is high above. But if we take away the surrounding distance cues of the horizon, the illusion disappears. One explanation is that our perceptual system compares the curvature of the moon with the long, nearly flat curvature of the earth at the horizon. Against that expanse, the moon’s curvature seems slight, and the perceptual system compensates by exaggerating the moon’s apparent size. When the moon is high in the sky, those comparison cues are absent, and the perceptual system no longer inflates its size.
We can’t create the moon illusion in this course, but a similar illusion is the Ebbinghaus illusion.
Instructions: Adjust the two center circles so that they appear to you to be the same size. Don’t use any aids to measure the size or try to compensate for the illusion. Rely on your visual system.
The figure below is called the Wundt illusion after its creator, 19th-century German psychologist Wilhelm Wundt.
The Ponzo illusion operates on the same principle. The monocular depth cue of linear perspective leads us to believe that, given two similar objects, the distant one can only cast the same size retinal image as the closer object if it is larger. The topmost bar therefore appears longer.
The Ponzo illusion arises from the monocular depth cue of linear perspective misleading us: both bars are the same size even though the top one looks larger.
Instructions: In the figure above, the bottom yellow line is fixed. Your task is to adjust the top yellow line until it LOOKS the same length as the bottom one. Please use only your eyes to guide your decision. Don’t try to compensate for the illusion. Just adjust the line until your perceptual system tells you that the lines look to be the same length.
Illusions demonstrate that our perception of the world around us may be influenced by our prior knowledge. But the fact that illusions exist in some cases does not mean the perceptual system is generally inaccurate—in fact, humans normally become so closely attuned to their environment that the physical body and the particular environment we sense and perceive become embodied—that is, built into and linked with our cognition—such that the worlds around us become part of our brain. The close relationship between people and their environments means that, although illusions can be created in the laboratory and under some unique situations, they may be less common with active observers in the real world.
Here is a great spot to see many more illusions and to learn more about why they occur.
A Google image search will also turn up interesting illusions.
It is a continuous challenge living with post-traumatic stress disorder (PTSD), and I’ve suffered from it for most of my life. I can look back now and gently laugh at all the people who thought I had the perfect life. I was young, beautiful, and talented, but unbeknownst to them, I was terrorized by an undiagnosed debilitating mental illness.
Having been properly diagnosed with PTSD at age 35, I know that there is not one aspect of my life that has gone untouched by this mental illness. My PTSD was triggered by several traumas, most importantly a sexual attack at knifepoint that left me thinking I would die. I would never be the same after that attack. For me there was no safe place in the world, not even my home. I went to the police and filed a report. Rape counselors came to see me while I was in the hospital, but I declined their help, convinced that I didn’t need it. This would be the most damaging decision of my life.
For months after the attack, I couldn’t close my eyes without envisioning the face of my attacker. I suffered horrific flashbacks and nightmares. For four years after the attack I was unable to sleep alone in my house. I obsessively checked windows, doors, and locks. By age 17, I’d suffered my first panic attack. Soon I became unable to leave my apartment for weeks at a time, ending my modeling career abruptly. This just became a way of life. Years passed when I had few or no symptoms at all, and I led what I thought was a fairly normal life, just thinking I had a “panic problem.”
Then another traumatic event retriggered the PTSD. It was as if the past had evaporated, and I was back in the place of my attack, only now I had uncontrollable thoughts of someone entering my house and harming my daughter. I saw violent images every time I closed my eyes. I lost all ability to concentrate or even complete simple tasks. Normally social, I stopped trying to make friends or get involved in my community. I often felt disoriented, forgetting where, or who, I was. I would panic on the freeway and became unable to drive, again ending a career. I felt as if I had completely lost my mind. For a time, I managed to keep it together on the outside, but then I became unable to leave my house again.
Around this time I was diagnosed with PTSD. I cannot express to you the enormous relief I felt when I discovered my condition was real and treatable. I felt safe for the first time in 32 years. Taking medication and undergoing behavioral therapy marked the turning point in my regaining control of my life. I’m rebuilding a satisfying career as an artist, and I am enjoying my life. The world is new to me and not limited by the restrictive vision of anxiety. It amazes me to think back to what my life was like only a year ago, and just how far I’ve come. For me there is no cure, no final healing. But there are things I can do to ensure that I never have to suffer as I did before being diagnosed with PTSD. I’m no longer at the mercy of my disorder, and I would not be here today had I not had the proper diagnosis and treatment. The most important thing to know is that it’s never too late to seek help. 
In the early part of the 20th century, Russian physiologist Ivan Pavlov (1849–1936) was studying the digestive system of dogs when he noticed an interesting behavioral phenomenon: The dogs began to salivate when the lab technicians who normally fed them entered the room, even though the dogs had not yet received any food. Pavlov realized that the dogs were salivating because they knew that they were about to be fed; the dogs had begun to associate the arrival of the technicians with the food that soon followed their appearance in the room.
With his team of researchers, Pavlov began studying this process in more detail. He conducted a series of experiments in which, over a number of trials, dogs were exposed to a sound immediately before receiving food. He systematically controlled the onset of the sound and the timing of the delivery of the food, and recorded the amount of the dogs’ salivation. Initially the dogs salivated only when they saw or smelled the food, but after several pairings of the sound and the food, the dogs began to salivate as soon as they heard the sound. The animals had learned to associate the sound with the food that followed.
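The gradual strengthening of the sound-food association over repeated pairings can be sketched with the Rescorla-Wagner learning rule, a standard modern model of classical conditioning (this is a later formalization, not Pavlov’s own analysis):

```python
# Rescorla-Wagner sketch: on each CS-US pairing, the associative
# strength V of the CS moves a fraction of the remaining distance
# toward its maximum possible value. Learning rate and maximum are
# illustrative parameters, not measurements.
def condition(n_pairings, learning_rate=0.3, v_max=1.0):
    v = 0.0  # associative strength of the sound (CS) before training
    for _ in range(n_pairings):
        v += learning_rate * (v_max - v)  # update after each pairing
    return v

for n in (1, 3, 10):
    print(n, round(condition(n), 3))
# Strength rises quickly at first and then levels off near the
# maximum, matching the negatively accelerated acquisition curves
# seen in conditioning experiments.
```

The same function also mirrors the later exercises in this unit: more CS-US pairings before a CS-alone test yield a stronger conditioned response.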
Use the next exercise to sort out the sometimes tricky terminology of classical conditioning.
In the first step, we focus on the initial conditions before conditioning has taken place. After looking at the pictures below, determine the unconditioned stimulus and the unconditioned response that results.
Now you must predict what should happen when a hungry dog is presented with food.
In this step, we find a neutral stimulus—a stimulus that produces no response. After looking at the pictures below, determine what the neutral stimulus is and the response that it causes. In this test, the dog is not hungry.
Now you must predict what should happen when a hungry dog hears the neutral stimulus: a tone.
In this step, we go through the actual conditioning process, associating a conditioned stimulus (CS) with an unconditioned stimulus (US). When we are finished, the neutral stimulus (NS) will have become the conditioned stimulus. After looking at the pictures below, determine the conditioned and unconditioned stimuli and the unconditioned response that they cause. Note that this sequence differs from the first one you did because there are always two stimuli present: first a CS and then a US.
This is the last part of this exercise. We want to see if conditioning—associative learning—has taken place. In the previous exercise, the US was always present, so it could produce the UR. Now we remove the US to see if the animal has learned to produce the same response when only the CS is present. If so, we will rename the response, when produced only by the CS, the Conditioned Response (CR). After looking at the pictures below, determine what the conditioned stimulus is and the conditioned response that it causes.
This exercise is best understood in relation to the previous exercise on learning. Imagine a series of learning trials in which the CS is followed by the US, and the UR is measured. Now, in this exercise, suppose that on the next trial we present only the CS, without the US. What happens? In Trial 1 we pair the CS and US only once before presenting the CS alone; in Trial 2 we pair the CS and US twice before presenting the CS alone; and in Trial 3 we pair the CS and US three times before presenting the CS alone.
Pavlov identified a fundamental associative learning process called classical conditioning. Classical conditioning refers to learning that occurs when a neutral stimulus (e.g., a tone) becomes associated with a stimulus (e.g., food) that naturally produces a specific behavior. After the association is learned, the previously neutral stimulus is sufficient to produce the behavior. As you can see in the following figure, psychologists use specific terms to identify the stimuli and the responses in classical conditioning. The unconditioned stimulus (US) is something (such as food) that triggers a naturally occurring response, and the unconditioned response (UR) is the naturally occurring response (such as salivation) that follows the unconditioned stimulus. The conditioned stimulus (CS) is a neutral stimulus that, after being repeatedly presented prior to the unconditioned stimulus, evokes a response similar to the response to the unconditioned stimulus. In Pavlov’s experiment, the sound of the tone served as the conditioned stimulus that, after learning, produced the conditioned response (CR), which is the acquired response to the formerly neutral stimulus. Note that the UR and the CR are the same behavior—in this case salivation—but they are given different names because they are produced by different stimuli (the US and the CS, respectively).
Conditioning is evolutionarily beneficial because it allows organisms to develop expectations that help them prepare for both good and bad events. Imagine, for instance, that an animal first smells a new food, eats it, and then gets sick. If the animal can learn to associate the smell (CS) with the food (US), then it will quickly learn that the food creates the negative outcome and will not eat it next time.
A researcher is testing young children to see if they can learn to associate a red circle with an event that the child enjoys. She sets up an experiment where a toy bear dances. The infants predictably love the toy bear and stare at it when it makes noise and dances. She then trains the child by showing a big red circle on a screen in front of the child and, immediately after that, the bear appears and dances off to the side. The bear is only visible right after the red circle appears and the child must turn his or her head to see the bear.
After he had demonstrated that learning could occur through association, Pavlov moved on to study the variables that influenced the strength and the persistence of conditioning. In some studies, after the conditioning had taken place, Pavlov presented the sound repeatedly but without presenting the food afterward. As you can see, after the initial acquisition (learning) phase in which the conditioning occurred, when the CS was then presented alone, the behavior rapidly decreased—the dogs salivated less and less to the sound, and eventually the sound did not elicit salivation at all. Extinction is the reduction in responding that occurs when the conditioned stimulus is presented repeatedly without the unconditioned stimulus.
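The rise of responding during acquisition and its decline during extinction can be illustrated with a small toy simulation. This is only a sketch using a simplified, Rescorla-Wagner-style update; it is not part of Pavlov's own analysis, and the learning rate and trial counts are arbitrary choices.

```python
# Toy simulation of acquisition and extinction of a conditioned response.
# A simplified, Rescorla-Wagner-style update, offered only as an illustration;
# the learning rate and trial counts are arbitrary, not values from Pavlov's
# experiments.

def run_trials(strength, us_present, n_trials, rate=0.3):
    """Nudge associative strength toward 1.0 when the US follows the CS
    (acquisition) or toward 0.0 when the CS appears alone (extinction)."""
    history = []
    for _ in range(n_trials):
        target = 1.0 if us_present else 0.0
        strength += rate * (target - strength)  # move partway toward the target
        history.append(strength)
    return strength, history

s = 0.0  # no association before conditioning begins
s, acquisition = run_trials(s, us_present=True, n_trials=10)   # responding to the CS grows
s, extinction = run_trials(s, us_present=False, n_trials=10)   # and then fades away
```

Plotting the `acquisition` values followed by the `extinction` values reproduces the rise-then-fall shape of the behavior just described.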
Although at the end of the first extinction period the CS was no longer producing salivation, the effects of conditioning had not entirely disappeared. Pavlov found that, after a pause, sounding the tone again elicited salivation, although to a lesser extent than before extinction took place. The increase in responding to the CS following a pause after extinction is known as spontaneous recovery. When Pavlov again presented the CS alone, the behavior again showed extinction.
For each example, select the term that best describes it.
Although the behavior has disappeared, extinction is never complete. If conditioning is again attempted, the animal will learn the new associations much faster than it did the first time. Pavlov also experimented with presenting new stimuli that were similar, but not identical to, the original conditioned stimulus. For instance, if the dog had been conditioned to being scratched before the food arrived, the stimulus would be changed to being rubbed rather than scratched. He found that the dogs also salivated upon experiencing the similar stimulus, a process known as generalization. Generalization refers to the tendency to respond to stimuli that resemble the original conditioned stimulus. The ability to generalize has important evolutionary significance. If we eat some red berries and they make us sick, it would be a good idea to think twice before we eat some purple berries. Although the berries are not exactly the same, they nevertheless are similar and may have the same negative properties.
Lewicki  conducted research that demonstrated the influence of stimulus generalization and how quickly and easily it can happen. In his experiment, high school students first had a brief interaction with a female experimenter who had short hair and glasses. The study was set up so that the students had to ask the experimenter a question, and (according to random assignment) the experimenter responded either in a negative way or a neutral way toward the students. Then the students were told to go into a second room in which two experimenters were present, and to approach either one of them. However, the researchers arranged it so that one of the two experimenters looked a lot like the original experimenter, while the other one did not (she had longer hair and no glasses). The students were significantly more likely to avoid the experimenter who looked like the earlier experimenter when that experimenter had been negative to them than when she had treated them more neutrally. The participants showed stimulus generalization such that the new, similar-looking experimenter created the same negative response in the participants as had the experimenter in the prior session.
The flip side of generalization is discrimination—the tendency to respond differently to stimuli that are similar but not identical. Pavlov’s dogs quickly learned, for example, to salivate when they heard the specific tone that had preceded food, but not upon hearing similar tones that had never been associated with food. Discrimination is also useful—if we do try the purple berries, and if they do not make us sick, we will be able to make the distinction in the future. And we can learn that although the two people in our class, Courtney and Sarah, may look a lot alike, they are nevertheless different people with different personalities.
In some cases, an existing conditioned stimulus can serve as an unconditioned stimulus for a pairing with a new conditioned stimulus—a process known as second-order conditioning. In one of Pavlov’s studies, for instance, he first conditioned the dogs to salivate to a sound, and then repeatedly paired a new CS, a black square, with the sound. Eventually he found that the dogs would salivate at the sight of the black square alone, even though it had never been directly associated with the food. Secondary conditioners in everyday life include our attractions to things that stand for or remind us of something else, such as when we feel good on a Friday because it has become associated with the paycheck that we receive on that day, which itself is a conditioned stimulus for the pleasures that the paycheck buys us.
Scientists associated with the behaviorist school argued that all learning is driven by experience, and that nature plays no role. Classical conditioning, which is based on learning through experience, represents an example of the importance of the environment. But classical conditioning cannot be understood entirely in terms of experience. Nature also plays a part, as our evolutionary history has made us better able to learn some associations than others.
Clinical psychologists make use of classical conditioning to explain the learning of a phobia—a strong and irrational fear of a specific object, activity, or situation. For example, driving a car is a neutral event that would not normally elicit a fear response in most people. But if a person were to experience a panic attack in which he suddenly experienced strong negative emotions while driving, he may learn to associate driving with the panic response. The driving has become the CS that now creates the fear response.
Psychologists have also discovered that people do not develop phobias to just anything. Although people may in some cases develop a driving phobia, they are more likely to develop phobias toward objects (such as snakes, spiders, heights, and open spaces) that have been dangerous to people in the past. In modern life, it is rare for humans to be bitten by spiders or snakes, to fall from trees or buildings, or to be attacked by a predator in an open area. Being injured while riding in a car or being cut by a knife are much more likely. But in our evolutionary past, the potential of being bitten by snakes or spiders, falling out of a tree, or being trapped in an open space were important evolutionary concerns, and therefore humans are still evolutionarily prepared to learn these associations over others.  
Another evolutionarily important type of conditioning is conditioning related to food. In their important research on food conditioning, John Garcia and his colleagues attempted to condition rats by presenting either a taste, a sight, or a sound as a neutral stimulus before the rats were given drugs (the US) that made them nauseous. Garcia discovered that taste conditioning was extremely powerful—the rats learned to avoid the taste associated with illness, even if the illness occurred several hours later. But conditioning the behavioral response of nausea to a sight or a sound was much more difficult. These results contradicted the idea that conditioning occurs entirely as a result of environmental events, such that it would occur equally for any kind of unconditioned stimulus that followed any kind of conditioned stimulus. Rather, Garcia’s research showed that genetics matters—organisms are evolutionarily prepared to learn some associations more easily than others. You can see that the ability to associate tastes with illness is an important survival mechanism, allowing the organism to quickly learn to avoid foods that are poisonous.
Classical conditioning has also been used to help explain the experience of posttraumatic stress disorder (PTSD), as in the case of P. K. Philips described at the beginning of this module. PTSD is a severe anxiety disorder that can develop after exposure to a fearful event, such as the threat of death.  PTSD occurs when the individual develops a strong association between the situational factors that surrounded the traumatic event (e.g., military uniforms or the sounds or smells of war) and the US (the fearful trauma itself). As a result of the conditioning, being exposed to, or even thinking about the situation in which the trauma occurred (the CS), becomes sufficient to produce the CR of severe anxiety. 
PTSD develops because the emotions experienced during the event have produced neural activity in the amygdala and created strong conditioned learning. In addition to the strong conditioning that people with PTSD experience, they also show slower extinction in classical conditioning tasks.  In short, people with PTSD have developed very strong associations with the events surrounding the trauma and are also slow to show extinction to the conditioned stimulus.
Instructions: John Garcia’s experiment, described in the text above, was based on the idea that it is easy to condition some associations, but others are difficult to condition. He injected a rat with a chemical that made the rat nauseous.
Let’s reconstruct Garcia’s experiment, so we’re sure that its implications are clear. Here is our subject, a rat.
Task #1: Conditioning the rat to a light. Does it work?
Answer each question to fill in the boxes. The emoticon faces represent whether the rat is happy or sick. You will have one emoticon face left over.
Now show what happened when Garcia tested to see if conditioning had been successful after only a few of the training trials above. As before, you will have an emoticon face left over. You will only use one of the stimuli, so pick the right one to see if classical conditioning has occurred.
Task #2: Conditioning the rat to a distinctive taste (e.g., salt). Does it work?
Answer the questions to fill in the learning phase of the experiment. What are the conditioned and unconditioned stimuli, and what is the unconditioned response? You will have one of the emoticon faces left over.
Now show what happened when Garcia tested to see if conditioning had been successful after only a few of the training trials above. As before, you will have an emoticon face left over. You will only use one of the stimuli, so pick the right one to see if classical conditioning has occurred.
Task #3: Conditioning the rat to a tone. Does it work?
Answer the questions to put the pictures in the appropriate locations to show the learning phase of the experiment. What are the conditioned and unconditioned stimuli, and what is the unconditioned response? You will have one of the emoticon faces left over.
Now show what happened when Garcia tested to see if conditioning had been successful after only a few of the training trials above. As before, you will have an emoticon face left over. You will only use one of the stimuli, so pick the right one to see if classical conditioning has occurred.
In this module, you will learn about a different kind of conditioning called operant conditioning. First, remember how classical conditioning works. In classical conditioning, the individual learns to associate new stimuli with natural, biological responses, such as salivation or fear. The organism does not learn a new behavior but rather learns to perform an existing behavior in the presence of a new signal. For example, remember how Pavlov’s dogs learned to salivate when a tone was sounded. This learning occurred because the dog learned to associate a new stimulus (e.g., the tone) with an existing stimulus (e.g., the meat), so now the tone produced the same response (the CR: salivation) originally produced only by the meat.
Operant conditioning, on the other hand, is learning that occurs on the basis of the consequences of behavior and can involve the learning of new behaviors. For operant conditioning, the process starts with doing something—behavior—and then noticing the consequences of that behavior. For example, operant conditioning occurs when a dog rolls over on command because it has been praised for doing so in the past. We go into the details of how this learning happens later in this module, but the important point is that an animal that never rolled over on command can learn this new behavior because it notices that its actions lead to rewards—treats or praise.
In summary, classical conditioning is a process in which an individual learns a new cue for an existing behavior by associating the new cue (the CS) with the existing cue (US). Operant conditioning is the process of learning a new behavior by noticing the consequences of that behavior.
Psychologist Edward L. Thorndike (1874–1949) was the first scientist to systematically study operant conditioning. In his research Thorndike  observed cats who had been placed in a “puzzle box” from which they tried to escape. At first the cats scratched, bit, and swatted haphazardly, without any idea how to get out. But eventually, and accidentally, they pressed the lever that opened the door and exited to their prize, a scrap of fish. The next time the cat was constrained within the box, it attempted fewer of the ineffective responses before carrying out the successful escape, and after several trials the cat learned to almost immediately make the correct response.
Observing these changes in the cats’ behavior led Thorndike to develop his law of effect, the principle that responses that create a typically pleasant outcome in a particular situation are more likely to occur again in a similar situation, whereas responses that produce a typically unpleasant outcome are less likely to occur again in the situation.  The essence of the law of effect is that successful responses, because they are pleasurable, are “stamped in” by experience and thus occur more frequently. Unsuccessful responses, which produce unpleasant experiences, are “stamped out” and subsequently occur less frequently.
The influential behavioral psychologist B. F. Skinner (1904–1990) expanded on Thorndike’s ideas to develop a more complete set of principles to explain operant conditioning. Skinner created specially designed environments known as operant chambers (usually called Skinner boxes) to systematically study learning. A Skinner box (operant chamber) is a structure that is big enough to fit a rodent or bird and that contains a bar or key that the organism can press or peck to release food or water. It also contains a device to record the animal’s responses.
The most basic of Skinner’s experiments was quite similar to Thorndike’s research with cats. A rat placed in the chamber reacted as one might expect, scurrying about the box and sniffing and clawing at the floor and walls. Eventually the rat chanced upon a lever, which it pressed to release pellets of food. The next time around, the rat took a little less time to press the lever, and on successive trials, the time it took to press the lever became shorter and shorter. Soon the rat was pressing the lever as fast as it could eat the food that appeared. As predicted by the law of effect, the rat had learned to repeat the action that brought about the food and cease the actions that did not.
Skinner studied, in detail, how animals changed their behavior through reinforcement and punishment, and he developed terms that explained the processes of operant learning, as shown in the table below. Skinner used the term reinforcer to refer to any event that strengthens or increases the likelihood of a behavior and the term punisher to refer to any event that weakens or decreases the likelihood of a behavior. And he used the terms positive and negative to refer to whether a stimulus was presented or removed, respectively.
|How Positive and Negative Reinforcement and Punishment Influence Behavior|
|Term|Stimulus change|Effect on behavior|Example|
|Positive reinforcement|Something pleasant is presented|Behavior increases|Verbal praise after a desired behavior|
|Negative reinforcement|Something unpleasant is removed|Behavior increases|A cool breeze removing hot air on a hot day|
|Positive punishment|Something unpleasant is presented|Behavior decreases|Being yelled at after fighting with a sibling|
|Negative punishment|Something pleasant is removed|Behavior decreases|Losing the opportunity to go to recess after a poor grade|
Reinforcement, either positive or negative, works by increasing the likelihood of a behavior. Punishment, on the other hand, refers to any event that weakens or reduces the likelihood of a behavior. Positive punishment weakens a response by presenting something unpleasant after the response, whereas negative punishment weakens a response by reducing or removing something pleasant. A child who is yelled at after fighting with a sibling (positive punishment) or who loses out on the opportunity to go to recess after getting a poor grade (negative punishment) is less likely to repeat these behaviors.
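The four terms just described are defined by two questions: is a stimulus presented or removed, and does the behavior increase or decrease? As an illustrative restatement only, that two-by-two can be captured in a small lookup; the dictionary keys are our own labels, and the examples come from the text.

```python
# The four operant conditioning terms as a lookup table, keyed by whether a
# stimulus is presented or removed and whether the behavior increases or
# decreases. Purely an illustrative restatement of the text.

OPERANT_TERMS = {
    ("present", "increase"): "positive reinforcement",  # e.g., verbal praise
    ("remove", "increase"): "negative reinforcement",   # e.g., a breeze removing hot air
    ("present", "decrease"): "positive punishment",     # e.g., being yelled at
    ("remove", "decrease"): "negative punishment",      # e.g., losing recess
}

def classify(stimulus_change, behavior_effect):
    """Name the operant process for a given stimulus change and behavioral effect."""
    return OPERANT_TERMS[(stimulus_change, behavior_effect)]
```

For example, `classify("remove", "decrease")` returns "negative punishment", matching the recess example above.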
For each example below, select which terms best describe it.
Although the distinction between reinforcement (which increases behavior) and punishment (which decreases it) is usually clear, in some cases it is difficult to determine whether a reinforcer is positive or negative. On a hot day a cool breeze could be seen as a positive reinforcer (because it brings in cool air) or a negative reinforcer (because it removes hot air). In other cases, reinforcement can be both positive and negative. One may smoke a cigarette both because it brings pleasure (positive reinforcement) and because it eliminates the craving for nicotine (negative reinforcement).
It is important to note that reinforcement and punishment are not simply opposites. The use of positive reinforcement in changing behavior is almost always more effective than using punishment. This is because positive reinforcement makes the person or animal feel better, helping create a positive relationship with the person providing the reinforcement. Types of positive reinforcement that are effective in everyday life include verbal praise or approval, the awarding of status or prestige, and direct financial payment. Punishment, on the other hand, is more likely to create only temporary changes in behavior because it is based on coercion and typically creates a negative and adversarial relationship with the person providing the reinforcement. When the person who provides the punishment leaves the situation, the unwanted behavior is likely to return.
Perhaps you remember watching a movie or being at a show in which an animal—maybe a dog, a horse, or a dolphin—did some pretty amazing things. The trainer gave a command and the dolphin swam to the bottom of the pool, picked up a ring on its nose, jumped out of the water through a hoop in the air, dived again to the bottom of the pool, picked up another ring, and then took both of the rings to the trainer at the edge of the pool. The animal was trained to do the trick, and the principles of operant conditioning were used to train it. But these complex behaviors are a far cry from the simple stimulus-response relationships that we have considered thus far. How can reinforcement be used to create complex behaviors such as these?
One way to expand the use of operant learning is to modify the schedule on which the reinforcement is applied. To this point we have only discussed a continuous reinforcement schedule, in which the desired response is reinforced every time it occurs; whenever the dog sits, for instance, it gets a biscuit. This type of reinforcement schedule can be depicted as follows.
Continuous reinforcement results in relatively fast learning but also rapid extinction of the desired behavior once the reinforcer disappears. The problem is that because the organism is used to receiving the reinforcement after every behavior, the responder may give up quickly when it doesn’t appear.
Most real-world reinforcers are not continuous; they occur on a partial (or intermittent) reinforcement schedule—a schedule in which the responses are sometimes reinforced, and sometimes not. In comparison to continuous reinforcement, partial reinforcement schedules lead to slower initial learning, but they also lead to greater resistance to extinction. Because the reinforcement does not appear after every behavior, it takes longer for the learner to determine that the reward is no longer coming, and thus extinction is slower.
The four types of partial reinforcement schedules are summarized in the following table.
|Four Types of Partial Reinforcement Schedules|
|Schedule|Rule for reinforcement|Example|
|Fixed-ratio|After a fixed number of responses|A salesperson receives a bonus after selling 10 products|
|Variable-ratio|After an unpredictable number of responses that varies around an average|A slot machine pays off, on average, every 20 pulls|
|Fixed-interval|For the first response after a fixed amount of time has passed|Reinforcement arrives every minute, provided the behavior occurs|
|Variable-interval|For the first response after an unpredictable amount of time that varies around an average|E-mail messages arrive, on average, every 30 minutes|
Partial reinforcement schedules are determined by whether the reinforcement is presented on the basis of the time that elapses between reinforcement (interval) or on the basis of the number of responses that the organism engages in (ratio), and by whether the reinforcement occurs on a regular (fixed) or unpredictable (variable) schedule.
In a fixed-interval schedule, reinforcement occurs for the first response made after a specific amount of time has passed. For instance, on a one-minute fixed-interval schedule the animal receives a reinforcement every minute, assuming it engages in the behavior at least once during the minute.
In a variable-interval schedule, the reinforcers appear on an interval schedule, but the timing is varied around the average interval, making the actual appearance of the reinforcer unpredictable. An example might be checking your e-mail: You are reinforced by receiving messages that come, on average, say every 30 minutes, but the reinforcement occurs only at random times. Interval reinforcement schedules tend to produce slow and steady rates of responding.
In a fixed-ratio schedule, a behavior is reinforced after a specific number of responses. For instance, a rat’s behavior may be reinforced after it has pressed a key 20 times, or a salesperson may receive a bonus after she has sold 10 products. A variable-ratio schedule provides reinforcers after an unpredictable number of responses that varies around a specified average. Winning money from slot machines or on a lottery ticket are examples of reinforcement that occur on a variable-ratio schedule. For instance, a slot machine may be programmed to provide a win every 20 times the user pulls the handle, on average.
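For readers who find rules easier to follow as code, the four schedule rules can be sketched as simple functions. This is only an illustration: the function names and default values are our own assumptions, and only the schedule names themselves come from the text.

```python
import random

# Illustrative sketch of the four partial reinforcement schedules.
# Function names and defaults are our own choices, not standard terminology.

def fixed_ratio(response_count, ratio=20):
    """Reinforce every `ratio`-th response (the 20th, 40th, ...)."""
    return response_count > 0 and response_count % ratio == 0

def variable_ratio(ratio=20):
    """Reinforce with probability 1/ratio on each response, so the gap
    between reinforcers averages `ratio` responses but is unpredictable
    (like a slot machine)."""
    return random.random() < 1.0 / ratio

def fixed_interval(seconds_since_last_reinforcer, interval=60):
    """Reinforce the first response made after `interval` seconds."""
    return seconds_since_last_reinforcer >= interval

def variable_interval_wait(avg_interval=60):
    """Sample the unpredictable delay before the next reinforcer becomes
    available; the delays average `avg_interval` seconds (like e-mail
    arriving at random times)."""
    return random.uniform(0, 2 * avg_interval)
```

The ratio schedules depend only on how many responses the organism makes, while the interval schedules depend on how much time has passed, which is why ratio schedules tend to produce faster responding.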
Complex behaviors are also created through shaping, the process of guiding an organism’s behavior to the desired outcome through the use of successive approximation to a final desired behavior. Skinner made extensive use of this procedure in his boxes as shown in the video below. For instance, he could train a rat to press a bar two times to receive food, by first providing food when the animal moved near the bar. Then when that behavior had been learned he would begin to provide food only when the rat touched the bar. Further shaping limited the reinforcement to only when the rat pressed the bar, to when it pressed the bar and touched it a second time, and finally, to only when it pressed the bar twice. Although it can take a long time, in this way operant conditioning can create chains of behaviors that are reinforced only when they are completed.
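The shaping procedure just described, reinforcing successive approximations and tightening the criterion as each one is mastered, can be sketched as a simple loop. This is an illustrative toy model of the bar-pressing example; the criteria list and the success threshold are our own assumptions, and a real trainer would of course also adapt or time out.

```python
# Sketch of shaping as successive approximation: reinforce behavior that
# meets the current criterion, and move to a stricter criterion once the
# current one is met reliably. Criteria follow the bar-pressing example
# in the text; the threshold is an arbitrary illustrative choice.

CRITERIA = [
    "moves near the bar",
    "touches the bar",
    "presses the bar once",
    "presses the bar twice",
]

def shape(observe_behavior, reinforce, required_successes=5):
    """Walk through the criteria, advancing only after the animal meets
    the current one `required_successes` times in a row."""
    for criterion in CRITERIA:
        streak = 0
        while streak < required_successes:
            if observe_behavior(criterion):  # did the animal meet the criterion?
                reinforce()
                streak += 1
            else:
                streak = 0  # only the current approximation is reinforced
    return "final behavior learned: " + CRITERIA[-1]
```

The key design point is that earlier approximations stop being reinforced once the criterion tightens, which is what pushes the behavior toward the final chain.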
Reinforcing animals if they correctly discriminate between similar stimuli allows scientists to test the animals’ ability to learn, and the discriminations that they can make are sometimes quite remarkable. Pigeons have been trained to distinguish between images of Charlie Brown and the other Peanuts characters,  and between different styles of music and art.  
Behaviors can also be trained through the use of secondary reinforcers. Whereas a primary reinforcer includes stimuli that are naturally preferred or enjoyed by the organism, such as food, water, and relief from pain, a secondary reinforcer (sometimes called conditioned reinforcer) is a neutral event that has become associated with a primary reinforcer through classical conditioning. An example of a secondary reinforcer would be the whistle given by an animal trainer, which has been associated over time with the primary reinforcer, food. An example of an everyday secondary reinforcer is money. We enjoy having money, not so much for the stimulus itself, but rather for the primary reinforcers (the things that money can buy) with which it is associated.
You want to teach your dog to turn on the light in your living room when you command him to. Using shaping, put in order the behaviors you would reward. Remember: in shaping, you are rewarding successive approximations. Start out rewarding the most general behavior, and as the learning progresses, start rewarding behaviors that look more and more like the desired behavior.
Ivan Pavlov, John Watson, and B. F. Skinner were scientists who believed that all learning could be explained by the processes of conditioning. Two critical features of their ideas about conditioning are useful to keep in mind as you study this section. First, they thought of learning as being the same thing as behavior change. Don’t confuse this with the idea that you first figure something out mentally and then change your behavior. Second, they believed that learning (i.e., behavior change) occurs only when the individual directly and personally experiences the impact of some reward or punishment.
In this section, you will encounter some kinds of learning that are difficult to explain using the ideas that learning occurs only with behavior change and that personal experience is required for learning. Although classical and operant conditioning play key roles in learning, they constitute only a part of the total picture.
In the preceding module, where you learned about operant conditioning, you read about Edward Thorndike’s work with trial-and-error learning. This kind of learning was described in the video that showed how a cat was able to learn to escape from a puzzle box. Trial-and-error leads to learning, according to Thorndike, because of the law of effect: individuals notice the consequences of their actions. They repeat actions that lead to desirable outcomes and avoid those that lead to undesirable results. Trial-and-error learning is the basis of operant conditioning.
One type of learning that is not determined by classical conditioning (learned associations) or operant conditioning (based on trial-and-error) occurs when we suddenly find the solution to a problem, as if the idea just popped into our head. This type of learning is known as insight, the sudden understanding of a solution to a problem. The German psychologist Wolfgang Köhler carefully observed what happened when he presented chimpanzees with a problem that was not easy for them to solve, such as placing food in an area that was too high in the cage to be reached. He found that the chimps first engaged in trial-and-error attempts at solving the problem, but when these failed they seemed to stop and contemplate for a while. Then, after this period of contemplation, they would suddenly seem to know how to solve the problem, for instance by using a stick to knock the food down or by standing on a chair to reach it. Köhler argued that it was this flash of insight, rather than the prior trial-and-error attempts so central to conditioning theories, that allowed the animals to solve the problem.
Edward Tolman  was studying traditional trial-and-error learning when he realized that some of his research subjects (rats) actually knew more than their behavior initially indicated. In one of Tolman’s classic experiments, he observed the behavior of three groups of hungry rats that were learning to navigate mazes.
The first group always received a food reward at the end of the maze, so the payoff for learning the maze was real and immediate. The second group never received any food reward, so there was no incentive to learn to navigate the maze effectively. The third group was like the second group for the first 10 days, but on the 11th day, food was now placed at the end of the maze.
As you might expect when considering the principles of conditioning, the rats in the first group quickly learned to negotiate the maze, while the rats of the second group seemed to wander aimlessly through it. The rats in the third group, however, although they wandered aimlessly for the first 10 days, quickly learned to navigate to the end of the maze as soon as they received food on day 11. By the next day, the rats in the third group had caught up in their learning to the rats that had been rewarded from the beginning. It was clear to Tolman that the rats that had been allowed to experience the maze, even without any reinforcement, had nevertheless learned something, and Tolman called this latent learning. Latent learning refers to learning that is not reinforced and not demonstrated until there is motivation to do so. Tolman argued that the rats had formed a “cognitive map” of the maze but did not demonstrate this knowledge until they received reinforcement.
In the Learn by Doing exercise below, go through Tolman’s experiment with his three groups of rats. Keep in mind that Tolman, as a good scientist, was testing an idea that was controversial at the time: the idea that we can learn something without our behavior immediately revealing that we have learned it. It is the delay between the learning and the revealing behavior that is the basis for the name: latent (or “hidden”) learning.
Your task here is to predict what is going to happen on Trial 12 for the “no food until Trial 11” group.
Option A: Notice that this result is the same as the “no food on any trial” group. So, if you choose option A, you think that they will not act differently now than they acted on the first 11 trials and they will continue to make a lot of wrong turns.
Option B: This option suggests that they are now motivated to learn the path to the food, but that they will do so in small steps, just as we have seen for all three groups up to this point. Option B says that they are moving in the direction of the “food on every trial” group, but that it will take some extra learning to get there.
Option C: This option says that they already know the path to the food and, now that they are motivated to get there, they will show that they already know just as much as the “food on every trial” group. Their performance on Trial 12 will be the same as the low-error performance of the “food on every trial” group.
Tolman’s studies of latent learning show that animals, and people, can learn during unrewarded experience, but this learning only shows itself when rewards or punishment provide motivation to use that knowledge. However, Tolman’s rats at least had the opportunity to wander through the maze, and the rats in the critical condition did this for 10 days before food provided them with motivation to find an efficient path through the maze. So we could still hold onto the theory that learning only takes place if you actually do something, even if it is unrewarded. The direct connection between learning and behavior—even if the behavior seems aimless—has not been disproved. But in the 1960s, Albert Bandura conducted a series of experiments showing that learning can and does occur even when the learner is merely a passive spectator. Learning can occur when someone else is doing all the behaving and receiving all the rewards and punishments.
One of the best known and most influential experiments in the history of psychology involved some adults, some children, and a big inflatable doll called a Bobo doll. Bandura and his colleagues  allowed children to watch an adult—a man or a woman in different conditions of the study—“playing” with a Bobo doll, an inflatable balloon with a weight in the bottom that makes it pop back up when you knock it down. Bandura wanted to know if watching the adults would influence the way the children behaved.
The motivation for Bandura’s study was not that of solving some abstract scientific question, but a real debate that continues to this day. Many people felt that children who were “raised properly” would not be influenced very strongly by seeing someone—an unfamiliar adult, for instance—behave in a mean or hostile way. The children might be upset by what they saw, but surely they would not imitate poor behavior. As you learned when you studied Tolman’s rats, many learning specialists believed that learning (i.e., behavior based on experience) occurs only for the individual who is actually doing the behavior. This theory supported the belief that the children would not learn by watching, because they would not be doing anything and they would not receive rewards or punishment. Bandura—along with many parents and some other psychologists—suspected that learning might occur merely by watching the actions of others and the consequences of those actions.
Watch the following video as Dr. Bandura explains his study to you.
In the following activity, you will go through one of Bandura’s classic studies.
The Design of the Experiment
Bandura studied the impact of an adult’s behavior on the behavior of children who saw them. One of his independent variables was whether the adult was hostile or aggressive toward the Bobo doll: for some children the adults acted aggressively (treatment condition), for others they did not (control condition 1), and for yet other children there were no adults at all (control condition 2). He was also interested in whether the sex of the child and/or the sex of the adult model influenced what the child learned.
To give you a good view of how the experiment was organized or “designed,” the first thing you will do is put all the individuals involved—the adult models and the children—into the correct places in the study.
Instructions: The boxes show the labels for the three different modeling conditions: aggressive behavior, non-aggressive behavior (control condition 1), and no model (control condition 2). Organize the study by putting the adult models and children in the proper boxes. Be sure you distribute the children so that the same number of boys and girls are in all the conditions with a model. Put the rest of the children in the No Model boxes.
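The balanced layout the instructions describe can be sketched in code. The snippet below is an illustrative aside, not part of the course activity; the numbers follow Bandura’s original design (72 children, 36 boys and 36 girls, 24 per condition), and the names are hypothetical placeholders.

```python
import random

def balanced_assignment(boys, girls, conditions):
    """Randomly split equal numbers of boys and girls into each condition."""
    boys, girls = boys[:], girls[:]          # copy so the originals are untouched
    random.shuffle(boys)
    random.shuffle(girls)
    per = len(boys) // len(conditions)       # assumes the numbers divide evenly
    groups = {}
    for i, cond in enumerate(conditions):
        groups[cond] = boys[i * per:(i + 1) * per] + girls[i * per:(i + 1) * per]
    return groups

conditions = ["aggressive model", "non-aggressive model", "no model"]
boys = [f"boy{i}" for i in range(36)]
girls = [f"girl{i}" for i in range(36)]
groups = balanced_assignment(boys, girls, conditions)
for cond, children in groups.items():
    print(cond, len(children))   # 24 children per condition: 12 boys, 12 girls
```

Shuffling before splitting is what makes this random assignment rather than mere sorting, which is what lets the experimenter attribute differences between groups to the modeling condition.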
Phase 1 of the Experiment: The Observation Phase
The observation phase of the experiment is when the children see the behavior of the adults. Each child was shown into a room where an adult was already sitting near the Bobo doll. The child was positioned so he or she could easily see the adult.
Instructions: There are three people involved in the first phase of the experiment: an adult model, a child subject or participant, and an experimenter. Demonstrate your understanding of the first step of the experiment by moving each of the three characters (adult, experimenter, and child) to the correct location in the Experimentation Room depicted below.
Phase 2 of the Experiment: Frustration
Dr. Bandura thought that the children might be a bit more likely to show aggressive behavior if they were frustrated. The second phase of the experiment was designed to produce this frustration. After a child had watched the adult in phase 1, he or she was taken to another room, one that also contained a lot of attractive, fun toys, and was told that it was fine to play with the toys. As soon as the child started to enjoy playing with the toys, the experimenter said something.
Phase 3 of the Experiment: The Testing Phase
After the child was told to stop playing with “the very best toys,” the experimenter said that he or she could play with any of the toys in the next room. Then the child was taken to a third room. This room contained a variety of toys. Many of the toys were engaging and interactive, but not the type that encouraged aggressive play. Critically, the Bobo doll and the hammer that the model had used in the first phase were now in this new play room. The goal of this phase in the experiment was to see how the child would react without a model around.
Instructions: The figure below represents the third room. Three individuals from the study are indicated by the boxes below the diagram. Drag each of them to the proper locations to indicate your understanding of the experimental procedure. One of the three individuals does not appear in phase 3 of the study, so put this individual in the black box that says, “Not in this phase.”
The child was allowed to play freely for 20 minutes. Note that an adult did stay in the room so the child would not feel abandoned or frightened. However, this adult worked inconspicuously in a corner and interacted with the child as little as possible.
During the 20 minutes that the child played alone in the third room, the experimenters observed his or her behavior from behind a one-way mirror. Using a complex system that we won’t go into here, the experimenters counted the number of various types of behaviors that the child showed during this period. These behaviors included ones directed at the Bobo doll, as well as those involving any of the other toys. They were particularly interested in the number of behaviors the child showed that clearly imitated the actions of the adults that the child had observed earlier, in phase 1.
Below are the results for the number of imitative physically aggressive acts the children showed on average toward the Bobo doll. These acts included hitting and punching the Bobo doll. On the left, you see the two modeling conditions: aggression by the model in phase 1 or no aggression by the model in phase 1. Note: Children in the no-model conditions showed very few physically aggressive acts and their results do not change the interpretation, so we will keep the results simple by leaving them out of the table.
The story is slightly, though not completely, different when we look at imitative verbal aggression rather than physical aggression. The table below shows the number of verbally aggressive statements by the boys and girls under different conditions in the experiment. Verbally aggressive statements were ones like the models had made: for example, “Sock him” and “Kick him down!”
Note: Just as was true for the physically aggressive acts, children in the no-model conditions showed very few verbally aggressive acts, and their results do not change the interpretation, so we will keep the results simple by leaving them out of the table.
Bandura and his colleagues did other studies in which they had children observe adults on television performing the acts rather than watching them in person. The effects of aggressive modeling were weaker when the adult was not physically present, but the same general pattern of results was found with television models. In yet another variation, Bandura had the children watch cartoon models rather than real adults. These results were even weaker than the television adult model results, but the pattern was still there and the aggressive modeling effect was still statistically significant. Children even imitated the aggressive behavior of cartoon characters.
Observational learning is useful for animals and for people because it allows us to learn without having to actually engage in what might be a risky behavior. Monkeys that see other monkeys respond with fear to the sight of a snake learn to fear the snake themselves, even if they have been raised in a laboratory and have never actually seen a snake. As Bandura put it, “the prospects for [human] survival would be slim indeed if one could learn only by suffering the consequences of trial and error. For this reason, one does not teach children to swim, adolescents to drive automobiles, and novice medical students to perform surgery by having them discover the appropriate behavior through the consequences of their successes and failures. The more costly and hazardous the possible mistakes, the heavier is the reliance on observational learning from competent learners.”
Although modeling is normally adaptive, it can be problematic for children who grow up in violent families. These children are not only the victims of aggression, but they also see it happening to their parents and siblings. Because children learn how to be parents in large part by modeling the actions of their own parents, it is no surprise that there is a strong correlation between family violence in childhood and violence as an adult. Children who witness their parents being violent or who are themselves abused are more likely as adults to inflict abuse on intimate partners or their children, and to be victims of intimate violence.  In turn, their children are more likely to interact violently with each other and to aggress against their parents. 
The average American child watches more than 4 hours of television every day, and two out of three programs they watch contain aggression. It has been estimated that by the age of 12, the average American child has seen more than 8,000 murders and 100,000 acts of violence. At the same time, children are also exposed to violence in movies, video games, and virtual reality games, as well as in music videos that include violent lyrics and imagery.   
It is clear that watching television violence can increase aggression, but what about violent video games? These games are more popular than ever and also more graphically violent. Youths spend countless hours playing these games, many of which involve engaging in extremely violent behaviors. The games often require the player to take the role of a violent person, to identify with the character, to select victims, and of course to kill the victims. These behaviors are reinforced by winning points and moving on to higher levels, and are repeated over and over.
In one experiment, Bushman and Anderson  assessed the effects of viewing violent video games on aggressive thoughts and behavior. Participants were randomly assigned to play either a violent or a nonviolent video game for 20 minutes. Each participant played one of four violent video games or one of four nonviolent video games.
Participants then read a story, such as this one about Todd, and were asked to list 20 thoughts, feelings, and actions about how they would respond if they were Todd:
Todd was on his way home from work one evening when he had to brake quickly for a yellow light. The person in the car behind him must have thought Todd was going to run the light because he crashed into the back of Todd’s car, causing a lot of damage to both vehicles. Fortunately, there were no injuries. Todd got out of his car and surveyed the damage. He then walked over to the other car.
Now it is your task to predict what will happen.
As you read in the text, Bushman and Anderson asked the participants what they would do, what they would be thinking, and how they would feel if they were in Todd’s position. The graph above is blank, so your task is to put the correct results into it. Note that the green bars show the results for people who had just played a nonviolent video game and the red bars are for people who just played a violent video game. The Y-axis shows how aggressive the response is, so a taller bar means MORE aggressive and a shorter bar means less aggressive.
It might not surprise you to hear that these exposures to violence have an effect on aggressive behavior. The evidence is impressive and clear: The more media violence people, including children, view, the more aggressive they are likely to be.   The relationship between viewing television violence and aggressive behavior is about as strong as the relation between smoking and cancer or between studying and academic grades. People who watch more violence become more aggressive than those who watch less violence.
As you have just read, playing violent video games also leads to aggressive responses. A recent meta-analysis by Anderson and Bushman  reviewed 35 research studies that had tested the effects of playing violent video games on aggression. The studies included both experimental and correlational studies, with both male and female participants in both laboratory and field settings. They found that exposure to violent video games is significantly linked to increases in aggressive thoughts, aggressive feelings, psychological arousal (including blood pressure and heart rate), as well as aggressive behavior. Furthermore, playing more video games was found to relate to less altruistic behavior.
For some people, memory is truly amazing. Consider, for instance, the case of Kim Peek, who was the inspiration for the Academy Award–winning film Rain Man.
There are others who are capable of amazing feats of memory. The Russian psychologist A. R. Luria  has described the abilities of a man known as “S,” who seems to have unlimited memory. S remembers strings of hundreds of random letters for years at a time, and seems in fact to never forget anything. As you watch the following video, you’ll notice that at the beginning, Kim is referred to as an idiot savant. Idiot is an old term that was used to describe profound mental retardation. Now we just say “profound mental retardation.” Kim and people like him are called savants.
The subject of this unit is memory. The term memory refers to our capacity to acquire, store, and retrieve the information and habits that guide our behavior. This capacity is largely modulated by associative learning mechanisms like those discussed in the unit on learning. Our memories allow us to do relatively simple things, such as remembering where we parked our car or the name of the current president of the United States. We can also form complex memories, such as how to ride a bicycle or write a computer program. Moreover, our memories define us as individuals—memories are the records of our experiences, our relationships, our successes, and our failures. Perhaps the coolest aspect of memory is that it provides us with the means to use mental time travel to access a lifetime of experiences and learning.
This unit is about human memory, but in our culture we commonly hear the term memory used in conjunction with descriptions of computers. Computer memory and human memory have some distinct differences and some similarities. Let's take a look at some of these.
Differences between Brains and Computers
Although we depend on computers in multiple aspects of our lives, and computers eclipse human processing capacity in terms of speed and volume, for at least some tasks human memory is far superior to a computer’s. Once we learn a face, we can recognize that face many years later—a task which computers have yet to master. Impressively, our memories can be acquired rapidly and retained indefinitely. Mitchell contacted participants 17 years after they had been briefly exposed to some line drawings in a lab and found that they still could identify the images significantly better than participants who had never seen them.
In this unit we learn how psychologists use behavioral responses (such as memory tests and reaction time) to draw inferences about what and how people remember. And we will see that although we have very good memory for some things, our memories are far from perfect.  The errors we make are due to the fact that our memories are not simply recording devices that input, store, and retrieve the world around us. Rather, we actively process and interpret information as we remember and recollect it, and these cognitive processes influence what we remember and how we remember it. Because memories are constructed, not recorded, when we remember events we don’t reproduce exact replicas of those events. 
We also learn that our prior knowledge can influence our memory. People who read the words dream, sheets, rest, snore, blanket, tired, and bed and are then asked to remember the words often think they saw the word sleep even though that word was not in the list.  In other circumstances, we are influenced by the ease with which we can retrieve information from memory or by the information that we are exposed to after we first learn something.
Basic memory research has revealed profound inaccuracies in our memories and judgments. Understanding these potential errors is the first step in learning to account for them in our everyday lives.
|Types of Memory|
Explicit memory refers to knowledge or experiences that can be consciously and intentionally remembered. For instance, recalling when you have a dentist appointment or what you wore to senior prom relies on explicit memory. As you can see in the figure below, there are two types of explicit memory: episodic and semantic. Episodic memory refers to the firsthand experiences, or episodes, that we have on a daily basis (e.g., recollections of our high school graduation day or of the fantastic show we saw in New York last summer). Semantic memory refers to our knowledge of facts and concepts about the world (e.g., that the absolute value of −90 is greater than the absolute value of 9 and that one definition of the word affect is “the experience of feeling or emotion”).
Memory is assessed using measures that require an individual to consciously retrieve information. A recall test is a measure of explicit memory that involves retrieving information that has been previously learned, and it requires us to use a search strategy to perform that retrieval. We rely on our recall memory when we take an essay test, because the test requires us to generate previously remembered information. A multiple-choice test is an example of a recognition memory test, a measure of memory that involves determining whether information has been seen or learned before.
Read about and view the following historical event, and then respond to the questions below in terms of whether the information/memory in question is semantic or episodic.
On April 29, 2011, England’s Prince William married his longtime girlfriend, Kate Middleton, in Westminster Abbey. The Dean of Westminster, the Very Reverend Dr. John Hall, expressed his delight at the couple’s announcement of their choice of Westminster as the place to hold the ceremony. In attendance were various political and religious leaders as well as a number of celebrities such as Elton John and David Beckham, although most of the guests were friends and family of the couple.
Watch these highlights of the ceremony.
Your own experiences taking tests will probably lead you to agree with the scientific research finding that recall is more difficult than recognition. Recall, such as required on essay tests, involves two steps: first generating an answer and then determining whether it seems to be the correct one. Recognition, as on multiple-choice tests, only involves determining which item from a list seems most correct.  Although they involve different processes, recall and recognition memory measures tend to be correlated. Students who do better on a multiple-choice exam will also, by and large, do better on an essay exam. 
A third way of measuring memory is known as relearning.  Measures of relearning (or savings) assess how much more quickly information is processed or learned when it is studied again after it has already been learned but then forgotten. If you have taken some French courses in the past, for instance, you might have forgotten most of the vocabulary you learned. But if you were to work on your French again, you’d learn the vocabulary much faster the second time around. Relearning can be a more sensitive measure of memory than either recall or recognition because it allows assessing memory in terms of “how much” or “how fast” rather than simply “correct” versus “incorrect” responses. Relearning also allows us to measure memory for procedures like driving a car or playing a piano piece, as well as memory for facts and figures.
While explicit memory consists of the things that we can consciously report that we know, implicit memory refers to knowledge that we cannot consciously access. However, implicit memory is nevertheless exceedingly important to us because it has a direct effect on our behavior. Implicit memory refers to the influence of experience on behavior, even if the individual is not aware of those influences. As you can see in the figure below, there are three general types of implicit memory: procedural memory, classical conditioning effects, and priming.
Procedural memory refers to our often unexplainable knowledge of how to do things. When we walk from one place to another, speak to another person in English, dial a cell phone, or play a video game, we are using procedural memory. Procedural memory allows us to perform complex tasks, even though we may not be able to explain to others how we do them. It is difficult to tell someone how to ride a bicycle; a person has to learn by doing it. The idea of implicit memory helps explain how infants are able to learn. The ability to crawl, walk, and talk are procedures, and these skills are easily and efficiently developed while we are children despite the fact that as adults we have no conscious memory of having learned them.
A second type of implicit memory is classical conditioning effects, in which we learn, often without effort or awareness, to associate neutral stimuli (such as a sound or a light) with another stimulus (such as food), which creates a naturally occurring response, such as enjoyment or salivation. The memory for the association is demonstrated when the conditioned stimulus (the sound) begins to create the same response as the unconditioned stimulus (the food) did before the learning.
The final type of implicit memory is known as priming, or changes in behavior as a result of experiences that have happened frequently or recently. Priming refers both to the activation of knowledge (e.g., we can prime the concept of “kindness” by presenting people with words related to kindness) and to the influence of that activation on behavior (people who are primed with the concept of kindness may act more kindly).
One measure of the influence of priming on implicit memory is the word fragment test, in which a person is asked to fill in missing letters to make words. You can try this yourself: First, try to complete the following word fragments, but work on each one for only three or four seconds. Do any words pop into mind quickly?
_ i b _ a _ y
_ h _ s _ _ i _ n
_ o _ k
_ h _ i s _
Now read the following sentence carefully:
“He got his materials from the shelves, checked them out, and then left the building.”
Then try again to make words out of the word fragments.
You might find that it is easier to complete fragments 1 and 3 as “library” and “book,” respectively, after you read the sentence than it was before you read it. However, reading the sentence didn’t really help you to complete fragments 2 and 4 as “physician” and “chaise.” This difference in implicit memory probably occurred because as you read the sentence, the concept of library (and perhaps book) was primed, even though they were never mentioned explicitly. Once a concept is primed it influences our behaviors, for instance, on word fragment tests.
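Mechanically, each blank in a word fragment stands for exactly one unknown letter. As an aside (this code is not part of the course activity), here is a minimal Python sketch of how a fragment can be matched against candidate words:

```python
import re

def matches(fragment: str, word: str) -> bool:
    """Each '_' is an unknown letter; all other letters must line up exactly."""
    pattern = "".join("." if ch == "_" else ch for ch in fragment.split())
    return re.fullmatch(pattern, word) is not None

# The fragments from the text, with candidate completions.
print(matches("_ i b _ a _ y", "library"))        # True
print(matches("_ h _ s _ _ i _ n", "physician"))  # True
print(matches("_ o _ k", "book"))                 # True
print(matches("_ o _ k", "lock"))                 # True: many words fit this one
```

The last line hints at why priming matters: several words fit the fragment equally well, so which completion “pops into mind” reflects which concepts were recently activated, not the fragment alone.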
Our everyday behaviors are influenced by priming in a wide variety of situations. Seeing an advertisement for cigarettes may make us start smoking, seeing the flag of our home country may arouse our patriotism, and seeing a student from a rival school may arouse our competitive spirit. And these influences on our behaviors may occur without our being aware of them.
One of the most important characteristics of implicit memories is that they are frequently formed and used automatically, without much effort or awareness on our part. In one demonstration of the automaticity and influence of priming effects, John Bargh and his colleagues  conducted a study in which they showed college students lists of five scrambled words, each of which they were to make into a sentence. Furthermore, for half of the research participants, the words were related to stereotypes of the elderly. These participants saw words such as the following:
in Florida retired live people
bingo man the forgetful plays
The other half of the research participants also made sentences, but from words that had nothing to do with elderly stereotypes. The purpose of this task was to prime stereotypes of elderly people in memory for some of the participants but not for others.
The experimenters then assessed whether the priming of elderly stereotypes would have any effect on the students’ behavior—and indeed it did. When the research participant had gathered all of his or her belongings, thinking that the experiment was over, the experimenter thanked him or her for participating and gave directions to the closest elevator. Then, without the participants knowing it, the experimenters recorded the amount of time that the participant spent walking from the doorway of the experimental room toward the elevator. As you can see in the figure below, participants who had made sentences using words related to elderly stereotypes took on the behaviors of the elderly—they walked significantly more slowly as they left the experimental room.
To determine if these priming effects occurred out of the awareness of the participants, Bargh and his colleagues asked still another group of students to complete the priming task and then to indicate whether they thought the words they had used to make the sentences had any relationship to each other, or could possibly have influenced their behavior in any way. These students had no awareness of the possibility that the words might have been related to the elderly or could have influenced their behavior.
Another way of understanding memory is to think about it in terms of stages that describe the length of time information remains available to us. According to this approach, as shown in the following figure, information begins in sensory memory, moves to short-term memory, and eventually moves to long-term memory. But not all information makes it through all three stages; most of it is forgotten. Whether the information moves from shorter-duration memory into longer-duration memory or whether it is lost from memory entirely depends on how the information is attended to and processed.
Sensory memory refers to the brief storage of sensory information. Sensory memory is a memory buffer that lasts only very briefly and then, unless it is attended to and passed on for more processing, is forgotten. The purpose of sensory memory is to give the brain some time to process the incoming sensations and to allow us to see the world as an unbroken stream of events rather than as individual pieces.
Visual sensory memory is known as iconic memory. Iconic memory was first studied by the psychologist George Sperling.  In his research, Sperling showed participants a display of letters in rows, similar to the figure shown in the following activity. However, the display lasted only about 50 milliseconds (1/20 of a second). Then, Sperling gave his participants a recall test in which they were asked to name all the letters that they could remember. On average, the participants could remember only about one-quarter of the letters that they had seen.
Sperling  showed his participants displays such as this one for only 1/20 of a second. He found that when he cued the participants to report one of the three rows of letters, they could do it, even if the cue was given shortly after the display had been removed. The research demonstrated the existence of iconic memory.
Instructions: You will now be a participant in a similar experiment on the time course of iconic memory. You will see a brief grid of letters and, after the letters are hidden, you will see a green diamond signaling which row of letters to type. Press Start to begin.
Sperling reasoned that the participants had seen all the letters but could remember them only very briefly, making it impossible for them to report them all. To test this idea, in his next experiment he first showed the same letters, but then, after the display had been removed, signaled to the participants to report the letters from either the first, second, or third row. In this condition, the participants now reported almost all the letters in that row. This finding confirmed Sperling’s hunch: participants had access to all of the letters in their iconic memories, and if the cue came soon enough, they could report whichever part of the display he asked for. How soon is “soon enough” is set by the duration of iconic memory, which turns out to be about 250 milliseconds (¼ of a second).
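Sperling’s inference rests on a simple sampling argument: because the cued row is chosen at random, the proportion of letters reported from that row estimates the proportion available from the whole display. The sketch below works through that arithmetic; the averages used are illustrative round numbers, not Sperling’s published figures.

```python
rows, letters_per_row = 3, 4
total_letters = rows * letters_per_row   # a 12-letter display

# Whole report: an output bottleneck of only ~4 items, however large the display.
whole_report = 4.5           # illustrative average letters reported

# Partial report: letters reported from a single randomly cued row of 4.
cued_row_report = 3.3        # illustrative average with an immediate cue

# Because the cue is random, the cued row is a fair sample of the display,
# so the same proportion of every row must have been momentarily available.
estimated_available = cued_row_report * rows
print(round(estimated_available, 1))   # 9.9 of the 12 letters briefly available
```

The gap between the ~4 letters of whole report and the ~10 letters estimated from partial report is the evidence for a large but fast-decaying iconic store.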
Auditory sensory memory is known as echoic memory. In contrast to iconic memories, which decay very rapidly, echoic memories can last as long as 4 seconds.  This is convenient as it allows you—among other things—to remember the words that you said at the beginning of a long sentence when you get to the end of it, and to take notes on your psychology professor’s most recent statement even after he or she has finished saying it.
Instructions: You will now hear a series of sound clips containing white noise. It is hard to use words to describe different types of white noise, which makes it hard to use short-term memory to help find patterns within white noise. Some of the clips contain repeating patterns and others do not. After listening to each clip, decide whether it contained a repeating pattern or not.
In some people iconic memory seems to last longer, a phenomenon known as eidetic imagery (or “photographic memory”) in which people can report details of an image over long periods of time. These people, some of whom have developmental conditions such as autism, claim that they can “see” an image long after it has been presented, and can often report on that image accurately. There is also some evidence for eidetic memories in hearing; some people report that their echoic memories persist for unusually long periods of time. The composer Wolfgang Amadeus Mozart may have possessed eidetic memory for music, because even when he was very young and had not yet had a great deal of musical training, he could listen to long compositions and then play them back almost perfectly.
Most of the information that gets into sensory memory is forgotten, but information that we turn our attention to, with the goal of remembering it, may pass into short-term memory. Short-term memory (STM) is the place where small amounts of information can be temporarily kept for more than a few seconds but usually for less than one minute.  The cognitive psychologist George Miller  referred to “seven plus or minus two” pieces of information as the “magic number” in short-term memory. Information in short-term memory is not stored permanently but rather becomes available for us to process, and the processes that we use to make sense of, modify, interpret, and store information in STM are known as working memory.
Although it is called “memory,” working memory is not a store of memory like STM but rather a set of memory procedures or operations. Imagine, for instance, that you are asked to participate in a task such as this one, which is a measure of working memory.  Each of the following questions appears individually on a computer screen and then disappears after you answer the question:
To successfully accomplish the task, you have to answer each of the math problems correctly and at the same time remember the letter that follows each problem. Then, after the six questions, you must list the letters that appeared in each of the trials in the correct order (in this case S, R, P, T, U, Q).
To accomplish this difficult task, you need to use a variety of skills. You clearly need to use STM, as you must keep the letters in storage until you are asked to list them. But you also need a way to make the best use of your available attention and processing. For instance, you might decide to use a strategy of “repeat the letters twice, then quickly solve the next problem, and then repeat the letters, including the new one, twice again.” Keeping this strategy (or others like it) going is the role of working memory’s central executive—the part of working memory that directs attention and processing. The central executive will make use of whatever strategies seem to be best for the given task. For instance, the central executive will direct the rehearsal process and at the same time direct the visual cortex to form an image of the list of letters in memory. You can see that although STM is involved, the processes that we use to operate on the material in memory are also critical.
STM is limited in both the duration and the amount of information it can hold. Peterson and Peterson found that when people were asked to remember a list of three-letter strings and then were immediately asked to perform a distracting task (counting backward by threes), the material was quickly forgotten, as shown in the figure below, such that by 18 seconds it was virtually gone.
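The rapid loss Peterson and Peterson observed can be pictured as exponential decay. The sketch below is a toy model only: the decay constant `tau` is an illustrative value, not a parameter fitted to their data.

```python
import math

def stm_retention(seconds, tau=6.0):
    """Toy exponential-decay model of STM retention without rehearsal.

    tau is a made-up decay constant (in seconds) chosen for
    illustration, not estimated from Peterson and Peterson's results.
    Returns the modeled proportion of items still recallable.
    """
    return math.exp(-seconds / tau)

for t in (0, 3, 6, 9, 18):
    # By 18 seconds, modeled retention has fallen to about 5%.
    print(f"{t:>2} s: {stm_retention(t):.0%} retained")
```

The qualitative shape, not the exact numbers, is the point: without rehearsal, retention drops steeply within seconds.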
One way to prevent the decay of information from STM is to use working memory to rehearse it. Maintenance rehearsal is the process of repeating information mentally or out loud with the goal of keeping it in memory. We engage in maintenance rehearsal to keep something that we want to remember (e.g., a person’s name, e-mail address, or phone number) in mind long enough to write it down, use it, or potentially transfer it to long-term memory.
If we continue to rehearse information, it will stay in STM until we stop rehearsing it, but there is also a capacity limit to STM. Try reading each of the following rows of numbers, one row at a time, at a rate of about one number each second. Then when you have finished each row, close your eyes and write down as many of the numbers as you can remember.
If you are like the average person, you will have found that on this test of working memory, known as a digit span test, you did pretty well up to about the fourth line, and then you started having trouble. I bet you missed some of the numbers in the last three rows, and did pretty poorly on the last one.
The digit span of most adults is between five and nine digits, with an average of about seven, as noted by Miller. But if we can only hold a maximum of about nine digits in short-term memory, then how can we remember larger amounts of information than this? For instance, how can we ever remember a 10-digit phone number long enough to dial it?
One way we are able to expand our ability to remember things in STM is by using a memory technique called chunking. Chunking is the process of organizing information into smaller groupings (chunks), thereby increasing the number of items that can be held in STM. For instance, try to remember this string of 12 letters:
You probably won’t do that well because the number of letters is more than the magic number of seven.
Now try again with this one:
Would it help you if I pointed out that the material in this string could be chunked into four sets of three letters each? I think it would, because then rather than remembering 12 letters, you would only have to remember the names of four television stations. In this case, chunking changes the number of items you have to remember from 12 to only four.
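The mechanics of chunking can be sketched in a few lines of code. The station string below is a hypothetical example (the actual letter strings appear in the activity above), and real chunking only works when the groups are meaningful, not merely fixed-width as in this sketch.

```python
def chunk(letters, size=3):
    """Split a string of letters into fixed-size chunks.

    Chunking helps memory only when the chunks are meaningful units
    (here, hypothetical TV-station call letters), so a real learner
    groups by meaning rather than by a fixed width.
    """
    return [letters[i:i + size] for i in range(0, len(letters), size)]

# 12 separate letters exceed the "magic number" of about seven,
# but 4 familiar station names do not.
print(chunk("ABCFOXNBCCBS"))   # ['ABC', 'FOX', 'NBC', 'CBS']
```

Twelve items to remember become four, comfortably within the capacity of STM.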
Experts rely on chunking to help them process complex information. Herbert Simon and William Chase showed chess masters and chess novices various positions of pieces on a chessboard for a few seconds each. The experts did a lot better than the novices in remembering the positions because they were able to see the “big picture.” They didn’t have to remember the position of each of the pieces individually, but chunked the pieces into several larger layouts. But when the researchers showed both groups random chess positions—positions that were unlikely to occur in real games—both groups did equally poorly, because in this situation the experts lost their ability to organize the layouts, as shown in the following figure. The same is true for basketball. Basketball players recall actual basketball positions much better than do nonplayers, but only when the positions make sense in terms of what is happening on the court, or what is likely to happen in the near future, and thus can be chunked into bigger units.
If information makes it past STM it may enter long-term memory (LTM), memory storage that can hold information for days, months, and years. The capacity of long-term memory is large, and there is no known limit to what we can remember.  Although we may forget at least some information after we learn it, other things will stay with us forever. In the next section we will discuss the principles of long-term memory.
Although it is useful to hold information in sensory and short-term memory, we also rely on our long-term memory (LTM). Long-term memory is relatively permanent storage. Explicit memories stored there will stay with us throughout our lifetime (barring a brain disease or injury) as long as we continue to use them. Most of us who teach psychology originally learned the material many years ago. But because we continue to use the information, it remains readily available to us. Once we stop using the material we have learned, it will gradually fade. Implicit memories are less subject to fading with disuse, but over time, they will fade too. If you learn to ski as a child, stop in your teens, and then take it up again in your 40s, you will still remember some of what you learned, but you’ll have to practice again to do it well.
We use long-term memory to remember the name of the new boy in the class, the name of the movie we saw last week, and the material for our upcoming psychology test. Psychological research has produced a great deal of knowledge about long-term memory, and this research can be useful as you try to learn and remember new material. In this module we consider how this works in terms of the types of processing we apply to the information we want to remember. To be successful, the information we want to remember must be encoded, stored, and then retrieved. The rest of this module discusses these three concepts.
Encoding is the process by which we place our experiences into memory. Unless information is encoded, it cannot be remembered. I’m sure you’ve been to a party where you were introduced to someone, and then—maybe only seconds later—you realized you did not remember the person’s name. It's not surprising that you forgot the name, because you probably were distracted and never encoded the name to begin with.
Not everything we experience can or should be encoded. We tend to encode things that we need to remember and not bother to encode things that are irrelevant. Look at the figure below, which shows different images of U.S. pennies. Can you tell which one is the real one? Nickerson and Adams found that very few of the U.S. participants they tested could identify the right one. We see pennies a lot, but we don’t bother to encode their features.
One way to improve our memory is to use better encoding strategies. Some ways of studying are more effective than others. Research has found that we are better able to remember information if we encode it in a meaningful way. When we engage in elaborative encoding, we process new information in ways that make it more relevant or meaningful.  
Imagine that you are trying to remember the characteristics of the different schools of psychology we discussed in the first unit. Rather than simply trying to remember the schools and their characteristics, you might try to relate the information to things you already know. For instance, you might try to remember the fundamentals of the cognitive school of psychology by linking the characteristics to the computer model. The cognitive school focuses on how information is input, processed, and retrieved, and you might think about how computers do pretty much the same thing. You might also try to organize the information into meaningful units. For instance, you might link the cognitive school to structuralism because both are concerned with mental processes. You also might try to use visual cues to help you remember the information. You might look at the image of Freud and imagine what he looked like as a child. That image might help you remember that childhood experiences were an important part of Freudian theory. Each person has his or her unique way of elaborating on information; the important thing is to try to develop unique and meaningful associations among the materials. These suggestions are very good study hints.
We all have knowledge bases we can build upon. Some information connects in meaningful ways to previous knowledge more easily than does other information—for example, a Spanish speaker can connect knowledge of Spanish grammar to help remember the rules of French grammar. Elaborative encoding makes use of these connections.
In an important study showing the effectiveness of elaborative encoding, Rogers, Kuiper, and Kirker  studied how people recalled information that they had learned under different processing conditions. All the participants were presented with the same list of 40 adjectives to learn, but through the use of random assignment, the participants were given one of four different sets of instructions about how to process the adjectives.
Participants assigned to the structural task condition were asked to judge whether the word was printed in uppercase or lowercase letters. Participants in the phonemic task condition were asked whether the word rhymed with another given word. In the semantic task condition, the participants were asked if the word was a synonym of another word. And in the self-reference task condition, participants were asked to indicate whether the given adjective was true of themselves. After completing the specified task, each participant was asked to recall as many adjectives as he or she could remember.
Rogers and his colleagues hypothesized that different types of processing would have different effects on memory. As you can see in the following figure, participants in the self-reference task condition recalled significantly more adjectives than did participants in any other condition. This finding, known as the self-reference effect, is powerful evidence that the self-concept helps us organize and remember information. The next time you are studying for an exam, you might try relating the material to your own experiences. The self-reference effect suggests that doing so will help you better remember the information. 
Hermann Ebbinghaus (1850–1909) was a pioneer of the study of memory. In this section we consider three of his most important findings, each of which can help you improve your memory. In his research, in which he was the only research participant, Ebbinghaus practiced memorizing lists of nonsense syllables, such as the following:
DIF, LAJ, LEQ, MUV, WYC, DAL, SEN, KEP, NUD
You can imagine that because the material he was trying to learn was not at all meaningful, it was not easy to do. Ebbinghaus plotted how many of the syllables he could remember against the time that had elapsed since he studied them. He discovered an important principle of memory: Memory decays rapidly at first, but the amount of decay levels off with time (see the Ebbinghaus Forgetting Curve in the following figure). Although Ebbinghaus looked at forgetting after days had elapsed, the same effect occurs on longer and shorter time scales. Bahrick  found that students who took a Spanish language course forgot about one half of the vocabulary they had learned within 3 years, but after that time, their memory remained pretty much constant. Forgetting also drops off quickly on a shorter time frame, which suggests that you should try to review the material you have already studied right before you take an exam; that way, you will be more likely to remember the material during the exam.
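A common way to formalize Ebbinghaus's curve is an exponential of the form R = exp(−t/S), where S stands for memory strength. The sketch below uses an illustrative value of S (not one Ebbinghaus estimated) just to show the shape: steep loss at first, then leveling off.

```python
import math

def retention(days, strength=1.2):
    """Stylized Ebbinghaus forgetting curve, R = exp(-t / S).

    `strength` (S) is an illustrative constant, not a fitted
    parameter; only the curve's shape matters here.
    """
    return math.exp(-days / strength)

prev = retention(0)
for day in range(1, 7):
    now = retention(day)
    # The day-to-day loss shrinks as time goes on.
    print(f"day {day}: {now:.0%} retained "
          f"({prev - now:.0%} lost since the day before)")
    prev = now
```

Note how the first day accounts for most of the forgetting, which is exactly why reviewing shortly before an exam pays off.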
Ebbinghaus also discovered another important principle of learning, known as the spacing effect. The spacing effect refers to the fact that learning is better when the same amount of study is spread out over time than when it is packed closer together or into a single session. This means that even if you have only a limited amount of time to study, you’ll learn more if you study continually throughout the semester (a little bit every day is best) than if you wait to cram at the last minute before your exam. Another good strategy is to study, then wait almost as long as you can without forgetting the material, and then review it. Then wait again as long as you can before the next review. (This will probably be a longer period of time than the first interval.) Repeat and repeat again. The spacing effect is usually considered in terms of the difference between distributed practice (practice that is spread out over time) and massed practice (practice that comes in one block), with the former approach producing better memory.
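The "wait a bit longer each time" strategy amounts to an expanding review schedule. The sketch below generates one; the starting interval and growth factor are illustrative choices, since the spacing-effect research says to spread practice out but does not prescribe exactly how fast the intervals should grow.

```python
def review_schedule(first_interval=1, growth=2, n_reviews=5):
    """Expanding-interval review schedule, as days after initial study.

    first_interval and growth are illustrative values, not
    empirically optimal ones.
    """
    day, interval, schedule = 0, first_interval, []
    for _ in range(n_reviews):
        day += interval          # next review lands after the current gap
        schedule.append(day)
        interval *= growth       # each subsequent gap is longer
    return schedule

print(review_schedule())   # [1, 3, 7, 15, 31]
```

Five reviews spread over a month in this way are distributed practice; the same five reviews the night before the exam would be massed practice.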
Ebbinghaus also considered the role of overlearning—that is, continuing to practice and study even when we think that we have mastered the material. Ebbinghaus and other researchers have found that overlearning helps encoding.  Students frequently think that they have already mastered the material but then discover when they get to the exam that they have not. The point is clear: Try to keep studying and reviewing, even if you think you already know all the material.
Instructions: Answer the first two questions on the basis of what you know so far about the forgetting curve. Answer the third and fourth questions on the basis of what you know about the spacing effect and overlearning.
Even when information has been adequately encoded and stored, it does not do us any good if we cannot retrieve it. Retrieval is the process of reactivating information that has been stored in memory.
We’ve all experienced retrieval failure in the form of the frustrating tip-of-the-tongue phenomenon, in which we are certain that we know something we are trying to recall but cannot quite come up with it. You can try this on your friends. Read your friend the names of the 10 states listed below, and ask him or her to name the capital city of each state. Then, for the capital cities that your friend can’t name, provide just the first letter of the capital city. You’ll probably find that having the first letters of the cities helps with retrieval. The tip-of-the-tongue experience is a very good example of the inability to retrieve information that is actually stored in memory.
Try this demonstration of the tip-of-the-tongue phenomenon with a classmate. Follow the instructions from the paragraph above.
[Table: States and Capital Cities]
You can get an idea of the difficulty posed by retrieval by simply reading each of the words in the activity below. After you have read all the words, you will be asked to recall them.
Instructions: On the next page you will have 2 minutes to memorize a list of words. After you read the list, you will be given time to enter all the words that you can recall. Press Start to begin.
We are more likely to be able to retrieve items from memory when conditions at retrieval are similar to the conditions under which we encoded them. Context-dependent learning refers to an increase in retrieval when the external situation in which information is learned matches the situation in which it is remembered. Godden and Baddeley  conducted a study to test this idea using scuba divers. They asked the divers to learn a list of words either when they were on land or when they were underwater. Then they tested the divers on their memory, either in the same or the opposite situation. As you can see in the following figure, the divers’ memory was better when they were tested in the same context in which they had learned the words than when they were tested in the other context.
You can see that context-dependent learning might also be important in improving your memory. For instance, you might want to try to study for an exam in a situation that is similar to the one in which you are going to take the exam.
Whereas context-dependent learning refers to a match in the external situation between learning and remembering, state-dependent learning refers to superior retrieval of memories when the individual is in the same physiological or psychological state as during encoding. Research has found, for instance, that animals that learn a maze while under the influence of one drug tend to remember their learning better when they are tested under the influence of the same drug than when they are tested without the drug.  And research with humans finds that bilinguals remember better when tested in the same language in which they learned the material.  Mood states may also produce state-dependent learning. People who learn information when they are in a bad (rather than a good) mood find it easier to recall these memories when they are tested while they are in a bad mood, and vice versa. It is easier to recall unpleasant memories than pleasant ones when we’re sad, and easier to recall pleasant memories than unpleasant ones when we’re happy.  
Variations in the ability to retrieve information are also seen in the serial position curve. When we give people a list of words one at a time (e.g., on flashcards) and then ask them to recall them, the results look something like those in the figure below. People are able to retrieve more words that were presented to them at the beginning and the end of the list than they are words that were presented in the middle of the list. This pattern, known as the serial position curve, is caused by two retrieval phenomena: The primacy effect is a tendency to better remember stimuli that are presented early in a list. The recency effect is the tendency to better remember stimuli that are presented later in a list.
There are a number of explanations for primacy and recency effects; one has to do with the effects of rehearsal on short-term and long-term memory.  Because we can keep the last words we learned in the presented list in short-term memory by rehearsing them before the memory test begins, they are relatively easily remembered. The recency effect therefore can be explained in terms of maintenance rehearsal in short-term memory. And the primacy effect may also be due to rehearsal—when we hear the first word in the list, we start to rehearse it, making it more likely that it will be moved from short-term to long-term memory. The same is true for the other words that come early in the list. But for the words in the middle of the list, this rehearsal becomes much harder, making them less likely to be moved to long-term memory.
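The rehearsal account of primacy and recency can be turned into a toy simulation. All the numbers below (base recall rate, rehearsal bonus, STM span) are invented for illustration; the point is only that the two mechanisms together produce the U-shaped curve.

```python
def recall_probability(n_items=15, stm_span=3, rehearsal_benefit=0.12):
    """Toy model of the serial position curve (illustrative numbers).

    Primacy: early items get extra rehearsal before the list fills
    attention, raising their odds of reaching long-term memory.
    Recency: the last few items are still in the STM buffer at test.
    """
    probs = []
    for pos in range(n_items):
        base = 0.30
        primacy = rehearsal_benefit * max(0, 4 - pos)   # fades after a few items
        recency = 0.50 if pos >= n_items - stm_span else 0.0
        probs.append(round(min(1.0, base + primacy + recency), 2))
    return probs

curve = recall_probability()
print(curve)   # high at both ends, flat and low in the middle
```

Plotting `curve` against position would reproduce the familiar U shape of the serial position curve.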
In some cases memories interfere with one another; this can occur in either a backward or a forward direction. Retroactive interference occurs when learning something new impairs our ability to retrieve information that was learned earlier. For example, if you have learned to program in one computer language, and then you learn to program in another similar one, you may start to make mistakes programming the first language that you never would have made before you learned the new one. In this case the new memories work backward (retroactively) to influence retrieval from memory that is already in place.
In contrast to retroactive interference, proactive interference works in a forward direction. Proactive interference occurs when earlier learning impairs our ability to encode information that we try to learn later. For example, if you learned French as a second language, this knowledge may make it more difficult, at least in some respects, to learn a third language (say Spanish), which involves similar but not identical vocabulary.
Memories stored in long-term memory are not isolated but rather are linked into categories—networks of associated memories that have features in common with each other. Forming categories and using them to guide behavior is a fundamental part of human nature. Associated concepts within a category are connected through spreading activation, which occurs when activating one element of a category activates other associated elements. For instance, because tools are associated in a category, reminding people of the word screwdriver will help them remember the word wrench. And when people learn lists of words that come from different categories (e.g., as in the retrieval exercise on the previous page), they do not recall the information haphazardly. If they remember the word wrench, they are more likely to remember the word screwdriver next than they are to remember the word dahlia because the words are organized in memory by category and because screwdriver is activated by spreading activation from wrench.
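Spreading activation is often modeled as a network in which activating one node partially activates its neighbors. The sketch below is a minimal version of that idea; the association links and the `boost` value are illustrative, not drawn from any particular dataset.

```python
# Toy associative network: links and boost value are illustrative only.
associations = {
    "wrench":      ["screwdriver", "hammer"],
    "screwdriver": ["wrench", "hammer"],
    "hammer":      ["wrench", "screwdriver"],
    "dahlia":      ["rose", "tulip"],
    "rose":        ["dahlia", "tulip"],
    "tulip":       ["dahlia", "rose"],
}

def spread_activation(cue, boost=0.5):
    """Fully activate the cue, then spread partial activation
    to its associates (the other members of its category)."""
    activation = {word: 0.0 for word in associations}
    activation[cue] = 1.0
    for neighbor in associations[cue]:
        activation[neighbor] += boost
    return activation

act = spread_activation("wrench")
# "screwdriver" is now more active than "dahlia", so it is the
# more likely word to be retrieved next.
print(act["screwdriver"], act["dahlia"])   # 0.5 0.0
```

This is why recall of a categorized word list tends to come out in category clusters rather than haphazardly.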
Some categories have defining features that must be true of all members of the category. For instance, all members of the category “triangles” have three sides, and all members of the category “birds” lay eggs. But most categories are not so well defined; the members of the category share some common features, but it is difficult to say definitively which are and which are not members of the category. For instance, there is no clear definition of the category “tool.” Some examples of the category, such as a hammer and a wrench, are clearly and easily identified as category members, whereas other members are not so obvious. Is an ironing board a tool? What about a car?
Members of categories (even those with defining features) can be compared to the category prototype, which is the member of the category that is most average or typical of the category. Some category members are more prototypical of, or similar to, the category than others. For instance, some category members (robins and sparrows) are highly prototypical of the category “birds,” whereas other category members (penguins and ostriches) are less prototypical. We retrieve information that is prototypical of a category faster than we retrieve information that is less prototypical. 
Mental categories are sometimes referred to as schemas—patterns of knowledge in long-term memory that help us organize information. We have schemas about objects (that a triangle has three sides and may take on different angles), about people (that Sam is friendly, likes to golf, and always wears sandals), about events (the particular steps involved in ordering a meal at a restaurant), and about social groups (we call these group schemas stereotypes).
Schemas are important in part because they help us remember new information by providing an organizational structure for it. Read the following paragraph  and then try to write down everything you can remember.
The procedure is actually quite simple. First you arrange things into different groups. Of course, one pile may be sufficient depending on how much there is to do. If you have to go somewhere else due to lack of facilities, that is the next step; otherwise you are pretty well set. It is important not to overdo things. That is, it is better to do too few things at once than too many. In the short run this may not seem important, but complications can easily arise. A mistake can be expensive as well. At first the whole procedure will seem complicated. Soon, however, it will become just another facet of life. It is difficult to foresee any end to the necessity for this task in the immediate future, but then one never can tell. After the procedure is completed, one arranges the materials into different groups again. Then they can be put into their appropriate places. Eventually they will be used once more and the whole cycle will then have to be repeated. However, that is part of life.
It turns out that people’s memory for this information is quite poor unless they are told before they read it that the information describes doing laundry, in which case their memory for the material is much better. This demonstration of the role of schemas in memory shows how our existing knowledge can help us organize new information and how this organization can improve encoding, storage, and retrieval.
Just as information is stored on digital media such as DVDs and flash drives, the information in LTM must be stored in the brain. How do different encoding and retrieval strategies affect our brains at the neural level? We saw from previous sections on elaborative encoding, categories and schemas that we give LTM a unique internal organization. Does that mean that there must be a “memory center” of the brain where all memories are organized for quick retrieval? Additionally, how do diseases such as Alzheimer’s disease and conditions such as amnesia cause us to forget information we have already stored in the brain? To answer these questions, we must think of the brain at two different levels: at the level of neurons and at the level of brain areas.
The ability to maintain information in LTM involves a gradual strengthening of the connections among the neurons in the brain. When pathways in these neural networks are frequently and repeatedly fired, the synapses become more efficient in communicating with each other, and these changes create memory. This process, known as long-term potentiation (LTP), refers to the strengthening of the synaptic connections between neurons as result of frequent stimulation.  Drugs that block LTP reduce learning, whereas drugs that enhance LTP increase learning.  Because the new patterns of activation in the synapses take time to develop, LTP happens gradually. The period of time in which LTP occurs and in which memories are stored is known as the period of consolidation. Consolidation of memories formed during the day often happens during sleep, and some theorize that this is one important function of sleep.
Long-term potentiation occurs as a result of changes in the synapses, which suggests that chemicals, particularly neurotransmitters and hormones, must be involved in memory. There is quite a bit of evidence that this is true. Glutamate, a neurotransmitter and a form of the amino acid glutamic acid, is perhaps the most important neurotransmitter in memory. When animals, including people, are under stress, more glutamate is secreted, and this glutamate can help them remember. The neurotransmitter serotonin is also secreted when animals learn, and epinephrine may also increase memory, particularly for stressful events. Estrogen, a female sex hormone, also seems critical, because women who are experiencing menopause, along with a reduction in estrogen, frequently report memory difficulties. Synaptic changes like these build up through practice, which is why rehearsal matters in learning: each time we rehearse, the pathway is activated, and each activation strengthens the connections along that pathway.
Our knowledge of the role of biology in memory suggests that it might be possible to use drugs to improve our memories, and Americans spend several hundred million dollars per year on memory supplements with the hope of doing just that. Yet controlled studies comparing memory enhancers, including methylphenidate (Ritalin), ginkgo biloba, and amphetamines, with placebos find very little evidence for their effectiveness. Memory supplements are usually no more effective than drinking a sugared soft drink, which also releases glucose and thus improves memory slightly.
The following video demonstrates a metaphor for how long-term potentiation creates strong, easily accessible memories. Please answer the following questions about long-term potentiation based on the video and the reading.
Memory occurs through sophisticated interactions between new and old brain structures, shown in the following figure. One of the most important brain regions in explicit memory is the hippocampus, which serves as a preprocessor and elaborator of information. The hippocampus helps us encode information about spatial relationships, the context in which events were experienced, and the associations among memories. The hippocampus also serves in part as a switching point that holds the memory for a short time and then directs the information to other parts of the brain, such as the cortex, to actually do the rehearsing, elaboration, and long-term storage. Without the hippocampus, which might be described as the brain’s “librarian,” our explicit memories would be inefficient and disorganized. Even so, the older we get, the more susceptible we are to proactive and retroactive interference, which shows that the “librarian” finds it harder to retrieve the right memory from a pile of similar memories as we age. In people with Alzheimer’s disease, a neurodegenerative disease which is most common in the elderly, the hippocampus is severely atrophied. Unsurprisingly, one of the most common symptoms of this disease is the inability to form new memories, followed by a loss of the most recent memories and, finally, the loss of old memories.
While the hippocampus is handling explicit memory, the cerebellum and the amygdala are concentrating on implicit and emotional memories, respectively. Research shows that the cerebellum is more active when we are learning associations and in priming tasks, and animals and humans with damage to the cerebellum have more difficulty in classical conditioning studies.   The cerebellum is also highly involved in the learning of procedural tasks which need fine motor control, such as writing, riding a bike, and sewing. The storage of many of our most important emotional memories, and particularly those related to fear, is initiated and controlled by the amygdala.  If both amygdalae are damaged, people do not lose their memories of positive or negative emotional associations, but they lose the ability to create new positive or negative associations with objects and events.
Although some brain structures are particularly important in memory, this does not mean that all memories are stored in one place. The American psychologist Karl Lashley attempted to determine where memories were stored in the brain by teaching rats how to run mazes and then lesioning different brain structures to see whether the rats could still complete the maze. This idea seemed straightforward, and Lashley expected to find that memory was stored in certain parts of the brain. But he discovered that no matter where he removed brain tissue, the rats retained at least some memory of the maze, leading him to conclude that memory isn’t located in a single place in the brain, but rather is distributed around it.
Our memories are not perfect. They fail in part due to our inadequate encoding and storage, and in part due to our inability to accurately retrieve stored information. But memory is also influenced by the setting in which it occurs, by the events that occur to us after we have experienced an event, and by the cognitive processes that we use to help us remember. Although our cognition allows us to attend to, rehearse, and organize information, cognition may also lead to distortions and errors in our judgments and our behaviors.
In this section we consider some of the cognitive biases that are known to influence humans. Cognitive biases are errors in memory or judgment that are caused by the inappropriate use of cognitive processes. The study of cognitive biases is important both because it relates to the important psychological theme of accuracy versus inaccuracy in perception, and because being aware of the types of errors that we may make can help us avoid them and therefore improve our decision-making skills.
A particular problem for eyewitnesses such as Jennifer Thompson, who misidentified her rapist in court resulting in his wrongful conviction, is that our memories are often influenced by the things that occur to us after we have learned the information.    This new information can distort our original memories such that we are no longer sure what is the real information and what was provided later. The misinformation effect refers to errors in memory that occur when new information influences existing memories.
In an experiment by Loftus and Palmer, participants viewed a film of a traffic accident and then, according to random assignment to experimental conditions, answered one of three questions about how fast the cars had been going. The questions were identical except for the verb used to describe the collision.
As you can see in the figure below, although all the participants saw the same accident, their estimates of the cars’ speed varied by condition. Participants who had been asked about the cars “smashing” each other estimated the highest average speed, and those who had been asked the “contacted” question estimated the lowest average speed.
In addition to distorting our memories for events that have actually occurred, misinformation may lead us to falsely remember information that never occurred. Loftus and her colleagues asked parents to provide them with descriptions of events that did (e.g., moving to a new house) and did not (e.g., being lost in a shopping mall) happen to their children. Then (without telling the children which events were real or made-up) the researchers asked the children to imagine both types of events. The children were instructed to “think real hard” about whether the events had occurred.  More than half of the children generated stories regarding at least one of the made-up events, and they remained insistent that the events did in fact occur even when told by the researcher that they could not possibly have occurred.  Even college students are susceptible to manipulations that make events that did not actually occur seem as if they did. 
The ease with which memories can be created or implanted is particularly problematic when the events to be recalled have important consequences. Therapists often argue that patients may repress memories of traumatic events they experienced as children, such as childhood sexual abuse, and then recover the events years later as the therapist leads them to recall the information—for instance, by using dream interpretation and hypnosis. 
But other researchers argue that painful memories such as sexual abuse are usually very well remembered, that few memories are actually repressed, and that even if they are, it is virtually impossible for patients to accurately retrieve them years later.   These researchers have argued that the procedures used by the therapists to “retrieve” the memories are more likely to actually implant false memories, leading the patients to erroneously recall events that did not actually occur. Because hundreds of people have been accused, and even imprisoned, on the basis of claims about “recovered memory” of child sexual abuse, the accuracy of these memories has important societal implications. Many psychologists now believe that most of these claims of recovered memories are due to implanted, rather than real, memories. 
One potential error in memory involves mistakes in differentiating the sources of information. Source monitoring refers to the ability to accurately identify the source of a memory. Perhaps you’ve had the experience of wondering whether you really experienced an event or only dreamed or imagined it. If so, you wouldn’t be alone. Rassin, Merckelbach, and Spaan reported that up to 25% of college students reported being confused about real versus dreamed events. Studies suggest that people who are fantasy-prone are more likely to experience source monitoring errors, and such errors also occur more often for both children and the elderly than for adolescents and younger adults.
In other cases, we may be sure that we remembered the information from real life but be uncertain about exactly where we heard it. Imagine that you read a news story in a tabloid magazine such as the National Enquirer. Probably you would have discounted the information because you know that its source is unreliable. But what if later you were to remember the story but forget the source of the information? If this happens, you might become convinced that the news story is true because you forget to discount it. The sleeper effect refers to an attitude change that occurs over time when we forget the source of information. 
In still other cases we may forget where we learned information and mistakenly assume that we created the memory ourselves. Kaavya Viswanathan, the author of the book How Opal Mehta Got Kissed, Got Wild, and Got a Life, was accused of plagiarism when it was revealed that many parts of her book were very similar to passages from other material. Viswanathan argued that she had simply forgotten that she had read the other works, mistakenly assuming she had made up the material herself. And the musician George Harrison claimed that he was unaware that the melody of his song “My Sweet Lord” was almost identical to an earlier song by another composer. The judge in the copyright suit that followed ruled that Harrison didn’t intentionally commit the plagiarism. (Please use this knowledge to become extra vigilant about source attributions in your written work, not to try to excuse yourself if you are accused of plagiarism.)
Research reveals a pervasive cognitive bias toward overconfidence, which is the tendency for people to be too certain about their ability to accurately remember events and to make judgments. David Dunning and his colleagues  asked college students to predict how another student would react in various situations. Some participants made predictions about a fellow student whom they had just met and interviewed, and others made predictions about their roommates whom they knew very well. In both cases, participants reported their confidence in each prediction, and accuracy was determined by the responses of the people themselves. The results were clear: Regardless of whether they judged a stranger or a roommate, the participants consistently overestimated the accuracy of their own predictions.
Eyewitnesses to crimes are also frequently overconfident in their memories, and there is only a small correlation between how accurate and how confident an eyewitness is. A witness who claims to be absolutely certain about, for example, his or her identification of a suspect or account of events is not much more likely to be accurate than one who appears less sure, making it almost impossible to determine whether or not a particular witness is accurate. 
I am sure that you have a clear memory of when you first heard about the 9/11 attacks in 2001, and perhaps also when you heard that Princess Diana was killed in 1997 or when the verdict of the O. J. Simpson trial was announced in 1995. This type of memory, which we experience along with a great deal of emotion, is known as a flashbulb memory—a vivid and emotional memory of an unusual event that people believe they remember very well. 
People are very certain of their memories of these important events, and frequently overconfident. Talarico and Rubin tested the accuracy of flashbulb memories by asking students to write down their memory of how they had heard the news about either the September 11, 2001, terrorist attacks or about an everyday event that had occurred to them during the same time frame. These initial reports were collected on September 12, 2001. Then the participants were asked again, either 1, 6, or 32 weeks later, to recall their memories. The participants became less accurate in their recollections of both the emotional event and the everyday events over time. But the participants’ confidence in the accuracy of their memory of learning about the attacks did not decline over time. After 32 weeks, the participants were overconfident; they were much more certain about the accuracy of their flashbulb memories than they should have been. Schmolck, Buffalo, and Squire found similar distortions in memories of news about the verdict in the O. J. Simpson trial.
If you’ve already covered the unit on how cognitive change proceeds in childhood, this paragraph and the following Learn By Doing exercise will be a quick refresher on schematic processing. If that material is yet to come, then this will serve as a brief introduction to the topic. Schemata (plural of schema) are mental representations of the world that are formed and adjusted using the processes of assimilation and accommodation as a person experiences life. Assimilation is the use of existing schema to interpret new information and accommodation is the adjustment of existing schema to fit new information. Generally, both processes are in action at the same time.
We have seen that schemas help us remember information by organizing material into coherent representations. However, although schemas can improve our memories, they may also lead to cognitive biases. Using schemas may lead us to falsely remember things that never happened to us and to distort or misremember things that did. For one, schemas lead to the confirmation bias, which is the tendency to verify and confirm our existing memories rather than to challenge and disconfirm them. The confirmation bias occurs because once we have schemas, they influence how we seek out and interpret new information. The confirmation bias leads us to remember information that fits our schemas better than we remember information that disconfirms them,  a process that makes our stereotypes very difficult to change. And we ask questions in ways that confirm our schemas.  If we think that a person is an extrovert, we might ask her about ways that she likes to have fun, thereby making it more likely that we will confirm our beliefs. In short, once we begin to believe in something—for instance, a stereotype about a group of people—it becomes very difficult to later convince us that these beliefs are not true; the beliefs become self-confirming.
Darley and Gross  demonstrated how schemas about social class could influence memory. In their research they gave participants a picture and some information about a fourth-grade girl named Hannah. To activate a schema about her social class, Hannah was pictured sitting in front of a nice suburban house for one-half of the participants and pictured in front of an impoverished house in an urban area for the other half. Then the participants watched a video that showed Hannah taking an intelligence test. As the test went on, Hannah got some of the questions right and some of them wrong, but the number of correct and incorrect answers was the same in both conditions. Then the participants were asked to remember how many questions Hannah got right and wrong. Demonstrating that stereotypes had influenced memory, the participants who thought that Hannah had come from an upper-class background remembered that she had gotten more correct answers than those who thought she was from a lower-class background.
Our reliance on schemas can also make it more difficult for us to “think outside the box.” Peter Wason  asked college students to determine the rule that was used to generate the numbers 2-4-6 by asking them to generate possible sequences and then telling them if those numbers followed the rule. The first guess that students made was usually “consecutive ascending even numbers,” and they then asked questions designed to confirm their hypothesis (“Does 102-104-106 fit?” “What about 404-406-408?”). Upon receiving information that those guesses did fit the rule, the students stated that the rule was “consecutive ascending even numbers.” But the students’ use of the confirmation bias led them to ask only about instances that confirmed their hypothesis, and not about those that would disconfirm it. They never bothered to ask whether 1-2-3 or 3-11-200 would fit, and if they had, they would have learned that the rule was not “consecutive ascending even numbers” but simply “any three ascending numbers.” Again, you can see that once we have a schema (in this case a hypothesis), we continually retrieve that schema from memory rather than other relevant ones, leading us to act in ways that tend to confirm our beliefs.
Functional fixedness occurs when people’s schemas prevent them from using an object in new and nontraditional ways. Duncker gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so that it did not drip onto the table below. Few of the participants realized that the box could be tacked to the wall and used as a platform to hold the candle. The problem again is that our existing memories are powerful, and they bias the way we think about new information. Because the participants were “fixated” on the box’s normal function of holding thumbtacks, they could not see its alternative use.
Still another potential for bias in memory occurs because we are more likely to attend to, and thus make use of and remember, some information more than other information. For one, we tend to attend to and remember things that are highly salient, meaning that they attract our attention. Things that are unique, colorful, bright, moving, and unexpected are more salient.   In one relevant study, Loftus, Loftus, and Messo  showed people images of a customer walking up to a bank teller and pulling out either a pistol or a checkbook. By tracking eye movements, the researchers determined that people were more likely to look at the gun than at the checkbook, and that this reduced their ability to accurately identify the criminal in a lineup that was given later. The salience of the gun drew people’s attention away from the face of the criminal.
The salience of the stimuli in our social worlds has a big influence on our judgment, and in some cases may lead us to behave in ways that we would have been better off avoiding. Imagine, for instance, that you have been trying to decide which brand of tablet computer to buy. You checked Consumer Reports online and found that, although the brands differed on many dimensions, including price, battery life, and so forth, Brand X was nevertheless rated significantly higher by owners than were the other brands. As a result, you decide to purchase Brand X the next day. That night, however, you go to a party, and a friend shows you her new Brand Y tablet. You check it out, and it seems really cool. You tell her that you were thinking of buying Brand X, and she tells you that you are crazy. She says she knows someone who had one and it had a lot of problems—it didn’t download files correctly, the battery died right after the warranty expired, and so forth—and that she would never buy one. Would you still buy Brand X, or would you switch your plans?
If you think about this question logically, the information that you just got from your friend isn’t really all that important. You now know the opinion of one more person, but that can’t change the overall rating of the brands very much. On the other hand, the information your friend gives you, and the chance to use her Brand Y tablet, are highly salient. The information is right there in front of you, in your hand, whereas the statistical information from Consumer Reports is only in the form of a table that you saw on your computer. The outcome in cases such as this is that people frequently ignore the less salient but more important information, such as the likelihood that events occur across a large population (these statistics are known as base rates), in favor of the less important but nevertheless more salient information. The situation is further complicated by the fact that people tend to selectively remember certain outcomes because they are salient while disregarding mundane ones. Moreover, people’s first-person perspective makes their own actions more cognitively accessible than the actions of others, leading them to overestimate the degree to which they played a role in an event or project.
Another way that our information processing may be biased occurs when we use heuristics, which are information-processing strategies that are useful in many cases but may lead to errors when misapplied. These strategies stand in contrast to algorithms, which are recipe-style information-processing strategies that guarantee a correct answer every time. Using the Pythagorean theorem to find the length of a hypotenuse and using a formula to convert between Fahrenheit and Celsius are two examples of algorithms, and there are many others. The reason people don’t always use algorithmic processing is that most problems they encounter have no algorithmic solution (or, if one exists, it may be too complicated to apply), so they resort to heuristics as the next best alternative. Let’s consider two of the most frequently applied (and misapplied) heuristics: the representativeness heuristic and the availability heuristic.
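The temperature-conversion case makes the contrast concrete. Below is a minimal sketch: the exact formula is an algorithm (correct for every input), while the well-known mental shortcut "double it and add 30" (our added example, not from the text) is a heuristic that is fast and close at everyday temperatures but drifts as inputs move away from about 10 °C.

```python
def c_to_f_exact(celsius):
    """Algorithm: the exact conversion formula, correct for every input."""
    return celsius * 9 / 5 + 32

def c_to_f_heuristic(celsius):
    """Heuristic: "double it and add 30" -- quick mental arithmetic
    that is close near everyday temperatures but increasingly wrong
    for extreme ones."""
    return celsius * 2 + 30

print(c_to_f_exact(10), c_to_f_heuristic(10))   # 50.0 50  (heuristic happens to be exact)
print(c_to_f_exact(35), c_to_f_heuristic(35))   # 95.0 100 (heuristic overshoots)
```

The trade-off mirrors the text: the heuristic is cheaper to compute in your head, and that is exactly why people use it even though it can mislead.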
In many cases we base our judgments on information that seems to represent, or match, what we expect will happen, while ignoring other potentially more relevant statistical information. When we do so, we are using the representativeness heuristic. Consider, for instance, the puzzle presented in the following table. Let’s say that you went to a hospital, and you checked the records of the babies that were born today. Which pattern of births do you think you are most likely to find?
Table: The Representativeness Heuristic
Using the representativeness heuristic may lead us to incorrectly believe that some patterns of observed events are more likely to have occurred than others. In this case, list B seems more random, and thus is judged as more likely to have occurred, but statistically both lists are equally likely.
Most people think that list B is more likely, probably because list B looks more random, and thus matches (is “representative of”) our ideas about randomness. But statisticians know that any particular pattern of four girls and four boys is mathematically equally likely. The problem is that we have a schema of what randomness should be like, which doesn’t always match what is mathematically the case. Similarly, people who see a flipped coin come up “heads” five times in a row will frequently predict, and perhaps even wager money, that “tails” will be next. This behavior is known as the gambler’s fallacy. But mathematically, the gambler’s fallacy is an error: The likelihood of any single coin flip being “tails” is always 50%, regardless of how many times it has come up “heads” in the past. Probability, the likelihood of something happening, is calculated by dividing the number of favorable outcomes by the total number of possible outcomes (in our case, ½ for both heads and tails). The previous history of events does not affect future events. Another illustration of the gambler’s fallacy is the deceptive phenomenon of streaky basketball shooters.
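You can check the coin-flip claim with a quick simulation. The sketch below (the trial count and seed are arbitrary choices of ours) repeatedly flips a fair coin until it sees five heads in a row, then records the very next flip. If the gambler's fallacy were correct, tails would be "due" and the heads rate after a streak would fall below 50%; it does not.

```python
import random

random.seed(1)  # for reproducibility only; the seed value is arbitrary

def heads_rate_after_streak(streak_len=5, trials=50_000):
    """Estimate P(heads) on the flip immediately after a run of
    `streak_len` heads in a row, using a fair coin."""
    heads_after = 0
    for _ in range(trials):
        run = 0
        # flip until we have seen streak_len consecutive heads
        while run < streak_len:
            run = run + 1 if random.random() < 0.5 else 0
        # record the very next flip
        if random.random() < 0.5:
            heads_after += 1
    return heads_after / trials

print(heads_rate_after_streak())  # stays close to 0.5, not below it
```

Each flip is independent, so the streak that came before carries no information about the next outcome.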
Imagine you are at a Lakers game watching Dwight Howard shoot free throws. Let’s assume he generally makes 6 out of 10 shots, so his accuracy is 60%. As the game goes on, the other team continues to foul Dwight, so he takes many free throws throughout the game. You start to wonder what the chances are that he will make a certain number of shots. You remember from your math class that you need to multiply the individual probabilities of independent events together to calculate their combined probability.
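That multiplication rule can be written out directly. This short sketch uses the 60% accuracy assumed above and treats each free throw as independent:

```python
def prob_all_made(p_single, n_shots):
    """Combined probability of making n independent free throws in a
    row: multiply the single-shot probability by itself n times (p ** n)."""
    return p_single ** n_shots

# Dwight's assumed 60% single-shot accuracy:
print(round(prob_all_made(0.6, 2), 2))  # 0.36  (0.6 * 0.6)
print(round(prob_all_made(0.6, 3), 3))  # 0.216 (0.6 * 0.6 * 0.6)
```

Notice how quickly the combined probability shrinks: even a 60% shooter makes three in a row barely one time in five.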
Our judgments can also be influenced by how easy it is to retrieve a memory. The tendency to make judgments of the frequency or likelihood that an event occurs on the basis of the ease with which it can be retrieved from memory is known as the availability heuristic. Imagine, for instance, that I asked you to indicate whether there are more words in the English language that begin with the letter “R” or that have the letter “R” as the third letter. You would probably answer this question by trying to think of words that have each of the characteristics, thinking of all the words you know that begin with “R” and all that have “R” in the third position. Because it is much easier to retrieve words by their first letter than by their third, we may incorrectly guess that there are more words that begin with “R,” even though there are in fact more words that have “R” as the third letter.
The availability heuristic may also operate on episodic memory. We may think that our friends are nice people, because we see and remember them primarily when they are around us (their friends, to whom they are, of course, nice). And the traffic might seem worse in our own neighborhood than we think it is in other places, in part because nearby traffic jams are more easily retrieved than are traffic jams that occur somewhere else.
In addition to influencing our judgments about ourselves and others, the ease with which we can retrieve potential experiences from memory can have an important effect on our own emotions. If we can easily imagine an outcome that is better than what actually happened, then we may experience sadness and disappointment; on the other hand, if we can easily imagine that a result might have been worse than what actually happened, we may be more likely to experience happiness and satisfaction. The tendency to think about and experience events according to “what might have been” is known as counterfactual thinking.  
Imagine, for instance, that you were participating in an important contest, and you won the silver (second-place) medal. How would you feel? Certainly you would be happy that you won the silver medal, but wouldn’t you also be thinking about what might have happened if you had been just a little bit better—you might have won the gold medal! On the other hand, how might you feel if you won the bronze (third-place) medal? If you were thinking about the counterfactuals (the “what might have beens”) perhaps the idea of not getting any medal at all would have been highly accessible; you’d be happy that you got the medal that you did get, rather than coming in fourth.
Tom Gilovich and his colleagues investigated this idea by videotaping the responses of athletes who won medals in the 1992 Summer Olympic Games.  They videotaped the athletes both as they learned that they had won a silver or a bronze medal and again as they were awarded the medal. Then the researchers showed these videos, without any sound, to raters who did not know which medal which athlete had won. The raters were asked to indicate how they thought the athlete was feeling, using a range of feelings from “agony” to “ecstasy.” The results showed that the bronze medalists were, on average, rated as happier than were the silver medalists. In a follow-up study, raters watched interviews with many of these same athletes as they talked about their performance. The raters indicated what we would expect on the basis of counterfactual thinking—the silver medalists talked about their disappointments in having finished second rather than first, whereas the bronze medalists focused on how happy they were to have finished third rather than fourth.
You might have experienced counterfactual thinking in other situations. Once I was driving across country, and my car was having some engine trouble. I really wanted to make it home when I got near the end of my journey; I would have been extremely disappointed if the car broke down only a few miles from my home. Perhaps you have noticed that once you get close to finishing something, you feel like you really need to get it done. Counterfactual thinking has even been observed in juries. Jurors who were asked to award monetary damages to others who had been in an accident offered them substantially more in compensation if they barely avoided injury than they offered if the accident seemed inevitable. 
Perhaps you are thinking that the kinds of errors that we have been talking about don’t seem that important. After all, who really cares if we think there are more words that begin with the letter “R” than there actually are, or if bronze medal winners are happier than the silver medalists? These aren’t big problems in the overall scheme of things. But it turns out that what seem to be relatively small cognitive biases on the surface can have profound consequences for people.
Why would so many people continue to purchase lottery tickets, buy risky investments in the stock market, or gamble their money in casinos when the likelihood of them ever winning is so low? One possibility is that they are victims of salience; they focus their attention on the salient likelihood of a big win, forgetting that the base rate of the event occurring is very low. The belief in astrology, which all scientific evidence suggests is not accurate, is probably driven in part by the salience of the occasions when the predictions are correct. When a horoscope comes true (which will, of course, happen sometimes), the correct prediction is highly salient and may allow people to maintain the overall false belief.
People may also take more care to prepare for unlikely events than for more likely ones, because the unlikely ones are more salient. For instance, people may think that they are more likely to die from a terrorist attack or a homicide than they are from diabetes, stroke, or tuberculosis. But the odds are much greater of dying from the latter than the former.
Salience and accessibility also color how we perceive our social worlds, which may have a big influence on our behavior. For instance, people who watch a lot of violent television shows also view the world as more dangerous,  probably because violence becomes more cognitively accessible for them. We also unfairly overestimate our contribution to joint projects,  perhaps in part because our own contributions are highly accessible, whereas the contributions of others are much less so.
Even people who should know better, and who need to know better, are subject to cognitive biases. Economists, stock traders, managers, lawyers, and even doctors make the same kinds of mistakes in their professional activities that people make in their everyday lives.  Just like us, these people are victims of overconfidence, heuristics, and other biases.
Furthermore, every year thousands of individuals, such as Ronald Cotton, are charged with and often convicted of crimes based largely on eyewitness evidence. When eyewitnesses testify in courtrooms regarding their memories of a crime, they often are completely sure that they are identifying the right person. But the most common cause of innocent people being falsely convicted is erroneous eyewitness testimony.  The many people who were convicted by mistaken eyewitnesses prior to the advent of forensic DNA and who have now been exonerated by DNA tests have certainly paid for all-too-common memory errors. 
Although cognitive biases are common, they are not impossible to control, and psychologists and other scientists are working to help people make better decisions. One possibility is to provide people with better feedback about their judgments. Weather forecasters, for instance, learn to be quite accurate in their judgments because they have clear feedback about the accuracy of their predictions. Other research has found that accessibility biases can be reduced by leading people to consider multiple alternatives rather than focus only on the most obvious ones, and particularly by leading people to think about opposite possible outcomes than the ones they are expecting.  Forensic psychologists are also working to reduce the incidence of false identification by helping police develop better procedures for interviewing both suspects and eyewitnesses. 
Another source of errors in cognition is belief in the paranormal. A Gallup poll in 2005 showed that 3 out of 4 Americans believe in the supernatural, with over 40% responding that they believe in extrasensory perception (ESP), the ability to sense things without being in physical proximity to the person, place, thing, or event. Much research has claimed to prove or disprove the existence of such phenomena. While the paranormal is taken for granted by much of the general public, quite the opposite is observed among members of the National Academy of Sciences, where only 4% of members believe in the existence of such phenomena.
The paranormal is a term that most people use to refer to a whole range of unusual aspects of human perception and cognition. Parapsychologists, scientists who study anomalous phenomena like ESP, generally use the term psi, and have identified two specific forms. Psi-gamma refers to phenomena that involve anomalous information transfer, like ESP, clairvoyance, and remote viewing. Psi-kappa, on the other hand, refers to phenomena that involve anomalous transfer of matter, such as psychokinesis or telekinesis (the ability to move things with one’s mind), or even anomalous transfer of energy, such as pyrokinesis (the ability to set things aflame with one’s mind). To date, the most rigorous set of studies was conducted by the Princeton Engineering Anomalies Research (PEAR) laboratory at Princeton University. Despite three decades of reportedly positive results, this research has not been accepted as a valid avenue of empirical investigation by the mainstream scientific community.
Virtually all animals have ways to communicate with other members of their own species, whether it be through sounds, gestures, odors, or other means. Some animals, like chimpanzees and dolphins, have rich and complicated communication systems. But even the most sophisticated communication system of other species does not come close to the complexity and subtlety of human language. It is not an exaggeration to claim that the human language system takes communication to a very different level than that found in any other creature on earth. Although the word language is often used broadly (e.g., “the language of the bees”), here we restrict its use to a particular part of human communication—spoken language—and consider it apart from other important aspects of human communication (e.g., body language or emotional messages conveyed by facial expressions).
Language involves both the ability to comprehend spoken and written words and to produce meaningful communication when we speak or write. Most languages first appear in their spoken form. Although speaking may seem simple, it is a remarkably complex skill that involves a variety of cognitive, social, and biological processes, including operation of the vocal cords and the coordination of breath with movements of the throat, mouth, and tongue. A number of languages that are primarily or entirely expressed in sign also exist. In sign languages, communication is expressed by movements of the hands along with facial and bodily gestures. The most common sign language is American Sign Language (ASL), currently spoken by more than 500,000 people in the United States alone. Except for artificial languages developed for technology and an occasional special-use language, languages do not develop in written form. Although writing is generally derivative of spoken language, it involves a complex set of processes, some of them unique to writing.
Language is often used for the transmission of factual information (“Turn right at the next light, and then go straight,” “Place tab A into slot B”), but that is only its most mundane function. Language also allows us to access existing knowledge, to draw conclusions, to set and accomplish goals, and to understand and communicate complex social relationships. Language is fundamental to our ability to think, and without it we would be nowhere near as intelligent as we are.
Spoken languages can be conceptualized in terms of sounds, meaning, and the environmental factors that help us understand them. Although we usually notice words and sentences when we think about language, some of the most important psychological research on language involves more basic elements that give form and content to words and sentences. In the next section, we discuss phonemes, which are elementary units of sound that make up words; morphemes, which are “word parts”—small but meaningful sounds that alter and refine a word’s meaning; and finally, syntax, which is the set of grammatical rules that control how words are put together into phrases and sentences. Languages are governed by rules, but contextual information, the when, where, and why of communication, is also necessary for understanding the meaning of what a person says. The importance of context is also discussed in this section.
A phoneme is the smallest unit of sound that makes a meaningful difference in a language. Phonemes correspond to the sounds associated with the letters of an alphabet, though there is not always a one-to-one correspondence between sounds and letters. The word bit has three phonemes, /b/, /i/, and /t/ (in transcription, phonemes are placed between slashes), and the word pit also has three: /p/, /i/, and /t/. These two words differ by a single phoneme: /b/ versus /p/. However, the six-letter word phrase has only four phonemes: /f/, /r/, /long-a/, and /z/. In spoken languages, phonemes are produced by movements of our lips, teeth, tongue, vocal cords, and throat (the vocal tract), whereas in sign languages phonemes are defined by the shapes and movement of the hands.
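The idea of a minimal pair, two words distinguished by a single phoneme, can be sketched in a few lines of code. The transcriptions below are the ones given in the text; the lexicon and function names are made up for illustration, not taken from any real phonemic database.

```python
# Hypothetical phoneme transcriptions for illustration (not a real lexicon).
LEXICON = {
    "bit": ["b", "i", "t"],
    "pit": ["p", "i", "t"],
    "phrase": ["f", "r", "long-a", "z"],
}

def differing_phonemes(word1, word2):
    """Return the positions where two equal-length transcriptions differ."""
    p1, p2 = LEXICON[word1], LEXICON[word2]
    if len(p1) != len(p2):
        return None  # not comparable phoneme by phoneme
    return [(i, a, b) for i, (a, b) in enumerate(zip(p1, p2)) if a != b]

# "bit" and "pit" form a minimal pair: they differ by exactly one phoneme.
print(differing_phonemes("bit", "pit"))  # [(0, 'b', 'p')]
# "phrase" has six letters but only four phonemes.
print(len(LEXICON["phrase"]))            # 4
```

Comparing transcriptions rather than spellings makes the letter/sound mismatch concrete: the six-letter word *phrase* collapses to four phonemes.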
Hundreds of unique phonemes can be produced by human speakers, but most languages use only a small subset of the possibilities. English uses about 45 phonemes, whereas some languages use as few as 15 and others more than 60. For instance, the Hawaiian language contains only about a dozen phonemes, including five vowels (a, e, i, o, and u) and seven consonants (h, k, l, m, n, p, and w).
The fact that different languages use different sets of phonemes is the reason people usually have accents in languages that are not their native language. It is difficult to learn to make a new speech sound and use it regularly in words if you did not learn it early in life. And accents are not the whole story. Because the phoneme is actually a category of sounds—that is, many variations on a sound—and the members of this category are treated alike by the brain, some languages group several sounds together as a single phoneme, and others separate those same sounds as different phonemes. Speakers of different languages can hear the difference only between the sounds their language marks as different phonemes, and they cannot tell the difference between two sounds that are grouped together as the same phoneme. This is known as the categorical perception of speech sounds. For example, English speakers can differentiate the /r/ phoneme from the /l/ phoneme, and thus rake and lake are heard as different words. In Japanese, however, /r/ and /l/ are the same phoneme, and thus native speakers of Japanese cannot tell the difference between rake and lake. The /r/ versus /l/ difference is obvious to native English speakers, but English speakers run into the same problem when listening to speakers of other languages. Try saying cool and keep out loud. Can you hear the difference between the two /k/ sounds? To English speakers, they both sound the same, but to speakers of Arabic they are two different phonemes.
Let’s practice identifying the various components of language. The first part of this activity focuses on phonemes, the smallest unit of sound. When you click the play button, you will hear a speech sound. Your job is to drag and drop the grapheme, or letter, of the phoneme you hear into the corresponding box.
Categorical perception is a way of perceiving different sensory inputs and mapping them to the same category. It explains why speakers of a particular language all group a variety of sounds into a single phoneme, so each phoneme is actually a set of variations on a single theme. This means that we hear different sounds as if they were the same, and often we cannot tell the difference even if we try. To demonstrate this fact, psychologists used computers to create a series of sounds, each made up of two phonemes, that gradually—in precise steps—changed from a /ba/ sound to a /pa/ sound. (Other two-phoneme sounds were tested as well, but we use /ba/ and /pa/ for our explanation.)
The experimenters wanted to know what the people (in this case, adults) would perceive when they heard the sounds. If you didn’t know about phonemes, you might expect that they would hear a clear /ba/ sound that gradually became more like a /pa/ sound until it became a clear /pa/ sound. But that is not what happened.
The following figure shows the many variations of the /ba/ and /pa/ sounds on the X-axis. The X-axis is labeled “Voice onset time (ms).” Voice onset time is a technical unit, not critical for our discussion. Simply understand that the sounds, created by a computer to guarantee precise differences, went from having strong characteristics of /ba/ on the far left to strong characteristics of /pa/ on the far right.
The two lines represent the percentage of participants who said they heard /pa/ and the percentage who said they heard /ba/. Percentages are shown on the Y-axis. Together the two lines add up to 100%, because participants always had to choose one or the other. What the graph shows is that there was only a small region of ambiguity, where the two lines cross. For most of the sounds, going left to right, just about every participant on every trial chose either /ba/ on the left or /pa/ on the right.
If the participants in the study had simply heard the differences, we would predict that as the sounds became more mixed (most of all in the very center of the figure), the participants would be increasingly confused.
But that is not what happened. Instead, people perceived /ba/ unambiguously across many variations. Then, in the center of the variations, where /ba/ and /pa/ sounds were most mixed together, there was a bit of uncertainty, and then they unambiguously heard /pa/ sounds. You can see this in the following figure, where there is only a small range of sounds that led to any uncertainty about whether participants heard /ba/ or /pa/. This sharp change from perceiving the sounds to be /ba/ to perceiving the sounds to be /pa/ is called categorical perception, meaning that what we perceive is far more sharply (or “categorically”) divided than what our ears actually hear.
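The sharp category boundary described above can be sketched with a simple logistic model of the labeling curve. The boundary location and steepness below are made-up illustrative values, not data from the actual experiment; the point is only that responses stay near 0% or 100% except in a narrow region around the boundary.

```python
import math

def percent_hearing_pa(vot_ms, boundary=25.0, sharpness=1.5):
    """Logistic model of the percentage of trials labeled /pa/ at a given
    voice onset time (VOT). boundary and sharpness are illustrative values."""
    return 100.0 / (1.0 + math.exp(-sharpness * (vot_ms - boundary)))

# Because listeners must choose /ba/ or /pa/, the two percentages sum to 100.
for vot in range(0, 55, 5):
    pa = percent_hearing_pa(vot)
    print(f"VOT {vot:2d} ms: /pa/ {pa:5.1f}%  /ba/ {100 - pa:5.1f}%")
```

Running the sketch shows near-unanimous /ba/ responses on the left, near-unanimous /pa/ responses on the right, and uncertainty only in a small window around the modeled boundary, mirroring the crossing lines in the figure.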
Infants are born able to discriminate all phonemes, but they lose this ability as they get older; by 10 months of age, a child’s ability to recognize phonemes becomes similar to that of adult speakers of the native language. Phonemes that were initially differentiated come to be treated as equivalent.
Phonemes are units of sound, but sound is simply used by language to convey meaning. The basic meaningful units of words are called morphemes. A morpheme is a string of one or more phonemes that carries meaning; if a morpheme is added, eliminated, or changed, the meaning of the word changes. In some cases, an entire word is a morpheme. For instance, the word painted has seven letters, six phonemes (/p/, /long-a/, /n/, /t/, /e/, and /d/), and two morphemes (paint + ed, which is a morpheme that means that the first morpheme occurred in the past). However, we can add morphemes—for instance, the prefix re to make repainted—or eliminate morphemes, taking the ed away to leave the single morpheme word, paint. We can even add a morpheme to make up new words, such as unrepainted or depainting, even if we aren’t quite sure what they mean. However, in general, we know what the changed word means when we add a morpheme. For example, the prefix re-, as in rewrite or repay, means “to do again,” and the suffix -est, as in happiest or coolest, means “to the maximum.”
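As a rough illustration of how words decompose into morphemes, here is a toy affix-stripping sketch. The prefix and suffix lists are tiny hand-made assumptions; real morphological analysis needs a dictionary and many more rules (this toy would wrongly split a word like rest into r + est).

```python
# Toy morpheme splitter with a hypothetical, hand-made affix list.
PREFIXES = ["un", "re", "de"]
SUFFIXES = ["ing", "ed", "est"]

def morphemes(word):
    """Greedily peel known prefixes and suffixes off a word,
    returning the list of morphemes in order."""
    pre, suf = [], []
    changed = True
    while changed:
        changed = False
        for p in PREFIXES:
            if word.startswith(p) and len(word) > len(p):
                pre.append(p)
                word = word[len(p):]
                changed = True
        for s in SUFFIXES:
            if word.endswith(s) and len(word) > len(s):
                suf.insert(0, s)
                word = word[:-len(s)]
                changed = True
    return pre + [word] + suf

print(morphemes("unrepainted"))  # ['un', 're', 'paint', 'ed']
print(morphemes("painted"))      # ['paint', 'ed']
```

Even this crude sketch captures the text’s point: adding or removing a morpheme such as re- or -ed systematically changes a word’s meaning while leaving the stem intact.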
In this activity, we look at morphemes, which consist of one or more phonemes.
Syntax is the set of rules of a language by which we construct sentences. Each language has a different syntax. The syntax of the English language requires that each sentence have a noun and a verb, each of which may be modified by adjectives and adverbs. Some syntactical rules make use of the order in which words appear, while others do not. In English, “The man bites the dog” is different from “The dog bites the man.” Because the words are the same in both sentences, the order of the words must convey the difference in meaning. In German, however, only the article endings before the noun matter. “Der Hund beisst den Mann” means “The dog bites the man” but so does “Den Mann beisst der Hund.” The German word der goes with the subject of the sentence, while den goes with the object. The order of the words in this sentence is not as important as it would be in English.
In this activity, you create grammatical sentences. For each sentence, select the word that best completes the sentence.
Words, phrases, and entire sentences do not possess fixed meanings but change their interpretation as a function of the context in which they are spoken. We use contextual information—the situation in which language is being used, the topic, the things that were said previously, body language, and so on—to help us interpret a word or sentence. For example, imagine that you run into a friend who just saw a new high-tech movie and ask, “How was it?” The friend replies with an enthusiastic look on his face, “Unbelievable!” Now imagine that you run into a friend who just went to a lecture on “How to make a million dollars in two days selling bottled air.” You ask, “How was it?” and your friend rolls her eyes and groans, “Unbelievable!” In the first case, “unbelievable” means “very good,” and in the second case, “unbelievable” means “very bad.” We use context so naturally that we seldom notice how much it impacts our interpretation of language.
Examples of contextual information include our own knowledge, our assumptions about other people's knowledge, and nonverbal expressions such as facial expressions, postures, gestures, and tone of voice. Misunderstandings can easily arise if people aren’t attentive to contextual information or if some of it is missing, as it may be in newspaper headlines or in text messages.
Examples in Which Syntax Is Correct but the Interpretation Can Be Ambiguous:
Now let’s take a look at the role of contextual information in language. In English, some words have multiple meanings. To determine which meaning of a word is the most appropriate, we must consider contextual information. For each of the following sentences, select which meaning of the italicized word is most appropriate given the context.
For each of the following definitions, select the language-related term it describes.
Anyone who has tried to master a second language as an adult knows the difficulty of language learning. And yet children learn languages easily and naturally. Children who are not exposed to language early in their lives will likely never learn one. Documented case studies of such children include Victor the “Wild Child,” who was abandoned as a baby in France and not discovered until he was 12, and Genie, a child whose parents kept her locked in a closet from 18 months until 13 years of age. Both of these children made some progress in socialization after they were rescued and even learned many words and simple phrases, but neither of them ever developed language even to the level of a typical 3-year-old. 
The cases of Victor and Genie illustrate the importance of early experience of language with people, usually adults, who are fluent in the language. There is one group of children who are in danger of isolation even though they are born to completely normal and loving families: congenitally deaf children, or children who lose their hearing very early in life. The parents of these children are seldom deaf themselves and often have no warning that their newborn will be deaf. These parents are rarely fluent signers; many are not even novices in sign language, a language that, like any other, takes years of practice to master. Deaf children who are not exposed to sign language during their early years are likely to have difficulty mastering it when they are older. Deaf children have a much better chance of later acquiring other languages, even spoken languages, if they have early exposure to fluent signing.
Deaf children and children like Victor and Genie have shown us that language is a complex ability that is most likely to mature in a normal way if the child is raised in an environment rich in language experiences. There is probably a “sensitive period,” usually associated with the period of childhood, when exposure to language must occur for the brain systems associated with language to develop properly. If exposure does not occur until later, as happened to Genie and Victor and as once regularly happened to deaf children in isolated communities, then the ability to acquire true language abilities may be extremely difficult or even impossible. The brain may lose its ability to form the necessary neural networks to permit language to develop.
For most people, language processing is dominated by the left hemisphere, but some people show a reversal of this pattern, with the right hemisphere dominant for language. For instance, a study that looked at the relationship between handedness and the dominant hemisphere for language found that only 4% of people who are strongly right-handed had right-hemisphere dominance for language, compared with 15% of ambidextrous individuals and 27% of strong left-handers.
These differences in hemispheric dominance can easily be seen in the results of neuroimaging studies showing that listening to and producing language creates greater activity in the left hemisphere than in the right. As shown in the following figure, the Broca area, located in the frontal lobe of the left hemisphere near the motor cortex, is responsible for language production. This area was first localized in the 1860s by the French physician Paul Broca, who studied patients with lesions to various parts of the brain. The Wernicke area, an area of the brain next to the auditory cortex, is responsible for language comprehension.
Evidence for the importance of the Broca and Wernicke areas in language is seen in patients who experience aphasia, a condition in which language functions are severely impaired. People with Broca aphasia have difficulty producing speech. They speak haltingly, using the minimum number of words to convey an idea. Frequently, they struggle to say a word even though they know the word they are looking for. Other times, they appear to search for a word but fail to find it. People with damage to the Wernicke area can produce speech, but what they say is often confusing, and they have trouble understanding what other people are saying to them. Wernicke aphasia is sometimes called fluent aphasia because the person appears to speak in a relatively normal, fluent way, but the content of the sentences may be imprecise or even nonsensical. People with both types of aphasia have difficulty understanding what other people say to them, but this problem tends to be deeper and more serious in cases of Wernicke aphasia. People with Broca aphasia may have comprehension problems because they have difficulty understanding a particular word here and there, but people with Wernicke aphasia have trouble making sense of the meaning of entire sentences and also may have trouble keeping track of the point of the conversation.
The following video shows one man who suffers from Wernicke aphasia and another man who suffers from Broca aphasia. Watch the video and take note of the speech behaviors that are characteristic to each man’s type of aphasia.
Now that you’ve seen examples of people with damage to the Wernicke and Broca areas, let’s see if you can match the area of the brain to its appropriate role in language.
Let’s see how well you remember the parts of the brain that are important for language.
Language learning begins even before birth, because the fetus can hear muffled versions of speaking from outside the womb. Moon, Cooper, and Fifer  found that infants only two days old sucked harder on a pacifier when they heard their mother's native language being spoken than when they heard a foreign language, even when both the native and foreign languages were spoken by strangers. Babies are also aware of the patterns of their native language, showing surprise when they hear speech that has a different pattern of phonemes than those they are used to. 
During the first year or so after birth, and long before they speak their first words, infants are already learning language. One aspect of this learning is practice in producing speech. By the time they are 6 to 8 weeks old, babies start making vowel sounds, called cooing (“ooohh,” “aaahh,” “goo”) as well as a variety of cries and squeals to help them practice.
Between 5 and 7 months of age, most infants begin babbling, engaging in intentional vocalizations that lack specific meaning. Babbling sounds usually involve a combination of consonant and vowel sounds. In the early months these sounds are often simple consonant-vowel pairs that are repeated, such as guh-guh-guh or ba-ba. This is called repetitive babbling. Over the next few months, the sound combinations become more complex, with different consonants and vowels mixed together, such as ma-ba-guh or aah-ga-mee. This is called variegated babbling. Children seem to be naturally motivated to make speech sounds, because they will often vocalize when they are alone and not in distress. This natural motivation has an important function because it encourages the baby to practice making and distinguishing speech sounds, a skill that will be very important as language emerges. Babbling can also serve an important social function for the infant. Parents and infants frequently engage in "conversational exchanges" of sounds, where the adult will say something to the child, such as "You're such a sweet baby, yes you are, such a sweet baby." The infant will watch and listen and then, when it is his or her turn, make a babbling response. This can be an enjoyable interaction between adult and infant, serving a bonding function, and it also allows the infant to practice the skills of conversation prior to the appearance of words and sentences.
On the average, infants produce their first word at around 1 year of age. There is a great deal of variability in the timing of first words, so some children may start months earlier and others may not utter a distinguishable first word for another 6 months or more. The timing of first words typically has no relationship to later language abilities, though it is true that some disorders can lead to a delay in speech production.
At the same time infants are practicing their speaking skills by babbling, they are also learning to better understand sounds and eventually the words of language. One of the first words children understand is their own name, usually by about 6 months, followed by commonly used words like bottle, mama, and doggie by 10 to 12 months. 
At about 1 year of age, children begin to understand that words are more than sounds—they refer to particular objects and ideas. By the time children are 2 years old, they have a vocabulary of several hundred words, and by kindergarten, their vocabularies have increased to several thousand words. During the first decade of life, pronunciation of phonemes becomes increasingly precise (and understandable), and the use of morphemes and syntax becomes increasingly sophisticated.
The early utterances of children contain many errors, for instance, confusing /b/ and /d/, or /c/ and /z/. And the words that children create are often simplified, in part because they are not yet able to make the more complex sounds of the real language.  Children may say “keekee” for kitty, “nana” for banana, and “vesketti” for spaghetti, in part because it is easier. Often these early words are accompanied by gestures that may also be easier to produce than the words themselves.
Most of a child’s first words are nouns, and early sentences may include only the noun. “Ma” may mean “more milk please,” and “da” may mean “look, there’s Fido.” Eventually, typically by 18 months of age, the length of the utterances increases to two words (“ma ma” or “da bark”), and these primitive sentences begin to follow the appropriate syntax of the native language. By age 2, more complex sentences start to appear, and there is rapid increase in vocabulary and variations in language structure. Here, as was indicated earlier, there is a great deal of variability among perfectly normal children in the timing of language development, so a child who is ahead of the milestones discussed here is not necessarily going to remain advanced, and a child who misses them, even by months, is not likely to remain behind his or her peers linguistically or intellectually.
Because language involves the active categorization of sounds and words into higher-level units, children make some mistakes in interpreting what words mean and how to use them. In particular, they often make overextensions of concepts, which means they use a given word in a broader context than appropriate. A child might at first call all adult men “daddy” or all animals “doggie.”
Children also use contextual information, particularly the cues that parents provide, to help them learn language. Infants are frequently more attuned to the tone of voice of the person speaking than to the content of the words themselves and are aware of the target of speech. Werker, Pegg, and McLeod  found that infants listened longer to a woman who was speaking to a baby than to a woman who was speaking to another adult. Children learn that people are usually referring to things that they are looking at when they are speaking  and that the speaker’s emotional expressions are related to the content of their speech. Children also use their knowledge of syntax to help them figure out what words mean. If a child hears an adult point to a strange object and say, “This is a dirb,” they will infer that a dirb is a thing, but if they hear them say, “This is one of those dirb things,” they will infer that dirb refers to the color or other characteristic of the object. And if they hear the word “dirbing,” they will infer that dirbing is something we do. 
Psychological theories of language learning differ in terms of the importance they place on nature versus nurture. Yet it is clear that both matter. Children are not born knowing language; they learn to speak by hearing what happens around them. On the other hand, human brains, unlike those of any other animal, are prewired in a way that leads them, almost effortlessly, to learn language.
Perhaps the most straightforward explanation of language development is that it occurs through principles of learning, including association, reinforcement, and the observation of others.  There must be at least some truth to the idea that language is learned, because children learn the language that they hear spoken around them rather than some other language. Also supporting this idea is the gradual improvement of language skills with time. It seems that children modify their language through imitation, reinforcement, and shaping, as would be predicted by learning theories.
But language cannot be entirely learned. For one, children learn words too fast for them to be learned through reinforcement. Between the ages of 18 months and 5 years, children learn up to 10 new words every day.  More important, language is more generative than it is imitative. Generativity refers to the ability of speakers to compose sentences to represent new ideas they have never before been exposed to. Language is not a predefined set of ideas and sentences that we choose when we need them, but rather a system of rules and procedures that allows us to create an infinite number of statements, thoughts, and ideas, including those that have never previously occurred. When a child says that she “swimmed” in the pool, for instance, she is showing generativity. An adult speaker of English would not say “swimmed,” yet the word is easily generated from the normal system of producing language.
Other evidence that refutes the idea that all language is learned through experience comes from the observation that children may learn languages better than they ever hear them. Deaf children whose parents do not speak American Sign Language very well nevertheless can learn it perfectly on their own and may even make up their own language if they need to.  A group of deaf children in a school in Nicaragua, whose teachers could not sign, invented a way to communicate through made-up signs and through signs different individuals had used to communicate with their own families.  Within a few years, this made-up signing system became increasingly rule governed and consistent. The development of this new Nicaraguan Sign Language has continued and changed as new generations of students have come to the school and started using the language. Although the original system was not a real language, linguists now find that the signing system invented by these children has all the typical features and complexity of a real language.
In the middle of the 20th century, American linguist Noam Chomsky explained how some aspects of language could be innate. Prior to this time, people tended to believe that children learn language solely by imitating the adults around them. Chomsky agreed that individual words must be learned by experience, but he argued that genes could code into the brain categories and organization that form the basis of grammatical structure. We come into the world ready to distinguish different grammatical classes, like nouns and verbs and adjectives, and sensitive to the order in which words are spoken. Then, using this innate sensitivity, we quickly learn from listening to our parents how to organize our own language. For instance, if we grow up hearing Spanish, we learn that adjectives come after nouns (el gato amarillo, where gato means “cat” and amarillo is “yellow”), but if we grow up hearing English, we learn that adjectives come first (“the yellow cat”). Chomsky termed this innate sensitivity that allows infants and young children to organize the abstract categories of language the language acquisition device (LAD).
According to Chomsky’s approach, each of the many languages spoken around the world (there are between 6,000 and 8,000) is an individual example of the same underlying set of procedures that are hardwired into human brains. Each language, while unique, is just a set of variations on a small set of possible rule systems that the brain permits language to use. Chomsky’s account proposes that children are born with a knowledge of general rules of grammar (including phoneme, morpheme, and syntactical rules) that determine how sentences are constructed.
Although there is general agreement among psychologists that babies are genetically programmed to learn language, there is still debate about Chomsky’s idea that a universal grammar can account for all language learning. Evans and Levinson  surveyed the world’s languages and found that none of the presumed underlying features of the language acquisition device were entirely universal. In their search they found languages that did not have noun or verb phrases, that did not have tenses (e.g., past, present, future), and some that did not have nouns or verbs at all, even though a basic assumption of a universal grammar is that all languages should share these features. Other psychologists believe that early experience can fully explain language acquisition, and Chomsky’s language acquisition device is unnecessary. Nevertheless, Chomsky’s work clearly laid out the many problems that had to be solved in order to adequately explain how children acquire language and why languages have the structures that they do.
The two theories of language acquisition discussed in the text map to the nature versus nurture distinction. Proponents of the nurture view, such as learning theorists, maintain that language is, for the most part, acquired through principles of learning. Supporters of the nature view, such as Noam Chomsky, believe that the general foundation for grammatical parts of language is innate, though many important aspects of language are learned. For each of the following statements, select either Skinner’s learning theory or Chomsky’s LAD theory (LAD: language acquisition device).
Let’s ensure that you can identify the two theories of language acquisition discussed in the text: Skinner’s learning theory and Chomsky’s LAD theory. For each statement, select whether it is true or false.
Although it is less common in the United States than in other countries, bilingualism (the ability to speak two languages) is becoming increasingly frequent in the modern world. Nearly one-half of the world’s population, including 18% of U.S. citizens, grows up bilingual.
In recent years, many U.S. states have passed laws outlawing bilingual education in schools. These laws are in part based on the idea that students will have a stronger identity with the school, the culture, and the government if they speak only English and in part based on the idea that speaking two languages may interfere with cognitive development.
Some early psychological research showed that, when compared with monolingual children, bilingual children performed more slowly when processing language, and their verbal scores were lower. But these tests were frequently given in English, even when this was not the child’s first language, and the children tested were often of lower socioeconomic status than the monolingual children. 
More current research controlled for these factors and found that although bilingual children may in some cases learn language somewhat more slowly than do monolingual children, bilingual and monolingual children do not significantly differ in the final depth of language learning, nor do they generally confuse the two languages. In fact, participants who speak two languages have been found to have better cognitive functioning, cognitive flexibility, and analytic skills in comparison to monolinguals. Thus, rather than slowing language development, learning a second language seems to increase cognitive abilities.
Does bilingualism cause mental confusion? Is being bilingual a cognitive advantage? People have debated these questions for a long time, but the answers aren’t simple. For this exercise, please read a brief article that discusses the work of some of the leading researchers in bilingualism. Then answer a few questions based on your reading.
Nonhuman animals have a wide variety of systems of communication. Some species communicate using scents; others use visual displays, such as baring the teeth, puffing up the fur, or flapping the wings; and still others use vocal sounds. Male songbirds, such as canaries and finches, sing songs to attract mates and to protect territory, and chimpanzees use a combination of facial expressions, sounds, and actions, such as slapping the ground, to convey aggression.  Honeybees use a “waggle dance” to direct other bees to the location of food sources.  The language of vervet monkeys is relatively advanced in the sense that they use specific sounds to communicate specific meanings. Vervets make different calls to signify that they have seen either a leopard, a snake, or a hawk. 
As mentioned earlier, despite the variety and sophistication of animal communication systems, none comes close to human language in its ability to express a variety of ideas and subtle differences in meaning. For years, scientists have wondered if it is the communication systems that are limited or if other animals are simply unable to acquire a system as advanced as human language. Quite a few efforts have been made to learn more by attempting to teach human language to other animals, especially to chimpanzees and their cousins, bonobos.
Despite their varied abilities to communicate, animals have shown only limited success in learning language. One of the early efforts was made by Catherine and Keith Hayes, who raised a chimpanzee named Viki in their home along with their own children. But Viki learned little and could never speak. Researchers speculated that Viki’s difficulties arose in part because she could not produce the words with her vocal apparatus, and so subsequent attempts were made to teach primates to communicate using sign language or boards on which they could point to symbols.
Allen and Beatrix Gardner worked for many years to teach a chimpanzee named Washoe to sign using ASL. Washoe, who lived to be 42 years old, could label up to 250 different objects and make simple requests and comments, such as “please tickle” and “me sorry.”  Washoe’s adopted daughter Loulis, who was never exposed to human signers, learned more than 70 signs simply by watching her mother sign.
The most proficient nonhuman language speaker is Kanzi, a bonobo who lives at the Language Learning Center at Georgia State University.  As you can see in the following video clip, Kanzi has a propensity for language that is in many ways similar to humans’. He learned faster when he was younger than when he got older, he learns by observation, and he can use symbols to comment on social interactions rather than simply for food treats. Kanzi can also create elementary syntax and understand relatively complex commands. Kanzi can make tools and can even play Pac-Man.
And yet even Kanzi does not have a true language in the same way that humans do. Human babies learn words faster and faster as they get older, but Kanzi does not. Each new word he learns is almost as difficult as the one before. Kanzi usually requires many trials to learn a new sign, whereas human babies can speak words after only one exposure. Kanzi’s language is focused primarily on food and pleasure and only rarely on social relationships. Although he can combine words, he generates few new phrases and cannot master syntactic rules beyond the level of about a 2-year-old human child. 
In sum, although many animals communicate, none of them has a true language. With some exceptions, the information that nonhuman species can communicate is limited primarily to displays of liking or disliking and to signals related to basic motivations such as aggression and mating. Humans also use this more primitive type of communication, in the form of nonverbal behaviors such as eye contact, touch, hand signs, and interpersonal distance, to communicate their like or dislike for others, but they (unlike other animals) supplement this more primitive communication with language. Although other animal brains share similarities with ours, only the human brain is complex enough to create language. What is perhaps most remarkable is that although language never appears in nonhumans, it is universal in humans. All humans, unless they have a profound brain abnormality or are completely isolated from other humans, learn language.
Psychologists have long debated how to best conceptualize and measure intelligence. These questions include how many types of intelligence there are, the role of nature versus nurture in intelligence, how intelligence is represented in the brain, and the meaning of group differences in intelligence.
Psychologists have studied human intelligence since the 1880s. As you will read, there are several theories of intelligence and a variety of tests to measure intelligence. In fact, some define intelligence as whatever an intelligence test measures. And most intelligence tests measure how much knowledge one has, or in other words, “school smarts”. Today, most psychologists define intelligence as a mental ability consisting of the ability to learn from experience, solve problems, and use knowledge to adapt to new situations.
In the early 1900s, the French psychologist Alfred Binet (1857–1914) and his colleague Henri Simon (1872–1961) began working in Paris to develop a measure that would differentiate students who were expected to be better learners from students who were expected to be slower learners. The goal was to help teachers better educate these two groups of students. Binet and Simon developed what most psychologists today regard as the first intelligence test, which consisted of a wide variety of questions that included the ability to name objects, define words, draw pictures, complete sentences, compare items, and construct sentences.
Binet and Simon believed that the questions they asked their students, even though they were on the surface dissimilar, all assessed the basic abilities to understand, reason, and make judgments. And it turned out that the correlations among these different types of measures were in fact all positive; students who got one item correct were more likely to also get other items correct, even though the questions themselves were very different.
On the basis of these results, the psychologist Charles Spearman (1863–1945) hypothesized that there must be a single underlying construct that all of these items measure. He called the construct that the different abilities and skills measured on intelligence tests have in common the general intelligence factor (g). Virtually all psychologists now believe that there is a generalized intelligence factor, g, that relates to abstract thinking and that includes the abilities to acquire knowledge, to reason abstractly, to adapt to novel situations, and to benefit from instruction and experience. People with higher general intelligence learn faster.
The general intelligence factor, g, is assessed by having a person complete a variety of tasks. Many of these tasks are intended to measure key skill sets that are often needed for success in traditional school settings. They include the following:
Using what you have learned about g, determine what skill is being assessed in each of the following tasks.
Soon after Binet and Simon introduced their test, the American psychologist Lewis Terman (1877–1956) developed an American version of Binet’s test that became known as the Stanford-Binet Intelligence Test. The Stanford-Binet is a measure of general intelligence made up of a wide variety of tasks including vocabulary, memory for pictures, naming of familiar objects, repeating sentences, and following commands.
Although there is general agreement among psychologists that g exists, there is also evidence for specific intelligence (s), a measure of specific skills in narrow domains. One empirical result in support of the idea of s comes from intelligence tests themselves. Although the different types of questions do correlate with each other, some items correlate more highly with each other than do other items; they form clusters or clumps of intelligences.
One distinction is between fluid intelligence, which refers to the capacity to learn new ways of solving problems and performing activities, and crystallized intelligence, which refers to the accumulated knowledge of the world we have acquired throughout our lives. These intelligences must be different because crystallized intelligence increases with age—older adults are as good as or better than young people in solving crossword puzzles—whereas fluid intelligence tends to decrease with age.
Other researchers have proposed even more types of intelligences. L. L. Thurstone  proposed that there were seven clusters of primary mental abilities: word fluency, verbal comprehension, spatial ability, perceptual speed, numerical ability, inductive reasoning, and memory. But even these dimensions tend to be at least somewhat correlated, showing again the importance of g.
The goal of most intelligence tests is to measure g, the general intelligence factor. Good intelligence tests are reliable, meaning that they are consistent over time, and also demonstrate construct validity, meaning that they actually measure intelligence rather than something else. Because intelligence is such an important individual difference dimension, psychologists have invested substantial effort in creating and improving measures of intelligence, and these tests are now the most accurate of all psychological tests. In fact, the ability to accurately assess intelligence is one of the most important contributions of psychology to everyday public life.
Intelligence changes with age. A 3-year-old who could accurately multiply 183 by 39 would certainly be intelligent, but a 25-year-old who could not do so might be seen as unintelligent. Thus, understanding intelligence requires that we know the norms or standards in a given population of people at a given age. The standardization of a test involves giving it to a large number of people at different ages and computing the average score on the test at each age level.
It is important that intelligence tests be standardized periodically to confirm that the average scores on the test at each age level remain the same; in other words, that a score of 100 continues to represent average performance at each age level over time. James Flynn, a New Zealand researcher, discovered that average IQ scores had actually risen by about 25 points between 1918 and 1995.  This is called the Flynn effect, referring to the observation that scores on intelligence tests worldwide have increased substantially over the past decades. Although the increase varies somewhat from country to country, the average increase is about 3 IQ points every 10 years. It is uncertain what causes this increase in intelligence on IQ tests, but some of the explanations for the Flynn effect include better nutrition, increased access to information, and more familiarity with multiple-choice tests. Whether people are actually getting smarter is debatable.
Each year from 1945 through 1985, all children in the fifth grade in the United States were given the California Scholastic Achievement Test, which was developed in 1944 and had not undergone any revisions or standardization. Later, a New Zealand researcher analyzed the patterns of these scores. Study the three line graphs on the chart below and complete the following questions.
Once the standardization has been accomplished, we have a picture of the average abilities of people at different ages and can calculate a person’s mental age, which is the age at which a person is performing intellectually. If we compare the mental age of a person to the person’s chronological age, the result is the intelligence quotient (IQ), a measure of intelligence that is adjusted for age. A simple way to calculate IQ is by using the following formula:
IQ = mental age ÷ chronological age × 100
A 10-year-old child who does as well as the average 10-year-old child has an IQ of 100 (10 ÷ 10 × 100), whereas an 8-year-old child who does as well as the average 10-year-old child would have an IQ of 125 (10 ÷ 8 × 100). Most modern intelligence tests are based on the relative position of a person’s score among people of the same age, rather than on the basis of this formula, but the idea of an intelligence “ratio” or “quotient” provides a good description of the score’s meaning.
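As a quick sketch, the ratio formula and the two worked examples above can be checked in a few lines of Python (the function name is our own, chosen for illustration):

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of an average 10-year-old:
print(ratio_iq(10, 10))  # 100.0
# An 8-year-old performing at the level of an average 10-year-old:
print(ratio_iq(10, 8))   # 125.0
```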
Using the intelligence quotient formula, compute the required information for each of the persons described in the below scenarios.
A number of scales are based on the IQ. The Wechsler Adult Intelligence Scale (WAIS) is the most widely used intelligence test for adults. The current version of the WAIS, called the WAIS-IV, was standardized on 2,200 people ranging from 16 to 90 years of age. It consists of 15 different tasks, each designed to assess intelligence, including working memory, arithmetic ability, spatial ability, and general knowledge about the world (see the figure below). The WAIS-IV yields scores on four domains: verbal, perceptual, working memory, and processing speed. The reliability of the test is high (the reliability coefficient is above 0.95), meaning that when a person is assessed at different times on the test, he or she will obtain approximately the same score each time.
The WAIS-IV also shows substantial validity; it is correlated highly with other IQ tests such as the Stanford-Binet, as well as with criteria of academic and life success, including college grades, measures of work performance, and occupational level. It also shows significant correlations with measures of everyday functioning among the mentally retarded.
The Wechsler scale has also been adapted for preschool children in the form of the Wechsler Primary and Preschool Scale of Intelligence (WPPSI-III) and for older children and adolescents in the form of the Wechsler Intelligence Scale for Children (WISC-IV).
Let's now look at the concepts of reliability and validity. These are concepts that are easily confused with one another and, in fact, they are related. A valid test must be reliable, but the fact that a test is reliable does not mean it is valid. So what is the difference?
Validity refers to the degree to which a test or other measure of some psychological construct actually measures that construct. A valid measure of your self-confidence is a questionnaire or other measure that accurately indicates or predicts your true level of self-confidence. There are a couple of things to notice about this definition.
First, it says “the degree to which…”—that means that validity is not an all-or-none idea. Some tests are more valid than other tests. You will seldom see a test in general use that is absolutely invalid, because such a test will be noticed and discarded by people who want to study the construct being measured (e.g., self-confidence).
Second, validity is a very difficult characteristic to prove, particularly when you are trying to measure something as complex as self-esteem or level of depression. For this reason, any test in widespread use in psychology has many studies that attempt to determine how valid it is in measuring what it is trying to measure and enumerating its limitations.
Reliability refers to the degree to which a test keeps producing the same or similar results over repeated testing. In other words, reliability is another term for consistency. There are a couple of things to notice here, as well.
First, reliability, like validity, is not all-or-none. Some tests are more reliable than other tests.
Second, reliability is easier to establish than validity, because we can easily conduct research that allows us to see if a test gives the same answer on repeated testing.
A nice metaphor for thinking about validity and reliability comes from target shooting. Imagine that you go to a target range and shoot at a target for a bull’s eye. Here is what you hope you will see when you are done:
But let’s imagine that the target below is the one you produce:
What’s wrong? The hits are all clustered together, so you are very consistent. The trouble is that you are missing the place you are aiming for.
For your review, here are four drawings of targets that illustrate the various levels of validity and reliability.
Now let’s apply the concepts of validity and reliability to a psychological test designed to measure self-confidence. A person’s true self-confidence is the “center of the target.” The self-confidence test requires people to answer five questions, such as the ones below. The person reads each statement and then rates himself or herself on a scale of 1 to 5, with 1 representing strong disagreement with the statement and 5 representing strong agreement with the statement.
This is not a real self-confidence test but just an example. As you can see, the test will result in a minimum score of 5 and a maximum score of 25. A person’s score can then be compared to his or her actual or true self-confidence level to determine how closely the test predicts the respondent’s actual self-confidence. The closer the result is to the true self-confidence level, the higher the validity.
To help you with this activity, each person’s true level of self-confidence is provided in the chart below in the column labeled TRUTH. Obviously, in real life, we don’t associate a person’s actual self-confidence level with a score—that’s why we are using this self-questionnaire.
Imagine that you gave the self-confidence test to 10 people, and then you retested them a week later. The chart below lists each person’s test scores for Week 1 and Week 2. The two scores are compared to determine whether they are consistent with each other. The more consistent the test scores, the higher the test reliability. Now you want to determine the validity and reliability of the self-confidence test.
Here is another example with different scores for each person for weeks 1 and 2:
Following is another example that presents more difficulty in determining the level of validity and reliability. Let’s see if you can get this one correct.
Depending on their design, intelligence tests measure achievement (what one has already learned) and aptitude (the ability to learn). A licensed psychologist who wants to assess a person’s mental abilities, for example when evaluating a possible mental disorder, will typically measure IQ, which reflects both achievement and aptitude, using a test such as the Stanford-Binet or one of the Wechsler scales.
More familiar intelligence tests are aptitude tests that are designed to measure one’s ability to do well in college or in postgraduate training. Most U.S. colleges and universities require students to take an aptitude test such as the Scholastic Assessment Test (SAT) or the American College Test (ACT), and postgraduate schools require the Graduate Record Examination (GRE), Medical College Admission Test (MCAT), or the Law School Admission Test (LSAT). These tests are useful as one criterion for selecting students because they predict academic success in the programs for which they are designed, particularly in the first year of the program. These aptitude tests also measure, in part, intelligence. Frey and Detterman  found that the SAT correlated highly (between about r = .7 and r = .8) with standard measures of intelligence, particularly the WAIS.
Aptitude tests are also used by industrial and organizational (I/O) psychologists in the process of personnel selection, the use of structured tests to select people who are likely to perform well at given jobs. To develop a personnel selection test, I/O psychologists begin by conducting a job analysis, in which they determine what knowledge, skills, abilities, and personal characteristics (KSAPs) are required for a given job. This is normally accomplished by surveying and/or interviewing current workers and their supervisors. Based on the results of the job analysis, I/O psychologists choose the selection methods that are most likely to be predictive of job performance. Measures include tests of cognitive and physical ability and job knowledge, as well as measures of intelligence and personality.
For students of psychology, it is important to know about some of the famous researchers and psychologists who made valuable contributions to the field of intelligence.
Before discussing different views of intelligence and controversies related to interpreting intelligence, let’s look at the typical results that researchers get when they measure intelligence using the Wechsler Adult Intelligence Scale (WAIS), which you studied in the previous module.
The WAIS was most recently updated in 2008. A sample of 2,200 adults varying in age (16 to 90 years old), sex, race, ethnicity, and other factors was tested. Although the test has many questions, the scores are standardized so that the average performance is scored as 100. People who did better than average have scores above 100, and people who did worse than average have scores below 100.
When we list all the scores of people taking tests like the WAIS, including other intelligence tests as well as tests of aptitudes and various skills and traits, those scores frequently fall into a pattern called the normal distribution, or the bell curve. It looks like this:
Although this section is a little technical, understanding normal distribution is useful for interpreting results not only of psychological studies but also of studies in many other fields. We focus on an IQ test—the WAIS—but what you will learn can be applied any time you see a bell curve.
In the following activities, you will learn the results of the study when the IQ scores from the 2,200 adults were analyzed.
The recent study tested 2,200 people, but for our purposes, let’s reduce the number to 16 people, and later we will discuss the results for all 2,200 people. Imagine that the 16 people pictured below participated in the IQ study. Each photo is labeled with the individual's IQ score.
Let’s see what happens when we organize them according to their IQ scores. Your task is to drag each person to an appropriate box below. You have only 16 people, so a lot of the boxes will be empty.
Start with the people with average IQ. Drag the pictures of the people with exactly average IQ (remember, average = 100 in IQ scores) into the purple boxes, starting with the bottom box.
Notice what happens as you get further to the right. What you see here with our little sample of 16 people is very much like what would happen if you had all 2,200 scores.
Now let’s eliminate the empty boxes.
What we have here is called a frequency distribution. It shows how frequently each score (in this case, each IQ score) appears in our group. Let’s see if you can read it accurately.
The group of people, or distribution, below looks like a triangle.
However, if all 2,200 people and all the possible IQ scores were represented, the shape would look like this:
The black line that goes above and around our 16 people has a distinctive shape, which gives the graph its name: bell curve.
It is also called the normal distribution. It shows how many people have each score on the IQ scale or the scale for any other test or measure (e.g., height, weight, achievement test score, and many others).
Let’s look at the bell curve without our little people inside.
Now imagine 2,200 people squeezed beneath our bell curve. Notice that the numbers are now gone from the Y axis on the left, so you can only talk in terms of more or fewer people with any particular score.
When working with the bell curve, researchers often want to identify locations or regions. For instance, they might want to talk about the top 10% of IQ scores or the middle 50%. To do this, they break the curve into units, and for IQ those units are 15 points wide. This 15-point step is called a standard deviation.
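Counting these 15-point steps is simple arithmetic; a tiny helper (our own naming, for illustration) converts any IQ score into standard-deviation units:

```python
def sd_steps(iq_score, mean=100, sd=15):
    """How many standard-deviation steps a score lies from the mean (negative = below)."""
    return (iq_score - mean) / sd

print(sd_steps(115))  # 1.0  (one step above the mean)
print(sd_steps(85))   # -1.0 (one step below the mean)
print(sd_steps(130))  # 2.0  (two steps above the mean)
```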
For example, here is our bell curve, but notice that we have colored in a region that goes from the mean (IQ = 100) to 15 points above the mean (100 + 15 = 115). Because this segment covers exactly 15 points, it is one standard deviation wide.
To keep track of these units, we start counting from the mean (100) and count the standard deviation steps from the mean in each direction. Here is another picture of the bell curve with the standard deviation steps marked above the IQ scores.
And here is the area from the mean (100) to one standard deviation, or 15 points below the mean (100 − 15 = 85).
As you can see, the further you get away from the mean, the smaller the area under the curve in the 15-point units.
Here is the area between one and two standard deviations below the mean.
And now we go between two and three standard deviations below the mean.
We can keep going, but now you see there aren’t many people left as we get to the area between three and four standard deviations below the mean.
Let’s look more closely at this. Here is the bell curve with the region covering the first standard deviation above the mean colored in blue. Notice the numbers above this region. It shows that this is 34% of the total area under the curve (just a little more than one-third of the total). To put that in perspective, the number below it shows how many people out of that original group of 2,200 used to standardize the IQ test are in this area: 748 out of 2,200.
If we go the same distance on the other side of the mean, we now show two colored regions: one above the mean (100 to 115) and one below the mean (100 down to 85).
Just above each segment, you can see the percentage of the area under the curve and the number of people in that unit. On top, in red, you can see the sum of these colored-in areas. This figure says that 748 people out of 2,200 had IQ scores between 100 and 115 and another 748 had IQ scores between 100 and 85, so a total of 1,496 had IQ scores between 85 and 115. That is just a little more than two-thirds of the total set of 2,200. Of course, we’re not really interested in those 2,200 people. They simply represent the larger population of adults.
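These areas come straight from the normal distribution. A short Python check using the standard library's `NormalDist` reproduces them (the exact one-standard-deviation area is about 34.1% per side, which the figures round to 34%):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # IQ scale: mean 100, standard deviation 15

# Proportion of scores between one standard deviation below and one above the mean
share = iq.cdf(115) - iq.cdf(85)
print(round(share, 3))        # 0.683, a little more than two-thirds

# Out of the 2,200-person standardization sample; the figures' 748 + 748 = 1,496
# uses the rounded 34% per side, so the exact count differs slightly
print(round(share * 2200))    # 1502
```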
Here is a figure you can manipulate. Simply click on any of the areas under the curve, and it will change colors (blue above the mean and green below the mean). The area under the curve and the number of people in the sample of 2,200, as well as the totals for all the colored areas, will be shown.
In the previous section, we explored the normal distribution in relation to the IQ scores of samples of particular people. However, IQ scores in the general population are also normally distributed. The figure below displays the distribution of IQ scores in the general population.
One end of the distribution of intelligence scores is defined by people with very low IQ. Mental retardation is a generalized disorder ascribed to people who have an IQ below 70, who have experienced deficits since childhood, and who have trouble with basic life skills, such as self-care and communicating with others.  About 1% of the U.S. population, most of them males, fulfill the criteria for mental retardation, but some children who are diagnosed as mentally retarded lose the classification as they get older and learn to function better in society. A particular vulnerability of people with low IQ is that they may be taken advantage of by others, and this is an important aspect of the definition of mental retardation.  Mental retardation is divided into four categories: mild, moderate, severe, and profound. Severe and profound mental retardation are usually caused by genetic mutations or accidents during birth, whereas milder forms have both genetic and environmental influences.
One cause of mental retardation is Down syndrome, a chromosomal disorder leading to mental retardation caused by the presence of all or part of an extra 21st chromosome. The incidence of Down syndrome is estimated at 1 per 800 to 1,000 births, although its prevalence rises sharply in those born to older mothers. People with Down syndrome typically exhibit a distinctive pattern of physical features, including a flat nose, upwardly slanted eyes, a protruding tongue, and a short neck.
Societal attitudes toward individuals with mental retardation have changed over the past decades. We no longer use terms such as “moron,” “idiot,” or “imbecile” to describe these people, although these were the official psychological terms used to describe degrees of retardation in the past. Laws such as the Americans with Disabilities Act (ADA) have made it illegal to discriminate on the basis of mental and physical disability, and there has been a trend to bring the mentally retarded out of institutions and into our workplaces and schools. In 2002 the U.S. Supreme Court ruled that the execution of people with mental retardation is “cruel and unusual punishment,” thereby ending this practice. 
Having an extremely high IQ is clearly less of a problem than having an extremely low IQ, but there may also be challenges to being particularly smart. It is often assumed that schoolchildren who are labeled as “gifted” may have adjustment problems that make it more difficult for them to create social relationships. To study gifted children, Lewis Terman and his colleagues  selected about 1,500 high school students who scored in the top 1% on the Stanford-Binet and similar IQ tests (i.e., who had IQs of about 135 or higher) and tracked them for more than seven decades (the children became known as the “Termites” and are still being studied today). This study found, first, that these students were not unhealthy or poorly adjusted but rather were above average in physical health and were taller and heavier than individuals in the general population. The students also had above average social relationships—for instance, they were less likely to divorce than the average person. 
Terman’s study also found that many of these students went on to achieve high levels of education and entered prestigious professions, including medicine, law, and science. Of the sample, 7% earned doctoral degrees, 4% earned medical degrees, and 6% earned law degrees. These numbers are all considerably higher than what would have been expected from a more general population. Another study of young adolescents who had even higher IQs found that these students ended up attending graduate school at a rate more than 50 times higher than that in the general population. 
As you might expect based on our discussion of intelligence, kids who are gifted have higher scores on general intelligence (g). But there are also different types of giftedness. Some children are particularly good at math or science, some at automobile repair or carpentry, some at music or art, some at sports or leadership, and so on. There is a lively debate among scholars about whether it is appropriate or beneficial to label some children as gifted and talented in school and to provide them with accelerated special classes and other programs that are not available to everyone. Although doing so may help the gifted kids,  it also may isolate them from their peers and make such provisions unavailable to those who are not classified as gifted.
Here is a figure you can manipulate. Simply click on any of the areas under the curve, and it will change colors (blue above the mean and green below the mean). For each region you select, the figure shows the proportion of the area under the curve and the corresponding number of people in the sample of 2,200, along with the totals for all the colored areas.
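The quantities this kind of figure reports follow directly from the normal distribution. Here is a rough sketch of the underlying arithmetic, assuming the standard IQ scale (mean 100, standard deviation 15) and the sample of 2,200 mentioned above:

```python
import math

MEAN, SD = 100, 15   # standard IQ scale
SAMPLE = 2200        # sample size used in the figure

def normal_cdf(x, mean=MEAN, sd=SD):
    """P(score <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

def people_between(lo, hi):
    """Expected number of people in the sample scoring between lo and hi."""
    return SAMPLE * (normal_cdf(hi) - normal_cdf(lo))

# About 68% of scores fall within one standard deviation of the mean:
print(round(people_between(85, 115)))   # → 1502 of 2,200
# The entire "below the mean" (green) half of the curve:
print(round(people_between(0, 100)))    # → 1100 of 2,200
```

The same function reproduces any region you could click: for example, `people_between(130, 200)` gives the expected number of people more than two standard deviations above the mean.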
One advocate of the idea of multiple intelligences is the psychologist Robert Sternberg. Sternberg has proposed a triarchic (three-part) theory of intelligence that proposes that people may display more or less analytical intelligence, creative intelligence, and practical intelligence. Sternberg argued that traditional intelligence tests assess analytical intelligence, the ability to answer problems with a single right answer, but that they do not well assess creativity (the ability to adapt to new situations and create new ideas) or practicality (e.g., the ability to write good memos or to effectively delegate responsibility).
As Sternberg proposed, research has found that creativity is not highly correlated with analytical intelligence, and exceptionally creative scientists, artists, mathematicians, and engineers do not score higher on measures of intelligence than do their less creative peers. Furthermore, the brain areas associated with convergent thinking, thinking that is directed toward finding the correct answer to a given problem, are different from those associated with divergent thinking, the ability to generate many different ideas for or solutions to a single problem. On the other hand, being creative often takes some of the basic abilities measured by g, including the abilities to learn from experience, to remember information, and to think abstractly.
Studies of creative people suggest at least five components that are likely to be important for creativity:
The last aspect of the triarchic model, practical intelligence, refers primarily to intelligence that cannot be gained from books or formal learning. Practical intelligence represents a type of “street smarts,” or common sense, that is learned from life experiences. Although a number of tests have been devised to measure practical intelligence, research has not found much evidence that practical intelligence is distinct from g or that it predicts success on particular tasks. Practical intelligence may include, at least in part, certain abilities that help people perform well at specific jobs, and these abilities may not always be highly correlated with general intelligence. On the other hand, these abilities or skills are very specific to particular occupations and do not seem to represent the broader idea of intelligence.
Another champion of the idea of multiple intelligences is the psychologist Howard Gardner. Gardner argued that it would be evolutionarily functional for different people to have different talents and skills and proposed that there are eight intelligences that can be differentiated from each other (shown in the table below). Gardner noted that some evidence for multiple intelligences comes from the abilities of autistic savants, people who score low on intelligence tests overall but who nevertheless may have exceptional skills in a given domain, such as math, music, or art, or the ability to recite statistics in a given sport.
|Howard Gardner’s Eight Specific Intelligences|
|Intelligence|Description|
|Linguistic|The ability to speak and write well|
|Logical-mathematical|The ability to use logic and mathematical skills to solve problems|
|Spatial|The ability to think and reason about objects in three dimensions|
|Musical|The ability to perform and enjoy music|
|Kinesthetic (body)|The ability to move the body in sports, dance, or other physical activities|
|Interpersonal|The ability to understand and interact effectively with others|
|Intrapersonal|The ability to have insight into the self|
|Naturalistic|The ability to recognize, identify, and understand animals, plants, and other living things|
The idea of multiple intelligences has been influential in the field of education, and teachers have used these ideas to try to teach differently to different students. For instance, to teach math problems to students who have particularly good kinesthetic intelligence, a teacher might encourage the students to move their bodies or hands according to the numbers. On the other hand, some have argued that these “intelligences” sometimes seem more like “abilities” or “talents” rather than real intelligence. And there is no clear conclusion about how many intelligences there are. Are sense of humor, artistic skills, dramatic skills, and so forth also separate intelligences? Furthermore, and again demonstrating the underlying power of a single intelligence, the many different intelligences are in fact correlated and thus represent, in part, g. 
Although most psychologists have considered intelligence a cognitive ability, people also use their emotions to help them solve problems and relate effectively to others. Emotional intelligence is the ability to accurately identify, assess, and understand emotions, as well as to effectively control one’s own emotions.  
The idea of emotional intelligence is seen in Howard Gardner’s interpersonal intelligence (the capacity to understand the emotions, intentions, motivations, and desires of other people) and intrapersonal intelligence (the capacity to understand oneself, including one’s emotions). Public interest in and research on emotional intelligence surged following the publication of Daniel Goleman’s best-selling book, Emotional Intelligence: Why It Can Matter More Than IQ.
Mayer and Salovey developed a four-branch model of emotional intelligence that describes four fundamental capacities or skills. More specifically, this model defines emotional intelligence as the ability to (1) accurately identify emotions, (2) use emotions to facilitate thinking, (3) understand emotions, and (4) manage emotions.
There are a variety of measures of emotional intelligence. One popular measure, the Mayer-Salovey-Caruso Emotional Intelligence Test, includes items about the ability to understand, experience, and manage emotions, such as these:
One problem with emotional intelligence tests is that they often do not show a great deal of reliability or construct validity. Although it has been found that people with higher emotional intelligence are also healthier, findings are mixed about whether emotional intelligence predicts life success—for instance, job performance. Furthermore, other researchers have questioned the construct validity of the measures, arguing that emotional intelligence really measures knowledge about what emotions are, not necessarily how to use those emotions, and that emotional intelligence is actually a personality trait, a part of g, or a skill that can be applied in some specific situations—for instance, academic and work settings.
Although measures of the ability to understand, experience, and manage emotions may not predict effective behaviors, another important aspect of emotional intelligence—emotion regulation—does. Emotion regulation is the ability to control and productively use one’s emotions. Research has found that people who are better able to override their impulses to seek immediate gratification and who are less impulsive also have higher cognitive and social intelligence. They have better SAT scores, are rated by their friends as more socially adept, and cope with frustration and stress better than those with less skill at emotion regulation.   
Because emotional intelligence seems so important, many school systems have designed programs to teach it to their students. However, the effectiveness of these programs has not been rigorously tested, and we do not yet know whether emotional intelligence can be taught or if learning it would improve the quality of people’s lives. 
The Mayer-Salovey-Caruso Emotional Intelligence Test includes questions about various abilities: identifying emotions, facilitating thinking, understanding emotions, and managing emotions.
Indicate which of the four branches of emotional intelligence is being assessed for each of the following items. That is, is the question assessing the ability to (1) identify emotions, (2) facilitate thinking, (3) understand emotions, or (4) manage emotions?
What do you think intelligence is? Is it something that you are born with, largely inherited from your parents, leaving you with little room for improvement? Or is it something that can be changed through hard work and by taking advantage of opportunities to grow intellectually?
Your personal answer to this question turns out to be surprisingly important. It may even affect your intelligence! Stanford University psychologist Carol Dweck has spent her career studying how people’s beliefs about their own abilities—particularly mental abilities like intelligence—influence the kinds of challenges they give themselves. In much of her research, she has studied students, from their early prekindergarten days through college.
Dweck identified two broad “theories of intelligence” that people—from young children to mature adults—hold. Some people have an “entity” theory of intelligence. They believe that their intelligence is determined by factors present at birth, particularly related to their genetic inheritance. According to this theory, intelligence is a relatively unchangeable fact about who you are and about your potential to excel. You may work hard, but intelligence will always act as a limit for some people and as a supercharged fuel for others. Other people hold an “incremental” theory of intelligence. They believe that intelligence can be changed, particularly through efforts to learn and to excel. They believe that genetic factors are only a starting point, and people’s future competencies are not determined by their initial strengths and weaknesses.
These two theories of intelligence would only be vaguely interesting if they didn’t influence people’s behavior. But they do. It turns out that students who hold the entity theory of intelligence tend to avoid academic challenges. When given the opportunity to work on a really challenging task—with opportunity for success but also a real possibility of failure—they are less likely to take the opportunity than are their classmates who hold an incremental theory of intelligence. The students who believe that intelligence is unchangeable—the entity theory holders—are more likely to choose a task that they already know will lead to success.
Attitudes toward failure can also be predicted by knowing which theory a student has about intelligence. Students who have an entity theory of intelligence tend to interpret failure—in academics and in other aspects of life—as a message about their own inherent limitations, so failure or even the anticipation of failure reduces their motivation to work on something. For them, the way to protect their self-esteem is to avoid failure. For the students who have an incremental theory of intelligence, failure is more often seen as a challenge that can actually increase motivation. These students are more likely than their entity theorist classmates to see failure as an opportunity to discover and test their potential, thus inspiring them to try to see what they can do.
Everything you have already read in this unit should have made it clear to you that intelligence is a difficult quality to define and measure. Nevertheless, we do know that IQ—the standard measure of intelligence—can change. For example, children who are unable to attend school for long periods due to war or extended illness show IQ levels 2 standard deviations (30 IQ points) below their peers who are attending school. On the positive side, research has shown that children, particularly children from low socioeconomic groups, can improve in IQ if they are placed in an enriched prekindergarten program and then in an elementary school of sufficient quality to maintain these gains. Sadly, if children are only placed in enriched preschools, but later attend academically poor elementary schools, their IQ gains diminish or even disappear.
Perhaps the most convincing discovery for understanding whether intelligence can change or not comes from the work of James Flynn, who analyzed IQ data from 14 nations spanning a period from the beginning of the 20th century to the late 1980s. Later studies by Flynn and others have added another 16 countries to the list, and these more recent results are consistent with the initial findings. Flynn used cross-sectional testing, meaning that different people were tested at each point in time, rather than longitudinal testing, where the same people would have been tracked across the time period of the study.
What Flynn found was that countries that were fully modern, with a thriving middle class, effective school systems, and good employment opportunities, showed gains in IQ that averaged about 3 points per decade. This may seem like a small amount, but changes of this magnitude are actually astounding. Across a 50-year period, the average IQ of people in these countries increased by approximately 15 IQ points, which is a full standard deviation (see the Bell Curve module if you have forgotten what a standard deviation is). These changes were not due to modifications of the tests; they reflect real improvements in performance after controlling for any changes to the tests themselves. Further confirmation of this progressive improvement in IQ—a change now called “the Flynn effect”—comes from data from countries that modernized in the mid-20th century (e.g., Kenya and some Caribbean nations). The people in these countries did not show substantial changes prior to modernization, but after people’s socioeconomic status changed, the same IQ gains that characterized the countries that modernized earlier—about 3 IQ points per decade—were recorded.
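To see why a full standard deviation is astounding, it helps to convert the gain into a percentile. A minimal sketch, assuming the figures above (3 points per decade, standard deviation 15), computes where an average scorer today would land on the norms from 50 years earlier:

```python
import math

POINTS_PER_DECADE = 3   # average Flynn-effect gain in modernized countries
SD = 15                 # standard deviation of the IQ scale

def normal_cdf(z):
    """P(Z <= z) for a standard normal distribution."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

decades = 5                                # a 50-year span
gain = POINTS_PER_DECADE * decades         # 15 IQ points = 1 standard deviation
percentile = normal_cdf(gain / SD) * 100   # percentile on the 50-year-old norms

print(gain)               # → 15
print(round(percentile))  # → 84
```

In other words, a person of merely average IQ today would have outscored roughly 84% of the population tested 50 years earlier, if both were scored against the same norms.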
The reasons for these general changes in IQ are not completely understood. It is likely that improvements to nutrition are important, along with better schooling, more stimulating jobs, and even changes in childbearing practices. For example, with fewer children per family, a phenomenon common in more economically developed countries, parents can devote more time to each child, leading to more opportunities for the child to learn effectively from adults.
These changes may not go on forever. Some countries that were already modernized in the late 19th century (e.g., Scandinavian countries) were great examples of the Flynn effect when Flynn initially reported his discovery in the 1980s, but have stopped showing IQ improvements in recent years. Other countries, including the United States and Great Britain, continue to show the 3 IQ points per decade improvement, but there is no guarantee that this change will be sustained in future decades. Nevertheless, the Flynn effect along with the effects of enriched pre-kindergarten programs mentioned earlier clearly show that intelligence—at least as measured by IQ tests—is something that can be improved, both at the level of individuals and at the level of an entire nation.
From the earliest days of IQ testing, people have wondered about group differences in intelligence. You will probably not be surprised to learn that studies of differences between groups in intelligence can easily feed stereotypes and prejudices, and raise questions about testing biases and even the integrity of the researchers.
You might think that the question of differences between men and women could easily be resolved by analyzing IQ tests for thousands of people and simply reporting the results. But it turns out that this won’t work. The most commonly used IQ test, the WAIS, is regularly adjusted to eliminate questions that produce differences between men and women. Consequently, the fact that there are no gender differences on the WAIS is not interesting; the test is designed to eliminate the possibility of differences. The goal of this adjustment is to avoid including biased questions, but it means that we need to look elsewhere to answer the question of IQ differences between men and women.
Using advanced statistical techniques, researchers have been able to extract the necessary information from tests not specifically designed to measure IQ. For instance, Arthur Jensen, whose view we will discuss further in the section on race and intelligence, studied results from tests that are strongly related to IQ tests (“loaded heavily on g”) but have not been adjusted to eliminate gender differences. Jensen found minor differences between men and women on tests of specific abilities, but he found no overall differences between men and women in average intelligence. Using a different strategy, James Flynn looked at results from the Raven’s Progressive Matrices test, a well-regarded nonverbal measure of intelligence, and found that males and females did not differ in either the childhood or the adult samples. The generally accepted view today is that the average IQs for men and women are the same.
Note: For the following activities, the red distributions correspond to females and the blue distributions correspond to males.
This is not the end of questions about sex differences in intelligence, however. In 2005, Lawrence Summers, then the president of Harvard University, was discussing reasons that far more men than women go into advanced positions in science and engineering. Citing published research, Summers suggested that there may not be a difference between the average intelligence for men and women, but men may be more variable in their intelligence.
Prof. Summers’ claim is not without some foundation. Mental retardation (i.e., IQ below 70) is about 3.6 times more common for males than females. Note that the diagnosis of mental retardation is based on more than scores on a standard IQ test, but this fact is consistent with the low end of the blue curve in the figure—more males than females appear to occupy that lower region. Summers’ statement about the high end of the distributions is based primarily on data showing more males than females among the highest scorers on tests of mathematics and quantitative reasoning. For instance, in the 1980s, twelve times more boys than girls scored above 700 (out of a possible 800) on the mathematics section of the SAT. These results are consistent with the high extreme depicted in the figure: the blue curve for males is higher (more individuals) on the right than the red curve for females.
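The variability argument is easy to illustrate numerically. The sketch below is a hedged illustration, not empirical data: it assumes equal means of 100 for both sexes and picks illustrative standard deviations of 16 (males) and 14 (females) simply so that the variances differ. Even with identical averages, the more variable distribution is overrepresented in both tails:

```python
import math

def normal_cdf(x, mean, sd):
    """P(score <= x) for a normal distribution with the given mean and sd."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

MEAN = 100
SD_M, SD_F = 16, 14   # illustrative values only, not measured ones

# Proportion of each distribution below 70 (the cutoff mentioned in the text):
low_m = normal_cdf(70, MEAN, SD_M)
low_f = normal_cdf(70, MEAN, SD_F)

# Proportion of each distribution above 130 (two SDs above the standard mean):
high_m = 1 - normal_cdf(130, MEAN, SD_M)
high_f = 1 - normal_cdf(130, MEAN, SD_F)

print(round(low_m / low_f, 1))    # males-to-females ratio at the low end
print(round(high_m / high_f, 1))  # same ratio at the high end, despite equal means
```

Because the two distributions share the same mean, the overrepresentation ratio is identical in both tails; how large it is depends entirely on how different the standard deviations are, which is exactly what the empirical debate is about.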
Note: For this activity, the red distributions correspond to females and the blue distributions correspond to males.
Now we get to the controversial part of the IQ debate for gender. Professor Summers of Harvard University suggested that women’s intelligence—particularly as related to mathematical thinking—might be less likely to extend to the genius level than men’s intelligence.
Why are there these differences between the spread of scores for males and for females in intelligence-related measures? Most researchers think that the male and female distributions would be about the same on the lower end of the distributions if males were not more vulnerable to genetic and prenatal factors that can hurt development of brain structures related to thinking. Some X-chromosome regions have been linked to mental retardation, and only males have a Y chromosome. Furthermore, prenatal steroidal hormones influence brain structures related to intelligence, and male fetuses are exposed to massively higher levels of these steroids than are female fetuses. Estrogen, an important female hormone, may be a protective factor for girls and women, reducing