Stephen Greenspan Ph.D.

Intelligence

Intelligence and stupid behavior: non-cognitive contributors to irrationality.

Posted November 2, 2016


In a September 16, 2016 essay in the New York Times, David Z. Hambrick and Alexander P. Burgoyne made an interesting distinction between intelligence and rationality. Drawing mainly on the work of noted cognitive psychologist Keith Stanovich, they referred to “dysrationalia” (a term coined decades ago by Stanovich) as the failure of people with average or above-average intelligence (as measured by IQ) to apply their intelligence adequately in addressing real-world problems. One example, the “Linda problem,” used by Hambrick and Burgoyne was drawn from work in behavioral economics and involved the following scenario: “Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.” The researchers then asked subjects which was more probable: (A) Linda is a bank teller, or (B) Linda is a bank teller and is active in the feminist movement. The correct answer is A, because feminist bank tellers are included in the larger total class of tellers (obviously some non-feminist tellers will also have liberal views), but a very large percentage of respondents, including students at elite colleges, fell for the logical illusion created by the conjunction of feminism and social justice and answered “B.” Hambrick and Burgoyne used this finding to make the point that having a high IQ does not ensure a high “rationality quotient,” as reflected in the fact that even smart people exhibit cognitive biases that undermine their ability to make rational real-world decisions.
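The conjunction rule behind the correct answer is easy to verify with a toy count; the five-person population below is invented purely for illustration:

```python
# Toy illustration of the conjunction rule: the people who are both bank
# tellers and feminists are a subset of the bank tellers, so option B can
# never be more probable than option A. (This population is made up.)
population = [
    {"teller": True,  "feminist": True},
    {"teller": True,  "feminist": False},
    {"teller": False, "feminist": True},
    {"teller": False, "feminist": False},
    {"teller": True,  "feminist": True},
]

tellers = sum(p["teller"] for p in population)
feminist_tellers = sum(p["teller"] and p["feminist"] for p in population)

# P(A) can never be smaller than P(B): a conjunction cannot exceed
# either of its conjuncts.
p_a = tellers / len(population)
p_b = feminist_tellers / len(population)
print(p_a, p_b)
```

However the membership of the population is varied, `p_b` can never exceed `p_a`, which is the point the students missed.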

The fact that smart people sometimes behave stupidly is, of course, not exactly news. In his edited book Why Smart People Can Be So Stupid, another noted cognitive psychologist, Robert Sternberg, used the example of Bill Clinton’s disastrous Oval Office dalliance with college intern Monica Lewinsky to illustrate the phenomenon of irrational conduct (although Sternberg preferred the term “foolishness” to “irrationality”). Sternberg and the other contributors to his book operated, like the authors cited earlier, within a largely cognitive (if beyond-IQ) framework, attributing to smart-stupid people an absence of what Sternberg termed “tacit knowledge.” That term reflects the fact that in most social settings there are certain keys to success that are not explicitly taught, but which, when not followed, are likely to lead to failure. An example often used by Sternberg is an assistant professor at a research university who was denied tenure for failure to publish sufficiently, but then complained that “nobody told me that publishing a lot is so important for obtaining tenure here.” She was not given that information by the institution, partly because research universities want to protect an important secret (namely, that teaching excellence at the very top schools is a low priority) and partly because it is assumed that anyone smart enough to land a job at an elite university should be smart enough to figure out what is required to stay there.

However, applying the tacit-knowledge explanation to Clinton-Lewinsky was problematic for Sternberg and his colleague Richard Wagner, for the simple reason that a worldly and very savvy person like Bill Clinton would surely possess the tacit knowledge that an office affair with an intern was a very politically risky activity. So they came up with a supplementary personality explanation for Clinton’s foolishness, namely profound arrogance and a sense of immunity stemming from past success in getting away with sexual misconduct. By thus bringing a non-cognitive (i.e., personality) factor into their explanation of foolishness/irrationality, Sternberg and Wagner acknowledged that a purely cognitive approach to foolish/irrational behavior is not always sufficient. However, they can be faulted for omitting two other important causal elements: situation (Monica flirtatiously snapping her thong at Bill) and biological state disequilibrium (Clinton’s horniness, coupled with chronic sleep deprivation).

The basic problem with equating irrationality with failure to solve logic illusions is that in most settings (including economics, where the term seems most widely used) rationality has less to do with inefficient thought and more to do with behavior in line with, or contrary to, self-interest. It is understandable that economists, who for the most part study decisions of relatively trivial importance (such as buying house A as opposed to house B), would overvalue the contribution of logical efficiency to rationality, as a smart home (or any other) purchase clearly benefits substantially from financial acumen. However, even there, emotion should be factored in, as falling in love with a house matters more than cost or investment potential to many home buyers (such as me). In fact, the main contribution that behavioral economics (essentially the merging of economics and psychology) has made to economic theory is the correction of the classical economic assumption that individuals always make financial decisions based on reasoned self-interest.

When rationality is applied in non-economic contexts, however, the limitations of equating irrationality with deficient thought processes become even more obvious. Here an example from Stanovich’s early writings about dysrationalia is illustrative. He wrote about two Holocaust-denying public school teachers in Illinois who defied the curricular mandate to teach about the Shoah, sending out 6,000 letters (presumably one for each 1,000 mythical murdered Jews) telling parents in their school district why they felt unable to teach about an event they firmly believed had never happened. As a consequence, the teachers were fired from their jobs, a highly predictable outcome but one which the two clueless individuals apparently never anticipated.

The teachers’ behavior was irrational not so much because it showed a lack of formal logic (which may have contributed to their mistaken reading of history) but because it showed a lack of social risk-awareness (the risk of being insubordinate to one’s employers, and the risk of offending taxpayers, in a state with many Holocaust survivors and their relatives). The irrationality of their behavior was, in that case, driven mainly by emotion (deeply held political beliefs), which derailed their (likely limited) ability to reflect on social reality. In his original writings about dysrationalia, Stanovich described the condition as an “intuition pump,” by which he meant that the inability to use one’s intelligence in the real world is affected by the presence of strongly interfering personality or state factors such as emotion and impulsivity. In a very limited sense, logical illusions such as the Linda problem can be considered analogous to emotion-driven impulsivity (in that they trigger heuristic associations that substitute for thinking), but non-economic irrationality (as in the Illinois example) usually reflects much stronger non-cognitive influences.

Rationality is one of those constructs that is widely used in everyday parlance, and in various professional contexts besides economics (e.g., philosophy, psychology, legal theory), but which has never really been adequately defined. In most contexts (including, for the most part, economics), rationality refers to smart action (and irrationality to non-smart action) rather than to efficient or inefficient thinking processes. The latter may, obviously, contribute to smart or dumb behavior, but it is a mistake, in my opinion, to conflate the two and imply (as Hambrick and Burgoyne appear to do) that rationality is nothing more than the non-IQ aspects of cognition. In criminal law (where I have been functioning as a psychological consultant for over a decade), irrationality refers to criminal behavior in which the actor fails to reflect on the likely physical or social consequences of his or her behavior. In fact, in the 2002 U.S. Supreme Court decision in Atkins v. Virginia, which abolished execution of people with Intellectual Disability (ID), Justice Stevens wrote that the impaired rationality of people with ID causes them to partially lack mens rea (criminal intent). The definition of a crime, in British and American jurisprudence, is based on conscious intent coupled with understanding of likely consequences. The essence of legal irrationality, therefore, is found in action which at least partially reflects a lack of risk-awareness (in this case, risk to the legally protected interests of the victim). In my forthcoming book Anatomy of Foolishness, I define foolishness as action which reveals a relative absence of risk-awareness. Within the criminal justice field, therefore, foolishness is another word for irrationality. In my hypothesized theory of foolish behavior, there are four causal factors: situation, cognition, personality, and state.
I applied this analysis to the irrationality of Bernard Madoff’s victims (using myself as an illustration) in a piece published in The Wall Street Journal only three weeks after the Madoff scandal broke.

The basic mistake made by Hambrick and Burgoyne was confusing rationality with reasoning. Poor reasoning involves faulty thinking, while irrationality involves clueless behavior. How many of the Princeton or Stanford students who flunked the Linda problem would be likely to do anything as stupid as sending out a signed letter denying the Holocaust, even if they held such a belief? Zero, or close to it, in my opinion. Dumb behavior by smart people is a topic deserving of attention, but defining dumb behavior as poor performance on tricky tests of formal logic is not likely to add much to the understanding of that phenomenon.

Copyright Stephen Greenspan


Stephen Greenspan, Ph.D. , is a professor emeritus of educational psychology at the University of Connecticut and clinical professor of psychiatry at the University of Colorado.



J Exp Anal Behav, 90(2), 2008 Sep

Individual Differences, Intelligence, and Behavior Analysis

Ben Williams

1 University of California, San Diego

Joel Myerson

2 Washington University

Sandra Hale

Despite its avowed goal of understanding individual behavior, the field of behavior analysis has largely ignored the determinants of consistent differences in level of performance among individuals. The present article discusses major findings in the study of individual differences in intelligence from the conceptual framework of a functional analysis of behavior. In addition to general intelligence, we discuss three other major aspects of behavior in which individuals differ: speed of processing, working memory, and the learning of three-term contingencies. Despite recent progress in our understanding of the relations among these aspects of behavior, numerous issues remain unresolved. Researchers need to determine which learning tasks predict individual differences in intelligence and which do not, and then identify the specific characteristics of these tasks that make such prediction possible.

The surprising and consistent empirical finding in psychometric intelligence research is that people who do well on one mental task tend to do well on most others, despite large variations in the tests' contents… This was Spearman's (1904) discovery, and is arguably the most replicated result in all psychology. (Deary, 2000, p. 6)

Historically, the study of individual differences has been an area of research relatively separate from experimental psychology. While experimental psychology has focused on the processes that determine performance in specific experimental situations, the field of individual differences has studied the stable differences among people, particularly those that generalize across diverse situations. The behavioral differences that have received the most attention in this regard have been personality traits and cognitive abilities. Behavior analysis has largely ignored such differences, other than those that are explainable in terms of reinforcement history. This disregard of individual differences is puzzling, given that behavior analysts emphasize that their research focuses on the behavior of individuals rather than on group averages. After all, it is the differences among individuals that distinguish them from the average.

The disregard of individual differences also is surprising because the agenda of behavior analysis includes the analysis of behavior in educational settings, and individual differences in performance are among the most salient aspects of behavior in such settings. Careful programming of environmental contingencies often can improve educational accomplishment, although the fact remains that individuals vary widely in how effectively they deal with academic topics. Some learn and understand complex material with relative ease, whereas others labor to succeed and nevertheless frequently fail in their efforts. Given the prominence of individual differences at every level of educational endeavor, it is surprising that behavior analysts have made so little effort to understand them.

Individual differences in educational performance are strongly related to differences in intelligence, a major focus of individual-differences research. ‘Intelligence’ has multiple meanings, so many in fact that one of the most prominent researchers in the area has argued that the term should be abandoned (Jensen, 1998). Lurking within this diversity of meaning, however, are important facts that pose serious explanatory challenges to any approach to psychology that aspires to encompass the field's most basic empirical phenomena. The purpose of this article is to describe some of the most essential of these findings, and to consider their implications for a functional analysis of behavior.

Intelligence As Shared Variance

Numerous tests have been developed to help assess intelligence, including tests of vocabulary, short-term memory span, analogical reasoning, story construction from pictures, etc., with such diversity seemingly belying the usefulness of intelligence as an explanatory construct. Should the diversity of the tests putatively measuring intelligence be taken as evidence that, rather than there being one fundamental ability that distinguishes among people, there is a diverse set of cognitive skills in which people may differ? Would it be better, then, to consider these skills as separate behaviors with their own individual controlling variables? The few behaviorists who have addressed the topic of intelligence (e.g., Staats & Burns, 1981) have tended to adopt just such an approach, and nonbehavioral proponents of this view have had considerable popularity (e.g., Gardner, 1983). Such an interpretation certainly has intuitive appeal, but there is one very large elephant in the room that cannot be ignored: Performances on all of these tests, despite the obvious differences between the tests themselves, are significantly correlated with each other.

A specific example of such correlations is provided in Table 1, which shows the correlation matrix for the 14 subtests of one of the most widely used intelligence tests, the Wechsler Adult Intelligence Scale (WAIS-III; Psychological Corporation, 1997). Only 2 of the 91 correlations among the subtests are below .30. Most (74 of the 91) are in the range of .40–.70. For example, consider the correlations of the Block Design subtest with the other 13. On each trial of the Block Design test, an individual is shown a pattern and then must reconstruct that pattern using a set of colored cubes, and the total score is based on both time and accuracy. As may be seen in Table 1, the correlations between Block Design and each of the other subtests are surprisingly similar; all but one are in the .40–.70 range (the one exception being the correlation between Block Design and Digit Span).

Table 1. Correlation Matrix for the 14 Subtests from the WAIS-III. Note: Labels in the top row correspond to the letters printed in bold in the Subtest column.

The issue raised by the pattern of correlations shown in Table 1 is why the correlations among subtests are all so similar. For example, Block Design is as strongly related to Vocabulary as it is to the other subtests that do not involve verbal materials. It is clear that the various tests share a great deal of common variance, despite large differences in their structure and in the stimuli of which they are composed. In the intelligence literature, this common variance often is taken as evidence of general intelligence, or simply g.

The amount of shared variance, and the relative contributions of the various subtests to it, can be measured using principal components analysis or PCA. In PCA, the first principal component or general factor, which is often taken as the operational definition of g, is simply the linear combination of the standardized scores on the subtests that accounts for the greatest amount of variance. If one thinks of each individual's subtest scores as a point in a space with as many dimensions as there are subtests, then PCA weights these scores so as to minimize the sum of the squared perpendicular distances from the data points to the line corresponding to the component axis. Importantly, the weight given a specific subtest in this linear combination, which is frequently referred to as its loading on g, corresponds to the correlation between individuals' scores on that subtest and their component scores, calculated as the weighted sum of their scores on all of the subtests.

For the WAIS-III correlation matrix shown in Table 1, the first principal component accounts for 51% of the total variance. Table 2 shows the loadings of the various WAIS-III subtests on the first principal component. As can be seen, the highest loading is for the Vocabulary subtest (.83), and the lowest is for Digit Span (.60), indicating that relative to the other subtests, the first principal component (or g) explains the most variance in Vocabulary scores and the least variance in Digit Span scores. Many researchers in the area of individual differences agree that the fundamental theoretical question about the nature of intelligence is how g should be conceptualized so as to capture the fact that the majority of variance is shared by such a diverse collection of subtests, while at the same time accounting for the differences among the subtests in terms of their contributions to this shared variance.
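The extraction of g described above can be sketched in a few lines of code. The 4×4 correlation matrix below is hypothetical (stand-in values in the .40–.70 range typical of intelligence subtests, not the actual WAIS-III data in Table 1), but the procedure (take the eigenvector of the correlation matrix with the largest eigenvalue, and scale it by the square root of that eigenvalue to obtain loadings) is the standard one:

```python
import numpy as np

# Hypothetical correlation matrix for four subtests (NOT the WAIS-III data;
# values are simply in the .40-.70 range typical of intelligence subtests).
R = np.array([
    [1.00, 0.60, 0.55, 0.45],
    [0.60, 1.00, 0.50, 0.40],
    [0.55, 0.50, 1.00, 0.50],
    [0.45, 0.40, 0.50, 1.00],
])

# First principal component = eigenvector of R with the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns eigenvalues ascending
g_eigval = eigvals[-1]
g_eigvec = eigvecs[:, -1]

# Proportion of total variance accounted for by the first component
# (the total variance of standardized scores equals the number of subtests).
pct_variance = g_eigval / R.shape[0]

# Loadings: correlations of each subtest with the component, obtained by
# scaling the eigenvector by sqrt(eigenvalue). An eigenvector's sign is
# arbitrary, so take absolute values.
loadings = np.abs(g_eigvec) * np.sqrt(g_eigval)

print(f"first component explains {pct_variance:.0%} of the variance")
print("g loadings:", np.round(loadings, 2))
```

With correlations of this size, the first component accounts for well over half the variance and every subtest loads substantially on it, mirroring the pattern reported for the WAIS-III.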

Table 2. Task Descriptions and g-loadings for the Verbal and Performance Subtests of the WAIS-III.

What does this mean from the standpoint of behavior analysis? Simply put, it means that an individual's behavior (i.e., a person's test performance relative to that of other individuals) is consistent from subtest to subtest. In principle, an above-average overall score on an intelligence test could indicate that an individual is far above average on a few subtests and below average on the rest, yet this is relatively rare. The universally positive correlations among the various subtests mean that people who are above average overall tend to be above average on all of the subtests and people who are below average overall tend to be below average on all of the subtests.

In fact, this consistency in individual behavior is the heart of the matter—it is what is responsible for the universally positive correlations among the subtests and the relative similarity of their loadings on the first principal component or g. These three findings—individual consistency, positive intercorrelations, and roughly similar subtest loadings—are essentially one, and each of the three findings follows mathematically from either of the other two. Together they represent the essence of the puzzle of intelligence, which is, “Why do some people tend to be good at lots of things while others tend to be bad at lots of things? And why does this tendency to be consistently good or bad apply more to some things than others?” The “things” referred to here are exemplified by the various subtests, but these subtests are of interest because they presumably stand for many other behaviors in everyday life, such as various tasks that need to be done, and problems that need to be solved.
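The equivalence of these findings can be illustrated with a toy generative model, assuming (purely hypothetically) that every simulated subtest score is a shared general factor plus independent noise; positive intercorrelations and individual consistency then both fall out of the same simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 5000, 6

# Hypothetical generative model: every score = shared general factor g
# (loading .7) plus independent subtest-specific noise.
g = rng.standard_normal(n_people)
noise = rng.standard_normal((n_people, n_subtests))
scores = 0.7 * g[:, None] + np.sqrt(1 - 0.7 ** 2) * noise

# Positive intercorrelations: every pairwise subtest correlation is
# positive (expected value about .49 under these loadings).
R = np.corrcoef(scores, rowvar=False)
off_diag = R[~np.eye(n_subtests, dtype=bool)]

# Individual consistency: people who are above average overall are,
# on average, above average on every single subtest.
overall = scores.mean(axis=1)
top_half_means = scores[overall > 0].mean(axis=0)

print("minimum pairwise correlation:", off_diag.min().round(2))
print("top-half mean on each subtest:", top_half_means.round(2))
```

The same single parameter (the shared loading) produces both the correlation pattern and the consistency, which is the sense in which the findings are "essentially one."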

Varieties of Intelligence

One way to approach the issue of the consistency in individual differences in performance is to compare subtests with high g loadings with subtests that have low g loadings, and try to identify the critical dimensions along which these subtests differ. Interestingly, the subtests of the WAIS-III that have the highest g loadings (i.e., Vocabulary, Similarities, Information, Comprehension, and Arithmetic) are those that tap previously acquired knowledge and skills. Tests that tap previously acquired knowledge and skills are said to reflect crystallized intelligence. In contrast, tests that are designed to be as free as possible from prior knowledge, and depend only on current, on-line processing, are said to be tests of fluid intelligence, the prototypical example being the figural analogies of Raven's Progressive Matrices (Raven, Raven, & Court, 2000).

When diverse batteries of subtests are subjected to factor analysis, typically two factors emerge, one a fluid factor and the other a crystallized factor, as indicated by the nature of the subtests that load on these factors (Horn & Cattell, 1966). The distinction between crystallized and fluid intelligence is supported by their different functional properties, especially with respect to the differential effects of adult age. Whereas fluid intelligence begins its decline in the 20s, crystallized intelligence shows relatively little decline in healthy adults until they reach their 70s, and some tests of crystallized intelligence (e.g., vocabulary tests) even show a slight increase over this same period (for a review, see Deary, 2000, Chapter 8). The two categories of intelligence are differentially sensitive to brain damage of various sorts, with little impairment typically evident for crystallized intelligence but major deficits for fluid intelligence. This pattern has been observed, for example, in patients with white matter lesions (Leaper et al., 2001) and in those with frontal lobe lesions (Duncan, Burgess, & Emslie, 1995), as well as in patients with Huntington's Disease, Parkinson's Disease, and mild Alzheimer's Disease (Psychological Corporation, 1997).

The distinction between fluid and crystallized intelligence is only one of several different partitions of the total variance across different intelligence tests. Other schemes have identified other broad categories of variance (e.g., verbal/educational vs. spatial/mechanical), sometimes with additional, somewhat less broad categories such as retrieval ability and processing speed. The specific structure provided by factor analysis is somewhat arbitrary because it reflects the specific assortment of tests that are included in any given analysis. In addition, the more tests included in a given test battery, the greater the number of covariance clusters that can be identified, with each cluster signifying an ability that is partially distinct from other abilities. But regardless of the complexity of the covariance partitions and the number of factors that emerge as a result, there are always positive correlations between the factors, just as there are universally positive correlations between the individual tests and between the subtests (Jensen, 1998), so that one can always identify a common factor that accounts for a major portion of the total variance (usually more than 50%).

Given that g can be extracted from any array of separate tests, a critical issue is how g factors extracted from separate test batteries are related. If the nature of the g factor were to depend on the specific composition of the test battery from which it is extracted, then g would be arbitrary and of much less theoretical interest. A recent study (Johnson, Bouchard, Krueger, McGue, & Gottesman, 2004) administered three extensive, independently developed intelligence test batteries to 436 individuals and examined the correlations among g factors extracted from the three different batteries: (1) the Comprehensive Ability Battery, consisting of 14 different subtests (Hakstian & Cattell, 1975); (2) a version of the Hawaii Battery, which included the Raven matrices in shortened and modified form as well as 16 other subtests (DeFries et al., 1974); and (3) the original version of the WAIS, which consisted of 11 subtests (Wechsler, 1955). The g factors from the three test batteries correlated .99 (!), providing strong support for the hypothesis that the shared variance captured by g represents a fundamental fact of human abilities and is not an arbitrary result of the composition of specific tests.
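A small simulation, with invented loadings rather than the actual batteries from Johnson et al. (2004), shows why g factors extracted from disjoint test batteries should correlate highly when a single latent ability drives both:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# One latent ability drives two entirely separate simulated batteries.
# The loadings below are invented, not those of any real test battery.
g = rng.standard_normal(n)

def simulate_battery(loadings):
    loadings = np.asarray(loadings)
    noise = rng.standard_normal((n, loadings.size))
    return g[:, None] * loadings + np.sqrt(1 - loadings ** 2) * noise

battery_a = simulate_battery([0.8, 0.7, 0.6, 0.5, 0.7])
battery_b = simulate_battery([0.6, 0.75, 0.55, 0.65])

def g_scores(scores):
    # Component scores on the first principal component of the battery.
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    return z @ eigvecs[:, -1]   # eigenvalues ascend, so last is largest

r = np.corrcoef(g_scores(battery_a), g_scores(battery_b))[0, 1]
print(f"correlation between the two batteries' g factors: {abs(r):.2f}")
```

The correlation falls short of 1.0 only because each battery's first component is an imperfect measure of the latent factor; with more and more reliable subtests per battery (as in the real study), it approaches the .99 reported above.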

Explaining The Shared Variance

Is Speed What Is Shared?

One major hypothesis regarding the consistency in individual performance that underlies g has been that it reflects individual differences in processing speed (Jensen, 1998). There is growing evidence of individual consistency in speed of responding across very diverse tasks: Some people are consistently fast on most tasks, whereas others are consistently slow (Hale & Jansen, 1994; Myerson, Hale, Zheng, Jenkins, & Widaman, 2003; Zheng, Myerson, & Hale, 2000). This consistency is reminiscent of that observed on intelligence tests and suggests that individual differences in the speed with which people process information could affect performance on many different psychological tests and everyday tasks.

The consistency in an individual's response times may be seen in Figure 1, which shows individual performance on four quite different tasks: choice reaction time, letter classification, visual search, and abstract matching (Hale & Myerson, 1993). Each of the four panels depicts the mean response times of a different individual plotted as a function of the average response times for the whole group of 65 undergraduates who were tested on these tasks. As may be seen, the 2 slow individuals (top panels) were slower than average in all task conditions (i.e., their data points all lie above the diagonal representing average performance) and the 2 fast individuals (bottom panels) were faster than average in all task conditions (i.e., their data points all lie below the diagonal). In addition, the size of the difference between individual and average response times (represented by the vertical distance from the diagonal) increased in an orderly fashion as the difficulty of the task increased.

Figure 1.

Response times (RTs) for 4 individuals from Hale and Myerson (1993) plotted as a function of the group mean RT. Each panel presents data from a different individual. Data from the 5th, 10th, 55th, and 60th slowest subjects out of the 65 subjects (ranked based on their slopes) are shown in the upper left, upper right, lower left, and lower right panels, respectively. Within each panel, each data point represents performance in one experimental condition. The tasks, in order of increasing difficulty, were as follows: two-choice RT, inverted white triangles (one condition); letter classification, gray squares (three conditions); visual search, white upright triangles (four conditions); abstract matching, gray diamonds (two conditions). The dashed diagonal line represents equality; if an individual's mean RT for a particular task condition did not differ from the corresponding group mean RT (as is approximately the case for the two-choice RT task for each of the individuals shown), then the data point for that condition would fall along the equality line.

Additional data showing the same pattern from 6 different individuals as well as the fast and slow quartiles of this sample may be seen in Figures 1 and 8 of Myerson et al. (2003). Comparable data from a separate study using seven different tasks are presented in Hale and Jansen (1994). Taken together, these results strongly support the idea that an individual's speed is a general characteristic of that individual, and has equivalent, multiplicative effects on the time required for any information processing task, regardless of the task being performed (for more details and a formal model, see Myerson et al., 2003).
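Under the multiplicative model just described, an individual's mean RT in every condition is the group mean RT scaled by a single personal speed factor. A minimal sketch (with made-up group RTs, not the Hale and Myerson data) recovers that factor by fitting a line through the origin:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up group mean RTs (ms) for task conditions of increasing difficulty.
group_rt = np.array([350.0, 480.0, 520.0, 610.0, 700.0, 820.0, 950.0, 1100.0])

# A hypothetical "slow" individual: each condition RT is the group RT
# multiplied by one personal speed factor, plus measurement noise.
true_factor = 1.3
individual_rt = true_factor * group_rt + rng.normal(0.0, 20.0, group_rt.size)

# Least-squares line through the origin: slope = sum(x*y) / sum(x*x).
slope = (group_rt @ individual_rt) / (group_rt @ group_rt)

# The residuals around the fitted line are small relative to the RTs,
# mirroring the orderly pattern in Figure 1.
residuals = individual_rt - slope * group_rt

print(f"estimated speed factor: {slope:.2f}")
```

A slope above 1 corresponds to a point cloud above the equality diagonal in Figure 1 (a consistently slow individual), and a slope below 1 to one below it; the vertical distance from the diagonal grows with task difficulty exactly as the multiplicative model predicts.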

The potential implications of such general characteristics for differences in intelligence may be seen in Figure 2, which depicts performance by university students and by students at a vocational college, groups that differed in average academic aptitude, plotted as a function of the average response times calculated across both groups combined (Vernon & Jensen, 1984). Not only was the higher ability group faster on all of the tasks (choice reaction time, memory scanning, same/different word judgment, and synonym/antonym judgment), but also the size of the difference from average increased linearly as a function of task difficulty, just as in the Hale and Myerson (1993) data seen in Figure 1. These data show relatively little evidence of specific strengths or weaknesses on particular tasks or in particular task conditions. Clearly, results like these are difficult to reconcile with the goal of the once-popular chronometric approach to individual differences, which strove to identify distinct cognitive processes through componential analysis of response times, and then relate individual differences in these components to differences in higher cognitive abilities (e.g., Sternberg, 1977).

Figure 2.

Response times (RTs) of two different ability subgroups, university and vocational college students, plotted as a function of the combined student group mean RTs. White and gray symbols represent the data from Vernon (1983: vocational students) and Vernon and Jensen (1984: university students), respectively. Each point represents performance in one experimental condition. The tasks, in order of increasing difficulty, were as follows: circles, choice RT (using the Jensen apparatus); octagons, Sternberg memory scanning; squares, same/different word judgment; diamonds, synonym/antonym judgment. The dashed diagonal line represents equality, that is, if one of the subgroup's mean RTs for a particular task condition did not differ from the corresponding group mean RT (as is the case for the choice RT condition), then the data point for that condition would fall along the equality line. (Note: This figure was adapted with permission from Figure 2 in Myerson, Hale, Zheng, Jenkins, and Widaman, 2003.)

One of the major proponents of the importance of processing speed for higher cognitive abilities has been Salthouse (1996) , who offered two reasons why speed should be so important. First, if you are slow to process information, and you cannot control the rate at which it is presented, then you are likely to miss information, some of which may be needed for the behavior in which you are engaged. Second, coordination between two different tasks is likely to be impaired if you are slow, because you may take so long on one task that you forget information that is needed to perform the other task.

It should be noted that the purely behavioral analyses of processing speed data presented here, as well as previous analyses reported in this journal and elsewhere (e.g., Chen, Hale, & Myerson, 2007 ; Myerson, Adams, Hale, & Jenkins, 2003 ; Myerson, Hale, Hansen, Hirschman, & Christiansen, 1989 ; Myerson, Robertson, & Hale, 2007 ), aptly illustrate the potential utility for behavior analysis of studying individual differences. As Underwood (1975) famously argued, individual differences can be thought of as natural experiments, and the results of such natural experiments can help assess the validity of theoretical constructs and models. As Underwood noted, “The whole idea behind behavioral theory is to reduce the number of independent processes to a minimum; to find that performance on two apparently diverse tasks is mediated at least in part by a single, more elementary, process is a step towards this long-range goal” (p. 133). Whereas cognitive psychologists typically assume that a set of diverse tasks of varying complexity, such as the various tasks that generated the data for the analyses presented here, involve different collections of processing components, the orderly linear relations between individual differences and group average response times on these tasks imply that the tasks primarily vary along a single dimension, which might be termed simply difficulty.

The finding that the size of individual differences varies as a simple multiplicative function of task difficulty is consistent with what would be expected if speed were the source of the shared variance on intelligence tests. Regardless of whether that claim is ultimately supported or not, however, we believe the finding stands on its own as revealing an important regularity in individual behavior, one that already has shed substantial light on the behavioral processes underlying changes observed in both child development (e.g., Fry & Hale, 1996 ) and aging (e.g., Myerson et al., 2003 ).

Or Is It Working Memory?

There can be little doubt that a substantial fraction of the variance in intelligence scores is related to measures of processing speed (for a review, see Vernon, 1983 ), but numerous investigators have questioned its adequacy as a complete account of g (e.g., Stankov & Roberts, 1997 ). A popular alternative to processing speed as the major correlate of g is working memory capacity. Cognitive psychologists continue to disagree about specific aspects of the working memory construct, but it is generally assumed that information is maintained in a temporary memory buffer while it is being processed as well as while other information is being processed. Separate buffers for verbal and visuospatial information have been proposed, along with an executive function to organize and allocate limited attentional resources ( Baddeley, 1986 ). Various other executive functions also have been proposed, such as updating and monitoring, switching between mental sets, and inhibiting competing responses (e.g., Miyake, Friedman, Emerson, Witzki, & Howerter, 2000 ).

Many researchers assume that competition between the maintenance and processing functions of the working memory system provides the basis for the differences between individuals (e.g., Engle, Tuholski, Laughlin, & Conway, 1999 ). More capable individuals are assumed to have greater working memory “capacity” and/or a more effective attentional or executive system that allows the memory system to be less disrupted when simultaneous processing is required. It even has been suggested that reasoning ability is little more than working memory capacity ( Kyllonen & Christal, 1990 ), although this strong claim has been strenuously criticized ( Ackerman, Beier, & Boyle, 2005 ).

The importance of working memory in everyday life is exemplified by its role in reading. To comprehend what one is currently reading, one must relate the present text to the portion of the material already read. To the extent that the ability to retain prior relevant information while processing new information is deficient, the reader will need to frequently recheck the earlier material, greatly impeding the efficiency of reading and thereby limiting the actual amount learned. Not surprisingly, tests of verbal working memory strongly predict various aspects of verbal ability, including vocabulary, verbal comprehension, and the ability to infer word meanings from context ( Daneman & Carpenter, 1980 ; Daneman & Merikle, 1996 ).

To get a sense of what a working memory task is like, consider the “alphabet recoding” task used by Kyllonen and Christal (1990) . Subjects are presented a series of three letters and then an instruction such as “+ 2” that tells them how to transform the original letters before reporting the transformed series. If the letters were “C-K-W,” for example, then the correct answer would be “E-M-Y.” To perform the task correctly, subjects must transform the first letter without forgetting the other two letters, then transform the second letter without forgetting either the previously transformed first letter or the untransformed third letter, and then finally transform the third letter without forgetting the previous two transformations. The alphabet span task ( Craik, 1986 ), in which subjects hear a list of words and then must recall them in alphabetical order, is another example of a working memory task involving transformations.

Dual-task procedures represent a more commonly used type of working memory test. With these procedures, to-be-remembered items are presented individually, alternating with other stimuli that must be responded to without forgetting any of the to-be-remembered items. One prominent example is the operation span task, in which words are presented alternately with simple arithmetic equations (e.g., 3+5  =  7) that must be judged for correctness. After an entire list of words and equations has been presented, the subject must recall the words in order. As may be noted, competition and interference are what distinguish all these tasks from simple short-term memory span tasks. In the case of the operation span task, for example, the equations and the words compete to control recall responses, and the process of judging the correctness of the equations also may interfere with recall. (For more detailed information about this and other working memory task procedures, see Conway et al., 2005 ; and Waters & Caplan, 2003 .)
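The structure of a single operation span trial can be sketched as alternating judgment and storage demands; the items and scoring rule below are illustrative assumptions, not the standard administration protocol.

```python
# One illustrative operation span trial: equations (some deliberately false)
# alternate with to-be-remembered words; after the list, the words must be
# recalled in their order of presentation.

trial = [
    ("3 + 5 = 7", False, "chair"),   # equation text, its truth value, word
    ("6 - 2 = 4", True,  "river"),
    ("2 * 4 = 8", True,  "cloud"),
]

def run_trial(trial, judgments, recall):
    """Score the equation judgments and strict serial recall of the words."""
    correct_judgments = sum(j == truth
                            for (_, truth, _), j in zip(trial, judgments))
    words = [w for (_, _, w) in trial]
    recalled_in_order = sum(a == b for a, b in zip(recall, words))
    return correct_judgments, recalled_in_order

print(run_trial(trial, [False, True, True], ["chair", "river", "cloud"]))  # (3, 3)
```

Note how the two scoring streams compete: time and attention devoted to verifying an equation are unavailable for rehearsing the words, which is precisely the interference the task is designed to create.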

A variety of specific executive functions have been proposed, but recent studies have failed to support the idea that they are part of a unitary executive system. For example, at least one of these putative executive functions, task switching, has been found to be unrelated to working memory span measures ( Oberauer, Süß, Wilhelm, & Sander, 2007 ). Other failures to observe predicted covariation have been reported as well, not only between different putative executive functions (e.g., Miyake et al., 2000 ; Ward, Roberts, & Phillips, 2001 ) but also between different measures of the same function (e.g., Shilling, Chetwynd, & Rabbitt, 2002 ). Thus, rather than explaining individual differences in g , the concept of executive function seems both too fuzzy and too variegated to offer a gain in explanatory parsimony over the phenomena to be explained. Even Alan Baddeley, who pioneered the concept of a working memory system with a “central executive,” has explicitly stated that the “central executive” is merely a conceptual placeholder that serves as a reminder of what remains to be explained ( Baddeley, 2001 ).

Despite these conceptual and empirical problems with the executive aspects of the working memory construct, several features seem interpretable within a behavior-analytic account, and measures based on these features may eventually be shown to be critical predictors of individual differences in g . For example, Engle and his colleagues recently proposed that the reason that working memory performance predicts fluid intelligence is not primarily because memory per se is used to solve reasoning problems; rather, it is because working memory tasks measure how well an individual's attentional processes function “under conditions of interference, distraction, or conflict” ( Kane, Conway, Hambrick, & Engle, 2007 ; Unsworth & Engle, 2005 ; see also Hasher, Lustig, & Zacks, 2007 ). This account may be compared to a description of working memory tasks from a behavioral perspective. We would say that briefly presented stimuli must continue to control appropriate recall responses even after those brief stimuli are replaced by other stimuli that control different responses; on some tasks, previously presented stimuli must also compete with self-generated stimuli resulting from covert responses (e.g., transformations). In other words, working memory tasks involve competition for control (or “attention”) among stimuli, most of which are no longer present, and it is plausible that individuals will differ substantially in the outcome of such competition, perhaps in a way that predicts their performance on intelligence tests.

An analysis of individual differences in working memory in terms of individual differences in stimulus control seems achievable and clearly relevant to understanding the nature of intelligence. Recent theoretical accounts, however, have retreated from the strong view (e.g., Kyllonen & Christal, 1990 ) that fluid intelligence and working memory capacity are essentially isomorphic, in part because studies have shown that working memory tasks are far from perfectly correlated with intelligence (e.g., Ackerman et al., 2005 ). This means that much of the variance in g must be explained by factors other than working memory. The results of our own research in this area, discussed in the following section, suggest that competition among stimuli for control plays a role in human learning as well as in working memory, and that this broader role provides a fuller (albeit far from complete) account of individual differences in g ( Tamez, Myerson, & Hale, 2008 ; Williams & Pearlberg, 2006 ).

The Role of Learning

A remarkable feature of most recent attempts to identify the basic underlying dimensions that account for individual differences in general intelligence has been the neglect of associative learning. This neglect seems strange given the obviously important role played by associative learning in the acquisition of knowledge and skills that underlie performance on tests of crystallized intelligence (e.g., the Vocabulary and General Information subtests of the WAIS-III). In part, the neglect of associative learning may be a consequence of the view that knowledge and skill acquisition are the result of an inferential process rather than the result of forming associative connections. For example, the meaning of a new word is commonly inferred from the context in which it is encountered, rather than being learned from flash cards or by looking it up in the dictionary. Regardless of how learning is conceived, however, crystallized intelligence obviously reflects prior learning, and presumably also reflects individual differences in the efficiency of the learning process.

In addition, much recent research on intelligence seems out of touch with the goal of predicting educational achievement, which was the original purpose of intelligence tests and is still their major application. Recently, however, Luo, Thompson, and Detterman (2006) examined the extent to which basic information-processing tasks could replace conventional intelligence tests as predictors of children's performance on academic achievement tests. Luo et al. used multiple regression and structural equation models to analyze data from two large, representative samples: a normative sample of nearly 5,000 children, ages 6–19 years, that had been used to standardize the multi-faceted Woodcock-Johnson Cognitive Abilities and Achievement Tests, and a separate sample of more than 500 children, ages 6–13 years, from the Western Reserve Twin Project. For both samples, analyses showed that fluid intelligence tests could be replaced as predictors of academic achievement by measures of processing speed and working memory. In contrast, tests of crystallized intelligence could not be replaced because they explained a substantial amount of the variance in academic achievement that was not accounted for by the information-processing measures. In the Woodcock-Johnson normative sample, for example, a combination of crystallized intelligence and basic information-processing abilities accounted for more than one-half of the variance in academic achievement, of which approximately 40% was attributable to crystallized intelligence alone and approximately 45% was common to both crystallized intelligence and information processing ability ( Luo et al., 2006 ). If a major goal of intelligence testing is the prediction of academic achievement, then assessment of learning ability, which is a major determinant of crystallized intelligence, would appear to be important for achieving that goal.

It is important to note that not all types of associative learning are related to intelligence, and in fact, early studies failed to find a meaningful relation between intelligence scores and rate of learning on a variety of associative learning tasks ( Underwood, Boruch, & Malmi, 1978 ; Woodrow, 1946 ). These findings contributed to the subsequent disregard of learning by intelligence researchers, but they also may be a major clue as to the role of learning ability in performance on intelligence tests. Recently, Williams and Pearlberg (2006) replicated the low correlation between some measures of verbal learning and intelligence. They found that both paired-associate learning and free-recall list learning had correlations below .20 with Raven's Advanced Progressive Matrices ( Raven, Raven, & Court, 1998 ). More complex learning tasks (e.g., learning to write computer programs; Kyllonen & Stephens, 1990 ) have produced more substantial correlations with intelligence, but because of their very complexity, neither what is being learned on such tasks, nor how it is being learned, is clearly understood.

In contrast to the weak correlations observed between traditional verbal learning tasks and intelligence tests, Williams and Pearlberg (2006) have developed a novel verbal learning task that shows a surprisingly high correlation with scores on a test of fluid intelligence. In their new task, each subject learns a set of “three-term contingencies” modeled after the basic unit of operant conditioning. More specifically, subjects see 10 stimulus words presented one at a time on a computer screen. In the presence of each stimulus word, subjects first press the “A” key, then the “B” key, and finally the “C” key, with each response producing a different outcome word. For example, given the stimulus word “LAB,” pressing the “A” key produces “PUN,” pressing the “B” key produces “TRY,” and pressing the “C” key produces “EGG.” Given a different stimulus word (e.g., “RUM”), the same set of three responses produces a different set of outcome words (e.g., A→FAT, B→CAN, C→TIC). In subsequent testing, subjects are presented with all 30 stimulus word–response combinations and, in each case, they have to try and provide the appropriate outcome word (e.g., LAB, A→ ? Correct response  =  PUN). When Williams and Pearlberg had college students perform this task, they found that students' performance on the three-term learning task correlated strongly ( r  =  .50) with their scores on the Raven's Advanced Progressive Matrices.

Williams and Pearlberg conducted a follow-up study (unpublished) in which they compared their three-term learning task with a two-term associative learning task designed to mimic their three-term task as closely as possible. In the two-term learning task, ten stimulus words were each paired sequentially with three different words but without intervening responses to keys A, B, and C. During testing, the subject was asked to recall each of three words that had been associated with each stimulus. Despite the fact that both the type and the amount of material to be learned on this two-term task was similar to that on the three-term contingency task, the correlation between learning rate and Raven scores was only about .25, approximately half that obtained with the three-term task. This difference could not be attributed to differences in task difficulty. These findings clearly demonstrate that learning ability is an important component of fluid intelligence, but they also raise an important question: How can simply adding the middle (response) term of the three-term contingency lead to such a substantial increase in the correlation between learning and intelligence test scores?

One way to approach this question is to see how performance on the three-term learning task relates to performance on other types of basic information-processing tasks. Williams and Pearlberg (2006) originally reported that the three-term contingency task was not significantly correlated with working memory and processing speed tasks, but in subsequent research they have found occasional significant correlations between three-term contingency learning and some working memory tasks (but as yet no significant correlation with any speed-of-processing task has been observed). Importantly, the correlations between working memory and Raven scores have always been smaller than the correlations between three-term verbal learning and Raven scores.

Tamez et al. (2008) recently replicated the substantial correlation between Williams and Pearlberg's (2006) three-term contingency learning and Raven scores, and extended this finding to an essentially similar three-term contingency task that used visual-spatial patterns as stimuli. Moreover, as in the research by Williams and Pearlberg, Tamez et al. found that the three-term verbal learning task correlated more strongly with Raven's Advanced Progressive Matrices than any of the three working memory tasks used in their study. Of the three, the operation span task, which is becoming a standard for assessing working memory capacity ( Conway et al., 2005 ), correlated most highly with Raven scores ( r  =  .395). However, this correlation was still less than that between the three-term verbal learning task and Raven scores ( r  =  .489). In addition, multiple regression analyses revealed that performance on the three-term learning task accounted for all of the variance in Raven scores explained by operation span as well as additional variance unique to three-term contingency learning.
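The logic of that multiple regression comparison can be illustrated with the standard two-predictor formula for the squared multiple correlation. The two Raven correlations come from the text; the predictor intercorrelation (r12 = .80) is an assumed value chosen so that the pattern Tamez et al. reported emerges, not a figure reported in the source.

```python
# How multiple regression can show one predictor subsuming another. The Raven
# correlations (.395 for operation span, .489 for three-term learning) are from
# the text; r_12 = .80 is an ASSUMED predictor intercorrelation, chosen for
# illustration so that span adds essentially nothing beyond learning.

def r2_two_predictors(r_y1, r_y2, r_12):
    """Squared multiple correlation for two standardized predictors."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

r_span, r_learn, r_12 = 0.395, 0.489, 0.80

r2_full = r2_two_predictors(r_span, r_learn, r_12)
print(round(r2_full - r_learn**2, 3))   # 0.0: span adds ~nothing beyond learning
print(round(r2_full - r_span**2, 3))    # learning still adds unique variance
```

Under these assumptions, adding operation span to a model that already contains three-term learning leaves R² essentially unchanged, whereas adding learning to a span-only model raises it substantially, which is the qualitative pattern described above.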

Thus, learning, or at least learning on tasks like the three-term contingency learning task developed by Williams and Pearlberg (2006) , is among the very best predictors of performance on fluid intelligence tests. From one perspective, this should not be surprising. Ever since Binet and Simon (1905 , 1916) developed the first standardized intelligence test, a major purpose of such tests has been to aid in the identification of children who were likely to have problems learning in regular schools. It follows that if one develops an intelligence test that can predict learning, then given the nature of correlations, learning should also predict performance on that test.

From another perspective, however, the relation of learning ability to intelligence test scores is surprising, at least with respect to fluid intelligence tests. After all, tests of fluid intelligence such as Raven's Progressive Matrices were developed originally with the goal of measuring cognitive ability independent of past learning. Such a goal may be unattainable, however, if individuals taking fluid intelligence tests are actually learning the correct solution strategies as they proceed, as some researchers recently have suggested (e.g., Carlstedt, Gustafsson, & Ullstadius, 2000 ; Verguts & De Boeck, 2002 ). Indeed, our research on three-term learning suggests that learning ability is a major component of fluid intelligence ( Tamez et al., 2008 ; Williams & Pearlberg, 2006 ).

The specific roles played by learning ability in determining individual differences in performance on intelligence tests remain to be determined. Nevertheless, Williams and Pearlberg's (2006) finding that only some types of learning tasks are significantly correlated with intelligence scores raises the possibility that it is the structure of what is to be learned that determines the strength of such correlations. For example, consider the structure of what must be learned on the three-term contingency task of Williams and Pearlberg: There are 10 stimulus words, each of which is associated with three different key-press responses, which in turn are each associated with 10 outcome words to be recalled. There presumably are also associations between stimulus words and outcome words in addition to the associations between stimulus–response combinations and outcome words. When a stimulus word and key press are specified, an individual performing the three-term task must contend with multiple competing associations. Out of all these various associations, only one unique stimulus–response–outcome combination is correct.

Individual differences in the ability to learn which word to recall in the face of such competing associations may well be what underlie differences on intelligence tests. Building on the idea that fluid intelligence tests also involve learning (e.g., Carlstedt et al., 2000 ; Verguts & De Boeck, 2002 ), we suggest that the reason that learning on the three-term task is related to performance on tests of fluid intelligence is because these fluid tests also involve learning in the face of stimuli competing for control, and the reason that learning ability is related to performance on tests of crystallized intelligence is that these crystallized tests assess the results of past learning in the face of such competition. In fact, learning in the presence of competing stimuli may be an important part of what glues the various components of g together and gives rise to the consistency of individuals' behavior across different tests and in quite different situations.

Interestingly, Unsworth and Engle (2007) recently suggested that the ability to efficiently constrain searches of long-term memory is a critical aspect of working memory function, and in their view, this ability may underlie the correlation between working memory and intelligence. Although the terminology is quite different (e.g., attention control vs. stimulus control) and the emphasis is on using knowledge rather than on its acquisition, the view expressed by Unsworth and Engle is similar to our own. At this point in time, the preceding ideas are clearly hypotheses in need of further evaluation. However, they exemplify our belief that in order to shed light on what scores on intelligence tests mean, and why individuals show such consistent performance on the subtests of which these tests are composed, what will be needed is a clear determination of the critical dimensions that govern when tests of learning ability predict scores on intelligence tests, and vice versa.

Final Comments

Astute application of behavioral principles may facilitate the development of expertise in specific learning situations, but it remains to be established whether general behavioral principles can provide insight into the fact that some people consistently perform better than others in situations that would be considered intellectually demanding. In principle, the observed covariance in performance level across very diverse tasks that characterizes g poses the same sort of issues as the covariance in different measures of other intervening variables. For example, a motivational construct like hunger refers to the fact that diverse food-related behaviors covary in strength. Similarly, Skinner's (1969) definition of the operant entails that seemingly different movement patterns are the same response unit to the extent that they are functionally equivalent and thus covary in strength. Like these other integrative concepts, the covariance implicit in the concept of g can be studied productively using functional analysis to determine the natural lines of fracture. Processing speed and working memory capacity, which are currently the predominant integrative constructs for explaining g , also are amenable to such a behavioral analysis. Indeed, new behavioral principles governing the absolute size of individual and age differences on processing speed tasks have already begun to emerge ( Chen et al., 2007 ; Hale & Jansen, 1994 ; Myerson et al., 2003 ).

Recent findings relating intelligence scores to learning rate suggest that a behavior-analytic approach has great promise for understanding individual differences in intelligence. These findings present us with an opportunity to identify the specific features of learning that are most relevant to intelligent behavior. It remains to be seen how much of the general factor in intelligence scores can be explained by differences in learning, but given the importance of g for so much of everyday life (e.g., Gottfredson, 1997 ; Herrnstein & Murray, 1994 ; for a recent review, see Lubinski, 2004 ), behavior analysts surely will be motivated to undertake the relevant research. There is no justification for leaving the study of intelligence to others.

  • Ackerman P.L, Beier M.E, Boyle M.O. Working memory and intelligence: The same or different constructs? Psychological Bulletin. 2005;131:30–60.
  • Baddeley A.D. Working memory. New York: Oxford University Press; 1986.
  • Baddeley A.D. Is working memory still working? American Psychologist. 2001;56:851–864.
  • Binet A, Simon T. Méthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux [New methods for the diagnosis of the intellectual level of subnormals]. L'Année Psychologique. 1905;11:191–336.
  • Binet A, Simon T. The development of intelligence in children (E. S. Kite, Trans.). Baltimore, MD: Williams & Wilkins; 1916.
  • Carlstedt B, Gustafsson J.-E, Ullstadius E. Item sequencing effects on the measurement of fluid intelligence. Intelligence. 2000;28:145–160.
  • Chen J, Hale S, Myerson J. Predicting the size of individual and group differences on speeded cognitive tasks. Psychonomic Bulletin & Review. 2007;14:534–541.
  • Conway A.R.A, Kane M.J, Bunting M.F, Hambrick D.Z, Wilhelm O, Engle R.W. Working memory span tasks: A methodological review and user's guide. Psychonomic Bulletin & Review. 2005;12:769–786.
  • Craik F.I.M. A functional account of age differences in memory. In: Klix F, Hagendorf H, editors. Human memory and cognitive capabilities. New York: Elsevier; 1986. pp. 409–422.
  • Daneman M, Carpenter P.A. Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior. 1980;19:450–466.
  • Daneman M, Merikle P.M. Working memory and language comprehension: A meta-analysis. Psychonomic Bulletin & Review. 1996;3:422–433.
  • Deary I.J. Looking down on human intelligence: From psychometrics to the brain. Oxford, England: Oxford University Press; 2000.
  • DeFries J.C, Vandenberg S.G, McClearn G.E, Kuse A.R, Wilson J.R, Ashton G.G, Johnson R.C. Near identity of cognitive structure in two ethnic groups. Science. 1974;183:338–339.
  • Duncan J, Burgess P, Emslie H. Fluid intelligence after frontal lobe lesions. Neuropsychologia. 1995;33:261–268.
  • Engle R.W, Tuholski S.W, Laughlin J.E, Conway A.R.A. Working memory, short-term memory, and fluid intelligence. Journal of Experimental Psychology: General. 1999;128:309–331.
  • Fry A.F, Hale S. Processing speed, working memory, and fluid intelligence in children. Psychological Science. 1996;7:237–241.
  • Gardner H. Frames of mind: The theory of multiple intelligences. New York: Basic Books; 1983.
  • Gottfredson L.S. Why g matters: The complexity of everyday life. Intelligence. 1997;24:79–132.
  • Hakstian A.R, Cattell R.B. The comprehensive ability battery. Champaign, IL: Institute for Personality and Ability Testing; 1975.
  • Hale S, Jansen J. Global processing-time coefficients characterize individual and group differences in cognitive speed. Psychological Science. 1994;5:384–389.
  • Hale S, Myerson J. Ability-related differences in cognitive speed: Evidence for global processing-time coefficients. Poster presented at the annual meeting of the Psychonomic Society, Washington, DC; 1993, November.
  • Hasher L, Lustig C, Zacks R. Inhibitory mechanisms and the control of attention. In: Conway A.R.A, Jarrold C, Kane M.J, Miyake A, Towse J, editors. Variation in working memory. New York: Oxford University Press; 2007. pp. 227–249.
  • Herrnstein R.J, Murray C. The bell curve: Intelligence and class structure in American life. New York: The Free Press; 1994.
  • Horn J.L, Cattell R.B. Refinement and test of the theory of fluid and crystallized general intelligences. Journal of Educational Psychology. 1966;57:253–270.
  • Jensen A.R. The g factor: The science of mental ability. Westport, CT: Praeger; 1998.
  • Johnson W, Bouchard T.J Jr, Krueger R.F, McGue M, Gottesman I.I. Just one g: Consistent results from three test batteries. Intelligence. 2004;32:95–107.
  • Kane M.J, Conway A.R.A, Hambrick D.Z, Engle R.W. Variation in working memory capacity as variation in executive attention and control. In: Conway A.R.A, Jarrold C, Kane M.J, Miyake A, Towse J, editors. Variation in working memory. New York: Oxford University Press; 2007. pp. 21–48.
  • Kyllonen P.C, Christal R.E. Reasoning ability is (little more than) working memory capacity? Intelligence. 1990;14:389–433.
  • Kyllonen P.C, Stephens D.L. Cognitive abilities as determinants of success in acquiring logic skill. Learning and Individual Differences. 1990;2:129–160.
  • Leaper S.A, Murray A.D, Lemmon H.A, Staff R.T, Deary I.J, Crawford J.R, Whalley L.J. Neuropsychologic correlates of brain white matter lesions depicted on MR images: 1921 Aberdeen birth cohort. Radiology. 2001;221:51–55.
  • Lubinski D. Introduction to the special section on cognitive abilities: 100 years after Spearman's (1904) "'General intelligence,' objectively determined and measured." Journal of Personality and Social Psychology. 2004;86:96–111.
  • Luo D, Thompson L.A, Detterman D.K. The criterion validity of tasks of basic cognitive processes. Intelligence. 2006;34:79–120.
  • Miyake A, Friedman N.P, Emerson M.J, Witzki A.H, Howerter A. The unity and diversity of executive functions and their contributions to complex "frontal lobe" tasks: A latent variable analysis. Cognitive Psychology. 2000;41:49–100.
  • Myerson J, Adams D.R, Hale S, Jenkins L. Analysis of group differences in processing speed: Brinley plots, Q-Q plots, and other conspiracies. Psychonomic Bulletin & Review. 2003;10:224–237.
  • Myerson J, Hale S, Hansen C, Hirschman R.B, Christiansen B. Global changes in response latencies of early middle-age adults: Individual complexity effects. Journal of the Experimental Analysis of Behavior. 1989;52:353–362.
  • Myerson J, Hale S, Zheng Y, Jenkins L, Widaman K.F. The difference engine: A model of diversity in speeded cognition. Psychonomic Bulletin & Review. 2003;10:262–288.
  • Myerson J, Robertson S, Hale S. Aging and intra-individual variability: Analysis of response time distributions. Journal of the Experimental Analysis of Behavior. 2007;88:319–337.
  • Oberauer K, Süß H.-M, Wilhelm O, Sander N. Individual differences in working memory capacity and reasoning ability. In: Conway A.R.A, Jarrold C, Kane M.J, Miyake A, Towse J, editors. Variation in working memory. New York: Oxford University Press; 2007. pp. 49–75.
  • Psychological Corporation. WAIS-III–WMS-III technical manual. San Antonio, TX: Harcourt Brace; 1997.
  • Raven J, Raven J.C, Court J.H. Manual for Raven's Progressive Matrices and Vocabulary Scales. Section 4: The Advanced Progressive Matrices. San Antonio, TX: Harcourt Assessment; 1998.
  • Raven J, Raven J.C, Court J.H. Manual for Raven's Progressive Matrices and Vocabulary Scales. Section 3: The Standard Progressive Matrices. San Antonio, TX: Harcourt Assessment; 2000.
  • Salthouse T.A. The processing-speed theory of adult age differences in cognition. Psychological Review. 1996;103:403–428.
  • Shilling V.M, Chetwynd A, Rabbitt P.M.A. Individual inconsistency across measures of inhibition: An investigation of the construct validity of inhibition in older adults. Neuropsychologia. 2002;40:605–619.
  • Skinner B.F. Contingencies of reinforcement: A theoretical analysis. New York: Appleton-Century-Crofts; 1969. [ Google Scholar ]
  • Staats A.W, Burns G.L. Intelligence and child development: What intelligence is and how it is learned and functions. Genetic Psychology Monographs. 1981; 104 :237–301. [ Google Scholar ]
  • Stankoff L, Roberts R.D. Mental speed is not the “basic” process of intelligence. Personality and Individual Differences. 1997; 22 :69–84. [ Google Scholar ]
  • Sternberg R. Intelligence, information processing, and analogical reasoning: The componential analysis of human abilities. Hillsdale, NJ: Erlbaum; 1977. [ Google Scholar ]
  • Tamez E, Myerson J, Hale S. Learning, working memory, and intelligence revisited. Behavioural Processes. 2008; 8 :240–245. [ PubMed ] [ Google Scholar ]
  • Underwood B.J. Individual differences as a crucible in theory construction. American Psychologist. 1975; 30 :128–134. [ Google Scholar ]
  • Underwood B.J, Boruch R.F, Malmi R.A. Composition of episodic memory. Journal of Experimental Psychology: General. 1978; 107 :393–419. [ Google Scholar ]
  • Unsworth N, Engle R.W. Working memory capacity and fluid abilities: Examining the correlation between operation span and Raven. Intelligence. 2005; 33 :67–81. [ Google Scholar ]
  • Unsworth N, Engle R.W. The nature of individual differences in working memory capacity: Active maintenance in primary memory and controlled search from secondary memory. Psychological Review. 2007; 114 :104–132. [ PubMed ] [ Google Scholar ]
  • Verguts T, De Boeck P. On the correlation between working memory capacity and performance on intelligence tests. Learning and Individual Differences. 2002; 13 :37–55. [ Google Scholar ]
  • Vernon P. Speed of information processing and intelligence. Intelligence. 1983; 7 :53–70. [ Google Scholar ]
  • Vernon P.A, Jensen A.R. Individual and group differences in intelligence and speed of processing. Personality and Individual Differences. 1984; 5 :411–423. [ Google Scholar ]
  • Ward G, Roberts M.J, Phillips R.H. Task-switching costs, Stroop costs, and executive control: A correlational study. Quarterly Journal of Experimental Psychology. 2001; 54A :491–511. [ PubMed ] [ Google Scholar ]
  • Waters G.S, Caplan D. The reliability and stability of verbal working memory measures. Behavior Research Methods, Instruments, & Computers. 2003; 35 :550–564. [ PubMed ] [ Google Scholar ]
  • Wechsler D. Manual for the Wechsler Adult Intelligence Scale. New York: The Psychology Corporation; 1955. [ Google Scholar ]
  • Williams B.A, Pearlberg S.L. Learning of three-term contingencies correlates with Raven scores, but not with measures of cognitive processing. Intelligence. 2006; 34 :177–191. [ Google Scholar ]
  • Woodrow H. The ability to learn. Psychological Review. 1946; 53 :147–158. [ PubMed ] [ Google Scholar ]
  • Zheng Y, Myerson J, Hale S. Age and individual differences in information-processing speed: Testing the magnification hypothesis. Psychonomic Bulletin & Review. 2000; 7 :113–120. [ PubMed ] [ Google Scholar ]

