Welcome!

This is the study guide I made myself to prepare for my comprehensive exams in a relatively challenging doctoral program in I/O Psychology. I've posted these materials because I frequently find myself wanting to refer back to them when I'm at work, so really, this site is primarily selfish. I've also integrated sections of my comps answers that I re-use from time to time. I passed on the first try, for what it's worth.

I've moved the site over from the original GooglePages site to clean it up a bit and make the content more searchable. I realize this isn't technically a blog - I don't plan to update often or fuss with comments. But I think this platform will make it easier for you guys to find what you're looking for. I'll leave the old page up, but it's fairly ugly and cumbersome, so I hope this one will serve you better.

You'll find the old linked list of topics below, and also in the right sidebar. The topics are adapted from the list of content areas suggested by the Society for Industrial & Organizational Psychology (SIOP) in their Guidelines for Education and Training at the Doctoral Level in Industrial-Organizational Psychology. You'll see I skipped a couple; that was on the recommendation of my own program, so check with yours to see if they might be so kind as to let you know which areas you should focus on. And please don't rely on this page alone to get you through. No warranties, expressed or implied, here.

If you have corrections or summaries of your own that you'd like to contribute, feel free to contact me. No need to contact me to alert me to typos or improvements upon the prose herein; it's really just a rough study guide and list of random references, not a journal pub.

Monday, July 7, 2008

Borman, W. C., & Motowidlo, S. J. (1997). Task performance and contextual performance: The meaning for personnel selection research. Human Performance, 10, 99-109.

Background

  • Task performance – effectiveness with which job incumbents perform activities that contribute to the organization’s technical core either directly by implementing a part of the technological process, or indirectly by providing it with needed materials or services
  • Contextual activities are important because they contribute to organizational effectiveness in ways that shape the organizational, social, and psychological context that serves as the catalyst for task activities and processes
  • Contextual performance is importantly different from task performance in at least three ways: (1) task activities vary considerably across jobs whereas contextual activities tend to be more similar across jobs; (2) task activities are more likely than contextual activities to be role-prescribed; (3) antecedents of task performance are more likely to involve cognitive ability, whereas antecedents of contextual performance are more likely to involve personality variables
  • If contextual performance factors are included as criteria, personality predictors will be more successful in personnel selection research

Borman and Motowidlo Taxonomy of Contextual Performance

  • Persisting with enthusiasm and extra effort necessary to complete own task activities successfully.
    • Perseverance and conscientiousness; extra effort on the job
  • Volunteering to carry out task activities that are not formally part of own job.
    • Suggesting organizational improvements; initiative and taking on extra responsibility; making constructive suggestions, developing oneself
  • Helping and cooperating with others
    • Assisting/helping coworkers; assisting/helping customers; organizational courtesy; sportsmanship; altruism
  • Following organizational rules and procedures
    • Following orders and regulations and respect for authority; complying with organizational values and policies; conscientiousness; meeting deadlines; civic virtue
  • Endorsing, supporting, and defending organizational objectives
    • Organizational loyalty; concern for unit objectives; staying with an organization during hard times and representing the organization favorably to outsiders; protecting the organization

Impact of Ratee Contextual Performance on Overall Performance Ratings

  • It is well established that global overall performance ratings are influenced substantially by both task and contextual performance; supervisors making ratings weight the two about equally

Evidence that Personality Predicts Contextual Performance

  • When the contextual components of overall performance can be measured separately, personality predictor validities will be higher than when the criterion is overall performance
  • An additional reason HPI scales may correlate more highly with contextual-like criteria is the similarity in their bandwidths. The HPI basic scales probably target a criterion domain narrower than overall performance; from both a predictor-criterion conceptual mapping perspective and a bandwidth similarity perspective, typical personality scales should relate more highly to contextual factors than to overall performance

Conclusions

  • The contextual performance domain is important; it seems conceptually and empirically distinct from task performance, and the distinction will increase in importance as:
    • Global competition continues to raise the effort levels required of employees
    • Team-based organizations become more popular
    • Downsizing continues to make employee adaptability and willingness to exhibit extra effort more of a necessity
    • Customer service is increasingly emphasized
  • Research shows that experienced supervisors consider contextual performance on the part of subordinates when making overall performance ratings
  • When contextual performance dimensions are included as criteria, personality predictors are more likely to be successful correlates

Sunday, July 6, 2008

Austin, J. T., & Villanova, P. (1992). The criterion problem: 1917-1992. Journal of Applied Psychology, 77, 836-874.

Background

  • Criterion problem – difficulties involved in the process of conceptualizing and measuring performance constructs that are multidimensional and appropriate for different purposes
  • Unlike predictor constructs, criterion constructs often require additional translations between concepts and measurement operations
    • May be constrained by situation
    • Choice of dimensions relies on how broadly the conceptual criterion is construed
    • Dimensions of criteria are context dependent
    • Failure to articulate the values involved in decisions to include some measures of performance as criteria while excluding others makes the criterion problem much more oblique
  • This article reviews conceptualizations, technical advances, and controversies in the measurement and use of criteria since the formal beginnings of the discipline

Semantics of the Term Criterion

  • Criterion means both the sample of the performance domain to be predicted and the level of performance considered acceptable; it is distinguished from performance, which is the more inclusive concept
  • Criteria occupy a special status as a function of being critical samples of the more extensive performance domain. They are critical in the sense of their value to multiple sets of potential users
  • A criterion is a sample of performance (including behavior and outcomes), measured directly or indirectly, perceived to be of value to organizational constituencies for facilitating decisions about predictors or programs
  • Criterion scores represent in part the causal effects of individual differences in predictor scores but are conceptually distinct from the construct reflected in predictor scores

1917-1939

  • Several forces encouraged study of worker behavior, including: the external influence of World War I and functionalism
  • Functionalism stressed the importance of individual differences in behavior and their consequences; contrasted with structuralist emphases on consciousness by moving from a focus on mental structure to a focus on adaptive behavior
    • Scientific pragmatism held that theories and concepts should be evaluated by their effects; value of ideas rested in their utilitarian consequences; required attention to measurement effects
    • Study of consequences occupied a central position in the conduct of science
  • Government and private industry began to apply intelligence testing after the war, but its decline was caused by economic factors rather than disillusionment with its value
  • Sophisticated psychometrics were reserved for the analysis of predictors, whereas criteria were chosen for convenience
  • The concept of criteria was extended by arguing that the standards used by employers to evaluate work performance might differ from those used by employees to evaluate their own personal success; the goals and values of the two groups differed
    • Even now, few theoretical models take both individual and organizational perspectives into account
  • The practical impetus of World War I and the felt need of American business for better selection devices were important external influences, or “pulls.” Additionally, the convergence of psychological currents, or “pushes,” supported the development of industrial psychology with a focus on criteria for vocational selection and guidance
  • Early efforts harnessed criteria to validation through the prediction of traits and outcomes, rather than focusing on work behavior per se and attempting to understand how and why these consequences occurred

1940-1959

  • Wherry’s (1952) model of the rating process – drawing on psychometric and cognitive research; it decomposed an observed rating into ratee performance, rater observation, rater bias, and error components
  • R. L. Thorndike’s (1949) “ultimate criterion” – construct used by the researcher as a theoretical “tool and goal”; statement of goal that criterion specialists strive to attain
  • Brogden & Taylor (1950) – framework for partitioning sources of variance and covariance among predictors, actual criteria, and the ultimate criterion
    • Terms deficiency, contamination, and relevance have become essential organizing concepts for students of criteria; converged on the idea that job performance is multidimensional and subject to various systematic and random fluctuations during measurement
  • The practical prediction situation for industrial and other applied measurement specialists continued to outweigh the conceptual work required to advance criterion measurement to the next stage: developing models of performance and conducting programmatic research
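Wherry's decomposition of the rating process can be written, very loosely, as an additive sketch (illustrative notation only, not Wherry's original formulation, which is more elaborate):

```latex
% Illustrative additive sketch of Wherry's (1952) rating model.
% R = observed rating, P = ratee performance, O = rater observation,
% B = rater bias, e = random error
R = P + O + B + e
```

The point of the decomposition is that only the first component reflects the ratee; the remaining terms are properties of the rater and the measurement occasion.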

1960-1979

  • Increasing number of conceptual analyses, among them those pertaining to the dynamic nature of criteria, the use of composite versus multiple criteria, and the development of multifaceted criterion taxonomies supported greater elaboration and specificity of models and performance appraisal
    • Criteria came to be widely accepted as multidimensional and multiply determined
  • Although rating format studies dominated research on criteria, those efforts failed to identify any one rating format as superior
  • Became clear that adopting any rating format involved a number of compromises with respect to appraisal effectiveness

1980-1992

  • Widespread concern for the validity of criterion research was stimulated by the earlier reviews that concluded that significant boundary variables might limit the generalizability of laboratory research on performance appraisal to actual appraisal situations
    • Effect sizes in appraisal research were significantly larger when the stimuli took the form of “paper people.” Thus, prescriptions for appraisal practice based on laboratory research may be of questionable ecological validity for the context in which observation and ratings of work performance actually occur
    • Organizational differences may act to circumscribe further the generality of the findings of appraisal research across organizational settings; the validity of research findings based on appraisal participants in one applied setting may not generalize to other people and settings
  • Research attention began to shift toward the difficult issues of rater motivation and contextual factors within a communication framework; previous research had tended to focus on the ability portion of the equation predicting rating performance, rather than treating it as a joint function of ability and motivation

Conclusion

Validity of Criterion Measures

  • Any measure, whether it occurs on one side of a regression equation or another, is capable of validation; appraisal instruments should be developed as much as tests are
    • Crucial concept is understanding the measure and its latent construct, which leads to prediction as a by-product and not as a terminal goal
    • There has been a chronic lack of attention to the conceptual and psychometric characteristics of criteria
  • A specific sign of this deprivation is the difference between the reliability distributions for predictors and criteria
    • Hypothetical reliability distributions for correcting predictors center on .80, whereas the corresponding mean for criterion reliability is .60
    • The bulk of reported validation studies were conducted using a single criterion
  • James’s (1973) application of construct validation to criteria argued that evaluating criterion measures requires multiple measures and models of individual performance to integrate the measures
    • Proposed an early latent variable model (LVM) by merging multiple criteria with person-process-product model of managerial effectiveness
    • Specified a theoretical model that incorporated predictor and criterion constructs defined through multiple indicators, reflecting an early application of structural equation modeling for I-O psychology
  • A number of subthemes emerge for implementing equal status for criterion measures through construct validation
    • Distinction between typical and maximum performance on predictor instruments, which can be extended to criteria through the idea of matching
    • Refocusing of validation research away from a concern with reliability toward an understanding of bias, which, by definition, is systematic but irrelevant variance in criterion measures
      • Humphrey’s conception of systematic heterogeneity as a method for constructing measures. Systematic heterogeneity consists of deliberate attempts to measure a construct using a wide variety of perspectives; instead of focusing on high internal consistency and homogeneity, which may result in high reliabilities but narrow measures, heterogeneity is used to reduce bias and thereby increase validity
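The reliability figures above have a direct practical consequence: observed validities are attenuated by unreliability in both the predictor and the criterion. A minimal sketch of Spearman's correction for attenuation, using the .80/.60 reliability values cited above and a hypothetical observed validity of .30:

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: estimate what the
    predictor-criterion correlation would be if both measures
    were perfectly reliable."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Illustrative values: hypothetical reliability distributions center
# on .80 for predictors and .60 for criteria. The observed validity
# of .30 is hypothetical, chosen only for illustration.
observed_validity = 0.30
corrected = correct_for_attenuation(observed_validity, r_xx=0.80, r_yy=0.60)
print(round(corrected, 3))  # 0.433
```

Because mean criterion reliability (.60) runs well below mean predictor reliability (.80), the criterion side contributes more of the attenuation, which is part of the article's case for taking criterion measurement as seriously as predictor measurement.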

Values in Criterion Research

  • There has been persistent tension between various constituencies affected by criteria of work performance; this is an issue of values
    • Management is often focused on the administrative and evaluative aspects of criterion measurement, which facilitates personnel decisions; research purists are often more concerned with the understanding provided by integrated construct validation and substantive research; employees tend to focus on the feedback and developmental functions of criterion measures used in performance appraisal
    • Even this three-constituency approach to characterizing competing values may be too simplistic. Within each of these interest groups are embedded several others, each with different aims
  • Research to investigate the costs and benefits of numerous perspectives is needed to advise organizations how to implement a performance evaluation process that optimizes individual and organizational goal attainment
  • Preferences regarding system/product/service performance need to be elicited from individuals who occupy positions that derive value from the organization’s outputs (i.e., customers)
    • Those values need to be somehow hierarchically ordered and weighted with a subset used to identify potentially critical aspects of performance for use in evaluating individual and organizational effectiveness. Measures of criteria can be combined into a composite representative of the weighted policies or profiles of values elicited from customers

Research-Practice Interface

  • There has been uneven integration between research and practice across topics reviewed