Delay reduction hypothesis

In operant conditioning, the delay reduction hypothesis (DRH; also known as delay reduction theory) is a quantitative account of how choice is allocated among concurrently available chained schedules of reinforcement. The hypothesis states that the greater the improvement in temporal proximity to reinforcement (delay reduction) correlated with the onset of a stimulus, the more effectively that stimulus will function as a conditional reinforcer.[1]
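In its original concurrent-chains formulation, the hypothesis is typically expressed as a choice equation. The following is a sketch for the case of equal initial links, with notation chosen here for illustration rather than drawn from a specific source:

\[
\frac{R_L}{R_L + R_R} = \frac{T - t_L}{(T - t_L) + (T - t_R)}
\]

where R_L and R_R are response rates during the (equal) initial links of the two chains, t_L and t_R are the expected times to primary reinforcement signalled by the onsets of the left and right terminal-link stimuli, and T is the overall average time to primary reinforcement measured from the onset of the initial links. The equation applies when both delay reductions, T − t_L and T − t_R, are positive; if entry into one terminal link signals no reduction in delay, exclusive preference for the other alternative is predicted. For example, with T = 60 s, t_L = 10 s, and t_R = 30 s, the predicted allocation to the left alternative is (60 − 10) / [(60 − 10) + (60 − 30)] = 50/80 = 0.625.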

The hypothesis was originally formulated to describe choice behaviour among concurrently available chained schedules of reinforcement;[2] however, the basic principle that delay reduction determines a stimulus's conditionally reinforcing function can be applied more generally to other research areas.[1][3][4]

A variety of empirical findings corroborate the DRH, making it one of the best-substantiated accounts of conditional reinforcement to date.[5]

  1. Fantino, E. (1977). Conditioned reinforcement: Choice and information. In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior (pp. 313–339). Prentice-Hall.
  2. Fantino, E. (1969). Choice and rate of reinforcement. Journal of the Experimental Analysis of Behavior, 12(5), 723–730. https://doi.org/10.1901/jeab.1969.12-723
  3. Fantino, E. (2012). Optimal and non-optimal behavior across species. Comparative Cognition & Behavior Reviews, 7, 44–54. https://doi.org/10.3819/ccbr.2012.70003
  4. Shahan, T. A., & Cunningham, P. (2015). Conditioned reinforcement and information theory reconsidered. Journal of the Experimental Analysis of Behavior, 103(2), 405–418. https://doi.org/10.1002/jeab.142
  5. Williams, B. A. (1994). Conditioned reinforcement: Neglected or outmoded explanatory construct? Psychonomic Bulletin & Review, 1(4), 457–475. https://doi.org/10.3758/BF03210950