Session Overview
Session: IS2: On the Effect of Item Positions in Tests
Time: Thursday, 23/Jul/2015, 4:30pm - 6:00pm
Session Chairs: Karl Schweizer, Siegbert Reiß
Location: KO2-F-180 (VI), capacity: 372

Presentations

On the effect of item positions in tests

Chair(s): Karl Schweizer (Goethe University Frankfurt, Germany), Siegbert Reiß (Goethe University Frankfurt, Germany)

The item-position effect is usually observable when test takers complete a homogeneous set of items that constitute a psychological scale, because successively completing a number of items that demand the same ability or trait modifies performance. The repeated engagement of the same cognitive processes can involve automation, facilitation, clustering, maintenance of information, and learning. The consequence is an increasing degree of dependency among the responses to the successively presented items, that is, an increasing degree of consistency in responding from the first to the last items. Although this effect has been known for quite some time, the major models of measurement do not take it into consideration.

The presentations will provide further evidence of the item-position effect in different psychological scales and report new developments in how the effect is represented and investigated. There will be reports of the item-position effect in the Advanced Progressive Matrices, Cattell’s Culture Fair Test, and the Viennese Matrices Test. The new developments encompass both IRT and CFA approaches and aim to enable more appropriate representations of the item-position effect and better ways of separating what is due to the effect from what is a pure representation of the construct.
 

Presentations of the Symposium

 

The impact of the position effect on the factorial structure of the Culture Fair Test (CFT)

Stefan J. Troche1, Felicitas L. Wagner2, Karl Schweizer3, Thomas H. Rammsayer2; Stefan.Troche@uni-wh.de
1Private Universität Witten/Herdecke, Germany, 2University of Bern, Switzerland, 3Goethe-University Frankfurt, Germany

The Culture Fair Test (CFT) is a psychometric test of fluid intelligence consisting of four subtests: Series, Classification, Matrices, and Topographies. The four subtests are only moderately intercorrelated, casting doubt on the notion that they assess the same construct (i.e., fluid intelligence). As an explanation of these low correlations, we investigated the position effect, which is assumed to reflect implicit learning during testing. By applying fixed-links modeling to the CFT data of 206 participants, we identified position effects as latent variables in the Classification, Matrices, and Topographies subtests. These position effects were disentangled from a second set of latent variables representing the fluid intelligence inherent in the four subtests. After this separation of position effect and basic fluid intelligence, the latent variables representing basic fluid intelligence in the Series, Matrices, and Topographies subtests could be combined into one common latent variable, which was highly correlated with fluid intelligence derived from the Classification subtest (r = .72). Correlations between the three latent variables representing the position effects in the Classification, Matrices, and Topographies subtests ranged from r = .38 to r = .59. The results indicate that all four CFT subtests measure the same construct (i.e., fluid intelligence) but that the position effect confounds the factorial structure.
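
As an illustrative sketch of the fixed-links decomposition (notation assumed here, not taken from the abstract), each item response x_i within a subtest can be written as

    % Fixed-links decomposition (illustrative notation): a fluid-intelligence
    % factor with loadings fixed to 1 on every item, plus a position factor
    % whose fixed loadings increase with the item position i.
    x_i = 1 \cdot \eta_{\mathrm{gf}} + \lambda_i \, \eta_{\mathrm{pos}} + \varepsilon_i ,
    \qquad \lambda_1 \le \lambda_2 \le \dots \le \lambda_p \quad \text{(fixed, increasing)}

so that variance due to the position effect is separated from variance due to fluid intelligence before the subtest factors are related to each other.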
 

The position effect in a Rasch-homogeneous test: A fixed-links modeling approach

Philipp Thomas1, Thomas H. Rammsayer1, Karl Schweizer2, Stefan J. Troche3; philipp.thomas@psy.unibe.ch
1University of Bern, Switzerland, 2Goethe University Frankfurt, Germany, 3Private Universität Witten/Herdecke, Germany

The position effect describes the influence of just-completed items in a psychological scale on subsequent items. This effect has been repeatedly reported for psychometric reasoning scales and is assumed to reflect implicit learning during testing. One way to identify the position effect is fixed-links modeling. With this approach, two latent variables are derived from the test items. The factor loadings of one latent variable are fixed to 1 for all items to represent ability-related variance; the factor loadings on the second latent variable increase from the first to the last item, describing the position effect. Previous studies using fixed-links modeling to examine the position effect investigated reasoning scales constructed in accordance with classical test theory (e.g., Raven’s Progressive Matrices) but, to the best of our knowledge, no Rasch-scaled tests, although such tests meet stronger requirements on item homogeneity. In the present study, we will therefore analyze data from 239 participants who completed the Rasch-scaled Viennese Matrices Test (VMT). Applying a fixed-links modeling approach, we will test whether a position effect can be depicted as a latent variable and separated from a latent variable representing basic reasoning ability. The results have implications for the assumption of homogeneity in Rasch-homogeneous tests.
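
A minimal numerical sketch of this loading structure (Python; the item count and parameter values are placeholders, not the authors' code):

    import numpy as np

    # Fixed-links loading structure for p items and two latent variables:
    # an ability factor (all loadings fixed to 1) and a position factor
    # (loadings increasing from the first to the last item).
    p = 24                                        # item count (assumed)
    Lambda = np.column_stack([
        np.ones(p),                               # ability: fixed to 1
        np.arange(p) / (p - 1),                   # position: linear 0..1
    ])

    # Placeholder latent and residual (co)variances; in a real analysis
    # these are the free parameters estimated from the data.
    Phi = np.diag([1.0, 0.5])
    Theta = 0.8 * np.eye(p)

    # Model-implied covariance: Sigma = Lambda * Phi * Lambda' + Theta
    Sigma = Lambda @ Phi @ Lambda.T + Theta
    print(Sigma.shape)                            # (24, 24)

Only the latent and residual (co)variances are estimated; the loadings themselves stay fixed, which is what gives the approach its name.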
 

Predictors of an individual decrease in test performance during the PISA assessments

Johannes Hartig1, Janine Buchholz1, Dries Debeer2, Rianne Janssen2; hartig@dipf.de
1DIPF, Germany, 2KU Leuven, Belgium

Item position effects have been shown repeatedly in large-scale assessments of student achievement. In addition to a fixed effect of items becoming more difficult over the course of the test, there are individual differences in this effect: students differ in the extent to which their performance declines during the test. These interindividual differences have been labelled “persistence” in previous studies. The present study aims at gaining a better understanding of the nature of these differences by relating them to student characteristics. The analyses make use of the PISA 2006 and 2009 assessments of science and reading, respectively, using data from several European countries. Gender, the language spoken at home, socio-economic status, and the motivational scales “effort thermometer” (2006 assessment) and “joy of reading” (2009 assessment) were used as predictors of persistence. Position effects and persistence are modelled by a logistic multilevel regression model, which is equivalent to an extension of the Rasch model. Effects of gender, language, and reported test effort are inconsistent across countries; e.g., girls show higher persistence only in some countries. The effect of reported joy of reading is small but consistent across all countries, indicating that at least part of the individual differences is caused by individual differences in subject-specific motivation.
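
A sketch of the kind of model described here (notation assumed for illustration; the exact parameterization is given in the study):

    % Rasch model extended with a fixed position effect and person-specific
    % persistence (illustrative notation): theta_p = ability, beta_i = item
    % difficulty, pos_i = (scaled) position of item i in the booklet,
    % delta = fixed position effect, gamma_p = person-specific deviation
    % ("persistence").
    \operatorname{logit}\,\Pr(X_{pi} = 1) = \theta_p - \beta_i + (\delta + \gamma_p)\,\mathrm{pos}_i

Student characteristics such as gender or joy of reading then enter the multilevel structure as explanatory variables for the person-specific term.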
 

Modeling response omissions in tests using a tree-based IRT approach

Dries Debeer, Rianne Janssen; Rianne.Janssen@ppw.kuleuven.be
KU Leuven, Belgium

Reported item position effects in large-scale assessments often pertain to items becoming more difficult towards the end of the test and to respondents differing in their level of persistence in completing the test. Both phenomena may be partly due to the increased occurrence of missing responses towards the end of the test and individual differences therein. In fact, two types of missing responses are possible: respondents may omit certain items well before reaching their last answered item, leading to “skipped items”, or they may not complete the entire test and drop out before the end of the assessment, leading to “not-reached items”. Both types of missing responses may be related to the proficiency of the respondent and therefore cause non-ignorable missingness. Several studies have proposed ways to deal with these missing responses. In the present paper, an IRTree-based approach will be presented in which both types of missing responses are modeled together with the proficiency process. The IRTree models can be applied to both power and speed tests and are fairly easy to specify. Apart from the results of several simulation studies, the analysis of a speed test on mental arithmetic from a Flemish national assessment will be discussed.
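
As a hedged illustration of the recoding such tree models rest on (a hypothetical three-node tree; the authors' exact tree may differ), each observed response is mapped to pseudo-items for the internal nodes:

    # Hypothetical IRTree recoding (illustrative, not the authors' exact tree).
    # Node 1: was the item reached? Node 2 (if reached): answered or skipped?
    # Node 3 (if answered): correct or incorrect? None = node not visited.
    def irtree_recode(response):
        """response is 'correct', 'incorrect', 'skipped', or 'not_reached'."""
        reached = 0 if response == "not_reached" else 1
        answered = None if reached == 0 else (0 if response == "skipped" else 1)
        correct = None if answered in (None, 0) else (1 if response == "correct" else 0)
        return reached, answered, correct

    for r in ("correct", "incorrect", "skipped", "not_reached"):
        print(r, irtree_recode(r))

Each pseudo-item can then receive its own IRT model, so that skipping and dropping out are modeled jointly with proficiency instead of being treated as ignorable.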
 

On the search for the best possible representation of the item-position effect: A simulation study based on APM

Florian Zeller, Siegbert Reiss, Karl Schweizer; Florian.zeller@outlook.com
Goethe University Frankfurt, Germany

The item-position effect describes the impact of previously completed items on the following items. In previous studies the item-position effect was represented by constraints reflecting functions, for example a linear function. This kind of representation is inflexible regarding the specificities of the items, which raises the question of whether it is the best possible way of representing the effect. Accordingly, our aim was to optimize the representation of the item-position effect for the items of Raven’s Advanced Progressive Matrices (APM). We disassembled the 36 APM items into two, three, four, and six same-sized subsets of neighboring items for separate investigation. Analyses were conducted on data simulated according to the covariance matrix of the APM items, based on the data of 530 participants. As in former studies, we used fixed-links models to test different representations of the item-position effect. Besides the standard model with only one latent variable, we analyzed linear, quadratic, and logarithmic trends of the item-position effect. The results revealed an increase of true variance from the first to the last items, just as expected, but the course of the increase varied in slope.
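
The competing trend constraints can be sketched as follows (Python; illustrative scalings of the fixed loadings, which may differ from the exact constraints used in the study):

    import numpy as np

    # Three candidate courses for the fixed loadings of the
    # item-position latent variable across the 36 APM items.
    p = 36
    i = np.arange(1, p + 1)                  # item positions 1..36

    linear      = (i - 1) / (p - 1)          # constant slope, 0..1
    quadratic   = ((i - 1) / (p - 1)) ** 2   # slow start, accelerating
    logarithmic = np.log(i) / np.log(p)      # fast start, decelerating

    print(linear[:3], quadratic[:3], logarithmic[:3])

All three fix the loadings to increase from the first to the last item; they differ only in where the increase is steepest.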