Session Overview
Session: S2: Online Assessment and Internet-Based Research
Time: Thursday, 23/Jul/2015, 4:30pm - 6:00pm
Session Chairs: Ulf-Dietrich Reips, Stefan Stieger
Location: KOL-G-217 (IV), capacity: 125

Presentations

Online assessment and internet-based research

Chair(s): Ulf-Dietrich Reips (University of Konstanz, Germany), Stefan Stieger (University of Konstanz, Germany; University of Vienna, Austria)

During the last decades, online assessment and testing have become an indispensable data source for research, and not only in the fields of personality psychology, intelligence, and achievement. The Internet provides a powerful infrastructure for data collection, and many researchers have taken advantage of it to conduct basic and applied research. This symposium will cover new developments and present tools and examples of how to use the Internet for online assessment and research.

This session intends to give an overview of the online assessment/testing expansion in recent years, including topics such as the rise of mobile computing (smartphones) in research, relationships between self-reported executive problems, personality, and cognitive performance, self-ratings versus observers' ratings of personality on Facebook profiles, and how to handle and analyze dropout in Internet-based research. The session will also explore the technical and ethical issues in the use of data generated by online assessment/testing as well as the added value and benefits of such data. Special attention will be given to the development of the International Personality Item Pool (IPIP) as a possible blueprint for online cross-cultural personality assessment in the public domain.
 

Presentations of the Symposium

 

Smartphone apps in psychological science: Results from an experience sampling method study

Stefan Stieger1, Ulf-Dietrich Reips2; stefan.stieger@uni-konstanz.de
1University of Konstanz, Germany; University of Vienna, Austria, 2University of Konstanz, Germany

Data collection methods in the social and behavioral sciences have always been inspired by new technologies. The introduction of the Internet had a major impact in advancing the methodological repertoire of researchers, with Internet-based experiments, online questionnaires, and non-reactive online data collection methods, to name just a few. Meanwhile, the next major technological impact is hitting research: smartphones. The penetration rate of these small mobile devices is increasing rapidly, and they offer a multitude of sensors that can be used for scientific research (e.g., GPS, gyroscope, accelerometer, temperature sensors). We report a smartphone app field study on well-being conducted in German-speaking countries (n = 219). The study ran for 14 days with three measurements per day, yielding more than 8,000 well-being judgments. Based on this study, we discuss important aspects of planning a smartphone study (e.g., programming, implementation, pitfalls, and recruitment strategies). The presentation aims not only to present empirical data from an exemplary smartphone study, but also to highlight the unique aspects of smartphone studies compared to traditional data collection methods.
 

What do self-report measures of problems with executive function actually measure? Data from internet and laboratory studies

Tom Buchanan; T.Buchanan@westminster.ac.uk
University of Westminster, United Kingdom

Measuring executive function interests researchers and practitioners in a number of psychological fields. Self-report measures of executive problems may have considerable value, especially for research conducted via the internet. They are easier to implement online than traditional cognitive tests, and arguably have greater ecological validity as indices of everyday problems. However, there are questions about whether they actually measure executive function, or other constructs such as personality. Relationships between self-reported executive problems, personality, and cognitive performance were assessed in three correlational studies using non-clinical samples. In Study 1, 49398 participants completed online measures of personality and self-reported executive problems. In Study 2, 345 participants additionally completed an online Digit Span task. In Study 3, 103 participants in a traditional laboratory setting completed multiple measures of personality, self-reported executive problems, and objective cognitive tests.
Across all three studies, self-reported problems correlated with neuroticism and with low conscientiousness, with medium to large effect sizes. However, self-reported problems did not correlate with performance on Trail Making, Phonemic Fluency, Semantic Fluency, or Digit Span tests tapping aspects of executive function. These findings raise questions about self-report measures of executive problems, both on the Internet and offline.
 

Self-ratings of personality and observers' ratings based on Facebook profiles

Boris Mlačić, Goran Milas, Ivna Sladić; Boris.Mlacic@pilar.hr
Institute of Social Sciences Ivo Pilar, Croatia

The aim of the study was to investigate the relationship between self-ratings of personality and expert observers’ ratings of personality based on the Facebook profiles of target persons. The self-rating sample consisted of 177 participants with active Facebook profiles, assessed between March and June 2014. Expert observers were students in the final year of a master’s course in psychology with training in personality psychology. Personality traits from the Big-Five model were assessed with the IPIP50 (Goldberg, 1999; Mlačić & Goldberg, 2007), while Facebook usage was assessed with the Questionnaire of Facebook Use (Ross et al., 2009). Observers’ ratings of personality were based on data from the Facebook profiles, where each of the five personality dimensions (Extraversion, Agreeableness, Conscientiousness, Emotional Stability, and Intellect) was briefly defined. The results showed significant relations between Facebook usage and Agreeableness and Extraversion, respectively. Observers’ personality ratings correlated significantly with self-ratings of Conscientiousness and Intellect, while agreement between observers was highest for the dimensions of Extraversion, Conscientiousness, and Emotional Stability.
 

Dropout analysis with DropR: An R-based web app to analyze and visualize dropout

Ulf-Dietrich Reips1, Matthias Bannert2; reips@uni-konstanz.de
1University of Konstanz, Germany, 2ETH Zurich, Switzerland

In Internet-based research, non-response, such as missing responses to particular items, and dropout have become interesting dependent variables in their own right, owing to highly voluntary participation and large numbers of participants (Reips, 2000, 2002). In this paper we develop and discuss the methodology of using and analyzing dropout in Internet-based research, and we present DropR, a web app to analyze and visualize dropout. The web app was written in R, a free software environment for statistical computing and graphics.
Among other features, DropR turns input datasets in various formats into visual displays of dropout curves. It calculates parameters relevant to dropout analysis, such as chi-square values and odds ratios for points of difference, initial drop, and the percentage remaining in stable states. With automated inferential components, it identifies critical points in dropout and critical differences between dropout curves for different experimental conditions, and it produces the corresponding statistical output. The visual displays are interactive: users can hover, drag, and click to identify regions within a display for further analysis. DropR is provided as a free R package under the GPL-2 license (http://cran.r-project.org/web/licenses/GPL-2) and as a web service (http://dropr.eu), from researchers for researchers.
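To make the core quantities concrete, the following is a minimal illustrative sketch, not DropR itself, of the two basic dropout statistics described above: a per-item "percent remaining" curve, and a Pearson chi-square test for a difference in dropout between two conditions at a given point. All function names and data below are hypothetical.

```python
def remaining_curve(last_item, n_items):
    """Percent of participants still active at each item.

    last_item: list of the last item index (1-based) each participant answered.
    Returns a list of percentages, one per item, forming the dropout curve.
    """
    n = len(last_item)
    return [100.0 * sum(1 for li in last_item if li >= i) / n
            for i in range(1, n_items + 1)]

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    e.g., dropped vs. remained in condition 1 vs. condition 2."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical example: 10 participants in a 5-item study.
curve = remaining_curve([5, 5, 3, 5, 2, 5, 4, 5, 1, 5], n_items=5)
# curve -> [100.0, 90.0, 80.0, 70.0, 60.0]

# Dropped/remained counts at one item for two conditions.
chi2 = chi_square_2x2(10, 40, 25, 25)
```

A tool like DropR would compute such curves per experimental condition and compare them at each item to locate points of critical difference.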
 

Measuring narcissism online: Development and validation of a brief web-based instrument

Tim Kuhlmann, Michael Dantlgraber, Ulf-Dietrich Reips; tim.kuhlmann@uni-konstanz.de
University of Konstanz, Germany

Narcissism continues to be a widely researched topic in psychology, and the scientific community is in need of validated online instruments. The present paper describes the development and validation of a questionnaire for the web-based assessment of sub-clinical narcissism. Several versions were developed, including items from the original NPI-40 (Raskin & Terry, 1988) and from the open item database IPIP. Using the multiple-site-entry technique (Reips, 2000), a sample of 1,972 participants was recruited. They answered the original 40 items of the NPI-40 in either choice or Likert-type format, as well as 80 items from the IPIP with a Likert-type answer format. The NPI-40 in its original choice format showed unsatisfactory fit characteristics in a CFA. After factor analysis of all Likert-type items, an 18-item narcissism questionnaire with three intercorrelated subscales emerged; these were labeled importance, manipulation, and vanity. The overall narcissism score had good internal consistency (α = .91), and the subscales showed acceptable reliabilities (α = .78 to .83). The final scale was validated in a separate sample of 549 participants, in which the three-factor structure was replicated and similar psychometric properties were shown. The questionnaire provides researchers with a brief, validated instrument for the web-based assessment of narcissism and its sub-facets.
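As an illustration of the internal-consistency statistic (Cronbach's α) reported for the scale and its subscales, here is a minimal Python sketch of the standard formula α = (k/(k-1))·(1 - Σσ²_item / σ²_total); the function name and data are hypothetical and not from the study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k item-score lists, each of equal length
    (one score per participant per item).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Population variance.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(var(item) for item in items)
    # Per-participant total scores across all items.
    totals = [sum(item[p] for item in items) for p in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical example: two perfectly consistent items give alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
# alpha -> 1.0
```

Values such as the reported α = .91 indicate that item scores covary strongly relative to their individual variances.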