Keynote Speakers

Dr Norman Verhelst

Eurometrics

Title: The balance in the performance: a missed opportunity in achievement testing

For many years my main interest has been to combine the advantages of conditional maximum likelihood estimation with models that are more general and powerful than the simple Rasch model. This resulted in the program package OPLM, which is still freely available for research purposes.
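To sketch the idea behind this combination (this summary is mine, not part of the abstract): in the Rasch model the probability of a correct response is

\[
P(X_{pi}=1 \mid \theta_p) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},
\]

and the raw score \(r_p = \sum_i x_{pi}\) is sufficient for the ability \(\theta_p\), so the likelihood conditional on \(r_p\),

\[
P(\mathbf{x}_p \mid r_p) = \frac{\exp\bigl(-\sum_i x_{pi} b_i\bigr)}{\gamma_{r_p}\bigl(e^{-b_1},\dots,e^{-b_k}\bigr)},
\]

with \(\gamma_r\) an elementary symmetric function, is free of the person parameters and can be maximised for the item parameters alone. OPLM preserves this property in a more flexible model by fixing integer discrimination indices \(a_i\) in advance, so that the weighted score \(\sum_i a_i x_{pi}\) remains sufficient for \(\theta_p\).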

A great deal of this effort was dedicated to the development of well-founded goodness-of-fit tests for statistical models. More recently it has become increasingly clear that yes-or-no statistical testing is a dead end, and that it may be more constructive to summarize what the simple statistical models are missing. One of the big problems in this approach will be discussed at the AEA-Europe conference.

Norman Verhelst studied psychology at the Catholic University of Leuven (Belgium) and was assistant professor at the universities of Leuven (Belgium), Nijmegen (The Netherlands) and Utrecht (The Netherlands). From 1985 to 2010 he was senior researcher at the National Institute of Educational Measurement (Cito) in Arnhem, a position he combined from 1992 to 2001 with a professorship at the University of Twente (The Netherlands). Since 2011 he has been working at the psychometric institute Eurometrics in Tiel, doing psychometric research and consultancy.

Prof Dr Samuel Greiff

Université du Luxembourg Maison des Sciences Humaines
Title: Educational assessment and its prospects in the 21st century
  • Educational Assessment
  • Large-Scale Assessments
  • Computer-Based Assessment
  • Cognitive Psychology
  • Educational Policy

Prof Dr Samuel Greiff is research group leader, principal investigator, and ATTRACT fellow at the University of Luxembourg. He holds a PhD in cognitive and experimental psychology from the University of Heidelberg, Germany (passed with distinction).

Prof Greiff has been awarded several national and international research grants by diverse funding organizations such as the German Ministry of Education and Research and the European Union (overall funding approx. 9.3 M €), and is currently a fellow in the Luxembourg research programme of excellence. He has published in national and international scientific journals and books (>90 contributions in peer-reviewed journals, many of them leading in their field) and has an extensive record of conference contributions and invited talks (>200 talks). He serves as editor for several journals, for instance as editor-in-chief of the European Journal of Psychological Assessment, as associate editor of Thinking Skills & Creativity, and as guest editor for the Journal of Educational Psychology, Computers in Human Behavior, and the Journal of Business & Psychology. He regularly provides ad-hoc reviews for around 40 different journals and currently serves on five editorial boards.

He has been and continues to be involved in the 2012, 2015, and 2018 cycles of the Programme for International Student Assessment (PISA), for instance as external advisor to the PISA 2012 and 2015 Expert and Subject Matter Expert Groups and as contracting partner at his institution. He also serves as chair of the problem-solving expert group for the 2nd cycle of the Programme for the International Assessment of Adult Competencies (PIAAC). In these positions, he has considerably shaped the understanding of transversal skills across several large-scale assessments.

He has been working for several years on the assessment of transversal skills such as complex and collaborative problem solving and on their role in the classroom, at work, and in private life. Currently, he is involved in the large-scale assessment of problem solving, collaboration, and lifelong learning in various populations, and leads a team of test developers, research assistants and graduate students (4 postdocs and 12 PhD students) dedicated to advancing the understanding, measurement, and application of different aspects of transversal skills and lifelong learning in educational contexts and working life.

Prof Dr Eliane Segers

Behavioural Science Institute (BSI) at Radboud University
Title: On the assessment of (digital) reading skills

Eliane’s research focuses on individual variation in learning and learning problems, with a specific interest in how the use of ICT may foster learning. Her research has both a fundamental and an applied focus, with strong societal relevance. She has also (co-)developed games for educational use, focusing on emergent literacy, reading fluency, scientific thinking, and museum visits. Together with the Expertisecentrum Nederlands, she is involved in the development of www.samenonderzoeken.nl, a website containing, among other things, best-practice videos on teaching STEM in primary schools.

Eliane Segers studied Language, Speech and Computer Sciences (MA) as well as Cognitive Science (MSc) at Radboud University, Nijmegen. She received a PhD in Social Sciences at the same university in 2003 (“Multimedia support of early literacy learning”). Currently, she is professor of Learning and Technology at the Behavioural Science Institute (BSI) at Radboud University, where she coordinates the institute’s PhD programme. She is also professor of Reading and Digital Media in the department of Instructional Technology at the University of Twente, an endowed chair funded by Stichting Lezen (“Reading Foundation”, Amsterdam).

In my talk, I will take you from kindergarten to secondary school, discussing the assessment and impact of (digital) reading skills. My focus is on research in the Netherlands, where children learn to read via a phonics-based method in a rather transparent orthography. In such an orthography, reading efficiency is a better indicator of reading problems than reading accuracy.

In kindergarten, children develop phonological awareness, which is highly predictive of beginning reading; it already predicts decoding ability after only four weeks of formal reading instruction in first grade. In nationwide student tracking systems, however, reading ability is only assessed after the first half year of first grade. Earlier, curriculum-based interim measures of word-reading efficiency are valuable instruments for teachers, also for identifying early those children at risk of reading problems and/or dyslexia. When children have problems gaining reading speed, games can be used, and we have shown the effectiveness of such games both in typically reading third graders and in poor-reading second graders. I will discuss the possibilities of online gaming behaviour as a form of assessment.

In the upper grades, word-reading ability is an important prerequisite for reading comprehension. Reading comprehension is often measured via questions that mostly tap the child’s ability to find the relevant piece of the text that the question is about, or to combine different elements of the text (local inferencing). Ultimately, however, the reader builds a situation model of the text, and this is more difficult to assess. I will discuss the pros and cons of mind maps, but will mostly focus on the use of pathfinder networks.
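For readers unfamiliar with the technique: a pathfinder network sparsifies a matrix of pairwise concept dissimilarities (e.g., relatedness judgements about key concepts in a text) by keeping a direct link only if no indirect path has a smaller largest step. Below is a minimal sketch of this minimax rule; the function name and toy numbers are illustrative, not material from the talk.

```python
import numpy as np

def pathfinder_network(dist, tol=1e-9):
    """Sparsify a symmetric dissimilarity matrix into a PFnet(q = n-1, r = inf).

    An edge (i, j) survives only if no indirect path between i and j has a
    smaller largest step than the direct distance (the classic minimax rule).
    """
    n = dist.shape[0]
    minimax = dist.copy()
    # Floyd-Warshall with (min, max) composition: minimax[i, j] becomes the
    # smallest achievable "largest step" over all paths from i to j.
    for k in range(n):
        via_k = np.maximum(minimax[:, [k]], minimax[[k], :])
        minimax = np.minimum(minimax, via_k)
    # Keep an edge only if the direct link is itself a minimax path.
    adjacency = np.abs(dist - minimax) < tol
    np.fill_diagonal(adjacency, False)
    return adjacency

# Toy example with three concepts (hypothetical numbers, for illustration only).
d = np.array([[0.0, 1.0, 4.0],
              [1.0, 0.0, 2.0],
              [4.0, 2.0, 0.0]])
print(pathfinder_network(d))
# The weak direct link between concepts 0 and 2 (distance 4.0) is pruned,
# because the path 0 -> 1 -> 2 has a largest step of only 2.0.
```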

Finally, in this part of the talk I will also include research on hypertext comprehension, in combination with the e-reading assessment in the PISA studies, and discuss how developments in the digital society call for adaptations in the assessment of (digital) reading skills.

Prof Dr Maria Bolsinova

Winner of the Kathleen Tattersall New Assessment Researcher Award (KTNAR)
Title: Response times in educational assessment: Moving beyond traditional assumptions

Maria Bolsinova is a research scientist at ACTNext, Amsterdam, The Netherlands. Prior to joining the team she was a postdoctoral researcher at the University of Amsterdam for two years. She received her PhD in Psychometrics cum laude from Utrecht University in The Netherlands. Her doctoral dissertation, Balancing Simple Models and Complex Reality: Contributions to Item Response Theory in Educational Measurement, was awarded the Psychometric Society’s prestigious Dissertation Prize in 2017. Maria received her MSc and BSc degrees in Psychology at Moscow University and her MSc in Methodology and Statistics at Leiden University. Her postdoctoral and doctoral research was devoted to developing advanced psychometric models for the analysis of product and process data (primarily response times) from educational and psychological tests. At ACTNext, she contributes her expertise in psychometrics, educational measurement and Bayesian statistics towards developing statistical models for innovative dynamic assessment. With her research, she aims to develop statistical tools that improve the quality of assessment in practice and help deliver innovative learning and assessment systems.

With the increasing popularity of computerised testing, many applications of educational testing record not only the accuracy of the response but the response time as well. This additional information provides a more complex picture of the response process. The two most important reasons to consider response times are: 1) to increase the precision of ability estimation by using response times as collateral information; and 2) to gain further insight into the underlying response processes. The hierarchical modelling framework for response times and accuracy (van der Linden, 2007), which has become the standard approach for jointly modelling response times and accuracy in educational measurement, provides a clear structure for studying both, but it rests on a set of assumptions which may not match the complex picture that arises when realistic response processes are considered. In this presentation, the simple-structure assumption and the assumption of conditional independence between response times and accuracy will be considered critically, and statistical models that relax these assumptions will be proposed.

Our first model relaxes the simple-structure assumption by including cross-loadings from ability to response times, which allows more information from response times to be used to improve the precision of ability measurement: not only differences in how much time persons use in total, but also differences in how this time is allocated across items. Second, we present a set of models that relax the assumption of conditional independence between response time and accuracy. These models can be used to explore the presence and nature of conditional dependencies, which may be relevant from a substantive point of view since they shed light on interesting phenomena in response processes and on possible between- and within-person differences. In addition to presenting statistical models for conditional dependence, we also look at them from a more psychological perspective and discuss different kinds of phenomena that may lead to this dependence, as well as potential ways of distinguishing between them.
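For orientation, a sketch of the standard framework (not taken from the presentation itself): in van der Linden’s (2007) hierarchical model, accuracy and speed are modelled separately at the first level, e.g.

\[
P(X_{pi}=1 \mid \theta_p) = \frac{\exp\{a_i(\theta_p - b_i)\}}{1 + \exp\{a_i(\theta_p - b_i)\}},
\qquad
\ln T_{pi} \sim N\!\bigl(\beta_i - \tau_p,\ \sigma_i^2\bigr),
\]

with the person parameters linked at the second level, \((\theta_p, \tau_p) \sim N_2(\boldsymbol{\mu}, \boldsymbol{\Sigma})\). Simple structure means that ability \(\theta_p\) enters only the accuracy model and speed \(\tau_p\) only the time model; conditional independence means \(X_{pi}\) and \(T_{pi}\) are independent given \((\theta_p, \tau_p)\). One common way to write a cross-loading relaxation (the exact parameterisation in the talk may differ) is

\[
\ln T_{pi} \sim N\!\bigl(\beta_i - \tau_p - \lambda_i \theta_p,\ \sigma_i^2\bigr),
\]

where a nonzero \(\lambda_i\) lets the time spent on item \(i\) carry direct information about ability.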