Actualidades Investigativas en Educación (Electronic ISSN: 1409-4703)

Expert-guided Responses: Validation of inferences based on response processes

Keywords

educational assessment
solving strategies
verbal reports
inter-rater agreement

How to Cite

Brizuela Rodríguez, A., Pérez Rojas, N., & Rojas Rojas, G. (2018). Expert-guided Responses: Validation of inferences based on response processes. Actualidades Investigativas en Educación, 18(3). https://doi.org/10.15517/aie.v18i3.33456

Abstract

This article presents a quantitative study that tested a new method for collecting evidence about the response processes examinees use in educational tests. The study is motivated by the need for systematic methods to improve the quality of the instruments used in educational assessment. A traditional verbal-report method was compared with the Expert-guided Responses method with respect to its suitability for gathering evidence to support score interpretation in an educational test. A convenience sample of 17 first-year students from the University of Costa Rica was selected, and each completed a semi-structured interview in which they solved reasoning items aloud using either the traditional verbal-report method or the Expert-guided Responses method. The interviews were coded by a group of raters, and an inter-rater agreement coefficient was calculated for each method. The traditional method yielded a Fleiss kappa of 0.22, whereas the Expert-guided Responses method yielded 0.40. The article concludes by discussing the advantages of the new method for developing educational assessment tools that adequately represent students' abilities, skills, proficiencies, and knowledge.
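
The agreement statistic reported in the abstract, Fleiss' kappa, extends Cohen's kappa to any fixed number of raters per subject. The sketch below shows how it is computed from a subjects-by-categories matrix of rater counts; the `fleiss_kappa` function and the example matrix are illustrative assumptions, not code or data from the study.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for a (subjects x categories) matrix of rater counts.

    ratings[i, j] = number of raters who assigned subject i to category j.
    Every row must sum to the same number of raters, n.
    """
    N, _ = ratings.shape                          # N subjects
    n = ratings[0].sum()                          # raters per subject
    assert (ratings.sum(axis=1) == n).all(), "each subject needs n ratings"

    # Observed agreement: mean proportion of agreeing rater pairs per subject.
    P_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()

    # Chance agreement: squared overall category proportions, summed.
    p_j = ratings.sum(axis=0) / (N * n)
    P_e = (p_j ** 2).sum()

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 5 interview segments coded by 3 raters into
# 3 strategy categories (not the counts from the study).
segments = np.array([
    [3, 0, 0],
    [2, 1, 0],
    [0, 3, 0],
    [1, 1, 1],
    [0, 2, 1],
])
print(round(fleiss_kappa(segments), 2))
```

On this toy matrix the sketch prints 0.23. The statistic approaches 1 as raters converge on the same code for each segment and falls toward 0 when their codes agree no more often than chance would predict.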

