390016 DK PhD-M: Experimental Methods for Behavioral Sciences (2024W)
Continuous assessment of course work
Registration/Deregistration
Note: The time of your registration within the registration period has no effect on the allocation of places (no first come, first served).
- Registration is open from Mo 09.09.2024 09:00 to Th 19.09.2024 12:00
- Deregistration possible until Mo 14.10.2024 23:59
Details
max. 15 participants
Language: English
Lecturers
Classes
- Monday 04.11. 09:45 - 18:15 Seminarraum 4, Oskar-Morgenstern-Platz 1, 1st floor
- Tuesday 05.11. 09:45 - 18:15 Seminarraum 4, Oskar-Morgenstern-Platz 1, 1st floor
- Wednesday 06.11. 09:45 - 14:45 Seminarraum 1, Oskar-Morgenstern-Platz 1, ground floor
- Wednesday 06.11. 15:00 - 18:15 Seminarraum 3, Oskar-Morgenstern-Platz 1, 1st floor
Information
Aims, contents and method of the course
The course covers experimental research methods for the behavioral sciences, with an emphasis on concrete examples, challenges, and solutions.
Topics:
The topics covered in the course include:
• Basic principles of experimental research
• Formulation of research questions and hypothesis development
• Experimental paradigms
• Design and manipulation
• Measurement
• Factorial designs (see the sketch after this list)
• Implementation of experiments
• Data analysis and reporting of results
• Advanced methods and complex experimental designs
• Ethical issues
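As an illustration of the "Factorial designs" and "Data analysis and reporting of results" topics, here is a minimal sketch (not course material) of a hypothetical 2 x 2 between-subjects experiment analyzed with a two-way ANOVA in Python using statsmodels; the factor names and simulated data are invented purely for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated data for a hypothetical 2 (message frame) x 2 (involvement) between-subjects design
rng = np.random.default_rng(2024)
df = pd.DataFrame({
    "frame": np.repeat(["gain", "loss"], 40),
    "involvement": np.tile(np.repeat(["low", "high"], 20), 2),
    "rating": rng.normal(4.0, 1.0, 80),  # dependent measure, e.g., a rating scale response
})

# Two-way ANOVA: main effects of frame and involvement, plus their interaction
model = ols("rating ~ C(frame) * C(involvement)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))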
Assessment and permitted materials
Each student is to identify 3 specific challenges related to experimental research methods that they are facing in their own (planned) research, and to submit a detailed description of each of these challenges.

The 3 days of in-person class time will be used for interactive, in-depth discussion of the student-generated challenges (and questions), complemented by mini lectures on specific topics.

Each student is to write a paper on a research question, or on a set of tightly related research questions, that they would like to examine rigorously using (at least in part) an experimental approach, and that they believe has the potential to be the foundation for a manuscript they might ultimately develop for submission to a major journal. This course paper should include (1) a conceptual part roughly resembling the front end of a journal article and (2) a proposal for the methods to be used to empirically examine the research question(s).
Minimum requirements and assessment criteria
A student’s overall grade for the course is based on the following components:
- Quality of Advance Submission: 10%
- Engagement and Thoughtful Participation During In-Person Sessions: 30%
- Quality of Course Paper: 60%
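For concreteness, here is a minimal sketch of how these weights combine into an overall grade, assuming (purely for illustration) that each component is scored on a 0-100 scale; the scores below are hypothetical.

# Hypothetical component scores on an assumed 0-100 scale
advance_submission = 85
participation = 90
course_paper = 78
overall = 0.10 * advance_submission + 0.30 * participation + 0.60 * course_paper
print(overall)  # 82.3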
Examination topics
Reading list
There is no textbook for this course.
However, here are some recommended books on the design (and analysis) of experiments:
Abdi, Edelman, Valentin, and Dowling (2009), Experimental Design and Analysis for Psychology, Oxford University Press.
Field and Hole (2003), How to Design and Report Experiments, Sage.
Keppel and Wickens (2004), Design and Analysis: A Researcher's Handbook, Pearson.
Kirk (2013), Experimental Design: Procedures for the Behavioral Sciences, Sage.
Martin (2007), Doing Psychology Experiments, Wadsworth.
Oehlert (2010), A First Course in Design and Analysis of Experiments, available online at: http://users.stat.umn.edu/~gary/book/fcdae.pdf
Online Statistics Education: A Multimedia Course of Study, Project Leader: David M. Lane, Rice University, available online at: http://onlinestatbook.com/2/index.html
In addition, the following papers are recommended as background readings for the course:
Oppenheimer, Meyvis, and Davidenko (2009), “Instructional Manipulation Checks: Detecting Satisficing to Increase Statistical Power,” Journal of Experimental Social Psychology, 45, 867-872.
Zhao, Lynch, and Chen (2010), “Reconsidering Baron and Kenny: Myths and Truths about Mediation Analysis,” Journal of Consumer Research, 37, 197-206.
Pieters (2017), “Meaningful Mediation Analysis: Plausible Causal Inference and Informative Communication,” Journal of Consumer Research, 44, 3, 692-716.
Spiller, Fitzsimons, Lynch, and McClelland (2013), “Spotlights, Floodlights, and the Magic Number Zero: Simple Effects Tests in Moderated Regression,” Journal of Marketing Research, 50, 277-288.
Cumming (2014), “The New Statistics: Why and How,” Psychological Science, 25, 1, 7-29.
Elrod, Häubl, and Tipps (2012), “Parsimonious Structural Equation Models for Repeated Measures Data, With Application to the Study of Consumer Preferences,” Psychometrika, 77, 2, 358-387.
Simmons, Nelson, and Simonsohn (2011), “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant,” Psychological Science, 22, 11, 1359-1366.
Simonsohn, Nelson, and Simmons (2014), “P-Curve: A Key to the File-Drawer,” Journal of Experimental Psychology: General, 143, 2, 534-547.
Goodman and Paolacci (2017), “Crowdsourcing Consumer Research,” Journal of Consumer Research, 44, 1, 196-210.
Meyvis and Van Osselaer (2018), “Increasing the Power of Your Study by Increasing the Effect Size,” Journal of Consumer Research, 44, 5, 1157-1173.
Morales, Amir, and Lee (2017), “Keeping It Real in Experimental Research—Understanding When, Where, and How to Enhance Realism and Measure Consumer Behavior,” Journal of Consumer Research, 44, 2, 465-476.
McShane and Böckenholt (2017), “Single-Paper Meta-Analysis: Benefits for Study Summary, Theory Testing, and Replicability,” Journal of Consumer Research, 43, 6, 1048-1063.
Vosgerau, Simonsohn, Nelson, and Simmons (2019), “99% Impossible: A Valid, or Falsifiable, Internal Meta-Analysis,” Journal of Experimental Psychology: General, 148, 9, 1628-1639.
Other Resources:
Amazon Mechanical Turk (a marketplace for “hiring” study participants):
www.mturk.com
CloudResearch (tools for participant recruitment; formerly known as TurkPrime):
www.cloudresearch.com
Prolific (platform for participant recruitment):
www.prolific.co
Qualtrics (an easy-to-use web-based system for implementing experiments):
www.qualtrics.com
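Several of the recommended readings above deal with statistical power and effect size. As a companion to those papers, here is a minimal sketch of an a priori sample-size calculation for a simple two-condition experiment using statsmodels in Python; the assumed effect size (Cohen's d = 0.5), alpha level, and target power are illustrative choices, not course requirements.

from statsmodels.stats.power import TTestIndPower

# Per-condition sample size for an independent-samples t-test,
# assuming a medium effect (Cohen's d = 0.5), alpha = .05, and 80% power
n_per_condition = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_condition))  # about 64 participants per condition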
Association in the course directory
Last modified: Th 31.10.2024 11:46