Peer review checklist¶
If you’re previewing a study someone else has requested feedback on, here are some things to think about as you look through it and make notes.
Study title¶
- Is it short and memorable?
- Note: it’s fine for a title to be cute but not super informative, as long as it has something to do with the content or purpose of the study and could be used to distinguish it from other similar studies. One of its most important functions is to be memorable enough that parents can say “hey, I did the XYZ study and [you should too! / I have a question about when my gift card will arrive / etc.]”
- Feel free to suggest any clever ideas you have!
- Would a follow-up study, another study from the same group, or another study using the same method fall naturally under the same title? If so, the name should be more specific.
- Does it include information that’s covered elsewhere, e.g. eligibility criteria or duration? Don’t duplicate this here. (E.g., “XYZ for 5-month-olds” or “XYZ: a 10-minute study about cats” should just be “XYZ.”)
Examples of good titles:
- “A fish, THE fish, MY fish” (babies’ understanding of articles)
- “Baby see, baby do” (neonatal imitation)
- “Cue the music!” (singing in a language cueing attention to that language)
- “Is two (voices) a crowd?” (kids’ ability to understand speech with one/many background voices)
- “Little drummers” (toddlers’ production of rhythm)
- “Does your baby know what you are thinking?” (infant theory of mind)
Examples of titles we asked for more specific versions of:
- “Thinking about actions”
- “Look at the pictures”
- “Emotion learning study”
Study image¶
- If people are pictured and they’re white, do they need to be? (It’s easy for too many researchers to “default to” pictures of white kids.)
- Is this image relevant to the study? (E.g., does it show a child in the eligible age range, or include images the child may see during the study?)
Short description / “What happens”¶
- Does this clearly explain to parents what they/their child will be doing during this experiment in a few sentences?
- Does it contain unnecessary detail (e.g. exact numbers or timing of trials) or information covered in other fields (e.g. duration)? These should be removed and covered only in the other field to avoid duplication and make it easier to keep information consistent if it changes.
- Does it adhere to the style guide?
Purpose / “What we’re studying”¶
- Is it accurate given the study design? (Really think this through; this has been a common challenge for researchers.)
- Does it explain what this particular study addresses (not a broader research program)?
- Note: it’s OK if the study is developing or testing out a measure, trying to get a baseline measurement for future work on X, etc. It’s better to actually say this than to say that the study is doing something it isn’t.
- Does it explain why this question matters? Note that neither “this hasn’t been studied before” nor “this ability is important in adults, so we’re finding out when it emerges in kids” explains why the question matters. Think about the different answers there could be: what is different about those worlds?
- Is it readable by a bright high schooler without being condescending? Check for unnecessarily complex wording or sentence structure.
- Does it adhere to the style guide?
Compensation and eligibility¶
- If providing compensation, have they included any conditions for payment? (E.g., the child needs to be in the age range, the child needs to be visible at some point, only one compensation per child.)
- Is compensation dependent on the child completing the study, or on the child’s behavior in any way? (This is generally not allowed.)
- If providing compensation, have they included information about how long it will take to receive? (Make sure this is consistently stated throughout the study!)
- Is the participant eligibility description easy to understand? (E.g., translate ages into commonly-used terms; don’t say your study is for children between 56 and 70 weeks old.)
- Are any eligibility criteria beyond age either language-based (e.g., speaking English or being bilingual) or rare (e.g., ASD)? We generally ask that other criteria be implemented as part of analysis, rather than preventing families from participating.
The study itself¶
- Is the listed duration accurate? Measure how long the study takes you to preview and let the study authors know.
- Are webcam setup & consent steps included? Does the information in the consent form make sense and avoid repetition?
- Are these at the start of the study? If they come later, is there a good reason, and do they still precede any data collection (including video recording)?
- If children need to be visible or arranged a particular way, do you get a chance to look at the webcam setup right before the study starts?
- If parents are facing away or have their eyes closed, is it clear when they need to do that and when they can stop? Are there any points where it might seem like there’s a problem with the study if they can’t see what’s going on?
- Is it clear what you as a parent should be doing during the study?
- Are the directions friendly? (I.e., they shouldn’t sound demanding or condescending.)
- Do things “flow”? Are there abrupt transitions?
- Are the instructions clear and straightforward (to the point you could read them while also supervising/holding a few children)? Is there ever an overwhelming amount of info on the screen at once?
- If possible, is there an indication to the parent of progress through the study during test trials, especially if the parent needs to be quiet or keep their eyes closed?
- Is audio clear enough to understand & reasonably well-balanced for volume throughout?
- Do you have any concerns about how data collection will work (e.g. whether children will be familiar with the ‘familiar objects’, how stimuli will look to a colorblind child, etc.) or suggestions?
Debriefing (after exit survey)¶
- Did they clearly explain the point of the study again (as in the purpose field, this needs to actually get at why the question matters)?
- Did they concretely walk through the study design and explain HOW the study will answer the question? This is the heart of the debriefing. Generally this will entail briefly explaining what happened during the study, what the dependent measure is and what it indexes if that’s not obvious, and an if-then prediction: e.g., if babies realize that she doesn’t know where the ball is, we expect them to look longer when she finds it right away, because that’s surprising!
- Did they explain the multiple conditions if there was randomization?
- Did they head off likely potential parental concerns/objections?
  - There are many reasons a child might answer a particular way on any given trial (e.g., first/last option, favorite objects); that’s why we average over lots of kids and trial types.
  - Make sure parents know their child may not have answered a particular way, looked more or less on a given trial, or successfully performed some action, and that’s OK.
- Did they restate information about compensation and when to expect it? (Make sure this is stated consistently throughout the study.)
- Did they link to someplace to learn more about this general topic if possible? (e.g. TED talk, popular science article, website with more games, journal paper, other educational video, etc.) Feel free to share ideas!
General things to think about¶
- Are any questions/tasks ambiguous or inappropriate for…
  - A single parent (due to choice, breakup/divorce, or death), an unmarried but partnered parent, a parent with a same-sex partner, a divorced parent who shares custody, or a parent with more than one partner
  - A family that lost a child in infancy (e.g., “how many siblings” type questions)
  - Multiracial families (e.g., questions about race where it’s ambiguous whether you care about the child, the parent, or both)
  - Adoptive parents (e.g., questions about prenatal history)
  - A parent under 20 (e.g., educational-background questions may be less informative measures)
  - A family of a child born very prematurely whose adjusted age does not match their chronological age, or who has developmental delays
  - A transgender parent or a parent of a gender-nonconforming child
  - You / someone you know! (This is not meant as an exhaustive list, just some examples of places where questions sometimes reveal hidden assumptions.)
- Are tasks/questions appropriate for the age range?
- Is the study aesthetically pleasing? (Remember, parents and children need to be able to stay engaged, and we don’t want things to come off as too “sterile.”)
- Is all audio clear and easy to understand? Is it as engaging as possible (intonation, pauses, etc.) given the constraints of the study? (Sometimes we default to an unnecessarily flat tone.)
- Are there any typos?
- Are there enough signposts to clearly direct you on what will be happening next?