
2020 SPSP New Orleans Convention


Evaluation and Selection Process for Symposia and Single Presenter Submissions

Have you ever wondered how SPSP science submissions are evaluated? Below we describe the evaluation process in an effort to improve transparency surrounding the convention. As part of these transparency efforts, we also describe elsewhere the processes for selecting a convention location and for determining registration costs.
All submissions are evaluated by three independent reviewers. Who are these reviewers, and how are they selected?
  1. Anyone with a PhD can self-nominate to be a reviewer. During the nomination process, potential reviewers indicate their areas of expertise using keywords.
  2. Co-chairs of the symposia and single-presenter panels decide on the final pool of reviewers from this self-nominated group, seeking to ensure a range of expertise areas.
  3. Once the submission portal has closed, two strategies are used for assigning reviewers to specific submissions (depending on the type of submission):
    1. Symposium submissions are matched to reviewers using keywords: we primarily match the first keyword listed for the symposium to the first keyword listed by each reviewer, falling back to the second keyword when necessary. As part of this process, we try to give reviewers an equal number of submissions to review.
    2. Single presenter submissions (i.e., posters and single presenter talks) are assigned randomly. This is because the number of submissions is too great to use a matching process.
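The two assignment strategies above can be sketched in code. This is purely an illustrative sketch, not SPSP's actual tooling; the field names (`name`, `keywords`, `title`), the load-balancing heuristic, and the default of three reviewers per submission are assumptions made for the example:

```python
import random
from collections import defaultdict

def assign_reviewers(symposia, reviewers, per_submission=3):
    """Illustrative sketch: match each symposium to reviewers whose first
    keyword matches its first keyword, falling back to the symposium's
    second keyword if needed, while balancing reviewer workloads.
    Data shapes are hypothetical, not SPSP's actual records."""
    load = defaultdict(int)  # submissions assigned to each reviewer so far

    def candidates(keyword):
        # Reviewers whose primary keyword matches, least-loaded first.
        return sorted((r for r in reviewers if r["keywords"][0] == keyword),
                      key=lambda r: load[r["name"]])

    assignments = {}
    for sub in symposia:
        pool = candidates(sub["keywords"][0])
        if len(pool) < per_submission and len(sub["keywords"]) > 1:
            # Fall back to the second keyword when the primary pool is thin.
            pool += [r for r in candidates(sub["keywords"][1]) if r not in pool]
        chosen = pool[:per_submission]
        for r in chosen:
            load[r["name"]] += 1
        assignments[sub["title"]] = [r["name"] for r in chosen]
    return assignments

def assign_randomly(single_presenter_titles, reviewers, per_submission=3):
    """Single-presenter submissions: reviewers are drawn at random."""
    return {title: [r["name"] for r in random.sample(reviewers, per_submission)]
            for title in single_presenter_titles}
```

Random assignment for single-presenter submissions trades topical fit for scalability, which matches the rationale given above: the volume of posters and talks is too large for keyword matching.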

All submissions are evaluated on a set of criteria pre-specified by the science committee. What are those criteria and how are they used?

  1. Reviewers are blind to the authors' identity when evaluating a submission; submissions are not evaluated based on the names of the people involved.
  2. Reviewers consider several dimensions when evaluating submissions:
    1. Importance. Does the symposium address a question or set of questions that substantially advances our knowledge of a theoretical and/or applied issue in social and/or personality psychology?
    2. Strength. Does the research reflect best practices, including issues of statistical power? Are studies well-designed to answer the research question(s)? If the session includes applied or non-empirical talks, do these present strong arguments or clear evidence toward the goals of the session?
    3. Novelty. Does the symposium represent the "cutting edge" of psychological science? Will the audience feel they have learned something new from the symposium?
    4. Interest-value. Will the symposium cut across subfields in an integrative way and have a clear impact on future conversations about social and/or personality psychology? Is it otherwise likely to be well attended?
  3. Reviewers provide a single holistic evaluation on a scale of 1-4. For symposia, these levels correspond to weaker (1), good (2), very good (3), and exceptional (4); reviewers are instructed to have 25% of their ratings fall within each category (i.e., a rectangular distribution). For single presenter submissions, the levels correspond to unacceptable/should be rejected (1), weak (2), good (3), and exceptional (4).
  4. The review process is independent each year, so the reviewers use information from the current year's submissions only; the content of prior conventions does not factor into the evaluation process. Exception: If there is a significant amount of feedback during the post-convention survey indicating attendees would like to see more representation around a particular topic, the committee may try to have that topic represented more at the following year's convention.
  5. When making a final decision about which submissions to accept, the selection committee relies primarily on the reviewers' rubric scores. Submissions are also unblinded at this stage. This allows the selection committee to ensure that a diverse range of speakers is represented in the program, both in terms of the content of the talks and the demographics of the speakers (for the demographic data we have on potential speakers). A broad goal in making the final selection is to accept high-quality submissions while creating a balanced and diverse program. There are no strict quotas specifying that X number of submissions from topic Y must be accepted; however, there may be general targets for specific topics to ensure various subfields are adequately represented. Beyond that, any topic-related themes that emerge do so organically, based on a high number of high-quality submissions on that topic in a given year.
  6. Single-paper submissions that receive the highest scores from reviewers are examined carefully. In many cases, several high-quality submissions will cluster together in terms of content; these are collected into a symposium, and the presenters are invited to work together to select a chair, title, and description of the program. The other highest-rated submissions that don't cluster into a symposium are included in one of several data blitz sessions scheduled for the convention (provided the speaker is eligible for a data blitz).
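Step 6 above amounts to a simple clustering rule. The sketch below is illustrative only: the score cutoff, minimum cluster size, use of the primary keyword as the content cluster, and field names are all assumptions, not SPSP's actual procedure (which involves human judgment about content):

```python
from collections import defaultdict

def build_sessions(submissions, score_cutoff=3.5, min_cluster=3):
    """Hypothetical sketch of step 6: group the top-rated single-paper
    submissions by primary keyword; groups large enough become
    speaker-built symposia, and the remainder feed the data blitz
    sessions. Thresholds and field names are illustrative."""
    top = [s for s in submissions if s["mean_score"] >= score_cutoff]
    by_topic = defaultdict(list)
    for s in top:
        by_topic[s["keywords"][0]].append(s)
    symposia, blitz = {}, []
    for topic, group in by_topic.items():
        if len(group) >= min_cluster:
            symposia[topic] = [s["title"] for s in group]
        else:
            blitz.extend(s["title"] for s in group)
    return symposia, blitz
```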

How is the final symposium schedule determined? 

  1. SPSP staff create a scheduling grid that ensures symposia sharing a primary keyword are not placed in the same block of the schedule.
  2. The convention committee adjusts the grid if they see symposia with overlapping content areas are scheduled at the same time.
  3. SPSP staff send the symposium chairs a list of all symposia selected for that year's convention and ask them, on a quick turnaround, to flag the 2-3 symposia whose topics overlap with their own. This information is considered when creating the final grid.
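The grid-building constraint in step 1 can be sketched as a greedy placement: put each symposium in the earliest block that has a free room and no symposium sharing its primary keyword. This is an illustrative sketch only, not SPSP's actual scheduling software, and `rooms_per_block` is a hypothetical parameter:

```python
def schedule_blocks(symposia, rooms_per_block):
    """Illustrative greedy grid: each symposium is a (title, primary_keyword)
    pair; no two symposia in the same block may share a primary keyword.
    A sketch of the constraint, not SPSP's actual tooling."""
    blocks = []  # each block is a list of (title, keyword) pairs
    for title, keyword in symposia:
        for block in blocks:
            if (len(block) < rooms_per_block
                    and all(kw != keyword for _, kw in block)):
                block.append((title, keyword))
                break
        else:
            # No existing block can take it; open a new block.
            blocks.append([(title, keyword)])
    return blocks
```

In practice the committee then adjusts this grid by hand (steps 2-3) to catch content overlap that keywords alone miss.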

What are historic rates of acceptance for prior conferences? 

Because we significantly revised our submission evaluation process in recent years, the data below represent the 2018 and 2019 conventions only:


2020 Annual Convention
February 27-29, 2020
New Orleans, LA USA