Med Sci Educ. 2020 May 19;30(3):1145-1156. doi: 10.1007/s40670-020-00980-7. eCollection 2020 Sep.

Factors That Determine the Perceived Effectiveness of Peer Feedback in Collaborative Learning: a Mixed Methods Design


Dayane Daou et al. Med Sci Educ. 2020.

Abstract

Introduction: Peer assessment has been promoted as a valuable form of formative assessment that supports learning and peer professionalism. This mixed methods study employed a conceptual framework to explore the factors that enhance the perceived effectiveness of formative peer assessment in the context of team-based learning as a form of collaborative learning.

Materials and methods: The volume and quality of written peer comments of two medical school classes at three time points were analyzed. Focus groups were then conducted to clarify issues that appeared in the quantitative data and to explore other emerging dimensions.

Results: There was a notable deficiency in both the volume and quality of the comments provided, with no improvement over time. Several contributing factors were identified. Some were logistical and operational and can be corrected easily, such as the timing of the assignments. Others stood out as major substantive issues and/or limitations, relating to the students' conceptions of the purpose of the peer assessment and to interpersonal variables.

Discussion: There were social disincentives for students to provide constructive feedback to peers with whom a continuing working relationship is necessary. There was also an inconsistency between the typically shallow and insubstantial quality of the peer feedback and the students' perception that it was beneficial.

Conclusion: The findings identify factors that need to be addressed in order to ensure the quality and effectiveness of formative peer assessment among medical students.

Keywords: Collaborative learning; Medical students; Mixed methods; Peer assessment; Peer feedback; Team-based learning.

Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Fig. 1 Conceptual framework: factors that affect the volume and quality of the peer feedback and determine its effectiveness

Fig. 2 Mean and standard error of the mean of the number of peer evaluation comments received per student over the 3 batches of TBL teams during medicine 1 and 2 of the classes of 2017 and 2019. The three batches depict three different team allocations for the whole class (see Table 1). Batch 1 = beginning of medicine 1, Batch 2 = middle of medicine 1, and Batch 3 = beginning of medicine 2. Black = Class of 2017, gray = Class of 2019

Fig. 3 Frequency distribution (percentages) of the words extracted for each thematic area from the peer evaluation written comments over the 3 batches of TBL teams during medicine 1 and 2 of the classes of 2017 and 2019. The three batches depict three different team allocations for the whole class (see Table 1). Batch 1 = beginning of medicine 1, Batch 2 = middle of medicine 1, and Batch 3 = beginning of medicine 2. Black = Class of 2017, gray = Class of 2019. Numbers depict frequency (%) of extracted words for each thematic area per batch

Fig. 4 Mean and standard error of the mean of the quality rating of the peer evaluation comments over the 3 batches of TBL teams during medicine 1 and 2 of the classes of 2017 and 2019. The three batches depict three different team allocations for the whole class (see Table 1). Batch 1 = beginning of medicine 1, Batch 2 = middle of medicine 1, and Batch 3 = beginning of medicine 2. Black = Class of 2017, gray = Class of 2019. Quality scores were assigned values of 0-3 as follows: (0) irrelevant comment; (1) descriptive comment, focused on a single aspect, not specific enough, and with little practically useful information; (2) more detailed, covering several aspects, but still not specific enough to be helpful practically; and (3) very useful, multifaceted, detailed, and specific feedback
