
Reflecting on Class Participation: A Self-Evaluation Paper


Words: 508 | Page: 1 | 3 min read | Published: Feb 7, 2024

Table of contents

  • Evaluation of Strengths
  • Areas for Improvement
  • Strategies for Improvement

Evaluation of Strengths

  • Active Engagement: I consistently attended class sessions and actively engaged with the course material. I found the topics interesting, which motivated me to participate regularly.
  • Respectful and Inclusive: I made an effort to create a respectful and inclusive classroom environment. I listened attentively to my classmates and respected diverse perspectives, fostering a welcoming atmosphere for open discussions.
  • Preparation: I generally came to class well-prepared. I read assigned readings, reviewed lecture notes, and formulated thoughtful questions and contributions in advance.

Areas for Improvement

  • Speaking Up More: There were instances where I held back from contributing my thoughts, especially in larger class discussions. I want to overcome my hesitancy and share my perspectives more consistently.
  • Engaging in Debate: While I respect different viewpoints, I sometimes avoid engaging in debates or presenting counterarguments. I want to challenge myself to engage in constructive debates and offer diverse perspectives when appropriate.
  • Participation in Group Activities: In group projects or collaborative activities, I sometimes took a backseat and let others lead. I aim to be a more active and proactive participant in group settings.

Strategies for Improvement

  • Setting Personal Goals: I will set specific participation goals for each class session. This may include the number of times I speak, the type of questions I ask, or my level of engagement in group activities.
  • Overcoming Fear: To address my hesitation, I will remind myself of the value of my contributions and the opportunity to learn from others through constructive discussions. I will work on building confidence in my ideas.
  • Active Listening: I will continue to prioritize active listening. By understanding others' perspectives deeply, I can engage in more meaningful discussions and respond thoughtfully.
  • Feedback and Reflection: I will seek feedback from my peers and instructors to gain insights into my participation and areas for improvement. Regular self-reflection will help me track my progress and adapt my strategies accordingly.


Self-Reflection on Course Participation Essay (Critical Writing)

Table of contents

  • Overall Class Participation
  • Listening and Reading Skills
  • Class Preparation
  • Quality of Contribution
  • Impact on Seminar
  • Frequency of Participation

Participation in class discussions and online activities is important in any learning endeavor because it promotes effective learning, stimulates creativity, and instills confidence. Active contribution to discussions reflects the competency and skills I have gained in class. This paper provides a self-reflection on course participation concerning various learning elements.

I participated adequately in class and online discussions (High rating). I missed only one of the 12 on-campus sessions, and my contribution to the sessions I attended met the recommended standards. I stuck to the objectives of the course throughout the sessions and posed questions meant to complement the learning objectives and engage my classmates during discussions (Akkaya & Demirel, 2012). I avoided 'leading questions' that could prompt the answers, thereby encouraging critical thinking and creativity, and I also avoided ambiguous questions that could lead to unnecessary discussions and waste of time (Etemadzadeh, Seifi, & Far, 2013). I tried my best to provide precise answers and made a habit of giving other students a chance to speak without interruption.

Listening and reading are important in grasping and mastering the course content. Listening requires focus, concentration, and the ability to analyze different situations critically, while reading requires internal listening. Therefore, poor listening and reading skills are indicators of failure. Throughout the course, I demonstrated an interest in my peers' and instructors' contributions. During class sessions, I used keywords to note down the ideas presented by others, and I strived to make eye contact with my audience during discussions.

I prepared for class by completing the required readings (High rating). Before every class and online session, I read ahead on the topic to be discussed, as suggested by Ding, Kim, and Orey (2017). The designated course readings were useful in this regard. This approach prepared me for the discussions because it helped me anticipate various questions on the discussion topics.

Though I made significant contributions to the class, I was not able to answer all the questions posed to me as comprehensively as I had wished, which indicated that my preparation was not as thorough as I had thought. I had minor difficulties linking some of the readings to the discussion. These difficulties may have been caused by internal interferences such as my mental state and nervousness (Staveley-O'Carrol, 2015). Nevertheless, most of my answers were accurate. Therefore, I rate myself as high in this aspect.

My contributions were useful in stimulating new ideas and laying the groundwork for recognizing the strengths of others. Therefore, I grade myself in the high category in this aspect. The use of real-life examples and current events helped me deliver my sentiments effectively (Bosangit & Demangeot, 2016).

Maintaining consistency in schoolwork requires frequent participation in class. The frequency of participation is also an indication of classroom attendance because it is impossible to participate in a class where one is conspicuously absent (Heaslip, Board, Duckworth, & Thomas, 2017). Frequent participation also encourages other students to be active participants in the class. I rate my frequency of participation as high because I attended all class sessions except one.

In conclusion, I rate myself in the medium category for a few aspects of class participation, but I rate my participation in group activities and all the remaining aspects as high. Overall, my performance was good given that this was my first semester, although I aim for a score of 10 next semester. This reflection has helped me identify my strengths and weaknesses, which will help me perform better in the coming semester.

Akkaya, N., & Demirel, M. V. (2012). Teacher candidates' use of questioning skills in during-reading and post-reading strategies. Procedia – Social and Behavioral Sciences, 46, 4301-4305. Web.

Bosangit, C., & Demangeot, C. (2016). Exploring reflective learning during the extended consumption of life experiences. Journal of Business Research, 69(1), 208-215. Web.

Ding, L., Kim, C., & Orey, M. (2017). Studies of student engagement in gamified online discussions. Computers & Education, 115, 126-142. Web.

Etemadzadeh, A., Seifi, S., & Far, H. R. (2013). The role of questioning technique in developing thinking skills: The ongoing effect on writing skill. Procedia – Social and Behavioral Sciences, 70, 1024-1031. Web.

Heaslip, V., Board, M., Duckworth, V., & Thomas, L. (2017). Widening participation in nurse education: An integrative literature review. Nurse Education Today, 59, 66-74. Web.

Staveley-O'Carrol, J. (2015). International Review of Economics Education, 20, 46-58. Web.



Systematic Review Article

A Critical Review of Research on Student Self-Assessment


  • Educational Psychology and Methodology, University at Albany, Albany, NY, United States

This article is a review of research on student self-assessment conducted largely between 2013 and 2018. The purpose of the review is to provide an updated overview of theory and research. The treatment of theory involves articulating a refined definition and operationalization of self-assessment. The review of 76 empirical studies offers a critical perspective on what has been investigated, including the relationship between self-assessment and achievement, consistency of self-assessment and others' assessments, student perceptions of self-assessment, and the association between self-assessment and self-regulated learning. An argument is made for less research on consistency and summative self-assessment, and more on the cognitive and affective mechanisms of formative self-assessment.

This review of research on student self-assessment expands on a review published as a chapter in the Cambridge Handbook of Instructional Feedback (Andrade, 2018, reprinted with permission). The timespan for the original review was January 2013 to October 2016. A lot of research has been done on the subject since then, including at least two meta-analyses; hence this expanded review, in which I provide an updated overview of theory and research. The treatment of theory presented here involves articulating a refined definition and operationalization of self-assessment through a lens of feedback. My review of the growing body of empirical research offers a critical perspective, in the interest of provoking new investigations into neglected areas.

Defining and Operationalizing Student Self-Assessment

Without exception, reviews of self-assessment (Sargeant, 2008; Brown and Harris, 2013; Panadero et al., 2016a) call for clearer definitions: What is self-assessment, and what is not? This question is surprisingly difficult to answer, as the term self-assessment has been used to describe a diverse range of activities, such as assigning a happy or sad face to a story just told, estimating the number of correct answers on a math test, graphing scores for dart throwing, indicating understanding (or the lack thereof) of a science concept, using a rubric to identify strengths and weaknesses in one's persuasive essay, writing reflective journal entries, and so on. Each of those activities involves some kind of assessment of one's own functioning, but they are so different that distinctions among types of self-assessment are needed. I will draw those distinctions in terms of the purposes of self-assessment which, in turn, determine its features: a classic form-fits-function analysis.

What is Self-Assessment?

Brown and Harris (2013) defined self-assessment in the K-16 context as a “descriptive and evaluative act carried out by the student concerning his or her own work and academic abilities” (p. 368). Panadero et al. (2016a) defined it as a “wide variety of mechanisms and techniques through which students describe (i.e., assess) and possibly assign merit or worth to (i.e., evaluate) the qualities of their own learning processes and products” (p. 804). Referring to physicians, Epstein et al. (2008) defined “concurrent self-assessment” as “ongoing moment-to-moment self-monitoring” (p. 5). Self-monitoring “refers to the ability to notice our own actions, curiosity to examine the effects of those actions, and willingness to use those observations to improve behavior and thinking in the future” (p. 5). Taken together, these definitions include self-assessment of one's abilities, processes, and products—everything but the kitchen sink. This very broad conception might seem unwieldy, but it works because each object of assessment—competence, process, and product—is subject to the influence of feedback from oneself.

What is missing from each of these definitions, however, is the purpose of the act of self-assessment. Their authors might rightly point out that the purpose is implied, but a formal definition requires us to make it plain: Why do we ask students to self-assess? I have long held that self-assessment is feedback (Andrade, 2010), and that the purpose of feedback is to inform adjustments to processes and products that deepen learning and enhance performance; hence the purpose of self-assessment is to generate feedback that promotes learning and improvements in performance. This learning-oriented purpose of self-assessment implies that it should be formative: if there is no opportunity for adjustment and correction, self-assessment is almost pointless.

Why Self-Assess?

Clarity about the purpose of self-assessment allows us to interpret what otherwise appear to be discordant findings from research, which has produced mixed results in terms of both the accuracy of students' self-assessments and their influence on learning and/or performance. I believe the source of the discord can be traced to the different ways in which self-assessment is carried out, such as whether it is summative or formative. This issue will be taken up again in the review of current research that follows this overview. For now, consider a study of the accuracy and validity of summative self-assessment in teacher education conducted by Tejeiro et al. (2012), which showed that students' self-assigned marks tended to be higher than marks given by professors. All 122 students in the study assigned themselves a grade at the end of their course, but half of the students were told that their self-assigned grade would count toward 5% of their final grade. In both groups, students' self-assessments were higher than grades given by professors, especially for students with “poorer results” (p. 791) and those for whom self-assessment counted toward the final grade. In the group that was told their self-assessments would count toward their final grade, no relationship was found between the professor's and the students' assessments. Tejeiro et al. concluded that, although students' and professors' assessments tended to be highly similar when self-assessment did not count toward final grades, overestimations increased dramatically when students' self-assessments did count. Interviews of students who self-assigned highly discrepant grades revealed (as you might guess) that they were motivated by the desire to obtain the highest possible grades.

Studies like Tejeiro et al.'s (2012) are interesting in terms of the information they provide about the relationship between consistency and honesty, but the purpose of the self-assessment, beyond addressing interesting research questions, is unclear. There is no feedback purpose. This is also true for another example of a study of summative self-assessment of competence, during which elementary-school children took the Test of Narrative Language and then were asked to self-evaluate “how you did in making up stories today” by pointing to one of five pictures, from a “very happy face” (rating of five) to a “very sad face” (rating of one) (Kaderavek et al., 2004, p. 37). The usual results were reported: Older children and good narrators were more accurate than younger children and poor narrators, and males tended to more frequently overestimate their ability.

Typical of clinical studies of accuracy in self-evaluation, this study rests on a definition and operationalization of self-assessment with no value in terms of instructional feedback. If those children were asked to rate their stories and then revise or, better yet, if they assessed their stories according to clear, developmentally appropriate criteria before revising, the value of their self-assessments in terms of instructional feedback would skyrocket. I speculate that their accuracy would too. In contrast, studies of formative self-assessment suggest that when the act of self-assessing is given a learning-oriented purpose, students' self-assessments are relatively consistent with those of external evaluators, including professors (Lopez and Kossack, 2007; Barney et al., 2012; Leach, 2012), teachers (Bol et al., 2012; Chang et al., 2012, 2013), researchers (Panadero and Romero, 2014; Fitzpatrick and Schulz, 2016), and expert medical assessors (Hawkins et al., 2012).

My commitment to keeping self-assessment formative is firm. However, Gavin Brown (personal communication, April 2011) reminded me that summative self-assessment exists and we cannot ignore it; any definition of self-assessment must acknowledge and distinguish between formative and summative forms of it. Thus, the taxonomy in Table 1 depicts self-assessment as serving formative and/or summative purposes, and as focusing on competence, processes, and/or products.

Table 1. A taxonomy of self-assessment.

Fortunately, a formative view of self-assessment seems to be taking hold in various educational contexts. For instance, Sargeant (2008) noted that all seven authors in a special issue of the Journal of Continuing Education in the Health Professions “conceptualize self-assessment within a formative, educational perspective, and see it as an activity that draws upon both external and internal data, standards, and resources to inform and make decisions about one's performance” (p. 1). Sargeant also stresses the point that self-assessment should be guided by evaluative criteria: “Multiple external sources can and should inform self-assessment, perhaps most important among them performance standards” (p. 1). Now we are talking about the how of self-assessment, which demands an operationalization of self-assessment practice. Let us examine each object of self-assessment (competence, processes, and/or products) with an eye for what is assessed and why.

What is Self-Assessed?

Monitoring and self-assessing processes are practically synonymous with self-regulated learning (SRL), or at least central components of it such as goal-setting and monitoring, or metacognition. Research on SRL has clearly shown that self-generated feedback on one's approach to learning is associated with academic gains (Zimmerman and Schunk, 2011). Self-assessment of products, such as papers and presentations, is the easiest to defend as feedback, especially when those self-assessments are grounded in explicit, relevant, evaluative criteria and followed by opportunities to relearn and/or revise (Andrade, 2010).

Including the self-assessment of competence in this definition is a little trickier. I hesitated to include it because of the risk of sneaking in global assessments of one's overall ability, self-esteem, and self-concept (“I'm good enough, I'm smart enough, and doggone it, people like me,” Franken, 1992), which do not seem relevant to a discussion of feedback in the context of learning. Research on global self-assessment, or self-perception, is popular in the medical education literature, but even there, scholars have begun to question its usefulness in terms of influencing learning and professional growth (e.g., see Sargeant et al., 2008). Eva and Regehr (2008) seem to agree in the following passage, which states the case in a way that makes it worthy of a long quotation:

Self-assessment is often (implicitly or otherwise) conceptualized as a personal, unguided reflection on performance for the purposes of generating an individually derived summary of one's own level of knowledge, skill, and understanding in a particular area. For example, this conceptualization would appear to be the only reasonable basis for studies that fit into what Colliver et al. (2005) has described as the “guess your grade” model of self-assessment research, the results of which form the core foundation for the recurring conclusion that self-assessment is generally poor. This unguided, internally generated construction of self-assessment stands in stark contrast to the model put forward by Boud (1999), who argued that the phrase self-assessment should not imply an isolated or individualistic activity; it should commonly involve peers, teachers, and other sources of information. The conceptualization of self-assessment as enunciated in Boud's description would appear to involve a process by which one takes personal responsibility for looking outward, explicitly seeking feedback, and information from external sources, then using these externally generated sources of assessment data to direct performance improvements. In this construction, self-assessment is more of a pedagogical strategy than an ability to judge for oneself; it is a habit that one needs to acquire and enact rather than an ability that one needs to master (p. 15).

As in the K-16 context, self-assessment is coming to be seen as having value as much or more so in terms of pedagogy as in assessment (Silver et al., 2008; Brown and Harris, 2014). In the end, however, I decided that self-assessing one's competence to successfully learn a particular concept or complete a particular task (which sounds a lot like self-efficacy—more on that later) might be useful feedback because it can inform decisions about how to proceed, such as the amount of time to invest in learning how to play the flute, or whether or not to seek help learning the steps of the jitterbug. An important caveat, however, is that self-assessments of competence are only useful if students have opportunities to do something about their perceived low competence—that is, if self-assessment serves the purpose of formative feedback for the learner.

How to Self-Assess?

Panadero et al. (2016a) summarized five very different taxonomies of self-assessment and called for the development of a comprehensive typology that considers, among other things, its purpose, the presence or absence of criteria, and the method. In response, I propose the taxonomy depicted in Table 1, which focuses on the what (competence, process, or product), the why (formative or summative), and the how (methods, including whether or not they include standards, e.g., criteria) of self-assessment. The collection of example methods in the table is not exhaustive.

I put the methods in Table 1 where I think they belong, but many of them could be placed in more than one cell. Take self-efficacy, for instance, which is essentially a self-assessment of one's competence to successfully undertake a particular task (Bandura, 1997). Summative judgments of self-efficacy are certainly possible but they seem like a silly thing to do—what is the point, from a learning perspective? Formative self-efficacy judgments, on the other hand, can inform next steps in learning and skill building. There is reason to believe that monitoring and making adjustments to one's self-efficacy (e.g., by setting goals or attributing success to effort) can be productive (Zimmerman, 2000), so I placed self-efficacy in the formative row.

It is important to emphasize that self-efficacy is task-specific, more or less (Bandura, 1997). This taxonomy does not include general, holistic evaluations of one's abilities, for example, “I am good at math.” Global assessment of competence does not provide the leverage, in terms of feedback, that is provided by task-specific assessments of competence, that is, self-efficacy. Eva and Regehr (2008) provided an illustrative example: “We suspect most people are prompted to open a dictionary as a result of encountering a word for which they are uncertain of the meaning rather than out of a broader assessment that their vocabulary could be improved” (p. 16). The exclusion of global evaluations of oneself resonates with research that clearly shows that feedback that focuses on aspects of a task (e.g., “I did not solve most of the algebra problems”) is more effective than feedback that focuses on the self (e.g., “I am bad at math”) (Kluger and DeNisi, 1996; Dweck, 2006; Hattie and Timperley, 2007). Hence, global self-evaluations of ability or competence do not appear in Table 1.

Another approach to student self-assessment that could be placed in more than one cell is traffic lights. The term traffic lights refers to asking students to use green, yellow, or red objects (or thumbs up, sideways, or down—anything will do) to indicate whether they think they have good, partial, or little understanding (Black et al., 2003). It would be appropriate for traffic lights to appear in multiple places in Table 1, depending on how they are used. Traffic lights seem to be most effective at supporting students' reflections on how well they understand a concept or have mastered a skill, which is in line with their creators' original intent, so they are categorized as formative self-assessments of one's learning—which sounds like metacognition.

In fact, several of the methods included in Table 1 come from research on metacognition, including self-monitoring, such as checking one's reading comprehension, and self-testing, e.g., checking one's performance on test items. These last two methods have been excluded from some taxonomies of self-assessment (e.g., Boud and Brew, 1995) because they do not engage students in explicitly considering relevant standards or criteria. However, new conceptions of self-assessment are grounded in theories of the self- and co-regulation of learning (Andrade and Brookhart, 2016), which include self-monitoring of learning processes with and without explicit standards.

However, my research favors self-assessment with regard to standards (Andrade and Boulay, 2003; Andrade and Du, 2007; Andrade et al., 2008, 2009, 2010), as does related research by Panadero and his colleagues (see below). I have involved students in self-assessment of stories, essays, or mathematical word problems according to rubrics or checklists with criteria. For example, two studies investigated the relationship between elementary or middle school students' scores on a written assignment and a process that involved them in reading a model paper, co-creating criteria, self-assessing first drafts with a rubric, and revising (Andrade et al., 2008, 2010). The self-assessment was highly scaffolded: students were asked to underline key phrases in the rubric with colored pencils (e.g., underline “clearly states an opinion” in blue), then underline or circle in their drafts the evidence of having met the standard articulated by the phrase (e.g., his or her opinion) with the same blue pencil. If students found they had not met the standard, they were asked to write themselves a reminder to make improvements when they wrote their final drafts. This process was followed for each criterion on the rubric. There were main effects on scores for every self-assessed criterion on the rubric, suggesting that guided self-assessment according to the co-created criteria helped students produce more effective writing.

Panadero and his colleagues have also done quasi-experimental and experimental research on standards-referenced self-assessment, using rubrics or lists of assessment criteria that are presented in the form of questions (Panadero et al., 2012, 2013, 2014; Panadero and Romero, 2014). Panadero calls the list of assessment criteria a script because his work is grounded in research on scaffolding (e.g., Kollar et al., 2006); I call it a checklist because that is the term used in classroom assessment contexts. Either way, the list provides standards for the task. Here is a script for a written summary that Panadero et al. (2014) used with college students in a psychology class:

• Does my summary transmit the main idea from the text? Is it at the beginning of my summary?

• Are the important ideas also in my summary?

• Have I selected the main ideas from the text to make them explicit in my summary?

• Have I thought about my purpose for the summary? What is my goal?

Taken together, the results of the studies cited above suggest that students who engaged in self-assessment using scripts or rubrics were more self-regulated, as measured by self-report questionnaires and/or think-aloud protocols, than were students in the comparison or control groups. Effect sizes were very small to moderate (η² = 0.06–0.42) and statistically significant. Most interesting, perhaps, is one study (Panadero and Romero, 2014) that demonstrated an association between rubric-referenced self-assessment activities and all three phases of SRL: forethought, performance, and reflection.

There are surely many other methods of self-assessment to include in Table 1, as well as interesting conversations to be had about which method goes where and why. In the meantime, I offer the taxonomy in Table 1 as a way to define and operationalize self-assessment in instructional contexts and as a framework for the following overview of current research on the subject.

An Overview of Current Research on Self-Assessment

Several recent reviews of self-assessment are available (Brown and Harris, 2013; Brown et al., 2015; Panadero et al., 2017), so I will not summarize the entire body of research here. Instead, I chose to take a bird's-eye view of the field, with the goal of reporting on what has been sufficiently researched and what remains to be done. I used the reference lists from reviews, as well as other relevant sources, as a starting point. In order to update the list of sources, I directed two new searches, the first of the ERIC database, and the second of both ERIC and PsychINFO. Both searches included two search terms, “self-assessment” OR “self-evaluation.” Advanced search options had four delimiters: (1) peer-reviewed, (2) January 2013–October 2016 and then October 2016–March 2019, (3) English, and (4) full-text. Because the focus was on K-20 educational contexts, sources were excluded if they were about early childhood education or professional development.

The first search yielded 347 hits; the second 1,163. Research that was unrelated to instructional feedback was excluded, such as studies limited to self-estimates of performance before or after taking a test, guesses about whether a test item was answered correctly, and estimates of how many tasks could be completed in a certain amount of time. Although some of the excluded studies might be thought of as useful investigations of self-monitoring, as a group they seemed too unrelated to theories of self-generated feedback to be appropriate for this review. Seventy-six studies were selected for inclusion in Table S1 (Supplementary Material), which also contains a few studies published before 2013 that were not included in key reviews, as well as studies solicited directly from authors.

Table S1 in the Supplementary Material contains a complete list of studies included in this review, organized by the focus or topic of the study, as well as brief descriptions of each. The “type” column in Table S1 indicates whether the study focused on formative or summative self-assessment. This distinction was often difficult to make due to a lack of information. For example, Memis and Seven (2015) frame their study in terms of formative assessment, and note that the purpose of the self-evaluation done by the sixth grade students is to “help students improve their [science] reports” (p. 39), but they do not indicate how the self-assessments were done, nor whether students were given time to revise their reports based on their judgments or supported in making revisions. A sentence or two of explanation about the process of self-assessment in the procedures sections of published studies would be most useful.

Figure 1 graphically represents the number of studies in the four most common topic categories found in the table—achievement, consistency, student perceptions, and SRL. The figure reveals that research on self-assessment is on the rise, with consistency the most popular topic. Of the 76 studies in the table in the appendix, 44 were inquiries into the consistency of students' self-assessments with other judgments (e.g., a test score or teacher's grade). Twenty-five studies investigated the relationship between self-assessment and achievement. Fifteen explored students' perceptions of self-assessment. Twelve studies focused on the association between self-assessment and self-regulated learning. One examined self-efficacy, and two qualitative studies documented the mental processes involved in self-assessment. The sum (n = 99) of the list of research topics is more than 76 because several studies had multiple foci. In the remainder of this review I examine each topic in turn.

Figure 1. Topics of self-assessment studies, 2013–2018.

Consistency

Table S1 (Supplementary Material) reveals that much of the recent research on self-assessment has investigated the accuracy or, more accurately, consistency, of students' self-assessments. The term consistency is more appropriate in the classroom context because the quality of students' self-assessments is often determined by comparing them with their teachers' assessments and then generating correlations. Given the evidence of the unreliability of teachers' grades (Falchikov, 2005), the assumption that teachers' assessments are accurate might not be well-founded (Leach, 2012; Brown et al., 2015). Ratings of student work done by researchers are also suspect, unless evidence of the validity and reliability of the inferences made about student work by researchers is available. Consequently, much of the research on classroom-based self-assessment should use the term consistency, which refers to the degree of alignment between students' and expert raters' evaluations, avoiding the purer, more rigorous term accuracy unless it is fitting.

In their review, Brown and Harris (2013) reported that correlations between student self-ratings and other measures tended to be weakly to strongly positive, ranging from r ≈ 0.20 to 0.80, with few studies reporting correlations >0.60. But their review included results from studies of any self-appraisal of school work, including summative self-rating/grading, predictions about the correctness of answers on test items, and formative, criteria-based self-assessments, a combination of methods that makes the correlations they reported difficult to interpret. Qualitatively different forms of self-assessment, especially summative and formative types, cannot be lumped together without obfuscating important aspects of self-assessment as feedback.

Given my concern about combining studies of summative and formative assessment, you might anticipate a call for research on consistency that distinguishes between the two. I will make no such call, for three reasons. One is that we have enough research on the subject, including the 22 studies in Table S1 (Supplementary Material) that were published after Brown and Harris's review (2013). Drawing only on studies included in Table S1 (Supplementary Material), we can say with confidence that summative self-assessment tends to be inconsistent with external judgments (Baxter and Norman, 2011; De Grez et al., 2012; Admiraal et al., 2015), with males tending to overrate and females to underrate (Nowell and Alston, 2007; Marks et al., 2018). There are exceptions (Alaoutinen, 2012; Lopez-Pastor et al., 2012) as well as mixed results, with students being consistent regarding some aspects of their learning but not others (Blanch-Hartigan, 2011; Harding and Hbaci, 2015; Nguyen and Foster, 2018). We can also say that older, more academically competent learners tend to be more consistent (Hacker et al., 2000; Lew et al., 2010; Alaoutinen, 2012; Guillory and Blankson, 2017; Butler, 2018; Nagel and Lindsey, 2018). There is evidence that consistency can be improved through experience (Lopez and Kossack, 2007; Yilmaz, 2017; Nagel and Lindsey, 2018), the use of guidelines (Bol et al., 2012), feedback (Thawabieh, 2017), and standards (Baars et al., 2014), perhaps in the form of rubrics (Panadero and Romero, 2014). Modeling and feedback also help (Labuhn et al., 2010; Miller and Geraci, 2011; Hawkins et al., 2012; Kostons et al., 2012).

An outcome typical of research on the consistency of summative self-assessment can be found in row 59, which summarizes the study by Tejeiro et al. (2012) discussed earlier: Students' self-assessments were higher than marks given by professors, especially for students with poorer results, and no relationship was found between the professors' and the students' assessments in the group in which self-assessment counted toward the final mark. Students are not stupid: if they know that they can influence their final grade, and that their judgment is summative rather than intended to inform revision and improvement, they will be motivated to inflate their self-evaluation. I do not believe we need more research to demonstrate that phenomenon.

The second reason I am not calling for additional research on consistency is that a lot of it seems somewhat irrelevant. This might be because the interest in accuracy is rooted in clinical research on calibration, which has very different aims. Calibration accuracy is the “magnitude of consent between learners' true and self-evaluated task performance. Accurately calibrated learners' task performance equals their self-evaluated task performance” (Wollenschläger et al., 2016). Calibration research often asks study participants to predict or postdict the correctness of their responses to test items. I caution about generalizing from clinical experiments to authentic classroom contexts because the dismal picture of our human potential to self-judge was painted by calibration researchers before study participants were effectively taught how to predict with accuracy, or provided with the tools they needed to be accurate, or motivated to do so. Calibration researchers know that, of course, and have conducted intervention studies that attempt to improve accuracy, with some success (e.g., Bol et al., 2012). Studies of formative self-assessment also suggest that consistency increases when it is taught and supported in many of the ways any other skill must be taught and supported (Lopez and Kossack, 2007; Labuhn et al., 2010; Chang et al., 2012, 2013; Hawkins et al., 2012; Panadero and Romero, 2014; Lin-Siegler et al., 2015; Fitzpatrick and Schulz, 2016).

Even clinical psychological studies that go beyond calibration to examine the associations between monitoring accuracy and subsequent study behaviors do not transfer well to classroom assessment research. After repeatedly encountering claims that, for example, low self-assessment accuracy leads to poor task-selection accuracy and “suboptimal learning outcomes” (Raaijmakers et al., 2019, p. 1), I dug into the cited studies and discovered two limitations. The first is that the tasks in which study participants engage are quite inauthentic. A typical task involves studying “word pairs (e.g., railroad—mother), followed by a delayed judgment of learning (JOL) in which the students predicted the chances of remembering the pair… After making a JOL, the entire pair was presented for restudy for 4 s [sic], and after all pairs had been restudied, a criterion test of paired-associate recall occurred” (Dunlosky and Rawson, 2012, p. 272). Although memory for word pairs might be important in some classroom contexts, it is not safe to assume that results from studies like that one can predict students' behaviors after criterion-referenced self-assessment of their comprehension of complex texts, lengthy compositions, or solutions to multi-step mathematical problems.

The second limitation of studies like the typical one described above is more serious: Participants in research like that are not permitted to regulate their own studying, which is experimentally manipulated by a computer program. This came as a surprise, since many of the claims were about students' poor study choices, but participants were rarely allowed to make actual choices. For example, Dunlosky and Rawson (2012) permitted participants to “use monitoring to effectively control learning” by programming the computer so that “a participant would need to have judged his or her recall of a definition entirely correct on three different trials, and once they judged it entirely correct on the third trial, that particular key term definition was dropped [by the computer program] from further practice” (p. 272). The authors note that this study design is an improvement on designs that did not require all participants to use the same regulation algorithm, but it does not reflect the kinds of decisions that learners make in class or while doing homework. In fact, a large body of research shows that students can make wise choices when they self-pace the study of to-be-learned materials and then allocate study time to each item (Bjork et al., 2013, p. 425):

In a typical experiment, the students first study all the items at an experimenter-paced rate (e.g., study 60 paired associates for 3 s each), which familiarizes the students with the items; after this familiarity phase, the students then either choose which items they want to restudy (e.g., all items are presented in an array, and the students select which ones to restudy) and/or pace their restudy of each item. Several dependent measures have been widely used, such as how long each item is studied, whether an item is selected for restudy, and in what order items are selected for restudy. The literature on these aspects of self-regulated study is massive (for a comprehensive overview, see both Dunlosky and Ariel, 2011 and Son and Metcalfe, 2000), but the evidence is largely consistent with a few basic conclusions. First, if students have a chance to practice retrieval prior to restudying items, they almost exclusively choose to restudy unrecalled items and drop the previously recalled items from restudy (Metcalfe and Kornell, 2005). Second, when pacing their study of individual items that have been selected for restudy, students typically spend more time studying items that are more, rather than less, difficult to learn. Such a strategy is consistent with a discrepancy-reduction model of self-paced study (which states that people continue to study an item until they reach mastery), although some key revisions to this model are needed to account for all the data. For instance, students may not continue to study until they reach some static criterion of mastery, but instead, they may continue to study until they perceive that they are no longer making progress.

I propose that this research, which suggests that students' unscaffolded, unmeasured, informal self-assessments tend to lead to appropriate task selection, is better aligned with research on classroom-based self-assessment. Nonetheless, even this comparison is inadequate because the study participants were not taught to compare their performance to the criteria for mastery, as is often done in classroom-based self-assessment.

The third and final reason I do not believe we need additional research on consistency is that I think it is a distraction from the true purposes of self-assessment. Many if not most of the articles about the accuracy of self-assessment are grounded in the assumption that accuracy is necessary for self-assessment to be useful, particularly in terms of subsequent studying and revision behaviors. Although it seems obvious that accurate evaluations of their performance positively influence students' study strategy selection, which should produce improvements in achievement, I have not seen relevant research that tests those conjectures. Some claim that inaccurate estimates of learning lead to the selection of inappropriate learning tasks (Kostons et al., 2012) but they cite research that does not support their claim. For example, Kostons et al. cite studies that focus on the effectiveness of SRL interventions but do not address the accuracy of participants' estimates of learning, nor the relationship of those estimates to the selection of next steps. Other studies produce findings that support my skepticism. Take, for instance, two relevant studies of calibration. One suggested that performance and judgments of performance had little influence on subsequent test preparation behavior (Hacker et al., 2000), and the other showed that study participants followed their predictions of performance to the same degree, regardless of monitoring accuracy (van Loon et al., 2014).

Eva and Regehr (2008) believe that:

Research questions that take the form of “How well do various practitioners self-assess?” “How can we improve self-assessment?” or “How can we measure self-assessment skill?” should be considered defunct and removed from the research agenda [because] there have been hundreds of studies into these questions and the answers are “Poorly,” “You can't,” and “Don't bother” (p. 18).

I almost agree. A study that could change my mind about the importance of accuracy of self-assessment would be an investigation that goes beyond attempting to improve accuracy just for the sake of accuracy by instead examining the relearning/revision behaviors of accurate and inaccurate self-assessors: Do students whose self-assessments match the valid and reliable judgments of expert raters (hence my use of the term accuracy) make better decisions about what they need to do to deepen their learning and improve their work? Here, I admit, is a call for research related to consistency: I would love to see a high-quality investigation of the relationship among accuracy in formative self-assessment, students' subsequent study and revision behaviors, and their learning. For example, a study that closely examines the revisions to writing made by accurate and inaccurate self-assessors, and the resulting outcomes in terms of the quality of their writing, would be most welcome.

Table S1 (Supplementary Material) indicates that by 2018 researchers began publishing studies that more directly address the hypothesized link between self-assessment and subsequent learning behaviors, as well as important questions about the processes learners engage in while self-assessing (Yan and Brown, 2017). One, a study by Nugteren et al. (2018, row 19 in Table S1 (Supplementary Material)), asked “How do inaccurate [summative] self-assessments influence task selections?” (p. 368) and employed a clever exploratory research design. The results suggested that most of the 15 students in their sample over-estimated their performance and made inaccurate learning-task selections. Nugteren et al. recommended helping students make more accurate self-assessments, but I think the more interesting finding is related to why students made task selections that were too difficult or too easy, given their prior performance: They based most task selections on interest in the content of particular items (not the overarching content to be learned), and infrequently considered task difficulty and support level. For instance, while working on the genetics tasks, students reported selecting tasks because they were fun or interesting, not because they addressed self-identified weaknesses in their understanding of genetics. Nugteren et al. proposed that students would benefit from instruction on task selection. I second that proposal: Rather than directing our efforts toward accuracy in the service of improving subsequent task selection, let us simply teach students to use the information at hand to select next best steps, among other things.

Butler (2018, row 76 in Table S1 (Supplementary Material)) has conducted at least two studies of learners' processes of responding to self-assessment items and how they arrived at their judgments. Comparing generic, decontextualized items to task-specific, contextualized items (which she calls after-task items), she drew two unsurprising conclusions: the task-specific items “generally showed higher correlations with task performance,” and older students “appeared to be more conservative in their judgment compared with their younger counterparts” (p. 249). The contribution of the study is the detailed information it provides about how students generated their judgments. For example, Butler's qualitative data analyses revealed that when asked to self-assess in terms of vague or non-specific items, the children often “contextualized the descriptions based on their own experiences, goals, and expectations,” (p. 257) focused on the task at hand, and situated items in the specific task context. Perhaps as a result, the correlation between after-task self-assessment and task performance was generally higher than for generic self-assessment.

Butler (2018) notes that her study enriches our empirical understanding of the processes by which children respond to self-assessment. This is a very promising direction for the field. Similar studies of processing during formative self-assessment of a variety of task types in a classroom context would likely produce significant advances in our understanding of how and why self-assessment influences learning and performance.

Student Perceptions

Fifteen of the studies listed in Table S1 (Supplementary Material) focused on students' perceptions of self-assessment. The studies of children suggest that they tend to have unsophisticated understandings of its purposes (Harris and Brown, 2013; Bourke, 2016) that might lead to shallow implementation of related processes. In contrast, results from the studies conducted in higher education settings suggested that college and university students understood the function of self-assessment (Ratminingsih et al., 2018) and generally found it to be useful for guiding evaluation and revision (Micán and Medina, 2017), understanding how to take responsibility for learning (Lopez and Kossack, 2007; Bourke, 2014; Ndoye, 2017), prompting them to think more critically and deeply (van Helvoort, 2012; Siow, 2015), applying newfound skills (Murakami et al., 2012), and fostering self-regulated learning by guiding them to set goals, plan, self-monitor and reflect (Wang, 2017).

Not surprisingly, positive perceptions of self-assessment were typically developed by students who actively engaged with the formative type by, for example, developing their own criteria for an effective self-assessment response (Bourke, 2014), or using a rubric or checklist to guide their assessments and then revising their work (Huang and Gui, 2015; Wang, 2017). Earlier research suggested that children's attitudes toward self-assessment can become negative if it is summative (Ross et al., 1998). However, even summative self-assessment was reported by adult learners to be useful in helping them become more critical of their own and others' writing throughout the course and in subsequent courses (van Helvoort, 2012).

Achievement

Twenty-five of the studies in Table S1 (Supplementary Material) investigated the relation between self-assessment and achievement, including two meta-analyses. Twenty of the 25 clearly employed the formative type. Without exception, those 20 studies, plus the two meta-analyses (Graham et al., 2015; Sanchez et al., 2017), demonstrated a positive association between self-assessment and learning. The meta-analysis conducted by Graham and his colleagues, which included 10 studies, yielded an average weighted effect size of 0.62 on writing quality. The Sanchez et al. meta-analysis revealed that, although 12 of the 44 effect sizes were negative, on average, “students who engaged in self-grading performed better (g = 0.34) on subsequent tests than did students who did not” (p. 1,049).

All but two of the non-meta-analytic studies of achievement in Table S1 (Supplementary Material) were quasi-experimental or experimental, providing relatively rigorous evidence that their treatment groups outperformed their comparison or control groups in terms of everything from writing to dart-throwing, map-making, speaking English, and exams in a wide variety of disciplines. One experiment on summative self-assessment (Miller and Geraci, 2011), in contrast, resulted in no improvements in exam scores, while the other one did (Raaijmakers et al., 2017).

It would be easy to overgeneralize and claim that the question about the effect of self-assessment on learning has been answered, but there are unanswered questions about the key components of effective self-assessment, especially social-emotional components related to power and trust (Andrade and Brown, 2016). The trends are quite clear, however: it appears that formative forms of self-assessment can promote knowledge and skill development. This is not surprising, given that it involves many of the processes known to support learning, including practice, feedback, revision, and especially the intellectually demanding work of making complex, criteria-referenced judgments (Panadero et al., 2014). Boud (1995a,b) predicted this trend when he noted that many self-assessment processes undermine learning by rushing to judgment, thereby failing to engage students with the standards or criteria for their work.

Self-Regulated Learning

The association between self-assessment and learning has also been explained in terms of self-regulation (Andrade, 2010; Panadero and Alonso-Tapia, 2013; Andrade and Brookhart, 2016, 2019; Panadero et al., 2016b). Self-regulated learning (SRL) occurs when learners set goals and then monitor and manage their thoughts, feelings, and actions to reach those goals. SRL is moderately to highly correlated with achievement (Zimmerman and Schunk, 2011). Research suggests that formative assessment is a potential influence on SRL (Nicol and Macfarlane-Dick, 2006). The 12 studies in Table S1 (Supplementary Material) that focus on SRL demonstrate the recent increase in interest in the relationship between self-assessment and SRL.

Conceptual and practical overlaps between the two fields are abundant. In fact, Brown and Harris (2014) recommend that student self-assessment no longer be treated as an assessment, but as an essential competence for self-regulation. Butler and Winne (1995) introduced the role of self-generated feedback in self-regulation years ago:

[For] all self-regulated activities, feedback is an inherent catalyst. As learners monitor their engagement with tasks, internal feedback is generated by the monitoring process. That feedback describes the nature of outcomes and the qualities of the cognitive processes that led to those states (p. 245).

The outcomes and processes referred to by Butler and Winne are many of the same products and processes I referred to earlier in the definition of self-assessment and in Table 1.

In general, research and practice related to self-assessment have tended to focus on judging the products of student learning, while scholarship on self-regulated learning encompasses both processes and products. Because of its very practical focus, self-assessment research might be playing catch-up with the SRL literature in terms of theory development, since the latter is grounded in experimental paradigms from cognitive psychology (de Bruin and van Gog, 2012); in terms of implementation, however, self-assessment research is ahead (E. Panadero, personal communication, October 21, 2016). One major exception is the work done on Self-Regulated Strategy Development (Glaser and Brunstein, 2007; Harris et al., 2008), which has successfully integrated SRL research with classroom practices, including self-assessment, to teach writing to students with special needs.

Nicol and Macfarlane-Dick (2006) have been explicit about the potential for self-assessment practices to support self-regulated learning:

To develop systematically the learner's capacity for self-regulation, teachers need to create more structured opportunities for self-monitoring and the judging of progression to goals. Self-assessment tasks are an effective way of achieving this, as are activities that encourage reflection on learning progress (p. 207).

The studies of SRL in Table S1 (Supplementary Material) provide encouraging findings regarding the potential role of self-assessment in promoting achievement, self-regulated learning in general, and metacognition and study strategies related to task selection in particular. The studies also represent a solution to the “methodological and theoretical challenges involved in bringing metacognitive research to the real world, using meaningful learning materials” (Koriat, 2012, p. 296).

Future Directions for Research

I agree with Yan and Brown's (2017) statement that “from a pedagogical perspective, the benefits of self-assessment may come from active engagement in the learning process, rather than by being ‘veridical’ or coinciding with reality, because students' reflection and metacognitive monitoring lead to improved learning” (p. 1248). Future research should focus less on accuracy/consistency/veridicality, and more on the precise mechanisms of self-assessment (Butler, 2018).

An important aspect of research on self-assessment that is not explicitly represented in Table S1 (Supplementary Material) is practice, or pedagogy: under what conditions does self-assessment work best, and how are those conditions influenced by context? Fortunately, the studies listed in the table, as well as others (see especially Andrade and Valtcheva, 2009; Nielsen, 2014; Panadero et al., 2016a), point toward an answer. But we still have questions about how best to scaffold effective formative self-assessment. One area of inquiry concerns the characteristics of the task being assessed and the standards or criteria used by learners during self-assessment.

Influence of Types of Tasks and Standards or Criteria

Type of task or competency assessed seems to matter (e.g., Dolosic, 2018; Nguyen and Foster, 2018), as do the criteria (Yilmaz, 2017), but we do not yet have a comprehensive understanding of how or why. There is some evidence that it is important for the criteria used to self-assess to be concrete, task-specific (Butler, 2018), and graduated. For example, Fastre et al. (2010) revealed an association between self-assessment according to task-specific criteria and task performance: in a quasi-experimental study of 39 novice vocational education students studying stoma care, they compared concrete, task-specific criteria (“performance-based criteria”) such as “Introduces herself to the patient” and “Consults the care file for details concerning the stoma” to vaguer, “competence-based criteria” such as “Shows interest, listens actively, shows empathy to the patient” and “Is discrete with sensitive topics.” The performance-based criteria group outperformed the competence-based group on tests of task performance, presumably because “performance-based criteria make it easier to distinguish levels of performance, enabling a step-by-step process of performance improvement” (p. 530).

This finding echoes the results of a study of self-regulated learning by Kitsantas and Zimmerman (2006), who argued that “fine-grained standards can have two key benefits: They can enable learners to be more sensitive to small changes in skill and make more appropriate adaptations in learning strategies” (p. 203). In their study, 70 college students were taught how to throw darts at a target. The purpose of the study was to examine the role of graphing of self-recorded outcomes and self-evaluative standards in learning a motor skill. Students who were provided with graduated self-evaluative standards surpassed “those who were provided with absolute standards or no standards (control) in both motor skill and in motivational beliefs (i.e., self-efficacy, attributions, and self-satisfaction)” (p. 201). Kitsantas and Zimmerman hypothesized that setting high absolute standards would limit a learner's sensitivity to small improvements in functioning. This hypothesis was supported by the finding that students who set absolute standards reported significantly less awareness of learning progress (and hit the bull's-eye less often) than students who set graduated standards. “The correlation between the self-evaluation and dart-throwing outcomes measures was extraordinarily high (r = 0.94)” (p. 210). Classroom-based research on specific, graduated self-assessment criteria would be informative.

Cognitive and Affective Mechanisms of Self-Assessment

There are many additional questions about pedagogy, such as the hoped-for investigation mentioned above of the relationship between accuracy in formative self-assessment, students' subsequent study behaviors, and their learning. There is also a need for research on how to help teachers give students a central role in their learning by creating space for self-assessment (e.g., see Hawe and Parr, 2014), and on the complex power dynamics involved in doing so (Tan, 2004, 2009; Taras, 2008; Leach, 2012). However, there is an even more pressing need for investigations into the internal mechanisms experienced by students engaged in assessing their own learning. Angela Lui and I call this the next black box (Lui, 2017).

Black and Wiliam (1998) used the term black box to emphasize the fact that what happened in most classrooms was largely unknown: all we knew was that some inputs (e.g., teachers, resources, standards, and requirements) were fed into the box, and that certain outputs (e.g., more knowledgeable and competent students, acceptable levels of achievement) would follow. But what, they asked, is happening inside, and what new inputs will produce better outputs? Black and Wiliam's review spawned a great deal of research on formative assessment, some but not all of which suggests a positive relationship with academic achievement (Bennett, 2011; Kingston and Nash, 2011). To better understand why and how the use of formative assessment in general, and self-assessment in particular, is associated with improvements in academic achievement in some instances but not others, we need research that looks into the next black box: the cognitive and affective mechanisms of students who are engaged in assessment processes (Lui, 2017).

The role of internal mechanisms has been discussed in theory but not yet fully tested. Crooks (1988) argued that the impact of assessment is influenced by students' interpretation of the tasks and results, and Butler and Winne (1995) theorized that both cognitive and affective processes play a role in determining how feedback is internalized and used to self-regulate learning. Other theoretical frameworks about the internal processes of receiving and responding to feedback have been developed (e.g., Nicol and Macfarlane-Dick, 2006; Draper, 2009; Andrade, 2013; Lipnevich et al., 2016). Yet, Shute (2008) noted in her review of the literature on formative feedback that “despite the plethora of research on the topic, the specific mechanisms relating feedback to learning are still mostly murky, with very few (if any) general conclusions” (p. 156). This area is ripe for research.

Self-assessment is the act of monitoring one's processes and products in order to make adjustments that deepen learning and enhance performance. Although it can be summative, the evidence presented in this review strongly suggests that self-assessment is most beneficial, in terms of both achievement and self-regulated learning, when it is used formatively and supported by training.

What is not yet clear is why and how self-assessment works. Those of you who like to investigate phenomena that are maddeningly difficult to measure will rejoice to hear that the cognitive and affective mechanisms of self-assessment are the next black box. Studies of the ways in which learners think and feel, the interactions between their thoughts and feelings and their context, and the implications for pedagogy will make major contributions to our field.

Author Contributions

The author confirms being the sole contributor of this work and has approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2019.00087/full#supplementary-material

1. I am grateful to my graduate assistants, Joanna Weaver and Taja Young, for conducting the searches.

Admiraal, W., Huisman, B., and Pilli, O. (2015). Assessment in massive open online courses. Electron. J. e-Learning 13, 207–216.

Alaoutinen, S. (2012). Evaluating the effect of learning style and student background on self-assessment accuracy. Comput. Sci. Educ. 22, 175–198. doi: 10.1080/08993408.2012.692924

Al-Rawahi, N. M., and Al-Balushi, S. M. (2015). The effect of reflective science journal writing on students' self-regulated learning strategies. Int. J. Environ. Sci. Educ. 10, 367–379. doi: 10.12973/ijese.2015.250a

Andrade, H. (2010). “Students as the definitive source of formative assessment: academic self-assessment and the self-regulation of learning,” in Handbook of Formative Assessment, eds H. Andrade and G. Cizek (New York, NY: Routledge), 90–105.

Andrade, H. (2013). “Classroom assessment in the context of learning theory and research,” in Sage Handbook of Research on Classroom Assessment , ed J. H. McMillan (New York, NY: Sage), 17–34. doi: 10.4135/9781452218649.n2

Andrade, H. (2018). “Feedback in the context of self-assessment,” in Cambridge Handbook of Instructional Feedback , eds A. Lipnevich and J. Smith (Cambridge: Cambridge University Press), 376–408.

Andrade, H., and Boulay, B. (2003). The role of rubric-referenced self-assessment in learning to write. J. Educ. Res. 97, 21–34. doi: 10.1080/00220670309596625

Andrade, H., and Brookhart, S. (2019). Classroom assessment as the co-regulation of learning. Assessm. Educ. Principles Policy Pract. doi: 10.1080/0969594X.2019.1571992

Andrade, H., and Brookhart, S. M. (2016). “The role of classroom assessment in supporting self-regulated learning,” in Assessment for Learning: Meeting the Challenge of Implementation , eds D. Laveault and L. Allal (Heidelberg: Springer), 293–309. doi: 10.1007/978-3-319-39211-0_17

Andrade, H., and Du, Y. (2007). Student responses to criteria-referenced self-assessment. Assess. Evalu. High. Educ. 32, 159–181. doi: 10.1080/02602930600801928

Andrade, H., Du, Y., and Mycek, K. (2010). Rubric-referenced self-assessment and middle school students' writing. Assess. Educ. 17, 199–214. doi: 10.1080/09695941003696172

Andrade, H., Du, Y., and Wang, X. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students' writing. Educ. Meas. 27, 3–13. doi: 10.1111/j.1745-3992.2008.00118.x

Andrade, H., and Valtcheva, A. (2009). Promoting learning and achievement through self- assessment. Theory Pract. 48, 12–19. doi: 10.1080/00405840802577544

Andrade, H., Wang, X., Du, Y., and Akawi, R. (2009). Rubric-referenced self-assessment and self-efficacy for writing. J. Educ. Res. 102, 287–302. doi: 10.3200/JOER.102.4.287-302

Andrade, H. L., and Brown, G. T. L. (2016). “Student self-assessment in the classroom,” in Handbook of Human and Social Conditions in Assessment , eds G. T. L. Brown and L. R. Harris (New York, NY: Routledge), 319–334.

Baars, M., Vink, S., van Gog, T., de Bruin, A., and Paas, F. (2014). Effects of training self-assessment and using assessment standards on retrospective and prospective monitoring of problem solving. Learn. Instruc. 33, 92–107. doi: 10.1016/j.learninstruc.2014.04.004

Balderas, I., and Cuamatzi, P. M. (2018). Self and peer correction to improve college students' writing skills. Profile. 20, 179–194. doi: 10.15446/profile.v20n2.67095

Bandura, A. (1997). Self-efficacy: The Exercise of Control . New York, NY: Freeman.

Barney, S., Khurum, M., Petersen, K., Unterkalmsteiner, M., and Jabangwe, R. (2012). Improving students with rubric-based self-assessment and oral feedback. IEEE Transac. Educ. 55, 319–325. doi: 10.1109/TE.2011.2172981

Baxter, P., and Norman, G. (2011). Self-assessment or self deception? A lack of association between nursing students' self-assessment and performance. J. Adv. Nurs. 67, 2406–2413. doi: 10.1111/j.1365-2648.2011.05658.x

Bennett, R. E. (2011). Formative assessment: a critical review. Assess. Educ. 18, 5–25. doi: 10.1080/0969594X.2010.513678

Birjandi, P., and Hadidi Tamjid, N. (2012). The role of self-, peer and teacher assessment in promoting Iranian EFL learners' writing performance. Assess. Evalu. High. Educ. 37, 513–533. doi: 10.1080/02602938.2010.549204

Bjork, R. A., Dunlosky, J., and Kornell, N. (2013). Self-regulated learning: beliefs, techniques, and illusions. Annu. Rev. Psychol. 64, 417–444. doi: 10.1146/annurev-psych-113011-143823

Black, P., Harrison, C., Lee, C., Marshall, B., and Wiliam, D. (2003). Assessment for Learning: Putting it into Practice . Berkshire: Open University Press.

Black, P., and Wiliam, D. (1998). Inside the black box: raising standards through classroom assessment. Phi Delta Kappan 80, 139–144; 146–148.

Blanch-Hartigan, D. (2011). Medical students' self-assessment of performance: results from three meta-analyses. Patient Educ. Counsel. 84, 3–9. doi: 10.1016/j.pec.2010.06.037

Bol, L., Hacker, D. J., Walck, C. C., and Nunnery, J. A. (2012). The effects of individual or group guidelines on the calibration accuracy and achievement of high school biology students. Contemp. Educ. Psychol. 37, 280–287. doi: 10.1016/j.cedpsych.2012.02.004

Boud, D. (1995a). Implementing Student Self-Assessment, 2nd Edn. Australian Capital Territory: Higher Education Research and Development Society of Australasia.

Boud, D. (1995b). Enhancing Learning Through Self-Assessment. London: Kogan Page.

Boud, D. (1999). Avoiding the traps: Seeking good practice in the use of self-assessment and reflection in professional courses. Soc. Work Educ. 18, 121–132. doi: 10.1080/02615479911220131

Boud, D., and Brew, A. (1995). Developing a typology for learner self-assessment practices. Res. Dev. High. Educ. 18, 130–135.

Bourke, R. (2014). Self-assessment in professional programmes within tertiary institutions. Teach. High. Educ. 19, 908–918. doi: 10.1080/13562517.2014.934353

Bourke, R. (2016). Liberating the learner through self-assessment. Cambridge J. Educ. 46, 97–111. doi: 10.1080/0305764X.2015.1015963

Brown, G., Andrade, H., and Chen, F. (2015). Accuracy in student self-assessment: directions and cautions for research. Assess. Educ. 22, 444–457. doi: 10.1080/0969594X.2014.996523

Brown, G. T., and Harris, L. R. (2013). “Student self-assessment,” in Sage Handbook of Research on Classroom Assessment , ed J. H. McMillan (Los Angeles, CA: Sage), 367–393. doi: 10.4135/9781452218649.n21

Brown, G. T. L., and Harris, L. R. (2014). The future of self-assessment in classroom practice: reframing self-assessment as a core competency. Frontline Learn. Res. 3, 22–30. doi: 10.14786/flr.v2i1.24

Butler, D. L., and Winne, P. H. (1995). Feedback and self-regulated learning: a theoretical synthesis. Rev. Educ. Res. 65, 245–281. doi: 10.3102/00346543065003245

Butler, Y. G. (2018). “Young learners' processes and rationales for responding to self-assessment items: cases for generic can-do and five-point Likert-type formats,” in Useful Assessment and Evaluation in Language Education , eds J. Davis et al. (Washington, DC: Georgetown University Press), 21–39. doi: 10.2307/j.ctvvngrq.5

Chang, C.-C., Liang, C., and Chen, Y.-H. (2013). Is learner self-assessment reliable and valid in a Web-based portfolio environment for high school students? Comput. Educ. 60, 325–334. doi: 10.1016/j.compedu.2012.05.012

Chang, C.-C., Tseng, K.-H., and Lou, S.-J. (2012). A comparative analysis of the consistency and difference among teacher-assessment, student self-assessment and peer-assessment in a Web-based portfolio assessment environment for high school students. Comput. Educ. 58, 303–320. doi: 10.1016/j.compedu.2011.08.005

Colliver, J., Verhulst, S, and Barrows, H. (2005). Self-assessment in medical practice: a further concern about the conventional research paradigm. Teach. Learn. Med. 17, 200–201. doi: 10.1207/s15328015tlm1703_1

Crooks, T. J. (1988). The impact of classroom evaluation practices on students. Rev. Educ. Res. 58, 438–481. doi: 10.3102/00346543058004438

de Bruin, A. B. H., and van Gog, T. (2012). Improving self-monitoring and self-regulation: from cognitive psychology to the classroom. Learn. Instruc. 22, 245–252. doi: 10.1016/j.learninstruc.2012.01.003

De Grez, L., Valcke, M., and Roozen, I. (2012). How effective are self- and peer assessment of oral presentation skills compared with teachers' assessments? Active Learn. High. Educ. 13, 129–142. doi: 10.1177/1469787412441284

Dolosic, H. (2018). An examination of self-assessment and interconnected facets of second language reading. Read. Foreign Langu. 30, 189–208.

Draper, S. W. (2009). What are learners actually regulating when given feedback? Br. J. Educ. Technol. 40, 306–315. doi: 10.1111/j.1467-8535.2008.00930.x

Dunlosky, J., and Ariel, R. (2011). “Self-regulated learning and the allocation of study time,” in Psychology of Learning and Motivation, Vol. 54, ed B. Ross (Cambridge, MA: Academic Press), 103–140. doi: 10.1016/B978-0-12-385527-5.00004-8

Dunlosky, J., and Rawson, K. A. (2012). Overconfidence produces underachievement: inaccurate self evaluations undermine students' learning and retention. Learn. Instr. 22, 271–280. doi: 10.1016/j.learninstruc.2011.08.003

Dweck, C. (2006). Mindset: The New Psychology of Success. New York, NY: Random House.

Epstein, R. M., Siegel, D. J., and Silberman, J. (2008). Self-monitoring in clinical practice: a challenge for medical educators. J. Contin. Educ. Health Prof. 28, 5–13. doi: 10.1002/chp.149

Eva, K. W., and Regehr, G. (2008). “I'll never play professional football” and other fallacies of self-assessment. J. Contin. Educ. Health Prof. 28, 14–19. doi: 10.1002/chp.150

Falchikov, N. (2005). Improving Assessment Through Student Involvement: Practical Solutions for Aiding Learning in Higher and Further Education . London: Routledge Falmer.

Fastre, G. M. J., van der Klink, M. R., Sluijsmans, D., and van Merrienboer, J. J. G. (2012). Drawing students' attention to relevant assessment criteria: effects on self-assessment skills and performance. J. Voc. Educ. Train. 64, 185–198. doi: 10.1080/13636820.2011.630537

Fastre, G. M. J., van der Klink, M. R., and van Merrienboer, J. J. G. (2010). The effects of performance-based assessment criteria on student performance and self-assessment skills. Adv. Health Sci. Educ. 15, 517–532. doi: 10.1007/s10459-009-9215-x

Fitzpatrick, B., and Schulz, H. (2016). “Teaching young students to self-assess critically,” Paper presented at the Annual Meeting of the American Educational Research Association (Washington, DC).

Franken, A. S. (1992). I'm Good Enough, I'm Smart Enough, and Doggone it, People Like Me! Daily affirmations by Stuart Smalley. New York, NY: Dell.

Glaser, C., and Brunstein, J. C. (2007). Improving fourth-grade students' composition skills: effects of strategy instruction and self-regulation procedures. J. Educ. Psychol. 99, 297–310. doi: 10.1037/0022-0663.99.2.297

Gonida, E. N., and Leondari, A. (2011). Patterns of motivation among adolescents with biased and accurate self-efficacy beliefs. Int. J. Educ. Res. 50, 209–220. doi: 10.1016/j.ijer.2011.08.002

Graham, S., Hebert, M., and Harris, K. R. (2015). Formative assessment and writing. Elem. Sch. J. 115, 523–547. doi: 10.1086/681947

Guillory, J. J., and Blankson, A. N. (2017). Using recently acquired knowledge to self-assess understanding in the classroom. Sch. Teach. Learn. Psychol. 3, 77–89. doi: 10.1037/stl0000079

Hacker, D. J., Bol, L., Horgan, D. D., and Rakow, E. A. (2000). Test prediction and performance in a classroom context. J. Educ. Psychol. 92, 160–170. doi: 10.1037/0022-0663.92.1.160

Harding, J. L., and Hbaci, I. (2015). Evaluating pre-service teachers math teaching experience from different perspectives. Univ. J. Educ. Res. 3, 382–389. doi: 10.13189/ujer.2015.030605

Harris, K. R., Graham, S., Mason, L. H., and Friedlander, B. (2008). Powerful Writing Strategies for All Students . Baltimore, MD: Brookes.

Harris, L. R., and Brown, G. T. L. (2013). Opportunities and obstacles to consider when using peer- and self-assessment to improve student learning: case studies into teachers' implementation. Teach. Teach. Educ. 36, 101–111. doi: 10.1016/j.tate.2013.07.008

Hattie, J., and Timperley, H. (2007). The power of feedback. Rev. Educ. Res. 77, 81–112. doi: 10.3102/003465430298487

Hawe, E., and Parr, J. (2014). Assessment for learning in the writing classroom: an incomplete realization. Curr. J. 25, 210–237. doi: 10.1080/09585176.2013.862172

Hawkins, S. C., Osborne, A., Schofield, S. J., Pournaras, D. J., and Chester, J. F. (2012). Improving the accuracy of self-assessment of practical clinical skills using video feedback: the importance of including benchmarks. Med. Teach. 34, 279–284. doi: 10.3109/0142159X.2012.658897

Huang, Y., and Gui, M. (2015). Articulating teachers' expectations afore: Impact of rubrics on Chinese EFL learners' self-assessment and speaking ability. J. Educ. Train. Stud. 3, 126–132. doi: 10.11114/jets.v3i3.753

Kaderavek, J. N., Gillam, R. B., Ukrainetz, T. A., Justice, L. M., and Eisenberg, S. N. (2004). School-age children's self-assessment of oral narrative production. Commun. Disord. Q. 26, 37–48. doi: 10.1177/15257401040260010401

Karnilowicz, W. (2012). A comparison of self-assessment and tutor assessment of undergraduate psychology students. Soc. Behav. Person. 40, 591–604. doi: 10.2224/sbp.2012.40.4.591

Kevereski, L. (2017). (Self) evaluation of knowledge in students' population in higher education in Macedonia. Res. Pedag. 7, 69–75. doi: 10.17810/2015.49

Kingston, N. M., and Nash, B. (2011). Formative assessment: a meta-analysis and a call for research. Educ. Meas. 30, 28–37. doi: 10.1111/j.1745-3992.2011.00220.x

Kitsantas, A., and Zimmerman, B. J. (2006). Enhancing self-regulation of practice: the influence of graphing and self-evaluative standards. Metacogn. Learn. 1, 201–212. doi: 10.1007/s11409-006-9000-7

Kluger, A. N., and DeNisi, A. (1996). The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol. Bull. 119, 254–284. doi: 10.1037/0033-2909.119.2.254

Kollar, I., Fischer, F., and Hesse, F. (2006). Collaboration scripts: a conceptual analysis. Educ. Psychol. Rev. 18, 159–185. doi: 10.1007/s10648-006-9007-2

Kolovelonis, A., Goudas, M., and Dermitzaki, I. (2012). Students' performance calibration in a basketball dribbling task in elementary physical education. Int. Electron. J. Elem. Educ. 4, 507–517.

Koriat, A. (2012). The relationships between monitoring, regulation and performance. Learn. Instru. 22, 296–298. doi: 10.1016/j.learninstruc.2012.01.002

Kostons, D., van Gog, T., and Paas, F. (2012). Training self-assessment and task-selection skills: a cognitive approach to improving self-regulated learning. Learn. Instruc. 22, 121–132. doi: 10.1016/j.learninstruc.2011.08.004

Labuhn, A. S., Zimmerman, B. J., and Hasselhorn, M. (2010). Enhancing students' self-regulation and mathematics performance: the influence of feedback and self-evaluative standards. Metacogn. Learn. 5, 173–194. doi: 10.1007/s11409-010-9056-2

Leach, L. (2012). Optional self-assessment: some tensions and dilemmas. Assess. Evalu. High. Educ. 37, 137–147. doi: 10.1080/02602938.2010.515013

Lew, M. D. N., Alwis, W. A. M., and Schmidt, H. G. (2010). Accuracy of students' self-assessment and their beliefs about its utility. Assess. Evalu. High. Educ. 35, 135–156. doi: 10.1080/02602930802687737

Lin-Siegler, X., Shaenfield, D., and Elder, A. D. (2015). Contrasting case instruction can improve self-assessment of writing. Educ. Technol. Res. Dev. 63, 517–537. doi: 10.1007/s11423-015-9390-9

Lipnevich, A. A., Berg, D. A. G., and Smith, J. K. (2016). “Toward a model of student response to feedback,” in The Handbook of Human and Social Conditions in Assessment , eds G. T. L. Brown and L. R. Harris (New York, NY: Routledge), 169–185.

Lopez, R., and Kossack, S. (2007). Effects of recurring use of self-assessment in university courses. Int. J. Learn. 14, 203–216. doi: 10.18848/1447-9494/CGP/v14i04/45277

Lopez-Pastor, V. M., Fernandez-Balboa, J.-M., Santos Pastor, M. L., and Aranda, A. F. (2012). Students' self-grading, professor's grading and negotiated final grading at three university programmes: analysis of reliability and grade difference ranges and tendencies. Assess. Evalu. High. Educ. 37, 453–464. doi: 10.1080/02602938.2010.545868

Lui, A. (2017). Validity of the responses to feedback survey: operationalizing and measuring students' cognitive and affective responses to teachers' feedback (Doctoral dissertation). University at Albany, SUNY, Albany, NY.

Marks, M. B., Haug, J. C., and Hu, H. (2018). Investigating undergraduate business internships: do supervisor and self-evaluations differ? J. Educ. Bus. 93, 33–45. doi: 10.1080/08832323.2017.1414025

Memis, E. K., and Seven, S. (2015). Effects of an SWH approach and self-evaluation on sixth grade students' learning and retention of an electricity unit. Int. J. Prog. Educ. 11, 32–49.

Metcalfe, J., and Kornell, N. (2005). A region of proximal learning model of study time allocation. J. Mem. Langu. 52, 463–477. doi: 10.1016/j.jml.2004.12.001

Meusen-Beekman, K. D., Joosten-ten Brinke, D., and Boshuizen, H. P. A. (2016). Effects of formative assessments to develop self-regulation among sixth grade students: results from a randomized controlled intervention. Stud. Educ. Evalu. 51, 126–136. doi: 10.1016/j.stueduc.2016.10.008

Micán, D. A., and Medina, C. L. (2017). Boosting vocabulary learning through self-assessment in an English language teaching context. Assess. Evalu. High. Educ. 42, 398–414. doi: 10.1080/02602938.2015.1118433

Miller, T. M., and Geraci, L. (2011). Training metacognition in the classroom: the influence of incentives and feedback on exam predictions. Metacogn. Learn. 6, 303–314. doi: 10.1007/s11409-011-9083-7

Murakami, C., Valvona, C., and Broudy, D. (2012). Turning apathy into activeness in oral communication classes: regular self- and peer-assessment in a TBLT programme. System 40, 407–420. doi: 10.1016/j.system.2012.07.003

Nagel, M., and Lindsey, B. (2018). The use of classroom clickers to support improved self-assessment in introductory chemistry. J. College Sci. Teach. 47, 72–79.

Ndoye, A. (2017). Peer/self-assessment and student learning. Int. J. Teach. Learn. High. Educ. 29, 255–269.

Nguyen, T., and Foster, K. A. (2018). Research note—multiple time point course evaluation and student learning outcomes in an MSW course. J. Soc. Work Educ. 54, 715–723. doi: 10.1080/10437797.2018.1474151

Nicol, D., and Macfarlane-Dick, D. (2006). Formative assessment and self-regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. doi: 10.1080/03075070600572090

Nielsen, K. (2014). Self-assessment methods in writing instruction: a conceptual framework, successful practices and essential strategies. J. Res. Read. 37, 1–16. doi: 10.1111/j.1467-9817.2012.01533.x

Nowell, C., and Alston, R. M. (2007). I thought I got an A! Overconfidence across the economics curriculum. J. Econ. Educ. 38, 131–142. doi: 10.3200/JECE.38.2.131-142

Nugteren, M. L., Jarodzka, H., Kester, L., and Van Merriënboer, J. J. G. (2018). Self-regulation of secondary school students: self-assessments are inaccurate and insufficiently used for learning-task selection. Instruc. Sci. 46, 357–381. doi: 10.1007/s11251-018-9448-2

Panadero, E., and Alonso-Tapia, J. (2013). Self-assessment: theoretical and practical connotations. When it happens, how is it acquired and what to do to develop it in our students. Electron. J. Res. Educ. Psychol. 11, 551–576. doi: 10.14204/ejrep.30.12200

Panadero, E., Alonso-Tapia, J., and Huertas, J. A. (2012). Rubrics and self-assessment scripts effects on self-regulation, learning and self-efficacy in secondary education. Learn. Individ. Differ. 22, 806–813. doi: 10.1016/j.lindif.2012.04.007

Panadero, E., Alonso-Tapia, J., and Huertas, J. A. (2014). Rubrics vs. self-assessment scripts: effects on first year university students' self-regulation and performance. J. Study Educ. Dev. 3, 149–183. doi: 10.1080/02103702.2014.881655

Panadero, E., Alonso-Tapia, J., and Reche, E. (2013). Rubrics vs. self-assessment scripts effect on self-regulation, performance and self-efficacy in pre-service teachers. Stud. Educ. Evalu. 39, 125–132. doi: 10.1016/j.stueduc.2013.04.001

Panadero, E., Brown, G. L., and Strijbos, J.-W. (2016a). The future of student self-assessment: a review of known unknowns and potential directions. Educ. Psychol. Rev. 28, 803–830. doi: 10.1007/s10648-015-9350-2

Panadero, E., Jonsson, A., and Botella, J. (2017). Effects of self-assessment on self-regulated learning and self-efficacy: four meta-analyses. Educ. Res. Rev. 22, 74–98. doi: 10.1016/j.edurev.2017.08.004

Panadero, E., Jonsson, A., and Strijbos, J. W. (2016b). “Scaffolding self-regulated learning through self-assessment and peer assessment: guidelines for classroom implementation,” in Assessment for Learning: Meeting the Challenge of Implementation , eds D. Laveault and L. Allal (New York, NY: Springer), 311–326. doi: 10.1007/978-3-319-39211-0_18

Panadero, E., and Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assess. Educ. 21, 133–148. doi: 10.1080/0969594X.2013.877872

Papanthymou, A., and Darra, M. (2018). Student self-assessment in higher education: The international experience and the Greek example. World J. Educ. 8, 130–146. doi: 10.5430/wje.v8n6p130

Punhagui, G. C., and de Souza, N. A. (2013). Self-regulation in the learning process: actions through self-assessment activities with Brazilian students. Int. Educ. Stud. 6, 47–62. doi: 10.5539/ies.v6n10p47

Raaijmakers, S. F., Baars, M., Paas, F., van Merriënboer, J. J. G., and van Gog, T. (2019). Metacognition and Learning, 1–22. doi: 10.1007/s11409-019-09189-5

Raaijmakers, S. F., Baars, M., Schapp, L., Paas, F., van Merrienboer, J., and van Gog, T. (2017). Training self-regulated learning with video modeling examples: do task-selection skills transfer? Instr. Sci. 46, 273–290. doi: 10.1007/s11251-017-9434-0

Ratminingsih, N. M., Marhaeni, A. A. I. N., and Vigayanti, L. P. D. (2018). Self-assessment: the effect on students' independence and writing competence. Int. J. Instruc. 11, 277–290. doi: 10.12973/iji.2018.11320a

Ross, J. A., Rolheiser, C., and Hogaboam-Gray, A. (1998). “Impact of self-evaluation training on mathematics achievement in a cooperative learning environment,” Paper presented at the annual meeting of the American Educational Research Association (San Diego, CA).

Ross, J. A., and Starling, M. (2008). Self-assessment in a technology-supported environment: the case of grade 9 geography. Assess. Educ. 15, 183–199. doi: 10.1080/09695940802164218

Samaie, M., Nejad, A. M., and Qaracholloo, M. (2018). An inquiry into the efficiency of whatsapp for self- and peer-assessments of oral language proficiency. Br. J. Educ. Technol. 49, 111–126. doi: 10.1111/bjet.12519

Sanchez, C. E., Atkinson, K. M., Koenka, A. C., Moshontz, H., and Cooper, H. (2017). Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: a meta-analysis. J. Educ. Psychol. 109, 1049–1066. doi: 10.1037/edu0000190

Sargeant, J. (2008). Toward a common understanding of self-assessment. J. Contin. Educ. Health Prof. 28, 1–4. doi: 10.1002/chp.148

Sargeant, J., Mann, K., van der Vleuten, C., and Metsemakers, J. (2008). “Directed” self-assessment: practice and feedback within a social context. J. Contin. Educ. Health Prof. 28, 47–54. doi: 10.1002/chp.155

Shute, V. (2008). Focus on formative feedback. Rev. Educ. Res. 78, 153–189. doi: 10.3102/0034654307313795

Silver, I., Campbell, C., Marlow, B., and Sargeant, J. (2008). Self-assessment and continuing professional development: the Canadian perspective. J. Contin. Educ. Health Prof. 28, 25–31. doi: 10.1002/chp.152

Siow, L.-F. (2015). Students' perceptions on self- and peer-assessment in enhancing learning experience. Malaysian Online J. Educ. Sci. 3, 21–35.

Son, L. K., and Metcalfe, J. (2000). Metacognitive and control strategies in study-time allocation. J. Exp. Psychol. 26, 204–221. doi: 10.1037/0278-7393.26.1.204

Tan, K. (2004). Does student self-assessment empower or discipline students? Assess. Evalu. Higher Educ. 29, 651–662. doi: 10.1080/0260293042000227209

Tan, K. (2009). Meanings and practices of power in academics' conceptions of student self-assessment. Teach. High. Educ. 14, 361–373. doi: 10.1080/13562510903050111

Taras, M. (2008). Issues of power and equity in two models of self-assessment. Teach. High. Educ. 13, 81–92. doi: 10.1080/13562510701794076

Tejeiro, R. A., Gomez-Vallecillo, J. L., Romero, A. F., Pelegrina, M., Wallace, A., and Emberley, E. (2012). Summative self-assessment in higher education: implications of its counting towards the final mark. Electron. J. Res. Educ. Psychol. 10, 789–812.

Thawabieh, A. M. (2017). A comparison between students' self-assessment and teachers' assessment. J. Curri. Teach. 6, 14–20. doi: 10.5430/jct.v6n1p14

Tulgar, A. T. (2017). Selfie@ssessment as an alternative form of self-assessment at undergraduate level in higher education. J. Langu. Linguis. Stud. 13, 321–335.

van Helvoort, A. A. J. (2012). How adult students in information studies use a scoring rubric for the development of their information literacy skills. J. Acad. Librarian. 38, 165–171. doi: 10.1016/j.acalib.2012.03.016

van Loon, M. H., de Bruin, A. B. H., van Gog, T., van Merriënboer, J. J. G., and Dunlosky, J. (2014). Can students evaluate their understanding of cause-and-effect relations? The effects of diagram completion on monitoring accuracy. Acta Psychol. 151, 143–154. doi: 10.1016/j.actpsy.2014.06.007

van Reybroeck, M., Penneman, J., Vidick, C., and Galand, B. (2017). Progressive treatment and self-assessment: Effects on students' automatisation of grammatical spelling and self-efficacy beliefs. Read. Writing 30, 1965–1985. doi: 10.1007/s11145-017-9761-1

Wang, W. (2017). Using rubrics in student self-assessment: student perceptions in the English as a foreign language writing context. Assess. Evalu. High. Educ. 42, 1280–1292. doi: 10.1080/02602938.2016.1261993

Wollenschläger, M., Hattie, J., Machts, N., Möller, J., and Harms, U. (2016). What makes rubrics effective in teacher-feedback? Transparency of learning goals is not enough. Contemp. Educ. Psychol. 44–45, 1–11. doi: 10.1016/j.cedpsych.2015.11.003

Yan, Z., and Brown, G. T. L. (2017). A cyclical self-assessment process: towards a model of how students engage in self-assessment. Assess. Evalu. High. Educ. 42, 1247–1262. doi: 10.1080/02602938.2016.1260091

Yilmaz, F. N. (2017). Reliability of scores obtained from self-, peer-, and teacher-assessments on teaching materials prepared by teacher candidates. Educ. Sci. 17, 395–409. doi: 10.12738/estp.2017.2.0098

Zimmerman, B. J. (2000). Self-efficacy: an essential motive to learn. Contemp. Educ. Psychol. 25, 82–91. doi: 10.1006/ceps.1999.1016

Zimmerman, B. J., and Schunk, D. H. (2011). “Self-regulated learning and performance: an introduction and overview,” in Handbook of Self-Regulation of Learning and Performance , eds B. J. Zimmerman and D. H. Schunk (New York, NY: Routledge), 1–14.

Keywords: self-assessment, self-evaluation, self-grading, formative assessment, classroom assessment, self-regulated learning (SRL)

Citation: Andrade HL (2019) A Critical Review of Research on Student Self-Assessment. Front. Educ. 4:87. doi: 10.3389/feduc.2019.00087

Received: 27 April 2019; Accepted: 02 August 2019; Published: 27 August 2019.

Copyright © 2019 Andrade. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Heidi L. Andrade, handrade@albany.edu

This article is part of the Research Topic

Advances in Classroom Assessment Theory and Practice

A Useful Strategy for Assessing Class Participation

  • September 14, 2008
  • Denise D. Knight

One of the changes we have seen in academia in the last 30 years or so is the shift from lecture-based classes to courses that encourage a student-centered approach. Few instructors would quibble with the notion that promoting active participation helps students to think critically and to argue more effectively. However, even the most savvy instructors are still confounded about how best to evaluate participation, particularly when it is graded along with more traditional assessment measures, such as essays, exams, and oral presentations. Type the words “class participation” and “assessment” into www.google.com, and you will get close to 700,000 hits.

Providing students with a clear, fair, and useful assessment of their class participation is challenging for even the most seasoned educator. Even when I provide a rubric that distinguishes every category of participation from outstanding to poor, students are often still confused about precisely what it is that I expect from them. It is not unusual, for example, for students to believe that attendance and participation are synonymous. On the other hand, when we attempt to spell out too precisely what it is we expect in the way of contributions, we run the risk of closing down participation. On one site that offers assessment guidelines, for example, the course instructor characterizes “unsatisfactory” participation as follows: “Contributions in class reflect inadequate preparation. Ideas offered are seldom substantive, provide few if any insights and never a constructive direction for the class. Integrative comments and effective challenges are absent. If this person were not a member of the class, valuable airtime would be saved.” The language used in the description (“inadequate,” “seldom,” “few,” “never,” and “absent”) hardly encourages positive results. The final sentence is both dismissive and insensitive. Shy students are unlikely to risk airing an opinion in a classroom climate that is negatively charged. Certainly, the same point can be made by simply informing students, in writing, that infrequent contributions to class discussions will be deemed unsatisfactory and merit a “D” for the participation grade.

While there are a number of constructive guidelines online for generating and assessing participation, the dichotomy between the students’ perception of their contributions and the instructor’s assessment of participation is still often a problem. One strategy that I have found particularly effective is to administer a brief questionnaire early in the semester (as soon as I have learned everyone’s name), which asks students to assess their own participation to date. Specifically, I ask that students do the following: “Please check the statement below that best corresponds to your honest assessment of your contribution to class discussion thus far:

_____ I contribute several times during every class discussion. (A)

_____ I contribute at least once during virtually every class discussion. (B)

_____ I often contribute to class discussion. (C)

_____ I occasionally contribute to class discussion. (D)

_____ I rarely contribute to class discussion. (E)”

I then provide a space on the form for the student to write a brief rationale for their grade, along with the option to write additional comments if they so choose. Finally, I include a section on the form for instructor response. I collect the forms, read them, offer a brief response, and return them at the next class meeting.

This informal self-assessment exercise does not take long, and it always provides intriguing results. More often than not, students will award themselves a higher participation grade than I would have. Their rationale often yields insight into why there is a disconnect between my perception and theirs. For example, a student may write, “I feel that I have earned a ‘B’ so far in class participation. I know that I’m quiet, but I haven’t missed a class and I always do my reading.” Using the “Instructor Response” space, I now have an opportunity to disabuse the student of the notion that preparation, attendance, and participation are one and the same. I also offer concrete measures that the student can take to improve his or her participation.

When this exercise is done early in the semester, it can enhance both the amount and quality of participation. It helps to build confidence and reminds students that they have to hold themselves accountable for every part of their course grade, including participation.

  • Denise D. Knight, PhD, is a distinguished teaching professor of English at State University of New York College at Cortland.


Grading Class Participation

Teachers often include the assessment of classroom participation – or classroom contribution, as it is sometimes called – in an assessment strategy to encourage students to participate in class discussion, and to motivate students to do the background reading and preparation for a class session. When you assess participation in classroom discussion, you also encourage and reward development of oral skills, as well as group skills such as interacting and cooperating with peers and a tutor. Classroom participation can encompass active learning in a lab, studio, tutorial, team or group, online (e.g. in eportfolios and learning-management systems) or in role-plays and simulations.

It is possible to assess classroom participation in a wide variety of learning contexts:

  • undergraduate and postgraduate coursework in which students need to develop practical and generic skills as well as to assimilate a body of theoretical knowledge
  • postgraduate clinical programs (such as in medicine, psychology or social work) where personal qualities and interpersonal and communication skills are crucial learning outcomes
  • humanities-based courses in which written and spoken discourse and discussion are integral parts of the learning process
  • courses that traditionally privilege the delivery of a large body of content that can benefit from a more student-centred approach based on assessing classroom participation
  • online learning where students are expected to take part in activities such as blogs, wikis, discussion boards or chat rooms.

With appropriate consideration of curriculum design and learning outcomes, the assessment of classroom participation can be used in any course.

Assessing participation regularly and consistently can offer benefits for both students and teachers. For students, it:

  • Increases motivation, as students need to take responsibility for their own learning
  • Encourages students to prepare for class and to do the weekly readings, lab notes and studio preparation
  • Encourages students to be active participants in classroom activities
  • Encourages students to think and reflect on issues and problems that relate to the class, including lab and studio preparation
  • Encourages students to develop oral, aural and language communication skills and to demonstrate them in their interactions and cooperation with peers and educators.
  • Fosters the development of a student's communication and presentation skills in individual and group presentations
  • Encourages participation and social interaction in the sharing of ideas and concepts
  • Develops respect for others' points of view in cooperative and collaborative learning environments
  • Develops group and team skills
  • Develops students' capacity to critique peers' responses in a supportive and collegial environment
  • Through fostering students' active involvement in their own learning, increases what is remembered, how well it is assimilated, and how the learning is used in new situations.

For teachers, it:

  • Creates a fair and equitable environment that gives all students an opportunity to participate
  • Aids in creating a valid and reliable assessment that clearly details in the course outline what is expected of students
  • Requires both explicit marking criteria and holistic rubrics, which provide a foundation for students to receive feedback that is timely and specific to the task
  • Assists in developing a reliable assessment task because it requires consistency, a standards-based approach and a clear articulation of the teacher's expectations.

Assessment of classroom participation can be highly subjective, as there may be little evidence outside the classroom to support the judgements made on individuals' performance. The role of teachers can be problematic, as they are required to both facilitate and mark the learning. The assessment of class participation may be hampered by a teacher's lack of skills and experience in facilitating active learning in classrooms.

A range of issues may also affect the fairness of the assessment strategy. When students do not participate, it may not be because they are not prepared; they may be shy, or classroom dynamics may be problematic, allowing other students to dominate, for example. There may be cultural or language problems, and/or gender issues. Students may be anxious about classroom participation being assessed, and this may well change the nature of the classroom interactions.

Students face numerous challenges when being assessed on classroom participation:

  • Assessment of participation can create tension, thus inhibiting active participation and contribution to discussion or other activities.
  • Student contributions may be affected by class size, group dynamics and other factors external to the purpose of the assessment.
  • Some international students find it difficult to participate in active classrooms due to cultural inhibitions and face-saving concerns; students from non-English-speaking backgrounds, for instance, may not be confident in their spoken-language ability and may feel shy about actively participating in a lab, preparing a presentation in a class or speaking in public, especially in front of many native speakers. (See Responding to Cross-Cultural Diversity.)

Assessing classroom participation also presents challenges for teachers:

  • Participation may be hard to assess objectively unless you are very clear as to what skills are being assessed and what criteria are being used.
  • The teacher must understand, and be able to develop, holistic and criterion-referenced rubrics, so that assessment is not affected by the teacher's ability to manage group dynamics or practical lab/studio sessions. (See Using Assessment Rubrics.)
  • Students may not come to class prepared with the necessary information, instruments or tools for active learning.
  • Students' attendance may be poor or erratic in labs, making assessment of participation less meaningful or useful.
  • Teachers must be skilled in classroom management, developing authentic tasks (see Assessing Authentically) and setting up a classroom environment in which participation is important.
  • Teachers need to develop explicit standards-based assessment as part of criterion-marking schemes; this should be workshopped and shared with colleagues and students if possible.

Assessing classroom participation is more valid when you align it with the course’s learning outcomes and those tasks that measure a student’s achievement. To develop strategies that align with the learning outcomes and the students' expectations, consider why you want to assess classroom participation and how you can assess these skills, attributes or behaviours. Good practice includes:

  • developing clear criteria by which participation will be marked
  • differentiating between attendance at labs, studio and tutorials, and participation in them
  • keeping the criteria simple
  • considering the reliability of the task
  • telling students how to prepare to actively learn and participate effectively in class
  • training tutors to facilitate an equitable and fair classroom
  • providing clear, timely and usable feedback on the nature and quality of participation
  • maintaining records of marks achieved by each student.

Note that rewarding students with marks merely for talking, or only to encourage them to participate further, will not adequately reflect their achievement in a higher-education setting, especially in later years of study.

Design an assessment for classroom participation

The principles underlying the assessment of performance in class are much the same as those for any form of assessment. Initially, the teacher needs to foster an ethos of active learning, classroom discussion and participation in the lab or studio environment. Keep clear records, mark against explicit guidelines, and ensure that students themselves play an active role in developing these rubrics.

  • Identify the qualities that you want students to demonstrate in their participation
  • Identify the criteria that you will use to assess whether students have displayed these qualities
  • Develop an assessment rubric and marking criteria that explicitly demonstrate to students what is expected of them
  • Let students participate in developing the assessment criteria
  • Get students to self- and peer-assess at the mid-point of a course
  • Give formative feedback at the mid-point of a course
  • Use online collaboration tools that capture student participation and tutor feedback automatically, making the grading process more transparent and evidence-based
  • Use holistic, rather than analytical, participation rubrics.

Interpret and grade classroom participation

  • Assess performance on clearly defined tasks, not on vague impressions of the quantity or quality of a student's contribution to the active learning.
  • Specify clearly the criteria for assessing the in-class performance of students; make sure they are in a form that students can translate into action or behaviour.
  • Provide students with the opportunity to learn the skills that are being assessed.
  • Ensure that all tutors are skilled in small-group teaching (see Teaching Small Groups); the assessment should not reflect the competence of the person facilitating the class.
  • Make sure that the assessment is fair to everyone; it should not discriminate against those with a disability, different genders or sexualities, different cultural groups, etc.
  • Involve students in the development of the rubrics.
  • Explicitly demonstrate the learning outcomes and their alignment to the assessment rubric.
  • Distribute the rubric to students at the beginning of the semester so they know which contributions, discussions and participation will merit high participation grades.
  • Provide ongoing and timely feedback; this is important to students, and can be provided in various forms.
  • Many of the class-participation approaches above can be applied in online environments (see Assessing by Discussion Board). Online discussions have the benefit of allowing close analysis of written contributions. These contributions may be assessed according to a) frequency; b) depth and quality; or c) the extent to which they provoke further discussion and debate on relevant topics; a minimal scoring sketch follows this list.
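To make criteria a), b) and c) concrete, here is a minimal sketch, in Python, of how raw contribution records might be combined into a provisional participation score. Everything in it is illustrative: the Post fields, the weights, and the use of word count as a proxy for depth are assumptions rather than a recommended marking scheme, and any such score would still need human moderation against the rubric.

```python
from dataclasses import dataclass

# Hypothetical record for one discussion-board post; the field names are
# assumptions for illustration, not any particular LMS's export format.
@dataclass
class Post:
    author: str
    word_count: int        # crude proxy for b) depth and quality
    replies_provoked: int  # direct replies, a proxy for c) provoking discussion

def participation_scores(posts, w_freq=1.0, w_depth=0.01, w_provoke=0.5):
    """Combine the three criteria into one raw score per student.

    The weights are illustrative; a real scheme would be calibrated
    against the course rubric and moderated by the marker.
    """
    scores = {}
    for p in posts:
        scores[p.author] = (scores.get(p.author, 0.0)
                            + w_freq                           # a) frequency
                            + w_depth * p.word_count           # b) depth (proxy)
                            + w_provoke * p.replies_provoked)  # c) debate provoked
    return scores

posts = [Post("alice", 180, 3), Post("alice", 90, 0), Post("bao", 40, 1)]
print(participation_scores(posts))  # -> roughly {'alice': 6.2, 'bao': 1.9}
```

A design point worth noting: a mechanical score like this rewards volume, so it is best treated as evidence to inform a holistic rubric judgement, not as the grade itself.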

Example rubrics

Table 1: Expectations for class participation

"Participation is graded on a scale from 0 (lowest) through 4 (highest), using the criteria. The criteria focus on what you demonstrate, and do not presume to guess at what you know but do not demonstrate. This is because what you offer to the class is what you and others learn from." ( Maznevski, 1996)

Table 2: Group Participation Rubric

This peer-assessment rubric is to help students in groups/teams evaluate the participation of individual members in the group/team presentation (Source: Making the Grade: The Role of Assessment in Authentic Learning, PDF).

Use technology

When you use a learning management system such as Moodle, some class participation becomes very easy to measure. You can produce several different kinds of reports on the activity of students within your course. You can also obtain a quick scan of students' discussion-forum posting activity for the purpose of awarding a participation mark.
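
As a rough illustration only, such a report, once exported, can be turned into provisional marks before a human pass for quality. The sketch below assumes a hypothetical CSV with student and posts columns; Moodle's actual report formats vary with version and configuration.

    # A sketch only: convert an exported activity report into capped
    # provisional marks. The CSV column names here are assumptions.
    import csv

    def provisional_marks(csv_path, full_mark=5, posts_for_full=10):
        marks = {}
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                posts = int(row["posts"])
                # Cap the mark so volume alone cannot dominate; quality
                # still needs the marker's judgement.
                marks[row["student"]] = min(full_mark, full_mark * posts / posts_for_full)
        return marks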

Comments from UNSW academics

From a teacher in a class of 25 students:

"I had used [assessing classroom participation] before, in other classes, but was always struck by the fact that students could challenge the mark I gave them and I would have no way of defending the decision I had made. I wanted a more transparent and fairer system to allocate marks. 

"I developed a rubric, or marking template, where the five criteria for classroom participation were described at each of the levels of pass, credit, distinction and high distinction. This formed my marking template. In each class, apart from the first one, I selected five students that I was going to assess that week. The students did not know in which week they were being assessed. At the end of the class I would record on the marking template, one for each student who was being assessed that week, their scores across each of the five criteria (recorded by ticking the box) and a brief comment I could generate using the criteria. I would record the student's name and the date on each of the marking templates. At the end of the first six weeks, by which time I had assessed all of the 25 students, I handed back the marking templates to the whole class.  I would then repeat the exercise for the second part of the semester; again, students did not know in which week they were being assessed, but received the feedback in class at the end of the six-week period. 

"I felt that this strategy was fair, and transparent. I did not need to remember at the end of the session what everyone had done, but recorded it week by week, a process that did not take very long. No students challenged the marking, and I saw better and more constructive classroom participation as a result."  

From a teacher in Law:

"The other tip I liked was giving students five tokens to 'spend' on classroom participation; thus they had to talk, and they had to choose when to talk, and they couldn't talk too much. I adapted this in one of my classes by banning anyone who had spoken from speaking again until everyone had a turn. I was nervous about it being confrontational for the shy ones, but in fact the quiet ones told me they really liked it! I intend to do it again."

Assessing Classroom Participation in Practice

Assessing Laboratory Participation - Dr Iain Skinner

(See Transcripts for audio)

In this video, Dr Iain Skinner presents his rationale and approach to assessing student participation in laboratory work. The video also shows how the strategy works in a real classroom situation.

Case study: School of Humanities Associate Professor Karyn Lai's course assessment for ARTS1362, Thinking about Reasoning

This is a first-year course offered within the Philosophy curriculum. However, enrolment is open to students from all undergraduate programs within UNSW. Objectives of the course focus on developing students’ capacity to think clearly, reason productively, argue well and develop analytical, critical and interpretive skills, which are important to life as a whole, beyond the knowledge and skills required for any particular profession or vocation. The course uses a combination of classroom and online teaching resources to give students the benefits of classroom teaching and more opportunities for practising critical thinking skills.

Why participation?

As this is a skills-based course, it is particularly important for students to take an active role in their learning. Hence, students’ participation is critical if they are to do well in their course. There are two primary reasons that undergird the focus on participation skills in this course. First, participation increases the opportunity for students to engage in active learning, as contrasted with them passively absorbing content. Secondly, participation provides opportunities for students to learn from peers. Exposure to different views places the onus on students to compare and evaluate the views. In addition, students may be asked by their peers to provide justification for their view, compare it with competing ones, convince others, assess new ideas and revise existing beliefs. Feedback from peers may help in the development of knowledge and skills.

This raises the issue of how students’ participation skills are developed in the course, and assessed appropriately.

Developing participation skills

The course uses a participation rubric to (a) draw students’ attention to salient aspects of participation and (b) allow them to identify and evaluate their strengths and weaknesses in participation.

Table 3: Participation rubric (for in-class participation and critical analysis assignment)

This rubric is made available to students in the course outline and students will be familiarised with the marking criteria used in the rubric.

Assessing participation skills: Critical-analysis assignment

The rubric is used in an online critical analysis assignment, whereby students are required to engage in small-group online discussions on a set topic for a fortnight. At the end of the fortnight, each student submits an individually written essay on the topic.

The assignment seeks to assess students’ capacities for participating with peers in an online critical-thinking exercise. From the point of view of developing students’ critical-thinking skills, participation in online discussions that allow students to explore debates and issues in an in-depth way may allow them to improve their communicative and interpretive skills as well as higher-order thinking skills.

The instructions for the assignment are as follows:

Critical analysis

Students will be given a set topic or article to review. There are two parts to this assignment. The first involves small-group online discussions on the topic. The point of this is to allow students to participate in these discussions in order to learn from a range of different perspectives on the topic. The second is an individually written reflective essay that encourages students to draw from the discussions to present a well-reasoned piece on the topic.

Details of the two parts of this assignment are as follows:

  • Participation in small-group online discussions in Blackboard [Blackboard was the LMS used at UNSW prior to Moodle] (Friday 17th August–Wednesday 29th August), to discuss a set topic – 15%. A rubric setting out the participation criteria will be available on Blackboard and is also included in this course outline. Rationale: The purpose of the online discussions is to give you an opportunity to test your views and then to refine them before handing your written piece in. So take every opportunity to try out your ideas with others – especially if they don’t agree with your analysis, as this will force you to reconsider your view. Your provision of a modified view, or a good justification for your initial view, is the primary objective of this exercise.
  • 300-word individually written essay on the same topic (Friday 31st August) – 15%

Marking criteria

Students should focus on:

  • identification of the issues at stake in the debate
  • clear expression of ideas
  • coherent structure of essay
  • ability to take a detached position with respect to the article/theme, and to state why you agree or disagree with particular points of view
  • ability to raise questions or issues that warrant further debate or thought.

Further resources

  • Harvard University: Encouraging Student Participation Online – and Assessing It Fairly
  • Making the Grade: The Role of Assessment in Authentic Learning [PDF]
  • Vanderbilt University: Tips for Encouraging Student Participation in Classroom Discussions

Kim, A. S. N., Shakory, S., Azad, A., Popovic, C., & Park, L. (2020). Understanding the impact of attendance and participation on academic achievement. Scholarship of Teaching and Learning in Psychology, 6(4), 272-284. https://doi.org/10.1037/stl0000151

Maznevski, M. (1996). Grading class participation. Teaching Concerns: A Newsletter for Faculty and Teaching Assistants.

Springer, M. (2023). Rethinking participation: Benefits from reflective assessment.

Academic Development Centre

Class participation

Assessing class participation to promote / support learning

Introduction

Assessing class participation to encourage and develop student engagement in learning activities is sometimes used in the HE sector. This might include involvement in face-to-face activities such as seminars, discussions, debates, group work, experiments, simulations or placements. Participation in online activities, such as discussions in chatrooms, posting on bulletin boards, online forums or webinars, could also be taken into account.

We are not suggesting rewarding attendance; rather, we aim to identify those characteristics that we expect ‘effective’ learners to exhibit [preparation, participation, collaboration, etc.] and, by rewarding demonstration and achievement of such behaviours, both promote and emphasise their value. This is something of a ‘marmite’ method of assessment: some colleagues value the approach and feel that it is necessary to encourage (through grades or marks, however small) good study characteristics; whilst others consider that students should already have these habits, or should realise that developing them will achieve better learning gain and hence enhanced assessment results.

What can we measure by assessing class participation?

A starting point here is that only that which is ‘visible’ can be assessed. It is not possible to assess directly attitudes or dispositions, for example. Whilst it may be easy to assess attendance, number of questions asked, time spent on an activity, involvement in debates, etc., there is a tension between what can be readily assessed and what it is desirable to assess. Easily assessed characteristics tend not to be associated with higher order thinking or deep learning.

Some of the aims of assessing class participation are to:

  • encourage students to participate in discussion
  • motivate students to engage with background reading
  • prompt students to prepare for a learning session
  • encourage and reward development of communication skills and group skills, such as:
      • contributing
      • interacting
      • cooperating
      • collaborating.

Participation can take different forms: face to face; online; written; spoken; as groups; as individuals; or a combination of these.

Assessing participation indicates to learners the behaviours and attitudes that are seen as important (Bean and Peterson, 1998). By assessing involvement in a group activity, for example, students will be given the signal that the process of undertaking the task is important, not just a final outcome or product.

It might be beneficial to consider these questions during the design process:

  • which of the module intended learning outcomes will be assessed through participation?
  • which of the Bloom’s Taxonomy-related verbs (Bloom, 1956), such as analyse, evaluate or create, found in the intended learning outcomes will this assessment ‘method’ focus on?
  • what sort of behaviours are desirable in the learning contexts? For example: speaking, listening, making a contribution online, being part of a team, leading a team, taking a part in a role play or completing a particular ‘hands-on’ activity in a laboratory? Can all of these behaviours be observed or can evidence for them having taken place be obtained?
  • is it necessary to distinguish the quantity of participation from the quality of participation? For example, is it desirable that students should ask questions (and perhaps the more the better), or is there concern for the type of question asked and whether the questions support higher order thinking in line with the intended learning outcomes?
  • how will meaningful feedback (judgemental) and feed-forward (developmental) opinion be given during the class?
  • if the participation in group work is to be assessed then it may only be possible through the use of self- or peer-assessment; how will this be implemented?

Bean and Peterson (1998) provide a Holistic Rubric for Scoring Class Participation, and the table below is adapted from that source:

Thinking about the quality rather than the quantity of participation, they usefully add that:

  • a 5-score may also be appropriate for an active participant whose contributions are less developed or cogent than those of a 6 but still advance the conversation
  • an award of 3 may result from the student being shy or introverted. The tutor may choose to give such students a 5 if they participate fully in smaller group discussions or if they make progress in overcoming shyness as the course progresses.

You may, like us, be surprised that a student should gain any mark for the sort of behaviour described by a 1-score; the grid is given as a starter rather than an example of good practice. The important point is that as well as a clear rationale for assessing participation and reflecting this in the learning outcomes, we should develop and publish the criteria by which we will assess our learners.

Bauer (2002) and Penny & Murphy (2009) advocate the use of rubrics to inform learners what they should be doing in order to signal the kinds of learning and thinking that are expected for success. A rubric normally comprises three main features (Reddy and Andrade, 2010):

  • evaluation criteria: which are usually mapped to the learning outcomes or competencies that are to be measured
  • quality criteria: qualitative descriptions of what is expected for a given grade or mark
  • scoring system: grade ranges or degree classifications mapped to the quality description.
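
These three features map naturally onto a simple data structure. The sketch below is illustrative only: the criterion, band labels and marks are invented for the example rather than drawn from any rubric cited here.

    # Illustrative only: Reddy and Andrade's three rubric features as data.
    from dataclasses import dataclass

    @dataclass
    class Criterion:
        name: str          # evaluation criterion, mapped to a learning outcome
        descriptors: dict  # quality criteria: band -> description
        band_scores: dict  # scoring system: band -> mark

    preparation = Criterion(
        name="Preparation",
        descriptors={
            "distinction": "Contributions consistently draw on the set readings.",
            "pass": "Contributions show some familiarity with the readings.",
            "fail": "No evidence of preparation.",
        },
        band_scores={"distinction": 4, "pass": 2, "fail": 0},
    )

    print(preparation.band_scores["pass"])  # 2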

The following example, from Hack (n.d.), illustrates this:

Diversity & inclusion

An assessment approach would be unfair if it benefitted certain personality types or cultural norms. For example, students who describe themselves as extroverts may find participation in group activities easier and receive higher marks independently of effort. In addition, international students who may be learning in a second language may be slower to contribute, not through lack of understanding or ability, but merely through command of language; is this what we wish to assess? This is, of course, less of an issue if we are online, as asynchronous communication can give these students the necessary extra time. Indeed, a small-scale study on perceptions of inclusivity amongst students at Warwick revealed that the expectation to actively participate in class can be perceived as non-inclusive (WIHEA funded project 2019).

In addition, we can help students prepare by giving:

  • a writing exercise, before class, based around the types of questions that will be posed in class, from which the student can read in their responses rather than having to respond spontaneously
  • a brief free-writing period after the question is posed, e.g. 3-5 minutes of silence during which students write their initial thoughts down
  • an asynchronous online activity that allows students to think through their responses without being pressured by having to listen to those students who might respond more readily and dominate the interactions.

If a percentage of the final assessment for a module is for participation, then in effect it becomes obligatory. For some students it is daunting to be required, through the assessment process, to speak in seminar groups or to contribute to group work. They may require support to do this. It cannot be assumed that poor participation is a deliberate choice; there may be other reasons for it, such as diagnosed or undiagnosed social anxiety. Crossthwaite et al. (2015) report that students considered in-class participation measures unfair. Further, Gopinath (1999) found that students were not positive about a model in which peer- and self-assessment was used. As with all methods of assessment, we need to discuss the approach with our learners in order for them to see its relevance and value and to understand how they will be measured.

Academic integrity

The nature of this form of assessment means that there are very few opportunities for academic misconduct. Where students have to contribute online or as the result of group work, it is important that the difference between collaboration, which you might want to encourage, and collusion is discussed.

Student and staff experience

Benefits for students

LSE (2017) suggest that “fostering students’ active involvement in their own learning increases what is remembered, how well it is assimilated, and how the learning is used in new situations.” Combining their claims for the benefits to students with the work of UNSW (2019), the list is impressive.

Activities involving class participation will:

  • encourage students to prepare for class and to do the weekly readings, lab notes and studio preparation
  • encourage students to be active participants in classroom activities and encourage them to take responsibility for their learning
  • encourage students to reflect on issues and problems that relate to the class
  • encourage students to develop oral, aural and language communication skills and to demonstrate them in their interactions and co-operation with peers and staff
  • foster the development of a student’s presentation skills in individual and group presentations
  • foster students’ analytical skills and their capacity to critique ideas and concepts in a supportive environment
  • support students in developing their collaborative and team-working skills
  • develop students’ capacity to critique peers' responses in a supportive and collegial environment
  • increase motivation, as students need to take responsibility for their own learning.

Challenges for students

  • creates tension: students may think it better to keep quiet than get it wrong; hence assessment results in the polar opposite of intent
  • contributions may be affected by class size and group dynamics
  • international students may find it difficult to participate due to language and cultural issues.

Challenges for staff

Even with clear criteria and rubrics, assessing classroom participation can be highly subjective. In addition, staff need to facilitate the teaching/learning session and assess at the same time; both activities can be challenging individually, and together they require a high level of skill.

Given the usually low marks assigned to this form of assessment, students may be strategic and simply not turn up. This makes the assessment less meaningful and useful. Macfarlane and Tomlinson (2017) critique ‘participation’ because of associated threats to ‘student rights and freedoms as learners’ (p.5), and offer six critiques of student engagement, including performativity, where easily measurable student behaviours are monitored and audited (measuring what can be rather than what should be measured), and infantilisation, where adults are treated like children. Considering carefully the motivations for including assessment of participation is important; it might be argued that a ‘deficit’ view of how students behave, and a desire to adapt their behaviour, is not a good reason to assess participation.

For students: given the drivers for this type of assessment, one could expect students’ workload to increase, but one might argue that they should be doing this work anyway.

For staff: despite the challenges of design and implementation, Heyman and Sailors (2010) found that peer assessment of participation took little time and was easy to undertake. Bean and Peterson (1998), on the other hand, found that it was hard for academics to keep records of participation.

If participation is assessed across a module a large amount of data can be accumulated very quickly and this would need management and processing. If participation in a one-off learning experience is assessed then the amount of data would be reduced but, perhaps, be less representative of true participation of the learners.


Organizational Change: An Action-Oriented Toolkit

Instructor resources: Class Participation Self-Assessments

A note from the authors:

Many professors want to use participation in the class for part of the grade assigned. Determining the amount of participation is always an issue and open to dispute by students. In order to avoid disputes, faculty can ask students to self-assess their participation, which we then confirm or discuss further. Our experience with this is very positive. We find the students appreciate the feedback.

Our procedure is to ask students to fill in the self-assessment sheet after each class. We collect the sheets and review them quickly to see if we disagree with those assessments. If we agree, we initial the assessment. If we disagree, we state this and give reasons why. This may lead to further discussion about standards and expectations. At the end of the term, we calculate an average participation score for each person and assign a grade based on this.

Other faculty have asked students to complete the self-assessment every third or fourth class. They then follow a similar procedure but with fewer data points.
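
The arithmetic in either variant is simple. The sketch below is a minimal illustration with placeholder values: self-assessed scores per class, instructor corrections overwriting disputed entries, and a term average mapped to a grade; the scale and cut-offs are invented, not the authors' own.

    # Placeholder scale and cut-offs; only the averaging step follows the text.
    def term_participation(self_scores, overrides=None):
        # self_scores: one self-assessed score per class session
        # overrides: instructor corrections, keyed by session index
        adjusted = list(self_scores)
        for i, value in (overrides or {}).items():
            adjusted[i] = value
        return sum(adjusted) / len(adjusted)

    avg = term_participation([3, 4, 2, 4], overrides={2: 3})  # third class raised
    grade = "A" if avg >= 3.5 else "B" if avg >= 2.5 else "C"
    print(round(avg, 2), grade)  # 3.5 A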

Self-Assessment Sheets

Assessing the Assessment: An Evaluation of a Self-assessment of Class Participation Procedure

Eddy White (2009), The Asian EFL Journal Quarterly, September 2009.


Peer Evaluation of Class Participation

This memo describes a mechanism for evaluating class participation in courses where it matters, refined and developed over a couple of decades but surely not perfected.

Plenary sessions of a college or graduate course are increasingly regarded as an opportunity for the students to apply and explore the tools and content of the curriculum collaboratively (cf. the “Flipped Classroom” literature, e.g. http://www.knewton.com/flipped-classroom/). Examples include “case-method” teaching (business or law school versions), multiple small-group discussions with coaching, and any discussion-based classroom pedagogy. This memo does not review the many reasons or contexts in which learning is best accomplished as a collaborative, social process.

In this learning environment, students teach each other extensively, and a course grade (to be fair, and to send the right signals) must in part reflect their respective success at doing this.  Furthermore, college and graduate students have been socialized extensively to believe that flattering the prof will be good for them, and commonly fear that course grades are zero-sum, so they can only advance at the expense of others.   A lot of learned behavior and expectations have to be undermined. 

I also take as given that the correct criterion for this part of a grade is the student’s contribution to the learning of others. The problem this criterion presents is that it cannot be observed outside the heads of those doing the learning, and proxy indicators like “how often student A’s contributions match what I (the prof) believe to be correct” are compromised by my ego and anyway say nothing about the value those contributions are or are not creating for other students.

On the principle that I have the right to demand information (like answers on exams) that I need to make a fair performance evaluation, I demand information about their learning from others. In response to students’ preference that I grade class performance – that they are diffident about ‘grading each other’ – I’m happy to say “of course I assign this grade, like any other grade. But you have to give me the information with which I can do so.” I suggest to students who believe they can learn without others that they will do better on the web and in the library, and should not take the course.

Finally, as Lauren Resnick pithily observed, “in school, collaboration is cheating; in the workplace, it’s essential”.  When I write letters of recommendation, I often have occasion to include the following text, and I think it has a good effect:

Student X took my course Y in semester Z [paper, projects, yada yada] ….In this course, class participation counts for 00% of the grade [varies from 25% to 40%] and is assessed by the other students in a confidential survey.   Wallflowers, unprepared students, and air hogs tend to do poorly on this element.  X received a CP grade [of  G/in the top 00% of the class], and I consider this an indicator of real leadership potential.

Twice during the semester and a third time at the end, I circulate an Excel spreadsheet with two alphabets of named rows, distinguished by color (example in Appendix A). The students are instructed to record a score from 1 to 5 for each other student – in the second alphabet for (i) students in their section, (ii) students who critiqued their paper drafts, and (iii) students from whom they received a draft to critique, and in the first alphabet for everyone else. Scores must total 3N for a course of N students. They give themselves 7 in the second alphabet.

A few wiseguys occasionally give everyone the same grade: I discard their scores as uninformative (which costs them their own 7). I calculate the mean scores for each student in each panel (I usually have a GSI copy the score column from each response into a master spreadsheet), weight the means in the second alphabet 1.5 to 2x the scores in the first, and sum them into a total score. I order the names by total score, alphabetize the names within quartiles (or terciles, depending on the size of the class – no reason someone should be at the very top or very bottom of a list like this), and publish the resulting list (without numerical scores).

The first two rounds of this survey don’t count for grades, but are purely advisory; the last one counts. With the GSIs (teaching assistants), I assign a letter grade to the student receiving the lowest score, and the other grades go up from there to A or A+. In principle, and often nearly in fact, everyone can get a very high grade.
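
The aggregation is mechanical enough to sketch. The code below follows the steps described above under an assumed data shape (rater -> ratee -> (score, panel)), with self-scores omitted for brevity and the 1.5x weight chosen from the stated 1.5-2x range; it is a reconstruction, not the author's actual spreadsheet.

    # A sketch of the aggregation, not the author's spreadsheet procedure.
    from math import ceil
    from statistics import mean

    def publish_order(scores, panel2_weight=1.5):
        # Discard raters who gave every other student the same score.
        informative = {r: rr for r, rr in scores.items()
                       if len({s for s, _ in rr.values()}) > 1}
        received = {}  # ratee -> {panel: [scores]}
        for rr in informative.values():
            for ratee, (s, panel) in rr.items():
                received.setdefault(ratee, {1: [], 2: []})[panel].append(s)
        # Weight the second panel's mean, then sum into a total score.
        totals = {ratee: (mean(by[1]) if by[1] else 0)
                         + panel2_weight * (mean(by[2]) if by[2] else 0)
                  for ratee, by in received.items()}
        ranked = sorted(totals, key=totals.get, reverse=True)
        # Alphabetize within quartiles so no one sits at the very top or bottom.
        q = max(1, ceil(len(ranked) / 4))
        return [name for i in range(0, len(ranked), q)
                for name in sorted(ranked[i:i + q])]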

Ancillary practices

Some standard discussion management practices should be recalled here, because a grading system is not independent of other elements of a course’s culture.

A lecture hall is not a discussion classroom. Students need to see each other’s faces, and need name cards, every day, with names on the front and back. Early in the semester, it’s necessary to bring markers and card stock to class daily, and invite every student without a name card down to the front of the class to make herself another, in a nice way. You need to learn the names. A good trick for this is to go around the room with a video camera one day, having each student hold up his name card and say his name and ‘one interesting fact about yourself’; a few times through this tape and you will have the names cold.

Peer CP grading requires additional practices to signal and reinforce what’s sought and why, and repetition is very important here.  For example, post the video described above on the course website for the students to use. The idea that students are responsible for the learning of others is not a model they slip into easily. I like to emphasize the devious incentives to help others improve their performance built into the grading rule. 

Everything  about this peer grading process must be transparent to the students, except who gave whom what score. All the information in this memo is shared with the students, sometimes more than once, in the syllabus and in class, from the start of the course. 

I do not require attendance. I tell the students that they are grownups, and if they have more to gain being somewhere else they should definitely be there. But I record attendance carefully and include it on the survey form, with instructions to use it as they wish, or not at all. Attendance in courses graded in this fashion hovers around 95%; this is probably higher than my own lifetime “showing up for important stuff” rate. I also tell them that everyone misses a class now and then, but if they have to, it’s polite to email the class explaining why so people don’t get the wrong idea. They are quite reluctant to do this. Some still email me for permission to miss class, and I just tell them they’re not there for my benefit and I’m not in charge of that “permission” either way.

I tell the students that grownups use laptops in important meetings and that others will probably be happy if they find something on the web in the middle of class that advances the discussion (this is fairly common). They figure out on their own pretty quickly that reading email or bidding on eBay with their name card right in front of them, and three or four students beside and behind them who will be grading their CP, is not a good idea, and I have not had any problem with laptops or phones in class.

Students sometimes ask me for criteria to use in scoring.  I tell them that unlike an exam, there’s no reason everyone’s performance should be assessed the same way by everyone, as people learn in different ways and use different gifts to teach.  But I distribute a list of criteria students have found useful, especially to broaden their scope (Appendix B).

Almost immediately, someone grousing about the process will use language like “I don’t want to/feel able to evaluate other students.”  I lie in wait for this, and pounce on it, in a nice way: “Of course you shouldn’t do that: no human being has the right to evaluate another person!  This is about evaluating performance at a specific task .  But one reason performance evaluation is the most corrupt and incompetent function in almost any organization, and why people hate it and avoid it, is that it  feels like evaluating people.  So we have to be careful not to use that language, even as shorthand.”

I distribute the first two survey results with some language reminding people that every group of people including Cy Young Award winning pitchers, Nobel Prize physicists, and even Berkeley students, has a top, middle, and bottom tercile on any given measure.  I offer “what can I do with this information?” advice, in the email, along the lines of “first, look at the people in the top tercile.  What are they doing?  Try it!  Second, look at the people lower in the list and see what you can do to encourage them to get in the game more.   Third, pick three people at random from the class, and take each one out for a latte in return for telling you two things they think you could do to get a higher score, and two things you should do even more of.”  I also cold-call people in the bottom tercile or bottom two quartiles, because plain shyness is often one component of not contributing, and a fair amount of this still water actually runs pretty deep.   I never cold-call enough and students are always grateful when I do it more.

Appendix A- CP Form

Appendix B- Peer Evaluation Criteria

Michael O'Hare's blog: http://www.samefacts.com/

Rubrics for Class Participation

Introduction

Most instructors expect students to participate actively in their courses and often specify that a portion of the students’ grade will be determined by class participation. However, class participation can become an amorphous category without any specific standards for determining points allocated toward a grade. That is, the behaviors involved are rarely specified, and related expectations are rarely explicated to students. Developing a rubric for class participation allows an instructor to be specific about how participation is evaluated, including the relevant dimensions and the meaning of the various performance levels. (See also: Creating Rubrics.)

Participation has been understood as comprising a variety of behaviors, both active and passive. For on ground courses, definitions have included quantity and quality of contributions to class discussions, volunteering to lead class discussions, listening attentively and responding to classmates, asking questions, coming to class prepared, attending regularly, being respectful of the instructor and classmates, being supportive of classmates, being mindful of the group’s dynamics, and engaging in small group activities. For asynchronous online courses, many of the dimensions have been the same, with the addition of behavior specific to online posts (e.g., format, mechanics, timeliness, responsiveness to prompts, and discussion etiquette).

Participation – in its many forms – can be thought of as a skill to be developed and thus adopted as a course learning objective. This objective should specify which behaviors are to be developed in the course and therefore evaluated through rubrics.

The benefits of classroom participation for student learning include

  • Encourages students to prepare for classes and engage with course readings and materials
  • Encourages students to think about issues that relate to the class
  • Encourages students to develop and demonstrate oral communication skills
  • Fosters the development of students’ presentation skills
  • Encourages students to clearly articulate and share ideas
  • Develops respect for others’ points of view
  • Develops group and team interaction skills
  • Develops students’ capacity to critique peers’ responses in a supportive manner
  • Through fostering students’ active involvement in their own learning, increases what is remembered, how well it is assimilated, and how the learning is used in new situations

The benefits of a classroom participation rubric include

  • Creates fair and equitable standards available to all students
  • Clearly details what is expected of students
  • Provides instructors with feedback on students’ understanding of course material
  • Provides instructors with a means of acknowledging students’ contributions
  • Provides a way for students to get timely and specific feedback

Creating and using rubrics for class participation does present some challenges, however. Specifically, student contributions – when thought of as including speaking up in class – may be affected by class size, group dynamics, and other factors unrelated to the purpose of the assessment. Additionally, some students find it difficult to participate actively in classrooms due to cultural inhibitions and face-saving concerns (e.g., students from non-English-speaking backgrounds or students from cultures that do not encourage voicing opinions).

A related potential problem with evaluating and rewarding class participation is that students who are confident speaking up in group situations may be disproportionately rewarded, regardless of the quality of their contributions. Conversely, students who are shy and those who feel self-conscious or uncomfortable speaking English in a group setting may be disadvantaged.

In fact, some educators believe that instructors should not grade participation. For example, Dr. James Lang, Professor and Director of the Center for Teaching Excellence at Assumption University, has argued against this practice. He believes that participation grades are subject to bias and memory distortions, advantage quantity over quality of comments, and disadvantage students who are shy or anxious or those who, for other reasons, find it difficult to participate. He recommends structuring a course to include other forms of student engagement, including small task-oriented groups and invitational participation, as well as creating a safe and inclusive environment that feels supportive rather than competitive.  Should We Stop Grading Class Participation?

In a follow-up article, Dr. Lang suggested two options for “fairly” grading class participation (although he does not intend to use them). One is to grade students not on their participation but on their engagement, which means their completion of class-participation activities that the instructor can track, collect, and assess. The idea is to have students produce something concrete that requires them to engage actively with the material and can be quickly evaluated to ensure that students are showing up to class and participating. A second is to create a rubric that covers all the forms of class participation in the course (e.g., attentive listening, asking questions, contributing to a discussion, participating in group work, or posting comments to a discussion board before or after class). The rubric can be given to students at several points throughout the semester, and they can be asked to log their participation.  Two Ways to Fairly Grade Class Participation

Good practices for creating and using rubrics for participation are similar to those for creating rubrics in general and include

  • Identify behaviors you want to include as participation
  • Identify the qualities that you want students to demonstrate in their participation
  • Relate dimensions evaluated to course learning objectives and make that relationship explicit for students
  • Identify the criteria that you will use to assess whether students have displayed these qualities
  • Let students aid in developing the assessment criteria; talk with students about characteristics of high-quality participation and use their feedback to develop criteria
  • Make sure that the assessment is fair to everyone; it should not discriminate against those with a disability, women, different cultural groups, etc.
  • Distribute the rubric to students at the beginning of the semester so they know which contributions will be rewarded with high participation grades
  • Teach students how to participate effectively in class
  • Provide clear, timely, and usable feedback throughout the course
  • Get students to self- and peer-assess during the course

Questions to Consider Before Assessing Student Participation University of Pittsburgh University Times

Assessing Class Participation University of Melbourne Teaching and Learning Quality Assurance Committee

Class Participation London School of Economics and Political Science Eden Centre

Developing Participation as a Skill Iowa State University Center for Excellence in Learning and Teaching

Grading Class Participation University of New South Wales/Sydney Teaching

Pros and Cons of Grading In Class Participation University of Calgary Taylor Institute for Teaching and Learning

Evaluating Students on Class Participation New York Institute of Technology Center for Teaching & Learning

Creating Rubrics for Effective Assessment Management University of Michigan Online Teaching

Making Criteria for Class Participation Explicit University of Wisconsin/Madison Writing Across the Curriculum

How to Add a Rubric Canvas

On Ground Courses

Specific dimensions that have been proposed for on ground class participation include the following:

  • Frequency: How often did the student participate during class?
  • Relevance: Were contributions relevant to the topic under discussion?
  • Preparation: Did the student appear to be adequately prepared? Did contributions reflect or apply the content of course or other readings?
  • Content of contributions: Were contributions factually correct and based on evidence?
  • Quality of ideas: Did the student contribute new ideas that were insightful and constructive and advanced the level and depth of the discussion?
  • Critical thinking: What was the evidence of critical thinking in the student’s contributions?
  • Listening skills: How well did the student listen to the contributions of others – indicated by comments that built on others’ remarks?
  • Civility: Did the student engage in civil behavior during discussions (avoid interrupting others, use respectful language, etc.)?
  • Group dynamic: Did the student contribute to improving the group dynamic by helping to focus the discussion or move it forward?

Asynchronous Online Courses

Some of the dimensions that have been proposed for evaluating asynchronous online class participation are the same as those proposed for evaluating on ground class participation (e.g., frequency, relevance, preparation, quality, and content of contributions). Others are specific to the online environment and include the following:

  • Initial assignment posting: Did the student post a well-developed initial post that fully addressed the task?
  • Follow-up postings: Did the follow-up posts demonstrate analysis of others’ posts and extend meaningful discussion by building on previous posts?
  • Responsiveness to prompts: Were all components of the discussion prompt addressed?
  • Style: Were the posts clear, concise, and well written?
  • Timeliness: Were posts distributed throughout the week?
  • Mechanics: Were posts formatted in a style easy to read and free of grammatical, punctuation, and spelling errors?
  • Discussion etiquette: Do posts show respect for viewpoints of others?
  • Contribution to learning community: Did the posts respond to other posts and attempt to move the group discussion forward?

Examples of Rubrics for Evaluating Participation

Rubric for Assessing Student Participation Carnegie Mellon University Eberly Center for Teaching Excellence

Sample Rubric – Class Participation Southern Methodist University Teaching Resources

Rubric for Participation in Class Temple University Center for Advancement of Teaching

Rubric for Evaluation of Class Participation Albany Law School

Rubric for Classroom Discussion Northwestern University Searle Center for Advancing Learning & Teaching

Rubric for Asynchronous Discussion Participation University of Delaware

Sample of Online Discussion Rubric University of Iowa

Sample Online Discussion Rubric University of Connecticut

Online Discussion Participation Rubric University of Central Florida

Online Discussion Rubric Northwestern University Searle Center for Advancing Learning & Teaching

IMAGES

  1. Self evaluation Essay Example

    class participation self evaluation essay

  2. ⇉Self Reflection on my Class Participation Essay Example

    class participation self evaluation essay

  3. Classroom Participation Process And Definition Essay Example

    class participation self evaluation essay

  4. Assessing the Assessment: An Evaluation of a Self-Assessment of Class…

    class participation self evaluation essay

  5. Trying to Teach: Self-Evaluation for Participation Grades Middle School

    class participation self evaluation essay

  6. In-Class Participation Self Evaluation

    class participation self evaluation essay

VIDEO

  1. Class Participation By Diksha Bhattarai Class 7

  2. Learn more at www.educatewithinfluence.com 🤎

  3. Seerat Zahra ( Grade Prep ) participated in Tell and Show

  4. Student Explains How She Uses Self-Efficacy

  5. Gender inequality in Indian politics||gender discrimination

  6. Unlocking the Secrets of Class Participation

COMMENTS

  1. Reflecting on Class Participation: a Self-evaluation Paper

    It involves engaging with course material, contributing to discussions, and collaborating with peers. In this self-evaluation essay, I will reflect on my class participation over the course of this semester, assess my strengths and areas for improvement, and outline my strategies for enhancing my participation in future academic settings ...

  2. Self-Reflection on Course Participation Essay (Critical Writing)

    Self-Reflection on Course Participation Essay (Critical Writing) Participation in class discussions and online activities is important in any learning endeavor because it promotes effective learning activities, stimulates creativity, and instills confidence. Active contribution to discussions is a reflection of competency of the skills I have ...

  3. PDF Encouraging and Evaluating Class Participation

    classroom for class participation; and peer, faculty and self-evaluation of class participation. Class-Participation Themes Class-Management Strategies Many academics consider class participation evidence of active learning or engagement that benefits learning, critical thinking, writing, appreciation of cultural differences, time management ...

  4. Participation Self-Assessment

    Participation Self-Assessment . College is a time of self-discovery that leads to the development of one's own voice. The expectation in my classes is that students develop their voices by creating oral arguments that are sustained with evidence (e.g., from the book, other classes or experiential evidence). This level

  5. Participation in the Classroom: Classification and Assessment Techniques

    Class participation captures the traditional notion of participation which involves being vocal and active within the classroom by answering and asking questions and by participating in class discussions and activities. "Course participation may include readily speaking, thinking, reading, role taking, risk taking, and engaging oneself and ...

  6. PDF Grading Participation article

    Microsoft Word - Grading Participation article.doc. Grading class participation signals students the kind of learning and thinking an instructor values. This chapter describes three models of class participation, several models for assessment including a sample rubric, problems with assessing classroom participation, and strategies for ...

  7. PDF Promoting excellence in teaching at the University of Virginia…

    Grading Class Participation by Martha L. Maznevski, Assistant Professor, McIntire School of Commerce In my experience, grading class participation is one of the most difficult aspects of student evaluation. Over the past few years I have spent a lot of time trying to figure out why I expect students to participate, how I want

  8. PDF Student self-evaluation for class participation

    Student Self-Evaluation for Class Participation. The following exercise is designed to help you reflect on how well you are doing with respect to class participation. Please review the first three modules that we have completed as you answer the following questions. This form is for self-evaluation, meaning you are assessing your own participation.

  9. Peer, professor and self-evaluation of class participation

    The purpose of this project was to determine the validity of peer and self-evaluations of class participation compared to professors' class participation grades. Students (N = 96) evaluated themselves and their classmates on class participation on a four-point scale and students were required to assign grades in a normalized distribution.

  10. A Critical Review of Research on Student Self-Assessment

    This article is a review of research on student self-assessment conducted largely between 2013 and 2018. The purpose of the review is to provide an updated overview of theory and research. The treatment of theory involves articulating a refined definition and operationalization of self-assessment. The review of 76 empirical studies offers a critical perspective on what has been investigated ...

  11. A Useful Strategy for Assessing Class Participation

    One of the changes we have seen in academia in the last 30 years or so is the shift from lecture-based classes to courses that encourage a student-centered approach. Few instructors would quibble with the notion that promoting active participation helps students to think critically and to argue more effectively. However, even the most savvy instructors are still confounded about how to best ...

  12. Class participation and feedback as enablers of student academic …

    Highly connected with students' assessment and class participation is the role played by feedback, which can be defined as "information provided by an agent (e.g., teacher, peer, book, parent, self, [or] experience) regarding aspects of one's performance or understanding" (Hattie & Timperley, 2007, p. 81). Once class participation is graded, this grade should be communicated to students ...

  13. Grading Class Participation

    The course uses a participation rubric to (a) draw students' attention to salient aspects of participation and (b) allow them to identify and evaluate their strengths and weaknesses in participation. A rubric (Table 3 in the source) defines levels of attainment for in-class participation and a critical analysis assignment.

  14. Class participation

    Some of the aims of assessing class participation are to: encourage students to participate in discussion; motivate students to engage with background reading; prompt students to prepare for a learning session; and encourage and reward the development of communication and group skills, such as contributing and interacting.

  15. Class Participation Self-Assessments

    A note from the authors: Many professors want to count class participation toward part of the assigned grade. Determining the amount of participation is always an issue and open to dispute by students. To avoid disputes, faculty can ask students to self-assess their participation, which we then confirm or discuss further. Our experience with this has been very positive.

  16. Fostering and assessing equitable classroom participation

    Student participation factors into many instructional approaches used by Brown faculty, whether through discussion, presentations, or in- and out-of-class writing and problem-solving. Approaches that use student interaction are most likely to enhance student learning in a diverse classroom (Gurin, 2000; Milem, 2000).

  17. Assessing the Assessment: An Evaluation of a Self-assessment of Class …

    The results indicated that joining an intensive class and integrating English into her daily activities had primarily contributed to her language skill improvement. While the physical environment was only slightly conducive to learning English, her academic environment steadily supported her.

  18. Examples of Class Participation

    If participation is part of the assessment, the course syllabus must include the criteria used to evaluate participation. At an interval or intervals determined by the instructor, the grade for participation should be shared with the student. This provides a mechanism of evaluation and documentation of the work that is subsequently linked ...

  19. Peer Evaluation of Class Participation

    Student X took my course Y in semester Z [paper, projects, yada yada] …. In this course, class participation counts for 00% of the grade [varies from 25% to 40%] and is assessed by the other students in a confidential survey. Wallflowers, unprepared students, and air hogs tend to do poorly on this element. X received a CP grade [of G/in the ...

  20. In-Class Participation Self-Evaluation

    Instructions: Reflect on your participation during live class throughout the semester. Based on your reflection, complete the self-evaluation form below, and be sure to provide examples to support your ratings. Due: Monday, August 2nd by 11:59 pm. Ratings range from Excellent and Exceeds Expectations to Meets ...

  21. Rubrics for Class Participation

    Developing a rubric for class participation allows an instructor to be specific about how participation is evaluated, including the relevant dimensions and the meaning of the various performance levels. Participation has been understood as comprising a variety of behaviors, both active and passive.