Organizational Models of Student Peer Assessment

Andrei Berezhkov, Department of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia, dead0343@gmail.com

Yulia Valitova, Department of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia, julijawal@gmail.com

Nataliia Gorlushkina, Department of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia, nagor.spb@mail.ru

Nail Nasyrov, Department of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia, pasdel@mail.ru

Sergei Ivanov, Department of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia, serg_ie@mail.ru

Elizaveta Kobets, Department of Infocommunicational Technologies, ITMO University, St. Petersburg, Russia, www.kobets@yandex.com

Abstract — The paper considers the existing models of peer assessment of students’ works, conducted both face-to-face and using modern infocommunication technologies. Individual components of peer assessment models and the principles of their application are reviewed. Combinations of model components, and the models themselves, are offered for efficient control of students’ achievements considering the specifics of an educational institution. The complexity of automating the peer assessment process is discussed and recommendations are given.

Keywords: models, peer assessment, peer-review, learning outcomes, automation

© The Authors, published by CULTURAL-EDUCATIONAL CENTER, LLC, 2020

This work is licensed under Attribution-NonCommercial 4.0 International

I. Introduction

Universities are constantly seeking new forms, tools, and methods of learning that increase the efficiency of forming the professional competencies in demand on the labour market. Currently, the educational environment strives for active, well-developed methods and forms of student learning. Not only the knowledge acquired at the university becomes important, but also the ways it is assimilated, the modes of thinking and educational activity, and the development of a student’s cognitive powers and creative potential. The changes are aimed at increasing not only student activity in the educational process, but also student involvement in interpersonal interaction. This allows students to complement each other’s knowledge and to model situations related to their future professional activities.

These changes appear to be among the most important parts of life-long learning, although organizing student interactivity is problematic. This is because many people understand control only as an administrative and management procedure aimed at identifying and assessing the level of students’ knowledge. A student is therefore given the passive role of a performer who completes control tasks and final tests. Such an approach to control and assessment is quite limited. A competent and systematic organization of the assessment procedure allows one not only to assess the level of students’ professional competencies, but also to deepen and expand students’ knowledge, to prevent the formation of erroneous skills, and to adjust the learning process.

One of the modern interactive methods of control and assessment is mutual control of results, or peer review. Involving students in the control of learning outcomes gives them an opportunity to compare their results with those of their peers and to estimate whether their knowledge is sufficient to solve various problems within the class framework. Peer review techniques, in which students validate each other’s work under supervision, are becoming more common in classroom practice, developing cognitive, social, and professional skills [1]. Peer review and collaboration construct new knowledge on the basis of existing knowledge.

Therefore, involving students in peer assessment and analysis forms an understanding of the principles, criteria, and approaches that teachers use when assessing the actions performed in students’ work [38].

II. Peer Assessment Models

Let us consider the existing models of peer assessment and their development.

In [4] a stage model was offered:

• Students perform tasks to test theoretical knowledge and practical skills: tasks testing knowledge of definitions and logic-oriented tasks. At this stage the assessment is carried out by the teacher.

• Students perform practical tasks. The completed tasks are peer-reviewed: students are divided into small groups (3–4 people solving problems of different levels), discuss the completed task, and develop a common algorithm for solving it.

• An optimal solution is offered.

• Outcomes are summed up and revised.

The final grade is calculated by the teacher as a weighted average of the grades for the first and second stages. Notable outcomes were an increase in the share of questions asked by students from 15% to 34%, personal participation of all students in the discussion of the problems posed, and students’ interest in using a similar form of assessment in other sections of the discipline.
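A minimal sketch of this weighted average; the specific stage weights are an assumption, since [4] does not fix them:

def final_grade(stage1_grade: float, stage2_grade: float,
                w1: float = 0.4, w2: float = 0.6) -> float:
    """Weighted average of the grades for the two stages.

    The weights w1 and w2 are illustrative; the teacher chooses
    the actual values depending on the course.
    """
    assert abs(w1 + w2 - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * stage1_grade + w2 * stage2_grade

# Example: theory stage graded 70, practice stage graded 85.
print(final_grade(70, 85))  # 79.0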

Another model is based on the hypothesis that “if a student knows the material well and correctly answers the questions, then they correctly assess the answers of other participants in the test” [5]. The features of this model are as follows:

• students have no visual contact with other test participants and thus do not know exactly whom they assess: a peer, or an answer from the database already assessed by the teacher in previous groups;

• students are limited in time;

• every fourth student is virtual; their works have already been assessed and the grades of these works are not subject to question;

• each student’s grade is assigned a weight relative to other grades, based on their previous review history.

The third model is implemented in the LMS Moodle e-learning platform [6]; its features are as follows:

• a “seminar” section is created to set the tasks and provide instructions for successful task solution;

• an evaluation form that includes instructions and criteria is managed by the teacher and communicated to students;

• students can submit task solutions (answers) only while the session is open;

• at the peer assessment stage, the submitted materials are distributed among the students, who assess each other’s works;

• the assessment report is generated as a table summarizing the results of the session.

The report provides two student ratings: a task assessment and a peer assessment rating (namely, how close the grades the student gave to their peers’ works are to the actual grades). The final grade is calculated from the peer assessments of the student’s work and the teacher’s assessment of the task, using a weighting mechanism.
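A minimal sketch of this two-rating scheme, with an assumed 80/20 weighting (in Moodle the teacher configures the actual weights):

def assessment_rating(given: list[float], actual: list[float],
                      scale: float = 100.0) -> float:
    """Peer assessment rating: how close the grades a student gave
    are to the actual grades of the same works."""
    mean_abs_err = sum(abs(g - a) for g, a in zip(given, actual)) / len(given)
    return max(0.0, scale - mean_abs_err)

def final_grade(task_grade: float, assessment_grade: float,
                task_weight: float = 0.8) -> float:
    """Weighted combination of the task grade and the peer assessment rating."""
    return task_weight * task_grade + (1 - task_weight) * assessment_grade

# A student's work was graded 80; their grades for three peer works
# deviated from the actual ones by 5, 0, and 10 points.
print(assessment_rating([70, 85, 60], [75, 85, 50]))  # 95.0
print(final_grade(80, 95.0))                          # 83.0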

Several improvements to this model were proposed in [7], namely:

A student model reliability concept is added to help the teacher decide whether a student model is trustworthy. To draw the right conclusions about student competence, it is proposed to calculate the student model with the integrated Bayesian network described in [8].

The improved grade report contains additional student data and new visualization functions, which allow monitoring of students’ progress.

Based on [4], the following sequence of peer assessment stages is determined:

• preparatory stage:

  • student motivation for a responsible attitude to peer assessment;

  • formation of grading criteria to check the compliance of the work performed with the established requirements, rules, and samples;

  • distribution of tasks to students;

• main stage:

  • presentation of final tasks for peer review, in class or remotely;

  • peer review and then peer assessment based on the predefined criteria;

  • comparison of the assessment results with the reference ones (here and below, reference results mean the grades assigned by an expert or by automatic assessment);

• final stage:

  • final assessment and reflection.

It is worth noting that each stage has various implementations. For example, the assessment process can be run in a manual, automated, or automatic mode. Thus, there are several assessment methods [1, 2, 4, 20], and some are reviewed here.
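For illustration, the stage sequence above can be modelled as a simple state machine; the transition rules (including a re-review loop when grades diverge from the reference ones) are an assumption, not part of [4]:

from enum import Enum, auto

class Stage(Enum):
    PREPARATION = auto()   # motivation, grading criteria, task distribution
    SUBMISSION = auto()    # final tasks presented in class or remotely
    PEER_REVIEW = auto()   # review and assessment by the predefined criteria
    COMPARISON = auto()    # comparison with the reference results
    FINAL = auto()         # final assessment and reflection

# Allowed transitions between stages (assumed).
TRANSITIONS = {
    Stage.PREPARATION: {Stage.SUBMISSION},
    Stage.SUBMISSION: {Stage.PEER_REVIEW},
    Stage.PEER_REVIEW: {Stage.COMPARISON},
    Stage.COMPARISON: {Stage.PEER_REVIEW, Stage.FINAL},
    Stage.FINAL: set(),
}

def advance(current: Stage, nxt: Stage) -> Stage:
    """Move the process to the next stage, validating the transition."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"cannot go from {current.name} to {nxt.name}")
    return nxt

stage = Stage.PREPARATION
stage = advance(stage, Stage.SUBMISSION)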

In the paper “Accounting for Peer Reviewer Bias with Bayesian Models” by Ilya M. Goldin [9], peer review is defined by the following minimal set of processes:

• students are divided into two groups, authors and critics (in our terminology, auditees and auditors);

• each auditee sends their assignments to auditors;

• each auditor assesses these assignments according to clear criteria.

To assess the assignments, the criteria must be formulated in advance and have single-valued estimates. Therefore, the most objective assessment is obtained with quantitative rather than qualitative criteria. However, as shown in [10], qualitative criteria can also be assessed, not only manually but with a hybrid methodology. This methodology combines several variables derived statistically with data extracted by natural language processing (NLP) methods. A final linear regression model, which includes syntactic, rhetorical, and content features, is used to predict grades. As a result, the teacher’s assessment time is reduced and feedback for the student is formed faster.
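A minimal sketch of such a hybrid pipeline, using scikit-learn’s linear regression and toy hand-crafted features standing in for the syntactic, rhetorical, and content features of [10]:

import numpy as np
from sklearn.linear_model import LinearRegression

def features(text: str) -> list[float]:
    """Toy stand-ins for the syntactic, rhetorical, and content features."""
    words = text.split()
    sentences = max(1, text.count(".") + text.count("!") + text.count("?"))
    return [
        float(len(words)),                     # essay length
        len(set(words)) / max(1, len(words)),  # lexical diversity
        len(words) / sentences,                # mean sentence length
    ]

# Essays already graded by the teacher serve as training data.
train_texts = ["first graded essay text.",
               "second graded essay text, longer and richer.",
               "third one."]
train_grades = [72.0, 88.0, 65.0]

model = LinearRegression()
model.fit(np.array([features(t) for t in train_texts]), train_grades)

# Predict a grade for a new, ungraded essay.
print(model.predict(np.array([features("a new ungraded essay to score.")])))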

Here, an auditor with a personal interest in the assessment, i.e., one whose assessment is not objective and who gives inflated or deflated grades to their auditees, has the “biased” status [9].

At the same time, reliability can be increased by increasing the number of auditors. The simplest assessment model averages the grades of all auditors.

To increase the objectivity of grading, regression-based calculation methods are applied, taking into account the weights of assessment criteria or rules [11].
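A sketch of the two simplest aggregation levels: the plain average over auditors, and a criteria-weighted score (a fixed-weight stand-in for the regression-fitted weights of [11]); the criteria names and weights are invented:

def average_grade(auditor_grades: list[float]) -> float:
    """Simplest model: the mean of all auditors' grades."""
    return sum(auditor_grades) / len(auditor_grades)

def weighted_grade(criterion_scores: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Grade as a weighted sum over assessment criteria."""
    total_w = sum(weights.values())
    return sum(criterion_scores[c] * weights[c] for c in weights) / total_w

print(average_grade([78, 84, 90]))                         # 84.0
print(weighted_grade({"correctness": 90, "style": 60},
                     {"correctness": 0.7, "style": 0.3}))  # 81.0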

An alternative model is presented in [12], where hierarchical Bayesian models of peer review are considered. Here, feedback is provided that allows the system of assessment criteria and tasks to be improved iteratively.

It is worth noting that the assessment process should not be performed “blindly”, since within one group the students know each other well. This becomes an important factor in supporting the educational process of both auditors and auditees during peer assessment. Also, waiving “blind” testing helps to avoid plagiarism [13].

Therefore, a model is offered to assess the assignments in an open format, where the auditors and auditees are familiar with each other. For the auditor, a positive impact on their final grade may become a stimulus to review and assess a large number of works.

The assignments in some subjects, for example programming, lend themselves to almost complete automation of checking and assessment [14]. Consequently, the grades set by the automated system become the reference grades, and the auditors’ grades are then compared against the grades set by the system automatically.
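A minimal sketch of such automated checking, with an invented task and test set; the share of passed tests becomes the reference grade against which the auditors’ grades are compared:

# Hypothetical task: the student submits a function that tests evenness.
def student_solution(n: int) -> bool:   # a submitted (buggy) solution
    return n % 2 == 1

TEST_CASES = [(0, True), (1, False), (2, True), (7, False)]

def reference_grade(solution, tests, scale: float = 100.0) -> float:
    """Share of passed tests, scaled to the grading scale."""
    passed = 0
    for arg, expected in tests:
        try:
            passed += solution(arg) == expected
        except Exception:
            pass  # a crashing test counts as failed
    return scale * passed / len(tests)

print(reference_grade(student_solution, TEST_CASES))  # 0.0: every test fails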

Each method can show high results in one area and be practically unsuitable for others: automated assessment of a piece of music or a picture is almost impossible at the present level of technology. At the same time, automated assessment of software code, from the code style point of view, is used quite often (CheckiO [15], Stepik [16], School 21 [17]).

III. Assessment

Peer assessment systems are used in various educational institutions. Many institutions have their own programs or use ready-made solutions. Drawing on the accumulated experience of peer assessment, the best practices are selected here and presented as a single scenario, or sequence of actions, for peer assessment, analysing specific task features and assessment modes. At the same time, several problems still have no unambiguous solution; they were presented by Goldfinch in 1994 [18]. Goldfinch recommends that students become part of the peer assessment process, namely conduct peer review, and suggests calculating a peer assessment score (PA score), whose maximum possible amount is the product of the group size, the number of criteria, and the maximum grade per criterion. The final grade is then

$\text{Final grade} = (100\% - w\%) \times \text{expert's estimate} + w\% \times \text{PA score}$ (1)

where (100% − w%) is the weight given to the expert’s estimate; w% = 33% is commonly used, though on some courses this value may be 50%.

In the case of a group mark:

$\text{Student's individual estimate} = (\text{PA score})^{0.5} \times \text{group mark}$

According to Goldfinch, “this formula has the advantage that those who do not contribute anything get a zero, and those who contribute almost nothing get much more suitably low estimates than with the previous formula” [18].
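A minimal sketch of both calculations; the normalization of the PA score by its maximum possible amount in the group variant is an assumption, since the original formulas above survive only approximately:

def final_grade(expert_estimate: float, pa_score: float, w: float = 0.33) -> float:
    """Formula (1): weighted combination of the expert's estimate and
    the peer assessment (PA) score; w = 0.33 is common, 0.50 on some courses."""
    return (1 - w) * expert_estimate + w * pa_score

def individual_group_mark(pa_score: float, max_pa_score: float,
                          group_mark: float) -> float:
    """Group-work variant: the normalized PA score scales the group mark,
    so non-contributors (PA score 0) get zero."""
    return (pa_score / max_pa_score) ** 0.5 * group_mark

print(final_grade(80, 90))               # 0.67 * 80 + 0.33 * 90 = 83.3
print(individual_group_mark(0, 60, 75))  # 0.0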

Another notable tool for assessing homework based on crowdsourcing is presented on the CrowdGrader site [19]. In CrowdGrader, “the overall grade that students receive for their homework depends both on the grade they receive for their submission and on their effort and accuracy in reviewing the submissions of their colleagues” [20]. The life cycle of a task in CrowdGrader consists of three stages: a submission stage, a review stage, and a grading stage. Notably, CrowdGrader stimulates students to be accurate in their peer assessment and grading. Each student is assigned a common crowd-grade rating, which unites three ratings:

• a consensus grade for the student’s submitted assignment;

• an accuracy grade measuring the student’s accuracy in peer review and peer assessment;

• a helpfulness grade measuring how useful the student’s feedback was to peers.

More detailed information about the estimation algorithms is presented in “CrowdGrader: Crowdsourcing the Evaluation of Homework Assignments”, Technical Report UCSC-SOE-13-11, by Luca de Alfaro and Michael Shavlovsky [21].
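The exact algorithms are described in [21]; the sketch below only illustrates the idea of uniting the three ratings into one crowd-grade, with invented weights:

def crowd_grade(consensus: float, accuracy: float, helpfulness: float,
                weights: tuple[float, float, float] = (0.75, 0.15, 0.10)) -> float:
    """Combine the three CrowdGrader-style ratings into one grade.

    consensus   - consensus grade for the submitted assignment
    accuracy    - how close the student's reviews were to consensus
    helpfulness - how useful the student's feedback was to peers

    The weights are illustrative; the real algorithm is given in [21].
    """
    wc, wa, wh = weights
    return wc * consensus + wa * accuracy + wh * helpfulness

print(crowd_grade(85, 90, 70))  # 84.25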

PeerStudio [22, 23] is an assessment platform supporting the integration of automated systems to provide quick feedback.

SPARK (Self and Peer Assessment Resource Kit) [24] is a web-based template aimed at improving the learning process through team peer assessment, increasing grade fairness for individual students in a team. Thus, not only the student’s knowledge is assessed, but also their ability to work in a team. Students were asked to define whose contribution to the teamwork was the most considerable, and this result served to further adjust the grades. Despite the temptation to overstate grades or take a “creative approach” to peer assessment, students try to be objective. Objectivity was further increased by introducing students’ self-assessment.

The methods used in Coursera [25], Udacity [26], Lektorium [27], EdX [28], the Massachusetts Institute of Technology [29], and Stanford University [30] feature very low teacher involvement, from the student’s point of view, after the course starts [31].

The University of California [32] system features a method based on “calibrated peer review” (CPR), which allows expert assessment of same-year students’ works, self-assessment of one’s own work, and feedback from all colleagues who reviewed the work (these CPR capabilities are described in [31]). A disadvantage of the CPR method is that if all auditors show a low competency index during peer assessment, the results of the calibration test and the final results must be audited by the teacher, whose competencies are formed and confirmed. Another disadvantage is the situation when students drop out of the task implementation process and the number of student auditors falls below three, so the teacher must audit the students’ works.

In “Tuned models of peer assessment in MOOCs” [33], the PG1-bias, PG1, PG2, and PG3 models accurately describe the effect of peer assessment on students’ results. However, these models only reduce the mean square deviation in results processing.

Observation of students’ peer assessment highlights the importance of the relationship between the grades given by an auditor and the grades previously given by other students.

The best results are shown by the PG3 and PG2 models, while the most accurate is the PG1 model. The “grading the graders” (GG) model, used in massive open online courses and presented in “Grading the graders: motivating peer graders in a MOOC” [34], tends to improve peer assessment quality in the absence of auditor anonymity. To sum up, when students of one course perform peer assessment without anonymity, assessment and grading are considered more transparent and correct, providing a higher degree of trust.

Moodle, as a powerful tool for creating MOOC courses, also provides peer assessment in the “Seminar” module. Due to the simple Moodle architecture, this module is easy to modify. One of the ready-made solutions for Moodle is presented in “Integrating Enhanced Peer Assessment Features in the Moodle Learning Management System” [35].

Using the Moodle Workshop plugin [37], the module is extended with new features:

• student modeling support (task distribution) based on the Bayesian network approach;

• metrics regarding the reliability of the computed models;

• improved visualization and comparison of student models and the results of their sessions;

• an expanded seminar module with student modeling based on the Bayesian network approach.

Unfortunately, within the framework of this article it was impossible to compare and consider all possible algorithms and methods for implementing peer assessment. The authors have tried to consider the most frequently mentioned MOOC systems and highlight their key features. For example, “A Systematic Analysis of Peer Assessment in the MOOC Era and Future Perspectives” [36] considers 17 systems, including: Cloud Teaching Assistant System (CTAS), IT Based Peer Assessment (ITPA), Organic Peer Assessment, EduPCR4, GRAASP Extension, Web-PA, SWoRD (now Peerceptiv), Calibrated Peer Reviews (CPR), Aropä, Web-SPA, Peer Scholar, Study Sync, Peer Grader, and L²P (Lehr und Lern Portal, RWTH Aachen) Peer Reviews.

IV. Algorithm

As a result of the research, a unified model for peer assessment is presented. Peer assessment can be implemented in various forms and by various methods; however, all the considered papers share basic elements, which can be combined to achieve the best result:

  • The form of peer assessment is determined:

• team assessment, when members of one team assess the work of another team;

• peer review (with or without disclosing the identity of the auditee or of the auditors);

• automatic (passing automated tests, using machine learning for assessment);

• one-to-one (when only two students participate in peer assessment);

• expert (comparing the task with a certain reference, or standard).

  • The importance of students’ self- and peer assessment is determined, to increase assessment objectivity.

  • Assessment criteria are defined (see the sketch after this list):

• formalize criteria weights, if necessary;

• highlight criteria interdependence;

• add criteria that allow identifying the objectivity of the assigned grades;

• define the type of criteria: binary or graded.

  • The format of the works assessed is determined: full-time, distance, or mixed.

  • Feedback is obtained from the participants of peer assessment: commenting on works.
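The listed components can be collected into a small configuration structure; all class and field names below are illustrative, not taken from any of the reviewed systems:

from dataclasses import dataclass
from enum import Enum

class Form(Enum):
    TEAM = "team"          # one team assesses another team's work
    PEER_REVIEW = "peer"   # blind or open peer review
    AUTOMATIC = "auto"     # automated tests or machine learning
    ONE_TO_ONE = "1:1"     # exactly two students assess each other
    EXPERT = "expert"      # comparison with a reference standard

@dataclass
class Criterion:
    name: str
    weight: float                    # formalized weight, if needed
    binary: bool                     # True: present/absent; False: graded
    objectivity_check: bool = False  # used to detect biased grading

assessment = {
    "form": Form.PEER_REVIEW,
    "format": "mixed",               # full-time, distance, or mixed
    "self_assessment": True,         # raises assessment objectivity
    "criteria": [
        Criterion("correctness", weight=0.6, binary=True),
        Criterion("style", weight=0.4, binary=False),
    ],
    "feedback": True,                # participants comment on works
}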

The best results are shown by implementing the peer assessment concept of “double-blind peer review”: the uploaded work is considered verified only if it has received grades from at least two other participants. It should be noted that the mathematical model presented below is greatly simplified, since various methods for improving the accuracy and objectivity of peer assessment were presented earlier. Assessment can be carried out either linearly or on a Z-scale, depending on the sphere of application of the method; here, a 100-point scale with evaluative tools and assessment criteria is used.

Thus, three roles are defined for the system user:

• submitter (“student” role);

• verifier/auditor (“expert” role);

• auditee (“student” role).

It should be noted that a user is assigned the “expert” role only after completing and uploading their own task.

Moreover, to prevent the same work from being repeatedly verified by the same user, it is necessary to develop a mechanism of random task distribution based on the audit history.
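A sketch of such a distribution mechanism; the data structure (an audit-history map from user to the set of works they have already checked) is an assumption:

import random

def distribute(work_id: str, candidates: list[str],
               history: dict[str, set[str]], n_auditors: int = 2) -> list[str]:
    """Pick auditors for a work at random, skipping anyone who has already
    audited it according to the audit history (candidates are assumed to
    already exclude the work's author)."""
    fresh = [u for u in candidates if work_id not in history.get(u, set())]
    if len(fresh) < n_auditors:
        raise RuntimeError("not enough auditors who have not yet seen this work")
    chosen = random.sample(fresh, n_auditors)
    for user in chosen:
        history.setdefault(user, set()).add(work_id)
    return chosen

history = {"bob": {"task-7"}}  # bob has already audited task-7
print(distribute("task-7", ["alice", "bob", "carol"], history))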

In a first approximation, the grade (G) of the user’s current work can be expressed as the arithmetic mean of the system’s and the verifiers’ estimates:

$G = \frac{g + g_1 + g_2}{3}$ (2)

where:

$g$ is the automated grade of the system or the teacher;

$g_1$ and $g_2$ are the auditors’ grades.

Figure 1. State diagram.

Hereinafter, the grade of the system or the teacher is referred to as “the system grade”, since the difference lies only in the degree of automation: with an automated verification module implemented, this grade is set automatically, whereas without such a module it is set by the teacher.

The main disadvantage of this approach is its low level of detail and the inability to identify the weakest aspects of the assignment being audited. An obvious solution is assessment criteria. Let a set of criteria be presented as $E = \{E_1, E_2, \ldots, E_N\}$, where each criterion can take two assessment values, “the criterion is present” and “the criterion is absent”, marked as 1 and 0 respectively. Then, for $M$ students and $N$ criteria, introduce the matrix $A$:

$A = \begin{pmatrix} a_{11} & \dots & a_{1N} \\ \vdots & \ddots & \vdots \\ a_{M1} & \dots & a_{MN} \end{pmatrix}, \quad a_{in} \in \{0, 1\}$ (3)

Obviously, three factors can affect the quality score of the learned material:

• the grade assigned by “users”;

• the grade assigned by “the system”;

• the grade assigned by “the expert”.

It is also possible to use the grades that the user has assigned in “the expert” role. Denote by $g_{expert}$ the expert assessment calculated by formula (1). Then, taking into account the difference between the grades assigned by the system (by the teacher) and by the second participant, we adjust the grades:

$\tilde{g}_i(E_n) = g_{expert}(E_n) \cdot \left(1 - \left| g_{system}(E_n) - g_i(E_n) \right|\right)$ (4)

where:

$g_{expert}(E_n)$ is the grade received by the student from the other participants on criterion $E_n$;

$g_{system}(E_n)$ is the grade assigned automatically by the system or by the teacher;

$i$ is the student’s number;

$g_i(E_n)$ is the grade assigned by student $i$, acting as an expert, to other students.

In (4), the assessment of each criterion considers not only how the user with the “student” role completed the task, but also the correctness of their assessment of other students’ tasks. This covers the situation when a user as a “student” earns a point for a certain criterion but at the same time incorrectly estimates the same criterion as an “expert”, which means the user does not fully understand the theoretical part.
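A sketch of formula (4) for binary (0/1) criteria; the function name is illustrative, and the behaviour follows the description above (the point is kept only when the student’s expert judgement matches the system’s):

def adjusted_criterion_grade(g_expert: int, g_system: int, g_i: int) -> int:
    """Formula (4) for binary criteria.

    g_expert - grade the student received from other participants
    g_system - grade set by the system (or the teacher)
    g_i      - grade this student set, as an 'expert', for the same
               criterion in another student's work
    """
    return g_expert * (1 - abs(g_system - g_i))

print(adjusted_criterion_grade(1, 1, 1))  # 1: correct work, correct review
print(adjusted_criterion_grade(1, 1, 0))  # 0: correct work, wrong review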

This approach allows solving the problem of peer review and control, namely increasing the participants’ interest in, and the objectivity of, assessment during peer control: students’ grades depend on the quality of their assessment. The disadvantage of this approach is that the final grade (the grade on completion) can be issued only after all members of the group have completed all tasks, peer review, and assessment.

Determining assessments with a Boolean data type allows even the least prepared users, acting as experts in peer assessment, to assess works quickly and objectively.

In general, the peer assessment process can be represented by the state diagram (Figure 1).

V. Conclusion

The paper presents a definition of peer assessment of students’ works. The main stages of peer assessment are shown, and their advantages and disadvantages are considered. Each individual area requires adapting the peer assessment model, although the general sequence of actions observed in all the examples considered is the same.

The mathematical apparatus of peer assessment is not perfect, though many solutions have been proposed to improve its objectivity and accuracy. While the format of peer assessment is still subject to discussion, the distance format increases the objectivity of peer assessment, whereas full-time (face-to-face) peer assessment ensures strong student feedback.

At the same time, the main problem of peer assessment is still motivating students to perform objective peer review and assessment. The question of “reference” works is also worth considering, as in this case each work must be assessed by the teacher or an expert.

Peer assessment works best for verifying solutions where automated verification and assessment of the submitted work is possible. In this case, the teacher or expert does not need to carry out the assessment, but only to set strict rules, criteria, and limitations for the submitted work. Such verification can be used when studying the basic algorithms of programming languages or databases.

REFERENCES

 [1]   Trautmann, N. M. Interactive learning through web-mediated peer review of student science reports, in Educational Technology Research and Development, 2009, no. 57(5), pp. 685–704, DOI: 10.1007/s11423-007-9077-y

 [2]   Webster, J. and Hackley, P. Teaching Effectiveness in Technology-Mediated Distance Learning, in Academy of Management Journal, 1997, no. 40(6), pp. 1282–1309, DOI: 10.5465/257034

 [3]   Yavorsky, V. V., Bashirov, A. V., Emelina, N. K., Rakhimbekova, A. E., Chvanova, A. O., and Baidikova, N. V. Development of a mixed form of education in the process of improving information and communication support of higher education, in International Journal of Experimental Education, 2017, no. 7, pp. 60–64. Available at: http://www.expeducation.ru/ru/article/view?id=11725 (Accessed: 20.03.2020)

 [4]   Smirnova, O. B., Prikhodko, M. A., and Dolgova, L. V. On the organization of mutual verification in terms of intermediate control in mathematics, in Current Issues of Education and Science, 2019, no. 2, pp. 55–58. Available at: https://www.elibrary.ru/item.asp?id=41226555 (Accessed: 20.03.2020)

 [5]   Gogol, A. A., Tomashevich, S. V., and Krasov, A. V. Network method of mutual verification of students’ knowledge, in Materials of the X All-Russian (with international participation) scientific and practical conference, 2008, pp. 222–226. Available at: https://www.elibrary.ru/item.asp?id=25351061 (Accessed: 20.03.2020)

 [6]   Moodle learning management system. Available at: https://moodle.org/ (Accessed: 20.03.2020)

 [7]   Badea, G., Popescu, E., Sterbini, A., and Temperini, M. Integrating Enhanced Peer Assessment Features in Moodle Learning Management System, in Lecture Notes in Educational Technology, 2019, pp. 135–144, DOI: 10.1007/978-981-13-6908-7_19

 [8]   Badea, G., Popescu, E., Sterbini, A., and Temperini, M. A Service-Oriented Architecture for Student Modeling in Peer Assessment Environments, in Lecture Notes in Computer Science, 2018, pp. 32–37, DOI: 10.1007/978-3-030-03580-8_4

 [9]   Goldin, I. Accounting for peer reviewer bias with Bayesian models, in Proceedings of the Workshop on Intelligent Support for Learning Groups at the 11th International Conference on Intelligent Tutoring Systems, 2012

[10]   Burstein, J., Kukich, K., Wolff, S., Lu, C., Chodorow, M., Braden-Harder, L., and Harris, M. D. Automated Scoring Using a Hybrid Feature Identification Technique, in Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, 1998, vol. 1, pp. 206–210

[11]    Clauser, B. E., Margolis, M. J., Clyman, S. G., and Ross, L. P. Development of Automated Scoring Algorithms for Complex Performance Assessments: A Comparison of Two Approaches, in Journal of Educational Measurement, 1997, no. 34(2), pp. 141–161, DOI: 10.1111/j.1745-3984.1997.tb00511.x

[12]    Goldin, I. M. and Ashley, K. D. Peering Inside Peer Review with Bayesian Models, in Biswas, G., Bull, S., Kay, J., and Mitrović, A. (eds.) Artificial Intelligence in Education, Springer Berlin Heidelberg, Berlin, Heidelberg, 2011, pp. 90–97

[13]    Fresco-Santalla, A. and Hernández-Pérez, T. Current and Evolving Models of Peer Review, in The Serials Librarian, 2014, no. 67(4), pp. 373–398

[14]   Kitaya, H. and Inoue, U. An online automated scoring system for Java programming assignments, in International Journal of Information and Education Technology, 2014, no. 6(4), pp. 275–279

[15]    CheckiO. Available at: https://checkio.org (Accessed: 20.03.2020)

[16]    Stepik. Available at: https://stepik.org/catalog (Accessed: 20.03.2020)

[17]    School 21. Available at: https://21-school.ru/ (Accessed: 20.03.2020)

[18]    Goldfinch, J. Further development in peer assessment of group projects, in Assessment and Evaluation in Higher Education, 1994, no. 19(1), pp. 29–35

[19]    CrowdGrader. Available at: https://www.crowdgrader.org/ (Accessed: 20.03.2020)

[20]    Alfaro, L. de and Shavlovsky, M. CrowdGrader: A tool for crowdsourcing the evaluation of homework assignments, in Proceedings of the 45th ACM Technical Symposium on Computer Science Education, ACM, 2014, pp. 415–420

[21]    Alfaro, L. de and Shavlovsky, M. CrowdGrader: Crowdsourcing the Evaluation of Homework Assignments, Technical Report UCSC-SOE-13-11, August 2013

[22]    PeerStudio. Available at: https://www.peerstudio.org/ (Accessed: 20.03.2020)

[23]   Kulkarni, C., Bernstein, M. S., and Klemmer, S. PeerStudio: rapid peer feedback emphasizes revision and improves performance, in Proceedings of the Second ACM Conference on Learning @ Scale, 2015, pp. 75–84

[24]    Freeman, M. and McKenzie, J. SPARK, a confidential web-based template for self and peer assessment of student teamwork: benefits of evaluating across different subjects, in British Journal of Educational Technology, 2002, no. 33(5), pp. 551–569

[25]    Coursera. Available at: https://www.coursera.org (Accessed: 20.03.2020)

[26]    Udacity. Available at: https://www.udacity.com (Accessed: 20.03.2020)

[27]    Lektorium. Available at: https://www.lektorium.tv (Accessed: 20.03.2020)

[28]    EdX. Available at: https://www.edx.org (Accessed: 20.03.2020)

[29]    Massachusetts Institute of Technology. Available at: https://web.mit.edu (Accessed: 20.03.2020)

[30]   Stanford Online. Available at: https://online.stanford.edu (Accessed: 20.03.2020)

[31]    Balfour, S. Assessing writing in MOOCs: Automated essay scoring and Calibrated Peer Review, in Research and Practice in Assessment, 2013, vol. 8 (Summer), pp. 40–48

[32]    UCLA. Available at: http://www.ucla.edu (Accessed: 20.03.2020)

[33]    Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., and Koller, D. Tuned models of peer assessment in MOOCs, 2013, Jul. [Online]. Available at: http://arxiv.org/abs/1307.2579

[34]    Lu, Y., Warren, J., Jermaine, C., Chaudhuri, S., and Rixner, S. Grading the Graders: Motivating Peer Graders in a MOOC, in Proceedings of the 24th International Conference on World Wide Web (WWW’15), 2015, pp. 680–690

[35]    Badea, G., Popescu, E., Sterbini, A., and Temperini, M. Integrating Enhanced Peer Assessment Features in Moodle Learning Management System, in Lecture Notes in Educational Technology, 2019, pp. 135–144

[36]    Wahid, U., Chatti, M. A., and Schroeder, U. A systematic analysis of peer assessment in the MOOC era and future perspectives, presented at eLmL 2016, The Eighth International Conference on Mobile, Hybrid, and On-line Learning, 24 April 2016

[37]    Moodle Workshop. Available at: https://docs.moodle.org/38/en/Using_Workshop (Accessed: 20.03.2020)

[38]   Nasyrov, N., Gorlushkina, N., and Uzharinskiy, A. Using the Subtask Methodology in Student Training for Demonstration Examination in “Web Design and Development” Skill, in Alexandrov, D., Boukhanovsky, A., Chugunov, A., Kabanov, Y., Koltsova, O., and Musabirov, I. (eds.) Digital Transformation and Global Society. DTGS 2019. Communications in Computer and Information Science, 2019, vol. 1038, Springer, Cham, DOI: 10.1007/978-3-030-37858-5_48