Authentic assessment design for computer programming master’s courses

7 March 2025
A four-step plan for more meaningful assessment that incorporates AI-assisted evaluation, group discussions and presentations

The “one and done” nature of many coursework assessments leaves little opportunity for students to reflect on their work, and the feedback the teacher provides is not particularly meaningful because students cannot use it to improve their project outcome. This is especially problematic in advanced computer programming project assessments, which should be closely aligned with professional software development practice. In industry, software is treated as a “living” thing: it is continuously evaluated, modified and improved, which makes the “one and done” approach particularly inauthentic in this context.

Specifically, a typical coursework project unfolds in the following sequence:

  1. The teacher releases the task specification to students a few weeks before the submission deadline
  2. The students work on the problem and submit their solution
  3. The teacher takes one or two weeks to grade the students’ work
  4. The teacher releases grades and (possibly) feedback to students.

We introduced a two-phase group software development project in one of the modules of our master’s-level computer programming course. In Phase 1, students, working in small groups, submit an initial software system (v1) based on a task specification. While Phase 1 follows the typical assignment structure, Phase 2 is dynamic and iterative: students evaluate and improve their v1 system through the following process:

Teacher feedback: the module leader provides feedback to groups on their v1, discussing issues with their system and suggesting improvements. This mimics the supervisor feedback process in the workplace. 

Individual AI-assisted evaluation: students, assisted by AI tools, critically evaluate their individual contribution to the v1 system. They discuss the benefits and drawbacks of their design and explore alternative solutions. This mimics the industry best practice of using AI to evaluate software. 

Group discussion and resubmission: students return to their groups to decide which of the changes proposed in the previous step they will implement, based on how well modifications to one part of the system would integrate with the rest. The refined v2 system is then submitted. This mirrors the common industry practice of software refactoring – continuously refining and improving code.

Group presentations: students demonstrate their modified v2 system in group presentations, including a short “client pitch” highlighting their system’s main strengths and selling points. This mimics project presentations in an industrial setting.

What are the benefits?

The main benefits of our assessment design are:

  • Authentic assessment experience for students: the two-phase project aligns closely with current industry practices, helping to make our graduates “industry-ready” and improving their employability prospects.
  • Development of soft and transferable skills: in addition to improving students’ ability to create robust, maintainable software, our project develops their critical thinking and evaluative judgement skills.
  • Improved feedback mechanisms: by embedding feedback into the assessment process, rather than treating it as an afterthought, we shift the focus from assessment of learning to assessment for learning.

Tips for implementing this kind of assessment

This specific assessment design is tailored to advanced computer programming courses, but we believe that educators from any discipline can draw lessons from our approach. The concept of a two-phase coursework project (whether individual or group work) can be applied to any area of learning. Here are some questions to guide fellow educators in assessment design.

  1. Industry-aligned assessment: what are the current best practices in industry (or more broadly, the “real world”) that relate to my module? How can I ensure that my assessment aligns with these practices?
  2. Soft skills: beyond subject-specific knowledge and abilities, what soft or transferable skills would I want my students to develop? Which skills will serve them best in their future study and career? How can I design my assessment to promote the development of such skills?
  3. Useful feedback: rather than merely justifying a student’s grade, how can I ensure feedback is genuinely useful to students? How can I make feedback a more integral part of the assessment process?

We encourage fellow educators to think of assessment not just as a process through which students receive grades. Instead, it should be a cornerstone of their learning experience, helping develop the knowledge and skills necessary for them to succeed in further study and future careers.

Thomas Selig is associate professor in the department of computing at Xi’an Jiaotong-Liverpool University’s School of Advanced Technology; Ling Wang is an educational developer in the Academy of Future Education’s Educational Development Unit at Xi’an Jiaotong-Liverpool University.


