
Software Development Peer Reviews: Five Lessons Learned From the Experience


A discussion of the peer review process, how it should be run, what benefits it offers the dev team, and how you can integrate peer reviews into Scrum.
In this article, I want to present five lessons I've learned from applying peer reviews in software-engineering-related activities, in both software development and teaching environments.
I've had the opportunity to apply a peer review process not only in industrial software development settings, but also in academic settings: in undergraduate courses I taught at the Departamento de Informática of the Universidad Técnica Federico Santa María (Chile), in training courses for several enterprises, and in R&D software projects.
Although a complete discussion about peer reviews is out of the scope of this article, we need to remember that a peer reviewing process deals with three main objects:
A reviewer: the person who acts as observer and commenter.
A producer: the person who gets his or her work reviewed by a peer.
A work product: the artifact being reviewed (e.g., source code, a use case, a diagram).
Some of these objects have already appeared in other previously defined process areas or subprocesses. For instance, we have defined a subprocess called "Peer reviews of work products" (see http://ieeexplore.ieee.org/document/5750494/), but what I'm doing in this article is giving you specific, concrete recommendations on how to use these artifacts and how to improve these processes.
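As a minimal sketch (the names below are mine, for illustration only, and not part of the cited process), these three objects can be modeled as a simple record:

```python
from dataclasses import dataclass

# Illustrative names only; adapt to your own process vocabulary.
@dataclass
class PeerReview:
    reviewer: str      # the person (or team) observing and commenting
    producer: str      # the person whose work is being reviewed
    work_product: str  # the artifact under review, e.g., "billing.py" or a use case document

review = PeerReview(reviewer="Ana", producer="Luis", work_product="billing.py")
```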
Well, with all that said, let’s start reviewing the lessons.
No matter how experienced your developers are, a checklist will always be welcome. This checklist can be considered an organizational asset and, as such, it does not die at the end of a project; it keeps evolving as time goes on. But, of course, there is always a zero hour at which the checklist gets its initial definition.
By definition, a checklist is a list of items to be checked or observed. I also recommend grouping the items to be checked by software-development-related areas. For example, let's enumerate two groups of items (note that I'm not imposing a limit on either the groups or the items; this is only an example, and a short illustrative code fragment follows the list):
Items for use cases:
Are actors identified?
Is there at least one actor (the primary one)?
Are alternative courses of action correctly identified and linked to the main course of action?
Is the use case free of technical elements (e.g., windows, sizes)?
Items for software code:
Are variable names "searchable"?
Do method names indicate action (verb + noun)?
Are objects/classes named with nouns?
Do comments explain the rationale behind important parts of the software?
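To make the code-related items more concrete, here is a small illustrative fragment (all names are invented) that a reviewer could tick off against the checklist above:

```python
# Illustrative only: a fragment that would pass the code-related checklist items.

class InvoiceRepository:                         # class named with a noun
    """Stores and retrieves invoices by their identifier."""

    def __init__(self):
        self._invoices_by_id = {}                # searchable, descriptive variable name

    def add_invoice(self, invoice_id, amount):   # method name indicates action (verb + noun)
        # Rationale comment: invoices are keyed by id because lookups by id
        # dominate our access pattern (the "why", not the "what").
        self._invoices_by_id[invoice_id] = amount

    def find_invoice(self, invoice_id):          # verb + noun, the action is clear
        return self._invoices_by_id.get(invoice_id)
```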
Developers will always welcome a report template. Depending on the specific context of your organization, a report template can be viewed either as a project asset or as an organizational asset. Whatever the case, your template should give the reviewer a brief indication of the concepts, scales, and possible values that will be used for evaluation purposes. For example, in your report template, you should tell the reviewers what a "1" means, what a "5" means, and what a "pass" or a "fail" means.
Here I present a three-part report template I’ve been using for several years.
The first part is the identification of the report. If applicable, it should include a reference to the repository in which the files are located.
Reviewer identification (perhaps a team).
The project involved (or product, if applicable).
Repository URL (if applicable).
Start date, end date (of the revision).
General comments (if applicable).
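As a hedged sketch of this first part (the field names are my own, not a prescribed standard), the identification block could be captured as a small structure:

```python
from dataclasses import dataclass
from typing import Optional

# Field names are illustrative; adapt them to your own template.
@dataclass
class ReportIdentification:
    reviewer: str                         # a person or a team of reviewers
    project: str                          # or product, if applicable
    repository_url: Optional[str] = None  # if applicable
    start_date: str = ""                  # start of the revision
    end_date: str = ""                    # end of the revision
    general_comments: str = ""            # if applicable
```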
For an example, let’s look at the following table:
Please note that the reviewer field in the table is not always required. But if you work with "teams of reviewers," then you must identify the team in the first part and the particular reviewer in the second part (i.e., the table).
It is up to you and your organization to define what low, medium, and high mean in the severity field, but it is good practice to mark all issues considered 'severe' for mandatory and urgent correction. If you don't like using L, M, or H, you are free to use a Likert-like scale or colors. In the case of colors, be careful not to use similar tones: it is preferable to use red, yellow, and green rather than three tones of yellow.
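To illustrate that convention, here is a small sketch of a single issue entry; the column names and the three-level scale are assumptions for the example, not a fixed format:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = "L"
    MEDIUM = "M"
    HIGH = "H"   # by the convention above, high-severity issues are mandatory to fix urgently

# Column names are guesses for illustration; your table may differ.
@dataclass
class ReviewIssue:
    reviewer: str        # the particular reviewer, when a team is named in the first part
    location: str        # e.g., a file and line, or a diagram element
    description: str
    severity: Severity
    mandatory_fix: bool = False

issue = ReviewIssue("Ana", "billing.py:120", "Variable name 'x2' is not searchable",
                    Severity.HIGH, mandatory_fix=True)
```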
Although I presented one table in the example, I always prefer to have two tables: one for software-code-related issues and one for all the other issues (requirements, sequence diagrams, use cases, etc.).
The explanation part shows the reviewer how to fill in the report template and what the important fields mean. In this part, you should describe what low severity means and how it compares to a high-severity issue or problem. In the example, I advised treating all high-severity issues as mandatory for urgent correction; if you or your organization adopt that advice, it should be described here.
Note that this part should be present in the report template, but deleted from the final report (e.g., when you generate the .pdf file) .
Whether you're using peer reviews for academic or industrial purposes, you must set a goal. Try to set clear, specific goals, not generic ones. In my experience, generic goals such as "improve quality" are hard to account for.
Some examples of goals that are too generic:
"To teach a new quality assurance technique."
"To reduce the number of defects in the products."
"To improve the overall quality of deployed software."
I strongly recommend you set goals that are measurable. In my experience, it is not important in this phase to set metrics and thresholds for them, but it is really important that you can tell, for instance, that you’re reducing the number of defects in the products. I also strongly recommend always taking into account your organization’s strategic goals. Alignment with these goals ensures you and your development team are not wasting resources measuring things that won’t contribute to your organization.
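As a minimal, invented example of what "measurable" can look like at this stage, even a trivial comparison of defect counts across releases is enough to tell whether the number of defects is going down:

```python
# Invented numbers for illustration: defects found in each release after adopting peer reviews.
defects_per_release = {"release_1": 42, "release_2": 35, "release_3": 27}

counts = list(defects_per_release.values())
defects_are_decreasing = all(later < earlier for earlier, later in zip(counts, counts[1:]))

print("Defect counts per release:", counts)
print("Are we reducing the number of defects?", defects_are_decreasing)
```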
Let's look at a concrete example drawn from a real-world experience of mine. Peer reviews are time-consuming; no one doubts that. When you're applying peer reviews in an academic setting, you must be especially careful that the time you demand from your students stays aligned with the flow of the course. Requiring your students to apply peer reviews at each milestone (supposing your course is organized around milestones) is a daunting task for them. One solution to this problem is to have your students apply peer reviews in a random fashion, with the constraint that no group has to use peer reviews more than once. But be careful: this proposed solution is only suitable if your goal is to "teach peer reviewing as a quality assurance technique."
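Here is a minimal sketch of that random assignment, under one possible interpretation (each milestone is reviewed by a different, randomly drawn group, so no group applies peer reviews more than once); the group and milestone names are invented:

```python
import random

# Invented course data for illustration.
groups = ["group_a", "group_b", "group_c", "group_d", "group_e"]
milestones = ["requirements", "design", "implementation", "testing"]

# random.sample draws without replacement, so the chosen groups are all distinct
# and no group reviews more than once. (Assumes at least as many groups as milestones.)
assignment = dict(zip(milestones, random.sample(groups, k=len(milestones))))

for milestone, group in assignment.items():
    print(f"{milestone}: peer review performed by {group}")
```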
