In this presentation, we present our work on Measuring Code Review in the Linux Kernel Development Process.
We investigated the following research questions:
- Does the number of responses increase with the patch developer's experience?
- Do maintainers get fewer or more responses than others when they author a patch?
- Do patch developers who have previously been active in some areas of the kernel get more responses than developers who have been active in other areas?
We also investigated various characteristics of the patches themselves, such as the files they touch, the maintainer sections involved, and the mailing lists they are sent to, asking the following questions:
- Does the number of responses increase or decrease with the number of files a patch proposes to change?
- Does the number of responses increase or decrease with the number of maintainer sections to which the changed files belong?
- Does a patch get more responses if it is submitted to more mailing lists?
- Do some mailing lists or maintainer sections lead to larger numbers of responses than others?
As 7.94% of the response traffic is classified as bot-authored, we also examined where bots are active.
We will present some interesting insights gained in this research, along with the diverse set of variables that shape the review process. This presentation summarizes the results of a master's thesis completed in spring 2021.