Various static analysis tools have been used in kernel development for many years; some have even been developed specifically within the kernel community.
When the first static analysis tools were introduced, they found relevant kernel bugs that were subsequently fixed. However, repeated runs of these tools on recent kernels produce a large number of false positives compared to the genuinely relevant findings that require attention and fixing.
Making use of these results in the long term therefore requires tracking the false positives. Most efforts to run static analysis tools and track their false positives have been carried out by single individuals in the community. For an individual with a long history of following kernel development and a specific tool in mind, a simple, lightweight, non-distributed solution may be sufficient for tracking false positives.
However, for anyone who would like to get involved in following these static analysis findings, or for a larger open group that continuously assesses findings, more technical and organisational infrastructure is needed.
I would like to discuss whether there is a critical mass for running some static analysis tools and collaboratively maintaining a database of their false positive findings, what technical setup is required to maintain those findings, and what organisational steps should be taken towards establishing such a collaborative effort.
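As a starting point for that discussion, the core of such a shared database could be quite small: a deduplicated table of tool findings plus a reviewed status, so that repeated tool runs only surface entries that still need triage. The following is a minimal sketch under that assumption; the table layout, status values, and function names are all illustrative, not part of any existing kernel-community tool.

```python
# Hypothetical sketch of a shared findings database for static
# analysis results, with triage status so that re-running a tool
# does not resurface findings already classified as false positives.
import sqlite3


def open_db(path=":memory:"):
    """Open (or create) the findings database."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS findings (
            id      INTEGER PRIMARY KEY,
            tool    TEXT    NOT NULL,  -- e.g. "smatch", "coccinelle"
            file    TEXT    NOT NULL,  -- source file the tool flagged
            line    INTEGER NOT NULL,
            message TEXT    NOT NULL,
            -- triage status: 'open', 'false-positive', or 'fixed'
            status  TEXT    NOT NULL DEFAULT 'open',
            UNIQUE (tool, file, line, message)
        )""")
    return db


def record(db, tool, file, line, message):
    """Store a tool finding; identical re-reported findings are ignored."""
    db.execute(
        "INSERT OR IGNORE INTO findings (tool, file, line, message) "
        "VALUES (?, ?, ?, ?)",
        (tool, file, line, message))


def mark_false_positive(db, finding_id):
    """A reviewer classifies a finding as a false positive."""
    db.execute(
        "UPDATE findings SET status = 'false-positive' WHERE id = ?",
        (finding_id,))


def open_findings(db):
    """Return the findings that still need human triage."""
    return db.execute(
        "SELECT tool, file, line, message FROM findings "
        "WHERE status = 'open'").fetchall()
```

In a collaborative setup, `record` would be fed by automated tool runs on each kernel release, while `mark_false_positive` captures the shared human review work; the open questions are then mainly organisational, e.g. who hosts the database and how review decisions are agreed upon.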