The lessons? This picture illustrates with real data that the initial release of any software system is likely to be the buggiest. It can give managers insight into how the software design and engineering are improving (or not) over time, why Quality Assurance personnel tend to get that "burnt out" feeling just before every release, and why Customer Support tends to get the same feeling just after every release.
On the left is a Pareto Chart of bug reports for the same major operating system for which the above run chart was prepared. The columns represent the total count of bug reports for each subsystem (or "package") in the operating system over a period of time. The columns are ordered from most to least, with the subsystem having the most bug reports on the left. The red line represents the cumulative count of the bug reports: the first data point is the count for the first subsystem alone, the second point is that count plus the count for the second subsystem, and so on. The horizontal blue line marks where the cumulative count (the data point on the red line) is close to 80%; the vertical blue line marks where that same point falls among the subsystems.
This analysis revealed that 82% of the bug reports were from 15.2% of all subsystems, but from 20.8% of the subsystems with actual bug reports, fitting closely to the Pareto Principle that 80% of the problems arise from 20% of the causes. Similar results were revealed when the first subsystem was eliminated: about 82% of the bug reports applied to about 21% of all the remaining subsystems.
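The cutoff described above can be computed directly from the per-subsystem counts. The sketch below uses invented example counts (not the article's data) to show how one might find the smallest set of top subsystems whose bug reports reach 80% of the total:

```python
def pareto_cutoff(counts, threshold=0.80):
    """Return (k, share): the smallest number k of top subsystems whose
    combined bug reports reach `threshold` of the total, and the share
    of all bug reports those k subsystems actually cover."""
    counts = sorted(counts, reverse=True)  # most to least, as in the chart
    total = sum(counts)
    cumulative = 0
    for k, n in enumerate(counts, start=1):
        cumulative += n  # running total: the red line in the Pareto chart
        if cumulative / total >= threshold:
            return k, cumulative / total
    return len(counts), 1.0

# Hypothetical bug-report counts per subsystem (illustrative only)
counts = [500, 300, 60, 40, 30, 25, 20, 15, 5, 5]
k, share = pareto_cutoff(counts)
print(f"{k} of {len(counts)} subsystems ({k / len(counts):.0%}) "
      f"account for {share:.0%} of bug reports")
# → 2 of 10 subsystems (20%) account for 80% of bug reports
```

With real data the cutoff rarely lands exactly on 80%, which is why the analysis above reports 82% of reports from roughly 15–21% of subsystems rather than a clean 80/20 split.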
Such an analysis can help tell planners and managers where to focus resources, but it doesn't tell them that the quality of any subsystem (or of the engineers working on it) is any worse than any other. Many factors can affect bug report numbers, so interpreting them requires insight and analytical skill. These factors include: