Here are just two examples of "Glass Box" tools, so called because the
problem-solving process is transparent, i.e., algorithmic, rational, and essentially a matter
of complex information processing.
Operating under the maxim that
"a picture is worth a thousand words,"
both of these examples present a lot of data in graphical form.
Run Chart of Bug Reports
Pareto Chart of Bug Reports
The run chart on the left facilitated a real-world analysis of bug
reports for a major operating system over a period of 2 1/2 years.
Each data point was simply a count of bug reports received during
a particular month.
The first spike in the chart reflected a backlog of bug reports that
were folded into a new bug report database.
The subsequent spikes corresponded to periodic software releases,
while the general trend declined over time, reflecting a steady
improvement in quality.
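The data preparation behind such a run chart is straightforward: tally bug reports by month and plot the counts in chronological order. A minimal sketch in Python (the function name and sample dates are hypothetical, not taken from the analysis described here):

```python
from collections import Counter
from datetime import date

def monthly_counts(report_dates):
    """Tally bug reports by (year, month) to produce run-chart data points."""
    counts = Counter((d.year, d.month) for d in report_dates)
    # Return points in chronological order. A real run chart would also
    # insert zero-count months so the time axis has no gaps.
    return sorted(counts.items())

# Hypothetical sample: a handful of dated bug reports.
reports = [date(2000, 1, 5), date(2000, 1, 20), date(2000, 2, 3),
           date(2000, 4, 11), date(2000, 4, 12), date(2000, 4, 30)]
print(monthly_counts(reports))
# [((2000, 1), 2), ((2000, 2), 1), ((2000, 4), 3)]
```

Each resulting pair is one data point on the chart; spikes and trends then become visible at a glance.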
This picture illustrates with real data that the initial release
of any software system is likely to be the buggiest.
It can give managers an insight into how the software design
and engineering is improving (or not) over time, why Quality Assurance
personnel tend to get that "burnt out" feeling just before every
release, and why Customer Support tends to get that feeling just after it.
How many times have you heard something like,
"80% of this is caused by 20% of that"?
Though often misapplied (due, e.g., to mere coincidence), this
is the "80-20 Rule" or the "Pareto Principle", named after the
Italian economist Vilfredo Pareto.
On the left is a Pareto Chart of bug reports for the same major
operating system for which the above run chart was prepared.
The columns represent the total count of bug reports for each
subsystem (or "package") in the operating system over the period analyzed.
These columns are ordered from most to least, with the subsystem with
the most bug reports on the left.
The red line represents the cumulative count of the bug reports:
the first data point being the number for the first
subsystem alone, the second point being that number plus the number
for the second subsystem, etc.
The blue horizontal line represents where the cumulative count
(the data point on the red line) is close to 80%.
The vertical blue line represents where that same point lies with
respect to the subsystems.
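The construction just described, sorting subsystems by bug count, accumulating, and finding where the cumulative share crosses 80%, can be sketched in a few lines of Python (the subsystem names and counts below are hypothetical, not the real data from this analysis):

```python
def pareto_cutoff(counts_by_subsystem, threshold=0.80):
    """Sort subsystems by bug count (descending), accumulate the counts,
    and return (number of subsystems, cumulative share) at the first
    point where the cumulative share reaches the threshold."""
    ordered = sorted(counts_by_subsystem.items(),
                     key=lambda kv: kv[1], reverse=True)
    total = sum(c for _, c in ordered)
    cumulative = 0
    for i, (_, c) in enumerate(ordered, start=1):
        cumulative += c
        if cumulative / total >= threshold:
            return i, cumulative / total

# Hypothetical counts for ten subsystems, heavily skewed.
counts = {"kern": 400, "fs": 250, "net": 150, "drv": 80, "libc": 50,
          "sh": 30, "doc": 20, "mail": 10, "games": 7, "misc": 3}
n, share = pareto_cutoff(counts)
print(n, round(share, 3))  # → 3 0.8
```

Here 3 of the 10 subsystems (30%) account for 80% of the 1,000 reports; with real data the split tends to land near the 80-20 ratio described below.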
This analysis revealed that 82% of the bug reports were from
15.2% of all subsystems, but from 20.8% of the subsystems with actual
bug reports, fitting closely to the Pareto Principle that 80%
of the problems arise from 20% of the causes.
Similar results were revealed when the first subsystem was eliminated:
about 82% of the bug reports applied to about 21% of the remaining subsystems.
Such an analysis can help tell planners and managers where to focus
resources, but it doesn't tell them that the quality of any subsystem
(or of the engineers working on it) is any worse than any other's.
Many factors can affect bug report numbers, requiring insight and
careful interpretation.
These factors include:
Subsystem size, i.e., the number of lines of code.
(Correlation diagrams could illustrate the relationship of
source code size to the number of bugs reported.)
Complexity, i.e., how difficult is the subsystem to code?
On how many different platforms must it run?
How unique is the functionality, and hence how limited is prior experience in coding it?
How much new code is required? Can reusable (field tested)
code be utilized?
Profile of the customer base.
Size of the customer base, i.e., the more customers that use
the subsystem, the more the code gets field tested.
Sophistication of the customer base; e.g., software used
by programmers and "power users" is more likely to get reports
about real bugs than reports caused by user error.
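The correlation diagrams mentioned above rest on a simple calculation: the Pearson correlation coefficient between, say, each subsystem's size in lines of code and its bug count. A minimal sketch, using entirely hypothetical data (a strong size-to-bugs relationship is assumed here for illustration, not asserted as a finding):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-subsystem data: lines of code vs. bug reports.
loc  = [120_000, 45_000, 30_000, 8_000, 2_000]
bugs = [400, 150, 90, 30, 5]
r = pearson(loc, bugs)
print(round(r, 3))
```

A value of r near +1 would suggest bug counts largely track code size, which is exactly the kind of factor that must be weighed before blaming any one subsystem.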
We employ both "Glass Box" and "Black Box" tools as required to
unravel design and engineering problems.
If tools like these, and the problem-solving skills they
support, could help your business or organization, call
us at (310) 455 3107.