Cyclomatic Complexity

Cyclomatic Complexity has been added as a new approach to finding methods that possibly contain the failure. It is implemented the way McCabe describes it in his original paper on pages 7 and 8. The cyclomatic complexity number is computed by counting all if, for and while statements of a method. For every if statement that contains a &&, 2 is added to the cyclomatic complexity.
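A minimal sketch of this counting rule could look as follows. The method name and the token-based scan are illustrative assumptions of this sketch; the actual implementation presumably works on the method's AST rather than on its text.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Sketch of the counting rule described above (hypothetical; a plain
    // text scan also matches inside strings and comments, an AST would not).
    static int cyclomaticComplexity(String methodBody) {
        int complexity = 1; // a method without branches has complexity 1
        Matcher m = Pattern.compile("\\b(if|for|while)\\b|&&").matcher(methodBody);
        while (m.find()) {
            // an "if" with one "&&" thus contributes 1 + 1 = 2 in total
            complexity++;
        }
        return complexity;
    }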

Upper bound

The cyclomatic complexity doesn't have a maximum value, but the likelihood evaluator has to return a value between 0 and 1, including 0 and 1. To have a maximum for converting the cyclomatic complexity to this range, complexity values bigger than 10 are capped at 10. This is the upper bound that McCabe proposes in his paper on page 7.
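In code, the conversion is just a capped division (sketch; the method name is illustrative):

    static double complexityLikelihood(int cyclomaticComplexity) {
        // cap at McCabe's upper bound of 10, then scale to [0, 1]
        int capped = Math.min(cyclomaticComplexity, 10);
        return capped / 10.0;
    }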

Case statements

McCabe says in his paper on page 9: “The only situation in which this limit has seemed unreasonable is when a large number of independent cases followed a selection function (a large case statement), which was allowed.” To keep a “reasonable” upper limit, case statements aren't counted.

Enhancements

For a complete implementation of the cyclomatic complexity measurement, case statements should also be taken into account, but then it should be possible to configure the upper limit.

Automated evaluation

How to evaluate? At first my colleagues were simply going to test it. But my professor thought that this isn't enough for an expressive and comparable result. So the idea is to seed errors and to start the evaluation when tests fail, with everything working automatically: the user starts it, and after a month or so the result is there without any further intervention.

Jesting

There is a tool called Jester that changes the source code of programs and runs the tests after every change. If no tests fail, it remembers the position and what was changed. With this procedure it is possible to find source code that isn't well tested. But this isn't exactly what we need: we need failing tests in order to measure how well an evaluator finds the changed method.

EzUnit and Jester

The procedure is as follows. First we get all methods of a project's source folder from Eclipse (JDT). Then we apply one kind of change to every method, one after the other; there are 11 variants of changing the source code, taken from Jester, which I will explain later. Then the test suite is started. If compilation is necessary, the launch waits for it to complete and continues afterwards.

There are three possible outcomes of the test suite launch. JUnit can report an error, which means something not test-specific went wrong. If JUnit returns with a failure, the evaluation is started, because then there are failing tests. The third case is that nothing failed after the change; then there is nothing to evaluate, just as after an error, and we know that the project contains badly tested code. The next step is to take back all the changes that were made. Once all methods have been changed with all variants, a CSV file with all results is written to the project root folder. A rough sketch of this loop is shown below.
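All helper names in this sketch (getMethodsFromSourceFolder, launchTestSuite and so on) are hypothetical and only stand in for the Eclipse JDT and JUnit launch machinery:

    List<Result> results = new ArrayList<Result>();
    for (Method method : getMethodsFromSourceFolder(project)) {
        for (Mutation mutation : JESTER_MUTATIONS) {       // the 11 variants
            if (!mutation.applyTo(method)) continue;       // variant not applicable here
            TestRun run = launchTestSuite(project);        // waits for compilation first
            if (run.isError()) {
                // something not test-specific went wrong: nothing to evaluate
            } else if (run.hasFailures()) {
                results.add(evaluateLikelihoods(method, run)); // the interesting case
            } else {
                // change went unnoticed: badly tested code, nothing to evaluate
            }
            mutation.revert(method);                       // take the change back
        }
    }
    writeCsv(project.getRootFolder(), results);            // one CSV per run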

Changing source code

Jester has 11 variants of changing source code. All of them are implemented by EzUnit. There is a number changer that searches for a single digit and adds 1 to it. This Jester variant can lead to compilation errors because it changes a hexadecimal number representation from 0x… to 1x…; if such a change happens, it is ignored. The other methods replace Java language constructs to reverse their meaning, like “true” to “false”, “if (” to “if (true ||”, “if (” to “if (false &&”, “==” to “!=”, “!=” to “==”, “++” to “--” and “--” to “++”. Such changes can lead to endless loops. To avoid a test that blocks forever, the timeout annotation (@Test(timeout = 60000)) for JUnit 4 should be used. But we also saw endless loops whose cause is unknown for now and where setting the timeout does not help. In these cases the launch itself has a timeout set, so that the run continues with the next change.
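The replacing variants can be sketched as a simple table of string pairs. This is illustrative only: the number changer is omitted because it works digit-wise, and replacing just the first occurrence is an assumption of this sketch.

    static final String[][] REPLACEMENTS = {
        { "true", "false" },
        { "if (", "if (true ||" },
        { "if (", "if (false &&" },
        { "==",   "!=" },
        { "!=",   "==" },
        { "++",   "--" },
        { "--",   "++" },
    };

    static String mutate(String source, int variant) {
        String[] pair = REPLACEMENTS[variant];
        // replace only the first occurrence (assumption of this sketch)
        return source.replaceFirst(java.util.regex.Pattern.quote(pair[0]),
                                   java.util.regex.Matcher.quoteReplacement(pair[1]));
    }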

Runtime

First tests show that an evaluation run can take very long, depending on the number of methods and tests a project has and on the duration of one test run. An evaluation run with commons-codec (191 tests) needs 4 hours on a dual-processor machine with 1.66 GHz; for commons-math (1022 tests) it takes about 4 days.

Framework for calculation and presentation of likelihoods

My first addition to EzUnit is the implementation of a likelihood framework. The needed parts are a view that displays the likelihood that a method contains the failure, an extension point where different likelihood calculation methods can be added, and some methods for calculating a likelihood.
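A plausible shape for such an extension point is a small interface. The names here are invented; the actual EzUnit API may differ:

    // Hypothetical extension point interface: every contributed likelihood
    // calculation method maps a suspected method to a value in [0, 1].
    public interface LikelihoodEvaluator {
        /** @return a likelihood between 0.0 and 1.0, inclusive */
        double likelihood(SuspectedMethod method); // SuspectedMethod is invented too
    }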

Likelihood calculation

I implemented four likelihood calculation methods. The first computes the quotient of the number of failures and the number of all calls of a method. The second calculates the test coverage of a method as the quotient of the number of its calls and the sum of all method calls. It then calculates the failure likelihood and the passed likelihood as the quotients of failures and passes, respectively, over the number of all calls of a method. These three values are combined into the final likelihood as follows: the difference between the failure likelihood and the passed likelihood is built, and this value is multiplied with the test coverage. This method should rate methods higher that are tested more. The third and the fourth are of the same type: they estimate the complexity of a method by its length in characters. One calculates the quotient of a method's length and the sum of the lengths of all methods, the other the quotient of a method's length and the length of the longest method.
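Written out as code, the four calculations look roughly like this. Parameter names are illustrative; note that the difference in the second method can become negative, which the description above does not address.

    // 1) failure ratio of a single method
    static double failureRatio(int failures, int calls) {
        return failures / (double) calls;
    }

    // 2) difference of failure and pass likelihood, weighted by coverage
    static double coverageWeighted(int failures, int passes, int calls, int totalCalls) {
        double coverage = calls / (double) totalCalls;
        double failLik  = failures / (double) calls;
        double passLik  = passes / (double) calls;
        return (failLik - passLik) * coverage;
    }

    // 3) method length relative to the summed length of all methods
    static double relativeLength(int length, int totalLength) {
        return length / (double) totalLength;
    }

    // 4) method length relative to the longest method
    static double relativeToLongest(int length, int maxLength) {
        return length / (double) maxLength;
    }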

Displaying the result

The result is displayed as a tree. First come the methods that possibly contain the failure, and under these methods come the names of the tests from which they were called, differentiated into failed and passed tests. Then an overall likelihood is displayed, and under this overall likelihood come all single likelihoods. The overall likelihood is calculated as the sum of all single likelihoods, where every single likelihood goes into the sum with the same weight.
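The combination is then a plain equally weighted sum (sketch):

    // Overall likelihood as the equally weighted sum of all single
    // likelihoods (disabled evaluators would simply be left out).
    static double overallLikelihood(double[] singleLikelihoods) {
        double sum = 0.0;
        for (double l : singleLikelihoods) {
            sum += l;
        }
        return sum;
    }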

Rating likelihood methods

To adjust the result, you can vote for a likelihood method or disable it. A method should be voted for if it helps to find the failure; the user has to decide when this is the case.