Oct 20 '15 by Admin

Dr. Menzies has submitted his paper titled “On the Value of Negative Results in Software Analytics” to Empirical Software Engineering. This is joint work with Dr. Ekrem Kocaguneli.

Title: On the Value of Negative Results in Software Analytics

Abstract: When certifying some new technique in software analytics, some ranking procedure is applied to check if the new model is in fact any better than the old. These procedures include t-tests and other more recently adopted operators such as Scott-Knott. We offer here the negative result that at least one supposedly “better” ranking procedure, recently published in IEEE Transactions on Software Engineering, is in fact functionally equivalent to (i.e. gives the same results as) some much simpler and older procedures. This negative result is useful for several reasons. Firstly, functional equivalence can prune research dead-ends before researchers waste scarce resources on tasks with little practical impact. Secondly, by recognizing needless elaborations, negative results like functional equivalence can inform the simplification of the toolkits and syllabi used by practising or student data scientists. Thirdly, each time a new ranking procedure is released into the research community, old results must be revisited. By slowing the release of new procedures, negative results of functional equivalence let us be more confident about old results, for longer. Fourthly, the particular negative result presented in this paper explains two previously inexplicable results; specifically:

(1) prior results documenting conclusion instability;

(2) the strangely similar performance of different evaluation rigs found in previous publications.
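To give a flavor of the kind of ranking procedure the abstract refers to, here is a minimal sketch of a Scott-Knott-style ranker. This is not code from the paper: the `min_gap` threshold is a hypothetical stand-in for the statistical significance test that a real Scott-Knott would apply at each split.

```python
import statistics

def scott_knott(groups, min_gap=0.5):
    """Rank treatments Scott-Knott style (a simplified sketch).

    `groups` is a list of (name, scores) pairs; returns {name: rank}.
    Treatments sort by mean, then are recursively split at the point
    maximizing the between-group sum of squares; treatments that
    cannot be split apart share a rank.
    """
    groups = sorted(groups, key=lambda g: statistics.mean(g[1]))
    ranks = {}

    def rank(subset, start):
        if len(subset) == 1:
            ranks[subset[0][0]] = start
            return start
        # find the split maximizing the between-group sum of squares
        all_vals = [v for _, vs in subset for v in vs]
        mu = statistics.mean(all_vals)
        best_ss, best_i = -1.0, 1
        for i in range(1, len(subset)):
            left = [v for _, vs in subset[:i] for v in vs]
            right = [v for _, vs in subset[i:] for v in vs]
            ss = (len(left) * (statistics.mean(left) - mu) ** 2 +
                  len(right) * (statistics.mean(right) - mu) ** 2)
            if ss > best_ss:
                best_ss, best_i = ss, i
        left = [v for _, vs in subset[:best_i] for v in vs]
        right = [v for _, vs in subset[best_i:] for v in vs]
        # a real Scott-Knott applies a significance test here; this
        # sketch uses a crude mean-gap threshold instead
        if statistics.mean(right) - statistics.mean(left) < min_gap:
            for name, _ in subset:
                ranks[name] = start
            return start
        r = rank(subset[:best_i], start)
        return rank(subset[best_i:], r + 1)

    rank(groups, 1)
    return ranks
```

For example, two treatments with overlapping scores share rank 1 while a clearly worse one gets rank 2; the paper's argument is that, in practice, such elaborate procedures can end up ranking treatments the same way as much simpler, older tests.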