News

Jan14 '17: Journal Paper accepted:

Accepted Paper

Dr. Menzies' paper titled "TMAP: Discovering Relevant API Methods through Text Mining of API Documentation" has been accepted for publication in the Journal of Software (Special Issue, SCAM 2015). The author version can also be found here.

Jan13 '17: How to read less:

Technical Report

Zhe Yu's paper titled "How to Read Less: Better Machine Assisted Reading Methods for Systematic Literature Reviews" has been released as a technical report.

Jan12 '17: Impacts of Bad ESP:

Technical Report

George Mathew's paper titled "Impacts of Bad ESP (Early Size Predictions) on Software Effort Estimation" has been released as a technical report.

Nov14 '16: Paper accepted at EMSE:

Accepted Paper

Dr. Menzies' paper titled "Are Delayed Issues Harder to Resolve? Revisiting Cost-to-Fix of Defects throughout the Lifecycle" has been accepted for publication at EMSE. Author version can also be found here.

Nov13 '16: Dr. Menzies is a keynote speaker at SWAN-2016:

Keynote

Dr. Menzies will be the keynote speaker at the 2nd International Workshop on Software Analytics (SWAN 2016). His talk is titled "More or Less: seeking simpler software analytics". The slides of the talk can be found here.

Oct11 '16: Dr. Menzies to serve as co-chair at SSBSE-2017:

Flyer

Dr. Menzies will serve as co-program chair for the Symposium on Search-Based Software Engineering (SSBSE) 2017. See the flyer for SSBSE'17.

Oct10 '16: Among the top-3 papers at SSBSE-2016:

Top-3 Papers

Vivek, Dr. Menzies, and Jianfeng's paper titled "An (Accidental) Exploration of Alternatives to Evolutionary Algorithms for SBSE" was judged to be among the top 3 of 48 submissions at the Symposium on Search-Based Software Engineering (SSBSE). Slides can be viewed here.

Oct9 '16: Dr. Menzies presents at SSBSE-2016:

Talk

Dr. Menzies presented his paper titled "An (Accidental) Exploration of Alternatives to Evolutionary Algorithms for SBSE" at the Symposium on Search-Based Software Engineering (SSBSE). Slides can be viewed here.

Sep29 '16: Paper accepted at ESE:

Accepted Paper

George and Dr. Menzies' paper titled "Negative Results for Software Effort Estimation" has been accepted for publication at ESE.

Sep20 '16: Dr. Menzies is a Guest Speaker:

Guest Speaker

Dr. Menzies has been invited to speak at the "Big Software on the Run" winter school in the Netherlands on October 27, 2016.

Sep12 '16: Rahul Krishna submits his paper to IST:

Paper Submitted

Rahul submitted his paper titled 'Recommendations for Intelligent Code Reorganization' to the Journal of Information and Software Technology.

Sep8 '16: Wei Fu submits his paper to IST:

Paper Submitted

Wei Fu submitted his paper titled 'Why is Differential Evolution Better than Grid Search for Tuning Defect Predictors?' to the Journal of Information and Software Technology.

Sep5 '16: Rahul Krishna presents at ASE-2016:

Talk

Rahul Krishna presented his paper titled "Too much automation? the bellwether effect and its implications for transfer learning" at the International Conference on Automated Software Engineering (ASE 2016). Slides can be viewed here.

Aug28 '16: Three Papers submitted to ICSE'17:

Paper submitted

The submission deadline for the International Conference on Software Engineering (ICSE) 2017 was Aug 28, 2016. This year we submitted three very interesting papers, by Amrit, George, and Dr. Menzies.

The papers are:

Now we wait with our fingers crossed!

Aug27 '16: Jianfeng Chen submits his paper to TSE:

Paper Submitted

Jianfeng Chen submitted his paper titled 'Is “Sampling” better than “Evolution” for Search-based Software Engineering?' to IEEE Transactions on Software Engineering.

Aug27 '16: Reading Party for ICSE'17:

Reading Party

A reading party to critique the work of Amrit, George, and others. Great papers, good food, and lots of caffeine.

Aug18 '16: Foundation of Software Science:

New Course

Dr. Menzies is teaching a new course, "Foundation of Software Science". This subject will explore methods for designing data collection experiments; collecting that data; exploring that data; and then presenting that data in such a way as to support business-level decision making for software projects.

Aug15 '16: Funding from LexisNexis:

Funding

Thanks to LexisNexis for sponsoring our BigSE work with a grant (total award: $60K).

Aug10 '16: Funding from NSA:

Funding

Thanks to the NSA for sponsoring our privatized data sharing research ("Privatized data sharing: Practical? Useful?") with a grant (total award: $85K).

Jul18 '16: Rahul Krishna's paper accepted to ASE:

Accepted Paper

Rahul Krishna's paper titled "Too Much Automation? The Bellwether Effect and Its Implications for Transfer Learning" has been accepted to the 31st IEEE/ACM International Conference on Automated Software Engineering (ASE 2016). This was joint work with Dr. Lucas Layman of the Fraunhofer Center for Experimental Software Engineering. Here is a link to his paper.

Jun29 '16: REU Camp:

Welcome to REU Students

RAISE hosted two undergraduate students (Abdulrahim Sheikhnureldin and Matthew J. Martin) over summer '16. They worked on projects titled 'The Effect of Code Dependencies on Software Project Quality' and 'Enhanced Issue Prediction Using Contextual Features', respectively.

Jun10 '16: Vivek Nair's paper accepted to SSBSE:

Accepted Paper

Vivek Nair's paper titled "An (Accidental) Exploration of Alternatives to Evolutionary Algorithms for SBSE" has been accepted to the Symposium on Search-Based Software Engineering (SSBSE) 2016.

Jun1 '16: Congrats to RAISE Members:

Summer Internship

Congrats to 5 members of RAISE for securing internships at LexisNexis and ABB.

May5 '16: Rahul Krishna's paper accepted to BIG DSE:

Accepted Paper

Rahul Krishna's paper titled "The “BigSE” Project: Lessons Learned from Validating Industrial Text Mining" has been accepted to the Big Data Software Engineering (BIGDSE) Workshop 2016. This was joint work with Manuel Dominguez and David Wolf of LexisNexis, Raleigh. Here is a link to his paper.

Apr29 '16: Wei Fu's paper accepted to IST journal:

Accepted Paper

Wei Fu's paper titled "Tuning for software analytics: Is it really necessary?" has been accepted to the Journal of Information and Software Technology. This was joint work with Dr. Xipeng Shen. Here is a link to his paper.

Feb1 '16: The BigSE Project:

Submission-BIGDSE

Mr. Krishna submits his paper titled "The “BigSE” Project: Lessons Learned from Validating Industrial Text Mining" to BIGDSE. This is joint work with Mr. Yu, Mr. Agarwal, Dr. Menzies, Mr. Manuel Dominguez and Mr. David Wolf.

Title: The BigSE Project: Lessons Learned from Validating Industrial Text Mining

Abstract:

As businesses become increasingly reliant on big data analytics, it becomes increasingly important to test the choices made within the data miners. This paper reports lessons learned from the BigSE Lab, an industrial/university collaboration that augments industrial activity with low-cost testing of data miners (by graduate students). BigSE is an experiment in academic/industrial collaboration. Funded by a gift from LexisNexis, BigSE has no specific deliverables. Rather, it is fueled by a research question: “what can industry and academia learn from each other?”. Based on open source data and tools, the output of this work is (a) more exposure by commercial engineers to state-of-the-art methods and (b) more exposure by students to industrial text mining methods (plus research papers that comment on how to improve those methods). The results so far are encouraging. Students at BigSE Lab have found numerous “standard” choices for text mining that could be replaced by simpler and less resource intensive methods. Further, that work also found additional text mining choices that could significantly improve the performance of industrial data miners.

Nov24 '15: Dr. Menzies' talk at the CREST Open Workshop:

Talk

Dr. Menzies is one of the speakers at The 44th CREST Open Workshop - Predictive Modelling for Software Engineering. The talk is titled "Predicting What Follows Predictive Modeling". Slides can be viewed here.

Oct20 '15: Relax! Most stats yield the same results:

Submission-EMSE

Dr. Menzies submits his paper titled "On the Value of Negative Results in Software Analytics" to Empirical Software Engineering. This is joint work with Dr. Ekrem Kocaguneli.

Title: On the Value of Negative Results in Software Analytics

Abstract: When certifying some new technique in software analytics, some ranking procedure is applied to check if the new model is in fact any better than the old. These procedures include t-tests and other more recently adopted operators such as Scott-Knott. We offer here the negative result that at least one supposedly “better” ranking procedure, recently published in IEEE Transactions on Software Engineering, is in fact functionally equivalent (i.e. gives the same result) to some much simpler and older procedures. This negative result is useful for several reasons. Firstly, functional equivalence can prune research dead-ends before researchers waste scarce resources on tasks with little practical impact. Secondly, by recognizing needless elaborations, negative results like functional equivalence can inform the simplification of the toolkits and syllabi used by practising or student data scientists. Thirdly, each time a new ranking procedure is released into the research community, old results must be revisited. By slowing the release of new procedures, negative results of functional equivalence let us be more confident about old results, for longer. Fourthly, the particular negative result presented in this paper explains two previously inexplicable results; specifically:

(1) prior results documenting conclusion instability;

(2) the strangely similar performance of different evaluation rigs found in previous publications.

Oct20 '15: Older methods just as good or better than anything else:

Submission-EMSE

Dr. Menzies submits his paper titled "Negative Results for Software Effort Estimation" to Empirical Software Engineering. This is joint work with Dr. Ye Yang, Mr. George Mathew, Dr. Barry Boehm and Dr. Jairus Hihn.

Title: Negative Results for Software Effort Estimation

Abstract:

Context: More than half the literature on software effort estimation (SEE) focuses on comparisons of new estimation methods. Surprisingly, there are no studies comparing the latest state-of-the-art methods with decades-old approaches.

Objective: To check if new SEE methods generated better estimates than older methods.

Method: Firstly, collect effort estimation methods ranging from “classical” COCOMO (parametric estimation over a pre-determined set of attributes) to “modern” (reasoning via analogy using spectral-based clustering plus instance and feature selection). Secondly, catalog the list of objections that led to the development of post-COCOMO estimation methods. Thirdly, characterize each of those objections as a comparison between newer and older estimation methods. Fourthly, using four COCOMO-style data sets (from 1991, 2000, 2005, 2010), run those comparison experiments. Fifthly, compare the performance of the different estimators using a Scott-Knott procedure with (i) the A12 effect size to rule out “small” differences and (ii) a 99% confidence bootstrap procedure to check for statistically different groupings of treatments. Sixthly, repeat the above for some non-COCOMO data sets.

Results: For the non-COCOMO data sets, our newer estimation methods performed better than older methods. However, the major negative result of this paper is that for the COCOMO data sets, nothing we used did any better than Boehm’s original procedure.

Conclusions: In some projects, it is not possible to collect effort data in the COCOMO format recommended by Boehm. For those projects, we recommend using newer effort estimation methods. However, when COCOMO-style attributes are available, we strongly recommend using that data since the experiments of this paper show that, at least for effort estimation, how data is collected is more important than what learner is applied to that data.

Oct6 '15: Dr. Menzies' talk at the University of Notre Dame:

Talk

Dr. Menzies is to talk to the computer science students at the University of Notre Dame. The talk is titled "The Future and Promise of Software Engineering Research". See the posting for the talk. Slides can be viewed here.

Sep29 '15: Dr. Menzies delivers a talk at HPCC summit 2015:

Dr. Menzies was recognized for his outstanding contribution

Dr. Menzies delivered a talk titled "Big Data: the weakest link" at the HPCC Summit 2015. He was also part of a panel discussion on "Grooming Data Scientists for Today and for Tomorrow".

Congratulations to Dr. Menzies for winning an award for his outstanding contribution to the HPCC community.

Dr. Menzies says "I want a scientist. I want someone who actually doubts their own conclusions vigorously."

Sep27 '15: Welcome Dr. Dam:

Welcome to Dr. Dam

We are very happy to host fellow researcher Dr. Dam from down under (Australia). Dr. Hoa Khanh Dam is a Senior Lecturer at the School of Computing and Information Technology, University of Wollongong, Australia. The lab is excited to learn from his experience with requirements engineering and effort estimation in agile settings.

Aug28 '15: Mr. Rahul Krishna submits his paper to ICSE'16:

How to Learn Useful Changes to Software Projects

Mr. Rahul Krishna submits his paper titled "How to Learn Useful Changes to Software Projects (to Reduce Runtimes and Software Defects)" to ICSE 2016. This is joint work with Dr. Xipeng Shen, Andrian Marcus, Naveen Lekkalapudi and Lucas Layman. For more see notes.

Title: How to Learn Useful Changes to Software Projects (to Reduce Runtimes and Software Defects)

Abstract: Business users now demand more insightful analytics; specifically, tools that generate “plans”: specific suggestions on what to change in order to improve the predicted values. This paper proposes XTREE, a planner for software projects. XTREE receives tables of data with independent features and a corresponding weighted class which indicates the quality (“bad” or “better”) of each row in the table. Plans are edits to the rows which ensure the changed row is more likely to be of a “better” quality. XTREE learns those plans by building a decision tree across the data, then reporting the differences in the branches from some current branch to another desired branch. Using data from 11 software projects, XTREE can find better plans compared to three alternate methods. Those plans have led to improvements with a median size of (56%, 28%) and largest size of (60%, 77%) in (defect counts, runtimes), respectively.

Aug28 '15: Mr. Wei Fu submits his paper to ICSE'16:

Tuning for Software Analytics

Wei Fu submits his paper titled "Tuning for Software Analytics: is it Really Necessary?" to ICSE 2016. This is joint work with Dr. Xipeng Shen. For more see notes.

Aug28 '15: Dr. Tim Menzies submits his paper to ICSE'16:

Live and Let Die?

Dr. Menzies submits his paper titled "Live and Let Die? (Delayed Issues not Harder to Resolve)" to ICSE 2016. This is joint work with Dr. William R. Nichols, Dr. Forrest Shull and Dr. Lucas Layman.

Title: Live and Let Die? (Delayed Issues not Harder to Resolve)

Abstract: Many practitioners and academics believe in a delayed issue effect (DIE); i.e. as issues linger longer in a system, they become exponentially harder to resolve. This belief is often used to justify major investments in new development processes that promise to retire more issues, sooner. This paper tests for the delayed issue effect in 171 software projects conducted around the world in the period from 2006–2014. To the best of our knowledge, this is the largest study yet published on this effect. We found no evidence for the delayed issue effect; i.e. the time to resolve issues in a later phase was not consistently more than when issues were resolved soon after their introduction. This result begs the question: how many other long-held beliefs in software engineering are unsupported by current data?

Aug10 '15: Laws of trusted data sharing:

Private data is better data, says our recent ICSE paper.

A repeated, and somewhat pessimistic, conclusion is that the more we privatize data, the more we lose the signal in that data. That is, the safer the data (for sharing), the worse it becomes (for making conclusions).

Recent results have addressed this issue. Former RAISE member (now working on her post-doc) Fayola Peters presented her novel privacy algorithm called LACE2. In recent work with Dr. Tim Menzies, presented at the International Conference on Software Engineering, Dr. Peters applies instance-based learning methods to learn how much (and how little) we can mutate data without changing the conclusions we might learn from that data. Based on that work, she offers three laws of trusted data mining.

To explain our three laws, we must first introduce the concept of “corners” in a data set. Many researchers in machine learning offer the same conclusion: when learning models from data, it is not necessary to share all rows and columns within tables of data. It turns out that most of the signal in a data set can be represented by small "corners" of the data; i.e. just a few columns and just a few rows. While the exact numbers vary from data set to data set:

  • The usual instance selection result is that rows of data contain redundancies, i.e. repeated instances of a similar example. Hence, M1 rows can be approximated using M2=M1/5 (or fewer) rows by (for example) clustering the data, then replacing each cluster with the median point within that cluster.
  • The usual feature selection result is that most of the signal in N1 columns comes from N2=sqrt(N1) columns or fewer (and the remaining data is either noisy or closely correlated with the data in the selected N2 columns).

We say that the “corners” of a data set are just the small number of rows and columns found via instance and feature selection. Using the corners, we can state our first law of cost-effective trusted data sharing:

First Law: don’t share everything; just share the corners.

This law is interesting since, in the usual case, this “corner” is very small compared to the original data set. For example, consider a table of data with 1000 rows and 100 columns; i.e. 1000*100 = 100,000 cells. If this data set compresses according to the instance/feature selection ratios listed above, then the “corner” of this data would hold just 200 rows and 10 columns; i.e. 2000 cells in all, which is 2% of the original data. That is, if we just shared the “corner” of this data, then most of the data is never revealed to an outside party (in this case, we would share 2% and hide 98%).
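As a rough illustration of the first law, here is a minimal sketch that extracts a "corner" from a numeric table. The clustering method (k-means), the variance-based feature scorer, and the 1-in-5 row ratio are illustrative assumptions, not the exact operators used in the LACE2 work:

```python
# Hedged sketch: keep ~M/5 representative rows and ~sqrt(N) columns of a table.
# k-means and variance ranking are stand-ins for the paper's actual selectors.
import numpy as np
from sklearn.cluster import KMeans

def corner(data, row_ratio=5):
    m, n = data.shape
    # Instance selection: cluster rows, keep the member closest to each cluster center.
    k = max(1, m // row_ratio)
    labels = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(data)
    keep_rows = []
    for c in range(k):
        members = np.where(labels == c)[0]
        center = data[members].mean(axis=0)
        keep_rows.append(members[np.argmin(np.linalg.norm(data[members] - center, axis=1))])
    # Feature selection: keep the sqrt(N) highest-variance columns (a simple
    # stand-in for a proper feature selector such as information gain).
    keep_cols = np.argsort(data.var(axis=0))[::-1][: int(np.sqrt(n))]
    return data[np.ix_(sorted(keep_rows), sorted(keep_cols))]

# A 1000 x 100 table shrinks to roughly 200 rows x 10 columns,
# i.e. about 2% of the original 100,000 cells.
table = np.random.rand(1000, 100)
print(corner(table).shape)   # -> (200, 10)
```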

Of course, if this “corner” contains the essence of the data, then it is important to apply a second law of cost-effective trusted data sharing:

Second Law: anonymize the data in the “corners”.

Our research has suggested an interesting way to implement this second law. Our experience with this approach [1,2,3] is that as we move from all the data into the “corners”, the distance increases between:

  • an example of some class A
  • and the nearest example of some class B.

Halfway between these examples is the class boundary where the classification might flip from A to B. Note that, to privatize data, we could mutate example A anywhere up to that class boundary without changing the conclusions of a nearest neighbor algorithm working on this data set. Accordingly, we offer the third law of cost-effective trusted data sharing:

Third Law: while anonymizing, never mutate examples across the class boundary.

Note that, using this third law, our privatization methods achieved better results than standard anonymization algorithms such as k-anonymity, at least for data taken from software engineering projects.
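Here is a minimal sketch of the third law, assuming numeric features and a nearest-unlike-neighbor rule for locating the class boundary; the mutation range (a random fraction below 0.5) is an illustrative assumption, not the exact LACE2 procedure:

```python
# Hedged sketch: mutate each row toward, but never past, the midpoint between it
# and its nearest neighbor of a different class, so a 1-NN learner's conclusions
# are preserved. Parameter names and the 0.1..0.4 range are illustrative only.
import numpy as np

def privatize(X, y, rng=np.random.default_rng(1), lo=0.1, hi=0.4):
    X, out = np.asarray(X, float), []
    for i, row in enumerate(X):
        unlike = X[y != y[i]]                      # rows of any other class
        d = np.linalg.norm(unlike - row, axis=1)
        nearest = unlike[np.argmin(d)]             # nearest unlike neighbor
        frac = rng.uniform(lo, hi)                 # always below 0.5 ...
        out.append(row + frac * (nearest - row))   # ... so we never cross the boundary
    return np.array(out)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array(["A", "A", "B", "B"])
print(privatize(X, y))
```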

In their recent paper, Dr. Peters and Dr. Menzies simulated a consortium of 20+ data owners, each with a separate data set. A “pass the parcel” system was implemented in which each data owner incrementally added their data to a parcel of shared data, but only the parts of their data that were somehow outstandingly different from the data already in the parcel. To define “different”, various instance-based reasoning operators were employed such that when some data was called “different”, that comparison was based on the most informative attributes. In all, the shared parcel held just 5% of the data owned by all members of the consortium, yet when software quality predictors were built from this 5%, those predictors performed better than predictors built from all that data.
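A hedged sketch of that "pass the parcel" protocol follows; the Euclidean distance and the fixed "different enough" threshold are my stand-ins for the instance-based operators (over the most informative attributes) used in the paper:

```python
# Hedged sketch of "pass the parcel": each owner in turn adds only those rows
# that are sufficiently different from what is already in the shared parcel.
import numpy as np

def pass_the_parcel(owners, threshold=0.5):
    parcel = []
    for data in owners:                     # each owner receives the parcel in turn
        for row in np.asarray(data, float):
            if not parcel:
                parcel.append(row)
                continue
            d = np.linalg.norm(np.array(parcel) - row, axis=1).min()
            if d > threshold:               # only share "outstandingly different" rows
                parcel.append(row)
    return np.array(parcel)

owners = [np.random.rand(100, 5) for _ in range(20)]   # 20 simulated data owners
parcel = pass_the_parcel(owners)
print(len(parcel), "of", 20 * 100, "rows shared")
```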

The most significant aspect of that work was that, after applying our three laws of data sharing, the predictive models built from the privatized version of the shared data generated better predictions than those built from the raw data.

So, not only is privatization necessary, it can actually boost the value of the data.

Jun9 '15: LexisNexis to fund AI lab:

Industry workers need research partners to verify their innovative results.

For more, see briefing notes

Feb26 '15: HPC Cluster Access:

1000 cores

We now have HPC accounts, which give us access to 1000+ 8-core machines! For more see the tutorial.

Oct28 '14: News of GALE:

Kicking ass and taking names

Results plots: DTLZ with d=20 and o=2,4,6,8; DTLZ with o=2 and d=20,40,80,160.

Oct24 '14: Crazed idea.... Keys West:

Data mining meets MOEA/D

Been reading up on MOEA/D (see also PADE). It's kind of a meta-learner: it builds islands, then runs a standard learner on each island. E.g. MOEA/D-DE would run differential evolution on various islands.

There are some standard methods for making the islands but I was thinking, why not just use linear-time binary-split FastMap?
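For reference, a linear-time binary-split FastMap pass might look like the following sketch; the Euclidean metric and the sqrt(n) stopping size are my assumptions, not settings from any published code:

```python
# Hedged sketch: recursive FastMap-style binary splits for building "islands".
# Two O(n) distance sweeps find far-apart poles, every point is projected onto
# the pole-to-pole line, the points split at the median projection, and we recurse.
import numpy as np

def fastmap_split(points, min_size=None):
    points = np.asarray(points, float)
    if min_size is None:
        min_size = max(4, int(np.sqrt(len(points))))   # illustrative stopping size
    if len(points) <= min_size:
        return [points]                                # one island (leaf)
    anchor = points[0]
    east = points[np.argmax(np.linalg.norm(points - anchor, axis=1))]
    west = points[np.argmax(np.linalg.norm(points - east, axis=1))]
    c = np.linalg.norm(east - west)
    a = np.linalg.norm(points - west, axis=1)
    b = np.linalg.norm(points - east, axis=1)
    x = (a**2 + c**2 - b**2) / (2 * c)                 # cosine-rule projection
    order = np.argsort(x)
    mid = len(points) // 2
    left, right = points[order[:mid]], points[order[mid:]]
    return fastmap_split(left, min_size) + fastmap_split(right, min_size)

islands = fastmap_split(np.random.rand(500, 3))
print(len(islands), "islands of sizes", [len(i) for i in islands])
```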

Then, build recommendations for jumping from the current island to a better one, as follows (a rough sketch of the contrast-set step appears after this list). For each current island I1:

  • Find "better" islands where "better" means that for at least one objective, the Cliff's delta effect size (using the thresholds proposed top of p14 of here, pword=user=guest, o) says they are truly different and the medians are skewed in a "better" way.
  • For each island I2 build a contrast learning task as follows where class1= I1, class2= I2 and class3= every other island.
  • Discretize all numerics by minimizing entropy of class1,class2,class3.
  • Sort the ranges by BORE where best=class2 and rest=class1_ (for notes on BORE, see section 4.2 of this paper.
  • Let the value of the first i items of that sort be what percentage of class1,class2,class3 instances that have those i ranges contain class2 (the target class).
  • Return the smallest i ranges where i+1 has less value.
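A rough sketch of the contrast-set step above, with the simplifications loudly noted: equal-width bins stand in for entropy-minimizing discretization, the BORE-style score b*b/(b+r) is one common form of that measure, and class3 (every other island) is omitted for brevity:

```python
# Hedged sketch: rank discretized ranges by a BORE-style score, then keep the
# smallest prefix of ranges whose cumulative "value" stops improving.
import numpy as np

def bore_score(rng, class1, class2):
    col, lo, hi = rng
    b = np.mean((class2[:, col] >= lo) & (class2[:, col] <= hi))   # P(range | best)
    r = np.mean((class1[:, col] >= lo) & (class1[:, col] <= hi))   # P(range | rest)
    return b * b / (b + r) if (b + r) > 0 else 0.0

def contrast_set(class1, class2, bins=5):
    data = np.vstack([class1, class2])
    labels = np.array([1] * len(class1) + [2] * len(class2))       # 1=current, 2=better
    # equal-width discretization (a stand-in for entropy-based splitting)
    ranges = []
    for col in range(data.shape[1]):
        edges = np.histogram_bin_edges(data[:, col], bins=bins)
        ranges += [(col, lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
    ranges.sort(key=lambda r: bore_score(r, class1, class2), reverse=True)

    def value(first_i):
        # fraction of instances selected by the first i ranges that are class2
        hit = np.zeros(len(data), bool)
        for col, lo, hi in ranges[:first_i]:
            hit |= (data[:, col] >= lo) & (data[:, col] <= hi)
        return labels[hit].tolist().count(2) / max(1, hit.sum())

    for i in range(1, len(ranges)):
        if value(i + 1) < value(i):        # adding range i+1 dilutes the target class
            return ranges[:i]
    return ranges

cur = np.random.rand(50, 3)                # "current" island
bet = np.random.rand(50, 3) + 0.5          # "better" island (shifted)
print(contrast_set(cur, bet)[:3])
```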

If this is data mining (where no new data can be generated) then stop. Call what you have "islands, first generation". Else:

  • For each island with a contrast set, collect new instances by interpolating instances in that island, then applying the contrast set.
  • Repeat until new improvements are only epsilon better than the last. This generates "islands, last generation".
  • Run the above current-to-better algorithm using a combination of the first and last generations (a rough sketch of this loop follows).
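Here is a rough sketch of that interpolate-and-repeat loop, reusing the (col, lo, hi) range format from the contrast-set sketch above; the per-generation kid count, the scoring function, and epsilon are all illustrative assumptions:

```python
# Hedged sketch of the "islands, last generation" loop: interpolate pairs of rows
# inside an island, keep kids that match the contrast set, stop when the best
# objective score improves by less than epsilon.
import numpy as np

def matches(row, contrast):                      # contrast = [(col, lo, hi), ...]
    return all(lo <= row[col] <= hi for col, lo, hi in contrast)

def evolve_island(island, contrast, score, epsilon=0.01, kids_per_gen=20,
                  rng=np.random.default_rng(1)):
    island = np.asarray(island, float)
    best = max(score(r) for r in island)
    while True:
        # one generation: interpolate random pairs, keep kids the contrast set accepts
        for _ in range(kids_per_gen):
            i, j = rng.integers(len(island), size=2)
            kid = island[i] + rng.random() * (island[j] - island[i])
            if matches(kid, contrast):
                island = np.vstack([island, kid])
        new_best = max(score(r) for r in island)
        if new_best - best < epsilon:            # only epsilon better: stop
            return island                        # "islands, last generation"
        best = new_best

island = np.random.rand(30, 3)
contrast = [(0, 0.5, 1.0)]                       # e.g. output of contrast_set() above
print(len(evolve_island(island, contrast, score=lambda r: r.sum())), "rows after evolution")
```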

Now some shortcuts:

  • Instead of discretizing for each new pair of current, better islands, discretize ONCE across all the islands. Probably would work just fine.
  • Once the data is discretized, build a reverse index from the ranges back to the candidates they select for, which would make computing the value scores very fast.
  • When looking for "better", be simpler.
  • Active learning: on the way down with FastMap, prune dull islands. Also, when testing if one island is better than another, only pick some items at random in each island (say, the small m examples nearest the FastMap poles of each island).

But why is it called Keys West? An algorithm that builds bridges between islands? That extends an older algorithm of mine called Keys2? Well, see if you can figure that out.