The automated labelling and severity prediction of bug reports for computer software is the target of researchers at The Hashemite University in Zarqa, Jordan. Details of their efforts are mapped out in the International Journal of Computational Science and Engineering. Ultimately, they are developing an intelligent classifier that can predict whether a newly submitted bug report is of sufficient concern in the bug-tracking system to warrant urgent investigation and remediation.
To develop their system, the team built two datasets using 350 bug reports from the open-source community – Eclipse, Mozilla, and Gnome – reported in the monstrous, well-known, and aptly named Bugzilla database. The datasets have characteristic textual features based on 51 important terms, the team explains, and from this information they could train various discriminative models to carry out automated labelling and severity prediction of any subsequently submitted bug report. They used a boosting algorithm to improve performance.
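As a rough illustration of the kind of approach described – textual features drawn from a small set of important terms, fed to an AdaBoost classifier – here is a minimal sketch using scikit-learn. The example reports, labels, and the max_features=51 cap standing in for the paper's 51 terms are placeholder assumptions, not the authors' data, vocabulary, or code.

```python
# Illustrative sketch only: a text-feature set capped at 51 terms feeding an
# AdaBoost classifier, loosely mirroring the approach described above.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Placeholder bug-report summaries and labels (the study itself uses 350
# reports drawn from Eclipse, Mozilla, and Gnome via Bugzilla).
reports = [
    "NullPointerException crashes editor on save",
    "Memory leak when indexing a large workspace",
    "UI button misaligned in the preferences dialog",
    "Typo in the settings menu label",
]
labels = ["bug", "bug", "enhancement", "enhancement"]

# max_features=51 stands in for the paper's 51 important terms; the real
# vocabulary and weighting scheme are not reproduced here.
model = make_pipeline(
    TfidfVectorizer(max_features=51, stop_words="english"),
    AdaBoostClassifier(n_estimators=100, random_state=0),
)
model.fit(reports, labels)
print(model.predict(["Editor crashes with a NullPointerException on save"]))
```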
“For automated labelling, the accuracy reaches around 91% with the AdaBoost algorithm and cross-validation test,” the team reports. However, severity prediction accuracy was only around 67% with the AdaBoost algorithm and the cross-validation test. Nevertheless, the team says their results are encouraging and offer hope of removing the bottleneck that is the manual assessment of bug reports used until now.
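For readers unfamiliar with how a cross-validation accuracy figure like the 91% and 67% quoted above is obtained, the self-contained sketch below shows the general procedure with scikit-learn's cross_val_score. The data, fold count, and model settings are assumptions for illustration, not the study's protocol.

```python
# Illustrative cross-validation accuracy check; data and settings are assumptions.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

reports = [
    "NullPointerException crashes editor on save",
    "Memory leak when indexing a large workspace",
    "Segfault in renderer when loading a PDF",
    "UI button misaligned in the preferences dialog",
    "Typo in the settings menu label",
    "Dark theme colours look washed out",
]
severity = ["major", "major", "major", "minor", "minor", "minor"]

model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    AdaBoostClassifier(n_estimators=100, random_state=0),
)
# 3-fold stratified cross-validation; the paper's exact fold setup is not stated here.
scores = cross_val_score(model, reports, severity, cv=3, scoring="accuracy")
print(f"mean cross-validation accuracy: {scores.mean():.0%}")
```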
“The proposed feature sets have proved a good classification performance on two ‘hard’ problems,” the team reports. “The results are encouraging and, in the future, we plan to work more on enhancing the classification algorithms component for better performance,” the researchers conclude.
Otoom, A.F., Al-Shdaifat, D., Hammad, M., Abdallah, E.E. and Aljammal, A. (2019) ‘Automated labelling and severity prediction of software bug reports’, Int. J. Computational Science and Engineering, Vol. 19, No. 3, pp. 334–342.