An explainable deep model for defect prediction



Publication Details

Humphreys, J. & Dam, H. Khanh. (2019). An explainable deep model for defect prediction. Proceedings - 2019 IEEE/ACM 7th International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering, RAISE 2019 (pp. 49-55). United States: IEEE.


Self-attention transformer encoders are an effective method for sequence-to-class prediction tasks: they can disentangle long-distance dependencies and have several regularising effects. We achieve results substantially better than the state of the art on one such task, defect prediction, with several added benefits. Existing techniques do not normalise for correlations that are inversely proportional to the usefulness of the prediction; in fact, they go further and specifically exploit these features, which is tantamount to data leakage. Our model is end-to-end trainable and capable of explaining its predictions. This explainability provides insight into the potential causes of a model's decisions, the absence of which has prevented defect prediction from gaining traction in industry.
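The core mechanism the abstract refers to, scaled dot-product self-attention, lets every position in a token sequence attend directly to every other position, which is why long-distance dependencies are captured in a single step rather than through many recurrent hops. The sketch below is a minimal, illustrative NumPy implementation of that mechanism (without the learned query/key/value projections or multiple heads of a full transformer encoder); it is not the authors' actual model.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over token matrix X of shape
    (seq_len, d). Queries, keys, and values are all X itself here,
    a simplification for illustration (no learned projections)."""
    d = X.shape[-1]
    # Pairwise similarity between every pair of positions: (seq_len, seq_len).
    scores = (X @ X.T) / np.sqrt(d)
    # Numerically stable row-wise softmax: each row sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture of ALL input positions,
    # so distant tokens can influence one another in one step.
    return weights @ X

# Example: 5 tokens with 8-dimensional embeddings (random, for illustration).
X = np.random.default_rng(0).normal(size=(5, 8))
out = self_attention(X)
```

For sequence-to-class tasks such as defect prediction, the attended representations are typically pooled (e.g. averaged or read from a classification token) and fed to a classifier head; the attention weights themselves are one source of the explainability the abstract mentions, since they indicate which positions influenced the prediction.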



