An open-source natural language processing toolkit to support software development: addressing automatic bug detection, code summarisation and code search
- PMID: 38654755
- PMCID: PMC11036033
- DOI: 10.12688/openreseurope.14507.2
Abstract
This paper introduces work carried out in the Horizon 2020 DECODER project (acronym for "DEveloper COmpanion for Documented and annotatEd code Reference", Grant Agreement no. 824231), which links the fields of natural language processing (NLP) and software engineering. The project as a whole develops a framework, the Persistent Knowledge Monitor (PKM), that acts as a central infrastructure to store, access and trace all the data, information and knowledge related to a given piece of software or ecosystem. The underlying meta-model defines the knowledge base that can be queried and analysed by all the tools integrated and developed in DECODER. In addition, the DECODER project offers a user-friendly interface through which each of the three predefined roles (developers, maintainers and reviewers) can access and query the PKM with a personal account. The paper focuses on the NLP tools developed and integrated in the PKM, namely the deep learning models for variable misuse detection, code summarisation and semantic parsing. These were developed under a common work package, "Activities for the developer", which targets developers specifically: among the multiple functionalities that DECODER offers, they can detect bugs, automatically generate documentation for source code and generate code snippets from natural-language instructions. These tools assist developers in their daily work by increasing their productivity and sparing them tedious tasks such as manual bug detection. Training and validation were conducted for four use cases in the Java, C and C++ programming languages in order to evaluate the performance, suitability and usability of the developed tools.
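To make the variable misuse task concrete, the following is a minimal, hypothetical Java snippet of our own (not taken from the DECODER tools or their evaluation data) showing the kind of single-token bug such a detection model is trained to flag: a developer reuses one in-scope variable where another of the same type was intended.

```java
// Illustrative example of a variable-misuse bug (names and code are
// hypothetical, not from the paper): in clampBuggy, `lo` is used where
// `hi` was intended, so the upper bound is never enforced correctly.
public class VariableMisuseExample {

    // Buggy version: the second comparison mistakenly reads `value > lo`.
    static int clampBuggy(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > lo) return hi;   // misuse: should be `value > hi`
        return value;
    }

    // Corrected version after the misuse is repaired.
    static int clampFixed(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    public static void main(String[] args) {
        // Clamping 5 into [0, 10]: the buggy version wrongly returns 10,
        // the fixed version returns 5 unchanged.
        System.out.println(clampBuggy(5, 0, 10)); // prints 10 (wrong)
        System.out.println(clampFixed(5, 0, 10)); // prints 5
    }
}
```

Bugs of this shape compile cleanly and pass superficial review, which is why the paper treats their detection as a learning problem over source code rather than a job for the compiler.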
Keywords: Code Summarisation; Deep Learning; Natural Language Processing; Semantic Parsing; Software Engineering; Variable Misuse.
Plain language summary
Software engineers usually spend a lot of time on tedious activities such as debugging, documenting code, or finding examples of code snippets to use as a basis for new programmes. Given the large and complex software systems that exist nowadays, performing these tasks manually causes a considerable drop in programmers' overall productivity. The models developed in this work target the Java, C and C++ programming languages and aim to reduce the effort of software developers, maintainers and reviewers by proposing automatic NLP solutions for tasks such as bug detection, documentation generation and code search.
Copyright: © 2023 Robledo C et al.
Conflict of interest statement
No competing interests were disclosed.