

Large language models (LLMs) offer low interpretability because of their vast parameter counts and intricate internal representations. This study aims to improve the understandability and interpretability of LLM-based automatic question-answering (QA) systems, addressing a critical gap in the field. To this end, we introduce an interpretable architecture for a domain-specific LLM-based QA system. The architecture decomposes the QA system into six modules: operation recognition, intent recognition, normalization, conversion to structured triplets, knowledge graph querying, and query result processing. With this design, the input and output of every module are human-readable text, making the QA system's processing interpretable, while grounding answers in knowledge graph data increases their credibility. The proposed architecture integrates the powerful natural language understanding capabilities of LLMs with the data querying capacity of knowledge graphs, offering a reference for addressing the low interpretability of LLM-based automatic QA systems.
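The six-module decomposition can be sketched as a pipeline in which every stage consumes and emits human-readable text. The sketch below is illustrative only, not the authors' implementation: all module functions, the toy knowledge graph, and the intent/triplet formats are hypothetical stand-ins.

```python
# Hypothetical sketch of the six-module interpretable QA pipeline.
# Each stage's input and output are readable text, so intermediate
# results can be inspected by a human.

def recognize_operation(question: str) -> str:
    # Stub: classify the requested operation (e.g., a fact lookup).
    return "lookup"

def recognize_intent(question: str) -> str:
    # Stub: extract the user's intent as readable text.
    return "capital_of: France"

def normalize(intent: str) -> str:
    # Stub: map surface forms to canonical relation names.
    return intent.replace("capital_of", "hasCapital")

def to_triplet(normalized: str) -> tuple:
    # Stub: convert to a (subject, predicate, object) query pattern,
    # with "?x" marking the unknown to be retrieved.
    relation, entity = normalized.split(": ")
    return (entity, relation, "?x")

def query_knowledge_graph(triplet, kg):
    # Stub: match the triplet pattern against a toy knowledge graph.
    s, p, _ = triplet
    return [o for (ks, kp, o) in kg if ks == s and kp == p]

def process_results(results) -> str:
    # Stub: render query results as a natural-language answer.
    return ", ".join(results) if results else "No answer found."

def answer(question: str, kg) -> str:
    op = recognize_operation(question)            # 1. operation recognition
    intent = recognize_intent(question)           # 2. intent recognition
    normalized = normalize(intent)                # 3. normalization
    triplet = to_triplet(normalized)              # 4. triplet conversion
    results = query_knowledge_graph(triplet, kg)  # 5. knowledge graph querying
    return process_results(results)               # 6. query result processing

kg = [("France", "hasCapital", "Paris")]
print(answer("What is the capital of France?", kg))  # → Paris
```

Because each intermediate value is plain text (or a readable triplet), a failure at any stage can be traced by inspecting that stage's output directly, which is the interpretability property the abstract claims.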