Various techniques for "explainable artificial intelligence" have been studied to alleviate the uninterpretability of deep neural network models, the so-called "black boxes." In this paper, one such method, the 'Region of Low Entropy (RLE),' is used to quantify uncertainty on abdominal CT image data. The RLE method, based on the concept of entropy from information theory, exhibits low uncertainty for objects belonging to classes the model has learned. Thus, entropy is low for learned objects and high otherwise, which helps determine confidence by quantifying the uncertainty of the model's inferences. In the experiment, two reliability scores are computed: one from the predictive outputs of an existing trained model, and one using RLE. For each score, we then checked whether a high reliability score coincided with agreement between the model's predicted class and the ground truth, and evaluated this with the AUC score. The experimental results confirm that the RLE-based method estimates reliability considerably more accurately.
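To make the entropy-based reliability idea concrete, the following is a minimal sketch, not the paper's actual implementation: predictive entropy is computed from softmax probabilities as an uncertainty score (low entropy suggesting a confidently learned object), and an AUC check measures how well that score separates correct from incorrect predictions. All probabilities, labels, and function names here are hypothetical illustrations.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Entropy H = -sum_c p_c * log(p_c) along the class axis (axis 0).

    probs: array of shape (C, N) with per-sample class probabilities.
    Returns an (N,) array; low values indicate confident predictions.
    """
    return -np.sum(probs * np.log(probs + eps), axis=0)

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive sample outranks a randomly chosen negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical two-class softmax outputs for four samples, plus ground truth.
probs = np.array([[0.99, 0.01],
                  [0.55, 0.45],
                  [0.10, 0.90],
                  [0.48, 0.52]])
truth = np.array([0, 1, 1, 0])
pred = probs.argmax(axis=1)
correct = pred == truth

# Reliability score: negative entropy (low entropy -> high reliability).
reliability = -predictive_entropy(probs.T)

# AUC of reliability vs. correctness: 1.0 means the score perfectly
# ranks correct predictions above incorrect ones in this toy example.
auc = roc_auc(reliability, correct)
```

In this toy case the two correct predictions also happen to be the low-entropy ones, so the AUC is 1.0; on real CT data the AUC quantifies, in the same way, how faithfully the reliability score tracks prediction correctness.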