Automatic analysis of wireless endoscopic images can enable early detection of dangerous diseases such as cancer; consequently, many articles have proposed methods to improve both detection speed and accuracy. Deep learning has achieved remarkably positive results in medical diagnostics in recent years. However, although accuracy has reached a level sufficient for practical deployment, these algorithms are black boxes and are therefore hard to interpret without explanation. Explainable artificial intelligence (XAI) is essential for enabling clinical users to obtain informed decision support from AI and to comply with evidence-based medical practice. Applying XAI in clinical settings requires proper evaluation criteria to ensure that an explanation technique is both technically sound and clinically useful, but specific support for achieving this goal is lacking. To reduce this gap, we propose an explainable artificial intelligence method for wireless endoscopic image classification that combines a custom CNN classifier with a modified version of gradient-weighted class activation mapping (Grad-CAM). In addition, we implemented a new custom data augmentation method to enhance data quality, even for small datasets. We focused on preserving the color of medical images, because the color sensitivity of medical images can affect the efficiency of the model. We used the open-source KVASIR dataset, which consists of a total of 8,000 wireless capsule images, for both data augmentation and training of the custom CNN model. The heat maps of the classification results, together with the efficient augmentation method, achieved highly positive results for medical image classification. Recently, hybrid models have emerged as a promising approach to the detection and classification of tumors, but their classification decisions remain black boxes. For these reasons, we conducted this explainable AI research on wireless endoscopic images.
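As a minimal sketch of the Grad-CAM heat-map computation the abstract refers to (not the authors' modified version; this assumes the last convolutional layer's feature maps and the gradients of the class score with respect to them have already been extracted from the CNN, and the array shapes are illustrative):

```python
import numpy as np

def grad_cam_heatmap(feature_maps, gradients):
    """Standard Grad-CAM: weight each feature map by the mean of its
    gradients, sum over channels, apply ReLU, and normalize to [0, 1].

    feature_maps: (H, W, C) activations of the last conv layer
    gradients:    (H, W, C) gradients of the class score w.r.t. them
    """
    # Global-average-pool the gradients to get one weight per channel
    weights = gradients.mean(axis=(0, 1))                        # shape (C,)
    # Weighted combination of the feature maps
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))   # shape (H, W)
    # ReLU keeps only regions with a positive influence on the class
    cam = np.maximum(cam, 0.0)
    # Normalize so the map can be overlaid on the image as a heat map
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example with random activations/gradients (7x7 maps, 16 channels)
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((7, 7, 16))
grads = rng.standard_normal((7, 7, 16))
heatmap = grad_cam_heatmap(fmaps, grads)
```

In practice the resulting low-resolution map is upsampled to the input image size and blended over the endoscopic frame to highlight the regions driving the classification.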