Learning and memory are two of the most basic functions of the animal brain, and both rely on the repeated activity of neural networks. Human learning and memory behaviors are extremely complex, whereas lower animals such as the invertebrate Aplysia exhibit much simpler learning and memory behaviors and possess far simpler nervous systems, which makes them good material for research on learning and memory. Machine learning is one method of realizing artificial intelligence, with learning as its core content. Current machine learning focuses on improving the performance of specific algorithms in empirical learning and therefore belongs to low-level learning; the mechanisms of human-level learning must first be revealed before human-level artificial intelligence can be realized. The artificial neural network is a kind of neural network that simulates the nervous system of the human brain for intelligent information processing, and it has made significant achievements in learning and memory through the continuous improvement of models and algorithms. However, in contrast to biological neural networks, whose neurons are identifiable, the neuron nodes in existing artificial neural networks are not identifiable. To obtain a relatively high learning rate, all nodes in such a network are required to participate in its operation, which increases the learning rate and thereby improves network performance. In this paper, starting from biological neural networks, the three simple forms of learning in Kandel's Aplysia gill withdrawal reflex are analyzed. The aim is to discover advanced learning that can be applied to machine learning, that is, an artificial neural network with the characteristics of a biological neural network. In addition, the identifiability of nodes in the artificial neural network is verified.
Our approach is to establish a neural network model based on the learning mechanism of the Aplysia gill withdrawal reflex. In the first experiment, with the gill contraction response as the output, a simple "M-P" model with two input terminals and one output terminal was built, with siphon stimulation and tail stimulation as the inputs. After training, the model clearly displayed two simple forms of learning: habituation and sensitization. In the second experiment, a neural network model with four hidden layers was constructed, and the output was represented by three neurons indicating, respectively, the presence or absence of the gill withdrawal reflex, a weak response, and a strong response. The sigmoid activation function and the BP (backpropagation) algorithm were employed to train the model. The experimental results demonstrated habituation and sensitization, as well as a combination of the two; the third simple learning mechanism, classical conditioning, was also manifested. During model training, the processes of both learning and forgetting were presented. The third experiment verified the identifiability of nodes in the neural network. For this experiment, the model was modified again: the final model is a neural network with 2 inputs, 3 outputs, and 2 hidden layers. The third experiment was conducted in two situations: one was to observe the experimental results when a node in the hidden layer was deleted (4 cases in total), and the other was to delete a node in the hidden layer under each of the three simple learning mechanisms (3 cases in total).
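The first experiment above can be sketched as a minimal M-P style unit with two inputs and one output. The update rules below (multiplicative weight decay for habituation, weight facilitation for sensitization) and all numeric constants are illustrative assumptions for exposition, not the paper's actual training procedure.

```python
# Sketch of the first experiment: an M-P style unit with two inputs
# (siphon, tail) and one output (gill contraction strength). The
# habituation/sensitization update rules and constants below are
# assumed for illustration, not taken from the paper.

def response(w_siphon, w_tail, siphon, tail):
    """Linear M-P style response to the two stimuli."""
    return w_siphon * siphon + w_tail * tail

# Initial synaptic weights.
w_siphon, w_tail = 1.0, 1.0

# Habituation: repeated siphon stimulation weakens its synapse,
# so the gill response to the same stimulus declines over trials.
habituation = []
for _ in range(5):
    habituation.append(response(w_siphon, w_tail, siphon=1.0, tail=0.0))
    w_siphon *= 0.7          # assumed per-trial decay rate

# Sensitization: a strong tail shock facilitates the siphon synapse,
# so a subsequent siphon touch evokes a stronger response again.
w_siphon *= 3.0              # assumed facilitation factor
sensitized = response(w_siphon, w_tail, siphon=1.0, tail=0.0)

print(habituation)                   # monotonically decreasing
print(sensitized > habituation[-1])  # response recovers and grows
```

The point of the sketch is only the qualitative shape of the two learning curves: the response to an unchanged siphon stimulus falls across repeated trials, and jumps back up after the tail shock.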
The experiments revealed the different functions of the neuron nodes in the hidden layer of the neural network model: the x11 neuron participates in the excitatory transmission of the siphon during classical conditioning; the x21 neuron engages in both the excitatory and the inhibitory transmission of the tail and the siphon; the x31 neuron is involved in the excitatory transmission of the siphon during sensitization; and the x41 neuron joins in the inhibitory transmission of the siphon. These results verified our hypothesis that the neuron nodes in an artificial neural network model can be identified, and they are consistent with the biological experimental results obtained by Kandel. Future research will focus on how to make these identifiable neurons learn better in neural networks without relying on complex algorithms or increased network depth.
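The node-deletion test behind these attributions can be sketched as an ablation of one hidden unit in a small 2-input / 3-output network, comparing outputs with and without that unit. The weights below are random stand-ins rather than the paper's trained model, and the layer sizes and the `forward`/`ablate` interface are assumptions made for illustration.

```python
import numpy as np

# Sketch of the third experiment's node-deletion (ablation) test:
# zero one hidden unit of a 2-input / 2-hidden-layer / 3-output
# network and compare the outputs to attribute a function to that
# unit. Weights are random stand-ins, not the trained model.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> hidden layer 1 (4 units) -> hidden layer 2 (4 units) -> 3 outputs
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 4))
W3 = rng.normal(size=(4, 3))

def forward(x, ablate=None):
    """Run the network; ablate=(layer, unit) zeroes that hidden unit."""
    h1 = sigmoid(x @ W1)
    if ablate and ablate[0] == 1:
        h1[ablate[1]] = 0.0          # delete a unit in hidden layer 1
    h2 = sigmoid(h1 @ W2)
    if ablate and ablate[0] == 2:
        h2[ablate[1]] = 0.0          # delete a unit in hidden layer 2
    return sigmoid(h2 @ W3)

x = np.array([1.0, 0.0])             # e.g. siphon stimulation only
full = forward(x)
lesioned = forward(x, ablate=(1, 0)) # delete the first hidden-1 unit

# The per-output change attributes a role to the deleted node: a large
# shift on one output suggests the node carries that response pathway.
print(np.round(full - lesioned, 3))
```

Repeating the comparison for each deletable node and each stimulation pattern, as the paper does across its 4 + 3 cases, yields a per-node profile of which input-output pathway the node serves.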