Generative Adversarial Networks (GANs) are generative models that produce samples following the same probability distribution as the given training data. However, because they are frequently trained on sensitive data to generate realistic samples, there is growing concern that training data may leak from a trained GAN, causing severe privacy breaches in practice. This paper focuses on membership inference attacks, which aim to reveal whether a given data record was used in the target model's training procedure. Although membership inference attacks have been successfully applied to various models, mounting them against generative models remains a challenging problem because the attack model cannot accurately represent the target model. To solve this problem, we quantify how well the attack model represents the target model by measuring the attack model's generalization gap (i.e., the difference between the attack model's prediction distributions on training data and unseen data). To reduce this generalization gap, we propose a novel membership inference attack framework, called VoteGAN, consisting of multiple discriminators and one generator. VoteGAN trains each discriminator separately on a disjoint partition of the training data. This enables VoteGAN to approximate the mixture distribution over all partitions, reflecting the entire dataset more accurately. VoteGAN can therefore carry out the membership inference attack more effectively by leveraging this more faithful representation of the target model. Our experimental results demonstrate that the proposed attack model outperforms LOGAN (PETS'19), the state-of-the-art baseline, achieving a 25% higher attack success rate on average. In addition, in the black-box setting, VoteGAN can effectively attack models trained with up to twice as much data as LOGAN can.
Moreover, we show that VoteGAN is more resistant than the baseline attack to target models trained with overfitting mitigation, enabling a more generic membership inference attack against generative models.
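The core idea of training discriminators on disjoint partitions and aggregating their outputs can be illustrated with a toy sketch. The code below is a hypothetical, highly simplified stand-in (not the paper's implementation): each "discriminator" is a per-partition Gaussian density, and the membership score averages the discriminators' outputs, approximating the mixture distribution over all partitions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: each per-partition "discriminator" is a 1-D Gaussian
# density fitted to that partition; the ensemble score averages them,
# approximating the mixture distribution of all partitions.

def fit_partition(x):
    # "Train" a discriminator on one partition: estimate mean and std.
    return x.mean(), x.std() + 1e-6

def density(x, mu, sigma):
    # Gaussian probability density at x.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

train = rng.normal(0.0, 1.0, size=300)     # toy training data
parts = np.array_split(train, 3)           # disjoint partitions
models = [fit_partition(p) for p in parts] # one discriminator per partition

def membership_score(x):
    # Average over discriminators: a simple "vote" on how likely x
    # is to come from the training distribution.
    return float(np.mean([density(x, mu, s) for mu, s in models]))

member, non_member = train[0], 8.0         # 8.0 lies far outside the data
print(membership_score(member) > membership_score(non_member))  # True
```

In a real attack the discriminators would be neural networks and the scores their confidence outputs, but the aggregation principle, averaging partition-wise models to cover the whole training distribution, is the same.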