Determining learning direction via multi-controller model for stably searching generative adversarial networks

Publication Name

Neurocomputing

Abstract

The data generated by a Generative Adversarial Network (GAN) inevitably contains noise, which can be reduced by searching for and optimizing the GAN architecture. To search generative adversarial network architectures stably, a neural architecture search (NAS) method, StableAutoGAN, is proposed based on the existing algorithm AutoGAN. The stability of conventional reinforcement learning (RL)-based NAS methods for GANs is adversely affected by the uncertainty of the learning direction: the controller keeps updating in whatever direction the reward indicates, even when that reward is inaccurate. In StableAutoGAN, a multi-controller model mitigates this problem by comparing the performance of the controllers after they receive rewards. During the search process, each controller learns its sampling policy independently, and its learning effect is measured by a credibility score that determines which controller is used. Our experiments show that the standard deviation of the Fréchet Inception Distance (FID) scores of the GANs discovered by StableAutoGAN is approximately 1/16 of that of AutoGAN on CIFAR-10 and 1/8 on STL-10, while the generation quality remains comparable to AutoGAN.
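
The abstract describes a multi-controller scheme in which each controller learns independently and a credibility score decides which controller's policy drives the search. Below is a minimal Python sketch of that idea; all names (Controller, credibility, sample_architecture, search_step) and the credibility update rule are illustrative assumptions, not the paper's actual implementation.

```python
import random

class Controller:
    """Hypothetical RL controller that samples GAN architectures (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.credibility = 0.0  # running measure of how well this controller's policy is learning

    def sample_architecture(self):
        # Placeholder: a real controller would emit architecture tokens from a learned policy.
        return [random.randint(0, 3) for _ in range(6)]

    def update(self, reward):
        # Placeholder policy update; credibility tracked as a smoothed average of rewards.
        self.credibility = 0.9 * self.credibility + 0.1 * reward


def search_step(controllers, evaluate):
    """One illustrative search iteration: every controller learns independently,
    but only the most credible one supplies the next architecture."""
    for c in controllers:
        arch = c.sample_architecture()
        reward = evaluate(arch)  # e.g., a score derived from generated-image quality
        c.update(reward)
    best = max(controllers, key=lambda c: c.credibility)
    return best.sample_architecture()


# Example usage with a dummy reward function (assumed, for demonstration only):
if __name__ == "__main__":
    ctrls = [Controller(f"ctrl-{i}") for i in range(3)]
    chosen = search_step(ctrls, evaluate=lambda arch: sum(arch) / len(arch))
    print(chosen)
```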

Open Access Status

This publication is not available as open access

Volume

464

First Page

37

Last Page

47

Funding Number

JG00418JX66

Funding Sponsor

National Natural Science Foundation of China

Link to publisher version (DOI)

http://dx.doi.org/10.1016/j.neucom.2021.08.070