Lipizzaner: A System That Scales Robust Generative Adversarial Network Training

Over the last few years, researchers and practitioners have adopted Generative Adversarial Networks (GANs) as a tool to address several challenging machine learning problems. Most of these problems are related to generative machine learning, but others, such as semi-supervised learning, use GANs to build classifiers from only a few labeled data samples. Despite this success, GAN training suffers from several limitations or pathologies. The most commonly observed ones are listed below, followed by a minimal training-loop sketch that shows the adversarial setup in which they arise:

  • mode collapse: the generator produces samples from only one (or a few) modes of the target distribution,
  • non-convergence: the algorithm does not find an equilibrium, and
  • vanishing gradients: the training gradients become negligible, e.g., because one adversary is too strong or too weak.
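To put these pathologies in context, here is a minimal sketch of a single step of standard (single-pair) GAN training in PyTorch. The network sizes, losses, and hyperparameters are placeholders chosen for illustration; they are not taken from Lipizzaner.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch_size = 64, 784, 32

# Illustrative networks; real architectures (MLP, CNN, ...) would replace these.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_batch):
    """One adversarial step: update the discriminator, then the generator."""
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator update: tell real samples apart from generated ones.
    z = torch.randn(batch_size, latent_dim)
    fake_batch = generator(z).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to fool the discriminator. If the discriminator becomes
    # too strong, this gradient vanishes; if the generator exploits a single output
    # that fools it, training mode-collapses.
    z = torch.randn(batch_size, latent_dim)
    g_loss = bce(discriminator(generator(z)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```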

At Anyscale Learning For All (ALFA), we have designed and developed the Lipizzaner framework, which applies spatially distributed co-evolution to provide resilient and robust GAN training.
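As a rough illustration of how spatially distributed co-evolution can be organized, the following Python sketch places small generator and discriminator sub-populations on a toroidal grid and lets each cell interact only with its von Neumann neighborhood. The names here (`Cell`, `neighborhood`, `evolve_step`) are illustrative assumptions and do not reflect Lipizzaner's actual API.

```python
GRID_SIZE = 3  # e.g. a 3x3 torus; each cell can run on its own GPU/process

class Cell:
    """One grid cell holding a small sub-population of generators and discriminators."""
    def __init__(self):
        # Placeholder individuals; in practice these would be networks plus hyperparameters.
        self.generators = [object()]
        self.discriminators = [object()]

def neighborhood(grid, row, col):
    """The cell itself plus its north/south/east/west neighbors (the grid wraps around)."""
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    return [grid[(row + dr) % GRID_SIZE][(col + dc) % GRID_SIZE] for dr, dc in offsets]

def evolve_step(grid):
    # Cells can update asynchronously; updating in place here loosely mimics that,
    # since later cells may already see their neighbors' new survivors.
    for row in range(GRID_SIZE):
        for col in range(GRID_SIZE):
            cell = grid[row][col]
            # 1. Gather adversaries from the local neighborhood.
            local_gens, local_discs = [], []
            for nb in neighborhood(grid, row, col):
                local_gens += nb.generators
                local_discs += nb.discriminators
            # 2. Train each generator against the gathered discriminators (and vice
            #    versa) with gradient steps, mutate hyperparameters, and score fitness.
            #    (Omitted: this is where a GAN training loop would plug in.)
            # 3. Selection: keep only the fittest individuals in this cell.
            cell.generators = local_gens[:1]
            cell.discriminators = local_discs[:1]

grid = [[Cell() for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]
evolve_step(grid)
```

Because every cell only communicates with its immediate neighbors, the grid can grow without introducing a global synchronization point, which is where the scalability and asynchronous parallelism listed below come from.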

The main advantages of using Lipizzaner to train GANs are:

  • Fast convergence due to gradient-based steps
  • Improved convergence due to hyperparameter evolution
  • Diverse sample generation due to mixture evolution (see the sketch after this list)
  • Scalability due to spatial distribution topology and asynchronous parallelism
  • Robustness and resilience
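The mixture evolution mentioned above can be pictured as follows: a neighborhood's generators are combined into a weighted mixture, samples are drawn from generator i with probability w_i, and the weights are evolved against a sample-quality metric. This is a minimal, hypothetical sketch using a toy (1+1)-ES; `quality_score` stands in for whatever metric is used (e.g., FID or total variation distance), and none of these names come from the Lipizzaner code base.

```python
import random

def sample_from_mixture(generators, weights, n_samples):
    """Draw each sample from generator i with probability weights[i]."""
    return [random.choices(generators, weights=weights)[0]() for _ in range(n_samples)]

def evolve_mixture_weights(generators, quality_score, iterations=100, sigma=0.05):
    """Toy (1+1)-ES over the mixture weights of a neighborhood's generators."""
    k = len(generators)
    weights = [1.0 / k] * k
    best = quality_score(sample_from_mixture(generators, weights, 256))
    for _ in range(iterations):
        # Mutate the weights with Gaussian noise, clip, and re-normalize.
        candidate = [max(1e-6, w + random.gauss(0.0, sigma)) for w in weights]
        total = sum(candidate)
        candidate = [w / total for w in candidate]
        score = quality_score(sample_from_mixture(generators, candidate, 256))
        if score >= best:  # assuming higher scores mean better samples
            weights, best = candidate, score
    return weights
```

For instance, if each generator covers only part of the data distribution, a coverage-based `quality_score` pushes the weights toward a mixture that spans more modes than any single generator, which is what yields the more diverse sample generation.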

This website will provide plenty of information to help users take advantage of this framework for their own problems.

If you have any comments, do not hesitate to contact us.

Coming soon

We are launching a new release, Lipizzaner 2.0, so you will ride a much more powerful horse. This release incorporates a set of features that allow you to easily extend your own distributed co-evolutionary GAN training. Among others, it includes:

  • Semi-Supervised learning
  • New state-of-the-art data sets, including COVID-19 X-ray images
  • New architectures, e.g., MLP, CNN, and RNN with multiple sizes
  • Data dieting
  • Non-selection in the cells
  • Extra ensemble optimization
  • and much more

Funding

This research was partially funded by the Systems that Learn Initiative at MIT CSAIL. Jamal Toutouh was partially funded by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 799078.