Training energy-based models from a single image for internal learning and inference using trained models
Abstract:
Different from prior works that implicitly model the internal distribution of patches within an image with a top-down latent variable model (e.g., a generator), embodiments explicitly represent the statistical distribution within a single image using an energy-based generative framework, where a pyramid of energy functions, each parameterized by a bottom-up deep neural network, captures the distributions of patches at different resolutions. Embodiments of a coarse-to-fine sequential training and sampling strategy are also presented to train the model efficiently. Besides learning to generate random samples from white noise, embodiments can learn in parallel with a self-supervised task (e.g., recovering an input image from its corrupted version), which can further improve the descriptive power of the learned model. Embodiments do not require an auxiliary model (e.g., a discriminator) to assist training, and they unify internal-statistics learning and image generation in a single framework.
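The coarse-to-fine sequential sampling described above can be illustrated with a minimal sketch. This is a hypothetical toy example, not the patented implementation: the per-scale energy functions are stand-in quadratics (the patent uses bottom-up deep neural networks trained on patches of the single image), and the function names, step sizes, and scale sizes are assumptions chosen for clarity. Sampling at each scale uses Langevin dynamics, starting from white noise at the coarsest scale and seeding each finer scale with an upsampled copy of the previous scale's sample.

```python
import numpy as np

def langevin_sample(energy_grad, x, n_steps=60, step_size=0.05):
    """Langevin dynamics: x <- x - (s/2) * dE/dx + sqrt(s) * noise."""
    for _ in range(n_steps):
        noise = np.random.randn(*x.shape)
        x = x - 0.5 * step_size * energy_grad(x) + np.sqrt(step_size) * noise
    return x

def upsample(x, factor=2):
    """Nearest-neighbor upsampling to initialize the next finer scale."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

np.random.seed(0)

# Toy per-scale "energy functions": quadratic pulls toward fixed targets,
# standing in for the pyramid of learned bottom-up networks.
# E_k(x) = 0.5 * ||x - t_k||^2, so dE_k/dx = x - t_k.
targets = [np.full((s, s), 0.5) for s in (4, 8, 16)]  # coarse -> fine
grads = [lambda x, t=t: x - t for t in targets]

# Coarse-to-fine sequential sampling: white noise at the coarsest scale,
# then each finer scale is seeded by upsampling the previous sample.
x = np.random.randn(4, 4)
for i, g in enumerate(grads):
    x = langevin_sample(g, x)
    if i < len(grads) - 1:
        x = upsample(x)
```

After the loop, `x` is a 16x16 sample whose pixel values fluctuate around the finest-scale target, mirroring how the patented pipeline refines a coarse random sample into a full-resolution image.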