Volodymyr Kuleshov, Stefano Ermon
Most methods in machine learning are described as either discriminative or generative. The former often attain higher predictive accuracy, while the latter are more strongly regularized and can deal with missing data.
Here, we propose a new framework that combines a broad class of discriminative and generative models, interpolating between the two extremes via a multi-conditional likelihood objective.
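To make the interpolation concrete, the following is a minimal sketch of such an objective in Python; the weight `alpha` and the two log-likelihood terms are illustrative notation under our assumptions, not necessarily the paper's exact formulation.

```python
import torch

def multiconditional_loss(log_p_y_given_x: torch.Tensor,
                          log_p_x: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Negative multi-conditional log-likelihood, averaged over a batch.

    alpha = 1 recovers a purely discriminative objective (log p(y|x));
    alpha = 0 a purely generative one (log p(x)); intermediate values
    interpolate between the two extremes.
    """
    return -(alpha * log_p_y_given_x + (1.0 - alpha) * log_p_x).mean()
```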
Unlike previous approaches, we couple the two components through shared latent variables and train the model using recent advances in variational inference.
Instantiating our framework with modern deep architectures yields deep hybrid models, a highly flexible family that generalizes several existing approaches. These models are effective in the semi-supervised setting, where they improve on the state of the art on the SVHN dataset.
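As one illustration of coupling through shared latent variables, here is a minimal PyTorch sketch of a hybrid model: a VAE-style generative component and a classifier that reads the same latent code, trained jointly with a variational bound combined multi-conditionally. The architecture, layer sizes, Bernoulli decoder, and weight `alpha` are hypothetical choices for exposition, not the paper's model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridVAE(nn.Module):
    """Sketch: a discriminative and a generative component coupled by a shared latent z.

    Hypothetical architecture: an encoder q(z|x) and Bernoulli decoder p(x|z),
    plus a classifier p(y|z) that reads the same latent code.
    """
    def __init__(self, x_dim: int, z_dim: int, n_classes: int):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # outputs mean and log-variance of q(z|x)
        self.dec = nn.Linear(z_dim, x_dim)       # decoder logits (assumes binarized x)
        self.clf = nn.Linear(z_dim, n_classes)   # classifier over the shared latent z

    def forward(self, x: torch.Tensor, y: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        # Generative side: evidence lower bound (ELBO) on log p(x).
        rec = -F.binary_cross_entropy_with_logits(
            self.dec(z), x, reduction="none").sum(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(-1)
        elbo = rec - kl
        # Discriminative side: log p(y|z) through the shared latent code.
        log_p_y = -F.cross_entropy(self.clf(z), y, reduction="none")
        # Multi-conditional combination of the two terms.
        return -(alpha * log_p_y + (1.0 - alpha) * elbo).mean()

# Toy usage on random binarized data.
model = HybridVAE(x_dim=784, z_dim=32, n_classes=10)
x = torch.rand(8, 784).round()
y = torch.randint(0, 10, (8,))
loss = model(x, y, alpha=0.5)
loss.backward()
```

In this sketch, setting `alpha` near 1 emphasizes classification while `alpha` near 0 emphasizes the generative bound; because both terms backpropagate through the same latent code, the unlabeled generative term can regularize the classifier, which is the intuition behind the semi-supervised gains the abstract describes.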