Deep Learning 11: Energy-Based Learning (1)–What is EBL?

Before going further, it is worth stepping back to think about the loss functions we have applied so far, for example the cross entropy. There are many others, each targeting different practical problems, and you need to decide which one suits your task. Energy-Based Models (EBMs) offer a unifying view of this choice. These are learning notes from A Tutorial on Energy-Based Learning.

Introduction: EBMs

The purpose of learning is to find an energy function that assigns low energies to correct values and higher energies to incorrect ones; inference then minimizes this energy over the possible outputs. This gives us a common inference/learning framework for many types of statistical models, probabilistic and, notably, non-probabilistic ones.

Energy-Based Inference
Consider an image classification problem with 6 classes: Human, Animal, Airplane, Car, Truck, and “None of the above”. The inputs X are images, represented as vectors (say RGB channels, with values in [0, 255]). As the output Y, we expect the model to provide a score for each class, like the results after a softmax in neural networks. The energy function measures the quality of the model: if we provide an animal image, we expect the energy of “animal” to be the lowest while the others are higher. In other words, a small energy value indicates high compatibility between the input X and the output Y.


We use E(X,Y) as the energy function. {Y}^{*} is the final result produced by the model, chosen from a set \gamma. Given all the possible results, we need to find the one with the smallest energy:

{Y}^{*}=\text{argmin}_{Y\in\gamma}E(X,Y)
When \gamma is small, it is easy to find the result by simply going through all candidates and picking the one with the minimum energy. When \gamma has a huge number of elements, inference becomes expensive even though the result set is discrete and finite, for example when there are very many classes (as in face recognition). NLP tasks are another such case.
In those cases we rely on an inference procedure: a strategy that produces an approximate result, which may or may not be the global minimum of E(X,Y). In practice the energy function is often non-convex, so we settle for local minima, which are easier to find. In some cases the energy function even has several minima with equal values. We will see different types of these situations.
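When \gamma is small enough to enumerate, the argmin inference above is just an exhaustive search. Here is a minimal sketch; the prototype-distance energy and the class names are invented for illustration:

```python
import numpy as np

def infer(energy_fn, x, candidates):
    """Energy-based inference: return the candidate Y with the lowest energy E(x, Y)."""
    energies = [energy_fn(x, y) for y in candidates]
    return candidates[int(np.argmin(energies))]

# Toy energy: squared distance between the input and a per-class prototype.
prototypes = {"human": np.array([1.0, 0.0]),
              "animal": np.array([0.0, 1.0]),
              "car": np.array([1.0, 1.0])}

def energy(x, y):
    return float(np.sum((x - prototypes[y]) ** 2))

x = np.array([0.1, 0.9])                    # closest to the "animal" prototype
y_star = infer(energy, x, list(prototypes))
print(y_star)                               # → animal
```

For a huge or continuous \gamma this exhaustive loop would be replaced by an approximate search, e.g. gradient descent on E when Y is continuous.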

Model Types
From the application point of view, energy-based models are used in four ways:
1. Prediction, classification and decision-making: find the best Y given X! The model tells you the answer, i.e., the class the image belongs to, or the decision to be made (like “steer left” in self-driving cars).
2. Ranking: which candidates are more compatible with the given X? Similar to the first one, but the model provides multiple results that satisfy a given input, e.g., recommending the top-k items rather than selecting a single one.
3. Detection: is the current Y compatible with X? Think of a face detection task. We need thresholds as criteria, and they are unknown in general.
4. Conditional density estimation: estimate P(Y|X), usually fed as input to another system.

Combining Results: the Gibbs Distribution
Sometimes we need to combine results from different models. Energy values are measured in arbitrary units and are uncalibrated, so we must reconcile the different scales. The most common method is to use a Gibbs distribution:

P(Y|X)=\frac { { e }^{ -\beta E(Y,X) } }{ \int _{ y\in \gamma }{ { e }^{ -\beta E(y,X) } }\, dy }
The denominator is the partition function: it sums over all possible values, with the goal of normalization. It maps all energies to values between 0 and 1 that sum to 1, the defining features of a probability distribution. Here \beta is a positive constant (an inverse temperature).
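For a discrete \gamma the integral becomes a sum, and the Gibbs distribution is simply a softmax over negated, scaled energies. A minimal sketch:

```python
import numpy as np

def gibbs(energies, beta=1.0):
    """Turn raw energies E(y, X) into a normalized distribution P(y | X)."""
    e = np.asarray(energies, dtype=float)
    w = np.exp(-beta * (e - e.min()))   # subtract the min for numerical stability
    return w / w.sum()                  # the denominator is the partition function

probs = gibbs([0.4, 1.1, 1.5, 2.9, 3.2], beta=1.0)
print(probs.sum())                      # sums to 1 (up to floating point)
```

Subtracting the minimum energy before exponentiating does not change the result (the shift cancels in the ratio) but avoids overflow for large energies.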

Energy-Based Training
When we say we “train” a model, we mean we first design an architecture and then learn its parameters W. This gives a family of parameterized energy functions: E =\{ E(W,Y,X):W\in \mathcal{W} \}.
A set of training samples is given as S=\{ ({ X }^{ i },{ Y }^{ i }): i = 1 \dots P\}.
The main job is to find the best energy function, so we need a way to measure its quality: a loss functional. With a loss function L(E,S), we look for the W that produces the lowest loss given the energy family E and the training set S. Following the tutorial, it takes the form:

L(E,S)=\frac { 1 }{ P } \sum _{ i=1 }^{ P }{ L({ Y }^{ i },E(W,\gamma ,{ X }^{ i })) } +R(W)

where P is the total number of training samples. The first term on the right is an average over per-sample losses. The second term R(W) is a regularizer, which encodes our prior knowledge about which energy functions in the family are preferable.
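A minimal numeric sketch of this objective, using the simple “energy loss” (the energy of the correct answer) as the per-sample loss and an L2 regularizer; both choices, the linear energy, and all values are just for illustration:

```python
import numpy as np

def energy(W, x, y):
    # Toy linear energy: E(W, X, Y) = -W[y] . x  (lower = more compatible)
    return -float(np.dot(W[y], x))

def training_loss(W, samples, lam=0.01):
    """L(W, S) = (1/P) * sum_i E(W, X_i, Y_i) + lam * R(W), with R(W) = ||W||^2."""
    P = len(samples)
    avg = sum(energy(W, x, y) for x, y in samples) / P
    return avg + lam * float(np.sum(W ** 2))

W = np.array([[1.0, 0.0], [0.0, 1.0]])      # one weight row per class
samples = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 1)]
print(training_loss(W, samples))            # → -0.98
```

Training would then search over W to minimize this quantity; the bare energy loss is known to be a poor choice on its own (it can collapse), which is why the tutorial goes on to discuss better per-sample losses.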
Before introducing some well-known loss functions, we first fix notation for some Ys:
Suppose we had trained an image classifier for 4 classes [flower, dog, people, fish], and a new input image of a dog yields the probabilities [flower=0.01, dog=0.70, people=0.20, fish=0.09]. Then the correct answer Y is dog. {Y}^{*}, the model's answer, is also dog (the highest probability, i.e., the lowest energy). \bar { Y }, the most offending incorrect answer, is people (the lowest energy among the incorrect answers).
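Taking energies as negative log-probabilities (an assumption made just for this example), the three Ys from the example above can be computed as:

```python
import math

# Probabilities from the worked example; energy = -log(probability).
probs = {"flower": 0.01, "dog": 0.70, "people": 0.20, "fish": 0.09}
energies = {y: -math.log(p) for y, p in probs.items()}

correct = "dog"                                   # the true label Y
y_star = min(energies, key=energies.get)          # Y*: lowest energy overall
incorrect = {y: e for y, e in energies.items() if y != correct}
y_bar = min(incorrect, key=incorrect.get)         # Y-bar: lowest-energy wrong answer
print(y_star, y_bar)                              # → dog people
```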

During training, we want to “push down” on the energies of correct answers and “pull up” on the energies of incorrect ones. Here is a short summary:

Published by Irene

