This post contains my learning notes from Prof. Hung-Yi Lee's lecture; the PDF can be found here (pages 40-52). I have read a few articles on this topic, and I found this one a must-read. It is simple, and you can easily understand what is going on. I would say it is a good starting point for further reading.
If you use the ROUGE evaluation metric for text summarization or machine translation systems, you must have noticed that there are many versions of it. So how do you get it working with your own systems in Python? Which packages are helpful? In this post, I will give some ideas from an engineering point of view (which means I am not going to introduce what ROUGE is). I also ran into a few issues and finally got them solved. My methods might not be the best, but they worked.
Download the ROUGE script
Many papers refer to this paper when they report results: ROUGE: A Package for Automatic Evaluation of Summaries by Chin-Yew Lin. Although other versions are acceptable, ROUGE 1.5.5 is the one people commonly use nowadays. You need to find the original Perl script: ROUGE-1.5.5.pl. Make sure the copy you get is a good one; normally people do not change it if they use Python, and that is how it became a standard. Check how many lines it has (it should be ~3298 lines) before use.
You may need to download the whole ROUGE-1.5.5 folder from the link. Run the included test before the next steps.
If everything goes well, you will see outputs like:
./ROUGE-1.5.5.pl -e ../data -c 95 -2 -1 -U -r 1000 -n 4 -w 1.2 -a ROUGE-test.xml > ../sample-output/ROUGE-test-c95-2-1-U-r1000-n4-w1.2-a ....
It might take a few seconds to run the test cases.
I ran into some issues here saying that I needed to install some extra Perl modules (the script depends on XML::DOM, for example). Follow the error messages if you hit similar problems.
Install Python wrapper
It is natural to choose a Python wrapper, which will calculate the ROUGE scores from the Perl script for you. I recommend pyrouge, and I have seen it used in the code of several papers for reporting ROUGE scores. You can easily find installation and usage instructions in the official documentation.
Remember to set your ROUGE path (the absolute path to the ROUGE-1.5.5 directory, which contains the Perl script; pyrouge ships a pyrouge_set_rouge_path command for this), and run a test.
Run with Python code
In my case, I have my system outputs organized as follows:
I keep the original (gold) summaries in a reference folder. Each txt file contains a single line for one article, and the file name carries an ID. The decoded folder, where I keep the system outputs, has the same format: the summaries created by the machine are stored in txt files whose names carry the same ID, pairing them with their true results in the reference folder.
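Concretely, the two folders looked roughly like this (the IDs below are only illustrative; what matters is that they match across the two folders and fit the file-name patterns used in the code below):

```
decoded/
    000000_decoded.txt
    000001_decoded.txt
    ...
reference/
    000000_reference.txt
    000001_reference.txt
    ...
```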
The ID is important when using pyrouge to get ROUGE scores. Based on the code from the official documentation, I changed the file-name patterns to match my case:
from pyrouge import Rouge155

r = Rouge155()

# set directories
r.system_dir = 'decoded/'
r.model_dir = 'reference/'

# define the patterns
r.system_filename_pattern = '(\d+)_decoded.txt'
r.model_filename_pattern = '#ID#_reference.txt'

# use default parameters to run the evaluation
output = r.convert_and_evaluate()
print(output)
output_dict = r.output_to_dict(output)
You will see a lot of log info come out:
2017-12-18 11:21:36,865 [MainThread ] [INFO ] Writing summaries.
2017-12-18 11:21:36,868 [MainThread ] [INFO ] Processing summaries. Saving ...
Then, after some work (the txt files are converted into the format ROUGE expects), you can see the default parameters and finally a table of results.
2017-12-18 11:21:36,871 [MainThread ] [INFO ] Running ROUGE with command /Users/ireneli/Tools/pyrouge/tools/ROUGE-1.5.5/ROUGE-1.5.5.pl -e /Users/ireneli/Tools/pyrouge/tools/ROUGE-1.5.5/data -c 95 -2 -1 -U -r 1000 -n 4 -w 1.2 -a -m /var/folders/3h/x33pjd7d3k136fh564j3stqr0000gn/T/tmpkr3m965d/rouge_conf.xml
---------------------------------------------
1 ROUGE-1 Average_R: 0.78378 (95%-conf.int. 0.78378 - 0.78378)
1 ROUGE-1 Average_P: 0.80556 (95%-conf.int. 0.80556 - 0.80556)
1 ROUGE-1 Average_F: 0.79452 (95%-conf.int. 0.79452 - 0.79452)
---------------------------------------------
1 ROUGE-2 Average_R: 0.69444 (95%-conf.int. 0.69444 - 0.69444)
1 ROUGE-2 Average_P: 0.71429 (95%-conf.int. 0.71429 - 0.71429)
1 ROUGE-2 Average_F: 0.70423 (95%-conf.int. 0.70423 - 0.70423)
---------------------------------------------
1 ROUGE-3 Average_R: 0.62857 (95%-conf.int. 0.62857 - 0.62857)
1 ROUGE-3 Average_P: 0.64706 (95%-conf.int. 0.64706 - 0.64706)
1 ROUGE-3 Average_F: 0.63768 (95%-conf.int. 0.63768 - 0.63768)
---------------------------------------------
1 ROUGE-4 Average_R: 0.55882 (95%-conf.int. 0.55882 - 0.55882)
1 ROUGE-4 Average_P: 0.57576 (95%-conf.int. 0.57576 - 0.57576)
1 ROUGE-4 Average_F: 0.56716 (95%-conf.int. 0.56716 - 0.56716)
---------------------------------------------
1 ROUGE-L Average_R: 0.78378 (95%-conf.int. 0.78378 - 0.78378)
1 ROUGE-L Average_P: 0.80556 (95%-conf.int. 0.80556 - 0.80556)
1 ROUGE-L Average_F: 0.79452 (95%-conf.int. 0.79452 - 0.79452)
---------------------------------------------
1 ROUGE-W-1.2 Average_R: 0.32228 (95%-conf.int. 0.32228 - 0.32228)
1 ROUGE-W-1.2 Average_P: 0.68198 (95%-conf.int. 0.68198 - 0.68198)
1 ROUGE-W-1.2 Average_F: 0.43771 (95%-conf.int. 0.43771 - 0.43771)
---------------------------------------------
1 ROUGE-S* Average_R: 0.60961 (95%-conf.int. 0.60961 - 0.60961)
1 ROUGE-S* Average_P: 0.64444 (95%-conf.int. 0.64444 - 0.64444)
1 ROUGE-S* Average_F: 0.62654 (95%-conf.int. 0.62654 - 0.62654)
---------------------------------------------
1 ROUGE-SU* Average_R: 0.61966 (95%-conf.int. 0.61966 - 0.61966)
1 ROUGE-SU* Average_P: 0.65414 (95%-conf.int. 0.65414 - 0.65414)
1 ROUGE-SU* Average_F: 0.63643 (95%-conf.int. 0.63643 - 0.63643)
Normally we report ROUGE-2 Average_F and ROUGE-L Average_F scores.
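With output_to_dict from the snippet above you can pull those two numbers out programmatically. A minimal sketch; the key names below are the ones I saw on my run, so print output_dict.keys() if yours differ:

```python
# output_dict comes from r.output_to_dict(output) in the earlier snippet.
rouge_2_f = output_dict['rouge_2_f_score']
rouge_l_f = output_dict['rouge_l_f_score']
print('ROUGE-2 F: {:.5f}  ROUGE-L F: {:.5f}'.format(rouge_2_f, rouge_l_f))
```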
Besides, you might want to remove the temp files to free some space on your machine. In this case, I would need to delete the tmpkr3m965d folder (its path can be found in the log info above).
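For example, a small sketch that removes the folder printed in my log above (the path is specific to my run; use the one from your own log):

```python
import shutil

# This path is copied from the "Running ROUGE with command ..." log line above.
shutil.rmtree('/var/folders/3h/x33pjd7d3k136fh564j3stqr0000gn/T/tmpkr3m965d',
              ignore_errors=True)
```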
Troubleshooting: illegal division by zero
I was annoyed by this error:
Now starting ROUGE eval...
Illegal division by zero at /home/lily/zl379/RELEASE-1.5.5/ROUGE-1.5.5.pl line 2455.
subprocess.CalledProcessError: Command '['/home/lily/zl379/RELEASE-1.5.5/ROUGE-1.5.5.pl', '-e', '/home/lily/zl379/RELEASE-1.5.5/data', '-c', '95', '-2', '-1', '-U', '-r', '1000', '-n', '4', '-w', '1.2', '-a', '-m', '/tmp/tmpuu0bqmes/rouge_conf.xml']' returned non-zero exit status 255
So I checked line 2455 in ROUGE-1.5.5.pl. The error says "Illegal division by zero", which means the $$base there can be zero. We may infer that some of the txt files are empty, or that they contain something like <b> or other strange characters. By filtering them out, I got the problem solved easily.
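A minimal sketch of the kind of filtering I mean, assuming the decoded/ and reference/ layout from above: strip tag-like tokens such as <b> and drop files that end up empty (remember to also drop the paired file in the other folder, so pyrouge still finds matching IDs):

```python
import os
import re

def clean_folder(folder):
    """Strip tag-like tokens (e.g. <b>) and delete files that end up empty."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        with open(path) as f:
            text = re.sub(r'<[^>]+>', ' ', f.read()).strip()
        if text:
            with open(path, 'w') as f:
                f.write(text)
        else:
            os.remove(path)  # an empty summary is what triggers the division by zero

clean_folder('decoded/')
clean_folder('reference/')
```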
Hi! In the following posts, I will introduce Q-Learning, a good first piece to learn if you want to pick up reinforcement learning. But before that, let us shed light on some fundamental concepts in reinforcement learning (RL).
Q-Learning works in this way: take an action, then get a reward and an observation from the environment, as shown below. The image is taken from here:
Berkeley’s CS 294: Deep Reinforcement Learning by John Schulman & Pieter Abbeel
Imagine a baby boy in a kindergarten and how he behaves on his first day. He does not know the kindergarten and knows nothing about how to behave, so he begins with random actions. Say he hits the other kids; when he does this, he has no idea whether it is right or not. Then the teacher gets mad and gives him a punishment (a negative reward), so he learns that hitting others is not a good action. The next time, the boy washes his lunch box, and the teacher rewards him with candy, so he learns that this action is a good one. In our kindergarten example, the Agent is simply the boy, who has no knowledge in the very beginning; the Action is how he behaves; the Environment contains all the objects he can act on; the Reward is something he gets from the environment (the punishment or the candy); and the Observation is what he can observe, the feedback from the environment.
Exploitation vs. Exploration
To understand how Q-learning works, it is important to know exploration and exploitation.
Let's say our baby boy from the kindergarten goes home one day, and his mom has prepared five boxes (call them A-E) with different numbers of candies inside, and he doesn't know which one has more candies. If his goal is to get as much candy as possible, what would he do?
Method 1: Obviously he could choose an arbitrary box each time. However, this does not guarantee that he gets as much candy as possible.
Method 2: Another method would be to choose a "promising" box: each time, he chooses the box with the maximum expected number of candies. To get a distribution of the candies, he could first open the boxes, say, 1000 times uniformly at random and keep track of the number of candies from each one.
Method 3: He might have some prior knowledge about these boxes; for example, his mom told him that box A has 10 candies in expectation, box B has 20 in expectation, and the others are unknown. Based on his goal, box B seems a good choice, but box C might have even more candies! So we could either choose box B, or randomly choose a box from C-E.
We call these methods policies in Q-learning. In brief, under a policy we choose our action (choose a box) based on our current state. So we define $\pi$ as a policy, which maps states to actions.
Exploitation is choosing an action based on information that we already have. Method 2 is an exploitation-only policy: we estimate the expectations of all actions and then always choose the best one.
Exploration is trying new actions that we have no information about. Method 1 is an exploration-only policy. Method 3 is a balanced version of the two, and it gives us the idea of the $\epsilon$-greedy policy.
$\epsilon$ ranges from 0 to 1 and is the probability of exploration, which is set so that we keep searching for new things; typically, we just pick a random action and return it. In practice, we initialize $\epsilon$ with a value between 0 and 1 and then usually let it shrink over episodes. An episode is a whole game process from the start to the terminal state; say in Flappy Bird, you start the game and play until the death state. Intuitively, when an agent starts to play a new game it has no "experience" of the game, so it is natural to act randomly; after some episodes it starts to learn the skills and tricks, and then it tends to use its own experience instead of choosing an action randomly, because the more episodes it plays, the more confident it is about that experience (the more accurate the reward approximation is). There are various settings for $\epsilon$, say $\epsilon = 1/t$, where $t$ refers to the episode.
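Here is a minimal sketch of an $\epsilon$-greedy choice over the five boxes; the estimated candy counts and the $\epsilon = 1/t$ schedule are illustrative choices of mine, not values from the lecture:

```python
import random

# Current estimates of the expected candies per box (placeholder values).
estimates = {"A": 10.0, "B": 20.0, "C": 0.0, "D": 0.0, "E": 0.0}

def epsilon_greedy(estimates, epsilon):
    """With probability epsilon explore a random box, otherwise exploit the best one."""
    if random.random() < epsilon:
        return random.choice(list(estimates))   # exploration
    return max(estimates, key=estimates.get)    # exploitation

epsilon = 1.0
for episode in range(1, 101):
    box = epsilon_greedy(estimates, epsilon)
    # ... open the box, observe the reward, update estimates[box] ...
    epsilon = 1.0 / episode  # one common way to let epsilon shrink over episodes
```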
Slide from Percy Liang
Q-learning has a table called Q-table, which is a table of states, actions and approximated rewards. Let’s get back to the kindergarten example.
Kindergarten states and actions
We simplify the problem: the states for the boy are washing his lunch box (wash) and hitting others (hit), and there are four actions, marked as action A to D. Our Q-table is shown below; each row is a state, with the corresponding reward values for the different actions. Some state-action pairs are illegal and have no values. The values indicate the number of candies given as rewards.
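One way to hold such a table in code is a nested dict; the numbers below are placeholders for illustration only, and None marks an illegal state-action pair:

```python
# Rows are states, columns are actions; values are approximated rewards (candies).
q_table = {
    "wash": {"A": 3, "B": 1, "C": None, "D": 0},
    "hit":  {"A": None, "B": -2, "C": 0, "D": -1},
}

def best_action(state):
    """Exploitation: pick the legal action with the highest approximated reward."""
    legal = {a: v for a, v in q_table[state].items() if v is not None}
    return max(legal, key=legal.get)

print(best_action("wash"))  # -> "A" with the placeholder values above
```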
Among deep (multi-level) learning methods there are unsupervised models, for example RBMs and autoencoders. In brief, an autoencoder tries to find a way to reconstruct the original inputs, that is, another way for the data to represent itself. In addition, it is useful for dimensionality reduction: say there is a 32 * 32 image; it is possible to represent it with a smaller number of parameters. This is what "encoding" an image means. The goal is to learn the new representation, so it is also applied as pre-training; a traditional machine learning model can then be applied depending on the task, which is a typical "two-stage" way of solving problems. Hinton's well-known work on this problem shows the ability of neural networks to "compress", addressing the bottleneck of massive information.
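To make the 32 * 32 example concrete, here is a minimal autoencoder sketch; the framework (Keras) and the layer sizes (1024 -> 64 -> 1024) are my own illustrative choices:

```python
from tensorflow.keras import layers, models

# Encode a flattened 32*32 image (1024 values) into a 64-dimensional code,
# then decode it back and train the network to reproduce its own input.
inputs = layers.Input(shape=(1024,))
code = layers.Dense(64, activation='relu')(inputs)        # the compressed representation
outputs = layers.Dense(1024, activation='sigmoid')(code)  # the reconstruction

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer='adam', loss='mse')
# autoencoder.fit(x, x, epochs=10)  # note: the target is the input itself
```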
A few friends and I have been working on something together since last October. All of us were looking for jobs in machine learning or deep learning, and we agreed that we needed to review some interesting algorithms together. Over this new year we finished a draft on machine learning algorithms (part 1):
Click here for a full version: mlrecap.
We are also working on part 2, which covers some more advanced algorithms; you can see them in our outline. We expect to finish it around this June.
These slides are meant for reviewing things you already know. Some details are not included, so we do not suggest learning new concepts from our slides alone. If you find mistakes, please leave comments. If you are interested in particular algorithms, leave comments too, and we will consider updating our part 2 outline.
As a cardinal part of deep learning and machine learning, optimization has long been a mathematical problem for researchers. Why do we need optimization? Remember that you have a loss function for linear regression, and you then need to find the optimum of that function, for example to minimize the squared error. You might also be very familiar with gradient descent, especially if you are starting out with neural networks. In this blog, we will cover some methods in optimization and the conditions under which we should apply them.
We can split optimization problems into two groups: constrained and unconstrained. Unconstrained means that, given a function, we try to minimize or maximize it without any other conditions. Constrained means we must satisfy some conditions while optimizing the function.
Unconstrained Optimization

Definition: $x^* = \arg\min_{x} f(x)$, where $x^*$ is the optimum.
There are stochastic and iterative methods for this kind of unconstrained problem, for example Gradient Descent, the Newton Method, and Quasi-Newton Methods (optimized versions of the Newton Method). Compared with GD, the other two converge faster. In the graph, you can see this: the red arrow denotes GD and the green one the Newton Method.
The original "Newton Method" (also known as the Newton-Raphson method) tries to find the roots of a function, i.e. to solve f(x) = 0. It still needs some iterations. If you look at the fantastic gif below, which I borrowed from the wiki, you will get a quick picture. The job is to find the x where f(x) = 0. First, we select a point $x_1$ randomly and get $f(x_1)$; then we take the tangent at the point $(x_1, f(x_1))$. The next point to choose is exactly the x where the tangent intersects the x-axis, so $x_2 = x_1 - \frac{f(x_1)}{f'(x_1)}$. Then we repeat this many times until a sufficiently accurate value is reached.
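A quick numeric sketch of this iteration; the function f(x) = x^2 - 2 is just an arbitrary example whose positive root is sqrt(2):

```python
def newton(f, f_prime, x, steps=10):
    """Newton-Raphson: repeatedly jump to where the tangent crosses the x-axis."""
    for _ in range(steps):
        x = x - f(x) / f_prime(x)
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x=1.0)
print(root)  # ~1.4142135..., i.e. sqrt(2)
```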
Now you understand that the Newton Method is perfect for solving problems of the form f(x) = 0. Then how does it help with optimization? You need some math knowledge here. If f is a twice-differentiable function and you want to find its max or min, the problem becomes finding the roots of the derivative f' (solutions to f'(x) = 0), also known as the stationary points of f; the iteration then becomes $x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}$.
Constrained Optimization: Equality Constraints
Definition: $\min_{x, y} f(x, y)$, subject to $g(x, y) = 0$.
In an equality-constrained problem there can be more than one equality constraint; here we talk about a single constraint for convenience. The method is called the Lagrange Multiplier method. When you have constraints, a natural idea is to try to eliminate them, and it goes this way: we bring in a new variable $\lambda$ and create a Lagrange function: $L(x, y, \lambda) = f(x, y) + \lambda \, g(x, y)$.
Then what we need to do is solve the following equations: $\frac{\partial L}{\partial x} = 0$, $\frac{\partial L}{\partial y} = 0$, and $\frac{\partial L}{\partial \lambda} = 0$ (the last one simply recovers the constraint $g(x, y) = 0$).
When we successfully get the right $\lambda$ (remember it cannot be zero), we can plug it back into the Lagrange function. When the Lagrange function reaches its optimum, $f(x, y)$ does too, because $g(x, y)$ is always 0 under the constraint.
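A small sketch of this recipe with sympy; the objective $f(x, y) = x^2 + y^2$ and the constraint $g(x, y) = x + y - 1$ are a toy example of my own:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x**2 + y**2
g = x + y - 1          # constraint g(x, y) = 0
L = f + lam * g        # the Lagrange function

# Set all three partial derivatives to zero and solve the system.
eqs = [sp.diff(L, v) for v in (x, y, lam)]
print(sp.solve(eqs, (x, y, lam)))  # solution: x = 1/2, y = 1/2, lambda = -1
```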
Constrained Optimization: Inequality Constraints
Definition: $\min_{x} f(x)$, subject to $h_i(x) = 0$ and $g_j(x) \le 0$. The Lagrange function becomes $L(x, \lambda, \mu) = f(x) + \sum_i \lambda_i h_i(x) + \sum_j \mu_j g_j(x)$. So all the $\lambda_i$ and $\mu_j$ are Lagrange multipliers, and the $\mu_j \ge 0$.
Proposed in 2014, the interesting Generative Adversarial Network (GAN) now has many variants. You might not be surprised that the relevant papers read more like statistics research: when a model is proposed, the evaluations are based on some fundamental probability distributions, and generalized applications start from there.