Category Archives: Python

Paddle Serving: model-as-a-service! Triggered by a single command line, deployment finishes in 10 minutes

To bridge the gap between Paddle Serving and the PaddlePaddle framework, we have released a new Paddle Serving offering on GitHub: Model As A Service (MAAS). With this service, once a PaddlePaddle model is trained, users can obtain the corresponding inference model at the same time, making it possible to deploy…
ELMo in Practice
ELMo: Deep contextualized word representations. In this blog, I give a demo of how to use pre-trained ELMo embeddings and how to train your own.
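Below is a minimal sketch of loading pre-trained ELMo embeddings with AllenNLP's ElmoEmbedder (an assumption on my part: the post may use a different toolkit; the class ships with allennlp 0.x):

```python
# A minimal sketch: embed one sentence with pre-trained ELMo via AllenNLP.
# Assumes allennlp 0.x, where ElmoEmbedder comes with default weights.
from allennlp.commands.elmo import ElmoEmbedder

elmo = ElmoEmbedder()  # downloads the default pre-trained biLM on first use
tokens = ["I", "ate", "an", "apple", "for", "breakfast"]
vectors = elmo.embed_sentence(tokens)

# Shape (3, num_tokens, 1024): one 1024-d vector per token from each of the
# three biLM layers; downstream tasks usually learn a weighted sum of them.
print(vectors.shape)
```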
TensorFlow 08: save and restore a subset of variables
TensorFlow provides save and restore functions that let us save and re-use model parameters. If you have a trained VGG model, for example, it is helpful to restore its first few layers and then apply them in your own network. This raises a question: how do we restore a subset of…
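The key tool is a Saver restricted to a variable list. Here is a minimal TensorFlow 1.x sketch; the "vgg/..." scope names and checkpoint filenames are hypothetical placeholders:

```python
# A minimal sketch: restore only a subset of variables in TensorFlow 1.x.
import tensorflow as tf

# Pick out only the variables that exist in the pre-trained checkpoint.
restore_vars = [v for v in tf.global_variables()
                if v.name.startswith("vgg/conv1") or v.name.startswith("vgg/conv2")]

restorer = tf.train.Saver(var_list=restore_vars)   # restores just this subset
saver = tf.train.Saver()                           # saves everything

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())    # initialize all variables first
    restorer.restore(sess, "vgg_16.ckpt")          # then overwrite the subset
    # ... train your own layers on top ...
    saver.save(sess, "my_model.ckpt")
```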
Working with ROUGE 1.5.5 Evaluation Metric in Python
If you use the ROUGE evaluation metric for text summarization or machine translation systems, you may have noticed that there are many versions of it. So how do you get it to work with your own system in Python? What packages are helpful? In this post, I will give some ideas from an engineering point of view (which means…
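One helpful package is the pyrouge wrapper, which drives the original ROUGE 1.5.5 Perl script from Python. A minimal sketch, assuming ROUGE 1.5.5 is already installed and configured; the directories and filename patterns below are hypothetical:

```python
# A minimal sketch: score system summaries against references with pyrouge,
# a Python wrapper around the ROUGE 1.5.5 Perl script.
from pyrouge import Rouge155

r = Rouge155()
r.system_dir = "summaries/system"        # one file per system summary
r.model_dir = "summaries/reference"      # one file per reference summary
r.system_filename_pattern = r"doc.(\d+).txt"
r.model_filename_pattern = "doc.#ID#.txt"

output = r.convert_and_evaluate()        # converts to ROUGE's format, then runs it
scores = r.output_to_dict(output)
print(scores["rouge_1_f_score"], scores["rouge_2_f_score"])
```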
TensorFlow 07: Word Embeddings (2) – Loading Pre-trained Vectors
For a brief introduction to Word2vec, please check this post. In this post, we try to load a pre-trained word-vector model: a huge file containing all the word vectors trained on large corpora. Download here. I downloaded the GloVe one; the vocabulary size is 4 million and the dimension is 50. It is a smaller one trained…
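A minimal sketch of parsing the vectors into a Python dict; the filename assumes the 50-dimensional GloVe text file, so adjust it to whichever file you downloaded:

```python
# A minimal sketch: parse a GloVe text file into {word: vector}.
# Each line of the file is "word v1 v2 ... v50"; the filename is an assumption.
import numpy as np

embeddings = {}
with open("glove.6B.50d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)

print(len(embeddings))            # vocabulary size
print(embeddings["king"].shape)   # (50,)
```

From the dict you can then build an embedding matrix in your own vocabulary's order and feed it to an embedding lookup in TensorFlow.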
Understanding SVM (2)
A brief introduction is here. (I wrote a blog about it last year, but I don't think it was detailed enough.) This blog contains my learning notes from this video (English slides, but a Chinese-speaking presenter). First comes a quick introduction to SVM, then the magic of how to solve the max/min problems; you will also find kernel SVM. A standard statement of the optimization problem is sketched below.
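For reference, the max/min "magic" is Lagrangian duality applied to the hard-margin primal; a standard statement (not specific to the video) is:

```latex
% Hard-margin SVM primal: maximize the margin by minimizing ||w||^2
\min_{w,\,b}\ \tfrac{1}{2}\lVert w \rVert^2
\quad \text{s.t.}\quad y_i\,(w \cdot x_i + b) \ge 1,\quad i = 1,\dots,n

% Lagrangian dual: the alpha_i > 0 pick out the support vectors
\max_{\alpha \ge 0}\ \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j\, y_i y_j\,(x_i \cdot x_j)
\quad \text{s.t.}\quad \sum_{i=1}^{n} \alpha_i y_i = 0

% Kernel SVM: replace the inner product x_i . x_j with a kernel K(x_i, x_j)
```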
NLP 05: From Word2vec to Doc2vec: a simple example with Gensim
Introduction: First introduced by Mikolov [1] in 2013, word2vec learns distributed representations (word embeddings) using a neural network. It is based on the distributional hypothesis that words occurring in similar contexts (neighboring words) tend to have similar meanings. Two models are used: CBOW (continuous bag of words), where we use a…
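A minimal sketch of training Doc2Vec with Gensim, assuming Gensim 4.x; the two-document toy corpus is hypothetical:

```python
# A minimal sketch: train Doc2Vec on a toy corpus with Gensim 4.x.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    TaggedDocument(words=["the", "cat", "sat", "on", "the", "mat"], tags=["doc_0"]),
    TaggedDocument(words=["dogs", "chase", "cats", "around"], tags=["doc_1"]),
]

model = Doc2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=40)

print(model.dv["doc_0"][:5])                        # learned paragraph vector
print(model.infer_vector(["cats", "and", "dogs"]))  # vector for an unseen document
```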
NLP 04: Log-Linear Models for Tagging Task (Python)
We will focus on POS tagging in this blog. Notation: an HMM gives us a joint probability over tags and words, p(t_1, …, t_n, w_1, …, w_n). Tags t and words w are in one-to-one correspondence, so the two sequences share the same length.
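A log-linear tagger instead models each tag conditioned on its history. A standard formulation, following Collins' course notes (the notation is my assumption, not necessarily the post's), is:

```latex
% Log-linear model for tagging: history h_i = (t_{i-2}, t_{i-1}, w_{1:n}, i),
% feature vector f, weight vector v, tag set T.
p(t_i \mid h_i) =
  \frac{\exp\!\bigl(v \cdot f(h_i, t_i)\bigr)}
       {\sum_{t' \in \mathcal{T}} \exp\!\bigl(v \cdot f(h_i, t')\bigr)}
```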
TensorFlow 05: Understanding Basic Usage
Only recently did I realize that I had missed some basics of TF; I went directly to MNIST when I started learning. I also asked a few people whether they had nice tutorials for TF or for DL. Well, it is not like other subjects, where you can easily find good ones like Andrew's ML course. But I did…
NLP 03: Finding Mr. Alignment, IBM Translation Model 1
It is perhaps a bit fast to start MT here. Anyway, this blog is fairly superficial, giving you a view of the basics along with an implementation, though with a bad result… which gives you more chances to optimize. Btw, you might learn some Chinese here 😛
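A minimal sketch of IBM Model 1's EM training on a toy corpus; the English-German pairs are hypothetical, and the NULL source word is omitted for brevity:

```python
# A minimal sketch: EM for IBM Model 1 translation probabilities t(f|e).
from collections import defaultdict

corpus = [(["the", "house"], ["das", "Haus"]),
          (["the", "book"], ["das", "Buch"]),
          (["a", "book"], ["ein", "Buch"])]

t = defaultdict(lambda: 0.25)  # uniform initialization of t(f|e)

for _ in range(20):  # EM iterations
    count = defaultdict(float)   # expected counts c(f, e)
    total = defaultdict(float)   # expected counts c(e)
    for e_sent, f_sent in corpus:
        for f in f_sent:
            z = sum(t[(f, e)] for e in e_sent)   # normalize over alignments of f
            for e in e_sent:
                c = t[(f, e)] / z                # E-step: fractional count
                count[(f, e)] += c
                total[e] += c
    for (f, e), c in count.items():              # M-step: re-normalize per e
        t[(f, e)] = c / total[e]

print(round(t[("Haus", "house")], 3))  # approaches 1.0 as EM sharpens alignments
```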