Paddle Serving: model-as-a-service! Triggered by a single command line, deployment finishes in 10 minutes

To bridge the gap between Paddle Serving and the PaddlePaddle framework, we have released a new Paddle Serving service on GitHub: Model as a Service (MaaS). With this service, once a PaddlePaddle model is trained, users can obtain the corresponding inference model at the same time, making it possible to deploy…
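As a rough illustration of what that single command looks like (a sketch based on the Paddle Serving quickstart; the exported model directory `uci_housing_model` and the port are placeholders, not from the post):

```bash
# Launch a serving instance directly from an exported inference model.
# The model directory name and port are placeholders.
python -m paddle_serving_server.serve --model uci_housing_model --port 9292
```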

TensorFlow 08: save and restore a subset of variables

TensorFlow provides save and restore functions that let us save and re-use model parameters. If you have a trained VGG model, for example, it is often useful to restore the first few layers and apply them in your own network. This raises a question: how do we restore a subset of…
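A minimal sketch of the trick (TF 1.x-era API; the layer-name prefixes below are hypothetical, chosen to match whatever names the checkpoint actually uses):

```python
import tensorflow as tf

# Collect only the variables we want to restore, e.g. the first two
# conv blocks; the name prefixes here are hypothetical.
restore_vars = [v for v in tf.global_variables()
                if v.name.startswith('conv1') or v.name.startswith('conv2')]

# A Saver built with an explicit var_list touches only those variables.
saver = tf.train.Saver(var_list=restore_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize everything
    saver.restore(sess, 'vgg_model.ckpt')        # then overwrite the subset
```

The rest of the network keeps its fresh initialization, which is exactly what you want when fine-tuning on top of borrowed layers.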

Working with ROUGE 1.5.5 Evaluation Metric in Python

If you use the ROUGE evaluation metric for text summarization or machine translation systems, you must have noticed that there are many versions of it. So how do you get it to work with your own system in Python? What packages are helpful? In this post, I will give some ideas from an engineering point of view (which means…
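One common engineering route is the pyrouge wrapper, which drives the original Perl ROUGE-1.5.5 script from Python (directories and filename patterns below are placeholders):

```python
from pyrouge import Rouge155

r = Rouge155()  # assumes ROUGE-1.5.5 itself is installed and configured

r.system_dir = 'path/to/system_summaries'    # your system's outputs
r.model_dir = 'path/to/reference_summaries'  # gold/reference summaries
r.system_filename_pattern = r'sys.(\d+).txt'
r.model_filename_pattern = 'ref.#ID#.txt'

output = r.convert_and_evaluate()  # converts inputs, runs the Perl script
print(output)
scores = r.output_to_dict(output)  # e.g. scores['rouge_1_f_score']
```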

TensorFlow 07: Word Embeddings (2) – Loading Pre-trained Vectors

For a brief introduction to Word2vec, please check this post. Here, we load a pre-trained Word2vec model: a huge file containing all the word vectors trained on large corpora. Download: get it here. I downloaded the GloVe one; its vocabulary size is 4 million and the dimension is 50. It is a smaller one trained…
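The loading step itself is plain file parsing, one word plus its vector per line; a minimal sketch (the file name below matches a 50-dimensional GloVe download, the rest is illustrative, TF 1.x-era API):

```python
import numpy as np
import tensorflow as tf

# Each line of the GloVe file is: word v1 v2 ... v50
vocab, vectors = [], []
with open('glove.6B.50d.txt', encoding='utf-8') as f:
    for line in f:
        parts = line.rstrip().split(' ')
        vocab.append(parts[0])
        vectors.append([float(x) for x in parts[1:]])

embedding_matrix = np.asarray(vectors, dtype=np.float32)

# Hand the matrix to TF so the graph can look up rows by word id.
embeddings = tf.Variable(embedding_matrix, trainable=False, name='embeddings')
word_ids = tf.placeholder(tf.int32, shape=[None])
embedded = tf.nn.embedding_lookup(embeddings, word_ids)
```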

NLP 05: From Word2vec to Doc2vec: a simple example with Gensim

Introduction: First introduced by Mikolov et al. [1] in 2013, word2vec learns distributed representations of words (word embeddings) with a neural network. It is based on the distributional hypothesis: words that occur in similar contexts (with similar neighboring words) tend to have similar meanings. There are two models: CBOW (continuous bag of words), where we use a…
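A minimal Gensim sketch of the doc2vec step on a toy corpus (parameter names follow gensim 4.x; older versions call `vector_size` `size` and expose document vectors as `docvecs` instead of `dv`):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each document gets a unique tag so its vector can be looked up later.
corpus = [
    TaggedDocument(words=['the', 'cat', 'sat', 'down'], tags=['doc0']),
    TaggedDocument(words=['the', 'dog', 'barked', 'loudly'], tags=['doc1']),
]

model = Doc2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=40)

print(model.dv['doc0'])                         # learned document vector
print(model.infer_vector(['a', 'cat', 'sat']))  # vector for unseen text
```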

TensorFlow 05: Understanding Basic Usage

Only recently did I realize that I had missed some basics of TF: I went directly to the MNIST example when I first learned it. I also asked a few people whether they had nice tutorials for TF or for DL. Well, it is not like other topics, where you can easily find good ones like Andrew's ML course. But I did…
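The basics in question are presumably the graph-then-session pattern of the TF 1.x era; a tiny sketch:

```python
import tensorflow as tf

# Building ops only constructs the graph; nothing runs yet.
a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

# Computation happens when a Session executes the graph.
with tf.Session() as sess:
    print(sess.run(c))  # 6.0
```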