Paddle Serving: model-as-a-service! Triggered by a single command line, deployment finishes in 10 minutes

To bridge the gap between Paddle Serving and the PaddlePaddle framework, we have released a new PaddleServing service on GitHub: Model As A Service (MAAS). With this service, once a PaddlePaddle model is trained, users can obtain the corresponding inference model at the same time, making it possible to deploy a deep learning inference service online for any application. PaddleServing has the following four key features:

Continue reading

Prepare for the Interviews!

In the years since my Master's, I have held many jobs: long-term, short-term, internships, and full-time positions. I have also gone through plenty of interviews, some of which I failed. Together with my friends, I collected a lot of material, covering basic algorithms, popular questions, basic machine learning knowledge, and deep learning knowledge, and then organized it into one large PDF (150+ pages).

A very brief outline:

  • Sorting
  • Data structure + popular questions
  • Machine Learning 
  • SoftDev interview questions

The material includes some screenshots from other people's lectures and books. [Some slide pages are not in English! I am too lazy to translate them...]

I went through this PDF before each interview, just in case I needed to answer questions like "what is kNN". I hope you find the material useful. Download link:

interview_beta

Recently, I have been working on a new version that adds more deep learning basics.


New items that still need to be added: merge sort; sorting code in Python; the Boyer-Moore majority vote algorithm.
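As a rough preview of what two of those entries might look like (my own sketch, not the version that will go into the PDF), here is a textbook merge sort and the Boyer-Moore majority vote algorithm in Python:

```python
def merge_sort(a):
    """Sort a list in O(n log n) by splitting in half and merging the sorted halves."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]


def majority_vote(nums):
    """Boyer-Moore vote: return the majority element (> n/2 occurrences) in O(n) time, O(1) space.
    The result is only meaningful if a majority element actually exists."""
    candidate, count = None, 0
    for x in nums:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    return candidate


print(merge_sort([5, 2, 9, 1]))        # [1, 2, 5, 9]
print(majority_vote([2, 2, 1, 2, 3]))  # 2
```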


Understanding Variational Graph Auto-Encoders

Variational Auto-Encoders

See my earlier post about Auto-encoders.
For Variational Auto-Encoders (VAEs), introduced in the paper Auto-Encoding Variational Bayes, we add latent variables to the existing autoencoder. The main idea is that we want to constrain the latent parameters to follow a known distribution (e.g., a standard Gaussian). Why do we want this? We wish the generative model to produce more "creative" outputs. If the model only memorizes the training samples, it will eventually lose the ability to "create" anything new! So we add some "noise" to the parameters by forcing them to adapt to a known distribution.
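To make this concrete, here is a minimal VAE sketch in PyTorch (my own illustration, not code from the paper): the encoder predicts the mean and log-variance of a Gaussian over the latent code, the reparameterization trick draws a sample from it, and a KL term pulls that Gaussian toward the known prior N(0, I).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(x, x_recon_logits, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)); the KL term is the
    # "noise" that forces the latent distribution toward the known prior.
    recon = F.binary_cross_entropy_with_logits(x_recon_logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```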

Continue reading