Understanding Variational Graph Auto-Encoders

Variational Auto-Encoders

See my earlier post about Auto-encoders.
In Variational Auto-Encoders (VAEs), introduced in the paper “Auto-Encoding Variational Bayes,” we add latent variables to the standard Autoencoder. The main idea is to constrain these latent variables to follow a known distribution. Why do we want this? We want the generative model to produce more “creative” outputs. If the model only ever reconstructs the training samples, it will eventually lose the ability to “create” anything new! So we add some “noise” to the latent variables by forcing them to follow a known distribution, typically a standard Gaussian.
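As a minimal sketch of the two pieces this idea adds on top of a plain Autoencoder, here are the reparameterization trick and the closed-form KL term that pulls the latent code toward a standard Gaussian prior (NumPy only, with hypothetical shapes; the encoder and decoder networks are omitted):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I).
    # Writing the sample this way lets gradients flow through
    # mu and log_var during training (the reparameterization trick).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ).
    # This is the "noise" penalty: it is zero only when the
    # latent distribution already matches the known prior.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)
z = reparameterize(mu, log_var, rng)           # one latent sample
print(kl_to_standard_normal(mu, log_var))      # 0.0: q equals the prior
```

The full VAE objective would add a reconstruction loss from the decoder; the KL term above is what distinguishes a VAE from a vanilla Autoencoder.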

Continue reading “Understanding Variational Graph Auto-Encoders”

LectureBank: a dataset for NLP Education and Prerequisite Chain Learning

Introduction

In this blog post, we introduce our paper “What Should I Learn First: Introducing LectureBank for NLP Education and Prerequisite Chain Learning,” accepted at AAAI 2019.
Our LectureBank dataset contains 1,352 English lecture files collected from university courses, mainly in the field of Natural Language Processing (NLP). In addition, each file is manually classified according to an existing taxonomy. Together with the dataset, we include 208 manually-labeled topics with prerequisite relations. The dataset will be useful for educational purposes such as lecture preparation and organization, as well as for applications such as reading list generation. Additionally, we experiment with neural graph-based networks and non-neural classifiers to learn these prerequisite relations from our dataset.
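To make the reading-list-generation use case concrete, here is a small sketch (not the paper's method) of turning labeled prerequisite relations into a valid reading order via topological sorting; the topic names are hypothetical examples, not taken from the dataset:

```python
from collections import defaultdict, deque

# Hypothetical pairs (a, b) meaning "topic a is a prerequisite of topic b".
edges = [
    ("probability", "language-models"),
    ("language-models", "machine-translation"),
    ("neural-networks", "machine-translation"),
]

def reading_order(edges):
    # Kahn's algorithm: emit each topic only after all of its
    # prerequisites have been emitted, yielding one valid reading list.
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
        nodes |= {a, b}
    queue = deque(sorted(n for n in nodes if indeg[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

print(reading_order(edges))
```

Once a classifier has predicted the prerequisite edges between topics, a pass like this turns them into a concrete study sequence.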

Continue reading “LectureBank: a dataset for NLP Education and Prerequisite Chain Learning”