Deep Learning 22: Diffusion Models (2)
Previously, we introduced Autoencoders and Hierarchical Variational Autoencoders (HVAEs). In this post, we will cover the details of Denoising Diffusion Probabilistic Models (DDPM). We can treat DDPM as a restricted HVAE: each $x_t$ depends only on $x_{t-1}$. In DDPM, adding noise involves no learned parameters; each forward step is a predefined Gaussian. Continue reading “Deep Learning 22: Diffusion Models (2)”
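To make the fixed forward process concrete, here is a minimal NumPy sketch of one noising step, $q(x_t \mid x_{t-1}) = \mathcal{N}(\sqrt{1-\beta_t}\,x_{t-1}, \beta_t I)$; the function name and the linear $\beta$ schedule are illustrative choices, not code from the post.

```python
import numpy as np

def forward_diffusion_step(x_prev, beta_t, rng):
    """One step of the fixed (parameter-free) forward process:
    q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I)."""
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(1.0 - beta_t) * x_prev + np.sqrt(beta_t) * noise

# Example: run T steps with a linear beta schedule on a toy "image".
rng = np.random.default_rng(0)
x = np.ones((4, 4))
for beta_t in np.linspace(1e-4, 0.02, 1000):
    x = forward_diffusion_step(x, beta_t, rng)
# After enough steps, x is close to an isotropic Gaussian sample.
```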
Reinforcement Learning (1): Q-Learning basics
Hi! In the following posts, I will introduce Q-Learning, the first thing to learn if you want to pick up reinforcement learning. But before that, let us shed light on some fundamental concepts in reinforcement learning (RL). Kindergarten example: Q-Learning works by taking an action, then receiving a reward and an observation from the environment, … Continue reading “Reinforcement Learning (1): Q-Learning basics”
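The heart of that action-reward loop is the tabular Q-Learning update rule; below is a minimal sketch, where the state/action sizes, alpha, and gamma are chosen purely for illustration.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Tabular Q-Learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])  # best value from next state
    Q[s, a] += alpha * (td_target - Q[s, a])   # move toward the TD target
    return Q

# Toy example: 5 states, 2 actions, one observed transition.
Q = np.zeros((5, 2))
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
```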
Deep Learning 07: Are you talking to a machine?
I have recently been working on a shared task on image annotation. An interesting paper I saw at NIPS’15 was proposed by Baidu Research. Find the paper here. Official website. This post is my study notes.
Random Forest: intro and an example
About Decision Trees
* All samples start from the root.
* At each node, one feature splits the samples (sketched below).
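Here is a minimal sketch of the split that each tree node performs; the threshold-based split on a single feature is the standard choice, and the toy data is illustrative.

```python
import numpy as np

def split_node(X, feature, threshold):
    """Split samples at a node: rows with X[:, feature] <= threshold go left,
    the rest go right (the basic operation at every decision-tree node)."""
    mask = X[:, feature] <= threshold
    return X[mask], X[~mask]

X = np.array([[2.0, 1.0], [3.5, 0.5], [1.0, 4.0]])
left, right = split_node(X, feature=0, threshold=2.5)
# left holds the first and third rows; right holds the second.
```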
Understanding SVM (1)
About advanced machine learning:
Tinkerpop3 GraphComputer: VertexPrograms
GraphComputer: TP3 provides both OLTP and OLAP means of interacting with a graph. An OLTP-based graph system serves queries in real time, touching only a limited part of the data and responding on the order of milliseconds or seconds. The graph is walked by moving from one vertex to another via incident edges.
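Real VertexPrograms are written against the TinkerPop API (typically in Java); purely to illustrate the OLTP-style walk described above, here is a plain-Python sketch over a hypothetical adjacency list, with no TinkerPop calls involved.

```python
# Plain-Python sketch of an OLTP-style walk: start at one vertex and move
# to neighbors via incident edges, touching only a small part of the graph.
graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}

def walk(graph, start, depth):
    frontier = {start}
    for _ in range(depth):
        frontier = {nbr for v in frontier for nbr in graph[v]}
    return frontier

print(walk(graph, "a", 2))  # vertices reachable in exactly 2 hops: {'d'}
```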
Parallel Graph Coloring Algorithms and an Implementation of Jones-Plassmann
Graph Coloring Algorithms
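As a taste of the algorithm named in the title, here is a compact Python sketch of Jones-Plassmann coloring; the sequential loop over winners stands in for what would be one parallel round, and the graph and seed are illustrative.

```python
import random

def jones_plassmann(adj, seed=0):
    """Sketch of Jones-Plassmann: each vertex gets a random priority; in each
    round, every uncolored vertex whose priority beats all its uncolored
    neighbors picks the smallest color unused by its neighbors. Those winners
    form an independent set, so in parallel they can color simultaneously."""
    rng = random.Random(seed)
    weight = {v: rng.random() for v in adj}
    color = {}
    uncolored = set(adj)
    while uncolored:
        winners = [v for v in uncolored
                   if all(weight[v] > weight[u]
                          for u in adj[v] if u in uncolored)]
        for v in winners:  # would run concurrently on real hardware
            used = {color[u] for u in adj[v] if u in color}
            color[v] = next(c for c in range(len(adj)) if c not in used)
        uncolored -= set(winners)
    return color

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
print(jones_plassmann(adj))  # a valid coloring of the 4-vertex graph
```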
Logistic Regression: a quick introduction
Logistic Regression is very popular in Machine Learning and is used to make predictions. (Its outputs are scores between 0 and 1 that are read as class probabilities, though they are not guaranteed to be well-calibrated.)
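A minimal sketch of the prediction side: a linear score squashed through the sigmoid; the toy weights and the 0.5 threshold below are illustrative.

```python
import numpy as np

def predict_proba(X, w, b):
    """Logistic regression: squash a linear score through the sigmoid to get
    a value in (0, 1) that is read as the positive-class probability."""
    z = X @ w + b
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0.5, 1.2], [-1.0, 0.3]])
w = np.array([0.8, -0.4])
p = predict_proba(X, w, b=0.1)
pred = (p >= 0.5).astype(int)  # threshold scores into class labels
```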
Parallel Gibbs Sampling and Neural Networks
Parallelism over variables (vertices): for a general huge undirected graph in which each vertex is a variable, sampling can be parallelized across the high-dimensional state (see the sketch below).
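One standard way to realize this is chromatic Gibbs sampling: color the graph so adjacent variables get different colors, then resample each color class in parallel. The sketch below uses an illustrative Ising-style model, with a sequential loop standing in for the parallel-for.

```python
import numpy as np

def chromatic_gibbs_sweep(state, adj, colors, coupling=0.5,
                          rng=np.random.default_rng(0)):
    """One sweep of color-parallel Gibbs on an Ising-style model: vertices of
    the same color share no edges, so they can be resampled simultaneously
    given their neighbors' current values."""
    for c in set(colors.values()):
        batch = [v for v in adj if colors[v] == c]  # conditionally independent
        for v in batch:  # would be a parallel-for on real hardware
            field = coupling * sum(state[u] for u in adj[v])
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))  # P(s_v = +1 | nbrs)
            state[v] = 1 if rng.random() < p_up else -1
    return state

adj = {0: [1], 1: [0, 2], 2: [1]}   # a 3-vertex chain
colors = {0: 0, 1: 1, 2: 0}         # valid 2-coloring of the chain
state = {v: 1 for v in adj}
state = chromatic_gibbs_sweep(state, adj, colors)
```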
Gibbs Sampling: about Parallelization
About BNs: a Belief Network, or directed acyclic graphical model (DAG). When the BN is huge, the choices are exact inference (variable elimination) and stochastic inference (MCMC); a tiny elimination example follows.
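To make variable elimination concrete, here is a minimal NumPy example on the chain A -> B -> C with made-up probability tables: summing B out of P(A)P(B|A)P(C|B) yields the joint P(A, C).

```python
import numpy as np

def eliminate_middle(p_a, p_b_given_a, p_c_given_b):
    """Variable elimination on the chain A -> B -> C: sum B out of
    P(A) P(B|A) P(C|B) to obtain the joint P(A, C)."""
    # einsum indices: a, (a,b), (b,c) -> (a,c), summing over b.
    return np.einsum("a,ab,bc->ac", p_a, p_b_given_a, p_c_given_b)

p_a = np.array([0.6, 0.4])                        # P(A)
p_b_given_a = np.array([[0.7, 0.3], [0.2, 0.8]])  # rows: A, cols: B
p_c_given_b = np.array([[0.9, 0.1], [0.5, 0.5]])  # rows: B, cols: C
p_ac = eliminate_middle(p_a, p_b_given_a, p_c_given_b)
print(p_ac.sum())  # 1.0: a valid joint distribution over (A, C)
```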