## Deep Learning 22: Diffusion Models (2)

Previously, we introduced Autoencoders and Hierarchical Variational Autoencoders (HVAEs). In this post, we will cover the details of Denoising Diffusion Probabilistic Models (DDPM). We can treat DDPM as a restricted HVAE: each latent depends only on the one before it, forming a Markov chain. In DDPM, the forward process that adds noise has no learned parameters; it is a predefined Gaussian.
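A minimal sketch of that predefined Gaussian forward process; the linear beta schedule and the function/variable names here are illustrative assumptions, not taken from the post:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng=np.random.default_rng(0)):
    # q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I):
    # the whole noising chain collapses into one Gaussian with no learned
    # parameters, only the predefined schedule `betas`.
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

betas = np.linspace(1e-4, 0.02, 1000)       # assumed linear noise schedule
xT = forward_diffuse(np.ones(4), 999, betas)  # at t = 999, nearly pure noise
```

By the last step, the cumulative product of `1 - beta` is tiny, so almost none of the original signal survives.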

## Reinforcement Learning (1): Q-Learning basics

Hi! In the following posts, I will introduce Q-Learning, a good first topic if you want to pick up reinforcement learning. But before that, let us shed light on some fundamental concepts in reinforcement learning (RL). Kindergarten Example: Q-Learning works in this way: take an action, receive a reward and an observation from the environment, and update your action-value estimates accordingly.
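That act-observe-update loop can be sketched as a single tabular update; the dictionary layout and the hyperparameters `alpha` and `gamma` are my assumptions, not the post's:

```python
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # One Q-Learning step after acting: Q(s, a) moves toward the observed
    # reward plus the discounted best value of the next state.
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

Q = {0: {"left": 0.0, "right": 0.0}, 1: {"left": 0.0, "right": 0.0}}
q_update(Q, 0, "right", 1.0, 1)  # acted "right" in state 0, got reward 1
```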

## Random Forest: intro and an example

About Decision Trees:

* All samples start from the root.
* At each node, one feature splits the samples.
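The two bullets above can be sketched as a single node split; the list-of-dicts sample format and the feature name are placeholders of my own:

```python
def split(samples, feature, threshold):
    # At a node, one feature partitions the samples into two children;
    # every sample reached this node by starting from the root.
    left = [s for s in samples if s[feature] <= threshold]
    right = [s for s in samples if s[feature] > threshold]
    return left, right

samples = [{"height": 1.2}, {"height": 1.8}, {"height": 1.5}]
left, right = split(samples, "height", 1.5)
```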

## Tinkerpop3 GraphComputer: VertexPrograms

GraphComputer: TP3 provides both OLTP and OLAP means of interacting with a graph. An OLTP-based graph system serves real-time queries, works with only a limited part of the data, and responds on the order of milliseconds or seconds. The graph is walked by moving from one vertex to another via incident edges.

## Logistic Regression: a quick introduction

Logistic Regression is very popular in Machine Learning; it is used to predict the likelihood of an outcome. (Its outputs are scores between 0 and 1, not necessarily exact, calibrated probabilities.)
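A bare-bones sketch of the prediction step; the weights, bias, and inputs below are placeholder values:

```python
import math

def predict(w, b, x):
    # Linear score of the features, squashed by the sigmoid into (0, 1).
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

p = predict([0.5, -0.25], 0.1, [2.0, 4.0])  # a score in (0, 1), not a calibrated probability
```

With zero weights and bias the score is exactly 0.5, i.e. maximal uncertainty.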

## Parallel Gibbs Sampling and Neural Networks

Parallel in variables (vertices): given a general, huge undirected graph where each vertex is a variable, we can sample many variables in parallel across the high-dimensional state.
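One common way to realize this is a chromatic scheme, sketched below under my own assumptions (±1 "spin" variables with a toy Ising-style conditional, not necessarily the post's exact setup): vertices in one color class share no edges, so their conditionals are mutually independent and can be resampled together.

```python
import math
import random

def gibbs_color_step(values, neighbors, color_class, rng):
    # Vertices in color_class are mutually non-adjacent, so each one's
    # conditional depends only on unchanged neighbors; the loop below
    # could therefore run in parallel without changing the distribution.
    new = {}
    for v in color_class:
        field = sum(values[u] for u in neighbors[v])
        p_up = 1.0 / (1.0 + math.exp(-2.0 * field))  # toy Ising conditional
        new[v] = 1 if rng.random() < p_up else -1
    values.update(new)

values = {0: 1, 1: 1, 2: 1}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
gibbs_color_step(values, neighbors, [0, 2], random.Random(0))  # one color class of the path 0-1-2
```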