About advanced machine learning:
Advanced Supervised Learning
SVM & Active Learning
NN & Deep Learning
Advanced Unsupervised Learning
Community Finding in Social Networks
General: find a hyperplane to split two classes.
w: the weight vector, which controls the orientation of the hyperplane
b: the displacement from the origin
Support Vector Machines (SVM) for binary classification: find a hyperplane between two classes that maximises the margin between the classes.
Support vectors: the subset of training examples that lie closest to the hyperplane and determine its position.
Weight vector w is a linear combination of the support vectors.
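A minimal sketch of these points, assuming scikit-learn (not named in the notes): fit a linear SVM, read off w and b, and check that w really is a linear combination of the support vectors (via the fitted dual coefficients).

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated 2D blobs, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

w = clf.coef_[0]       # weight vector: controls the orientation
b = clf.intercept_[0]  # displacement from the origin

# w is a linear combination of the support vectors only,
# weighted by the dual coefficients (alpha_i * y_i).
w_from_sv = clf.dual_coef_[0] @ clf.support_vectors_
print(np.allclose(w, w_from_sv))  # True
print(clf.support_vectors_.shape[0], "support vectors out of", len(X))
```

Only the support vectors contribute to w; the remaining training examples could be deleted without changing the fitted hyperplane.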
Soft Margin Classification
A trade-off method: some misclassified training examples are accepted.
Hard Margin Classification
All examples are correctly classified.
Use a parameter C to control the cost of errors: a small C allows training errors (softer margin), a large C forces rigid margins (approaching a hard margin).
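A sketch of the C trade-off, again assuming scikit-learn on overlapping classes: small C tolerates margin violations (many support vectors), large C penalises them heavily.

```python
import numpy as np
from sklearn.svm import SVC

# Two overlapping 2D blobs, so a hard separation is impossible.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

results = {}
for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    results[C] = clf.n_support_.sum()  # total number of support vectors
    print(f"C={C:>6}: support vectors={results[C]}, "
          f"training accuracy={clf.score(X, y):.2f}")
```

Typically the soft margin (C=0.01) keeps many points inside the margin as support vectors, while the rigid margin (C=100) uses fewer.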
Non-linear? (The original 2D data is not linearly separable.)
Map the data into a higher-dimensional space, e.g. move from 2D to 3D.
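The 2D-to-3D idea can be sketched with an explicit feature map (my own illustrative choice, not from the notes): two concentric circles are not linearly separable in 2D, but adding a coordinate z = x^2 + y^2 makes a separating plane possible.

```python
import numpy as np

# Points on two concentric circles (radius 1 and radius 3).
rng = np.random.default_rng(2)
angles = rng.uniform(0, 2 * np.pi, 50)
inner = np.c_[np.cos(angles), np.sin(angles)]
outer = np.c_[3 * np.cos(angles), 3 * np.sin(angles)]

def lift(p):
    """Map (x, y) -> (x, y, x^2 + y^2): the explicit 2D -> 3D feature map."""
    return np.c_[p, (p ** 2).sum(axis=1)]

# In 3D the plane z = 5 separates the two circles perfectly:
# inner points have z = 1, outer points have z = 9.
print((lift(inner)[:, 2] < 5).all(), (lift(outer)[:, 2] > 5).all())  # True True
```

Kernel methods achieve the same effect without computing the mapped coordinates explicitly.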
Kernel function: the inner product between pairs of examples in a transformed space, specified by some function k. It transforms the data into an implicit high-dimensional feature space. -> A similarity measure (samples of the same class have a higher value).
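A sketch of the kernel trick in action, assuming scikit-learn: on concentric-circle data a linear SVM is near chance, while an RBF kernel separates the classes via its implicit high-dimensional feature space.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original 2D space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf").fit(X, y)

print("linear kernel accuracy:", linear.score(X, y))  # near chance level
print("RBF kernel accuracy:   ", rbf.score(X, y))     # near perfect
```

Nothing about the data changed; only the similarity measure did.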
Sequence Kernels: text classification (words), protein classification.
A Kernel matrix (Gram matrix): n times n, with entries K(i, j) = k(x_i, x_j).
For a new input q, calculate k(q, x) for each of the support vectors x.
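A small sketch of both points, with a hand-rolled RBF kernel (my own choice of kernel for illustration): build the n-by-n Gram matrix, then score a new input q against each training example.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """RBF kernel: similarity decays with squared distance."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
n = len(X)

# The n x n Gram matrix: K[i, j] = k(x_i, x_j).
K = np.array([[rbf(X[i], X[j]) for j in range(n)] for i in range(n)])
print(K.shape)                                            # (3, 3)
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))  # symmetric, k(x,x)=1

# A new input q: compute k(q, x) against each stored example.
q = np.array([0.5, 0.5])
print([round(rbf(q, x), 3) for x in X])
```

At prediction time an SVM only needs these k(q, x) values for the support vectors, never the explicit feature-space coordinates.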
1. Linear SVMs can find the optimal hyperplane for linearly separable classes.
2. Non-linear separation could be done by a kernel method.
3. Kernel methods are new ways to calculate similarities, so with SVMs, they provide ways to solve non-linear problems.
4. Active learning can be used in conjunction with SVMs to minimise the number of training examples required.
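Point 4 can be sketched as pool-based uncertainty sampling (one common active-learning strategy; the notes do not specify which): repeatedly query the unlabelled point closest to the current hyperplane, i.e. the one the SVM is least certain about. scikit-learn is assumed.

```python
import numpy as np
from sklearn.svm import SVC

# A labelled pool of 200 points; pretend labels are expensive to reveal.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

labelled = [0, 100]  # start with one example from each class
pool = [i for i in range(200) if i not in labelled]

for _ in range(10):  # 10 queries to the "oracle"
    clf = SVC(kernel="linear").fit(X[labelled], y[labelled])
    # Uncertainty = distance to the hyperplane; query the smallest.
    margins = np.abs(clf.decision_function(X[pool]))
    query = pool.pop(int(np.argmin(margins)))
    labelled.append(query)  # oracle reveals y[query]

print(len(labelled), "labels used")  # 12 labels used
print("accuracy on full pool:", clf.score(X, y))
```

With only a dozen labels the classifier already generalises well here, because each query was spent on an informative, near-boundary point rather than a random one.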