So I have decided to take the plunge into serious PhD work.
My professor has asked me to work on structural SVMs, but I have to work my way up to them from the bottom. So the route is going to be something like:
Machine Learning -> Supervised Learning -> Statistical Learning theory -> Pattern Recognition, Classification, Regression -> SVM -> Structural SVM
So the first question I find myself asking is: what is the definition of Machine Learning? I find the following definitions:
Arthur Samuel (1959): Field of study that gives computers the ability to learn without being explicitly programmed.
Tom Mitchell (1998): Well-posed learning problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E.
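Mitchell's (T, P, E) framing can be made concrete with a toy sketch. Everything below (the data, the 1-nearest-neighbour rule, the function names) is my own illustration, not part of any of the definitions above:

```python
# Mitchell's framing, mapped onto a toy 1-nearest-neighbour classifier.
# T = the task (labelling a new point), P = the performance measure
# (accuracy on a test set), E = the experience (labelled examples).

def nearest_neighbour_predict(train, x):
    """T (task): predict the label of x from labelled examples."""
    # train is a list of (feature, label) pairs; pick the closest feature.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def accuracy(train, test):
    """P (performance measure): fraction of test points labelled correctly."""
    correct = sum(nearest_neighbour_predict(train, x) == y for x, y in test)
    return correct / len(test)

# E (experience): made-up labelled examples, feature = a number.
experience = [(1, 'small'), (2, 'small'), (9, 'big'), (10, 'big')]
test_set = [(0, 'small'), (3, 'small'), (8, 'big'), (11, 'big')]

print(accuracy(experience, test_set))  # 1.0 on this toy data
```

The point is only the vocabulary: the program "learns" in Mitchell's sense if feeding it more of E tends to push P up on T.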
The fundamental concepts in Machine Learning are:
1. Supervised Learning: Learning from Examples
Supervised Learning problems come in two flavours:
Regression Problems: Predict continuous values
Classification Problems: Predict discrete values
2. Statistical Learning Theory: tries to give an understanding of how and why learning algorithms work. This field of study makes it possible to formulate theorems that guarantee that a learning algorithm will work. It asks questions like:
How well can you approximate the target function?
How much training data do you need?
3. Unsupervised Learning: Learning without any Examples
4. Reinforcement Learning: Learning on the fly from experience
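To make the regression/classification split above concrete, here is a minimal sketch. The toy data, the closed-form least-squares fit, and the threshold rule are all my own illustration:

```python
# Regression: predict a continuous value with a 1-D least-squares line y = a*x + b.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # exactly y = 2x, so the fit should recover a=2, b=0
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(a * 5.0 + b)  # continuous prediction for x = 5 -> 10.0

# Classification: predict a discrete label, here with a trivial threshold rule.
def classify(x, threshold=5.0):
    return 'positive' if x >= threshold else 'negative'

print(classify(7.0))  # discrete prediction -> 'positive'
```

Same shape of problem in both cases (learn a function from examples); the only difference is whether the output lives in a continuum or in a finite label set.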
During the course of my work, I hope this blog maintains a record of the different things that I lay my hands on.
For starters, I installed LaTeX support for the blog so that I can do stuff like this:
$\sum_{i=1}^nx_i$
More later....