MNIT Jaipur Syllabus: Computer Science - Machine Learning

Machine Learning

Introduction: Definition of learning systems. Goals and applications of machine learning. Aspects of developing a learning system: training data, concept representation, function approximation.

Inductive Classification: The concept learning task. Concept learning as search through a hypothesis space. General-to-specific ordering of hypotheses. Finding maximally specific hypotheses. Version spaces and the candidate elimination algorithm. Learning conjunctive concepts. The importance of inductive bias.
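To make "finding maximally specific hypotheses" concrete, here is a minimal sketch of the Find-S idea for conjunctive concepts; the weather attributes and data below are hypothetical, and '?' stands for "any value".

```python
# Find-S: maintain the most specific conjunctive hypothesis consistent
# with the positive examples seen so far. '?' matches any value.
def find_s(examples):
    positives = [x for x, label in examples if label]
    h = list(positives[0])                 # start at the first positive example
    for x in positives[1:]:
        for i, (hv, xv) in enumerate(zip(h, x)):
            if hv != xv:                   # attribute disagrees: generalize it
                h[i] = '?'
    return h

# Hypothetical weather data: (sky, temp, humidity), label = enjoy sport?
data = [(('sunny', 'warm', 'normal'), True),
        (('sunny', 'warm', 'high'),   True),
        (('rainy', 'cold', 'high'),   False)]
print(find_s(data))                        # ['sunny', 'warm', '?']
```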

Decision Tree Learning: Representing concepts as decision trees. Recursive induction of decision trees. Overfitting, noisy data, and pruning.
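A compact sketch of recursive decision-tree induction in the spirit of ID3, choosing splits by information gain; pruning and noise handling from the unit above are omitted, and the nested-dictionary tree is just one convenient representation.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attrs):
    if len(set(labels)) == 1:              # pure node: predict its single class
        return labels[0]
    if not attrs:                          # out of attributes: majority class
        return Counter(labels).most_common(1)[0][0]

    def gain(a):                           # information gain of splitting on a
        g = entropy(labels)
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            g -= len(sub) / len(labels) * entropy(sub)
        return g

    a = max(attrs, key=gain)
    tree = {}
    for v in set(r[a] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[a] == v]
        srows, slabels = zip(*sub)
        tree[(a, v)] = id3(list(srows), list(slabels),
                           [x for x in attrs if x != a])
    return tree

rows = [('sunny', 'hot'), ('sunny', 'cool'), ('rainy', 'cool')]
labels = ['no', 'yes', 'yes']
print(id3(rows, labels, attrs=[0, 1]))     # splits on the second attribute
```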

Ensemble Learning: Using committees of multiple hypotheses. Bagging, boosting, and DECORATE. Active learning with ensembles.
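A short bagging sketch, assuming scikit-learn is available: each tree is fit to a bootstrap sample and the committee predicts by majority vote. Boosting and DECORATE follow the same committee idea with different ways of generating diverse members.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
# each tree is trained on a bootstrap sample; predictions are majority-voted
committee = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                              random_state=0)
committee.fit(X, y)
print(committee.score(X, y))
```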

Experimental Evaluation of Learning Algorithms: Measuring the accuracy of learned hypotheses. Comparing learning algorithms: cross-validation, learning curves, and statistical hypothesis testing.
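A minimal cross-validation sketch, again assuming scikit-learn: accuracy is averaged over five held-out folds, the usual starting point before learning curves or significance tests.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# 5-fold cross-validation: fit on 4 folds, score accuracy on the held-out fold
scores = cross_val_score(KNeighborsClassifier(), X, y, cv=5)
print(scores.mean(), scores.std())
```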

Rule Learning, Propositional and First-Order: Translating decision trees into rules. Heuristic rule induction using separate-and-conquer and information gain. First-order Horn-clause induction (Inductive Logic Programming) and FOIL. Learning recursive rules. Inverse resolution.
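A rough sequential-covering (separate-and-conquer) sketch for propositional rules: grow one rule greedily, remove the examples it covers, and repeat. For simplicity it scores candidate conditions by rule precision rather than the information-gain heuristic used by FOIL, and the data are hypothetical.

```python
def covers(rule, x):
    return all(x[a] == v for a, v in rule)

def precision(pool, rule):
    cov = [l for x, l in pool if covers(rule, x)]
    return sum(cov) / len(cov) if cov else 0.0

def learn_rules(examples, attrs):
    # sequential covering: learn one rule, remove what it covers, repeat
    rules, remaining = [], list(examples)
    while any(l for _, l in remaining):
        rule, pool, free = [], list(remaining), list(attrs)
        while free and any(not l for _, l in pool):
            # greedily add the condition yielding the purest covered subset
            cands = [(a, v) for a in free for v in {x[a] for x, _ in pool}]
            best = max(cands, key=lambda c: precision(pool, rule + [c]))
            rule.append(best)
            free.remove(best[0])
            pool = [(x, l) for x, l in pool if covers(rule, x)]
        rules.append(rule)
        remaining = [(x, l) for x, l in remaining if not covers(rule, x)]
    return rules

# Hypothetical data: (outlook, windy) -> play?
data = [(('sunny', 'no'), True), (('sunny', 'yes'), True),
        (('rainy', 'no'), True), (('rainy', 'yes'), False)]
print(learn_rules(data, attrs=[0, 1]))   # [[(0, 'sunny')], [(1, 'no')]]
```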

Artificial Neural Networks: Neurons and biological motivation. Linear threshold units. Perceptrons: representational limitation and gradient descent training. Multilayer networks and backpropagation. Hidden layers and constructing intermediate, distributed representations. Overfitting, learning network structure, recurrent networks.
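A minimal perceptron training sketch using the classic mistake-driven update, in which each error nudges the weights toward the misclassified example; NumPy is assumed, and the AND-style data are hypothetical.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    # y in {-1, +1}; a constant 1 is appended so the last weight acts as a bias
    X = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:         # misclassified: move w toward yi*xi
                w += lr * yi * xi
    return w

# hypothetical linearly separable data (the AND function)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w = train_perceptron(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))   # [-1, -1, -1, 1]
```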

Bayesian Learning: Probability theory and Bayes' rule. Naive Bayes learning algorithm. Parameter smoothing. Generative vs. discriminative training. Logistic regression. Bayes nets and Markov nets for representing dependencies.
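A small naive Bayes sketch with Laplace (add-one) parameter smoothing over a hypothetical bag-of-words dataset; log-probabilities are used to avoid underflow.

```python
import math
from collections import Counter

def train_nb(docs, labels, vocab, alpha=1.0):
    # per-class log priors and Laplace-smoothed word log likelihoods
    prior = {c: math.log(n / len(labels)) for c, n in Counter(labels).items()}
    counts = {c: Counter() for c in prior}
    for words, c in zip(docs, labels):
        counts[c].update(words)
    like = {c: {w: math.log((counts[c][w] + alpha) /
                            (sum(counts[c].values()) + alpha * len(vocab)))
                for w in vocab} for c in prior}
    return prior, like

def classify(words, prior, like):
    # pick the class maximizing log prior + sum of word log likelihoods
    return max(prior, key=lambda c: prior[c] +
               sum(like[c].get(w, 0.0) for w in words))

docs = [['spam', 'offer'], ['meeting', 'today'], ['free', 'offer']]
labels = ['spam', 'ham', 'spam']
vocab = {w for d in docs for w in d}
prior, like = train_nb(docs, labels, vocab)
print(classify(['free', 'offer'], prior, like))   # spam
```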

Instance-Based Learning: Constructing explicit generalizations versus comparing to past specific examples. k-nearest-neighbor algorithm. Case-based learning.
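A minimal k-nearest-neighbor sketch: no explicit generalization is built at training time; prediction just ranks the stored examples by distance and takes a majority vote. NumPy is assumed and the data are hypothetical.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # no explicit model: rank stored examples by distance and vote
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 9.0], [7.5, 8.5]])
y = np.array(['a', 'a', 'b', 'b'])
print(knn_predict(X, y, np.array([1.1, 0.9])))   # 'a'
```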

Clustering and Unsupervised Learning: Learning from unclassified data. Clustering. Hierarchical agglomerative clustering. k-means partitional clustering. Expectation maximization (EM) for soft clustering. Semi-supervised learning with EM using labeled and unlabeled data.
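A short k-means sketch alternating the assignment and update steps until the centers stop moving; EM-based soft clustering replaces the hard argmin assignment with posterior responsibilities. NumPy is assumed and the two-blob data are hypothetical.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: each point joins its nearest center (hard assignment)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each center moves to the mean of its assigned points
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(8, 1, (20, 2))])
labels, centers = kmeans(X, k=2)
print(centers)   # roughly (0, 0) and (8, 8)
```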

Text/References:

1. Mitchell, T. M. (1997). Machine Learning. McGraw-Hill.
2. Bishop, C. (2006). Pattern Recognition and Machine Learning. Berlin: Springer-Verlag.
3. Duda, R. O., Hart, P. E., and Stork, D. G. (2001). Pattern Classification. Wiley-Interscience, second edition.
4. Russell, S. and Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Prentice Hall, second edition. (Machine-learning related chapters.)
5. MacKay, D. J. C. (2003). Information Theory, Inference and Learning Algorithms. Cambridge University Press.
