Advances in Machine Learning

Course Objectives:

  1. Explore and comprehend advanced ML algorithms, along with their strengths and weaknesses.
  2. Master techniques to interpret and explain ML model predictions for transparency and trust.
  3. Build and train deep learning models to address specific tasks and datasets.
  4. Apply the acquired knowledge to tackle real-world challenges in AI and ML domains.

Course Outcomes:

  • CO1 Analyze and apply advanced machine learning algorithms to solve complex real-world problems.
  • CO2 Evaluate and interpret ML models to understand their decision-making processes.
  • CO3 Implement deep learning architectures for tasks like image analysis, language processing, and sequence modeling.
  • CO4 Develop expertise in applying cutting-edge ML techniques to various AI applications and domains.

Unit I
Advanced ML Algorithms: Ensemble Learning and Ensemble Methods: Bagging, Boosting, and Stacking; Kernel Methods.
Reinforcement Learning: Q-Learning, Hidden Markov Models (HMMs), Deep Reinforcement Learning
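
A minimal sketch of Unit I's ensemble methods, assuming scikit-learn and a synthetic dataset; the base estimators and hyperparameters are illustrative choices, not prescribed by the syllabus:

```python
# Illustrative sketch: bagging, boosting, and stacking with scikit-learn.
# The synthetic dataset and all hyperparameters are arbitrary demonstration choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: high-variance learners trained on bootstrap samples, then averaged.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: learners added sequentially, each correcting the previous ensemble's errors.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)

# Stacking: a meta-learner combines the predictions of heterogeneous base models.
stacking = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```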

Unit II
Model Interpretability and Explainability: Feature importance and SHAP values, LIME (Local Interpretable Model-agnostic Explanations), Explainable AI (XAI) techniques
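
A minimal sketch of model-agnostic feature importance, assuming scikit-learn's permutation importance on a synthetic dataset; the same fitted model could equally be handed to SHAP or LIME explainers:

```python
# Illustrative sketch: permutation feature importance (model-agnostic interpretability).
# The random-forest model and synthetic dataset are assumptions for demonstration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {mean_drop:.3f}")
```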

Unit III
Deep Learning Architectures: Convolutional Neural Networks (CNNs) for image analysis,
Recurrent Neural Networks (RNNs) for sequence data, Transformers and attention mechanisms
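
A minimal CNN sketch in Keras, assuming 28x28 grayscale inputs and a 10-class output; the layer sizes are illustrative choices rather than a prescribed architecture:

```python
# Illustrative sketch: a small CNN for 28x28 grayscale images (e.g. MNIST-like data).
# Layer widths, kernel sizes, and the number of classes are arbitrary assumptions.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # learn local spatial filters
    layers.MaxPooling2D((2, 2)),                    # downsample feature maps
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),         # 10 output classes assumed
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```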

Unit IV
Applications of Advanced ML: Natural Language Processing (NLP) with BERT and GPT,
Generative Adversarial Networks (GANs) for image synthesis, Transfer learning and domain adaptation
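
A minimal transfer-learning sketch in Keras, assuming a pre-trained MobileNetV2 backbone, a 160x160 input size, and a 5-class target task; all of these are illustrative choices:

```python
# Illustrative sketch: transfer learning with a frozen pre-trained backbone.
# MobileNetV2, the 160x160 input size, and the 5-class head are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # new task-specific head
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train_ds/val_ds: hypothetical target-domain datasets
```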

Textbooks:
  1. "Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow" by Aurélien Géron
  2. "Pattern Recognition and Machine Learning" by Christopher M. Bishop
  3. "Deep Learning" by Ian Goodfellow, Yoshua Bengio, and Aaron Courville

Reference Books:
  1. "Interpretable Machine Learning" by Christoph Molnar
  2. "Reinforcement Learning: An Introduction" by Richard S. Sutton and Andrew G. Barto
  3. "Natural Language Processing in Action" by Lane, Howard, and Hapke
