ML201 | Semester 3 | 3 (3-0-0) | Major

Fundamentals of AI & Game Theory


Syllabus


Unit 1: Introduction to Artificial Intelligence

History and foundations of AI, The Turing Test and Chinese Room Argument. Intelligent Agents: Agents and Environments, The Concept of Rationality, The Nature of Environments (Fully/Partially observable, Deterministic/Stochastic), Structure of Agents (Simple Reflex, Model-based, Goal-based, Utility-based).
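To make the agent structures above concrete, here is a minimal sketch of a simple reflex agent for the classic two-square vacuum world (the percept format and action names follow the standard textbook example; this is an illustration, not a required implementation):

```python
def simple_reflex_vacuum(percept):
    # A simple reflex agent: the action depends only on the current percept,
    # not on any internal model or history.
    # percept: (location, status) for the two-square vacuum world.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(simple_reflex_vacuum(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum(("A", "Clean")))  # Right
```

Because it keeps no state, such an agent works only when the correct action is fully determined by the current percept; model-based agents address partially observable environments by maintaining internal state.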


Unit 2: Problem Solving by Search

Problem-Solving Agents, Formulating Problems, Example Problems (8-Puzzle, TSP). Uninformed Search Strategies: Breadth-First Search (BFS), Depth-First Search (DFS), Depth-Limited Search, Iterative Deepening DFS. Informed (Heuristic) Search Strategies: Greedy Best-First Search, A* Search (Optimality and Completeness), Heuristic Functions (Admissibility and Consistency). Local Search Algorithms: Hill-climbing, Simulated Annealing.
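As a rough illustration of the informed search strategies above, a minimal A* sketch on a toy graph (the graph, step costs, and heuristic values are invented for illustration; the heuristic is admissible by construction, which is what guarantees optimality):

```python
import heapq

def a_star(graph, h, start, goal):
    # graph: dict mapping node -> list of (neighbor, step_cost)
    # h: dict mapping node -> heuristic estimate of remaining cost to goal
    frontier = [(h[start], 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Toy problem: S is the start, G the goal; h never overestimates (admissible).
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'A', 'B', 'G'] 4
```

Setting h to zero everywhere reduces this to uniform-cost search; using only h to order the frontier gives greedy best-first search, which is faster but not optimal.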


Unit 3: Game Theory and Adversarial Search

Games as Search Problems. Zero-Sum Games. Perfect Information Games: The Minimax Algorithm, Optimal Decisions in Games. Alpha-Beta Pruning (optimization). Imperfect Real-Time Decisions. Introduction to General Game Playing. Non-Zero-Sum Games: Prisoner's Dilemma, Nash Equilibrium, Cooperative vs. Non-Cooperative Games.
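The minimax algorithm with alpha-beta pruning can be sketched in a few lines; the game tree below is a made-up example with payoffs for the MAX player at the leaves:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    # children(node) -> list of successor states; value(node) -> leaf payoff.
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       children, value))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: MIN would never let play reach here
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                   children, value))
        beta = min(beta, best)
        if alpha >= beta:
            break  # alpha cutoff: MAX already has a better option elsewhere
    return best

# Toy tree as nested lists: MAX chooses a branch, MIN then picks a leaf.
tree = [[3, 5], [2, 9], [0, 1]]
children = lambda n: n if isinstance(n, list) else []
value = lambda n: n
result = alphabeta(tree, 2, float("-inf"), float("inf"), True, children, value)
print(result)  # 3
```

The cutoffs never change the minimax value; they only skip branches that provably cannot affect the decision at the root, which is why alpha-beta is described as an optimization.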


Unit 4: Knowledge Representation and Reasoning

Logical Agents. Propositional Logic: Syntax and Semantics, Entailment, Inference rules. First-Order Logic (FOL): Syntax and Semantics, Quantifiers (Universal, Existential). Inference in FOL: Forward Chaining, Backward Chaining, and Resolution. Knowledge Engineering and Ontologies.
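Forward chaining over propositional Horn clauses, one of the inference procedures listed above, can be sketched briefly (the rule base here is a made-up example; each rule is a list of premise atoms and a single conclusion atom):

```python
def forward_chain(rules, facts, query):
    # rules: list of (premises, conclusion) Horn clauses over atoms (strings).
    # Repeatedly fire any rule whose premises are all known, until fixpoint.
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return query in known

rules = [
    (["A", "B"], "C"),  # A and B entail C
    (["C"], "D"),       # C entails D
]
print(forward_chain(rules, ["A", "B"], "D"))  # True
```

Backward chaining runs the same rules in the other direction, starting from the query and recursively trying to prove each premise.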


Unit 5: Uncertainty and Utility Theory

Quantifying Uncertainty: Probability axioms, Random Variables, Independence, Bayes' Rule and its applications. Making Simple Decisions: Utility Theory, Utility Functions, Decision Networks, The Value of Information. Introduction to Markov Decision Process (MDP) basics (States, Actions, Rewards).
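A standard worked application of Bayes' rule is updating a belief after a noisy test; the numbers below (1% prior, 90% sensitivity, 5% false-positive rate) are hypothetical:

```python
def bayes_posterior(prior, sensitivity, false_positive):
    # P(H | positive) = P(positive | H) P(H) / P(positive),
    # where P(positive) is computed by total probability over H and not-H.
    p_positive = sensitivity * prior + false_positive * (1 - prior)
    return sensitivity * prior / p_positive

posterior = bayes_posterior(prior=0.01, sensitivity=0.9, false_positive=0.05)
print(round(posterior, 3))  # 0.154
```

Even with a fairly accurate test, the posterior stays low because the prior is small; this base-rate effect is the usual motivation for teaching Bayes' rule alongside decision-making under uncertainty.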