1. CS194-10 Fall 2011 Introduction to Machine Learning
Machine Learning: An Overview

2. Course outline

• Overview of machine learning (today)
• Classical supervised learning
  – Linear regression, perceptrons, neural nets, SVMs, decision trees, nearest neighbors
• Learning probabilistic models
  – Probabilistic classifiers (logistic regression, etc.)
  – Unsupervised learning, density estimation, EM
  – Bayes net learning
  – Time series models

3. Learning is….

… a computational process for improving performance based on experience

6. Learning: Why?

• The baby, assailed by eyes, ears, nose, skin, and entrails at once, feels it all as one great blooming, buzzing confusion …
  – [William James, 1890]

Learning is essential for unknown environments, i.e., when the designer lacks knowledge

7. Learning: Why?

• Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets.
  – [Alan Turing, 1950]

• Learning is useful as a system construction method, i.e., expose the system to reality rather than trying to write it down

8. Structure of a learning agent

[figure-only slide: diagram of the learning agent's architecture]

9. Design of learning element

• Key questions:
  – What agent design will achieve the desired performance?
  – Which component of the agent system is to be improved, and how is that component represented?
  – What data are available that are relevant to that component? (In particular, do we know the right answers?)
  – What knowledge is already available?

10. Examples

Agent design                     | Component                          | Representation            | Feedback              | Knowledge
Alpha-beta search                | Evaluation function                | Linear polynomial         | Win/loss              | Rules of game; coefficient signs
Logical planning agent           | Transition model (observable envt) | Successor-state axioms    | Action outcomes       | Available actions; argument types
Utility-based patient monitor    | Physiology/sensor model            | Dynamic Bayesian network  | Observation sequences | Gen physiology; sensor design
Satellite image pixel classifier | Classifier (policy)                | Markov random field       | Partial labels        | Coastline; continuity scales

Supervised learning: correct answers for each training instance
Reinforcement learning: reward sequence, no correct answers
Unsupervised learning: “just make sense of the data”
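As a hypothetical illustration (names and values invented here, not from the slides), the training data for the three regimes differ in what each instance carries:

    # Illustrative data shapes only; nothing here is from the lecture.
    supervised = [((0.2, 0.7), "llama"), ((0.9, 0.1), "giraffe")]  # (x_j, y_j) pairs: correct answers given
    reinforcement = [("s0", "a0", -1.0), ("s1", "a1", 5.0)]        # (state, action, reward): no correct answers
    unsupervised = [(0.2, 0.7), (0.9, 0.1), (0.5, 0.5)]            # inputs only: "just make sense of the data"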

11. Supervised learning

• To learn an unknown target function f
• Input: a training set of labeled examples (x_j, y_j) where y_j = f(x_j)
• E.g., x_j is an image, f(x_j) is the label “giraffe”
• E.g., x_j is a seismic signal, f(x_j) is the label “explosion”
• Output: a hypothesis h that is “close” to f, i.e., that predicts well on unseen examples (the “test set”)
• Many possible hypothesis families for h
  – Linear models, logistic regression, neural networks, decision trees, examples (nearest-neighbor), grammars, kernelized separators, etc.
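A minimal Python sketch of this setup (invented 2-D data, not from the lecture): the hypothesis h is a 1-nearest-neighbor rule, one of the families listed above, learned from labeled pairs (x_j, y_j) and scored on a held-out test set.

    # Minimal supervised-learning sketch: learn h from (x_j, y_j) pairs,
    # then check how well h predicts on unseen examples (the test set).
    import math

    train = [((1.0, 1.0), "giraffe"), ((1.2, 0.9), "giraffe"),
             ((4.0, 3.8), "llama"), ((4.2, 4.1), "llama")]

    def h(x):
        """1-nearest-neighbor hypothesis: the label of the closest training x_j."""
        _, label = min(train, key=lambda ex: math.dist(x, ex[0]))
        return label

    test = [((1.1, 1.1), "giraffe"), ((3.9, 4.0), "llama")]
    accuracy = sum(h(x) == y for x, y in test) / len(test)
    print(accuracy)  # h is "close" to f to the extent this is high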


14. Example: object recognition

[figure-only slide: six training images x with their labels f(x): giraffe, giraffe, giraffe, llama, llama, llama]

15. Example: object recognition

[figure-only slide: the same six labeled images, plus a new test image X for which f(X) = ? must be predicted]

16. Example: curve fitting

[figure-only slides 16-19: a sequence of curve-fitting figures]
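These slides are figure-only, but the standard demonstration behind curve fitting can be sketched in a few lines of Python (an assumed reconstruction with invented data, not the lecture's figures): fit polynomials of increasing degree to noisy samples of an unknown function and compare training error with error on held-out points.

    # Curve-fitting sketch: polynomials of increasing degree fit to noisy
    # samples of an unknown target f.  Low degree underfits; high degree
    # drives training error toward 0 while test error grows (overfitting).
    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.sin(2 * np.pi * x)        # the "unknown" target function
    x_train = rng.uniform(0.0, 1.0, 10)
    y_train = f(x_train) + rng.normal(0.0, 0.2, 10)
    x_test = np.linspace(0.0, 1.0, 100)
    y_test = f(x_test)

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(degree, round(float(train_err), 4), round(float(test_err), 4))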


23. Learning

[diagram: data and prior knowledge feed into Learning, which produces knowledge]

Crucial open problem: weak intermediate forms of knowledge that support future generalizations
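One way to make the diagram concrete (a Bayesian reading assumed for illustration, not taken from the slide): prior knowledge is a Beta prior over a coin's heads probability, the data are observed flips, and the resulting knowledge is the posterior.

    # data + prior knowledge -> Learning -> knowledge, in miniature.
    a, b = 2, 2                       # prior knowledge: Beta(2, 2), heads-prob near 0.5
    flips = [1, 1, 0, 1, 1, 1, 0, 1]  # data: observed coin flips (1 = heads)
    a += sum(flips)                   # learning: conjugate Bayesian update
    b += len(flips) - sum(flips)
    print(a / (a + b))                # knowledge: posterior mean ~ 0.67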