You submitted this quiz on Thu 3 Apr 2014 12:18 AM IST. You got a score of 5.00
out of 5.00.
Question 1
In which one of the following figures do you think the hypothesis has overfit the training set?
Score 1.00. Explanation: The hypothesis follows the data points very closely and is highly complicated, indicating that it is overfitting the training set.
Total: 1.00 / 1.00
https://class.coursera.org/ml-005/quiz/feedback?submission_id=828858 1/5
4/3/2014 Quiz Feedback | Coursera
Question 2
In which one of the following figures do you think the hypothesis has underfit the training set?
Total: 1.00 / 1.00
Question 3
You are training a classification model with logistic regression. Which of the following statements are true? Check all that apply.

Statement: Adding many new features to the model helps prevent overfitting on the training set.
Score 0.25. Explanation: Adding many new features gives us more expressive models which are able to better fit our training set. If too many new features are added, this can lead to overfitting of the training set.

Statement: Adding many new features to the model makes it more likely to overfit the training set.
Score 0.25. Explanation: Adding many new features gives us more expressive models which are able to better fit our training set. If too many new features are added, this can lead to overfitting of the training set.

Total: 1.00 / 1.00
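The explanation above can be checked numerically. The sketch below uses made-up synthetic data and plain least-squares polynomial fitting (not logistic regression, and not part of the quiz): adding features, here higher-degree polynomial terms, always drives training error down, which is exactly why too many features invite overfitting.

```python
import numpy as np

# Synthetic 1-D data: a sine curve plus noise (made-up example).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# More features (a higher polynomial degree) give a more expressive model,
# so the training error can only go down, even when the extra flexibility
# is spent fitting noise.
mses = []
for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((y - np.polyval(coeffs, x)) ** 2)
    mses.append(mse)
    print(f"degree {degree}: training MSE = {mse:.4f}")
```

A degree-9 polynomial through 10 points interpolates them almost exactly, so its training error is near zero; that says nothing about how it behaves between or beyond the data points.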
Question 4
Suppose you ran logistic regression twice, once with λ = 0, and once with λ = 1. One of the times, you got parameters θ = [26.29, 65.41]ᵀ, and the other time you got θ = [2.75, 1.32]ᵀ. However, you forgot which value of λ corresponds to which value of θ. Which one do you think corresponds to λ = 1?
Answer: θ = [2.75, 1.32]ᵀ
Score 1.00. Explanation: λ = 1 adds a regularization penalty on large parameter values, so the θ with the smaller entries is the regularized fit.
Total: 1.00 / 1.00
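The shrinkage behind this answer is easy to reproduce. Below is a minimal NumPy sketch (not the course's Octave code) of L2-regularized logistic regression by gradient descent on made-up data, following the course convention of leaving the intercept θ₀ unpenalized; the fitted parameters are smaller with λ = 1 than with λ = 0.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lam, lr=0.1, steps=5000):
    """Gradient descent on the L2-regularized logistic-regression cost.
    Following the course convention, the intercept theta_0 is not penalized."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        grad[1:] += (lam / m) * theta[1:]  # regularize all but theta_0
        theta -= lr * grad
    return theta

# Made-up 1-D classification data: label is the sign of x plus a little noise.
rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = (x + 0.3 * rng.normal(size=50) > 0).astype(float)
X = np.column_stack([np.ones_like(x), x])  # prepend intercept column

theta_0 = fit_logistic(X, y, lam=0.0)
theta_1 = fit_logistic(X, y, lam=1.0)
print("lambda=0:", theta_0, " lambda=1:", theta_1)
```

With λ = 0 the near-separable data pushes the weights large; with λ = 1 the penalty holds them down, mirroring the [26.29, 65.41]ᵀ versus [2.75, 1.32]ᵀ pattern in the question.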
Question 5
Which of the following statements about regularization are true? Check all that apply.

Statement: Using too large a value of λ can cause your hypothesis to underfit the data.
Score 0.25. Explanation: A large value of λ results in a large regularization penalty and thus a strong preference for simpler models which can underfit the data.

Statement: Using too large a value of λ can cause your hypothesis to overfit the data; this can be avoided by reducing λ.
Score 0.25. Explanation: Using a very large value of λ can lead to underfitting of the training set.

Statement: Using a very large value of λ cannot hurt the performance of your hypothesis; the only reason we do not set λ to be too large is to avoid numerical problems.
Score 0.25. Explanation: Using a very large value of λ can lead to underfitting of the training set.

Total: 1.00 / 1.00