
Weekly Quiz 2 (In Week 3) DM: PGPBABI.O.JUL19A Data Mining
https://olympus.greatlearning.in/courses/6321/quizzes/14521

Due: No due date | Points: 10 | Questions: 15 | Time Limit: 20 Minutes | Allowed Attempts: 2

Dear Participants,

This quiz has 15 questions.

The time limit is 20 Mins.

Kindly go through these guidelines before you attempt the quiz:

Only attempt the quiz when you are prepared and have enough time on your hands to finish it. Please ensure you attempt the quiz well before the due date. No extension will be provided for any quiz once the deadline has passed.
The quiz, once opened, must be completed within the time frame provided. You CANNOT start the quiz, leave it unattended for an extended period of time, and come back later to finish.
Ensure you have a stable internet connection while taking the quiz. Any break in the connection will automatically submit your quiz.
No re-attempts will be provided if the quiz gets submitted for any of the above-mentioned reasons.
If you face any other technical issues on Olympus, please share a screenshot with your Program Manager so that the team can understand and resolve it on priority.
There might be questions that require you to use statistical tools like R and Excel.
(Additional instruction, only if a dataset is present) Please download the dataset/Tableau workbook before you attempt the quiz.

Regards
Program Office


Attempt: Attempt 1 (LATEST) | Time: 18 minutes | Score: 7 out of 10

Answers will be shown after your last attempt

Score for this attempt: 7 out of 10


Submitted Nov 2 at 2:15am
This attempt took 18 minutes.

Question 1 0.5 / 0.5 pts

Decision Trees fall into which category of machine learning techniques?

Supervised Learning

Question 2 0.5 / 0.5 pts

What differentiates Classification Decision Trees from Regression Decision Trees?

Type of dependent variable
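
For reference, a minimal R sketch (the rpart package and the iris data are my choices for illustration, not something the quiz prescribes): the same function fits either kind of tree, and the method simply follows the type of the dependent variable.

    library(rpart)

    # Classification tree: the dependent variable (Species) is a factor
    class_tree <- rpart(Species ~ ., data = iris, method = "class")

    # Regression tree: the dependent variable (Sepal.Length) is numeric
    reg_tree <- rpart(Sepal.Length ~ ., data = iris, method = "anova")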

Incorrect Question 3 0 / 0.5 pts

Which of the following is true of the difference between CART and CHAID decision trees?

a) CHAID requires independent variables to be categorical, whereas CART has no such prerequisite

b) CHAID uses the p-value of the chi-square test as its splitting criterion, whereas CART uses the Gini index

c) Pruning is used in CART but not in CHAID

a, b & c

Incorrect Question 4 0 / 1 pts

The measure of the likelihood that a randomly chosen element from the set will be labeled/predicted incorrectly, based on the distribution of the labels/classes in the node, is called

All of the above
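
The measure described in this question corresponds to the Gini impurity of a node, Gini(node) = 1 - sum_k p_k^2, where p_k is the proportion of class k in the node. A small R sketch (the function name gini_impurity is mine, for illustration only):

    gini_impurity <- function(labels) {
      p <- prop.table(table(labels))   # class proportions within the node
      1 - sum(p^2)
    }

    gini_impurity(c("yes", "yes", "no", "no"))    # 0.5, the maximum for two classes
    gini_impurity(c("yes", "yes", "yes", "yes"))  # 0, a pure node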

Incorrect Question 5 0 / 0.5 pts

In CART, the splitting criterion is decided in such a way that the net Gini Index across the nodes

reduced by at least 'x', x being predefined
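
As a sketch of how CART scores a candidate split (reusing the illustrative gini_impurity() from Question 4): the chosen split is the one that maximises the drop from the parent's Gini to the weighted Gini of the child nodes.

    parent <- c("yes", "yes", "yes", "no", "no", "no")
    left   <- c("yes", "yes", "yes")   # candidate left child
    right  <- c("no", "no", "no")      # candidate right child

    w_left  <- length(left)  / length(parent)
    w_right <- length(right) / length(parent)

    gini_reduction <- gini_impurity(parent) -
      (w_left * gini_impurity(left) + w_right * gini_impurity(right))
    gini_reduction   # 0.5 here: a perfect split removes all impurity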

Question 6 0.5 / 0.5 pts

Overfitting happens when the model:

a) has high complexity and captures both information & noise

b) has good performance on training dataset but relatively poor on testing dataset

both a & b

Question 7 0.5 / 0.5 pts

Pruning is when decision trees are stopped from splitting beyond a certain level in order to:

a) avoid overfitting of the model

b) ensure at least 'x' amount of reduction in error/impurity from the root node to the child nodes

Which of the above is true?

both a & b

Question 8 1 / 1 pts

The minimum required reduction in error/impurity used as the threshold when pruning decision trees is called

both of the above

Question 9 1 / 1 pts

What is the criterion to decide where to stop splitting the tree further or where to prune the decision tree?

decrease in relative error is less than alpha
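
A hedged sketch of how this looks with rpart's cost-complexity pruning: grow a large tree, inspect the cross-validated error in the cp table, and prune back at the cp value beyond which further splits no longer reduce the relative error by enough.

    library(rpart)

    full_tree <- rpart(Species ~ ., data = iris, method = "class",
                       control = rpart.control(cp = 0))

    printcp(full_tree)   # cp table with rel error and cross-validated xerror per split

    # Prune at the cp value with the lowest cross-validated error
    best_cp <- full_tree$cptable[which.min(full_tree$cptable[, "xerror"]), "CP"]
    pruned  <- prune(full_tree, cp = best_cp)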

Question 10 1 / 1 pts

Which of the following do you expect to be the least interpretable?

Random Forest

Question 11 0.5 / 0.5 pts

Random Forest is an ensemble modelling technique.

True

Question 12 0.5 / 0.5 pts

Random Forest can be used as a

Both
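
A minimal R sketch (assuming the randomForest package, my choice for illustration): the same function behaves as a classifier when the response is a factor and as a regressor when it is numeric.

    library(randomForest)

    rf_class <- randomForest(Species ~ ., data = iris)        # classification (factor response)
    rf_reg   <- randomForest(Sepal.Length ~ ., data = iris)   # regression (numeric response)

    rf_class$type   # "classification"
    rf_reg$type     # "regression"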

Incorrect Question 13 0 / 1 pts

In Random Forest, each individual tree is based on the data that has

a subset of the rows present in the original training data

Question 14 0.5 / 0.5 pts

The process of random sampling with replacement from the original dataset to create multiple models is
called

Bagging
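
A hedged sketch of bagging done by hand in R (rpart and iris are illustrative choices): each model is fit on a bootstrap sample, i.e. rows drawn from the original data with replacement, and the predictions are combined by majority vote.

    library(rpart)

    set.seed(1)
    bagged_trees <- lapply(1:25, function(b) {
      boot_idx <- sample(nrow(iris), replace = TRUE)   # bootstrap: sample rows with replacement
      rpart(Species ~ ., data = iris[boot_idx, ], method = "class")
    })

    # Combine the bagged trees by majority vote
    votes <- sapply(bagged_trees, function(fit)
      as.character(predict(fit, newdata = iris, type = "class")))
    bagged_pred <- apply(votes, 1, function(v) names(which.max(table(v))))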

Question 15 0.5 / 0.5 pts

In a given dataset, there are M columns. Out of these M, m columns are chosen each time to create the training sample for the individual trees in a Random Forest. What will happen if

a) m is almost equal to M?

b) m is very small?

a) will result in high correlation among the individual trees, resulting in a lack of diversity, and b) will result in very weak individual trees
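
A hedged way to see this trade-off with randomForest (the package and the use of iris, which has M = 4 predictors, are my assumptions): mtry plays the role of m, and the out-of-bag error shows where the balance between tree strength and tree diversity lands.

    library(randomForest)

    set.seed(7)
    for (m in c(1, 2, 4)) {   # 4 = M for iris, so m = 4 means every tree sees all columns
      rf <- randomForest(Species ~ ., data = iris, mtry = m, ntree = 300)
      cat("mtry =", m, " OOB error =", rf$err.rate[300, "OOB"], "\n")
    }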

Quiz Score: 7 out of 10
