
About the exam

 

Dear Participant,

Greetings! You have completed the "Final Exam". At this juncture, it is important for you to understand your strengths and focus on them to achieve the best results. We present here a snapshot of your performance in the "Final Exam" in terms of the marks scored by you in each section, your question-wise response pattern, and a difficulty-wise analysis of your performance.

This Report consists of the following sections that can be accessed using the left navigation panel:

Overall Performance: This part of the report shows a summary of the marks scored by you across all sections of the exam and a comparison of your performance across all sections.

Section-wise Performance: You can click on a section name in the left navigation panel to check your performance in that section. Section-wise performance includes the details of your response to each question and a difficulty-wise analysis of your performance for that section.

NOTE : For Short Answer, Subjective, Typing and Programming type questions, the participant will not be able to view the Bar Chart report in the Performance Analysis.

 

Subject   Questions Attempted   Correct   Score
Final     40/99                 31        31

 

Marks Obtained Subject Wise (pie chart): Final, 100%

 

NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all subjects contain 0 marks.

Final

 

The Final section comprises a total of 99 questions with the following difficulty level distribution:

Difficulty Level   No. of questions
Easy               0
Moderate           99
Hard               0

 

Question wise details

 

Please click on a question to view its detailed analysis.

 

  

= Not Evaluated
= Evaluated
= Correct
= Incorrect
= Not Attempted
= Marked For Review
= Correct Option
= Your Option

Question Details

 

  
 

Q1.Key/Value is considered as the Hadoop format.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

Option 1 : True
Option 2 : False
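Hadoop's map and reduce stages exchange data as key/value pairs. A minimal plain-Python sketch (an illustration only, not the Hadoop Java API) of a word-count mapper emitting such pairs:

```python
# Illustration: a mapper turns each input line into a list of (key, value)
# pairs; for word count, the key is the word and the value is 1.
def mapper(line):
    return [(word, 1) for word in line.split()]

pairs = mapper("big data big cluster")
print(pairs)  # [('big', 1), ('data', 1), ('big', 1), ('cluster', 1)]
```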

 

Q2.What kind of servers are used for creating a Hadoop cluster?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : Server grade machines.
Option 2 : Commodity hardware.
Option 3 : Only supercomputers
Option 4 : None of the above.

 

  

Q3.Hadoop was developed by:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : Doug Cutting
Option 2 : Lars George
Option 3 : Tom White
Option 4 : Eric Sammer

 

  

Q4.One of the features of Hadoop is that you can achieve parallelism.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : False
Option 2 : True

 

  

Q5.Hadoop can only work with structured data.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : False
Option 2 : True

 

  

Q6.Hadoop cluster can scale out:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2

 

Option 1 : By upgrading existing servers
Option 2 : By increasing the area of the cluster.
Option 3 : By downgrading existing servers
Option 4 : By adding more hardware

 

  

Q7.Hadoop can solve only use cases involving data from Social media.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : True
Option 2 : False

 

  

Q8.Hadoop can be utilized for demographic analysis.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : True
Option 2 : False

 

  
 

Q9.Hadoop is inspired by which file system?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : AFS
Option 2 : GFS
Option 3 : MPP
Option 4 : None of the above.

 

  
 

Q10.For Apache Hadoop, one needs licensing before leveraging it.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : True
Option 2 : False

 

  
 

Q11.HDFS runs in the same namespace as that of local filesystem.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : False
Option 2 : True

 

  

Q12.HDFS follows a master-slave architecture.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : False
Option 2 : True

 

  
 

Q13.Namenode only responds to:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : FTP calls
Option 2 : SFTP calls
Option 3 : RPC calls
Option 4 : MPP calls

 

  
 

Q14.Perfect balancing can be achieved in a Hadoop cluster.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : False
Option 2 : True

 

  
 

Q15.What does the Namenode periodically expect from Datanodes?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : EditLogs
Option 2 : Block report and Status
Option 3 : FSImages
Option 4 : None of the above

 

  

Q16.After a client requests the JobTracker to run an application, whom does the JobTracker contact?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3

 

Option 1 : DataNodes
Option 2 : Tasktracker
Option 3 : Namenode
Option 4 : None of the above.

 

  

Q17.Interaction with HDFS is done through which script?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Fsadmin
Option 2 : Hive
Option 3 : Mapreduce
Option 4 : Hadoop

 

  

Q18.What is the usage of put command in HDFS?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : It deletes files from one file system to another.
Option 2 : It copies files from one file system to another.
Option 3 : It puts configuration parameters in configuration files.
Option 4 : None of the above.

 

  
 

Q19.Each directory or file has three kinds of permissions:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : read, write, execute
Option 2 : read, write, run
Option 3 : read, write, append
Option 4 : read, write, update
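HDFS borrows the POSIX-style read/write/execute permission model. A small sketch (plain Python, written for this report as an illustration) that decodes an octal mode such as 754 into the familiar rwx string:

```python
# Illustration: decode an octal permission mode into owner/group/other
# read-write-execute flags, as shown by `hadoop fs -ls` or `ls -l`.
def decode(mode):
    out = []
    for shift in (6, 3, 0):          # owner, group, other
        digit = (mode >> shift) & 7
        out.append("r" if digit & 4 else "-")
        out.append("w" if digit & 2 else "-")
        out.append("x" if digit & 1 else "-")
    return "".join(out)

print(decode(0o754))  # rwxr-xr--
```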

 

  
 

Q20.Mapper output is written to HDFS.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : False
Option 2 : True

 

  
 

Q21.A Reducer writes its output in what format?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Key/Value
Option 2 : Text files
Option 3 : Sequence files
Option 4 : None of the above
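A reducer's output is also key/value: it receives a key with its grouped values and emits a result pair. A plain-Python sketch for word count (illustration only, not the Hadoop API):

```python
# Illustration: a reducer takes one key and the list of values grouped under
# it, and emits a (key, aggregate) output pair.
def reducer(key, values):
    return (key, sum(values))

print(reducer("big", [1, 1, 1]))  # ('big', 3)
```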

 

Q22.Which of the following is a prerequisite for Hadoop cluster installation?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3

 

Option 1 : Gather Hardware requirement
Option 2 : Gather network requirement
Option 3 : Both
Option 4 : None of the above

 

  
 

Q23.Nagios and Ganglia are tools provided by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : None of the above

 

  
 

Q24.Which of the following are Cloudera management services?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Activity Monitor
Option 2 : Host Monitor
Option 3 : Both
Option 4 : None of the above

 

  
 

Q25.Which of the following is used to collect information about activities running in a hadoop cluster?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Report Manager
Option 2 : Cloudera Navigator
Option 3 : Activity Monitor
Option 4 : All of the above

 

  

Q26.Which of the following aggregates events and makes them available for alerting and searching?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : Event Server
Option 2 : Host Monitor
Option 3 : Activity Monitor
Option 4 : None of the above

 

  

Q27.Which tab in the Cloudera Manager is used to add a service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Hosts
Option 2 : Activities
Option 3 : Services
Option 4 : None of the above

 

Q28.Which of the following provides http access to HDFS?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 3

 
Option 1 : HttpsFS
Option 2 : Name Node
Option 3 : Data Node
Option 4 : All of the above

 

  
 

Q29.Which of the following is used to balance a load in case of addition of a new node and in case of a failure?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : None of the above

 

  
 

Q30.Which of the following is used to designate a host for a particular service?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : Gateway
Option 2 : Balancer
Option 3 : Secondary Name Node
Option 4 : All of the above

 

  
 

Q31.Which of the following are the configuration files?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Core-site.xml
Option 2 : Hdfs-site.xml
Option 3 : Both
Option 4 : None of the above
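Both files are XML property lists read at startup. A minimal sketch of a core-site.xml (the host and port below are placeholder values, not taken from this exam):

```xml
<!-- Minimal core-site.xml sketch; namenode-host:8020 is a placeholder. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode-host:8020</value>
  </property>
</configuration>
```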

 

  

Q32.Which are the commercial leading Hadoop distributors in the market?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Cloudera, Intel, MapR
Option 2 : MapR, Cloudera, Teradata
Option 3 : Hortonworks, IBM, Cloudera
Option 4 : MapR, Hortonworks, Cloudera

 

  

Q33.What are the core Apache components enclosed in its bundle?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : HDFS, Map-reduce, YARN, Hadoop Commons
Option 2 : HDFS, NFS, Combiners, Utility Package
Option 3 : HDFS, Map-reduce, Hadoop core
Option 4 : MapR-FS, Map-reduce, YARN, Hadoop Commons

 

  

Q34.Apart from its basic components Apache Hadoop also provides:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Apache Hive
Option 2 : Apache Pig
Option 3 : Apache Zookeeper
Option 4 : All the above

 

  
 

Q35.Rolling upgrades is not possible in which of the following?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Possible in all of the above

 

  
 

Q36.In which of the following is HBase latency low with respect to the others:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : IBM BigInsights

 

  
 

Q37.MetaData Replication is possible in:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Cloudera
Option 2 : Hortonworks
Option 3 : MapR
Option 4 : Teradata

 

  

Q38.Disaster recovery management is not handled by:

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2

 
Option 1 : Hortonworks
Option 2 : MapR
Option 3 : Cloudera
Option 4 : Amazon Web Services EMR

 

  

Q39.Mirroring concept is possible in Cloudera.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : True
Option 2 : False

 

  

Q40.Does MapR support only Streaming Data Ingestion?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1

 
Option 1 : True
Option 2 : False

 

  
 

Q41.HCatalog is an open-source metadata framework developed by:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Cloudera
Option 2 : MapR
Option 3 : Hortonworks
Option 4 : Amazon EMR

 

  
 

Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the Media and Telecommunications industry.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : True
Option 2 : False

 

  
 

Q43.What is the correct sequence of Big Data Analytics stages?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Big Data Production > Big Data Consumption > Big Data Management
Option 2 : Big Data Management > Big Data Production > Big Data Consumption
Option 3 : Big Data Production > Big Data Management > Big Data Consumption
Option 4 : None of these

 

Q44.Big Data Consumption involves:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Mining
Option 2 : Analytic
Option 3 : Search and Enrichment
Option 4 : All of the above

 

  
 

Q45.Big Data Integration and Data Mining are the phases of Big Data Management.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 1

 
Option 1 : True
Option 2 : False

 

  
 

Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a big data environment.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : True
Option 2 : False

 

  

Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 4

 

Option 1 : XML/JSON type of data
Option 2 : RDBMS
Option 3 : Semi-structured data
Option 4 : None of the above

 

  
 

Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : Map-reduce
Option 2 : Hive
Option 3 : Impala
Option 4 : None of the above

 

  
 

Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : True
Option 2 : False

 

  
 

Q50.Which of the following are the features of Map-reduce?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 4

 

Option 1 : Automatic parallelization and distribution
Option 2 : Fault-Tolerance
Option 3 : Platform independent
Option 4 : All of the above

 

  
 

Q51.Where does the intermediate output of the mapper get written?

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 4

 
Option 1 : Local disk of the node where it is executed.
Option 2 : HDFS of the node where it is executed.
Option 3 : On a remote server outside the cluster.
Option 4 : Mapper output gets written to the local disk of the Name node machine.

 

  
 

Q52.Reducer is required in map-reduce job for:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : It combines all the intermediate data collected from mappers.
Option 2 : It reduces the amount of data by half of what is supplied to it.
Option 3 : Both a and b
Option 4 : None of the above

 

  
 

Q53.Output of every map is passed to which component?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : Partitioner
Option 2 : Combiner
Option 3 : Mapper
Option 4 : None of the above
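A combiner pre-aggregates map output locally before it is sent over the network. A plain-Python sketch of that local aggregation (illustration only, not the Hadoop API):

```python
# Illustration: collapse repeated keys in map output on the map side,
# e.g. [('big', 1), ('big', 1)] -> [('big', 2)], shrinking shuffle traffic.
def combiner(pairs):
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return sorted(totals.items())

print(combiner([("big", 1), ("data", 1), ("big", 1)]))  # [('big', 2), ('data', 1)]
```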

 

  
 

Q54.Data Locality concept is used for:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Localizing data
Option 2 : Avoiding network traffic in hadoop system
Option 3 : Both A and B
Option 4 : None of the above

 

  
 

Q55.The number of files in the output of a map-reduce job depends on:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : Number of reducers used for the process
Option 2 : Size of the data
Option 3 : Both A and B
Option 4 : None of the above

 

  
 

Q56.Input format of the map-reduce job is specified in which class?

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 3

 

Option 1 : Combiner class
Option 2 : Reducer class
Option 3 : Mapper class
Option 4 : Any of the above

 

  

Q57.The intermediate keys, and their value lists, are passed to the Reducer in sorted key order.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : True
Option 2 : False
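The shuffle-and-sort step behind this question can be sketched in plain Python (illustration only, not the Hadoop API): intermediate pairs are sorted by key and grouped, so the reducer sees keys in sorted order with their value lists:

```python
# Illustration of shuffle-and-sort: group intermediate (key, value) pairs
# by key and hand them over in sorted key order.
from itertools import groupby
from operator import itemgetter

pairs = [("b", 1), ("a", 1), ("b", 1)]
grouped = [(k, [v for _, v in g])
           for k, g in groupby(sorted(pairs), key=itemgetter(0))]
print(grouped)  # [('a', [1]), ('b', [1, 1])]
```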

 

  

Q58.In which stage of the map-reduce job data is transferred between mapper and reducer?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Transfer
Option 2 : Combiner
Option 3 : Distributed Cache
Option 4 : Shuffle and Sort

 

  

Q59.Maximum three reducers can run at any time in a MapReduce Job.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : True
Option 2 : False

 

Q60.The functionality of the JobTracker is to:

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 1

 
Option 1 : Coordinate the job run
Option 2 : Sorting the output
Option 3 : Both A and B
Option 4 : None of the above

 

  
 

Q61.The submit() method on Job creates an internal JobSubmitter instance and calls ___ on it.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : jobSubmitInternal()
Option 2 : internalJobSubmit()
Option 3 : submitJobInternal()
Option 4 : None of these

 

  
 

Q62.Which method polls the job's progress and after how many seconds?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : WaitForCompletion() and after each second
Option 2 : WaitForCompletion() after every 15 seconds
Option 3 : Not possible to poll
Option 4 : None of the above

 

  
 

Q63.Job Submitter tells the task tracker that the job is ready for execution.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : True
Option 2 : False

 

  

Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on hadoop cluster.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : True
Option 2 : False

 

  

Q65.Map and Reduce tasks are created in job initialization phase.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : True
Option 2 : False

 

  

Q66.Based on heartbeats received, after how many seconds does the JobTracker decide on the health of a TaskTracker?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : After every 3 seconds
Option 2 : After every 1 second
Option 3 : After every 60 seconds
Option 4 : None of the above

 

  

Q67.The Task tracker has a fixed number of slots assigned for map and reduce tasks.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : True
Option 2 : False

 

  

Q68.To improve the performance of the map-reduce task, the jar that contains the map-reduce code is pushed to each slave node over HTTP.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : True
Option 2 : False

 

  

Q69.Map-reduce can take which type of format as input?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : Text
Option 2 : CSV
Option 3 : Arbitrary
Option 4 : None of these

 

  
 

Q70.Input files for map-reduce can be located in HDFS or on the local system.

Difficulty Level : Moderate

Status : Incorrect

Marks Obtained : 0

Response : 2

 
Option 1 : True
Option 2 : False

 

  
 

Q71.Is there any default InputFormat for input files in map-reduce process?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 

Option 1 : KeyValueInputFormat
Option 2 : TextInputFormat
Option 3 : A and B
Option 4 : None of these

 

  
 

Q72.An InputFormat is a class that provides the following functionality:

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Selects the files or other objects that should be used for input
Option 2 : Defines the InputSplits that break a file into tasks
Option 3 : Provides a factory for RecordReader objects that read the file
Option 4 : All of the above

 

Q73.An InputSplit describes a unit of work that comprises ___ map task(s) in a MapReduce program.

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

 
Option 1 : One
Option 2 : Two
Option 3 : Three
Option 4 : None of these

 

  
 

Q74.The FileInputFormat and its descendants break a file up into ___ MB chunks.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1

Response : 2

 

Option 1 : 128
Option 2 : 64
Option 3 : 32
Option 4 : 256
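With the 64 MB chunk size this question assumes, the number of input splits for a file is its size divided by the chunk size, rounded up. A small sketch (illustration only, ignoring Hadoop's min/max split-size settings):

```python
# Illustration: number of 64 MB input splits for a file of a given size.
import math

CHUNK_MB = 64

def num_splits(file_size_mb):
    return math.ceil(file_size_mb / CHUNK_MB)

print(num_splits(200))  # 4, i.e. 64 + 64 + 64 + 8 MB
```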

 

  
 

Q75.What allows several map tasks to operate on a single file in parallel?

Difficulty Level : Moderate

Status : Unanswered

Marks Obtained : 0

Response :

Option 1 : Processing of a file in chunks
Option 2 : Configuration file properties
Option 3 : Both A and B
Option 4 : None of the above

 

Q76.The Record Reader is invoked ___ on the input until the entire InputSplit has been consumed.

Difficulty Level : Moderate

Status : Correct

Marks Obtained : 1