Big-Data, Hadoop Interview Question-Answer Part – 2

Q.1 What are the main components of the Hadoop Big Data framework?

       A. MapReduce

       B. HDFS

       C. YARN

       D. All of the above

Ans : All of the above

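The answer above groups HDFS (storage), MapReduce (processing) and YARN (resource management). As a rough illustration of how they meet in practice, here is a minimal word-count mapper sketch using the standard org.apache.hadoop.mapreduce API; the class name is illustrative, and the reducer and job driver are omitted (a driver sketch appears after Q.20).

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Minimal sketch: emits (word, 1) for each word in a line read from HDFS.
// YARN schedules the containers in which these map tasks run.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}
```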

Q.2 ________ is a collection of data that is huge in volume, yet growing exponentially with time.

       A. Big Database

       B. Big Data

       C. Big Datafile

       D. Big DBMS

Ans : Big Data


Q.3 What are the different features of Big Data Analytics?

       A. Open-Source

       B. Scalability

       C. Data Recovery

       D. All of the above

Ans : All of the above


Q.4 The DataNode and NameNode in Hadoop are

       A. Worker Node and Master Node respectively

       B. Master Node and Worker Node respectively

       C. Both Master Nodes

       D. Both Worker Nodes

Ans : Worker Node and Master Node respectively

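Since the NameNode is the master that holds the filesystem metadata and the DataNodes are the workers that hold the actual blocks, a small client sketch can make the split concrete. This assumes a reachable HDFS cluster configured through the usual core-site.xml / hdfs-site.xml on the classpath; the directory listed is just the root.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        try (FileSystem fs = FileSystem.get(conf)) {
            // Directory metadata (paths, sizes) is served by the NameNode;
            // block contents would be streamed from DataNodes on read.
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath() + "\t" + status.getLen());
            }
        }
    }
}
```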

Q.5 The size of data in Big Data is of the order of ___________bytes.

       A. Tera

       B. Mega

       C. Giga

       D. Peta

Ans : Peta


Q.6 For what can traditional IT systems provide a foundation when they are integrated with big data technologies like Hadoop?

       A. Big Data management and data mining

       B. Data warehousing and business intelligence

       C. Management of Hadoop clusters

       D. Collecting and storing unstructured data

Ans : Big Data management and data mining


Q.7 Point out the wrong statement

       A. Non-Relational databases require that schemas be defined before you can add data

       B. NoSQL databases are built to allow the insertion of data without a predefined schema

       C. All of the above

       D. NewSQL databases are built to allow the insertion of data without a predefined schema

Ans : Non-Relational databases require that schemas be defined before you can add data

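To illustrate the correct statement here (NoSQL databases allow insertion without a predefined schema), below is a small sketch using the MongoDB Java sync driver as one example of a NoSQL client; the connection string, database and collection names are placeholders.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class SchemalessInsert {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("demo").getCollection("users");
            // No schema is declared first: two documents with different
            // fields can live in the same collection.
            users.insertOne(new Document("name", "Asha").append("age", 30));
            users.insertOne(new Document("name", "Ravi").append("city", "Pune"));
        }
    }
}
```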

Q.8 What are the 7 V’s of big data?

       A. Volume, Velocity

       B. Variety, Variability

       C. Veracity, Visualization, and Value

       D. All of the above

Ans : All of the above


Q.9 IBM and ______ have announced a major initiative to use Hadoop to support university courses in distributed computer programming

       A. Google

       B. Google Latitude

       C. Android (OS)

       D. Google Variations

Ans : Google


Q.10 Hadoop (a big data tool) works with a number of related tools. Choose from the following the common tools included in Hadoop.

       A. MySQL, Google API and MapReduce

       B. MapReduce, Scala and Hummer

       C. MapReduce, HBase and Hive

       D. MapReduce, Hummer and Heron

Ans : MapReduce, HBase and Hive

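As a taste of one of these tools, here is a minimal HBase write sketch using the standard HBase Java client; the table, row and column names are illustrative, and a running HBase cluster reachable through the configuration on the classpath is assumed.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBasePutExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("clicks"))) {
            // Key-value write: row key "row-1", column family "cf", qualifier "url".
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("url"), Bytes.toBytes("/home"));
            table.put(put);
        }
    }
}
```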

Q.11 There are _______ forms of Big Data.

       A. 3

       B. 5

       C. 7

       D. 9

Ans : 7


Q.12 Which company developed Apache Kafka?

       A. Google

       B. LinkedIn

       C. Amazon

       D. Microsoft

Ans : LinkedIn


Q.13 In which year was Apache Kafka developed?

       A. 2011

       B. 2012

       C. 2009

       D. 2014

Ans : 2011

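For context on what Kafka (developed at LinkedIn, open-sourced in 2011) is used for, here is a minimal producer sketch using the standard Kafka Java client; the broker address, topic name and record contents are placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget send; Kafka appends the record to the topic's log.
            producer.send(new ProducerRecord<>("events", "user-42", "page_view"));
        }
    }
}
```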

Q.14 The types of Big Data are _________.

       A. Structured Data

       B. Semi-Structured Data

       C. Unstructured Data

       D. All of the above

Ans : All of the above


Q.15 Which of the following statements are true?
(1) Facebook has the world's largest Hadoop cluster.
(2) Hadoop 2.0 allows live stream processing of real-time data.

       A. Neither (1) nor (2)

       B. Both (1) and (2)

       C. (1) Only

       D. (2) Only

Ans : Both (1) and (2)


Q.16 In Jan-2021, All India Council for Technical Education (AICTE) joined hands with which of the following to train 5 lakh students and faculty on cybersecurity?

       A. Everdata Technologies

       B. Quick Heal Technologies

       C. eRaksha Foundation

       D. Cyber Peace Foundation

Ans : Cyber Peace Foundation


Q.17 Big Data is generally characterised by three Vs that stand for _____, _____ and _____.

       A. Volume, Viscosity, Variety

       B. Variety, Volume, Vivid

       C. Viscosity, Volume, Velocity

       D. Volume, Variety, Velocity

Ans : Volume, Variety, Velocity


Q.18 Which of the following platforms does Hadoop run on?

       A. Bare metal

       B. Debian

       C. Cross-platform

       D. Unix-like

Ans : Cross-platform


Q.19 Hadoop achieves reliability by replicating the data across multiple hosts and hence does not require ______ storage on hosts.

       A. RAID

       B. Standard RAID levels

       C. ZFS

       D. Operating Systems

Ans : RAID

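Replication in HDFS is per-file and can be tuned from the client side. The sketch below (illustrative path and factor) uses the standard FileSystem API to ask the NameNode for three copies of each block, which is what removes the need for RAID on individual hosts.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SetReplication {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            Path file = new Path("/data/events.log"); // placeholder path
            // Ask the NameNode to keep 3 copies of each block of this file,
            // spread across different DataNodes.
            boolean ok = fs.setReplication(file, (short) 3);
            System.out.println("replication updated: " + ok);
        }
    }
}
```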

Q.20 The file system comes with a _____ engine, which consists of one JobTracker, to which client applications submit MapReduce jobs.

       A. MapReduce

       B. Google

       C. Functional programming

       D. Facebook

Ans : MapReduce

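As an illustration of job submission, here is a minimal driver sketch using the org.apache.hadoop.mapreduce Job API; in Hadoop 1.x this submission went to the JobTracker, while Hadoop 2.x replaces that role with YARN's ResourceManager. It reuses the WordCountMapper sketched after Q.1, and the input/output paths are placeholders.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    // Sums the per-word counts emitted by WordCountMapper (Q.1 sketch).
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCountMapper.class); // mapper from the Q.1 sketch
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/input"));    // placeholder paths
        FileOutputFormat.setOutputPath(job, new Path("/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1); // blocks until the job finishes
    }
}
```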
