
Nagendra M

Hadoop Developer
Bangalore, Karnataka - Email me on Indeed: indeed.com/r/Nagendra-M/da8072db63f32412
Around 3 years of experience in the IT industry, including Hadoop development and production support.
1.5 years of extensive experience in Hadoop (HDFS and MapReduce programming) and its ecosystem
(HBase, Hive, Pig).
Over 1.7 years of experience in Java production support.
Experienced in Data Extraction, Transformation and Data Loading in Data Warehouse environment.
Excellent experience in installing, configuring and using ecosystem components such as HBase, Hive, Pig,
Oozie and Sqoop.
Good knowledge of HDFS, NameNode, DataNode, JobTracker and TaskTracker and their working
functionality.
Hands on experience in writing Hive Queries to process data.
Extensive working experience with Hadoop Core Concepts like HDFS and MapReduce.
Hands-on experience in developing and implementing common MapReduce algorithms such as sorting
and searching according to client requirements.
Worked on processing text files, BSON files and semi-structured data using MapReduce.
Worked on storing, merging, moving and retrieving data in HDFS using various Linux commands.
Familiar with configuring and working with Apache Sqoop.
Solid experience in coding using SQL.
Worked on scheduling/monitoring jobs in Unix/Linux Servers.
Logical and analytical, with good interpersonal skills and a commitment to quality work.
Very flexible and can work independently as well as in a team environment.
Willing to update my knowledge and learn new skills according to business requirements.
Currently working as a Software Engineer at Accenture Services Private Limited, Bangalore, India.
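The text-file searching and sorting work described above can be illustrated with a small local sketch; in production this logic would run as a MapReduce job over HDFS, and the file name, field layout and sample data here are assumptions for illustration only, not details from the original projects.

```shell
# Hypothetical semi-structured product records: id|name|price
cat > products.txt <<'EOF'
103|kettle|18.50
101|toaster|25.00
102|blender|32.75
EOF

# "Search" phase: keep records priced above 20
# "Sort" phase: order the survivors numerically by price (field 3)
awk -F'|' '$3 > 20' products.txt | sort -t'|' -k3,3n > result.txt
cat result.txt
```

In a real MapReduce job, the filter would sit in the mapper and the framework's shuffle/sort would replace the explicit `sort` call.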
WORK EXPERIENCE
Hadoop Developer
Accenture - Bangalore, Karnataka - February 2013 to Present
Technologies Used : Hadoop, HDFS, MapReduce, PIG, Hive, Sqoop, MySql, RedHat Linux.
Project Description:
The purpose of the project is to store terabytes of log information generated by the e-commerce website and
extract meaningful information from it. Because the log dataset is so large, the company decided to adopt
Hadoop technologies over its existing systems. The data is stored in the Hadoop file system and processed
using MapReduce jobs, which in turn involves getting the raw HTML data from the websites, processing the
HTML to obtain product and pricing information, extracting various reports from the product pricing
information, and exporting the information for further processing.
Roles and Responsibilities:
Moved flat files generated by various retailers to HDFS for further processing.
Wrote Apache Pig scripts to process the HDFS data.
Developed MapReduce jobs in Java for data cleaning and pre-processing.
Handled bad records while processing big datasets.
Created Hive tables to store the processed results in a tabular format.
Developed Sqoop scripts to exchange data between Pig and the MySQL database.
Developed UNIX shell scripts to create reports from Hive data.
Gathered log file information.
Experienced in managing and reviewing Hadoop log files.
Wrote MapReduce programs over the log files to identify user location/time patterns.
Worked with the Hive database and used partitioning for data retrieval.
Loaded and transformed large sets of structured, semi-structured and unstructured data.
Involved in loading data from the UNIX file system into HDFS.
Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as
MapReduce jobs.
Reviewed HDFS usage and system design for future scalability.
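The bad-record handling in the responsibilities above can be sketched in plain shell; the delimiter, column count, file names and sample rows are illustrative assumptions (in the real pipeline the clean output would then be pushed to HDFS, e.g. with `hadoop fs -put`, before the Pig/Hive steps).

```shell
# Hypothetical retailer flat file: product_id|price|store (3 pipe-delimited fields)
cat > retailer.dat <<'EOF'
1001|19.99|store_a
1002||store_b
1003|24.50|store_c
bad line without delimiters
EOF

# Keep only records with exactly 3 non-empty fields; divert everything else for review
awk -F'|' 'NF == 3 && $1 != "" && $2 != "" && $3 != ""' retailer.dat > clean.dat
awk -F'|' '!(NF == 3 && $1 != "" && $2 != "" && $3 != "")' retailer.dat > bad.dat
```

Splitting rejects into their own file, rather than silently dropping them, keeps the bad records available for later inspection and reprocessing.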
Production Support Engineer
Accenture - Bangalore, Karnataka - March 2012 to January 2013
Project Description:
GALAXY is the global referential system, providing static data as well as market data such as prices for
most of the GIBD (Global Investment Banking Division) and DEAI (Derivatives Action and Index)
information systems in the SG Corporate and Investment Bank. Galaxy receives data from REUTERS and
BLOOMBERG, consolidates it, and creates a golden copy of the data, which is sent to back-office systems
such as BDR. The sponsor of this project is Risk Referential Finance.
Roles and Responsibilities:
As a Production Support Engineer, the primary responsibility is code-level investigation of jobs.
Conducted routine system health checks, script enhancements and database maintenance, and acted
accordingly.
Responsible for resolving user queries raised through various channels such as the Interchange chat
channel and email.
Responsible for providing L1/L2 support, depending on the priority of the issues, to meet the client's
SLA.
Provided status reports on tickets to the department manager.
Attended war-room meeting sessions with other operations and support teams.
Provided support to the client on a 24/7 basis.
Monitored and resolved all P1/P2/P3 tickets in the queue.
Provided status reporting on a weekly basis.
Responsible for executing and monitoring daily, weekly and monthly jobs.
Responsible for scheduled data cleanups.
EDUCATION
B.Tech in Computer Science & Engineering
JNTU - Anantapur, Andhra Pradesh
ADDITIONAL INFORMATION
Technical Skills:
Frameworks: HDFS and MapReduce
Ecosystem Tools: Hive, Sqoop, Pig, HBase, Ganglia.
RDBMS: Oracle 10g, MySQL.
Operating systems: UNIX, Linux and Windows Family.
Languages: C, SQL and Core Java
IDE: Eclipse.
