Prasanna Kumar
Email: rajupr1990@gmail.com
Mobile: +91 7331108854/8884601978
Summary:
 4 years of IT experience in application and Hadoop development.
 Expertise in the Hadoop framework and Big Data concepts (HDFS, MapReduce, Hive, Impala, Pig,
Phoenix, HBase, Sqoop, Flume, Oozie, Spark, Scala, Kudu).
 Experience writing Impala and Hive queries and Pig scripts.
 Experience writing Phoenix scripts and working with HBase.
 Hands-on experience with the MapReduce programming model.
 Good knowledge of Spark, Scala, and Kudu.
 Imported and exported data between HDFS and relational databases using Sqoop (a hedged
sketch follows this list).
 Experience setting up Hadoop clusters and benchmarking and monitoring their performance.
 Good working knowledge of UNIX commands and shell programming.
 Experience in Oracle ADF.
 Experience across the SDLC, including creating and maintaining project documents.
 Excellent communication, analytical, business, and interpersonal skills. Self-motivated, with a
proactive, resourceful approach to problem solving and the ability to work independently and to
lead or be part of a team.
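A minimal sketch of the Sqoop import/export mentioned above, assuming a MySQL database named
crm on host db1 with hypothetical table names (connection details, credentials, and paths vary by
environment):

    # Import a MySQL table into HDFS (host, database, and table are hypothetical).
    sqoop import \
        --connect jdbc:mysql://db1:3306/crm \
        --username etl --password-file /user/etl/.mysql.pw \
        --table customers \
        --target-dir /data/crm/customers \
        --num-mappers 4

    # Export processed results from HDFS back into a MySQL table.
    sqoop export \
        --connect jdbc:mysql://db1:3306/crm \
        --username etl --password-file /user/etl/.mysql.pw \
        --table customer_scores \
        --export-dir /data/crm/customer_scores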
Work Experience:
 Worked as a Software Engineer at TIBCO Software from Dec 2015 to date.
 Worked as a Software Engineer at SLK Software Services from Apr 2014 to Nov 2015.
 Worked as an Associate Software Engineer at 3i InfoTech from Sep 2012 to Mar 2014.
Educational Qualifications:
 B.E./B.Tech (ECE) from Sri Venkateswara University.
Technical Proficiencies:
Frameworks : Hadoop, Oracle ADF
Hadoop Ecosystem : HDFS, Hive, Impala, Phoenix, Pig, MapReduce, Sqoop, Flume
Programming skills : Java, SQL
NoSQL : HBase
J2EE Technologies : Oracle ADF
Servers : Tomcat, WebLogic
Scripting languages : Shell script
DBMS : Oracle and MySQL
Version Tools / IDE : SVN, Eclipse, JDeveloper 11g
Operating Systems : Windows, UNIX
Professional Experience:
Project #1:
Title : Data Science D2O
Client : Nielsen
Environment : Impala, Hive, Phoenix, HDFS, MapReduce, HBase
Role : Developer
Description: This project digitizes data science to align with the Nielsen vision of becoming more
digital. All Data Science tools depend on Global Factory Data being available in NDX. NDX is
uniquely positioned to deliver local and global insights into consumer behavior and product sales
across categories in nearly 100 countries. Nielsen's combination of deep data and insights arms
clients with actionable intelligence for their business planning, and its tools provide clients with
timely, flexible analytics that present a holistic view of the marketplace. The project infuses digital
practices into data science to unleash innovation and provide cutting-edge solutions for clients. The
major implementations in Data Science are CVCalc and Replica.
CVCalc:
 Implemented a new, robust methodology for CV calculation based on a modelling approach (a
hedged formula follows this list).
 Improved data stability by minimizing sample changes during sample redesigns and annual
updates, and improved data precision through more robust variance calculations.
 Integrated the tool with NDX and other tools within it, improving calculation efficiency.
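Assuming CV here denotes the coefficient of variation of a reported estimate (the section does not
spell the formula out, and the modelling details behind the variance are not given), the standard
definition is:

    % Coefficient of variation of an estimate \hat{\theta}; \widehat{Var} is the
    % model-based variance (an assumption, not stated in the source).
    \[
      CV(\hat{\theta}) \;=\; \frac{\sqrt{\widehat{Var}(\hat{\theta})}}{\hat{\theta}} \times 100\%
    \]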
Replica:
 Replica is a tool for calculating Relative Standard Error (RSE) using a variance engine based on
bootstrap principles. It simulates the variation associated with sample participation by forming
replicated samples in which sampling units are given variable weights to participate in the
sample (a hedged formula follows this list).
 The tool allows standard decisions, in compliance with our WBS, on sample design and MBD
reporting, and therefore translates into better quality in client deliverables.
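A minimal formalization of a bootstrap-based RSE, assuming B replicated samples each yield an
estimate of the statistic of interest (standard bootstrap definitions; the engine's exact replication
and weighting scheme is not described here):

    % \hat{\theta}^{*}_{b} is the estimate from the b-th replicated sample.
    \[
      \bar{\theta}^{*} = \frac{1}{B}\sum_{b=1}^{B}\hat{\theta}^{*}_{b},
      \qquad
      \widehat{SE}(\hat{\theta}) = \sqrt{\frac{1}{B-1}\sum_{b=1}^{B}\bigl(\hat{\theta}^{*}_{b}-\bar{\theta}^{*}\bigr)^{2}},
      \qquad
      RSE(\hat{\theta}) = \frac{\widehat{SE}(\hat{\theta})}{\hat{\theta}} \times 100\%.
    \]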
Responsibilities:
 Interacted with the client to gather and understand requirements.
 Wrote Impala queries to load and process data in the Hadoop file system.
 Wrote Phoenix scripts to load and process data in HBase.
 Wrote shell scripts invoked by TIBCO BW.
 Wrote MapReduce code for bulk-loading data into Phoenix (a hedged sketch follows this list).
 Integrated Hive and HBase for the purpose of generating reports.
 Participated in team-level meetings for knowledge transfer.
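A minimal sketch of the bulk-load and Hive/HBase reporting steps, assuming a Phoenix table named
SALES_STATS backed by HBase, a CSV extract already on HDFS, and a ZooKeeper quorum at
zk1:2181 (all names, paths, and jar locations are hypothetical):

    # Bulk-load a CSV extract into a Phoenix table using Phoenix's
    # MapReduce-based CsvBulkLoadTool.
    hadoop jar /opt/phoenix/phoenix-client.jar \
        org.apache.phoenix.mapreduce.CsvBulkLoadTool \
        --table SALES_STATS \
        --input /data/incoming/sales_stats.csv \
        --zookeeper zk1:2181

    # Map the underlying HBase table onto a Hive external table so that
    # reports can be generated with HiveQL.
    hive -e "
      CREATE EXTERNAL TABLE sales_stats_report (rowkey STRING, country STRING, rse DOUBLE)
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
      WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,d:country,d:rse')
      TBLPROPERTIES ('hbase.table.name' = 'SALES_STATS');
    "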
Project #2:
Title : Research and Analysis in Sales and Operations.
Client : Saint-Gobain
Environment : HDFS, MapReduce, Hive, Pig, Sqoop, Cloudera
Role : Developer
Description: The project is based on sales and operations information for business analysts. The
reports show current-month, prior-month, previous-year, and year-to-date sales information for
different business units and products. The flat files come from SAP BI.
Saint-Gobain is a French multinational company founded in 1665 in Paris. Originally a mirror
manufacturer, it now produces a wide variety of construction and high-performance materials. Saint-
Gobain is organized into four major sectors: Building Distribution, Construction Products, Innovative
Materials, and Packaging. Each sector is further organized into Business Units (BUs) that serve
specific markets within each sector.
Responsibilities:
 Analyzed the functional specifications based on project requirements.
 Loaded data from the UNIX file system into HDFS.
 Responsible for uploading datasets into the Hadoop cluster.
 Supported MapReduce programs running on the cluster.
 Created Hive tables, loaded them with data, and wrote Hive queries that run internally as
MapReduce jobs (a hedged sketch follows this list).
 Created partitioned and bucketed tables using data from various regions.
 Developed Sqoop scripts for moving data between HDFS, Hive, and MySQL.
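A minimal sketch of the load-and-query flow, assuming a flat-file extract at /staging/sales.csv and a
region-partitioned sales table (all paths, names, and column layouts are hypothetical):

    # Stage the flat-file extract from the local UNIX file system into HDFS.
    hdfs dfs -mkdir -p /data/staging/sales
    hdfs dfs -put /staging/sales.csv /data/staging/sales/

    # Create a partitioned, bucketed Hive table and run a query over it;
    # on this stack, Hive compiles the query into MapReduce jobs.
    hive -e "
      CREATE TABLE IF NOT EXISTS sales (
        product_id INT, bu STRING, amount DOUBLE, sale_date STRING)
      PARTITIONED BY (region STRING)
      CLUSTERED BY (product_id) INTO 8 BUCKETS
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

      SELECT region, SUM(amount) AS ytd_sales
      FROM sales
      GROUP BY region;
    "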
Project #3:
Title : Global Force Alliance Insurance
Role : Developer
Environment : Oracle ADF, Oracle 10g, JDeveloper 11g
Description: Global Force Alliance is one of the reputed insurance companies in the UK. It provides a
wide range of insurance policies, such as life insurance, fire insurance, motor vehicle insurance, and
accidental and healthcare insurance. The main modules in this project are Production Processing,
Production Definition, Customer Services, Customer Policy, Cashier, Billing, and Underwriting.
Production Processing is the entry point to the business processes. The insurance system automates
the management of insurance activities: branch manager details, agent commission, customer policy
details, and agent details.
Responsibilities:
 Interacted with the client to gather and understand requirements.
 Created the database structure and extended existing objects for customizations.
 Designed and implemented ADF Business Components using EOs, VOs, AMs, VLs, and Associations.
 Developed the user interface using ADF Faces and ADF Task Flows.
 Developed pages using ADF and JSF components.
 Application testing, debugging, and deployment.