Name: NAGESWARA RAO DASARI    Tel No: +91-9035234744 (M)
Email: dnr.nageswara@gmail.com
Career Objective
A challenging and vibrant career in a growing organization, where I can learn and apply my technical skills to contribute to the organization's growth in the field of technology.
Professional Summary
• Software Engineer, Capgemini India Private Limited, Bangalore.
• Overall 3.10 years of experience.
• 2.5 years of work experience in Big Data technologies (Hadoop).
• 1.4 years of work experience in Core Java.
• Highly versatile and experienced in adapting and implementing the latest technologies in new application solutions.
• Hands-on experience in designing and implementing solutions using Apache Hadoop, HDFS, MapReduce, Hive, Pig and Sqoop.
• Knowledge of the Tableau reporting tool.
• Experience in the Agile software development process.
• Strong knowledge of OOP concepts, UML, Core Java and C.
• Good exposure to Windows and Linux platforms.
Technical Summary
• Big Data Ecosystem: Hadoop, MapReduce, Pig, Hive.
• Good Knowledge: HBase, Sqoop, Flume, Oozie, ZooKeeper, Spark and Scala.
• Languages & Frameworks: Core Java, SQL.
• Scripting Languages: JavaScript, Unix shell scripting.
• Web Technologies: HTML, JavaScript, CSS.
• Tools: Eclipse, ClearCase, Git, PuTTY, WinSCP, Maven.
• Databases: Oracle 9i, MySQL.
• Development Methodologies: Agile Scrum.
Educational Summary
• B.Tech in Electrical and Electronics Engineering from JNTU Kakinada.
Assignments
• Banking Customer – Enterprise Data Provisioning Platform
Duration: Jul 2015 – till date
Client: BARCLAYS Bank, UK
Team Size: 31
Designation: Hadoop Developer
Project Description: The Enterprise Data Provisioning Platform (EDPP), the desired build of the Information Excellence project, will allow BARCLAYS to address new business needs and is in line with BARCLAYS's guiding principle of operating with excellence. The primary objective of the EDPP project is to institutionalize a Hadoop platform for data collected within BARCLAYS and make it available for analytics on the collected data.
Environment:
CDH5 distribution, Apache Pig, Hive, Java, Unix, MySQL, Spark and Scala.
Roles & Responsibilities:
• Designed schemas in Hive.
• Moved all the data obtained from different sources into the Hadoop environment.
• Created Hive tables to store the processed results in tabular format.
• Wrote MapReduce programs to process the HDFS data and convert it into a common format (a sketch follows this list).
• Wrote shell scripts to automate the loading process.
• Resolved JIRA tickets.
• Performed unit testing and performance tuning of Hive queries.
• Wrote various Hive queries.
• Involved in client engagements.
• Responsible for conducting scrum meetings.
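As an illustration of the "convert into a common format" step, here is a minimal map-only sketch using Hadoop's MapReduce API. The pipe-delimited input layout, the comma-delimited output and the class name are assumptions for illustration, not details from the project.

    // Minimal sketch: normalize pipe-delimited source records into one common
    // comma-delimited layout so downstream Hive tables can share a schema.
    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class NormalizeMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {

        private final Text out = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed input format: fields separated by '|'; trim each field
            // and re-emit as a comma-separated record.
            String[] fields = value.toString().split("\\|");
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < fields.length; i++) {
                if (i > 0) sb.append(',');
                sb.append(fields[i].trim());
            }
            out.set(sb.toString());
            context.write(NullWritable.get(), out);
        }
    }

A map-only job like this (zero reducers) suffices when each record can be normalized independently; its output files can then back an external Hive table.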
• Retail Customer – TARGET Re-hosting of Web Intelligence Project
Duration: Nov 2014 – Jun 2015
Client: TARGET, USA
Team Size: 15
Designation: Hadoop Developer
Project Description: The purpose of the project is to store terabytes of log information generated by the e-commerce website and extract meaningful information from it. The solution is based on the open-source big data software Hadoop. The data is stored in the Hadoop file system and processed using MapReduce jobs. This in turn includes getting the raw HTML data from the websites, processing the HTML to obtain product and pricing information, extracting various reports from the product-pricing information, and exporting the information for further processing.
This project is mainly a replatforming of the existing system, which runs on WebHarvest (a third-party JAR) with a MySQL DB, onto a new cloud technology, Hadoop, which can process large data sets (terabytes and petabytes of data) to meet the client's requirements in the face of increasing competition from rival retailers.
Environment:
CDH5 distribution, Apache Pig, Hive, Sqoop, Java, Unix, PHP, MySQL.
Roles & Responsibilities:
• Moved all crawl-data flat files generated by various retailers to HDFS for further processing.
• Wrote Apache Pig scripts to process the HDFS data (a sketch of a supporting Java UDF follows this list).
• Created Hive tables to store the processed results in tabular format.
• Developed Sqoop scripts to move data between the Pig pipeline and the MySQL database.
• Involved in resolving Hadoop-related JIRAs.
• Developed Unix shell scripts for creating reports from Hive data.
• Fully involved in the requirement analysis phase.
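Pig scripts that clean crawled HTML commonly lean on a custom Java UDF, which is one plausible way the Java and Pig pieces fit together here. The sketch below uses Pig's EvalFunc API; the StripHtml name and the regex-based tag stripping are illustrative assumptions, not code from the project.

    // Sketch of a Pig UDF that strips HTML tags from a crawled field so that
    // product and pricing values parse cleanly downstream.
    import java.io.IOException;
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    public class StripHtml extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            // Remove markup and collapse whitespace.
            return input.get(0).toString()
                    .replaceAll("<[^>]*>", " ")
                    .replaceAll("\\s+", " ")
                    .trim();
        }
    }

Packaged in a jar and registered with REGISTER, it would be invoked from a Pig script as, for example, B = FOREACH A GENERATE StripHtml(raw_html);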
• Web Application – Intella Sphere
Duration: Dec 2012 – Jun 2014
Environment:
Java, MySQL, MongoDB 2.4.6, Activiti workflow, SVN
Designation: Java Developer
Project Description: The brand essence of Intella Sphere is direct, analytical and engaging. It is all about empowering businesses to gain the intelligence they need to grow and improve their brand in the new age of marketing. Intella Sphere is the ultimate marketing tool, giving a company the devices it needs to gain market share, beat the competition and get true results. Intella Sphere understands these challenges better than anyone and uses experience and innovation to create the right tools for a business to clearly understand its audience and empower it to grow and engage with its community.
Responsibilities:
• Created DAOs for all DB operations using the MongoDB API (a sketch follows this list).
• Worked on the design phase of the application using the Visual Paradigm tool.
• Implemented social OAuth configuration.
• Used social APIs for social networks (Facebook, Twitter, LinkedIn, Blogger, YouTube).
• Implemented Activiti workflows.
• Implemented aggregations for calculating metrics.
• Worked on the MongoDB replica set.
• Worked on development and production environments.
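To illustrate the DAO-over-MongoDB pattern, here is a minimal sketch. It uses the current MongoDB Java sync driver rather than the 2.x-era driver this 2012–2014 project would have used, and the "brands" collection, field names and database name are invented for the example.

    // Minimal DAO sketch over the MongoDB Java sync driver
    // (org.mongodb:mongodb-driver-sync); all names are illustrative.
    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    public class BrandDao {
        private final MongoCollection<Document> collection;

        public BrandDao(MongoClient client, String dbName) {
            this.collection = client.getDatabase(dbName).getCollection("brands");
        }

        // Insert one brand document.
        public void save(String name, String owner) {
            collection.insertOne(new Document("name", name).append("owner", owner));
        }

        // Look up a brand by name; returns null if absent.
        public Document findByName(String name) {
            return collection.find(Filters.eq("name", name)).first();
        }

        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                BrandDao dao = new BrandDao(client, "intellasphere");
                dao.save("Acme", "marketing");
                System.out.println(dao.findByName("Acme"));
            }
        }
    }

Keeping all collection access behind a DAO like this also makes the replica-set work transparent to callers: pointing the same code at a replica set only changes the connection string (e.g. mongodb://host1,host2/?replicaSet=rs0).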