Hadoop Deployment and Configuration - Single machine and a cluster
Typical Hardware 
•HP Compaq 8100 Elite CMT PC Specifications · Processor: Intel Core i7-860 · RAM: 8GB PC3-10600 Memory (2X4GB) · HDD: 1TB SATA 3.5 · Network: Intel 82578 GbE (Integrated) 
•Network switch 
–Netgear GS2608 Specifications · N Port · 10/100/1000 Mbps Gigabit Switch 
•Gateway node 
–Dell Optiplex GX280 Specifications · Processor: Intel Pentium 4 2.80 GHz · RAM: 1GB
OS 
•Install the Ubuntu Server (Maverick Meerkat) operating system that is available for download from the Ubuntu releases site. 
•Some important points to remember while installing the OS 
–Ensure that the SSH server is selected to be installed 
–Enter the proxy details needed for systems to connect to the internet from within your network 
–Create a user on each installation 
•Preferably with the same password on each node
Prerequisites 
•Supported Platforms 
–GNU/Linux is supported as a development and production platform. Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes. 
–Win32 is supported as a development platform. Distributed operation has not been well tested on Win32, so it is not supported as a production platform. 
•Required Software 
–Required software for Linux and Windows include: 
•Java™ 1.6.x, preferably from Sun, must be installed (this deck installs the Sun JDK 6 packages). 
•ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons. 
•Additional requirements for Windows include: 
–Cygwin - Required for shell support in addition to the required software above. 
•Installing Software 
–If your cluster doesn't have the requisite software you will need to install it. 
–For example on Ubuntu Linux: 
•$ sudo apt-get install ssh 
 $ sudo apt-get install rsync 
•On Windows, if you did not install the required software when you installed Cygwin, start the Cygwin installer and select the packages: 
–openssh - the Net category
Install Sun’s java JDK 
•Install Sun’s java JDK on each node in the cluster 
•Add the Canonical partner repository to your list of apt repositories. 
•You can do this by adding the line below to your /etc/apt/sources.list file: deb http://archive.canonical.com/ maverick partner 
•Update the source list 
–sudoapt-get update 
•Install sun-java6-jdk 
–sudo apt-get install sun-java6-jdk 
•Select Sun’s Java as the default on the machine 
–sudo update-java-alternatives -s java-6-sun 
•Verify the installation by running the command 
–java -version
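Taken together, the steps above amount to the following shell sequence (a consolidated sketch of the commands already listed; run it on every node): 
$ sudo sh -c 'echo "deb http://archive.canonical.com/ maverick partner" >> /etc/apt/sources.list' 
$ sudo apt-get update 
$ sudo apt-get install sun-java6-jdk 
$ sudo update-java-alternatives -s java-6-sun 
$ java -version      # should report a Sun 1.6 JVM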
Adding a dedicated Hadoop system user 
•Use a dedicated Hadoop user account for running Hadoop. 
•While that’s not required it is recommended because it helps to separate the Hadoop installation from other software applications and user accounts running on the same machine (think: security, permissions, backups, etc). 
•This will add the user hduser and the group hadoop to your local machine: 
–$ sudo addgroup hadoop 
–$ sudo adduser --ingroup hadoop hduser
Configuring SSH 
•Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it. 
•For a single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser user we created in the previous slide. 
•Have SSH up and running on your machine and configure it to allow SSH public key authentication. http://ubuntuguide.org/ 
•Generate an SSH key for the hduser user. 
user@ubuntu:~$ su - hduser 
hduser@ubuntu:~$ ssh-keygen -t rsa -P "" 
Generating public/private rsa key pair. 
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'. 
Your identification has been saved in /home/hduser/.ssh/id_rsa. 
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub. 
The key fingerprint is: 
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@ubuntu 
The key's randomart image is: 
[...snipp...] 
hduser@ubuntu:~$
Configuring SSH 
•Second, you have to enable SSH access to your local machine with this newly created key. 
–hduser@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys 
•The final step is to test the SSH setup by connecting to your local machine with the hduser user. 
•The step is also needed to save your local machine’s host key fingerprint to the hduser user’s known_hosts file. 
•If you have any special SSH configuration for your local machine like a non-standard SSH port, you can define host-specific SSH options in $HOME/.ssh/config (see man ssh_config for more information). 
hduser@ubuntu:~$ ssh localhost 
The authenticity of host 'localhost (::1)' can't be established. 
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26. 
Are you sure you want to continue connecting (yes/no)? yes 
Warning: Permanently added 'localhost' (RSA) to the list of known hosts. 
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux 
Ubuntu 10.04 LTS 
[...snipp...] 
hduser@ubuntu:~$
Disabling IPv6 
•One problem with IPv6 on Ubuntu is that using 0.0.0.0 for the various networking-related Hadoop configuration options will result in Hadoop binding to the IPv6 addresses. 
•To disable IPv6 on Ubuntu 10.04 LTS, open/etc/sysctl.confin the editor of your choice and add the following lines to the end of the file: 
•You have to reboot your machine in order to make the changes take effect. 
•You can check whether IPv6 is enabled on your machine with the following command (a returned value of 0 means IPv6 is enabled, 1 means it is disabled): 
•You can also disable IPv6 only for Hadoop as documented in HADOOP-3437. You can do so by adding the following line to conf/hadoop-env.sh: 
#disable ipv6 
net.ipv6.conf.all.disable_ipv6 = 1 
net.ipv6.conf.default.disable_ipv6 = 1 
net.ipv6.conf.lo.disable_ipv6 = 1 
$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6 
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Hadoop Installation 
•You have to download Hadoop from the Apache Download Mirrors and extract the contents of the Hadoop package to a location of your choice. 
•Say /usr/local/hadoop. 
•Make sure to change the owner of all the files to the hduser user and hadoop group, for example: 
•Create a symlink from hadoop-xxxxx to hadoop (a sketch using a symlink instead of the mv step follows the commands below) 
$ cd /usr/local 
$ sudo tar xzf hadoop-xxxx.tar.gz 
$ sudo mv hadoop-xxxxx hadoop 
$ sudo chown -R hduser:hadoop hadoop
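If you prefer the symlink mentioned above over renaming the directory, a sketch of the alternative (hadoop-xxxxx stands for whichever release you downloaded) is: 
$ cd /usr/local 
$ sudo tar xzf hadoop-xxxxx.tar.gz 
$ sudo ln -s hadoop-xxxxx hadoop              # hadoop now points at the versioned directory 
$ sudo chown -R hduser:hadoop hadoop-xxxxx 
This keeps the versioned directory name, so a later upgrade only needs the symlink re-pointed.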
Update $HOME/.bashrc 
•Add the following lines to the end of the $HOME/.bashrc file of user hduser. 
•If you use a shell other than bash, you should of course update its appropriate configuration files instead of .bashrc. 
# Set Hadoop-related environment variables 
export HADOOP_HOME=/usr/local/hadoop 
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on) 
export JAVA_HOME=/usr/lib/jvm/java-6-sun 
# Some convenient aliases and functions for running Hadoop-related commands 
unalias fs &> /dev/null 
alias fs="hadoop fs" 
unalias hls &> /dev/null 
alias hls="fs -ls"
Update $HOME/.bashrc 
# If you have LZO compression enabled in your Hadoop cluster and 
# compress job outputs with LZOP (not covered in this tutorial): 
# Conveniently inspect an LZOP compressed file from the command 
# line; run via: 
# 
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo 
# 
# Requires installed 'lzop' command. 
# 
lzohead() { 
hadoop fs -cat $1 | lzop -dc | head -1000 | less 
} 
# Add Hadoop bin/ directory to PATH 
export PATH=$PATH:$HADOOP_HOME/bin
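The changes only apply to new shells; to pick them up in the current session and sanity-check the setup (assuming the paths above), you can run: 
$ source $HOME/.bashrc 
$ echo $HADOOP_HOME        # should print /usr/local/hadoop 
$ hadoop version           # confirms the hadoop script is on the PATH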
Configuration files 
•The $HADOOP_HOME/conf directory (here /usr/local/hadoop/conf) contains the configuration files for Hadoop. These are: 
•hadoop-env.sh - This file contains some environment variable settings used by Hadoop. You can use these to affect some aspects of Hadoop daemon behavior, such as where log files are stored, the maximum amount of heap used, etc. The only variable you should need to change in this file is JAVA_HOME, which specifies the path to the Java installation used by Hadoop (here the Sun JDK 6). 
•slaves - This file lists the hosts, one per line, where the Hadoop slave daemons (datanodes and tasktrackers) will run. By default this contains the single entry localhost. 
•hdfs-site.xml - This file contains site specific settings for the HDFS daemons and clients. The file is empty by default. Use this file to override default HDFS settings such as the block replication factor. 
•mapred-site.xml - This file contains site specific settings for the Hadoop Map/Reduce daemons and jobs. The file is empty by default. Putting configuration properties in this file will override Map/Reduce settings in the hadoop-default.xml file. Use this file to tailor the behavior of Map/Reduce on your site. 
•core-site.xml - This file contains site specific settings for all Hadoop daemons and Map/Reduce jobs. This file is empty by default. Settings in this file override those in hadoop-default.xml and mapred-default.xml. This file should contain settings that must be respected by all servers and clients in a Hadoop installation, for instance, the location of the namenode and the jobtracker.
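Note that the property snippets on the following slides all sit inside the <configuration> root element of the corresponding file; a typical conf/*-site.xml skeleton looks like this: 
<?xml version="1.0"?> 
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
<!-- <property> elements from the following slides go here --> 
</configuration>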
Configuration : Single node 
•hadoop-env.sh : 
–The only required environment variable we have to configure for Hadoop in this case is JAVA_HOME. 
–Open conf/hadoop-env.sh (i.e. /usr/local/hadoop/conf/hadoop-env.sh) in the editor of your choice 
–Set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory 
–export JAVA_HOME=/usr/lib/jvm/java-6-sun 
•conf/*-site.xml 
–We configure the following: 
–core-site.xml 
•hadoop.tmp.dir 
•fs.default.name 
–mapred-site.xml 
•mapred.job.tracker 
–hdfs-site.xml 
•dfs.replication
Configure HDFS 
•We will configure the directory where Hadoop will store its data files, the network ports it listens to, etc. 
•Our setup will use Hadoop’s Distributed File System, HDFS, even though our little “cluster” only contains our single local machine. 
•You can leave the settings below ”as is” with the exception of the hadoop.tmp.dir variable which you have to change to the directory of your choice. 
•We will use the directory /app/hadoop/tmp 
•Hadoop’s default configurations use hadoop.tmp.dir as the base temporary directory both for the local file system and HDFS, so don’t be surprised if you see Hadoop creating the specified directory automatically on HDFS at some later point. 
$ sudo mkdir -p /app/hadoop/tmp 
$ sudo chown hduser:hadoop /app/hadoop/tmp 
# ...and if you want to tighten up security, chmod from 755 to 750... 
$ sudo chmod 750 /app/hadoop/tmp
conf/core-site.xml 
<!--In: conf/core-site.xml --> 
<property> 
<name>hadoop.tmp.dir</name> 
<value>/app/hadoop/tmp</value> 
<description>A base for other temporary directories.</description> 
</property> 
<property> 
<name>fs.default.name</name> 
<value>hdfs://localhost:54310</value> 
<description>The name of the default file system. A URI whose 
scheme and authority determine the FileSystem implementation. The 
uri's scheme determines the config property (fs.SCHEME.impl) naming 
the FileSystem implementation class. The uri's authority is used to 
determine the host, port, etc. for a filesystem.</description> 
</property>
conf/mapred-site.xml 
<!--In: conf/mapred-site.xml --> 
<property> 
<name>mapred.job.tracker</name> 
<value>localhost:54311</value> 
<description>The host and port that the MapReduce job tracker runs 
at. If "local", then jobs are run in-process as a single map 
and reduce task. 
</description> 
</property>
conf/hdfs-site.xml 
<!--In: conf/hdfs-site.xml --> 
<property> 
<name>dfs.replication</name> 
<value>1</value> 
<description>Default block replication. 
The actual number of replications can be specified when the file is created. 
The default is used if replication is not specified in create time. 
</description> 
</property>
Formatting the HDFS and Starting 
•To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable), run the command 
•hadoop namenode -format 
•Run start-all.sh : This will start up a Namenode, Datanode, Jobtracker and a Tasktracker on your machine 
•Run stop-all.sh to stop all processes
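A minimal format/start/verify/stop cycle for the single-node setup looks roughly like this (jps ships with the Sun JDK and lists the running Java daemons): 
$ hadoop namenode -format      # run once as hduser; re-running wipes the HDFS metadata 
$ start-all.sh 
$ jps                          # expect NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker 
$ stop-all.sh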
Download example input data 
•Create a directory inside /home/…/gutenberg 
•Download: 
–The Outline of Science, Vol. 1 (of 4) by J. Arthur Thomson http://www.gutenberg.org/ebooks/20417.txt.utf-8 
–The Notebooks of Leonardo Da Vinci http://www.gutenberg.org/cache/epub/5000/pg5000.txt 
–Ulysses by James Joyce http://www.gutenberg.org/cache/epub/4300/pg4300.txt 
•Copy local example data to HDFS 
–hdfs dfs -copyFromLocal gutenberg gutenberg 
•Check 
–hadoop dfs -ls gutenberg
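A sketch of fetching the three texts and loading them into HDFS; the local path /home/hduser/gutenberg and the saved file name pg20417.txt are assumptions, adjust them to your setup: 
$ mkdir -p /home/hduser/gutenberg 
$ cd /home/hduser/gutenberg 
$ wget -O pg20417.txt http://www.gutenberg.org/ebooks/20417.txt.utf-8 
$ wget http://www.gutenberg.org/cache/epub/5000/pg5000.txt 
$ wget http://www.gutenberg.org/cache/epub/4300/pg4300.txt 
$ cd /home/hduser 
$ hdfs dfs -copyFromLocal gutenberg gutenberg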
Run the MapReduce job 
•Now, we run the WordCount example job 
•hadoop jar /usr/lib/hadoop/hadoop-xxxx-example.jar wordcount gutenberg gutenberg-out 
•This command will 
–read all the files in the HDFS directory /user/hduser/gutenberg, 
–process them, and 
–store the result in the HDFS directory /user/hduser/gutenberg-out 
•Check if the result is successfully stored in the HDFS directory gutenberg-out 
–hdfs dfs -ls gutenberg-out 
•Retrieve the job result from HDFS 
–hdfs dfs -cat gutenberg-out/part-r-00000 
•Better: 
–hdfs dfs -cat gutenberg-out/part-r-00000 | sort -nk2,2 -r | less
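If you would rather pull the whole result down to the local filesystem, hdfs dfs -getmerge concatenates all part files into a single local file (the local path here is just an example): 
–hdfs dfs -getmerge gutenberg-out /tmp/gutenberg-wordcount.txt 
–sort -nk2,2 -r /tmp/gutenberg-wordcount.txt | head -20    # 20 most frequent words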
Hadoop Web Interfaces 
•Hadoop comes with several web interfaces which are by default (see conf/hadoop-default.xml) available at these locations: 
•http://localhost:50030/ – web UI for MapReduce job tracker(s) 
•http://localhost:50060/ – web UI for task tracker(s) 
•http://localhost:50070/ – web UI for HDFS name node(s)
Cluster setup 
•Basic idea 
[Diagram: Box 1 and Box 2 are each a single-node cluster with its own master (what we have done so far). They become the master and slave of one multi-node cluster, connected through a switch on the LAN behind a gateway node; Bitvise Tunnelier SSH port forwarding is used to reach the cluster through the gateway.]
Calling by name 
•Now that you have two single-node clusters up and running, we will modify the Hadoop configuration to make 
•one Ubuntu box the ”master” (which will also act as a slave) and 
•the other Ubuntu box a ”slave”. 
•We will call the designated master machine just the master from now on and the slave-only machine the slave. 
•We will also give the two machines these respective hostnames in their networking setup, most notably in /etc/hosts. 
•If the hostnames of your machines are different (e.g. node01) then you must adapt the settings as appropriate.
Networking 
•Connect both machines via a single hub or switch and configure the network interfaces to use a common network such as 192.168.0.x/24 (a sketch of a static interface configuration follows the /etc/hosts lines below). 
•To make it simple, 
•we will assign the IP address 192.168.0.1 to the master machine and 
•192.168.0.2 to the slave machine. 
•Update /etc/hosts on both machines with the following lines: 
# /etc/hosts (for master AND slave) 
192.168.0.1 master 
192.168.0.2 slave
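On Ubuntu 10.04/10.10 a static address can be set in /etc/network/interfaces; a sketch for the master (the interface name eth0 and the gateway 192.168.0.254 are assumptions; the slave uses 192.168.0.2 instead): 
# /etc/network/interfaces (master) 
auto eth0 
iface eth0 inet static 
address 192.168.0.1 
netmask 255.255.255.0 
gateway 192.168.0.254 
Apply with: sudo /etc/init.d/networking restart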
SSH access 
•The hduser user on the master (aka hduser@master) must be able to connect a) to its own user account on the master – i.e. ssh master in this context and not necessarily ssh localhost – and b) to the hduser user account on the slave (aka hduser@slave) via a password-less SSH login. 
•You just have to add the hduser@master’s public SSH key (which should be in $HOME/.ssh/id_rsa.pub) to the authorized_keys file of hduser@slave (in this user’s $HOME/.ssh/authorized_keys). 
•ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slave 
•Verify that password-less access from the master works: ssh hduser@slave and ssh hduser@master
How the final multi-node cluster will look
Naming again 
•The master node will run the “master” daemons for each layer: 
–NameNode for the HDFS storage layer, and 
–JobTracker for the MapReduce processing layer 
•Both machines will run the “slave” daemons: 
–DataNode for the HDFS layer, and 
–TaskTracker for the MapReduce processing layer 
•The “master” daemons are responsible for coordination and management of the “slave” daemons while the latter will do the actual data storage and data processing work. 
•Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. 
•These are the actual “master nodes”. 
•The rest of the machines in the cluster act as both DataNode and TaskTracker. 
•These are the slaves or “worker nodes”.
conf/masters (master only) 
•The conf/masters file defines on which machines Hadoop will start secondary NameNodes in our multi-node cluster. 
•In our case, this is just the master machine. 
•The primary NameNode and the JobTracker will always be the machines on which you run the bin/start-dfs.sh and bin/start-mapred.sh scripts, respectively 
•The primary NameNode and the JobTracker will be started on the same machine if you run bin/start-all.sh 
•On master, update conf/masters so that it looks like this: master
conf/slaves (master only) 
•This conf/slaves file lists the hosts, one per line, where the Hadoop slave daemons (DataNodes and TaskTrackers) will be run. 
•We want both the master box and the slave box to act as Hadoop slaves because we want both of them to store and process data. 
•On master, update conf/slaves so that it looks like this: 
•If you have additional slave nodes, just add them to the conf/slaves file, one per line (do this on all machines in the cluster). 
master 
slave
conf/*-site.xml (all machines) 
•You have to change the configuration files 
•conf/core-site.xml, 
•conf/mapred-site.xml and 
•conf/hdfs-site.xml 
•on ALL machines: 
–fs.default.name : The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. Set as hdfs://master:54310 
–mapred.job.tracker: The host and port that the MapReduce job tracker runs at. Set as master:54311 
–dfs.replication: Default block replication. Set as 2 
–mapred.local.dir: Determines where temporary MapReduce data is written. It also may be a list of directories. 
–mapred.map.tasks: As a rule of thumb, use 10x the number of slaves (i.e., number of TaskTrackers). 
–mapred.reduce.tasks: As a rule of thumb, use 2x the number of slave processors (i.e., number of TaskTrackers).
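Inside <configuration>, the first three values above translate to snippets like these (a sketch; only the names and values change relative to the single-node files): 
<!-- conf/core-site.xml --> 
<property> <name>fs.default.name</name> <value>hdfs://master:54310</value> </property> 
<!-- conf/mapred-site.xml --> 
<property> <name>mapred.job.tracker</name> <value>master:54311</value> </property> 
<!-- conf/hdfs-site.xml --> 
<property> <name>dfs.replication</name> <value>2</value> </property>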
Formatting the HDFS and Starting 
•To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable on the NameNode), run the command 
•hdfs namenode -format 
•Starting the multi-node cluster 
–Starting the cluster is done in two steps. 
–First, the HDFS daemons are started: start-dfs.sh 
•NameNode daemon is started on master, and 
•DataNode daemons are started on all slaves (here: master and slave) 
–Second, the MapReduce daemons are started: start-mapred.sh 
•JobTracker is started on master, and 
•TaskTracker daemons are started on all slaves (here: master and slave) 
•Run stop-mapred.sh followed by stop-dfs.sh to stop the cluster
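A sketch of the multi-node start-up and a quick verification, run as hduser on master (the jps output listed is what you would typically expect, assuming jps is on each machine's PATH): 
hduser@master:~$ hdfs namenode -format     # once only, before the first start 
hduser@master:~$ start-dfs.sh 
hduser@master:~$ start-mapred.sh 
hduser@master:~$ jps                       # NameNode, SecondaryNameNode, DataNode, JobTracker, TaskTracker 
hduser@master:~$ ssh slave jps             # DataNode, TaskTracker 
hduser@master:~$ stop-mapred.sh && stop-dfs.sh   # shut the cluster down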
End of session 
Day 1: Hadoop Deployment and Configuration - Single machine and a cluster 
Run the PiEstimator example: hadoop jar /usr/lib/hadoop/hadoop-xxxxx-example.jar pi 2 100000
