Hadoop Installation Procedure

Reference: http://www.liberiangeek.net/2012/05/login-as-root-in-ubuntu-12-04-precise-pangolin/

1) Enable root login:

   sudo passwd root
   sudo sh -c 'echo "greeter-show-manual-login=true" >> /etc/lightdm/lightdm.conf'

   Restart the system and log in as:
   Username: root
   Password: ssn

2) Open /etc/hosts (Filesystem -> etc -> hosts) and map the hostnames
   'master' and 'slave' to their corresponding IP addresses (an example
   hosts file is sketched just after step 4).

3) Create a 'bigdata' folder in /home and paste the hadoop-1.0.1 folder
   into /home/bigdata.

4) In a terminal, type:

   sudo gedit .bashrc

   Add the following 4 export lines:

   export HADOOP_DEV_HOME=/home/bigdata/hadoop-1.0.1
   export HADOOP_COMMON_HOME=$HADOOP_DEV_HOME
   export HADOOP_HDFS_HOME=$HADOOP_DEV_HOME
   export HADOOP_CONF_DIR=$HADOOP_DEV_HOME/conf

   Then run 'source ~/.bashrc' (or open a new terminal) so the variables
   take effect.
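As a sketch of step 2, the hosts file on both machines might look like the following; the IP addresses here are placeholders and must be replaced with the real addresses of your nodes:

   # /etc/hosts (example -- substitute the actual IPs of your machines)
   192.168.1.100   master
   192.168.1.101   slave

To confirm the step-4 variables took effect, a quick check is:

   echo $HADOOP_DEV_HOME    # should print /home/bigdata/hadoop-1.0.1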

5) Go to /home/bigdata/hadoop-1.0.1/conf and open core-site.xml.

   FOR NAMENODE, paste the following code in it:

   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <!-- Put site-specific property overrides in this file. -->
   <configuration>
     <property>
       <name>hadoop.tmp.dir</name>
       <value>/home/bigdata/hadoop-1.0.1/tmp</value>
     </property>
     <property>
       <name>fs.default.name</name>
       <value>hdfs://master:33333/</value>
     </property>
   </configuration>

   FOR DATANODE, paste the following code in it:

   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <!-- Put site-specific property overrides in this file. -->
   <configuration>
     <!--
     <property>
       <name>fs.default.name</name>
       <value>viewfs://slave:33331</value>
     </property>
     <property>
       <name>fs.viewfs.mounttable.default.link./sam</name>
       <value>hdfs://master:33333/home</value>
     </property>
     -->
     <!-- Location for local storage -->
     <property>
       <name>hadoop.tmp.dir</name>
       <value>/home/bigdata/hadoop-1.0.1/tmp/</value>
     </property>
     <property>
       <name>fs.default.name</name>
       <value>hdfs://master:33333/</value>
     </property>
   </configuration>
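Because hand-pasted XML is easy to mistype, an optional sanity check (assuming the xmllint tool from the libxml2-utils package is installed) is to validate the file's syntax before moving on:

   xmllint --noout /home/bigdata/hadoop-1.0.1/conf/core-site.xml && echo OK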

6) Open hdfs-site.xml in the same conf folder.

   FOR NAMENODE, paste the following code in it:

   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <!-- Put site-specific property overrides in this file. -->
   <configuration>
     <property>
       <name>dfs.namenode.name.dir</name>
       <value>nn1</value>
     </property>
     <property>
       <name>dfs.http.address</name>
       <value>master:44444</value>
     </property>
     <property>
       <name>dfs.replication</name>
       <value>1</value>
     </property>
   </configuration>

   FOR DATANODE:

   <?xml version="1.0"?>
   <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
   <!-- Put site-specific property overrides in this file. -->
   <configuration>
     <property>
       <name>dfs.http.address</name>
       <value>master:44444</value>
     </property>
     <property>
       <name>dfs.replication</name>
       <value>1</value>
     </property>
   </configuration>

7) Paste the following java path in /home/bigdata/hadoop-1.0.1/libexec:

   export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-i386

8) Create a 'tmp' folder in /home/bigdata/hadoop-1.0.1; a 'dfs' folder
   will be created inside it after the connection is established between
   the Namenode and Datanode.

9) Add the slave host names to the conf/slaves file on the NameNode,
   i.e. the node on which the start-dfs.sh command will be run (an
   example slaves file is sketched at the end of this list).

10) Format the namenode in the terminal using the command:

    /home/bigdata/hadoop-1.0.1/bin/hadoop namenode -format

11) Type '/home/bigdata/hadoop-1.0.1/bin/start-dfs.sh' in the terminal
    to start Hadoop.

12) Type '/home/bigdata/hadoop-1.0.1/bin/stop-dfs.sh' in the terminal
    to shut down Hadoop.

13) Create a 'sampledata' folder in /home to store the files to be sent
    to the datanode.

14) Type '/home/bigdata/hadoop-1.0.1/bin/hadoop dfs -copyFromLocal
    /home/sampledata/hdfs-site.xml hdfs://master:33333/'

15) Check if the file is received in
    '/home/bigdata/hadoop-1.0.1/tmp/dfs/data/current'.

16) Change the browser's proxy settings: add 'master,slave' to the
    exception list.

17) Open a browser and go to http://master:44444/dfshealth.jsp (a
    command-line check is sketched after this list as well).
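For step 9, the conf/slaves file simply lists one datanode hostname per line; with the hostnames used in this guide it would contain just:

   # conf/slaves on the NameNode -- one datanode hostname per line
   slave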

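Besides the dfshealth.jsp page in step 17, a command-line sanity check (using the same 'hadoop dfs' client as step 14) is to list the root of the filesystem and confirm the copied file appears:

   /home/bigdata/hadoop-1.0.1/bin/hadoop dfs -ls hdfs://master:33333/
   # after step 14, the listing should include /hdfs-site.xml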