Hadoop 0.23.6 Installation in Practice, Part 1: Single-Node Development Setup


Hadoop 0.23 introduced the YARN (MRv2) module, and the file layout also changed substantially. The following records the installation and deployment process in detail.

Installation environment:

  1. OS: Ubuntu 12.10
  2. Hadoop: 0.23.6
  3. JDK: Sun/Oracle JDK 1.7.0_21

Installation steps:

I. Install the JDK

Install the Oracle JDK, configure the environment, and set it as the system default (details omitted).
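For reference, a minimal sketch of one common approach, assuming the JDK 7u21 tarball was downloaded manually (the tarball file name and the install path /usr/lib/jvm/java-7-sun are assumptions, chosen to match the environment variables used later in this article):

sudo mkdir -p /usr/lib/jvm
sudo tar -zxvf jdk-7u21-linux-i586.tar.gz -C /usr/lib/jvm
sudo mv /usr/lib/jvm/jdk1.7.0_21 /usr/lib/jvm/java-7-sun
# Register the new JDK with the alternatives system and make it the default
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java-7-sun/bin/java 1
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/java-7-sun/bin/javac 1
sudo update-alternatives --set java /usr/lib/jvm/java-7-sun/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/java-7-sun/bin/javac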

Verify that the JDK is correctly installed and configured.

In the home directory, run java -version.

If you see output similar to the following, the JDK is set up correctly:

hadoop@ubuntu:~$ java -version

java version "1.7.0_21"
Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
Java HotSpot(TM) Server VM (build 23.21-b01, mixed mode)

II. Install Hadoop

1. Download
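The original download link is gone; the release can still be fetched from the Apache archive (URL assumed to still be current):

wget https://archive.apache.org/dist/hadoop/common/hadoop-0.23.6/hadoop-0.23.6.tar.gz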

2. Install

tar -zxvf hadoop-0.23.6.tar.gz

sudo mv hadoop-0.23.6 /opt/

cd /opt

sudo ln -s /opt/hadoop-0.23.6 /opt/hadoop

 

III. Install the SSH Server

 

sudo apt-get install openssh-server

IV. Add a hadoop User

To make Hadoop easier to manage, it is best to add a dedicated user for it, for example a user named hadoop.

Run the following command:

sudo adduser hadoop

You will be prompted for a password; set one and the account is ready.

From then on, simply run

su hadoop

and after entering the password you are switched to the hadoop user.

Note:

To allow the hadoop account to use sudo, edit /etc/sudoers (run sudo visudo) and, after the line

%sudo    ALL=(ALL:ALL) ALL

add

hadoop   ALL=(ALL:ALL) ALL

 

V. Configure Passwordless SSH Login to the Local Machine

 

hadoop@ubuntu:~$ ssh-keygen -t rsa -P ""

hadoop@ubuntu:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
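If ssh localhost still asks for a password after this, overly permissive file modes on ~/.ssh are a common cause; tightening them usually fixes it:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys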

Test:

hadoop@ubuntu:~$ ssh localhost
Welcome to Ubuntu 12.10 (GNU/Linux 3.5.0-17-generic i686)
 * Documentation:  https://help.ubuntu.com/
340 packages can be updated.
105 updates are security updates.
Last login: Thu Apr 18 07:18:03 2013 from localhost

VI. Configure Hadoop

sudo chown -R hadoop:hadoop /opt/hadoop

sudo chown -R hadoop:hadoop /opt/hadoop-0.23.6

su hadoop

1. Configure the JDK and Hadoop environment variables

Append the following to ~/.bashrc:

export JAVA_HOME=/usr/lib/jvm/java-7-sun

export JRE_HOME=${JAVA_HOME}/jre
export HADOOP_HOME=/opt/hadoop
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$HADOOP_HOME/bin:$PATH
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
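Reload the file and confirm the variables took effect; hadoop version should report 0.23.6:

source ~/.bashrc
hadoop version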

2. Edit the Hadoop configuration files

hadoop@ubuntu:~$ cd /opt/hadoop/etc/hadoop/

hadoop@ubuntu:/opt/hadoop/etc/hadoop$ vi yarn-env.sh

Append the following configuration:

export HADOOP_PREFIX=/opt/hadoop

export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
hadoop@ubuntu:/opt/hadoop/etc/hadoop$ vi core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:12200</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/hadoop-root</value>
  </property>
  <property>
    <name>fs.arionfs.impl</name>
    <value>org.apache.hadoop.fs.pvfs2.Pvfs2FileSystem</value>
    <description>The FileSystem for arionfs.</description>
  </property>
</configuration>
hadoop@ubuntu:/opt/hadoop/etc/hadoop$ vi hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/data/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/data/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
hadoop@ubuntu:/opt/hadoop/etc/hadoop$ vi mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.job.tracker</name>
    <value>hdfs://localhost:9001</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx1024M</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>3072</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx2560M</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>50</value>
  </property>
  <property>
    <name>mapreduce.system.dir</name>
    <value>file:/opt/hadoop/data/mapred/system</value>
  </property>
  <property>
    <name>mapreduce.local.dir</name>
    <value>file:/opt/hadoop/data/mapred/local</value>
    <final>true</final>
  </property>
</configuration>
hadoop@ubuntu:/opt/hadoop/etc/hadoop$ vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>user.name</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>localhost:54311</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>localhost:54312</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>localhost:54313</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>localhost:54314</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>localhost:54315</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost</value>
  </property>
</configuration>
This completes the configuration. Each localhost above can be replaced with this machine's IP address or hostname; the meaning of the individual properties will be covered in follow-up articles.
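The local directories referenced in the configuration above do not exist yet. Whether each one is auto-created varies, so creating them up front (using exactly the paths configured above) is the safe option:

mkdir -p /opt/hadoop/data/dfs/name
mkdir -p /opt/hadoop/data/dfs/data
mkdir -p /opt/hadoop/data/mapred/system
mkdir -p /opt/hadoop/data/mapred/local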

 

VII. Start Hadoop and Run the wordcount Example

1. Set JAVA_HOME

 

hadoop@ubuntu:/opt/hadoop$ vi libexec/hadoop-config.sh

Add export JAVA_HOME=/usr/lib/jvm/java-7-sun just before the following block:

if [[ -z $JAVA_HOME ]]; then
  # On OSX use java_home (or /Library for older versions)
  if [ "Darwin" == "$(uname -s)" ]; then
    if [ -x /usr/libexec/java_home ]; then
      export JAVA_HOME=($(/usr/libexec/java_home))
    else
      export JAVA_HOME=(/Library/Java/Home)
    fi
  fi
  # Bail if we did not detect it
  if [[ -z $JAVA_HOME ]]; then
    echo "Error: JAVA_HOME is not set and could not be found." 1>&2
    exit 1
  fi
fi
2. Format the NameNode

 

 

hadoop@ubuntu:/opt/hadoop$ hadoop namenode -format
3. Start the daemons

 

 

hadoop@ubuntu:/opt/hadoop/sbin$ ./start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop-0.23.6/logs/hadoop-hadoop-namenode-ubuntu.out
localhost: starting datanode, logging to /opt/hadoop-0.23.6/logs/hadoop-hadoop-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-0.23.6/logs/hadoop-hadoop-secondarynamenode-ubuntu.out
hadoop@ubuntu:/opt/hadoop/sbin$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-0.23.6/logs/yarn-hadoop-resourcemanager-ubuntu.out
localhost: starting nodemanager, logging to /opt/hadoop-0.23.6/logs/yarn-hadoop-nodemanager-ubuntu.out
4. Verify that startup succeeded

 

 

hadoop@ubuntu:/opt/hadoop/sbin$ jps
5036 DataNode
5246 SecondaryNameNode
5543 NodeManager
5369 ResourceManager
4852 NameNode
5816 Jps
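All five daemons are present. As a further check, the ResourceManager web UI configured earlier should respond at http://localhost:54313. When you are done working, the daemons can be stopped with the matching scripts in the same directory:

hadoop@ubuntu:/opt/hadoop/sbin$ ./stop-yarn.sh
hadoop@ubuntu:/opt/hadoop/sbin$ ./stop-dfs.sh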
5. Run the wordcount example

 

1) Prepare the input data

Create a small text file of characters:

hadoop@ubuntu:/opt/hadoop$ cat tmp/test.txt
a c b a b d f f e b a c c d g i s a b c d e a b f g e i k m m n a b d g h i j a k j e

2) Upload it to HDFS

hadoop@ubuntu:/opt/hadoop$ hadoop fs -mkdir /test
hadoop@ubuntu:/opt/hadoop$ hadoop fs -copyFromLocal tmp/test.txt /test
hadoop@ubuntu:/opt/hadoop$ hadoop fs -ls /test
Found 1 items
-rw-r--r--   1 hadoop supergroup         86 2013-04-18 07:47 /test/test.txt

3) Run the job

hadoop@ubuntu:/opt/hadoop$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.6.jar wordcount /test/test.txt /test/out    # /test/out is the output directory
13/04/18 22:41:11 INFO input.FileInputFormat: Total input paths to process : 1
13/04/18 22:41:11 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/04/18 22:41:11 WARN snappy.LoadSnappy: Snappy native library not loaded
13/04/18 22:41:12 INFO mapreduce.JobSubmitter: number of splits:1
13/04/18 22:41:12 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/04/18 22:41:12 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/04/18 22:41:12 WARN conf.Configuration: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
13/04/18 22:41:12 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/04/18 22:41:12 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/04/18 22:41:12 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/04/18 22:41:12 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/04/18 22:41:12 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/04/18 22:41:12 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/04/18 22:41:12 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/04/18 22:41:12 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/04/18 22:41:13 INFO mapred.ResourceMgrDelegate: Submitted application application_1366295287642_0001 to ResourceManager at localhost/127.0.0.1:54311
13/04/18 22:41:13 INFO mapreduce.Job: The url to track the job: http://localhost:54315/proxy/application_1366295287642_0001/
13/04/18 22:41:13 INFO mapreduce.Job: Running job: job_1366295287642_0001
13/04/18 22:41:21 INFO mapreduce.Job: Job job_1366295287642_0001 running in uber mode : false
13/04/18 22:41:21 INFO mapreduce.Job:  map 0% reduce 0%
13/04/18 22:41:36 INFO mapreduce.Job:  map 100% reduce 0%
13/04/18 22:41:36 INFO mapreduce.Job: Task Id : attempt_1366295287642_0001_m_000000_0, Status : FAILED
Killed by external signal
13/04/18 22:41:37 INFO mapreduce.Job:  map 0% reduce 0%
13/04/18 22:42:11 INFO mapreduce.Job:  map 100% reduce 0%
13/04/18 22:42:26 INFO mapreduce.Job:  map 100% reduce 100%
13/04/18 22:42:26 INFO mapreduce.Job: Job job_1366295287642_0001 completed successfully
13/04/18 22:42:27 INFO mapreduce.Job: Counters: 45

Note that one map attempt failed with "Killed by external signal"; YARN retried it automatically and the job still completed successfully.

4) View the results

hadoop@ubuntu:/opt/hadoop$ hadoop fs -ls /test
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2013-04-18 22:42 /test/out
-rw-r--r--   1 hadoop supergroup         86 2013-04-18 07:47 /test/test.txt
hadoop@ubuntu:/opt/hadoop$ hadoop fs -ls /test/out
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2013-04-18 22:42 /test/out/_SUCCESS
-rw-r--r--   1 hadoop supergroup         56 2013-04-18 22:42 /test/out/part-r-00000
hadoop@ubuntu:/opt/hadoop$ hadoop fs -cat /test/out/part-r-00000
13/04/18 22:45:25 INFO util.NativeCodeLoader: Loaded the native-hadoop library
a	7
b	6
c	4
d	4
e	4
f	3
g	3
h	1
i	3
j	2
k	2
m	2
n	1
s	1
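One caveat when re-running the example: MapReduce refuses to write into an existing output directory, so delete /test/out first (the -rm -r form is assumed to be available in this release):

hadoop@ubuntu:/opt/hadoop$ hadoop fs -rm -r /test/out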

 
