hadoop redis mongodb

1. Environment


System      CentOS 7.0 64-bit

namenode01    192.168.0.220

namenode02    192.168.0.221

datanode01    192.168.0.222

datanode02    192.168.0.223

datanode03    192.168.0.224

2. Configure the base environment

Add local hosts entries on all of the machines:

[root@namenode01 ~]# tail -5 /etc/hosts
192.168.0.220 namenode01
192.168.0.221 namenode02
192.168.0.222 datanode01
192.168.0.223 datanode02
192.168.0.224 datanode03
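The same five entries have to land in /etc/hosts on every node. As a small convenience, a sketch like the following (plain POSIX sh; the hostnames and IPs are the ones from the table above) generates the block once so it can be appended identically on each machine:

```shell
#!/bin/sh
# Hosts entries shared by all five cluster nodes (from the table above).
hosts_block() {
  cat <<'EOF'
192.168.0.220 namenode01
192.168.0.221 namenode02
192.168.0.222 datanode01
192.168.0.223 datanode02
192.168.0.224 datanode03
EOF
}

# Print the block; on each node, append it with: hosts_block >> /etc/hosts (as root)
hosts_block
```

Keeping the list in one function means a typo is fixed in one place instead of five.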

Create a hadoop user on all 5 machines and set its password to hadoop; only namenode01 is shown as the example.

[root@namenode01 ~]# useradd hadoop
[root@namenode01 ~]# passwd hadoop
Changing password for user hadoop.
New password:
BAD PASSWORD: The password is shorter than 8 characters
Retype new password:
passwd: all authentication tokens updated successfully.

Configure passwordless SSH logins between the hadoop users on all 5 machines.

#On namenode01
[root@namenode01 ~]# su - hadoop
[hadoop@namenode01 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
1c:7e:89:9d:14:9a:10:fc:69:1e:11:3d:6d:18:a5:01 hadoop@namenode01
(randomart image omitted)
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub namenode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode01
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode02
[hadoop@namenode01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub datanode03
#Verify
[hadoop@namenode01 ~]$ ssh namenode01 hostname
namenode01
[hadoop@namenode01 ~]$ ssh namenode02 hostname
namenode02
[hadoop@namenode01 ~]$ ssh datanode01 hostname
datanode01
[hadoop@namenode01 ~]$ ssh datanode02 hostname
datanode02
[hadoop@namenode01 ~]$ ssh datanode03 hostname
datanode03

The same sequence is then run as the hadoop user on namenode02, datanode01, datanode02 and datanode03: generate a key pair with ssh-keygen, copy the public key to all five hosts with ssh-copy-id, and confirm that ssh <host> hostname returns each remote hostname without a password prompt.
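Every node runs the identical five ssh-copy-id calls plus five verification hops, so the repetition can be captured in a loop. A dry-run sketch (it only echoes the commands; remove the echo to actually execute them as the hadoop user on each node):

```shell
#!/bin/sh
# All five cluster hosts; each node copies its own key to every one of them.
NODES="namenode01 namenode02 datanode01 datanode02 datanode03"

# Distribute this node's public key (dry run: prints the commands).
for node in $NODES; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$node"
done

# Verification hops: each should print the remote hostname, no password prompt.
for node in $NODES; do
  echo ssh "$node" hostname
done
```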

3. Install the JDK

#Download the JDK
[root@namenode01 ~]# wget http://download.oracle.com/otn-pub/java/jdk/8u74-b02/jdk-8u74-linux-x64.tar.gz?AuthParam=1461828883_648d68bc6c7b0dfd253a6332a5871e06
[root@namenode01 ~]# tar xf jdk-8u74-linux-x64.tar.gz -C /usr/local/
#Create the environment variable file
[root@namenode01 ~]# cat /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk1.8.0_74
JAVA_BIN=$JAVA_HOME/bin
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=$JRE_HOME/lib:$JAVA_HOME/lib:$JRE_HOME/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH
#Load the environment variables
[root@namenode01 ~]# source /etc/profile.d/java.sh
[root@namenode01 ~]# which java
/usr/local/jdk1.8.0_74/bin/java
#Test the result
[root@namenode01 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
#Copy the unpacked JDK and the profile script to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/jdk1.8.0_74 datanode03:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/java.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/java.sh datanode03:/etc/profile.d/
#Test the result, namenode02 as the example
[root@namenode02 ~]# source /etc/profile.d/java.sh
[root@namenode02 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
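The four scp pairs are identical apart from the hostname, so a loop keeps the host list in one place. Dry-run sketch (echoes the commands; drop the echo and run as root to execute):

```shell
#!/bin/sh
# Remaining nodes that still need the JDK tree and the profile script.
REMOTE_NODES="namenode02 datanode01 datanode02 datanode03"
JDK_DIR=/usr/local/jdk1.8.0_74

for node in $REMOTE_NODES; do
  echo scp -r "$JDK_DIR" "$node:/usr/local/"
  echo scp /etc/profile.d/java.sh "$node:/etc/profile.d/"
done
```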

4. Install Hadoop

#Download hadoop
[root@namenode01 ~]# wget http://apache.fayea.com/hadoop/common/hadoop-2.5.2/hadoop-2.5.2.tar.gz
[root@namenode01 ~]# tar xf hadoop-2.5.2.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode01 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
#Add the hadoop environment variable file
[root@namenode01 ~]# cat /etc/profile.d/hadoop.sh
HADOOP_HOME=/usr/local/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
#Switch to the hadoop user and check that the jdk environment works
[root@namenode01 ~]# su - hadoop
Last login: Thu Apr 28 15:17:16 CST 2016 from datanode01 on pts/1
[hadoop@namenode01 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
#Edit the hadoop configuration files
#hadoop-env.sh
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_74        #set JAVA_HOME explicitly
#core-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/temp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
#hdfs-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hdfs/dfs/name</value>    <!-- namenode directory -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hdfs/data</value>        <!-- datanode directory -->
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>              <!-- must match fs.defaultFS in core-site.xml -->
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>namenode01,namenode02</value>  <!-- the namenode pair -->
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.namenode01</name>
        <value>namenode01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.namenode02</name>
        <value>namenode02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.namenode01</name>
        <value>namenode01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.namenode02</name>
        <value>namenode02:50070</value>
    </property>
    <!-- the namenodes write their edits to the journalnodes; list every journalnode -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://namenode01:8485;namenode02:8485;datanode01:8485;datanode02:8485;datanode03:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/data/hdfs/journal</value>     <!-- journalnode directory -->
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>               <!-- how fencing is performed -->
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>    <!-- authentication between the hosts -->
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>6000</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>false</value>                  <!-- automatic failover stays off; zookeeper handles it later -->
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>                      <!-- replication factor; the default is 3 -->
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
#yarn-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode01:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>15360</value>
    </property>
</configuration>
#mapred-site.xml
[hadoop@namenode01 ~]$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
[hadoop@namenode01 ~]$ vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>namenode01:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>namenode01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>namenode01:19888</value>
    </property>
</configuration>
#slaves
[hadoop@namenode01 ~]$ cat /usr/local/hadoop/etc/hadoop/slaves
datanode01
datanode02
datanode03
#As root on namenode01, create the data directory
[root@namenode01 ~]# mkdir -p /data/hdfs
[root@namenode01 ~]# chown hadoop.hadoop /data/hdfs/
#Copy the hadoop profile script to the other 4 machines
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh namenode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode01:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode02:/etc/profile.d/
[root@namenode01 ~]# scp /etc/profile.d/hadoop.sh datanode03:/etc/profile.d/
#Copy the hadoop install tree to the other 4 machines
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ namenode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode01:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode02:/usr/local/
[root@namenode01 ~]# scp -r /usr/local/hadoop-2.5.2/ datanode03:/usr/local/
#Fix ownership on each node, namenode02 as the example
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/
[root@namenode02 ~]# ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop
‘/usr/local/hadoop’ -> ‘/usr/local/hadoop-2.5.2/’
[root@namenode02 ~]# ll /usr/local |grep hadoop
lrwxrwxrwx  1 root   root     24 Apr 28 17:19 hadoop -> /usr/local/hadoop-2.5.2/
drwxr-xr-x  9 hadoop hadoop  139 Apr 28 17:16 hadoop-2.5.2
#Create the data directory
[root@namenode02 ~]# mkdir -p /data/hdfs
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/
#Check the jdk environment
[root@namenode02 ~]# su - hadoop
Last login: Thu Apr 28 15:12:24 CST 2016 on pts/0
[hadoop@namenode02 ~]$ java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
[hadoop@namenode02 ~]$ which hadoop
/usr/local/hadoop/bin/hadoop
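The per-node steps (copy the tree, fix ownership, create the symlink, create /data/hdfs) are the same on every remote host, so they can be kept in lockstep with a small helper. This sketch only prints the commands for each node; for real use, drop the echo and run it as root from namenode01:

```shell
#!/bin/sh
# Print the four setup commands for one remote node.
setup_node() {
  node=$1
  echo "scp -r /usr/local/hadoop-2.5.2/ $node:/usr/local/"
  echo "ssh $node 'chown -R hadoop.hadoop /usr/local/hadoop-2.5.2/'"
  echo "ssh $node 'ln -sv /usr/local/hadoop-2.5.2/ /usr/local/hadoop'"
  echo "ssh $node 'mkdir -p /data/hdfs && chown hadoop.hadoop /data/hdfs'"
}

for node in namenode02 datanode01 datanode02 datanode03; do
  setup_node "$node"
done
```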

5. Start Hadoop

#Run hadoop-daemon.sh start journalnode on all five servers, as the hadoop user
#(only namenode01's output is shown)
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop-2.5.2/logs/hadoop-hadoop-journalnode-namenode01.out
#On namenode01
[hadoop@namenode01 ~]$ hadoop namenode -format
#Note: hadoop namenode -format is only needed on the very first start; on a non-first start run
#hdfs namenode -initializeSharedEdits instead.
#"First start" means HA was configured at install time and HDFS holds no data yet; namenode01 is
#formatted with the format command.
#"Non-first start" means a non-HA HDFS with existing data is being converted to HA by adding a second
#namenode; in that case namenode01 initializes the journalnodes with -initializeSharedEdits, which
#shares the edits files out to them.
#Start the namenodes
#On namenode01
[hadoop@namenode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
#On namenode02, sync the metadata from namenode01, then start the namenode
[hadoop@namenode02 ~]$ hdfs namenode -bootstrapStandby
[hadoop@namenode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
#Start the datanodes
[hadoop@datanode01 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode02 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
[hadoop@datanode03 ~]$ /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode
#Verify the result
#On namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode        #namenode role
2270 JournalNode
2702 Jps
#On namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2680 Jps
#On datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2466 Jps
2358 DataNode        #datanode role
2267 JournalNode
#On datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2691 Jps
2612 DataNode        #datanode role
2265 JournalNode
#On datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode        #datanode role
12067 Jps
11895 JournalNode
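Because the journalnode has to be started on all five machines, and the passwordless SSH from section 2 is already in place, the fan-out can be scripted from namenode01. Dry-run sketch (it echoes the commands; drop the echo and run as the hadoop user to execute):

```shell
#!/bin/sh
NODES="namenode01 namenode02 datanode01 datanode02 datanode03"

# Start a journalnode on every machine.
for node in $NODES; do
  echo ssh "$node" '/usr/local/hadoop/sbin/hadoop-daemon.sh start journalnode'
done

# Then confirm the expected daemons with jps on each machine.
for node in $NODES; do
  echo ssh "$node" jps
done
```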

6. Set up the ZooKeeper high-availability environment

#Download the software; install it as root
[root@namenode01 ~]# wget http://apache.fayea.com/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
#Unpack it into /usr/local and fix ownership
[root@namenode01 ~]# tar xf zookeeper-3.4.6.tar.gz -C /usr/local/
[root@namenode01 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
#Edit the zookeeper configuration
[root@namenode01 ~]# cp /usr/local/zookeeper-3.4.6/conf/zoo_sample.cfg /usr/local/zookeeper-3.4.6/conf/zoo.cfg
[root@namenode01 ~]# egrep -v "^#|^$" /usr/local/zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/hdfs/zookeeper/data
dataLogDir=/data/hdfs/zookeeper/logs
clientPort=2181
server.1=namenode01:2888:3888
server.2=namenode02:2888:3888
server.3=datanode01:2888:3888
server.4=datanode02:2888:3888
server.5=datanode03:2888:3888
#Configure the zookeeper environment variables
[root@namenode01 ~]# cat /etc/profile.d/zookeeper.sh
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
#Create the directories and the myid file on namenode01
[root@namenode01 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode01 ~]# tree /data/hdfs/zookeeper
/data/hdfs/zookeeper
├── data
└── logs
[root@namenode01 ~]# echo "1" >/data/hdfs/zookeeper/data/myid
[root@namenode01 ~]# cat /data/hdfs/zookeeper/data/myid
1
[root@namenode01 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
[root@namenode01 ~]# ll /data/hdfs/
total 0
drwxrwxr-x 3 hadoop hadoop 17 Apr 29 10:05 dfs
drwxrwxr-x 3 hadoop hadoop 22 Apr 29 10:05 journal
drwxr-xr-x 4 hadoop hadoop 28 Apr 29 10:42 zookeeper
#Copy the install directory and profile script to the other machines, namenode02 as the example
[root@namenode01 ~]# scp -r /usr/local/zookeeper-3.4.6 namenode02:/usr/local/
[root@namenode01 ~]# scp /etc/profile.d/zookeeper.sh namenode02:/etc/profile.d/
#On namenode02, fix ownership and create the directories and myid file
[root@namenode02 ~]# chown -R hadoop.hadoop /usr/local/zookeeper-3.4.6/
[root@namenode02 ~]# mkdir -p /data/hdfs/zookeeper/{data,logs}
[root@namenode02 ~]# echo "2" >/data/hdfs/zookeeper/data/myid
[root@namenode02 ~]# cat /data/hdfs/zookeeper/data/myid
2
[root@namenode02 ~]# chown -R hadoop.hadoop /data/hdfs/zookeeper
#Repeat the same steps on datanode01, datanode02 and datanode03, writing myid values
#3, 4 and 5 respectively into /data/hdfs/zookeeper/data/myid.
#Start zookeeper as the hadoop user on all 5 machines
#On namenode01
[hadoop@namenode01 ~]$ /usr/local/zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
#The same command is then run on namenode02, datanode01, datanode02 and datanode03,
#each of which also reports "Starting zookeeper ... STARTED".
#Check namenode01
[hadoop@namenode01 ~]$ jps
2467 NameNode
3348 QuorumPeerMain    #zookeeper process
3483 Jps
2270 JournalNode
[hadoop@namenode01 ~]$ zkServer.sh status
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
#Check namenode02
[hadoop@namenode01 ~]$ ssh namenode02 jps
2264 JournalNode
2888 QuorumPeerMain
2936 Jps
[hadoop@namenode01 ~]$ ssh namenode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
#Check datanode01
[hadoop@namenode01 ~]$ ssh datanode01 jps
2881 QuorumPeerMain
2358 DataNode
2267 JournalNode
2955 Jps
[hadoop@namenode01 ~]$ ssh datanode01 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
#Check datanode02
[hadoop@namenode01 ~]$ ssh datanode02 jps
2849 QuorumPeerMain
2612 DataNode
2885 Jps
2265 JournalNode
[hadoop@namenode01 ~]$ ssh datanode02 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
#Check datanode03
[hadoop@namenode01 ~]$ ssh datanode03 jps
11987 DataNode
12276 Jps
12213 QuorumPeerMain
11895 JournalNode
[hadoop@namenode01 ~]$ ssh datanode03 'zkServer.sh status'
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
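With five servers, a healthy ensemble shows exactly one Mode: leader and four Mode: follower. The per-node status checks can be collapsed into one loop from namenode01; this dry-run sketch echoes the commands (drop the echo to execute them as the hadoop user):

```shell
#!/bin/sh
NODES="namenode01 namenode02 datanode01 datanode02 datanode03"

# Query each server's role; expect one "Mode: leader" across the five outputs.
for node in $NODES; do
  echo "== $node =="
  echo ssh "$node" 'zkServer.sh status'
done
```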



Site name: hadoopredismongodb-创新互联
Site URL: http://scyanting.com/article/ccsiis.html