
[HBase] A Detailed Walkthrough of a Fully Distributed Installation

 
HBase version: 0.90.5
Hadoop version: 0.20.2
OS: CentOS
Installation mode: fully distributed (1 master, 3 regionservers)
1) Unpack the HBase distribution
[hadoop@node01 ~]$ tar -zxvf hbase-0.90.5.tar.gz
After extraction, the HBase home directory looks like this:
[hadoop@node01 hbase-0.90.5]$ ls -l
total 3636
drwxr-xr-x. 3 hadoop root    4096 Dec  8  2011 bin
-rw-r--r--. 1 hadoop root  217043 Dec  8  2011 CHANGES.txt
drwxr-xr-x. 2 hadoop root    4096 Dec  8  2011 conf
drwxr-xr-x. 4 hadoop root    4096 Dec  8  2011 docs
-rwxr-xr-x. 1 hadoop root 2425490 Dec  8  2011 hbase-0.90.5.jar
-rwxr-xr-x. 1 hadoop root  997956 Dec  8  2011 hbase-0.90.5-tests.jar
drwxr-xr-x. 5 hadoop root    4096 Dec  8  2011 hbase-webapps
drwxr-xr-x. 3 hadoop root    4096 Apr 12 19:03 lib
-rw-r--r--. 1 hadoop root   11358 Dec  8  2011 LICENSE.txt
-rw-r--r--. 1 hadoop root     803 Dec  8  2011 NOTICE.txt
-rw-r--r--. 1 hadoop root   31073 Dec  8  2011 pom.xml
-rw-r--r--. 1 hadoop root    1358 Dec  8  2011 README.txt
drwxr-xr-x. 8 hadoop root    4096 Dec  8  2011 src
2) Configure hbase-env.sh
Set JAVA_HOME and point HBASE_CLASSPATH at the Hadoop configuration directory so HBase picks up the cluster settings:
[hadoop@node01 conf]$ vi hbase-env.sh
# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.6.0_38
 
# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=/home/hadoop/hadoop-0.20.2/conf
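
This setup lets HBase manage its own ZooKeeper ensemble (the quorum is configured in the next step). HBASE_MANAGES_ZK defaults to true in this release, so no change is strictly needed, but stating it explicitly in hbase-env.sh documents the intent; a minimal sketch:

# Tell HBase to start and stop its own ZooKeeper quorum (the default).
# Set to false only if you run a separately managed ZooKeeper ensemble.
export HBASE_MANAGES_ZK=true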
 
3) Configure hbase-site.xml
hbase.rootdir places HBase's data on HDFS, hbase.cluster.distributed enables fully distributed mode, hbase.zookeeper.quorum names the ZooKeeper ensemble nodes, and hbase.zookeeper.property.dataDir is where the HBase-managed ZooKeeper keeps its data:
[hadoop@node01 conf]$ vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node01:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node01,node02,node03,node04</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/var/zookeeper</value>
  </property>
</configuration>
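
Note that the scheme, host, and port in hbase.rootdir must match fs.default.name in Hadoop's core-site.xml exactly, or the master will fail to start. For comparison, the matching Hadoop setting would look like this (assumed from the cluster layout above, not shown in the original setup):

<!-- $HADOOP_INSTALL/conf/core-site.xml, for reference -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://node01:9000</value>
</property>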
 
4) Configure regionservers
List each regionserver hostname on its own line; the master (node01) is not listed here:
[hadoop@node01 conf]$ vi regionservers
node02
node03
node04
 
5) Replace the Hadoop jar
HBase 0.90.5 ships with an append-branch Hadoop jar (hadoop-core-0.20-append-r1056497.jar). The jar on HBase's classpath must match the Hadoop version actually running on the cluster, or HBase fails at startup with an RPC version mismatch when talking to HDFS. Swap in the cluster's own jar (be aware that stock 0.20.2 lacks a durable sync, which the HBase documentation warns can lose data if a regionserver crashes):
[hadoop@node01 lib]$ mv hadoop-core-0.20-append-r1056497.jar hadoop-core-0.20-append-r1056497.sav
[hadoop@node01 lib]$ cp ../../hadoop-0.20.2/hadoop-0.20.2-core.jar .
[hadoop@node01 lib]$ ls
activation-1.1.jar          commons-net-1.4.1.jar                 jasper-compiler-5.5.23.jar  jetty-util-6.1.26.jar       slf4j-api-1.5.8.jar
asm-3.1.jar                 core-3.1.1.jar                        jasper-runtime-5.5.23.jar   jruby-complete-1.6.0.jar    slf4j-log4j12-1.5.8.jar
avro-1.3.3.jar              guava-r06.jar                         jaxb-api-2.1.jar            jsp-2.1-6.1.14.jar          stax-api-1.0.1.jar
commons-cli-1.2.jar         hadoop-0.20.2-core.jar                jaxb-impl-2.1.12.jar        jsp-api-2.1-6.1.14.jar      thrift-0.2.0.jar
commons-codec-1.4.jar       hadoop-core-0.20-append-r1056497.sav  jersey-core-1.4.jar         jsr311-api-1.1.1.jar        xmlenc-0.52.jar
commons-el-1.0.jar          jackson-core-asl-1.5.5.jar            jersey-json-1.4.jar         log4j-1.2.16.jar            zookeeper-3.3.2.jar
commons-httpclient-3.1.jar  jackson-jaxrs-1.5.5.jar               jersey-server-1.4.jar       protobuf-java-2.3.0.jar
commons-lang-2.5.jar        jackson-mapper-asl-1.4.2.jar          jettison-1.1.jar            ruby
commons-logging-1.1.1.jar   jackson-xc-1.5.5.jar                  jetty-6.1.26.jar            servlet-api-2.5-6.1.14.jar
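
A quick sanity check that the swap took effect: only the cluster's jar should remain active in lib. The renamed .sav file is ignored because the hbase launcher script only adds *.jar files to the classpath:

[hadoop@node01 lib]$ ls | grep hadoop
hadoop-0.20.2-core.jar
hadoop-core-0.20-append-r1056497.sav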
 
6) Copy the HBase directory to the other three nodes
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node02:/home/hadoop
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node03:/home/hadoop
[hadoop@node01 ~]$ scp -r ./hbase-0.90.5 node04:/home/hadoop
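Equivalently, a small loop avoids repeating the command (a sketch, assuming the same user and home directory exist on every node):

[hadoop@node01 ~]$ for n in node02 node03 node04; do scp -r ~/hbase-0.90.5 $n:/home/hadoop; done
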
7) Add HBase environment variables (on all nodes)
[hadoop@node01 conf]$ su - root
Password:
[root@node01 ~]# vi /etc/profile
export HBASE_HOME=/home/hadoop/hbase-0.90.5
export PATH=$PATH:$HBASE_HOME/bin
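
After editing /etc/profile, drop back to the hadoop user, re-read the file, and confirm the variables took effect (repeat on each node):

[root@node01 ~]# exit
[hadoop@node01 conf]$ source /etc/profile
[hadoop@node01 conf]$ echo $HBASE_HOME
/home/hadoop/hbase-0.90.5
[hadoop@node01 conf]$ which start-hbase.sh
/home/hadoop/hbase-0.90.5/bin/start-hbase.sh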
 
8) Start Hadoop and create the HBase root directory
[hadoop@node01 ~]$ $HADOOP_INSTALL/bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-node01.out
node02: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node02.out
node04: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node04.out
node03: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-node03.out
hadoop@node01's password:
node01: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-node01.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-node01.out
node04: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node04.out
node02: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node02.out
node03: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-node03.out
[hadoop@node01 ~]$ jps
5332 Jps
5030 NameNode
5259 JobTracker
5185 SecondaryNameNode
[hadoop@node02 ~]$ jps
4603 Jps
4528 TaskTracker
4460 DataNode
Finally, create the HBase root directory at the absolute path configured in hbase.rootdir (a relative hbase would land in /user/hadoop/hbase instead):
[hadoop@node01 ~]$ hadoop fs -mkdir /hbase
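To confirm the directory sits where hbase.rootdir expects it, list the HDFS root; /hbase should appear in the output:

[hadoop@node01 ~]$ hadoop fs -ls /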
 
9) Start HBase
[hadoop@node01 conf]$ start-hbase.sh
hadoop@node01's password:
node03: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node03.out
node04: starting zookeeper, logging to /home/hadoop/hbase-0.90.5/bin/../logs/hbase-hadoop-zookeeper-node04.out
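
The password prompt above means passwordless SSH from node01 to itself is not configured; start-hbase.sh logs in over SSH to every node it starts daemons on, including the local one. A hedged fix, assuming the hadoop user already generated an RSA key pair during the Hadoop setup:

[hadoop@node01 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@node01 ~]$ chmod 600 ~/.ssh/authorized_keys

Once startup completes, jps should additionally show HMaster and HQuorumPeer on node01, and HRegionServer plus HQuorumPeer on node02 through node04 (the default daemon names when HBase manages ZooKeeper). The status command in the HBase shell then reports the live regionservers:

[hadoop@node01 ~]$ jps
[hadoop@node01 ~]$ hbase shell
hbase(main):001:0> status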