Using Hadoop's SecondaryNameNode to Restore the NameNode

The three configuration files below set up checkpointing on a single-node Hadoop 1.0.4 cluster; the steps that follow kill the NameNode, wipe its metadata directory, and rebuild it from the SecondaryNameNode's checkpoint.
core-site.xml:
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/fs/tmp</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop:9000</value>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>300</value>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>
<property>
<name>fs.checkpoint.dir</name>
<value>/home/hadoop/checkpoint/namesecodary</value>
</property>
</configuration>
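With these settings the SecondaryNameNode triggers a checkpoint every 300 seconds (fs.checkpoint.period) or whenever the edit log grows past 64 MB (fs.checkpoint.size = 67108864 bytes), and writes the result under fs.checkpoint.dir. Before relying on it for a recovery, it is worth confirming that checkpoints are actually being produced; a minimal check, using the path configured above:
# After at least one checkpoint has run, fs.checkpoint.dir should contain a
# current/ subdirectory holding the downloaded fsimage and edits files.
ls -l /home/hadoop/checkpoint/namesecodary/current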
hdfs-site.xml:
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/hadoop/fs/data</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/home/hadoop/fs/name</value>
</property>
<property>
<name>dfs.http.address</name>
<value>hadoop:50070</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
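dfs.name.dir (/home/hadoop/fs/name) is where the NameNode keeps its own fsimage and edit log; this is the directory that will be wiped in step 3 and rebuilt from the checkpoint in step 4. A quick sanity check of its layout before the experiment (a sketch; on a healthy Hadoop 1.x NameNode the current/ directory holds fsimage, edits, fstime and VERSION):
# Inspect the NameNode's metadata directory configured as dfs.name.dir.
ls -l /home/hadoop/fs/name/current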
mapred-site.xml:
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hadoop:9001</value>
</property>
</configuration>
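These properties only take effect after the daemons are restarted. Assuming the standard Hadoop 1.x control scripts in bin/, a restart could look like this:
# Restart all daemons so the new checkpoint settings are picked up.
cd ~/hadoop-1.0.4/bin
./stop-all.sh
./start-all.sh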
1. First, run jps to check the Hadoop processes:
hadoop@hadoop:~/hadoop-1.0.4/bin$ jps
16018 TaskTracker
15646 DataNode
17250 NameNode
15870 JobTracker
17296 Jps
15790 SecondaryNameNode
2. Kill the NameNode process and check the Hadoop processes again:
hadoop@hadoop:~/hadoop-1.0.4/bin$ kill 17250
hadoop@hadoop:~/hadoop-1.0.4/bin$ jps
16018 TaskTracker
15646 DataNode
17328 Jps
15870 JobTracker
15790 SecondaryNameNode
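At this point HDFS has no NameNode, so any client operation against hdfs://hadoop:9000 should eventually fail with a connection error (after the usual connect retries). For example:
# With the NameNode killed, this should fail with a connection exception to hadoop:9000.
./hadoop fs -ls /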
3. Delete everything under the dfs.name.dir directory, for example as shown below;
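One way to do this, using the dfs.name.dir path configured above (this simulates losing the NameNode metadata, so only try it on a test cluster):
# Wipe the NameNode metadata directory to simulate its loss.
rm -rf /home/hadoop/fs/name/*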
4. Instead of copying the contents of the SecondaryNameNode's fs.checkpoint.dir into the NameNode's dfs.name.dir, as the referenced blog post does, this walkthrough runs ./hadoop namenode -importCheckPoint directly; that command should already take care of the copy itself. It produces the following log output:
hadoop@hadoop:~/hadoop-1.0.4/bin$ ./hadoop namenode -importCheckPoint
13/05/28 15:37:14 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop/192.168.128.138
STARTUP_MSG: args = [-importCheckPoint]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
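The -importCheckpoint start-up reads the checkpoint from fs.checkpoint.dir, saves it into the (now empty) dfs.name.dir, and then keeps running in the foreground as the NameNode. A rough way to verify the recovery from a second terminal (commands only, output omitted):
cd ~/hadoop-1.0.4/bin
# The NameNode should be back in the process list ...
jps
# ... and the metadata restored from the checkpoint should be visible again.
./hadoop fs -ls /
./hadoop dfsadmin -report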