Step 8: start the services
Format the namenode on hadoopmaster:
[hadoop@hadoopmaster hadoop]$ su - hadoop
[hadoop@hadoopmaster hadoop]$ cd /usr/local/hadoop/
[hadoop@hadoopmaster hadoop]$ ./bin/hdfs namenode -format --the messages below indicate success (to re-format the namenode later, first delete and recreate the data directory on every node: rm -rf /usr/local/hadoop/data/ ; mkdir /usr/local/hadoop/data/)
......
15/12/14 11:41:23 INFO common.Storage: Storage directory /usr/local/hadoop/data/dfs/name has been successfully formatted.
15/12/14 11:41:23 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/12/14 11:41:23 INFO util.ExitUtil: Exiting with status 0
15/12/14 11:41:23 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at li.cluster.com/10.1.1.35
************************************************************/
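If the namenode has to be re-formatted later, the cleanup mentioned above must be done on every node, not just the master. A minimal sketch, assuming the slave hostnames hadoopslave1-3 used in this setup and passwordless ssh for the hadoop user (the loop itself is not from the original notes):
[hadoop@hadoopmaster hadoop]$ rm -rf /usr/local/hadoop/data/ ; mkdir /usr/local/hadoop/data/ --clean the master first
[hadoop@hadoopmaster hadoop]$ for h in hadoopslave1 hadoopslave2 hadoopslave3; do ssh $h 'rm -rf /usr/local/hadoop/data/ ; mkdir /usr/local/hadoop/data/'; done --then the datanodes
[hadoop@hadoopmaster hadoop]$ ./bin/hdfs namenode -format --format again afterwards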
[hadoop@hadoopmaster hadoop]$ ./sbin/start-dfs.sh --start HDFS
15/12/14 11:44:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoopmaster]
hadoopmaster: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-li.cluster.com.out
hadoopslave2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-storage2.cluster.com.out
hadoopslave3: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-storage3.cluster.com.out
hadoopslave1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-storage1.cluster.com.out
Starting secondary namenodes [hadoopmaster]
hadoopmaster: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-li.cluster.com.out
15/12/14 11:44:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
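To confirm the daemons actually started, jps can be checked on each node (this check is not part of the original log; the slave prompt shown is illustrative):
[hadoop@hadoopmaster hadoop]$ jps --the master should list NameNode and SecondaryNameNode
[hadoop@hadoopslave1 ~]$ jps --each slave should list a DataNode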
-----------------------------------------------------
During basic operations the following warning appears:
15/11/14 14:58:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Workaround:
Package path:
# <notes directory>/arch/hadoop_soft/hadoop-native-64-2.6.0.tar
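One way to apply it (a sketch; it assumes the tarball contains prebuilt 64-bit native libraries meant to replace the ones shipped with Hadoop, and that ownership should be hadoop:hadoop as elsewhere in this setup):
# mv /usr/local/hadoop/lib/native /usr/local/hadoop/lib/native.bak --keep the bundled libs as a backup
# mkdir /usr/local/hadoop/lib/native
# tar -xf hadoop-native-64-2.6.0.tar -C /usr/local/hadoop/lib/native/
# chown -R hadoop:hadoop /usr/local/hadoop/lib/native/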
GlusterFS is a scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. GlusterFS is free and open source software.
Run the following commands on just one of the storage nodes (here on storage1.cluster.com, 10.1.1.4); the other nodes do not need to run them.
# gluster volume stop gv0 --run on storage1.cluster.com (10.1.1.4) only; the other storage nodes do not need to run it
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gv0: success
# gluster volume info --on any storage node, gv0 now shows Status: Stopped
Volume Name: gv0
Type: Replicate
Volume ID: 0000e76a-6f2a-4a1f-9db0-3ca451ce72e7
Status: Stopped
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: storage1.cluster.com:/data/gv0
Brick2: storage2.cluster.com:/data/gv0
Brick3: storage3.cluster.com:/data/gv0
# gluster volume delete gv0 --run on storage1.cluster.com (10.1.1.4) only; the other storage nodes do not need to run it
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gv0: success
# gluster volume info --on any storage node there is no longer any information about gv0, so the volume has been deleted
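Note that deleting a volume only removes its definition; whatever was written during the earlier tests is still sitting in the brick directories, which is what triggers the error in the next step. A quick way to see this (what is listed depends on what was written before):
# ls -a /data/gv0 --run on any storage node; the old test files and the .glusterfs metadata directory are still there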
Step 10:
Recreate the volume in stripe mode
# gluster volume create gv0 stripe 3 storage1.cluster.com:/data/gv0 storage2.cluster.com:/data/gv0 storage3.cluster.com:/data/gv0 --run on storage1.cluster.com (10.1.1.4) only; the other storage nodes do not need to run it
volume create: gv0: failed: /data/gv0 is already part of a volume
--the command fails, reporting that /data/gv0 is already part of a volume, because the data from the earlier tests is still in the brick directories
Workarounds:
1. Use a different directory
2. Add the force parameter to create the volume on /data/gv0 anyway
Here force is used.
# gluster volume create gv0 stripe 3 storage1.cluster.com:/data/gv0 storage2.cluster.com:/data/gv0 storage3.cluster.com:/data/gv0 force --run on storage1.cluster.com (10.1.1.4) only; the other storage nodes do not need to run it
volume create: gv0: success: please start the volume to access data
# gluster volume start gv0 --run on storage1.cluster.com (10.1.1.4) only; the other storage nodes do not need to run it
volume start: gv0: success
# gluster volume info --on any storage node the output below is visible; the type has changed to Stripe
Volume Name: gv0
Type: Stripe
Volume ID: db414c98-7d74-4e20-abef-68814734ac07
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: storage1.cluster.com:/data/gv0
Brick2: storage2.cluster.com:/data/gv0
Brick3: storage3.cluster.com:/data/gv0
Step 11:
Mount again on the clients and run read/write tests
# mount -t glusterfs 10.1.1.4:/gv0 /test1
# mount -t glusterfs 10.1.1.5:/gv0 /test2
# mount -t glusterfs 10.1.1.6:/gv0 /test3
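To confirm the client mounts before testing (not part of the original notes):
# df -hT /test1 /test2 /test3 --each mount point should show a fuse.glusterfs filesystem backed by the gv0 volume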
Comparison of test results:
replica mode: similar to RAID 1; writing 1G consumes 3G in total (a full copy on each brick)
stripe mode: similar to RAID 0; writing 3G puts roughly 1G on each brick, still 3G in total
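The kind of test behind this comparison, sketched with illustrative file names and sizes (not the original commands):
# dd if=/dev/zero of=/test1/bigfile bs=1M count=1024 --write 1G through a client mount
# du -sh /data/gv0 --run on each storage node: with replica every brick holds the full 1G, with stripe each brick holds roughly a third of the file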
Addendum 2:
Online shrinking (removing a brick):
# gluster volume remove-brick gv0 storage3.cluster.com:/data/gv0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y --this loses the data stored on the removed brick
volume remove-brick commit force: success
# gluster volume info --the output now shows one fewer brick
Volume Name: gv0
Type: Distribute
Volume ID: 03e6a74d-8e24-44b4-b750-5a91f1b54ff9
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1.cluster.com:/data/gv0
Brick2: storage2.cluster.com:/data/gv0
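If the data on the brick being removed must be preserved, GlusterFS also offers a staged removal that migrates the data off the brick first (a sketch; it applies to distribute-type layouts and the exact behaviour can vary between versions):
# gluster volume remove-brick gv0 storage3.cluster.com:/data/gv0 start --begin migrating data off the brick
# gluster volume remove-brick gv0 storage3.cluster.com:/data/gv0 status --wait until the migration shows completed
# gluster volume remove-brick gv0 storage3.cluster.com:/data/gv0 commit --then actually drop the brick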