1. Ceph Cluster Environment
Three virtual machines are used. One of them also acts as the admin node, and all three serve simultaneously as the 3 monitor nodes and the 3 OSD nodes.
The operating system is CentOS 7 Minimal. Download address:
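For reference, the layout assumed throughout this guide (hostnames and addresses match the hosts entries and the ceph -s output in step 7; disk roles match step 6):
ceph1 192.168.59.131 admin + monitor + OSD
ceph2 192.168.59.132 monitor + OSD
ceph3 192.168.59.133 monitor + OSD
Each node carries /dev/sdb as the journal disk and /dev/sdc as the data disk.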
2. Prerequisites (perform these steps on every host)
# hostnamectl set-hostname ceph1 \\ set the hostname
# vi /etc/sysconfig/network-scripts/ifcfg-ens32 (or use nmtui) \\ configure the IP address
# systemctl restart network \\ restart the network service
\\ CentOS Minimal cannot tab-complete command arguments, so running the next command first is recommended (veterans can skip it)
# yum -y install bash-completion.noarch
# date \\ check the system time; the clocks on all nodes must agree
# echo '192.168.59.131 ceph1' >> /etc/hosts \\ add entries for all servers to the hosts file
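With the addresses used in this deployment, every node's /etc/hosts ends up containing all three mappings:
192.168.59.131 ceph1
192.168.59.132 ceph2
192.168.59.133 ceph3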
# setenforce 0 \\ put SELinux in permissive mode for the current session
# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config \\ edit the config file so SELinux stays disabled after reboots
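To confirm the change took (the sed edit only applies from the next boot onward):
# getenforce \\ should print Permissive now, Disabled after the next reboot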
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent \\ open the Ceph ports (6789 for monitors, 6800-7100 for OSDs)
# firewall-cmd --reload \\ make the firewall rules take effect
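To verify the rules were saved and applied:
# firewall-cmd --zone=public --list-ports \\ should list 6789/tcp and 6800-7100/tcp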
# ssh-keygen \\ generate an SSH key pair
# ssh-copy-id root@ceph1 \\ repeat between every pair of servers
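Since every server needs passwordless access to every other one, a small loop saves repetition (a sketch, assuming the three hostnames above):
# for h in ceph1 ceph2 ceph3; do ssh-copy-id root@$h; done
# ssh ceph2 hostname \\ verify: should print ceph2 without prompting for a password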
3. Deploy ceph-deploy (it only needs to be installed on one of the machines)
# vi /etc/yum.repos.d/ceph.repo \\ add a Ceph yum repo with the following content
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
# yum update && reboot \\ update and reboot the system
# yum install ceph-deploy -y \\ install ceph-deploy
a. An error occurred here:
Downloading packages:
(1/4): python-backports-ssl_match_hostname-3.4.0.2-4.el7.noarch.rpm | 12 kB 00:00:00
(2/4): python-backports-1.0-8.el7.x86_64.rpm | 5.8 kB 00:00:02
ceph-deploy-1.5.38-0.noarch.rp FAILED ] 90 kB/s | 298 kB 00:00:04 ETA
http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm: [Errno -1] Package does not match intended download. Suggestion: run yum --enablerepo=ceph-noarch clean metadata
Trying other mirror.
(3/4): python-setuptools-0.9.8-7.el7.noarch.rpm | 397 kB 00:00:05
Error downloading packages:
ceph-deploy-1.5.38-0.noarch: [Errno 256] No more mirrors to try.
Workaround: install the package directly with rpm, using the same URL the repo pointed at:
# rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
b. A second error occurred:
Retrieving http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
warning: /var/tmp/rpm-tmp.gyId2U: Header V4 RSA/SHA256 Signature, key ID 460f3994: NOKEY
error: Failed dependencies:
python-distribute is needed by ceph-deploy-1.5.38-0.noarch
Workaround:
# yum install python-distribute -y
Then run the rpm install again:
# rpm -ivh http://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-1.5.38-0.noarch.rpm
4. Deploy the monitor service
# mkdir ~/ceph-cluster && cd ~/ceph-cluster \\ create the cluster configuration directory
# ceph-deploy new ceph1 ceph2 ceph3 \\ produces three files: a Ceph configuration file, a monitor keyring, and a log file
#ls -l
-rw-r--r-- 1 root root 266 Sep 19 16:41 ceph.conf
-rw-r--r-- 1 root root 172037 Sep 19 16:32 ceph-deploy-ceph.log
-rw------- 1 root root 73 Sep 19 11:03 ceph.mon.keyring
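The generated ceph.conf is minimal. For this cluster it contains roughly the following (the fsid matches the cluster id reported by ceph -s in step 7; the cephx lines are the defaults written by ceph-deploy new):
[global]
fsid = e508bdeb-b986-4ee8-82c6-c25397a5f1eb
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.59.131,192.168.59.132,192.168.59.133
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx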
# ceph-deploy mon create-initial \\ initialize the cluster (create the initial monitors)
5. Install Ceph
# ceph-deploy install ceph1 ceph2 ceph3 \\ install Ceph on ceph1, ceph2 and ceph3
a. An error occurred here:
[ceph1][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[ceph1][WARNIN] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
[ceph1][DEBUG ] Preparing... ########################################
[ceph1][DEBUG ] Updating / installing...
[ceph1][DEBUG ] ceph-release-1-1.el7 ########################################
[ceph1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Workaround:
# yum remove ceph-release -y
Then run again: # ceph-deploy install ceph1 ceph2 ceph3
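A quick way to confirm the packages landed on every node (a sketch, reusing the SSH trust set up in step 2):
# for h in ceph1 ceph2 ceph3; do ssh $h ceph --version; done \\ all three nodes should report the same Ceph version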
6. Create the OSDs
# ceph-deploy disk list ceph{1,2,3} \\ list the disks on each server
# ceph-deploy --overwrite-conf osd prepare ceph1:sdc:/dev/sdb ceph2:sdc:/dev/sdb ceph3:sdc:/dev/sdb \\ prepare the disks: sdb as the journal disk, sdc as the data disk
# ceph-deploy osd activate ceph1:sdc:/dev/sdb ceph2:sdc:/dev/sdb ceph3:sdc:/dev/sdb \\ activate the OSDs
An error occurred here. It did not prevent the Ceph deployment, and the lsblk output below shows the data disk was mounted successfully; see "Problem resolved" at the end of this article for the cause and the corrected command.
[ceph1][WARNIN] ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdc: Line is truncated:
[ceph1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─cl-root 253:0 0 18G 0 lvm /
└─cl-swap 253:1 0 1G 0 lvm [SWAP]
sdb 8:16 0 30G 0 disk
└─sdb1 8:17 0 5G 0 part
sdc 8:32 0 40G 0 disk
└─sdc1 8:33 0 40G 0 part /var/lib/ceph/osd/ceph-0
sr0 11:0 1 680M 0 rom
rbd0 252:0 0 1G 0 disk /root/rbddir
7. Deployment succeeded
# ceph -s
cluster e508bdeb-b986-4ee8-82c6-c25397a5f1eb
health HEALTH_OK
monmap e2: 3 mons at {ceph1=192.168.59.131:6789/0,ceph2=192.168.59.132:6789/0,ceph3=192.168.59.133:6789/0}
election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
osdmap e55: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds
pgmap v13638: 384 pgs, 5 pools, 386 MB data, 125 objects
1250 MB used, 118 GB / 119 GB avail
384 active+clean
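A note on ceph -s: it reads ceph.conf and the admin keyring from /etc/ceph. If they are only present in the ~/ceph-cluster directory, ceph-deploy can push them out first (a sketch):
# ceph-deploy admin ceph1 ceph2 ceph3 \\ copy ceph.conf and ceph.client.admin.keyring to /etc/ceph on each node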
Problem resolved:
The activate error above (ceph_disk.main.FilesystemTypeError: Cannot discover filesystem type: device /dev/sdc) occurs because ceph-deploy partitions the disks during prepare: /dev/sdb becomes /dev/sdb1 and /dev/sdc becomes /dev/sdc1, as the lsblk output shows. The activate command must therefore reference the partitions rather than the whole disks:
# ceph-deploy osd activate ceph1:sdc1:/dev/sdb1 ceph2:sdc1:/dev/sdb1 ceph3:sdc1:/dev/sdb1
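After re-running activate with the partition names, the OSD states can be double-checked:
# ceph osd tree \\ all three OSDs should appear as up, one per host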
Original author: 三石头
Copyright belongs to the author. For commercial reproduction, please contact the author for authorization; for non-commercial reproduction, please credit the source.