OpenStack Architecture Task

As the company's business grows, local server storage can no longer meet its needs, so the company has decided to use distributed Ceph as the backend storage for its cloud platform. Your task is to deploy a Ceph cluster according to the company's requirements and integrate it with OpenStack.

Node plan

IP address      Hostname    Role
192.168.100.10  controller  OpenStack (all-in-one)
192.168.100.11  ceph1       Monitor/OSD
192.168.100.12  ceph2       OSD
192.168.100.13  ceph3       OSD

Initial setup

The OpenStack node needs to be prepared in advance; its installation is not demonstrated here.

The following operations apply to the Ceph cluster nodes.

Configure the yum repository

[root@ceph1 ~]# yum install vim bash-completion  chrony lrzsz vsftpd -y  &&  echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf  &&  systemctl enable --now vsftpd

[root@ceph1 ~]# cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=ftp://ceph1/centos
gpgcheck=0
enabled=1
[ceph]
name=ceph
baseurl=ftp://ceph1/ceph
gpgcheck=0
enabled=1

[root@ceph1 ~]# for i in ceph2 ceph3 controller;do scp /etc/yum.repos.d/local.repo $i:/etc/yum.repos.d/local.repo ;done
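After the repo file has been distributed, it is worth confirming that each node can actually read the FTP repositories. A minimal check, assuming the centos and ceph package trees (with their repodata) already live under /opt on ceph1:

# Run on each node; both repos should report a non-zero package count
[root@ceph1 ~]# yum clean all && yum repolist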

Disable the firewall and SELinux

[root@ceph1 ~]# setenforce 0
[root@ceph1 ~]# echo "setenforce 0 " >> /etc/rc.local
[root@ceph1 ~]# chmod +x /etc/rc.local
[root@ceph1 ~]# getenforce
Permissive

[root@ceph1 ~]# systemctl disable --now firewalld
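The same changes are needed on ceph2, ceph3 and the controller. A minimal sketch that mirrors the commands above (it relies on the passwordless SSH configured in the next section; otherwise repeat the commands on each node by hand):

[root@ceph1 ~]# for i in ceph2 ceph3 controller
do
ssh $i "setenforce 0; echo 'setenforce 0' >> /etc/rc.local; chmod +x /etc/rc.local; systemctl disable --now firewalld"
done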

Passwordless SSH and time synchronization

  • Configure the hosts file
[root@ceph1 ~]# echo "192.168.100.11 ceph1
192.168.100.12 ceph2
192.168.100.13 ceph3
192.168.100.10 controller" >> /etc/hosts
  • Configure SSH

    Passwordless login must be configured between the Ceph nodes. It is not strictly required between the cluster and OpenStack, but configuring it as well is recommended to make file transfers easier (see the sketch after the key-copy loop below).

[root@ceph1 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qykaqh35tVFas6XVAmvzJ9/sF5pShrEaMkMkUOwgEaM root@ceph1
The key's randomart image is:
+---[RSA 2048]----+
| +o.+. |
|.... o . |
|E . o o . |
| . . o o |
| . S + = |
| . O @ + o . |
| + + B = + o .|
| o +.. = . = = .|
|+ o...+ o.+. |
+----[SHA256]-----+

[root@ceph1 ~]# for i in ceph1 ceph2 ceph3;do ssh-copy-id root@$i;done
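As noted above, pushing the key to the OpenStack node as well makes the later scp steps password-free; a one-line sketch:

[root@ceph1 ~]# ssh-copy-id root@controller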
  • Configure time synchronization

    All nodes need time synchronization. Here ceph1 acts as the time server and the other nodes synchronize with it.

    The following is done on ceph1; the other nodes only need to add or change the line to server ceph1 iburst (a sketch follows after the block below).

[root@ceph1 ~]# vim /etc/chrony.conf

server ceph1 iburst

allow 192.168.0.0/16

local stratum 10

[root@ceph1 ~]# systemctl restart chronyd
[root@ceph1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* ceph1 10 6 37 5 -19ns[-6046ns] +/- 8582ns
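For reference, a sketch of what the other nodes (ceph2, ceph3, controller) need: point chrony at ceph1 and verify that ceph1 is selected as the time source (how you comment out the stock server/pool lines depends on the default chrony.conf):

[root@ceph2 ~]# vim /etc/chrony.conf

server ceph1 iburst

[root@ceph2 ~]# systemctl restart chronyd
[root@ceph2 ~]# chronyc sources      # the ceph1 line should eventually be marked ^*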

Create the Ceph cluster

Install the packages

ceph-deploy is the deployment tool for the Ceph cluster; it only needs to be installed on ceph1.

All Ceph nodes (ceph1, ceph2, ceph3) install the Ceph daemon packages (ceph-mon, ceph-osd, ceph-mds, ceph-radosgw).

[root@ceph1 ~]# yum install -y ceph-deploy 

[root@ceph1 ~]# for i in ceph1 ceph2 ceph3
do
ssh $i "yum install -y ceph"
done

[root@ceph1 ~]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

[root@ceph2 ~]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

[root@ceph3 ~]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)

Cluster configuration

The following commands are all run on the ceph1 node; they do not need to be run on ceph2 or ceph3.

Change into /etc/ceph and run the configuration commands there.

  • Generate the configuration files
[root@ceph1 ~]# cd /etc/ceph

[root@ceph1 ceph]# ceph-deploy new ceph1 ceph2 ceph3
  • Create the MONs (the cluster monitoring component)
[root@ceph1 ceph]# ceph-deploy mon create-initial 

# ceph-deploy gatherkeys ceph1 # gather the authentication keys
  • Wipe the partition tables and contents of the disks
[root@ceph1 ceph]# ceph-deploy disk zap ceph1 /dev/sdb

[root@ceph1 ceph]# ceph-deploy disk zap ceph2 /dev/sdb

[root@ceph1 ceph]# ceph-deploy disk zap ceph3 /dev/sdb
  • Create the OSDs
[root@ceph1 ceph]# ceph-deploy osd create ceph1 --data /dev/sdb

[root@ceph1 ceph]# ceph-deploy osd create ceph2 --data /dev/sdb

[root@ceph1 ceph]# ceph-deploy osd create ceph3 --data /dev/sdb

# If a faster cache/DB device is wanted (sdc as the DB/WAL device, sdb as the data disk), that disk must be zapped too.
# With ceph-deploy 2.x the old host:journal:data syntax is gone; something like the following is used instead:
# ceph-deploy osd create ceph1 --data /dev/sdb --block-db /dev/sdc
  • Check
[root@ceph1 ceph]# ceph -s
cluster:
id: c5334dbd-cee4-4d97-b1e8-2b2994542ab4
health: HEALTH_WARN # cluster status
no active mgr
mons are allowing insecure global_id reclaim

services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
mgr: no daemons active
osd: 3 osds: 3 up (since 13s), 3 in (since 13s)

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

# no active mgr: no cluster manager (mgr) daemon is running
# mons are allowing insecure global_id reclaim: the MONs allow insecure reclaim of global_id, i.e. a global_id can be reclaimed without re-authentication
  • Create the mgr (the cluster management component)
  • Disable insecure global_id reclaim
[root@ceph1 ceph]# ceph-deploy mgr create ceph1 ceph2 ceph3

# Creating the mgr on ceph1 alone would also work

[root@ceph1 ceph]# ceph config set mon auth_allow_insecure_global_id_reclaim false
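To confirm the setting took effect, the stored MON configuration can be read back (a quick check; the corresponding warning disappears from ceph -s shortly afterwards):

[root@ceph1 ceph]# ceph config get mon auth_allow_insecure_global_id_reclaim   # should print false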
  • Check
[root@ceph1 ceph]# ceph -s
cluster:
id: 1f8a2a56-0102-4667-92de-5d3c205054e2
health: HEALTH_OK # HEALTH_OK is what we want

services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 119s)
mgr: ceph1(active, since 57s)
osd: 3 osds: 3 up (since 65s), 3 in (since 65s)

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs:

Dashboard

Like most clusters, Ceph comes with its own web dashboard.

# Install on every mgr node
[root@ceph1 ceph]# yum install ceph-mgr-dashboard -y


# Enable the dashboard mgr module
[root@ceph1 ceph]# ceph mgr module enable dashboard


# Generate and install a self-signed certificate
[root@ceph1 ceph]# ceph dashboard create-self-signed-cert


# Create a dashboard login user and password (guest:123456)
[root@ceph1 ceph]# ceph dashboard ac-user-create guest 123456 administrator

# Or write the password to a file and pass it with -i so the password is read from the file
[root@ceph1 ceph]# echo "123456" > /root/pswd
[root@ceph1 ceph]# ceph dashboard ac-user-create guest -i /root/pswd administrator


# Check how the service is exposed
[root@ceph1 ceph]# ceph mgr services
"dashboard": "https://ceph1:8443/"

Integrating Ceph with OpenStack

OpenStack-side steps

An all-in-one deployment is used here, so every operation is performed on the controller node.

If controller and compute are separate nodes, the steps that must run on a different node are called out.

  • Install the Ceph packages
# Install ceph on the controller
[root@controller ~]# yum install ceph -y

# Copy /etc/ceph from the Ceph cluster to /etc/ceph on the controller
[root@ceph1 ceph]# scp -r /etc/ceph/ controller:/etc/

# Verify access to the Ceph cluster from the controller
[root@controller ~]# ceph -s
cluster:
id: c5334dbd-cee4-4d97-b1e8-2b2994542ab4
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 38m)
mgr: ceph1(active, since 38m), standbys: ceph2, ceph3
osd: 3 osds: 3 up (since 38m), 3 in (since 17h)

data:
pools: 1 pools, 128 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs: 128 active+clean

Configure the Cinder block storage service

  • Create the storage pool
# The following commands are run from the /etc/ceph directory
[root@controller ~]# cd /etc/ceph/
[root@controller ceph]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log rbdmap
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring

# Create the pool
[root@controller ceph]# ceph osd pool create volumes 128
  • Create the Cinder cephx user
[root@controller ceph]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes,allow rx pool=images'
[client.cinder]
key = AQAQEZFkaQB3KhAAR+dfrp+J2yK8XqQjamQyxQ==


# mon 'allow r': grants client.cinder read-only access to the Ceph monitors.

# osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes,allow rx pool=images': grants client.cinder class-read access to objects with the rbd_children prefix, read/write/execute on the volumes pool, and read/execute on the images pool.
  • Copy the keyring to the controller

These ceph commands would normally be run on a Ceph node, but the controller is a Ceph client and already has the cluster's configuration files, so it can run ceph operations as well.

[root@controller ceph]# ceph auth get-or-create client.cinder | ssh controller tee /etc/ceph/ceph.client.cinder.keyring
[client.cinder]
key = AQAQEZFkaQB3KhAAR+dfrp+J2yK8XqQjamQyxQ==

# Fix the ownership
[root@controller ceph]# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
  • Generate a UUID

    On the controller node, generate a UUID, define a secret.xml file, and hand the secret to libvirt.

# Generate a UUID
[root@controller ceph]# uuidgen
9226f2ac-8796-424d-8062-fd24ee833f8c

# Write the secret.xml secret definition file
[root@controller ceph]# vi secret.xml
<secret ephemeral='no' private='no'>
<uuid>9226f2ac-8796-424d-8062-fd24ee833f8c</uuid>
<usage type='ceph'>
<name>client.cinder secret </name>
</usage>
</secret>


# Define the secret in libvirt and keep the generated secret value safe; it is needed in the following steps
[root@controller ceph]# virsh secret-define --file secret.xml
Secret 9226f2ac-8796-424d-8062-fd24ee833f8c created

# Export the cephx key for client.cinder
[root@controller ceph]# ceph auth get-key client.cinder > client.cinder.key

# Set the secret value in virsh using the key exported in the previous step, then list the defined secrets
[root@controller ceph]# virsh secret-set-value --secret 9226f2ac-8796-424d-8062-fd24ee833f8c --base64 $(cat client.cinder.key)
Secret value set

[root@controller ceph]# virsh secret-list
UUID Usage
--------------------------------------------------------------------------------
9226f2ac-8796-424d-8062-fd24ee833f8c ceph client.cinder secret
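Once the value is stored in libvirt, the plaintext copies of the key are no longer needed; cleaning them up is a sensible extra step (a sketch):

[root@controller ceph]# rm -f client.cinder.key secret.xml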
  • Edit the configuration file
# Edit the configuration file (on the compute node in a multi-node setup; all-in-one here, so on the controller)
[root@controller ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = 9226f2ac-8796-424d-8062-fd24ee833f8c
glance_api_version = 2
rados_connect_timeout = -1
volume_backend_name = ceph

# Restart the service (restart on both controller and compute if they are separate)
[root@controller ~]# systemctl restart openstack-cinder-volume.service
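A quick way to confirm the new backend is registered is to check the volume service list; the controller@ceph entry should be enabled and up (a sketch using the standard OpenStack CLI):

[root@controller ~]# openstack volume service list
# cinder-volume on controller@ceph should show State 'up'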
  • Create a volume to test
[root@controller ceph]# source /etc/keystone/admin-openrc.sh


# Create a volume type
[root@controller ceph]# cinder type-create ceph
+--------------------------------------+------+-------------+-----------+
| ID | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 9f7f310f-93ad-4a84-b2a5-b6df6b4bd328 | ceph | - | True |
+--------------------------------------+------+-------------+-----------+


# Point the ceph volume type at the ceph backend
[root@controller ceph]# cinder type-key ceph set volume_backend_name=ceph


# Create a 1 GB test volume
[root@controller ceph]# cinder create --volume-type ceph --name ceph-test 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-06-20T03:13:48.000000 |
| description | None |
| encrypted | False |
| id | 55a1bae4-ee7e-4bdb-9ce3-68f603e64c95 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | ceph-test |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1ca63c9e48af499fad25bdff5d9d9bac |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | None |
| user_id | 96e338b31eda48b1adc0cf71be780a46 |
| volume_type | ceph |
+--------------------------------+--------------------------------------+


# Inspect the volume
[root@controller ceph]# cinder show ceph-test
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-06-20T03:13:48.000000 |
| description | None |
| encrypted | False |
| id | 55a1bae4-ee7e-4bdb-9ce3-68f603e64c95 |
| metadata | |
| migration_status | None |
| multiattach | False |
| name | ceph-test |
| os-vol-host-attr:host | controller@ceph#ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1ca63c9e48af499fad25bdff5d9d9bac |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2023-06-20T03:13:50.000000 |
| user_id | 96e338b31eda48b1adc0cf71be780a46 |
| volume_type | ceph |
+--------------------------------+--------------------------------------+
# status 'available' means the volume was created successfully
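The same volume can be seen from the Ceph side: Cinder's RBD driver stores each volume as an RBD image named volume-<volume id> in the volumes pool, so a quick cross-check looks like this:

[root@controller ceph]# rbd ls volumes
# expected: volume-55a1bae4-ee7e-4bdb-9ce3-68f603e64c95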

Ceph check

[root@ceph1 ~]# ceph -s
cluster:
id: c5334dbd-cee4-4d97-b1e8-2b2994542ab4
health: HEALTH_WARN
application not enabled on 1 pool(s)
# i.e. one pool has no application tag enabled
services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 59m)
mgr: ceph1(active, since 59m), standbys: ceph2, ceph3
osd: 3 osds: 3 up (since 59m), 3 in (since 17h)

data:
pools: 1 pools, 128 pgs
objects: 5 objects, 132 B
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs: 128 active+clean

io:
client: 2.0 KiB/s rd, 2 op/s rd, 0 op/s wr



[root@ceph1 ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool 'volumes'
use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.


# volumes is used by Cinder as an RBD pool, so enable the rbd application on it
[root@ceph1 ~]# ceph osd pool application enable volumes rbd
enabled application 'rbd' on pool 'volumes'


[root@ceph1 ~]# ceph -s
cluster:
id: c5334dbd-cee4-4d97-b1e8-2b2994542ab4
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 63m)
mgr: ceph1(active, since 63m), standbys: ceph2, ceph3
osd: 3 osds: 3 up (since 63m), 3 in (since 18h)

data:
pools: 1 pools, 128 pgs
objects: 5 objects, 132 B
usage: 3.0 GiB used, 297 GiB / 300 GiB avail
pgs: 128 active+clean

Integrating Ceph with OpenStack: Glance

Create the images pool

  • And create a cephx user for the images pool
[root@controller ceph]# ceph osd pool create images 64
pool 'images' created


[root@controller ceph]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
key = AQC4cZJk/oswHhAATXhKypxgZPHBTPPGbOOpSw==

Copy the keyring

  • And fix the keyring's ownership
[root@controller ceph]# ceph auth get-or-create client.glance | ssh controller tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
key = AQC4cZJk/oswHhAATXhKypxgZPHBTPPGbOOpSw==


[root@controller ceph]# chown glance:glance /etc/ceph/ceph.client.glance.keyring

Configure the Glance service

The Ceph side is now fully configured. Next, configure the OpenStack Glance module to use Ceph as its backend store, so that virtual machine images are kept in Ceph RBD.

[root@controller ~]# vim /etc/glance/glance-api.conf

[DEFAULT]
rpc_backend = rabbit
show_image_direct_url = True


[glance_store]
#stores = file,http
#default_store = file
#filesystem_store_datadir = /var/lib/glance/images/
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8


# Restart the service
[root@controller ~]# openstack-service restart glance-api

Verification

To boot virtual machines from Ceph, Glance images must be in RAW format, so convert the qcow2 image first.

[root@controller ~]# qemu-img convert -p -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros.raw
(100.00/100%)
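Before uploading, the converted file's format can be double-checked (a small sanity check):

[root@controller ~]# qemu-img info cirros.raw
# 'file format: raw' confirms the conversion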
  • Upload a test image
[root@controller ~]# glance image-create --name="CirrOS-ceph" --disk-format=raw --container-format=bare < /root/cirros.raw
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | 56730d3091a764d5f8b38feeef0bfcef |
| container_format | bare |
| created_at | 2023-06-21T03:55:10Z |
| direct_url | rbd://c5334dbd-cee4-4d97-b1e8-2b2994542ab4/images/eb5e7a96-4bdd-4f9a- |
| | 94f4-dd6e93950c16/snap |
| disk_format | raw |
| id | eb5e7a96-4bdd-4f9a-94f4-dd6e93950c16 |
| min_disk | 0 |
| min_ram | 0 |
| name | CirrOS-ceph |
| owner | 1ca63c9e48af499fad25bdff5d9d9bac |
| protected | False |
| size | 41126400 |
| status | active |
| tags | [] |
| updated_at | 2023-06-21T03:55:12Z |
| virtual_size | None |
| visibility | shared |
+------------------+----------------------------------------------------------------------------------+

Query the Ceph images pool

The newly added image can be verified by looking up its ID in the Ceph images pool.

The ID stored in the Ceph pool matches the ID of the image just created, while the new image does not appear under Glance's original default storage path (only images uploaded before the switch are there).

[root@controller ~]#  rbd ls images
eb5e7a96-4bdd-4f9a-94f4-dd6e93950c16

[root@controller ~]# ll /var/lib/glance/images/
total 511476
-rw-r-----. 1 glance glance 510459904 Jun 19 15:37 6de3010c-c851-4622-afaf-06546f5419a5
-rw-r-----. 1 glance glance 13287936 Jun 19 15:53 9a9cea64-5659-43cc-bc06-7ce3ea08ed41

Integrating Ceph with OpenStack: Nova

Create the vms pool

[root@controller ceph]# ceph osd pool create vms 32
pool 'vms' created


There are two ways for nova-compute to use RBD. One is to attach a Cinder volume to the virtual machine; the other is to boot the virtual machine directly on RBD, in which case Nova creates an RBD image, imports the contents of the Glance image into it, and hands it to libvirt.

Here we verify the first way Nova uses Ceph.
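For reference only (not applied in this walkthrough): if the second way were used, nova-compute's ephemeral disks would additionally be pointed at the vms pool. A sketch of the typical [libvirt] options, reusing the cinder user and secret UUID from above:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 9226f2ac-8796-424d-8062-fd24ee833f8c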

Edit the configuration file

[root@controller ~]# vim /etc/nova/nova.conf 

[libvirt]
virt_type = qemu
inject_key = True
rbd_user = cinder
rbd_secret_uuid = 9226f2ac-8796-424d-8062-fd24ee833f8c

# Restart the service
[root@controller ~]# systemctl restart openstack-nova-compute.service

Create a virtual machine

[root@controller ~]# openstack server create --image eb5e7a96-4bdd-4f9a-94f4-dd6e93950c16 --network net2 --flavor flavor-1 ceph-vm1
+-------------------------------------+----------------------------------------------------+
| Field | Value |
+-------------------------------------+----------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | 5uShLetV8GQc |
| config_drive | |
| created | 2023-06-21T04:36:36Z |
| flavor | flavor-1 (93083d88-ae87-4dcc-a22f-bcd7472daf1f) |
| hostId | |
| id | b9e555a2-d1b2-4ab5-a9f9-3d1beb665253 |
| image | CirrOS-ceph (eb5e7a96-4bdd-4f9a-94f4-dd6e93950c16) |
| key_name | None |
| name | ceph-vm1 |
| progress | 0 |
| project_id | 1ca63c9e48af499fad25bdff5d9d9bac |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2023-06-21T04:36:36Z |
| user_id | 96e338b31eda48b1adc0cf71be780a46 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------+



[root@controller ~]# openstack server list
+--------------------------------------+----------+---------+--------------------+-------------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------+---------+--------------------+-------------+----------+
| b9e555a2-d1b2-4ab5-a9f9-3d1beb665253 | ceph-vm1 | ACTIVE | net2=192.168.100.9 | CirrOS-ceph | flavor-1 |
| 8505ec6a-2d59-4732-9c96-aa0abbd59e01 | cirros | SHUTOFF | net2=192.168.100.7 | cirros | flavor-1 |
| 2457a7bd-cb20-48bf-a8ad-302a28775e14 | test | SHUTOFF | net=192.168.1.27 | centos7.5 | flavor-3 |
+--------------------------------------+----------+---------+--------------------+-------------+----------+
# ceph-vm1 shows ACTIVE, which is what we expect
  • Attach the volume
[root@controller ~]# openstack server add volume ceph-vm1 ceph-test
[root@controller ~]# cinder list
+--------------------------------------+--------+-----------+------+-------------+----------+--------------------------------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+--------+-----------+------+-------------+----------+--------------------------------------+
| 55a1bae4-ee7e-4bdb-9ce3-68f603e64c95 | in-use | ceph-test | 1 | ceph | false | b9e555a2-d1b2-4ab5-a9f9-3d1beb665253 |
+--------------------------------------+--------+-----------+------+-------------+----------+--------------------------------------+
# The volume attached successfully; the first way for nova-compute to use RBD is verified.
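As a final cross-check on the Ceph side, rbd can show that the attached volume now has a watcher (the QEMU process backing ceph-vm1); a sketch:

[root@controller ~]# rbd status volumes/volume-55a1bae4-ee7e-4bdb-9ce3-68f603e64c95
# a 'Watchers:' entry from the compute host indicates the VM has the RBD image open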