Ceph RGW and S3 Interface Operations

2024-07-04

RGW Object Storage

What is object storage?
Think of it as an effectively unlimited storage space whose contents can be accessed through an API at any time, from anywhere. Familiar services such as Alibaba Cloud OSS, Qiniu Cloud Storage, Baidu Netdisk, and private network drives are all object storage.

Ceph is a distributed object storage system that exposes an object storage interface through its object gateway, the RADOS Gateway (radosgw). The RADOS Gateway builds on the librgw (RADOS Gateway library) and librados libraries, allowing applications to connect to the Ceph object store. Through its RESTful APIs, Ceph provides one of the most accessible and stable multi-tenant object storage solutions available.

The RADOS Gateway provides RESTful interfaces that let user applications store data in a Ceph cluster. The gateway interfaces have the following characteristics:

  • Swift-compatible: object storage functionality exposed through the OpenStack Swift API
  • S3-compatible: object storage functionality exposed through the Amazon S3 API
  • Admin API: also called the management or native API; applications can use it directly to obtain access to the storage system and to administer it

Beyond the interfaces above, the object store also offers:

  • User authentication
  • Usage analytics
  • Multipart upload (objects are automatically split, uploaded in parts, and reassembled)
  • Multi-site deployment and multi-site replication
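The multipart upload mentioned above works by cutting a large object into fixed-size parts, uploading each part independently, and letting the gateway reassemble them server-side. A minimal sketch of the client-side part calculation, assuming the conventional S3 minimum part size of 5 MB:

```python
# Sketch: compute the part boundaries a multipart upload would use.
# The 5 MB minimum part size follows the S3 convention, which RGW honours.
PART_SIZE = 5 * 1024 * 1024  # 5 MB

def split_into_parts(total_size, part_size=PART_SIZE):
    """Return a list of (offset, length) tuples covering total_size bytes."""
    parts = []
    offset = 0
    while offset < total_size:
        length = min(part_size, total_size - offset)
        parts.append((offset, length))
        offset += length
    return parts

# A 12 MB object becomes three parts: 5 MB, 5 MB, and a 2 MB remainder.
print(split_into_parts(12 * 1024 * 1024))
```

Each part is uploaded as its own request, so a failed transfer only needs to retry the affected part rather than the whole object.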

Object storage gateway architecture

5.1 Deploying the RGW storage gateway

Using Ceph object storage requires the object storage gateway (RADOSGW).
The ceph-radosgw package was already installed earlier:

[root@ceph01 ceph]# ceph-radosgw
[root@ceph01 ceph]# rpm -qa|grep ceph-radosgw
ceph-radosgw-14.2.22-0.el7.x86_64

Deploy the object storage gateway.
Here ceph01 serves as the gateway host:

[root@ceph01 ceph]# cd /etc/ceph/
[root@ceph01 ceph]# ceph-deploy rgw create ceph01

RGW is now deployed:

[root@ceph01 ceph]# ceph -s|grep rgw
    rgw: 1 daemon active (ceph01)
[root@ceph01 ceph]# ss -tunlp | grep 7480
tcp    LISTEN     0      128       *:7480                  *:*                   users:(("radosgw",pid=10115,fd=47))
tcp    LISTEN     0      128    [::]:7480               [::]:*                   users:(("radosgw",pid=10115,fd=48))

Change the default port to 80

[root@ceph01 ceph]# vim /etc/ceph/ceph.conf
[client.rgw.ceph01]
rgw_frontends = "civetweb port=80"

# client.rgw.[hostname] -- note that the hostname in the section name must match

Push the configuration to the other nodes and restart the service

[root@ceph01 ceph]# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03
[root@ceph01 ceph]# systemctl restart ceph-radosgw.target
[root@ceph01 ceph]# ss -tunlp | grep radosgw
tcp    LISTEN     0      128       *:80                    *:*                   users:(("radosgw",pid=10867,fd=44))

To enable HTTPS:

[root@ceph01 ceph]# vim /etc/ceph/ceph.conf
[client.rgw.ceph01]
rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/keyandcert.pem
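Civetweb can also serve HTTP and HTTPS side by side: appending `s` to a port number marks it as SSL, and multiple ports are joined with `+`. A sketch, assuming the same certificate path as above:

```
[client.rgw.ceph01]
rgw_frontends = civetweb port=80+443s ssl_certificate=/etc/ceph/keyandcert.pem
```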

Accessing the object storage gateway

[root@ceph01 ceph]# radosgw-admin user create --uid ceph-s3-user --display-name "Ceph S3 User Demo sunday"
{
    "user_id": "ceph-s3-user",
    "display_name": "Ceph S3 User Demo sunday",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "ceph-s3-user",
            "access_key": "H190EO33F4EXTOCGJEP7",
            "secret_key": "hAYlNFcm2QSCT3oFiCoMfaapkQY7RqKdFQqBQF3Q"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "default_storage_class": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}
# If you lose the key pair, display it again with:
# radosgw-admin user info --uid ceph-s3-user

Copy the key pair from the output above:

"access_key": "H190EO33F4EXTOCGJEP7"
"secret_key": "hAYlNFcm2QSCT3oFiCoMfaapkQY7RqKdFQqBQF3Q"
yum install -y python-boto
vim s3.py

#!/usr/bin/env python
import boto
import boto.s3.connection

access_key = "H190EO33F4EXTOCGJEP7"                      # change to your access key
secret_key = "hAYlNFcm2QSCT3oFiCoMfaapkQY7RqKdFQqBQF3Q"  # change to your secret key

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='192.168.77.41', port=80,  # change to your RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# Create a bucket, then list every bucket the user owns
bucket = conn.create_bucket('ceph-s3-bucket')
for bucket in conn.get_all_buckets():
    print "{name}".format(name=bucket.name)

[root@ceph01 ceph]# python s3.py 
ceph-s3-bucket
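Behind the scenes, boto authenticates each request by signing it with the secret key; for a classic path-style endpoint like this one that is typically AWS Signature Version 2, an HMAC-SHA1 over a canonical string-to-sign, base64-encoded. A minimal sketch of the computation (the date and resource values here are illustrative):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, method, date, resource,
            content_md5="", content_type=""):
    """Compute an AWS Signature v2 value for an S3-style request."""
    string_to_sign = "\n".join(
        [method, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Illustrative values only -- the key pair is the one created above.
sig = sign_v2("hAYlNFcm2QSCT3oFiCoMfaapkQY7RqKdFQqBQF3Q",
              "GET", "Thu, 04 Jul 2024 00:00:00 GMT", "/ceph-s3-bucket/")
print("Authorization: AWS H190EO33F4EXTOCGJEP7:" + sig)
```

The gateway recomputes the same HMAC from its stored copy of the secret key and compares; the secret itself never travels over the wire.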

After running the Python script, we can see that a default.rgw.buckets.index pool has been created to hold the bucket index:

[root@ceph01 ceph]# ceph osd lspools
1 sunday
2 .rgw.root
3 default.rgw.control
4 default.rgw.meta
5 default.rgw.log
6 default.rgw.buckets.index

Command-line access

The SDK approach is awkward for day-to-day operations; administrators generally prefer the command line.

[root@ceph01 ceph]# yum install -y s3cmd

Configuration

[root@ceph01 ceph]# s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: H190EO33F4EXTOCGJEP7 # paste the access key
Secret Key: hAYlNFcm2QSCT3oFiCoMfaapkQY7RqKdFQqBQF3Q # paste the secret key
Default Region [US]: CN

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.77.41:80 # the RGW endpoint

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 192.168.77.41:80/%(bucket)s  # bucket access template

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:  # left empty: no encryption
Path to GPG program [/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no # HTTPS disabled

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: # press Enter: no proxy

New settings:
  Access Key: H190EO33F4EXTOCGJEP7
  Secret Key: hAYlNFcm2QSCT3oFiCoMfaapkQY7RqKdFQqBQF3Q
  Default Region: CN
  S3 Endpoint: 192.168.77.41:80
  DNS-style bucket+hostname:port template for accessing a bucket: 192.168.77.41:80/%(bucket)s
  Encryption password: 
  Path to GPG program: /bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y # test access
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y # save the configuration
Configuration saved to '/root/.s3cfg' # config file location
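The two endpoint values entered above correspond to the two S3 addressing styles: path-style (bucket in the URL path) and DNS-style (bucket as a subdomain). With a bare IP endpoint like this cluster's, only the path style can resolve; a sketch of the difference (the DNS-style hostname is a made-up example):

```python
def path_style_url(endpoint, bucket, key=""):
    """Path-style: bucket goes in the path -- works with an IP endpoint."""
    return "http://{}/{}/{}".format(endpoint, bucket, key)

def dns_style_url(endpoint, bucket, key=""):
    """DNS-style: bucket becomes a subdomain -- needs wildcard DNS to RGW."""
    return "http://{}.{}/{}".format(bucket, endpoint, key)

print(path_style_url("192.168.77.41:80", "ceph-s3-bucket", "etc/hosts"))
print(dns_style_url("s3.example.com", "ceph-s3-bucket", "etc/hosts"))
```

This is also why the bucket template was set to `192.168.77.41:80/%(bucket)s` rather than the default `%(bucket)s.s3.amazonaws.com`.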

Listing with s3cmd

[root@ceph01 ~]# s3cmd ls 
2024-04-16 17:09  s3://ceph-s3-bucket

Creating a bucket with s3cmd

[root@ceph01 ~]# s3cmd mb s3://s3cmd-sunday-demo
Bucket 's3://s3cmd-sunday-demo/' created 

# ceph-s3-bucket is currently empty
[root@ceph01 ~]# s3cmd ls s3://ceph-s3-bucket

Uploading with s3cmd

[root@ceph01 ~]# s3cmd put /etc/ s3://ceph-s3-bucket/etc/ --recursive

If put fails with ERROR: S3 error: 416 (InvalidRange),
add the following parameter to ceph.conf:

[root@ceph01 ceph]# vim /etc/ceph/ceph.conf 
...
mon_max_pg_per_osd = 1000

[root@ceph01 ceph]# ceph-deploy --overwrite-conf config push ceph01 ceph02 ceph03

# and restart the monitors
[root@ceph01 ceph]# systemctl restart ceph-mon@ceph01.service
[root@ceph01 ceph]# ssh ceph02 "systemctl restart ceph-mon@ceph02.service"
[root@ceph01 ceph]# ssh ceph03 "systemctl restart ceph-mon@ceph03.service"

Downloading with s3cmd

[root@ceph01 ceph]# s3cmd get s3://ceph-s3-bucket/etc/hosts proxy-s3
download: 's3://ceph-s3-bucket/etc/hosts' -> 'proxy-s3'  [1 of 1]
 221 of 221   100% in    0s     4.99 KB/s  done

Deleting with s3cmd

[root@ceph01 ceph]# s3cmd del s3://ceph-s3-bucket/etc/hosts
delete: 's3://ceph-s3-bucket/etc/hosts'

# recursive delete of a directory
[root@ceph01 ceph]# s3cmd del s3://ceph-s3-bucket/etc/ --recursive

The data ultimately lands in the pools:

[root@ceph01 ceph]# ceph osd lspools
1 sunday
2 .rgw.root
3 default.rgw.control
4 default.rgw.meta
5 default.rgw.log
6 default.rgw.buckets.index
7 default.rgw.buckets.data

[root@ceph01 ceph]# s3cmd put /etc/hosts s3://ceph-s3-bucket/etc/
upload: '/etc/hosts' -> 's3://ceph-s3-bucket/etc/hosts'  [1 of 1]
 221 of 221   100% in    0s     9.67 KB/s  done

# data
[root@ceph01 ceph]# rados -p default.rgw.buckets.data ls
7a6ae047-4053-4aa8-a19b-501830c408d3.4774.1_etc/hosts

# index
[root@ceph01 ceph]# rados -p default.rgw.buckets.index ls
.dir.7a6ae047-4053-4aa8-a19b-501830c408d3.4774.1
.dir.7a6ae047-4053-4aa8-a19b-501830c408d3.4774.2
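The RADOS object names above follow a fixed pattern: data objects are stored as `<bucket marker>_<S3 key>`, and index objects as `.dir.<bucket instance id>`, where the marker identifies the bucket instance. A small sketch that splits a data-object name back into its parts:

```python
def parse_rgw_data_object(rados_name):
    """Split an RGW data-object name into (bucket marker, S3 key).

    RGW stores each object under '<bucket marker>_<key>'; the marker is
    the bucket instance id and contains no underscore, so splitting at
    the first '_' recovers both halves.
    """
    marker, _, key = rados_name.partition("_")
    return marker, key

name = "7a6ae047-4053-4aa8-a19b-501830c408d3.4774.1_etc/hosts"
marker, key = parse_rgw_data_object(name)
print(marker)  # 7a6ae047-4053-4aa8-a19b-501830c408d3.4774.1
print(key)     # etc/hosts
```

Comparing the marker against the `.dir.*` names in default.rgw.buckets.index shows which index object tracks the bucket that holds `etc/hosts`.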
