Hi,
One last clarification: if I create a new FS, I can create subvolumes; it is the existing FS that I would like to fix. Thank you for your help.
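For reference, the working test on a fresh filesystem looked roughly like this (the names below are only placeholders, not my real pools):
# ceph fs volume create testfs
# ceph fs subvolumegroup create testfs csi
# ceph fs subvolume create testfs testsub --group_name csi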
On Fri, 23 Jun 2023 at 10:55, karon karon <karon.geek(a)gmail.com> wrote:
Hello,
I have recently been using CephFS, version 17.2.6.
I have a pool named "data" and an fs named "kube".
It was working fine until a few days ago; now I can no longer create a new
subvolume, and it gives me the following error:
Error EINVAL: invalid value specified for ceph.dir.subvolume
Here is the command used:
ceph fs subvolume create kube newcsivol --pool_layout data
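Since the error mentions the ceph.dir.subvolume attribute, I imagine I could also check that vxattr on the group directories from a client mount (the mount point /mnt/kube below is only an example, and I am not sure my client exposes the attribute for reading):
# getfattr -n ceph.dir.subvolume /mnt/kube/volumes
# getfattr -n ceph.dir.subvolume /mnt/kube/volumes/csi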
From what I understand, it seems that it creates the subvolume but
immediately moves it to the trash!? Here is the log:
> 2023-06-23T08:30:53.307+0000 7f2b929d2700 0 log_channel(audit) log [DBG] : from='client.86289 -' entity='client.admin' cmd=[{"prefix": "fs subvolume create", "vol_name": "kube", "sub_name": "newcsivol", "group_name": "csi", "pool_layout": "data", "target": ["mon-mgr", ""]}]: dispatch
> 2023-06-23T08:30:53.307+0000 7f2b8a1d1700 0 [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(group_name:csi, pool_layout:data, prefix:fs subvolume create, sub_name:newcsivol, target:['mon-mgr', ''], vol_name:kube) < ""
> 2023-06-23T08:30:53.327+0000 7f2b8a1d1700 0 [volumes INFO volumes.fs.operations.versions.subvolume_v2] cleaning up subvolume with path: newcsivol
> 2023-06-23T08:30:53.331+0000 7f2b8a1d1700 0 [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/csi/newcsivol'' moved to trashcan
> 2023-06-23T08:30:53.331+0000 7f2b8a1d1700 0 [volumes INFO volumes.fs.async_job] queuing job for volume 'kube'
> 2023-06-23T08:30:53.335+0000 7f2b8a1d1700 0 [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(group_name:csi, pool_layout:data, prefix:fs subvolume create, sub_name:newcsivol, target:['mon-mgr', ''], vol_name:kube) < ""
> 2023-06-23T08:30:53.335+0000 7f2b8a1d1700 -1 mgr.server reply reply (22) Invalid argument invalid value specified for ceph.dir.subvolume
> 2023-06-23T08:30:53.339+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.339+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.339+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.339+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.339+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.363+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.363+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.363+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.363+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.363+0000 7f2b461bf700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.383+0000 7f2b479c2700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.383+0000 7f2b479c2700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.383+0000 7f2b479c2700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.383+0000 7f2b479c2700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.383+0000 7f2b479c2700 -1 client.0 error registering admin socket command: (17) File exists
> 2023-06-23T08:30:53.507+0000 7f2b3ff33700 0 [prometheus INFO cherrypy.access.139824530773776] 192.168.240.231 - - [23/Jun/2023:08:30:53] "GET /metrics HTTP/1.1" 200 194558 "" "Prometheus/2.33.4"
> 2023-06-23T08:30:54.219+0000 7f2b3ddaf700 0 [dashboard INFO request] [172.29.2.142:33040] [GET] [200] [0.003s] [admin] [22.0B] /api/prometheus/notifications
> 2023-06-23T08:30:54.223+0000 7f2b929d2700 0 log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
> 2023-06-23T08:30:54.227+0000 7f2b3a5a8700 0 [dashboard INFO request] [172.29.2.142:49348] [GET] [200] [0.019s] [admin] [22.0B] /api/prometheus
> 2023-06-23T08:30:54.227+0000 7f2b929d2700 0 log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
> 2023-06-23T08:30:54.231+0000 7f2b3d5ae700 0 [dashboard INFO request] [172.29.2.142:39414] [GET] [200] [0.022s] [admin] [9.3K] /api/prometheus/rules
> 2023-06-23T08:30:54.275+0000 7f2ba39d4700 0 log_channel(cluster) log [DBG] : pgmap v2116480: 145 pgs: 145 active+clean; 2.8 GiB data, 12 GiB used, 1.5 TiB / 1.5 TiB avail; 5.5 KiB/s wr, 0 op/s
My fs info:
# ceph fs ls
> name: kube, metadata pool: metadata, data pools: [data ]
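In case it helps, I can also gather more details about the volume, the subvolume group, and the pools on my side:
# ceph fs subvolumegroup ls kube
# ceph fs subvolume ls kube --group_name csi
# ceph osd pool ls detail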
Thanks for your help.
Best regards,
Karim