I have to correct myself. It also fails on an export with "sync" mode. Here is
an strace on the client (strace ln envs/satwindspy/include/ffi.h
mambaforge/pkgs/libffi-3.3-h58526e2_2/include/ffi.h):
[...]
stat("mambaforge/pkgs/libffi-3.3-h58526e2_2/include/ffi.h", 0x7ffdc5c32820) = -1 ENOENT (No such file or directory)
lstat("envs/satwindspy/include/ffi.h", {st_mode=S_IFREG|0664, st_size=13934, ...}) = 0
linkat(AT_FDCWD, "envs/satwindspy/include/ffi.h", AT_FDCWD, "mambaforge/pkgs/libffi-3.3-h58526e2_2/include/ffi.h", 0) = -1 EROFS (Read-only file system)
[...]
write(2, "ln: ", 4ln: ) = 4
write(2, "failed to create hard link 'mamb"..., 80failed to create hard link 'mambaforge/pkgs/libffi-3.3-h58526e2_2/include/ffi.h') = 80
[...]
write(2, ": Read-only file system", 23: Read-only file system) = 23
write(2, "\n", 1
) = 1
lseek(0, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
close(0) = 0
close(1) = 0
close(2) = 0
exit_group(1) = ?
+++ exited with 1 +++
Does anyone have any advice?
Thanks!
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Frank Schilder <frans(a)dtu.dk>
Sent: Wednesday, March 22, 2023 2:44 PM
To: ceph-users(a)ceph.io
Subject: [ceph-users] ln: failed to create hard link 'file name': Read-only file system
Hi all,
on an NFS re-export of a ceph-fs (kernel client) I observe a very strange error. I'm
untarring a large package (1.2 GB), and after some time I get these errors:
ln: failed to create hard link 'file name': Read-only file system
The strange thing is that the error seems to be only temporary. When I used "ln src dst"
for manual testing, the command failed as above. However, when I then tried "ln -v
src dst", that command created the hard link with exactly the same path arguments.
During the period when the error occurs, I can't see any file system in read-only mode,
neither on the NFS client nor on the NFS server. Oddly, file creation and writes still
work; it's only hard-link creation that fails.
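To confirm that the failure really is transient, a small retry probe like the one below could help. This is only a sketch; `try_link` is a hypothetical helper, and you would point it at a source/destination pair on the affected NFS mount:

```shell
# try_link SRC DST: attempt to create a hard link, retrying a few times.
# On the affected mount, a transient EROFS should show up as one or more
# failed attempts followed by a success; a healthy filesystem succeeds
# on the first attempt.
try_link() {
    src=$1 dst=$2
    for i in 1 2 3 4 5; do
        if ln "$src" "$dst" 2>/dev/null; then
            echo "attempt $i: link created"
            return 0
        fi
        echo "attempt $i: ln failed, retrying" >&2
        sleep 1
    done
    echo "giving up after 5 attempts"
    return 1
}
```

Running this in a loop over many files while the untar is in progress would show whether the error correlates with load on the ceph-fs mount.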
For details, the set-up is:
file-server: mount ceph-fs at /shares/path, export /shares/path as nfs4 to other server
other server: mount /shares/path as NFS
More precisely, on the file-server:
fstab: MON-IPs:/shares/folder /shares/nfs/folder ceph defaults,noshare,name=NAME,secretfile=sec.file,mds_namespace=FS-NAME,_netdev 0 0
exports: /shares/nfs/folder -no_root_squash,rw,async,mountpoint,no_subtree_check DEST-IP
On the host at DEST-IP:
fstab: FILE-SERVER-IP:/shares/nfs/folder /mnt/folder nfs defaults,_netdev 0 0
Both the file server and the client server are virtual machines. The file server runs
CentOS 8 Stream (4.18.0-338.el8.x86_64) and the client machine runs AlmaLinux 8
(4.18.0-425.13.1.el8_7.x86_64).
When I change the NFS export from "async" to "sync", everything works.
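For reference, the working variant of the export line (same as above with "async" replaced by "sync") would look like this:

```
exports: /shares/nfs/folder -no_root_squash,rw,sync,mountpoint,no_subtree_check DEST-IP
```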
However, that's a rather bad workaround, not a solution. Although this looks like
an NFS issue, I'm afraid it is a problem with hard links and ceph-fs. It looks like a
race between scheduling and executing operations on the ceph-fs kernel mount.
Has anyone seen something like that?
Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users(a)ceph.io
To unsubscribe send an email to ceph-users-leave(a)ceph.io