
.. SPDX-License-Identifier: GPL-2.0
============================
Ceph Distributed File System
============================
Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.
Basic features include:

 * High availability and reliability.  No single point of failure.
 * N-way replication of data across storage nodes
When storage nodes fail, data is re-replicated in a distributed fashion by
the storage nodes themselves (with some minimal coordination from a cluster
monitor), making the system extremely efficient and scalable.
Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures.  The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads.  In
particular, inodes with only a single link are embedded in directories,
allowing entire directories of dentries and inodes to be loaded into its
cache with a single I/O operation.
The system offers automatic data rebalancing/migration when scaling from a
small cluster of just a few nodes to many hundreds, without requiring an
administrator to carve the data set into static volumes or go through the
tedious process of migrating data between servers.  When the file system
approaches full, new nodes can be easily added and things will "just work."
Ceph includes a flexible snapshot mechanism that allows a user to create a
snapshot on any subdirectory (and its nested contents) in the system.
Snapshot creation and deletion are as simple as 'mkdir .snap/foo' and
'rmdir .snap/foo'.
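For example, assuming a CephFS mount at /mnt/ceph and a directory named
mydir (both names are purely illustrative), a snapshot could be created,
listed, and removed like this::

 # mkdir /mnt/ceph/mydir/.snap/before-upgrade     (create the snapshot)
 # ls /mnt/ceph/mydir/.snap                       (list snapshots of mydir)
 # rmdir /mnt/ceph/mydir/.snap/before-upgrade     (remove the snapshot)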
Snapshot names cannot exceed 240 characters.  This is because the MDS makes
use of long snapshot names internally, which follow the format
`_<SNAPSHOT-NAME>_<INODE-NUMBER>`.  Since filenames in general can't have
more than 255 characters, and the `<INODE-NUMBER>` component takes 13
characters, a snapshot name can be at most 255 - 1 - 1 - 13 = 240 characters
long (the two subtracted ones account for the two underscores).
Ceph also provides recursive accounting on directories for nested files and
bytes.  That is, a 'getfattr -d foo' on any directory in the system will
reveal the total number of nested regular files and subdirectories, and a
summation of all nested file sizes.  This makes identifying large disk space
consumers relatively quick, as no 'du' or similar recursive scan of the file
system is required.
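As a sketch, assuming the client exposes the ceph.dir.* virtual extended
attributes (the attribute names below come from the CephFS client, not from
this document, and the path is hypothetical)::

 getfattr -d -m 'ceph.dir.*' /mnt/ceph/some/dir   (dump all recursive stats)
 getfattr -n ceph.dir.rbytes /mnt/ceph/some/dir   (total bytes nested below)
 getfattr -n ceph.dir.rfiles /mnt/ceph/some/dir   (total nested regular files)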
Finally, Ceph also allows quotas to be set on any directory in the system.
The quota can restrict the number of bytes or the number of files stored
beneath that point in the directory hierarchy.  Quotas can be set using the
extended attributes 'ceph.quota.max_files' and 'ceph.quota.max_bytes', e.g.::
 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir
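The file-count limit works the same way; a minimal sketch using the
companion 'ceph.quota.max_files' attribute and an arbitrary path::

 setfattr -n ceph.quota.max_files -v 10000 /some/dir
 getfattr -n ceph.quota.max_files /some/dir

As with max_bytes, setting the attribute back to 0 is expected to remove
the limit.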
A limitation of the current quota implementation is that it relies on the
cooperation of the client mounting the file system to stop writers when a
limit is reached.  A modified or adversarial client cannot be prevented from
writing as much data as it needs.
Mount Syntax
============

The basic mount syntax is::

 # mount -t ceph user@fsid.fs_name=/[subdir] mnt -o mon_addr=monip1[:port][/monip2[:port]]
You only need to specify a single monitor, as the client will get the full
list when it connects.  (However, if the monitor you specify happens to be
down, the mount won't succeed.)  The port can be left off if the monitor is
using the default.  So if the monitor is at 1.2.3.4::
 # mount -t ceph cephuser@07fe3187-00d9-42a3-814b-72a4d5e7d5be.cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4
is sufficient.  If /sbin/mount.ceph is installed, a hostname can be used
instead of an IP address and the cluster FSID can be left out (as the mount
helper will fill it in by reading the ceph configuration file)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=mon-addr
Multiple monitor addresses can be passed by separating each address with a
slash (`/`)::

 # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=192.168.1.100/192.168.1.101
Mount Options
=============

  fsid=cluster-id
    FSID of the cluster (from `ceph fsid` on the cluster).
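    For example, the cluster FSID can be obtained with `ceph fsid` on a
    cluster node and then supplied at mount time, either embedded in the
    device string as in the earlier example or, as a sketch, via this
    option (the monitor address is illustrative)::

      # ceph fsid
      07fe3187-00d9-42a3-814b-72a4d5e7d5be
      # mount -t ceph cephuser@cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4,fsid=07fe3187-00d9-42a3-814b-72a4d5e7d5be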
  mount_timeout=X
    Specify the timeout value for mount (in seconds), in the case of a
    non-responsive Ceph file system.  The default is 60 seconds.
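    For example, to allow the mount to wait up to two minutes (a sketch
    reusing the illustrative names and addresses from the examples above)::

      # mount -t ceph cephuser@07fe3187-00d9-42a3-814b-72a4d5e7d5be.cephfs=/ /mnt/ceph -o mon_addr=1.2.3.4,mount_timeout=120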
  nocopyfrom
    Don't use the RADOS 'copy-from' operation to perform remote object
    copies.  Currently it is only used in copy_file_range, which will revert
    to the default VFS implementation if this option is used.
More Information
================

The Linux kernel client source tree is available at

  - https://github.com/ceph/ceph-client.git
and the source for the full system is at

  - https://github.com/ceph/ceph.git
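Either tree can be fetched with plain git, for example::

 git clone https://github.com/ceph/ceph-client.git    (kernel client tree)
 git clone https://github.com/ceph/ceph.git           (full Ceph system)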