Persistent storage¶
Warning
To ensure persistence in a Swarm cluster, it is mandatory to have a shared file system.
This chapter presents some solutions for sharing data between multiple nodes.
The following solutions exist (this list is not exhaustive):
A Samba/NFS server
An S3 storage mounted as a filesystem (s3fs-fuse, CephFS, …)
An S3 storage mounted as a block device (RADOS Block Device, …)
Note
The performance of the listed solutions has not been tested.
Using an S3-compatible storage¶
Prerequisites¶
s3fs-fuse installed on all machines running the stack container (see the installation sketch after this list).
An S3-compatible storage.
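As a sketch for the first prerequisite: on a Debian-based distribution, s3fs-fuse is typically provided by the s3fs package (the package name and manager vary by distribution and are an assumption here):
sudo apt-get install s3fs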
How to configure OnSphere¶
Operation on each machine running the container¶
Create a mount point for the S3 bucket (for example /mnt/s3).
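For example, to create the /mnt/s3 mount point used in the rest of this chapter (root privileges may be required):
sudo mkdir -p /mnt/s3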
Create a file containing the ACCESS_KEY_ID and SECRET_ACCESS_KEY, with read and write permissions for its owner only.
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
Mount the S3 bucket
s3fs <bucket> /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs,use_path_request_style,url=http://<hostname or ip>:<port>/
Warning
If an error such as
mount /mnt/s3:/var/lib/containers/storage/overlay, flags: 0x1000: permission denied
appears, the user_allow_other option needs to be enabled in /etc/fuse.conf and the -o allow_root option needs to be added to the s3fs command.
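To make the mount survive a reboot, it can also be declared in /etc/fstab. A minimal sketch, assuming a bucket named my-bucket, the credentials file created above stored at /root/.passwd-s3fs, and a placeholder endpoint http://192.168.1.10:9000/:
my-bucket /mnt/s3 fuse.s3fs _netdev,allow_other,passwd_file=/root/.passwd-s3fs,use_path_request_style,url=http://192.168.1.10:9000/ 0 0
The _netdev option delays the mount until the network is up; allow_other requires user_allow_other in /etc/fuse.conf as described in the warning above.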
Operation on the configuration¶
Create the volume in stack.volumes:
variables-data:
  driver: local
  driver_opts:
    type: "none"
    o: "bind"
    device: "/mnt/s3/variables"
Use the volume in the module.service definition, as sketched below.
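A minimal sketch of such a mount, assuming the service accepts Docker Compose-style volume declarations (the container path /data/variables is a placeholder, not taken from the original documentation):
volumes:
  - variables-data:/data/variables
The variables-data name must match the volume declared in stack.volumes above.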
Push the configuration