Persistent storage

Warning

To ensure persistence in a Swarm cluster, it is mandatory to have a shared file system.

This chapter presents some solutions for sharing data between multiple nodes.

The following solutions exist (this list is not exhaustive):

  • A SAMBA/NFS server

  • An S3-compatible or other object storage mounted as a file system (s3fs-fuse, CephFS, …)

  • An object storage mounted as a block device (RADOS Block Device, …)

  • A GlusterFS volume

Note

The performance of the listed solutions has not been tested.

Using an S3-compatible storage

Prerequisites

  • s3fs-fuse installed on every machine running the stack's containers.

  • An S3-compatible storage.

How to configure OnSphere

Operations on each machine running the containers

  1. Create a mount point for the S3 bucket (for example /mnt/s3).
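
For example, the mount point can be created with:

mkdir -p /mnt/s3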

  2. Create a file containing the ACCESS_KEY_ID and SECRET_ACCESS_KEY, readable and writable by its owner only.

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs
  3. Mount the S3 bucket.

s3fs <bucket> /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs,use_path_request_style,url=http://<hostname or ip>:<port>/

Warning

If an error such as mount /mnt/s3:/var/lib/containers/storage/overlay, flags: 0x1000: permission denied appears, the user_allow_other option must be enabled in /etc/fuse.conf and the allow_root option must be added to the -o option list of the s3fs command.
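
As an illustration, assuming user_allow_other is not already present in /etc/fuse.conf, this could look as follows (run as root):

# Enable the user_allow_other FUSE option
echo "user_allow_other" >> /etc/fuse.conf

# Mount again with allow_root added to the option list
s3fs <bucket> /mnt/s3 -o allow_root,passwd_file=${HOME}/.passwd-s3fs,use_path_request_style,url=http://<hostname or ip>:<port>/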

Operations on the configuration

  1. Create the volume in stack.volumes

variables-data:
  driver: local
  driver_opts:
    type: "none"
    o: "bind"
    device: "/mnt/s3/variables"
  2. Use the volume in the module.service (see the sketch after this list)

  3. Push the configuration
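
The exact schema of the service entry depends on the module; as a minimal sketch, assuming the service accepts a Compose-style volumes list, and with the service name variables-module and the container path /data/variables used purely as placeholders:

# Hypothetical service entry; only the reference to the variables-data volume matters here
variables-module:
  volumes:
    - "variables-data:/data/variables"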