Backups
Overview
The Cloudron backup system creates a portable snapshot of the platform data and application data. Each app is backed up independently, allowing it to be restored, cloned or migrated on its own.
Unlike VM snapshots, these backups contain only the information necessary to reinstall Cloudron or an app. For example, application code and system libraries are not part of a backup because Cloudron packages are read-only and can never change. Runtime files (lock files, logs) and temporary files generated by apps are not backed up either. Only the database and app user data are backed up. This design significantly reduces the size of backups.
Storage providers
Amazon S3
To get started:
- Create a bucket in S3.
Lifecycle rules
S3 buckets can have lifecycle rules to automatically remove objects after a certain age. When using the rsync format, these lifecycle rules may remove files from the snapshot directory and will cause the backups to be corrupt. For this reason, we recommend not setting any lifecycle rules that delete objects after a certain age. Cloudron will periodically clean up old backups based on the retention period.
- AWS has two forms of security credentials - root and IAM. When using root credentials, follow the instructions here to create access keys. When using IAM, follow the instructions here to create a user and use the following policy to give the user access to the bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::<your bucket name>",
                "arn:aws:s3:::<your bucket name>/*"
            ]
        }
    ]
}
- In the Cloudron dashboard, choose Amazon S3 from the drop down.
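For reference, these steps can also be done from the command line. Below is a minimal sketch using the AWS CLI; the bucket name, region and user name are placeholders, and it assumes the policy above has been saved as policy.json:
# create the backup bucket (name and region are placeholders)
aws s3api create-bucket --bucket my-cloudron-backups --region us-east-1

# create a dedicated IAM user and attach the policy shown above
aws iam create-user --user-name cloudron-backup
aws iam put-user-policy --user-name cloudron-backup \
    --policy-name cloudron-backup-s3 --policy-document file://policy.json

# print the Access Key Id and Secret Access Key to enter in the Cloudron dashboard
aws iam create-access-key --user-name cloudron-backup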
Backblaze B2
To get started:
- Create a Backblaze B2 bucket
Lifecycle rules
Versioning is enabled by default in Backblaze. This means that despite Cloudron periodically deleting old backups, a copy of them is still retained by Backblaze. Over time, these copies tend to add up and can result in a significant cost. We recommend changing the Lifecycle Settings to Keep only the last version of the file. Given that the Cloudron backups are already versioned by date, you won't need any other copies.
- Create an Access Key and Secret Access Key from the Application Keys section in Backblaze. Be sure to provide read and write access to the bucket. You should restrict access of the key to just the backup bucket.
- Make a note of the keyID and applicationKey. As noted in their docs:
Access Key <your-application-key-id>
Secret Key <your-application-key>
- In the Cloudron dashboard, choose Backblaze B2 from the drop down. The Endpoint URL has the form s3.<region>.backblazeb2.com, where <region> is similar to us-west-004.
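Since B2 exposes an S3-compatible API, you can optionally sanity-check the key with any S3 client before saving it in Cloudron. A sketch using the AWS CLI; the bucket name and endpoint region are placeholders:
# list the backup bucket using the B2 application key
AWS_ACCESS_KEY_ID=<keyID> AWS_SECRET_ACCESS_KEY=<applicationKey> \
    aws s3 ls s3://my-cloudron-backups --endpoint-url https://s3.us-west-004.backblazeb2.com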
CIFS
To get started:
- Hosting providers like Hetzner and OVH provide storage boxes that can be mounted using Samba/CIFS.
- In the Cloudron dashboard, choose CIFS Mount from the drop down.
Hetzner Storage Box
We recommend using SSHFS for Hetzner Storage Box since it is much faster and more storage efficient than CIFS. When using Hetzner Storage Box with CIFS, the Remote Directory is /backup for the main account. For sub accounts, the Remote Directory is /subaccount.
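Before entering the details in Cloudron, it can help to verify that the storage box is reachable over CIFS from the server. A quick check with smbclient, assuming a hypothetical Storage Box account u123456:
# list the backup share; enter the storage box password when prompted
smbclient -U u123456 //u123456.your-storagebox.de/backup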
Cloudflare R2
To get started:
- Create a Cloudflare R2 bucket.
- Generate S3 auth tokens for the bucket.
- In the Cloudron dashboard, choose Cloudflare R2 from the drop down. The S3 endpoint is shown in the Cloudflare dashboard.
Remove the bucket name in the Cloudflare URL
The Cloudflare dashboard shows a URL that contains the bucket name at the end. On Cloudron, you should set the Endpoint without the bucket name at the end.
Contabo Object Storage
To get started:
- Create a Contabo Object Storage bucket.
- Obtain S3 credentials for the storage.
- In the Cloudron dashboard, choose Contabo Object Storage from the drop down.
DigitalOcean Spaces
To get started:
- Create a DigitalOcean Spaces bucket in your preferred region following this guide.
- Create a DigitalOcean Spaces access key and secret key following this guide.
- In the Cloudron dashboard, choose DigitalOcean Spaces from the drop down.
Rate limits
In our tests, we hit a few issues including a missing implementation for copying large files (> 5GB), severe rate limits and poor performance when deleting objects. If you plan on using this provider, keep an eye on your backups. Cloudron will notify admins by email when backups fail.
Exoscale SOS
To get started:
- Create an Exoscale SOS bucket.
- Create an Access Key and Secret Access Key from the Exoscale dashboard.
- In the Cloudron dashboard, choose Exoscale SOS from the drop down.
EXT4
To get started:
- Attach an external EXT4 hard disk to the server. Depending on where your server is located, this can be DigitalOcean Block Storage, AWS Elastic Block Store or Linode Block Storage.
- If required, format it using mkfs.ext4 /dev/<device>. Then, run blkid or lsblk to get the UUID of the disk (see the example after the note below).
- In the Cloudron dashboard, choose EXT4 Mount from the drop down.
Do not add /etc/fstab entry
When choosing this storage provider, do not add an /etc/fstab entry for the mount point. Cloudron will add and manage a systemd mount point.
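As a concrete example of step 2, assuming the disk is attached as /dev/sdb (the device name varies by provider):
# format the disk (this erases all existing data on it)
mkfs.ext4 /dev/sdb

# print the UUID to enter in the Cloudron dashboard
blkid /dev/sdb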
Filesystem
To get started:
- Create a directory on the server where backups will be stored.
External Disk
Having backups reside in the same physical disk as the Cloudron server is dangerous. For this reason, Cloudron will show a warning when you use this provider.
- In the Cloudron dashboard, choose Filesystem from the drop down.
The Use hardlinks option can be checked to make Cloudron hardlink 'same' files across backups to conserve space. This option has little to no effect when using the tgz format.
Filesystem (mountpoint)
Use this provider when the built-in providers (EXT4, CIFS, NFS, SSHFS) don't work for you.
To get started:
- Setup a mount point manually on the server (an example is sketched below).
- In the Cloudron dashboard, choose Filesystem (mountpoint) from the drop down. This option differs from the Filesystem provider in that it checks whether the backup directory is mounted before making a backup. This check ensures that if the mount is down, Cloudron does not back up to the local hard disk.
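As an illustration of step 1, the mount point could be an NFS share managed via /etc/fstab (server name and paths are placeholders). Unlike the EXT4/XFS providers, this provider expects you to manage the mount yourself:
# /etc/fstab - mount an NFS export for backups
nfs.example.com:/export/backups  /mnt/cloudron-backups  nfs4  defaults  0  0

# create the mount point and mount it
mkdir -p /mnt/cloudron-backups
mount /mnt/cloudron-backups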
Google Cloud Storage
To get started:
- Create a Cloud Storage bucket following this guide.
- Create a service account key in JSON format.
- In the Cloudron dashboard, choose Google Cloud Storage from the drop down.
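Step 2 can also be done with the gcloud CLI. A minimal sketch; the project, bucket and service account names are placeholders:
# create a service account for backups
gcloud iam service-accounts create cloudron-backup --project my-project

# grant it object admin access on the backup bucket
gsutil iam ch serviceAccount:cloudron-backup@my-project.iam.gserviceaccount.com:objectAdmin gs://my-cloudron-backups

# create the JSON key to upload in the Cloudron dashboard
gcloud iam service-accounts keys create key.json \
    --iam-account=cloudron-backup@my-project.iam.gserviceaccount.com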
Hetzner Object Storage
To get started:
- Create an Object Storage bucket following this guide.
- Create S3 API keys.
- In the Cloudron dashboard, choose Hetzner Object Storage from the drop down.
IDrive e2
To get started:
- Create an IDrive e2 storage bucket.
- Create an Access Key and Secret Access Key from the IDrive e2 dashboard.
- In the Cloudron dashboard, choose IDrive e2 from the drop down.
IONOS (Profitbricks)
To get started:
- Create a bucket in the S3 Web Console.
- Create Object Storage Keys in the S3 Key Management.
- In the Cloudron dashboard, choose IONOS (Profitbricks) from the drop down.
Linode Object Storage
To get started:
- Create a Linode Object Storage bucket.
- Create an Access Key and Secret Access Key from the Linode dashboard.
- In the Cloudron dashboard, choose Linode Object Storage from the drop down.
Minio
To get started:
- Install Minio following the installation instructions.
Install Minio on another server
Do not setup Minio on the same server as the Cloudron! Using the same server will inevitably result in data loss if something goes wrong with the server's disk. The Minio app on Cloudron is meant for storing assets, not backups.
- Create a bucket on Minio using the Minio CLI or the web interface (see the sketch below).
- In the Cloudron dashboard, choose Minio from the drop down.
- The Endpoint field can also contain a custom port. For example, http://192.168.10.113:9000.
- For HTTPS installations using a self-signed certificate, select the Accept Self-Signed certificate option.
NFS
To get started:
- Setup an external NFSv4 server. If you need help setting up a NFSv4 server, see this article or this guide.
- In the Cloudron dashboard, choose NFS mount from the drop down.
Insecure traffic
Please note that NFS traffic is unencrypted and can be tampered with. For this reason, you must use NFS mounts only on secure private networks. For backups, we recommend enabling encryption to make the setup secure.
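On the NFS server side, the export for the Cloudron server could look like the sketch below; the directory and client IP are placeholders:
# /etc/exports - export the backup directory to the Cloudron server only
/srv/cloudron-backups 10.0.0.5(rw,sync,no_subtree_check)

# apply the export
exportfs -ra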
OVH Object Storage
OVH Public Cloud has OpenStack Swift Object Storage and supports S3 API. Getting S3 credentials is a bit convoluted, but possible as follows:
- Download the OpenStack RC file from the Horizon interface.
- Run source openrc.sh and then openstack ec2 credentials create to get the access key and secret.
- In the Cloudron dashboard, choose OVH Object Storage from the drop down.
Scaleway Object Storage
To get started:
- Create a Scaleway Object Storage bucket.
- Create an access key and secret key from the credentials section.
- In the Cloudron dashboard, choose Scaleway Object Storage from the drop down.
Storage Class
The Storage Class must be set to STANDARD. Setting it to GLACIER will result in an error because the server side copy operation is not supported in that mode.
SSHFS
To get started:
- Setup an external server and make sure SFTP is enabled in the sshd configuration of the server.
- In the Cloudron dashboard, choose SSHFS mount from the drop down.
Hetzner Storage Box
When using Hetzner Storage Box, the Remote Directory is /home for the main account. We have found sub accounts to be unreliable with SSHFS. We recommend using CIFS instead if you want to use sub accounts.
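For step 1, SFTP is usually enabled via a Subsystem line in the server's sshd configuration. A quick way to check on a typical Ubuntu server:
# SFTP must be enabled for SSHFS to work; expect a line like
# 'Subsystem sftp /usr/lib/openssh/sftp-server'
grep -i '^subsystem.*sftp' /etc/ssh/sshd_config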
UpCloud Object Storage
To get started:
- Create an UpCloud Object Storage.
- Create a bucket inside the Object Storage.
- Click the S3 API Access link to get access credentials.
- In the Cloudron dashboard, choose UpCloud Object Storage from the drop down.
Multipart copy limitation
Some regions of UpCloud, like NYC and CHI, do not implement the multipart copy operation. This restriction prevents large files (> 5GB) from being copied. For the tgz format, if the backup is more than 5GB, the backup will fail. For the rsync format, files greater than 5GB will not be backed up properly.
Vultr Object Storage
To get started:
- Create a Vultr Object Storage bucket.
- Make a note of the access key and secret key listed in the bucket management UI.
- In the Cloudron dashboard, choose Vultr Object Storage from the drop down.
Wasabi
To get started:
- Create a Wasabi bucket.
- Create an Access Key and Secret Access Key from the Wasabi dashboard.
- In the Cloudron dashboard, choose Wasabi from the drop down.
XFS
To get started:
- Attach an external XFS hard disk to the server. Depending on where your server is located, this can be DigitalOcean Block Storage, AWS Elastic Block Store or Linode Block Storage.
- If required, format it using mkfs.xfs /dev/<device>. Then, run blkid or lsblk to get the UUID of the disk.
- In the Cloudron dashboard, choose XFS Mount from the drop down.
Do not add /etc/fstab entry
When choosing this storage provider, do not add an /etc/fstab entry for the mount point. Cloudron will add and manage a systemd mount point.
No-op
This storage backend disables backups. When backups are disabled, updates to apps cannot be rolled back, and a failed update can result in data loss. This backend exists only for testing purposes.
Backup formats
Cloudron supports two backup formats - tgz (default) and rsync. The tgz format stores all the backup information in a single tarball, whereas the rsync format stores all backup information as files inside a directory.
Both formats have the same content
The contents of the tgz file, when extracted to disk, are exactly the same as the contents of the rsync directory. Both formats are complete and portable.
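Because both formats are portable, an unencrypted tgz backup can be inspected with standard tools. For example (the file name is a placeholder):
# list the contents of an app backup without extracting it
tar tzf app_backup.tar.gz | head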
tgz format
The tgz format uploads an app's backup as a gzipped tarball. This format is very efficient when backing up a large number of small files.
This format has the following caveats:
- Most cloud storage APIs require the content length to be known in advance before uploading data. For this reason, Cloudron uploads big backups in chunks. However, chunked (multi-part) uploads cannot be parallelized and also take up as much RAM as the chunk size.
- tgz backup uploads are not incremental. This means that if an app generated 10GB of data, Cloudron has to upload 10GB every time it makes a new backup.
rsync format
The rsync format uploads individual files to the backup storage. It keeps track of what was copied the last time around, detects what changed locally and uploads only the changed files on every backup. Note that despite rsync uploading 'incrementally', the tgz format can be significantly faster when uploading a large number of small files (like source code repositories), because the rsync format has to make one or more HTTP requests per file.
This format has the following caveats:
- By tracking the files that were uploaded the last time around, Cloudron minimizes uploads when using the rsync format. To make sure that each backup directory is "self contained" (i.e. it can simply be copied without additional tools), Cloudron issues a 'remote copy' request for each unchanged file.
- File uploads and remote copies are parallelized.
- When using backends like Filesystem, CIFS, EXT4, NFS & SSHFS, the rsync format can hardlink 'same' files across backups to conserve space. Note that while the protocols themselves support hardlinks, hardlink support ultimately depends on the remote file system.
- When encryption is enabled, file names are optionally encrypted.
Encryption
Backups can optionally be encrypted (AES-256-CBC) with a secret key. When encryption is enabled, Cloudron will encrypt both the filename and its contents.
There are some limitations on the lengths of filenames when encryption is enabled:
- File names can be at most 156 bytes. See this comment for an explanation. If backups are failing because of KeyTooLong errors, you can run the following command in the Web Terminal to detect the offending file and rename it to something shorter:
cd /app/data
find . -type f -printf "%f\n" | awk '{ print length(), $0 | "sort -rn" }' | less
- Backup backends like S3 have a max object path length of 1024. There is an overhead of around 20 bytes per file name in a path. So, if you have a directory which is 10 levels deep, there is a 200 byte overhead. Filename encryption can optionally be turned off.
Keep password safe
Cloudron does not save a copy of the password in the database. If you lose the password, there is no way to decrypt the backups.
Filenames
When using encryption with the rsync format, file names can be optionally encrypted.
Maximum encrypted filename length
The Linux file system has a maximum path size of 4096 bytes. However, most storage backends have a maximum file name size that is far less. For example, the max size of file names in S3 is 1024. If you have long file names (full paths), you can turn off encryption of file names.
File format
The Cloudron CLI tool has subcommands like backup encrypt, backup decrypt, backup encrypt-filename and backup decrypt-filename that can help inspect encrypted files. See Decrypt backups for more information.
Four 32 byte keys are derived from the password via scrypt with a hardcoded salt:
- Key for encrypting files
- Key for the HMAC digest of encrypted files
- Key for encrypting file names
- Key for the HMAC digest of the file name, used for deriving its IV (see below)
Each encrypted file has:
- A 4 byte magic CBV2 (Cloudron Backup v2)
- A 16 byte IV. This IV is completely random per file.
- The file contents encrypted using AES-256-CBC
- A 32 byte HMAC of the IV and the encrypted blocks
Each encrypted filename has:
- A 16 byte IV. This IV is derived from an HMAC of the filename. It is done this way because the sync algorithm requires the encryption to be deterministic in order to locate the file upstream.
- The filename encrypted using AES-256-CBC
Schedule
The backup schedule & retention policy can be set in the Backups view.
The Backup Interval determines how often backups are created. If a backup fails (because, say, the external service is down or there is some network error), Cloudron will retry sooner than the backup interval. This way, Cloudron tries to ensure that a backup is created for every interval duration.
- The backup process runs with a nice value of 15. This makes sure that it gets low priority if the Cloudron is doing other things.
- The backup task runs with a configurable memory limit. This memory limit is configured in Backups -> Configure -> Advanced.
- There is currently a timeout of 12 hours for the backup to complete.
Retention Policy
The Retention Policy determines how backups are retained. For example, a retention policy of 1 week means that all backups older than a week are deleted. The policy 7 daily means to keep a single backup for each day for the last 7 days. So, if 5 backups were created today, Cloudron will remove 4 of them. It does not mean to keep 7 backups a day. Similarly, the term 4 weekly means to keep a single backup for each week for the last 4 weeks.
The following are some of the important rules used to determine if a backup should be retained:
- For installed apps and box backups, the latest backup is always retained regardless of the policy. This ensures that even if all the backups are outside of the retention policy, there is still at least one backup preserved. This also ensures that the latest backup of stopped apps is preserved when not referenced by any box backup.
- An app backup that was created right before an app update is also marked as special and persisted for 3 weeks. The rationale is that sometimes, while the app itself is working fine, some errors/bugs only get noticed after a couple of weeks.
- For uninstalled apps, the latest backup is removed as per the policy.
- If the latest backup is already part of the policy, it is not counted twice.
- Errored and partial backups are removed immediately.
Cleanup Backups
The Backup Cleaner runs every night and removes backups based on the Retention Policy.
Cloudron also keeps track of the backups in its database. The Backup Cleaner checks if entries in the database exist in the storage backend and removes stale entries from the database automatically.
You can trigger the Backup Cleaner using the Cleanup Backups button:
If you click on the Logs button after triggering Cleanup Backups, you will see the exact reason why each individual backup is retained. In the logs, a box_ prefix indicates that it is a full Cloudron backup, whereas an app_ prefix indicates that it is an app backup.
- keepWithinSecs means the backup is kept because of the retention policy.
- reference means that this backup is being referenced by another backup. When you make a full Cloudron backup, it takes a backup of each app as well. In this case, each app backup is "referenced" by the parent "box" backup.
- preserveSecs means the backup is kept because it is the backup of a previous version of the app before an app update. We keep these backups for 3 weeks in case an update broke something and it took you some time to figure out that something broke.
Preserve specific backups
See the backup labels section for how to preserve specific backups regardless of the retention policy.
Old Local Backups
By default, Cloudron stores backups in the filesystem at /var/backups. If you move backups to an external location, previous backups have to be deleted manually by SSHing into the server.
- SSH into the server.
- Run cd /var/backups to change directory.
- There may be several timestamped directories. You can delete them using rm -rf /var/backups/<timestamped-directory>.
- The snapshot subdirectory can be removed using rm -rf /var/backups/snapshot.
Backup Labels
App backups can be tagged with a label for readability. Use the Edit button to change a backup's label.
In addition, specific backups can be preserved for posterity using the preserve checkbox:
Snapshot App
To take a backup of a single app, click the Create Backup button in the Backups section of the app's configure UI.
Concurrency Settings
When using one of the cloud storage providers (S3, GCS), the upload, download and copy concurrency can be configured to speed up backup and restore operations.
-
Upload concurrency - the number of file uploads to be done in parallel.
-
Download concurrency - the number of file downloads to be done in parallel.
-
Copy concurrency - the number of remote file copies to be done in parallel. Cloudron conserves bandwidth by not re-uploading unchanged files and instead issues a remote file copy request.
There are some caveats that you should be aware of when tuning these values.
- Concurrency values are highly dependent on the storage service. These values change from time to time and as such it's not possible to give a standard recommendation. In general, it's best to be conservative, since backup is just a background task. Some services like Digital Ocean Spaces can only handle 20 copies in parallel before you hit rate limits. Other providers, like AWS S3, can comfortably handle 500 copies in parallel.
- Higher upload concurrency necessarily means you have to increase the memory limit for the backup.
Snapshot Cloudron
To take a backup of Cloudron and all the apps, click the Backup now button in the Settings page:
Warning
When using the no-op backend, no backups are taken. If backups are stored on the same server, be sure to download them before making changes to the server.
Disable automatic backups
An app can be excluded from automatic backups under the 'Advanced settings' in the Configure UI:
Note that Cloudron will still create backups before an app or Cloudron update. This is required so that it can be reverted to a sane state should the update fail.
Warning
Disabling automatic backups for an app puts the onus on the Cloudron administrator to back up the app's files regularly. This can be done using the Cloudron CLI tool's cloudron backup create command.
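For example, an administrator could schedule the backup externally. A sketch, assuming the Cloudron CLI is installed and logged in, and blog.example.com is a placeholder for the app's domain:
# create a backup of a single app from the command line
cloudron backup create --app blog.example.com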
Clone app
To clone an app, i.e. create an exact replica of the app on another domain, first create an app backup and then click the clone button of the corresponding backup:
This will bring up a dialog in which you can enter the location of the new cloned app:
Restore app
Apps can be restored to a previous backup by clicking on the Restore button.
Both data and code are reverted
Restoring will also revert the code to the version that was running when the backup was created. This is because the current version of the app may not be able to handle old data.
Import App Backup
Migrating apps or moving apps from one Cloudron to another works by first creating a backup of the app on the old Cloudron, optionally copying the backup to the new Cloudron's server and importing the backup on the new Cloudron. You can also use this approach to resurrect an uninstalled app from its backup.
Use the following steps to migrate an app:
- First, create a backup of the app on the old Cloudron.
- If the old Cloudron is backing up to the filesystem, copy the backup of this app to the new server. You can determine the backup id using the Copy to Clipboard action in the Backups view. You can use a variety of tools like scp, rclone or rsync to copy over the backup, depending on your backup configuration.
- If the old Cloudron is not backing up to the filesystem, download the backup configuration of this backup. This is simply a file that helps copy/paste the backup configuration settings to the new server.
- Install a new app on the new Cloudron. When doing so, make sure that the version of the app on the old Cloudron and the new Cloudron is the same.
- Go to the Backups view and click on Import.
- You can now upload the backup configuration which you downloaded in the previous step to auto-fill the import dialog.
- Alternately, enter the credentials to access the backup.
Backup Path
The backup path is the relative path to the backup. It is usually of the form path/to/<timestamp>/app_xx.
Restore Email
There is currently no built-in way to restore specific emails or email folders, but this can be done manually using the process below. When viewed uncompressed (the tgz format requires extracting the tarball first; the rsync format is always uncompressed), mail is backed up in this directory: <backupMount>/snapshot/box/mail/vmail/<mailbox>/mail/*
The example scenario: a user deleted a folder of emails in their mailbox and needs the folder (and its emails) restored; the folder is called "CriticalEmails".
- SCP to the backup disk or service.
- Locate the mail folder that the user deleted. In the example above, this missing folder is located at <backupMount>/snapshot/box/mail/vmail/<mailbox>/mail/.CriticalEmails.
- Copy that folder (replacing <mailbox> with the actual email address) to this location on your Cloudron disk: /home/yellowtent/boxdata/mail/vmail/<mailbox>/mail/.CriticalEmails and ensure the permissions match those of the other folders (should be drwxr--r-- ubuntu:ubuntu).
- Restart the mail service in Cloudron.
The user should now be able to see the mail folder named "CriticalEmails" (in this example) and all the emails associated with that folder.
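Put together, the copy and ownership steps might look like the sketch below on the Cloudron server, assuming the backup disk is mounted at /mnt/backup and the mailbox is user@example.com (both placeholders):
# copy the deleted folder out of the backup snapshot
cp -r "/mnt/backup/snapshot/box/mail/vmail/user@example.com/mail/.CriticalEmails" \
      "/home/yellowtent/boxdata/mail/vmail/user@example.com/mail/"

# match the ownership of the surrounding folders, then restart the
# mail service from the Cloudron dashboard
chown -R ubuntu:ubuntu "/home/yellowtent/boxdata/mail/vmail/user@example.com/mail/.CriticalEmails"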
Restore Cloudron
To restore from a backup:
- If you have the old Cloudron around, go to Backups and download the latest backup configuration. If you don't have access to the old Cloudron, then you have to first determine the Backup ID by following this guide.
- Install Cloudron on a new server with Ubuntu LTS (20.04/22.04/24.04):
wget https://cloudron.io/cloudron-setup
chmod +x cloudron-setup
./cloudron-setup --version x.y.z # version must match your backup version
Backup & Cloudron version
The Cloudron version and backup version must match for the restore to work. If you installed the wrong version by mistake, it's easiest to just start over with a fresh Ubuntu installation and re-install Cloudron.
- If your domains use the Wildcard, Manual or No-op DNS provider, you should manually switch the DNS of the domains to the new server's IP. At the minimum, you have to change the IP of the dashboard domain (i.e. my.domain.com). Note that if you do not switch the IP for the app domains, the restore of those apps will fail and you will have to trigger the restore of each app from the Backups section when you are ready to switch the IP.
- Navigate to http://<ip> and click on Looking to restore located at the bottom:
- Provide the backup information to restore from. If you downloaded the backup configuration from your previous installation, you can upload it here to fill in the fields. Alternately, you can just fill in the form by hand:
Warning
When using the filesystem provider, ensure the backups are owned by the yellowtent user. Also, ensure that the backups are in the same file system location as on the old Cloudron.
- Cloudron will download the backup and start restoring:
The new Cloudron server is an exact clone of the old one - all your users, groups, email, apps, DNS settings and certificates will be exactly as they were before. The main exception is the backup settings, which are not restored and will be set to the configuration provided in the restore UI. For this reason, be sure to re-verify your backup configuration after the restore.
Graphs and performance data are also not persisted across a migration, because the new server's characteristics are most likely totally different from the old server's.
Dry Run
When you restore Cloudron, Cloudron will automatically update the DNS to point to the new server. Using the Dry run feature, you can skip the DNS setup. This allows you to test your backups or get a feel for how your apps might perform if you switch servers, without affecting your current installation.
To do a dry run of Cloudron restore:
- Create an entry in your /etc/hosts file. /etc/hosts overrides DNS, and these entries will direct your machine to the new server instead of your existing server when you visit the dashboard domain. Note that this entry has to be made on your PC/Mac and not on the Cloudron server. In addition, these entries only affect your PC/Mac and not any other device. Assuming 1.2.3.4 is your new server IP, add entries like this:
# this makes the dashboard accessible
1.2.3.4 my.cloudrondomain.com
# add this for every app you have/want to test.
1.2.3.4 app1.cloudrondomain.com
1.2.3.4 app2.cloudrondomain.com
- Follow the steps in Restore Cloudron. Check the Dry run checkbox:
- Once restored, the browser will automatically navigate to https://my.cloudrondomain.com. It is important to check that this is indeed the new server and not your old server! The easiest way to verify this is to go to the Network view and check the IP address. If the IP is still that of the old server, this is most likely a browser DNS caching issue. Generally, restarting the browser or accessing the site in private/incognito mode resolves the issue.
- If you want to make the "switch" to the new server, go to the Domains view and click on Sync DNS.
- You can now remove the entries in /etc/hosts and shut down your old server.
Move Cloudron to another server
If the existing server's CPU, disk or memory can be resized (as is the case with most server providers), then simply resize it and reboot the server. Cloudron will automatically adapt to the available resources after a server resize.
To migrate to a different server or move Cloudron to a different server provider:
- Take a complete backup of the existing Cloudron. Click the Backup now button in the Settings page.
Backup location
We recommend backing up to an external service like S3 or Digital Ocean Spaces. This is because the backups become immediately available for the new server to restore from. If you use the filesystem for backups, you have to copy the backup files manually to the new server using rsync or scp.
- Download the backup configuration of the newly made backup. This is simply a file that helps copy/paste the backup configuration settings to the new server.
- Follow the steps to restore Cloudron on the new server. It is recommended not to delete the old server until the migration to the new server is complete and you have verified that all data is intact (instead, just power it off).
Backup & Cloudron version
The Cloudron version and backup version must match for the restore to work. To install a specific version of Cloudron, pass the --version option to cloudron-setup. For example:
cloudron-setup --version 3.3.1