The details of restoring your cluster from backup are different depending on your version of RKE.
If there is a disaster with your Kubernetes cluster, you can use rke etcd snapshot-restore to recover your etcd. This command reverts etcd to a specific snapshot and should be run on an etcd node of the specific cluster that has suffered the disaster.
The following actions will be performed when you run the command:
- Your current cluster is removed with rke remove.
- etcd is restored from the snapshot.
- A new cluster is built with rke up.
Warning: You should back up any important data in your cluster before running rke etcd snapshot-restore because the command deletes your current Kubernetes cluster and replaces it with a new one.
The snapshot used to restore your etcd cluster can either be stored locally in /opt/rke/etcd-snapshots or retrieved from an S3 compatible backend.
Available as of v1.1.4
If the snapshot contains the cluster state file, it will automatically be extracted and used for the restore. If you want to force the use of the local state file, you can add --use-local-state to the command. If the snapshot was created using an RKE version before v1.1.4, or if the snapshot does not contain a state file, make sure the cluster state file (by default available as cluster.rkestate) is present before executing the command.
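For example, to force the restore to use the local cluster.rkestate file instead of the state file embedded in the snapshot, append the flag to the restore command (mysnapshot is a placeholder snapshot name):

$ rke etcd snapshot-restore --config cluster.yml --name mysnapshot --use-local-state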
To restore etcd from a local snapshot, run:
$ rke etcd snapshot-restore --config cluster.yml --name mysnapshot
The snapshot is assumed to be located in /opt/rke/etcd-snapshots.
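As a quick sanity check before restoring, you can list the contents of that directory on the node to confirm the snapshot is present:

$ ls /opt/rke/etcd-snapshots/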
Note: The pki.bundle.tar.gz file is not needed because RKE v0.2.0 changed how the Kubernetes cluster state is stored.
When restoring etcd from a snapshot located in S3, the command needs the S3 information in order to connect to the S3 backend and retrieve the snapshot.
$ rke etcd snapshot-restore \
  --config cluster.yml \
  --name snapshot-name \
  --s3 \
  --access-key S3_ACCESS_KEY \
  --secret-key S3_SECRET_KEY \
  --bucket-name s3-bucket-name \
  --folder s3-folder-name \
  --s3-endpoint s3.amazonaws.com

The --folder option is optional and is available as of v0.3.0.
Note: If the cluster you restored had Rancher installed, the Rancher UI should start up after a few minutes; you don’t need to re-run Helm.
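Once the restore completes, one way to confirm the new cluster is healthy is to query it with kubectl, assuming the default kubeconfig file that RKE writes alongside cluster.yml:

$ kubectl --kubeconfig kube_config_cluster.yml get nodes
$ kubectl --kubeconfig kube_config_cluster.yml get pods --all-namespaces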
The following options are available for rke etcd snapshot-restore:

--name - Specify the snapshot name
--config - Specify an alternate cluster YAML file (default: cluster.yml)
--s3 - Restore the snapshot from S3
--s3-endpoint - Specify the S3 endpoint URL (default: s3.amazonaws.com)
--access-key - Specify the S3 access key
--secret-key - Specify the S3 secret key
--bucket-name - Specify the S3 bucket name
--folder - Specify the folder inside the bucket where the snapshot is stored (optional, available as of v0.3.0)
--region - Specify the region of the S3 bucket (optional)
--ssh-agent-auth - Use SSH agent authentication, as defined by SSH_AUTH_SOCK
--ignore-docker-version - Disable the Docker version check
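For example, a restore from a bucket in a specific region, authenticating to the nodes through the local SSH agent, could look like this (the region and credential values are placeholders):

$ rke etcd snapshot-restore \
  --config cluster.yml \
  --name snapshot-name \
  --s3 \
  --access-key S3_ACCESS_KEY \
  --secret-key S3_SECRET_KEY \
  --bucket-name s3-bucket-name \
  --region us-west-2 \
  --ssh-agent-auth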
For RKE versions prior to v0.2.0, there are additional requirements. Before you run this command, you must:
- Manually sync the snapshot across all etcd nodes.
- Place the pki.bundle.tar.gz file in the same location as the snapshot.

Warning: You should back up any important data in your cluster before running rke etcd snapshot-restore because the command deletes your current etcd cluster and replaces it with a new one.

After the restore, you must rebuild your Kubernetes cluster with rke up.
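As a sketch of that older workflow end to end, with etcd2.example.com standing in for each additional etcd node and mysnapshot as a placeholder snapshot name:

$ scp /opt/rke/etcd-snapshots/mysnapshot /opt/rke/etcd-snapshots/pki.bundle.tar.gz etcd2.example.com:/opt/rke/etcd-snapshots/
$ rke etcd snapshot-restore --config cluster.yml --name mysnapshot
$ rke up --config cluster.yml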