You can use HYCU Backup and Recovery for Nutanix to protect your Kubernetes application.
A common use case is when the database of a WordPress application deployed to a Nutanix-hosted Kubernetes cluster becomes corrupted.
The causes of database corruption range from a faulty custom plugin to administrative tasks such as data import or export. The effect is almost always the same: your website is not operational until the database and its data are restored.
WordPress uses a relational database to store blog articles, their related objects and metadata, and the local file system to store assets, such as the pictures in a blog post. Because a container’s file system on Kubernetes is ephemeral and cannot be used for durable storage, WordPress requires Persistent Volumes (PVs) to store its data.
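For illustration, a minimal PersistentVolumeClaim such as the database pod might use could look like the sketch below. The claim name matches the example deployment in this article; the storage size is a hypothetical value, and on Karbon the claim is ultimately backed by a Nutanix Volume Group:

```yaml
# Sketch only: in this example the actual claims are created
# automatically by the WordPress deployment, not applied by hand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-wordpress-1582210716-mariadb-0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi   # hypothetical size for illustration
```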
By default, WordPress uses two pods: one for the application server and one for the MariaDB database server. The following image shows a sample WordPress deployment.
The Persistent Volume can easily be identified by the application name "wordpress-1582210716" in the CLAIM column.
Nutanix Karbon shows the same Persistent Volume "data-wordpress-1582210716-mariadb-0".
The Workers are displayed in the following figure.
In HYCU for Nutanix, Workers are visible under the "Virtual Machines" panel.
Preparing Kubernetes Worker VMs and Configuring Credentials
PVs on Nutanix are essentially Volume Groups attached to the worker VMs at the operating system (OS) level. To discover them in HYCU, you need to create a “Credential Group” (SSH credentials for the worker nodes) and assign it to all workers.
By default, SSH access to a worker VM is allowed only with certificates. HYCU for Nutanix will support certificate-based authentication in an upcoming release. For now, you need to create a dedicated account on the worker and allow SSH connections with user/password authentication.
Follow these steps to sign into the worker VM via SSH and update its configuration:
- Open Karbon and select your cluster.
- From the Actions menu, select SSH Access and download the .sh file.
- On the cluster page, go to Nodes / Worker and make a note of the IP address.
- From a terminal, launch the downloaded shell script (.sh) and provide the worker IP address.
The script will establish an SSH connection to the worker VM.
- Create a new account named "hycu", set a password, and add the account to the root group:
sudo useradd hycu
sudo passwd hycu
sudo usermod -a -G root hycu
- In the /etc/ssh/sshd_config file, set PasswordAuthentication to yes:
sudo nano /etc/ssh/sshd_config
- Save the file, exit the editor, and restart the ssh daemon.
sudo service sshd restart
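As a sketch, the same sshd_config change can be made non-interactively with sed instead of editing the file in nano. The commands below run against a scratch copy so they are safe to try anywhere; on the worker you would target /etc/ssh/sshd_config with sudo and then restart sshd as shown above:

```shell
# Demonstrate the substitution on a scratch copy rather than the live file.
printf 'PasswordAuthentication no\n' > /tmp/sshd_config.demo

# Flip the setting; the \? also matches a commented-out
# "#PasswordAuthentication" line, as shipped in many default configs.
sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' /tmp/sshd_config.demo

# Verify the result:
grep '^PasswordAuthentication' /tmp/sshd_config.demo
```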
- Sign into HYCU for Nutanix and go to the Virtual Machines panel.
Select the worker VM for which you configured PasswordAuthentication and click Credentials.
- In the Credentials dialog, click New to add a new credential group. Insert credentials from the previously created "hycu" account, and then click Save.
- With the worker VM still selected, click Assign to allow HYCU to use the credential group for Kubernetes PV discovery.
- Repeat this procedure for all worker VMs of your Kubernetes cluster.
Protecting the Kubernetes persistent volumes
Applications hosted on Kubernetes have many options for storing their data. For persistent volumes that reside on worker nodes, you can enable data protection by backing up virtual machines on which these applications are running.
In HYCU for Nutanix, all your Kubernetes cluster workers are listed in the Virtual Machines panel.
- Select the worker node virtual machines that you want to back up.
- Click Policies. The Policies dialog box appears.
- From the list of policies, select the desired backup policy.
- Click Assign to assign the backup policy to the selected virtual machines.
For more information on choosing the right backup policy and details on backing up virtual machines, see "Chapter 4 - Protecting data" in the HYCU Data Protection for Nutanix User Guide. (https://www.hycu.com/wp-content/uploads/2017/03/HYCU-Data-Protection-for-Nutanix_UserGuide.pdf)
Restoring an application
NOTE Depending on what was corrupted and your Kubernetes deployment, you may need to restore multiple PVs.
To see which node the PV is attached to, use the kubectl describe pod [name] command:
MariaDB holds its data on worker-0:
To see which Persistent Volume needs to be restored, use the kubectl get pvc command:
In the above example, the PV for the mariadb instance is "pvc-7bd663f4-53f1-11ea-aad8-506b8d37f3d8".
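The two lookups can be sketched together as follows. The sample output is reproduced from this article's example deployment; against a live cluster you would pipe the real kubectl output instead:

```shell
# Step 1 (on a live cluster): which node runs the database pod?
#   kubectl describe pod wordpress-1582210716-mariadb-0 | grep 'Node:'
#
# Step 2: which PV backs its claim? Parse the VOLUME column of
# `kubectl get pvc`. Sample output from this article's deployment:
pvc_output='NAME                                  STATUS   VOLUME
data-wordpress-1582210716-mariadb-0   Bound    pvc-7bd663f4-53f1-11ea-aad8-506b8d37f3d8'

# Extract the PV name for the MariaDB claim:
echo "$pvc_output" | awk '/mariadb/ {print $3}'
```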
To restore your worker node VM from a backup snapshot, do the following:
- Navigate to the Virtual Machines panel.
- In the Virtual Machines panel, click the worker node VM that you want to restore to open the Details section.
For example, see the "karbon-wordpress-91ec75-k8s-worker-0" VM.
- In the Details section that appears at the bottom of the screen, select the desired restore point, and then click Restore VM. The VM Restore Options dialog box opens.
- Select Restore vDisks, and then click Next.
- In the vDisks list that appears, select the disks that you want to restore, and then click Next.
- Choose Original location to overwrite the corrupted PV.
- Click Restore.
If you restored the disk snapshots to the original location, the Kubernetes cluster picks up the restored virtual disks immediately because the Kubernetes configuration remained unchanged, and your application should be operational again.