
Upgrading Splunk AppDynamics On-Premises Virtual Appliance (Part 1)


Introduction

I published articles last year titled "Installing Splunk AppDynamics On-Premises Virtual Appliance" (Part 1, Part 2; hereafter referred to as the installation articles), in which I performed a fresh installation of AppD VA version 25.7.0. Since then, AppD VA version 25.10.0 was released in December 2025 [1].

In this article, I walk through upgrading AppD VA from version 25.7.0 to 25.10.0 in my environment, following the procedure in the product manual.

This article, as Part 1, covers the upgrade prerequisites, the backup, environment preparation, and execution of the upgrade scripts. Part 2 will cover verifying the deployment status, recreating the cluster, restoring, and a basic operation check.

Upgrade Prerequisites

This article assumes the following:

  • Target audience
    • Has knowledge of what AppD VA is and how to install it.
    • Infrastructure engineers, SREs, or application developers considering an AppD VA upgrade.
  • AppD VA versions: Using 25.7.0 before the upgrade and 25.10.0 after the upgrade (major version upgrade).
  • Environment: Reusing the AWS environment used in the installation articles for running AppD VA version 25.7.0.
    • The preparation VM used in the same articles will also be reused.
    • The Go version of the command-line tool yq must be additionally installed on the preparation VM in advance.
  • For security or to reduce the amount of text, some information in configuration files and command input/output will be replaced with "■" or "(Omitted)".
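
Since the installation articles did not cover it, here is a minimal sketch of installing the Go implementation of yq on the preparation VM. The pinned version (v4.44.3) and the install path are assumptions, so adjust them for your environment.

```shell
# Download the single-binary Go implementation of yq (mikefarah/yq) and
# install it into ~/.local/bin (assumed to be on PATH).
YQ_VERSION=v4.44.3
YQ_URL="https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64"
mkdir -p "$HOME/.local/bin"
curl -sL "$YQ_URL" -o "$HOME/.local/bin/yq"
chmod 0755 "$HOME/.local/bin/yq"
"$HOME/.local/bin/yq" --version
```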

Backup

First, in preparation for the pre-upgrade backup, stop the services that are currently running. In the installation articles, only the AppDynamics basic services were started, so only those are targeted for stopping here.

$ appdcli stop appd
Decrypting secret /var/appd/config/secrets.yaml.encrypted

hook[prepare] logs | ************ Performing prepare steps **********
hook[prepare] logs | Writing run specs
hook[prepare] logs | Updating node ip address
hook[prepare] logs | Updating domain name
hook[prepare] logs | ************ prepare steps completed  **********
hook[prepare] logs |

hook[prepare] logs |

hook[prepare] logs |
Building dependency release=cluster-agent, chart=charts/cluster-agent
(Omitted)
DELETED RELEASES:
NAME                   NAMESPACE             DURATION
bootstrap-authn        authn                       0s
auth-service           authn                       0s
auth-service-ingress   authn                    1m19s
bootstrap              cisco-controller            0s
mysql                  mysql                       0s
schema-registry        schema-registry             1s
cert-manager-ext       cert-manager                1s
mysql-certs            mysql                       1s
cluster                                            2s
ingress                ingress-master              2s
replicator             replicator                  2s
postgres               postgres                    2s
redis-ext              redis                       3s
elasticsearch          es                          3s
redis                  redis                       5s
fluent-bit             fluent                      5s
kafka                  kafka                       6s
eum                    cisco-eum                   6s
events                 cisco-events                6s
synthetic              cisco-synthetic             7s
controller             cisco-controller            7s
authz-service          authz                       7s
cluster-agent          cisco-cluster-agent         7s
ingress-nginx          ingress                     7s

Delete successful

If Delete successful is output, things are generally OK at this point. However, AppD VA is still shutting down internally, and it may take about 2 to 3 minutes for the shutdown to complete. To watch for completion, execute the following command.

$ watch "kubectl get pods -A -o jsonpath='{range .items[?(@.metadata.deletionTimestamp)]}{.metadata.namespace}{\"\t\"}{.metadata.name}{\"\n\"}{end}'"

At first, the output of this command lists several Kubernetes namespace and pod names belonging to the AppD VA application. Wait a while (roughly 2 to 3 minutes); once nothing is output anymore, all pods have finished terminating.

Next, stop the Kubernetes operator in the AppD VA application.

$ appdcli stop operators

hook[prepare] logs |

hook[prepare] logs | ************ Performing prepare steps **********
hook[prepare] logs | Writing run specs
hook[prepare] logs | Updating node ip address
hook[prepare] logs | Updating domain name
hook[prepare] logs | ************ prepare steps completed  **********
hook[prepare] logs |
Building dependency release=pg-operator, chart=charts/pg-operator
(Omitted)
DELETED RELEASES:
NAME                     NAMESPACE        DURATION
mysql-operator           mysql-operator         1s
cert-manager-crds        cert-manager           1s
elastic-operator         elastic-system         2s
strimzi-kafka-operator   kafka-operator         2s
cert-manager             cert-manager           3s
pg-operator              pg-operator            3s

Delete operators successful

If Delete operators successful is output, it's OK.

Then, actually perform the backup using the appdcli run backup command. The backup here covers configuration information (settings, definitions, certificates, and so on), not the actual data on the AppD VA data disks (such as DB contents); in other words, it is a kind of metadata backup. Because the data disks are retained as is, there is no particular need to back them up here.

$ appdcli run backup
All pods are not in Running or Terminating state.
=============================================================================
Backup of PV, PVCs, namespaces and config files
=============================================================================
PV backup executed successfully to /home/appduser/backup/0599/pv-list.yaml
PVC backup executed successfully to /home/appduser/backup/0599/pvc-list.yaml
Namespace backup executed successfully to /home/appduser/backup/0599/namespace-list.yaml
Exporting ingress certs
Exporting secret: onprem-feed-sys
=============================================================================
Backup archive /home/appduser/backup/backup-0599.tar created
Copy this file to a location outside the current VMs for use during upgrade
=============================================================================
Backup completed.

If Backup completed. is output, it's OK.

Finally, copy the output backup file (in the example above, /home/appduser/backup/backup-0599.tar) to a VM other than the AppD VA VMs. The exact method depends on your environment.
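
As one hypothetical example, if another VM is reachable over SSH, the copy could be done with scp; backup-host and the destination path below are placeholders for your own environment.

```shell
# Copy the backup archive off the AppD VA VMs to a separate host
# (host name and destination directory are placeholders).
scp /home/appduser/backup/backup-0599.tar appduser@backup-host:/var/backups/appdva/
```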

Environment Preparation

Changing Settings for the Deployment Script

To upgrade AppD VA in an AWS environment, we reuse the deployment scripts and AWS resources (profile, VPC, S3, IAM role) used during the installation.

First, log in to the preparation VM (all subsequent shell operations are executed on the preparation VM). Then, move to the directory containing the set of AWS scripts and check the file list.

$ cd work/appd-virtual-appliance/deploy/aws/
$ ls -l
total 22020168
-rwxr-xr-x. 1 ec2-user ec2-user         290 Oct 17 13:36 01-aws-create-profile.sh
-rwxr-xr-x. 1 ec2-user ec2-user        9413 Oct 17 13:36 02-aws-add-vpc.sh
-rwxr-xr-x. 1 ec2-user ec2-user         558 Oct 17 13:36 03-aws-create-image-bucket.sh
-rwxr-xr-x. 1 ec2-user ec2-user         853 Oct 17 13:36 04-aws-import-iam-role.sh
-rwxr-xr-x. 1 ec2-user ec2-user         251 Oct 17 13:36 05-aws-upload-image.sh
-rwxr-xr-x. 1 ec2-user ec2-user        1309 Oct 17 13:36 06-aws-import-snapshot.sh
-rwxr-xr-x. 1 ec2-user ec2-user         918 Oct 17 13:36 07-aws-register-snapshot.sh
-rwxr-xr-x. 1 ec2-user ec2-user        2519 Oct 17 13:36 08-aws-create-vms.sh
-rw-r--r--. 1 ec2-user ec2-user        1775 Oct 19 21:47 08-aws-create-vms.sh.patch
-rw-r--r--. 1 ec2-user ec2-user          30 Oct 19 18:25 ami.id
-rw-r--r--. 1 ec2-user ec2-user 22548578304 Oct 12 08:46 appd_va_25.7.0.2255.ami
-rwxr-xr-x. 1 ec2-user ec2-user         550 Oct 17 13:36 aws-delete-vms.sh
-rw-r--r--. 1 ec2-user ec2-user         788 Oct 19 17:10 config.cfg
-rw-r--r--. 1 ec2-user ec2-user         597 Oct 19 17:49 disk-image-file-role-policy.json
-rw-r--r--. 1 ec2-user ec2-user          36 Oct 19 18:21 snapshot.id
drwxr-xr-x. 2 ec2-user ec2-user         128 Oct 17 13:36 upgrade
-rw-r--r--. 1 ec2-user ec2-user         102 Oct 19 22:05 user-data.ec2
-rw-r--r--. 1 ec2-user ec2-user         233 Oct 19 17:45 vmimport-role-trust-policy.json

Next, to check the current settings, look at the contents of the config.cfg file.

config.cfg
# Resource Tags
TAGS="{Key=■■■■■■■■_■■■■_■■■■■■■■■■■■■■,Value=private},{Key=■■■■■■■■_■■■■■■■■■■■_■■■■,Value=non-prd}"

# Deployment configs
AWS_PROFILE=default
AWS_REGION=ap-northeast-1
VPC_NAME=akihiko-vpc
SUBNET_NAME=akihiko-subnet-private1-ap-northeast-1a
IGW_NAME=akihiko-igw
RT_NAME=akihiko-rtb-private1-ap-northeast-1a
SG_NAME=akihiko-sg-ec2-all-traffic-from-alb-allowed
VPC_CIDR="10.0.0.0/16"
SUBNET_CIDR="10.0.128.0/20"
IMAGE_IMPORT_BUCKET=akihiko-■■■■■■■■■■■■■■
APPD_RAW_IMAGE="appd_va_25.7.0.2255.ami"
APPD_IMAGE_NAME="akihiko-appd-va-25.7.0-ec2-disk1"

# VM configurations
VM_TYPE=m5a.4xlarge
VM_NAME_1=akihiko-vm31-appdva
VM_NAME_2=akihiko-vm32-appdva
VM_NAME_3=akihiko-vm33-appdva
VM_OS_DISK=200
VM_DATA_DISK=500

# IPs to permit 
VPN_IPS=(
   10.0.0.0/20
   10.0.16.0/24
)

In the config.cfg file, the disk image filename is set as the value of the APPD_RAW_IMAGE variable. It is currently appd_va_25.7.0.2255.ami, the image for AppD VA version 25.7.0. Therefore, first download the AMI-format disk image for AppD VA version 25.10.0 (approximately 21 GB) from the Splunk AppDynamics download site, into the same directory as config.cfg.

$ curl -L -O -H "Authorization: Bearer ■■■ (Omitted) ■■■" "https://download.appdynamics.com/download/prox/download-file/appd-va/25.10.0.2749/appd_va_25.10.0.2749.ami"

Then, set the downloaded appd_va_25.10.0.2749.ami as the value of APPD_RAW_IMAGE, and at the same time change the version part of the APPD_IMAGE_NAME value to 25.10.0. The diff for these changes is as follows:

config.cfg
@@ -12,8 +12,8 @@
 VPC_CIDR="10.0.0.0/16"
 SUBNET_CIDR="10.0.128.0/20"
 IMAGE_IMPORT_BUCKET=akihiko-appd-va-bucket
-APPD_RAW_IMAGE="appd_va_25.7.0.2255.ami"
-APPD_IMAGE_NAME="akihiko-appd-va-25.7.0-ec2-disk1"
+APPD_RAW_IMAGE="appd_va_25.10.0.2749.ami"
+APPD_IMAGE_NAME="akihiko-appd-va-25.10.0-ec2-disk1"

 # VM configurations
 VM_TYPE=m5a.4xlarge
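
If you would rather script the edit than make it by hand, the two lines can be bumped with sed. A minimal sketch, run here against a stand-in file (config.sample.cfg) so that a copy-paste cannot clobber a real config.cfg:

```shell
# Create a stand-in file with the two pre-upgrade values, then bump both
# version strings with sed; point sed at the real config.cfg once verified.
cat > config.sample.cfg <<'EOF'
APPD_RAW_IMAGE="appd_va_25.7.0.2255.ami"
APPD_IMAGE_NAME="akihiko-appd-va-25.7.0-ec2-disk1"
EOF
sed -i \
    -e 's/25\.7\.0\.2255/25.10.0.2749/' \
    -e 's/25\.7\.0/25.10.0/' \
    config.sample.cfg
cat config.sample.cfg
```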

Executing the Deployment Script

After changing the contents of the config.cfg file for AppD VA version 25.10.0, the next step is to execute the deployment script. For the upgrade, we will reuse and execute the following scripts from those used during the installation.

  • 05-aws-upload-image.sh
  • 06-aws-import-snapshot.sh
  • 07-aws-register-snapshot.sh
$ ./05-aws-upload-image.sh
Uploading the image ...
upload: ./appd_va_25.10.0.2749.ami to s3://akihiko-■■■■■■■■■■■■■■/appd_va_25.10.0.2749.ami
2026-01-12 10:02:14 22548578304 appd_va_25.10.0.2749.ami

As with the AppD VA installation, this script uploads a fairly large file of about 21 GB to the S3 bucket, so it is important that the preparation VM running it is network-wise close to the AWS environment where AppD VA is installed. It's OK if the uploaded AMI filename is output.

$ ./06-aws-import-snapshot.sh
Import Task ID: import-snap-498c214e863fdfc2t
Waiting for import task to proceed...
{
    "ImportSnapshotTasks": [
        {
            "ImportTaskId": "import-snap-498c214e863fdfc2t",
            "SnapshotTaskDetail": {
                "DiskImageSize": 22548578304.0,
                "Format": "RAW",
                "Progress": "19",
                "SnapshotId": "",
                "Status": "active",
                "StatusMessage": "downloading/converting",
                "Url": "s3://akihiko-■■■■■■■■■■■■■■/appd_va_25.10.0.2749.ami",
                "UserBucket": {}
            },
            "Tags": []
        }
    ]
}
Current Status: active
(Omitted)
Current Status: active
Current Status: completed
Snapshot import completed.
Snapshot ID: snap-0f725ab523efdfb26

If successful, Snapshot import completed. is output, followed by the Snapshot ID.

$ ./07-aws-register-snapshot.sh
Using snapshot ...
{
    "Snapshots": [
        {
            "StorageTier": "standard",
            "TransferType": "standard",
            "CompletionTime": "2026-01-12T01:17:53.338000+00:00",
            "FullSnapshotSizeInBytes": 22548578304,
            "SnapshotId": "snap-0f725ab523efdfb26",
            "VolumeId": "vol-ffffffff",
            "State": "completed",
            "StartTime": "2026-01-12T01:12:50.853000+00:00",
            "Progress": "100%",
            "OwnerId": "677276102422",
            "Description": "Created by AWS-VMImport service for import-snap-498c214e863fdfc2t",
            "VolumeSize": 21,
            "Encrypted": true,
            "KmsKeyId": "arn:aws:kms:ap-northeast-1:■■■■■■■■■■■■:key/■■■■■■■■-■■■■-■■■■-■■■■-■■■■■■■■■■■■"
        }
    ]
}
AMI ID: ami-0fad5ad20c9990a98

It's OK if no errors are output and the metadata of the used snapshot is output in JSON format.

So far, the steps are similar to the AppD VA installation, but from here, we will execute the upgrade script.

Executing the Upgrade Scripts

The scripts for upgrading AppD VA are located in the upgrade subdirectory, one level below the directory containing the scripts executed so far. Move to that directory and check the file list.

$ cd upgrade/
$ ls -l
total 16
-rw-r--r--. 1 ec2-user ec2-user 2794 Oct 17 13:36 01-aws-get-vm-details.sh
-rw-r--r--. 1 ec2-user ec2-user  575 Oct 17 13:36 02-aws-terminate-vms.sh
-rw-r--r--. 1 ec2-user ec2-user  544 Oct 17 13:36 03-aws-get-vm-status.sh
-rwxr-xr-x. 1 ec2-user ec2-user 1819 Oct 17 13:36 04-aws-create-vms.sh

Add execution permissions to the *.sh files.

$ chmod a+x *.sh
$ ls -l
total 16
-rwxr-xr-x. 1 ec2-user ec2-user 2794 Oct 17 13:36 01-aws-get-vm-details.sh
-rwxr-xr-x. 1 ec2-user ec2-user  575 Oct 17 13:36 02-aws-terminate-vms.sh
-rwxr-xr-x. 1 ec2-user ec2-user  544 Oct 17 13:36 03-aws-get-vm-status.sh
-rwxr-xr-x. 1 ec2-user ec2-user 1819 Oct 17 13:36 04-aws-create-vms.sh

For details on what each script does, check the description on the GitHub site for the VA AWS deployment reference scripts, or read the *.sh files directly. We will execute these four scripts in order.

First, run 01-aws-get-vm-details.sh to get detailed information about the existing VMs.

$ ./01-aws-get-vm-details.sh
Instance id for akihiko-vm31-appdva is i-0be03d8706fd0cdfb
Network intf id for akihiko-vm31-appdva is eni-00894ad4fa8c39f6f
Attachment id for akihiko-vm31-appdva is eni-attach-0ac49377b83e11bb3
Data disk for akihiko-vm31-appdva is vol-01a2242c56b588f9a
Created vm_details.yaml with config details
Instance id for akihiko-vm32-appdva is i-08b14f183b3db77ff
Network intf id for akihiko-vm32-appdva is eni-0b4b80d4082576e8d
Attachment id for akihiko-vm32-appdva is eni-attach-0f9bcc7ea3ed8f134
Data disk for akihiko-vm32-appdva is vol-0b61727a12f0a3009
Created vm_details.yaml with config details
Instance id for akihiko-vm33-appdva is i-0d737e7d1fc6156b1
Network intf id for akihiko-vm33-appdva is eni-0165bc62faeef44b0
Attachment id for akihiko-vm33-appdva is eni-attach-0baf100e4ad245d30
Data disk for akihiko-vm33-appdva is vol-0e42a0b5e295d70c7
Created vm_details.yaml with config details

It's OK if the message Created vm_details.yaml with config details is output a total of three times.

Next, execute 02-aws-terminate-vms.sh to terminate the VMs while retaining the network interfaces and data disks of the existing VMs.

$ ./02-aws-terminate-vms.sh
Terminating instance akihiko-vm31-appdva
Network instance eni-00894ad4fa8c39f6f and data disk volume vol-01a2242c56b588f9a will be retained
{
    "TerminatingInstances": [
        {
            "InstanceId": "i-0be03d8706fd0cdfb",
            "CurrentState": {
                "Code": 32,
                "Name": "shutting-down"
            },
            "PreviousState": {
                "Code": 16,
                "Name": "running"
            }
        }
    ]
}
Terminating instance akihiko-vm32-appdva
Network instance eni-0b4b80d4082576e8d and data disk volume vol-0b61727a12f0a3009 will be retained
{
    "TerminatingInstances": [
        {
            "InstanceId": "i-08b14f183b3db77ff",
            "CurrentState": {
                "Code": 32,
                "Name": "shutting-down"
            },
            "PreviousState": {
                "Code": 16,
                "Name": "running"
            }
        }
    ]
}
Terminating instance akihiko-vm33-appdva
Network instance eni-0165bc62faeef44b0 and data disk volume vol-0e42a0b5e295d70c7 will be retained
{
    "TerminatingInstances": [
        {
            "InstanceId": "i-0d737e7d1fc6156b1",
            "CurrentState": {
                "Code": 32,
                "Name": "shutting-down"
            },
            "PreviousState": {
                "Code": 16,
                "Name": "running"
            }
        }
    ]
}

It's OK if no errors are output and the metadata of the terminated VMs is output in JSON format.

However, this script ends without waiting for the termination of the VMs to complete. Therefore, execute 03-aws-get-vm-status.sh to determine if the VM termination is complete.

$ ./03-aws-get-vm-status.sh
Get instance akihiko-vm31-appdva details
---------------------------------------
|          DescribeInstances          |
+----------------------+--------------+
|         Name         |    State     |
+----------------------+--------------+
|  akihiko-vm31-appdva |  terminated  |
+----------------------+--------------+
Get instance akihiko-vm32-appdva details
---------------------------------------
|          DescribeInstances          |
+----------------------+--------------+
|         Name         |    State     |
+----------------------+--------------+
|  akihiko-vm32-appdva |  terminated  |
+----------------------+--------------+
Get instance akihiko-vm33-appdva details
---------------------------------------
|          DescribeInstances          |
+----------------------+--------------+
|         Name         |    State     |
+----------------------+--------------+
|  akihiko-vm33-appdva |  terminated  |
+----------------------+--------------+
Wait for instances to terminate before creating new VMs

It's OK if the State of all three VMs is terminated, as shown above. If not, wait a while and run 03-aws-get-vm-status.sh again, and proceed to the next step only after all three VMs have reached the terminated state. Note that once enough time has passed, terminated instances drop out of the DescribeInstances results and the table may no longer be output; that state is also fine.
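
Instead of re-running the status script by hand, the AWS CLI's built-in waiter can block until termination completes. A sketch, assuming the instance IDs reported earlier by 01-aws-get-vm-details.sh; substitute your own IDs.

```shell
# Returns once all three instances have reached the terminated state.
aws ec2 wait instance-terminated --instance-ids \
    i-0be03d8706fd0cdfb i-08b14f183b3db77ff i-0d737e7d1fc6156b1
```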

Regarding the content of the last script, 04-aws-create-vms.sh, I will make the following changes for my environment.

  • The output of 07-aws-register-snapshot.sh included lines for "Encrypted": true and "KmsKeyId": "(Omitted)". Therefore, add those parameters to the value of the --block-device-mappings parameter in the aws ec2 run-instances command.

The diff for those changes is as follows:

04-aws-create-vms.sh
@@ -14,6 +14,9 @@
     exit 1
 fi

+# Use the same KMS key as the imported snapshot
+KMS_KEY_ID="arn:aws:kms:ap-northeast-1:■■■■■■■■■■■■:key/■■■■■■■■-■■■■-■■■■-■■■■-■■■■■■■■■■■■"
+
 echo "Creating the VMs ..."
 for VM_ID in 1 2 3; do
     VM_NAME_VAR="VM_NAME_${VM_ID}"
@@ -38,7 +41,7 @@
                        	  --instance-type "${VM_TYPE}" \
                           --network-interfaces "[{\"NetworkInterfaceId\":\"${network_intf_id}\",\"DeviceIndex\":0}]" \
                           --block-device-mappings \
-                          "DeviceName=/dev/sda1,Ebs={VolumeSize=${VM_OS_DISK},VolumeType=gp3}" \
+                          "DeviceName=/dev/sda1,Ebs={VolumeSize=${VM_OS_DISK},VolumeType=gp3,Encrypted=true,KmsKeyId=${KMS_KEY_ID}}" \
                   	  --user-data file://user-data.ec2 \
                           --no-cli-pager \
                           --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${VM_NAME}},${TAGS}]" \

This diff content depends on the environment, so adjust it as necessary.

In my environment, I save this diff content as a file named 04-aws-create-vms.sh.patch. Then, I apply the patch and execute the 04-aws-create-vms.sh file as follows:

$ patch 04-aws-create-vms.sh 04-aws-create-vms.sh.patch
patching file 04-aws-create-vms.sh
$ ./04-aws-create-vms.sh
Creating the VMs ...
Waiting for instance to come to running state
{
    "VolumeId": "vol-01a2242c56b588f9a",
    "InstanceId": "i-06336fe4c34f5cc6b",
    "Device": "/dev/sdb",
    "State": "attaching",
    "AttachTime": "2026-01-12T07:44:41.354000+00:00"
}
Waiting for instance to come to running state
{
    "VolumeId": "vol-0b61727a12f0a3009",
    "InstanceId": "i-00a2731b97c3eb5bd",
    "Device": "/dev/sdb",
    "State": "attaching",
    "AttachTime": "2026-01-12T07:45:01.186000+00:00"
}
Waiting for instance to come to running state
{
    "VolumeId": "vol-0e42a0b5e295d70c7",
    "InstanceId": "i-047abcd0cd165c9f1",
    "Device": "/dev/sdb",
    "State": "attaching",
    "AttachTime": "2026-01-12T07:45:20.863000+00:00"
}

Wait a while, then confirm via the AWS Management Console or the AWS CLI that these three VMs (the EC2 instances VM_NAME_1 through VM_NAME_3) have finished starting.
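
As one way to do this with the AWS CLI, the instance-running waiter can block until all three instances are up, after which their states can be listed. The instance IDs below are the ones printed by 04-aws-create-vms.sh in my environment, so substitute your own.

```shell
# Wait until the three new instances are running, then list their states.
aws ec2 wait instance-running --instance-ids \
    i-06336fe4c34f5cc6b i-00a2731b97c3eb5bd i-047abcd0cd165c9f1
aws ec2 describe-instances \
    --instance-ids i-06336fe4c34f5cc6b i-00a2731b97c3eb5bd i-047abcd0cd165c9f1 \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]' \
    --output table
```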

Summary

In this article, as Part 1 of the AppD VA upgrade, I explained the upgrade prerequisites and then performed the backup, environment preparation, and execution of the upgrade script. For the AppD VA upgrade platform, I used AWS, just as in the installation articles.

In Part 2, as a continuation of this article, I will check the deployment status after the upgrade, recreate the cluster, restore, and perform a simple operation check (scheduled to be posted at a later date).

Footnotes
  1. Release Notes
