Managing your WordPress sites with shell scripts and Kinsta API

If you manage many WordPress sites, you’re probably always on the lookout for ways to simplify and speed up your workflows.

Now, imagine this: with a single command in your terminal, you can trigger manual backups for all your sites, even if you’re managing dozens of them. That’s the power of combining shell scripts with the Kinsta API.

This guide teaches you how to use shell scripts to set up custom commands that make managing your sites more efficient.

Prerequisites

Before we start, here’s what you need:

  1. A terminal: All modern operating systems come with terminal software, so you can start scripting right out of the box.
  2. An IDE or text editor: Use a tool you’re comfortable with, whether it’s VS Code, Sublime Text, or even a lightweight editor like Nano for quick terminal edits.
  3. A Kinsta API key: This is essential for interacting with the Kinsta API. To generate yours:
    • Log in to your MyKinsta dashboard.
    • Go to Your Name > Company Settings > API Keys.
    • Click Create API Key and save it securely.
  4. curl and jq: Essential for making API requests and handling JSON data. Verify they’re installed, or install them.
  5. Basic programming familiarity: You don’t need to be an expert, but understanding programming basics and shell scripting syntax will be helpful.
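To confirm the fourth prerequisite, you can run a quick check in your terminal (a small sketch; the install command for your OS may differ):

```shell
# Verify that curl and jq are available; print a hint if either is missing.
for tool in curl jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed ($(command -v "$tool"))"
  else
    echo "$tool: missing - install it with your package manager (e.g., apt, brew)"
  fi
done
```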

Writing your first script

Creating your first shell script to interact with the Kinsta API is simpler than you might think. Let’s start with a simple script that lists all the WordPress sites managed under your Kinsta account.

Step 1: Set up your environment

Begin by creating a folder for your project and a new script file. The .sh extension is used for shell scripts. For instance, you can create a folder, navigate to it, and create and open a script file in VS Code using these commands:

mkdir my-first-shell-scripts
cd my-first-shell-scripts
touch script.sh
code script.sh

Step 2: Define your environment variables

To keep your API key secure, store it in a .env file instead of hardcoding it into the script. This allows you to add the .env file to .gitignore, preventing it from being pushed to version control.
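For example, assuming your project lives in a Git repository, the following creates the `.env` file and keeps it out of commits (the filenames are the ones used throughout this guide):

```shell
# Create the .env file and make sure Git never tracks it.
touch .env
grep -qxF ".env" .gitignore 2>/dev/null || echo ".env" >> .gitignore
cat .gitignore
```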

In your .env file, add:

API_KEY=your_kinsta_api_key

Next, pull the API key from the .env file to your script by adding the following to the top of your script:

#!/bin/bash
source .env

The #!/bin/bash shebang ensures the script runs using Bash, while source .env imports the environment variables.

Step 3: Write the API request

First, store your company ID (found in MyKinsta under Company Settings > Billing Details) in a variable:

COMPANY_ID="<your_company_id>"

Next, add the curl command to make a GET request to the /sites endpoint, passing the company ID as a query parameter. Use jq to format the output for readability:

curl -s -X GET \
  "https://api.kinsta.com/v2/sites?company=$COMPANY_ID" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" | jq

This request retrieves details about all sites associated with your company, including their IDs, names, and statuses.

Step 4: Make the script executable

Save the script and make it executable by running:

chmod +x script.sh

Step 5: Run the script

Execute the script to see a formatted list of your sites:

./script.sh

When you run the script, you’ll get a response similar to this:

{
  "company": {
    "sites": [
      {
        "id": "a8f39e7e-d9cf-4bb4-9006-ddeda7d8b3af",
        "name": "bitbuckettest",
        "display_name": "bitbucket-test",
        "status": "live",
        "site_labels": []
      },
      {
        "id": "277b92f8-4014-45f7-a4d6-caba8f9f153f",
        "name": "duketest",
        "display_name": "zivas Signature",
        "status": "live",
        "site_labels": []
      }
    ]
  }
}

While this works, let’s improve it by setting up a function to fetch and format the site details for easier readability.

Step 6: Refactor with a function

Replace the curl request with a reusable function to handle fetching and formatting the site list:

list_sites() {
  echo "Fetching all sites for company ID: $COMPANY_ID..."

  RESPONSE=$(curl -s -X GET "https://api.kinsta.com/v2/sites?company=$COMPANY_ID" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json")

  # Check for errors
  if [ -z "$RESPONSE" ]; then
    echo "Error: No response from the API."
    exit 1
  fi

  echo "Company Sites:"
  echo "--------------"
  echo "$RESPONSE" | jq -r '.company.sites[] | "\(.display_name) (\(.name)) - Status: \(.status)"'
}

# Run the function
list_sites

When you execute the script again, you’ll get neatly formatted output:

Fetching all sites for company ID: b383b4c-****-****-a47f-83999c5d2...
Company Sites:
--------------
bitbucket-test (bitbuckettest) - Status: live
zivas Signature (duketest) - Status: live

With this script, you’ve taken your first step toward using shell scripts and the Kinsta API for automating WordPress site management. In the next sections, we explore creating more advanced scripts to interact with the API in powerful ways.

Advanced use case 1: Creating backups

Creating backups is a crucial aspect of website management. They allow you to restore your site in case of unforeseen issues. With the Kinsta API and shell scripts, this process can be automated, saving time and effort.

In this section, we create backups and address Kinsta’s limit of five manual backups per environment. To handle this, we’ll implement a process to:

  • Check the current number of manual backups.
  • Identify and delete the oldest backup (with user confirmation) if the limit is reached.
  • Proceed to create a new backup.

Let’s get into the details.

The backup workflow

To create backups using the Kinsta API, you’ll use the following endpoint:

POST /sites/environments/{env_id}/manual-backups

This requires:

  1. Environment ID: Identifies the environment (like staging or production) where the backup will be created.
  2. Backup Tag: A label to identify the backup (optional).

Manually retrieving the environment ID and running a command like backup <environment ID> can be cumbersome. Instead, we’ll build a user-friendly script where you simply specify the site name, and the script will:

  1. Fetch the list of environments for the site.
  2. Prompt you to choose the environment to back up.
  3. Handle the backup creation process.

Reusable functions for clean code

To keep our script modular and reusable, we’ll define functions for specific tasks. Let’s go through the setup step by step.

1. Set up base variables

You can do away with the first script you created or create a new script file for this. Start by declaring the base Kinsta API URL and your company ID in the script:

BASE_URL="https://api.kinsta.com/v2"
COMPANY_ID="<your_company_id>"

These variables allow you to construct API endpoints dynamically throughout the script.

2. Fetch all sites

Define a function to fetch the list of all company sites. This allows you to retrieve details about each site later.

get_sites_list() {
  API_URL="$BASE_URL/sites?company=$COMPANY_ID"

  echo "Fetching all sites for company ID: $COMPANY_ID..."
  
  RESPONSE=$(curl -s -X GET "$API_URL" /
    -H "Authorization: Bearer $API_KEY" /
    -H "Content-Type: application/json")

  # Check for errors
  if [ -z "$RESPONSE" ]; then
    echo "Error: No response from the API."
    exit 1
  fi

  echo "$RESPONSE"
}

You’ll notice this function returns the raw, unformatted response from the API. To get a formatted response, you can add another function to handle that (although formatting is not our concern in this section):

list_sites() {
  RESPONSE=$(get_sites_list)

  if [ -z "$RESPONSE" ]; then
    echo "Error: No response from the API while fetching sites."
    exit 1
  fi

  echo "Company Sites:"
  echo "--------------"
  # Clean the RESPONSE before passing it to jq
  CLEAN_RESPONSE=$(echo "$RESPONSE" | tr -d '\r' | sed 's/^[^{]*//') # Removes extra characters before the JSON starts

  echo "$CLEAN_RESPONSE" | jq -r '.company.sites[] | "\(.display_name) (\(.name)) - Status: \(.status)"'
}

Calling the list_sites function displays your sites as shown earlier. The main goal, however, is to access each site and its ID, allowing you to retrieve detailed information about each site.

3. Fetch site details

To fetch details about a specific site, use the following function, which retrieves the site ID based on the site name and fetches additional details, like environments:

get_site_details_by_name() {
  SITE_NAME=$1
  if [ -z "$SITE_NAME" ]; then
    echo "Error: No site name provided. Usage: $0 details-name <site_name>"
    return 1
  fi

  RESPONSE=$(get_sites_list)

  echo "Searching for site with name: $SITE_NAME..."

  # Clean the RESPONSE before parsing
  CLEAN_RESPONSE=$(echo "$RESPONSE" | tr -d '\r' | sed 's/^[^{]*//')

  # Extract the site ID for the given site name
  SITE_ID=$(echo "$CLEAN_RESPONSE" | jq -r --arg SITE_NAME "$SITE_NAME" '.company.sites[] | select(.name == $SITE_NAME) | .id')

  if [ -z "$SITE_ID" ]; then
    echo "Error: Site with name \"$SITE_NAME\" not found."
    return 1
  fi

  echo "Found site ID: $SITE_ID for site name: $SITE_NAME"

  # Fetch site details using the site ID
  API_URL="$BASE_URL/sites/$SITE_ID"

  SITE_RESPONSE=$(curl -s -X GET "$API_URL" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json")

  echo "$SITE_RESPONSE"
}

The function above filters the site using the site name and then retrieves additional details about the site using the /sites/<site-id> endpoint. These details include the site’s environments, which is what we need to trigger backups.
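To see what the jq filters in these functions actually do, here is the same logic run against a small mock response (the IDs and names below are invented for illustration; the real shape is whatever the `/sites/<site-id>` endpoint returns):

```shell
# Invented sample matching the response shape the script parses.
SAMPLE='{"site":{"environments":[
  {"id":"env-111","name":"live","display_name":"Live"},
  {"id":"env-222","name":"staging","display_name":"Staging"}]}}'

# List environments in "name: id" form:
echo "$SAMPLE" | jq -r '.site.environments[] | "\(.name): \(.id)"'

# Look up one environment ID by name (the pattern used later for backups):
echo "$SAMPLE" | jq -r --arg ENV_NAME "live" \
  '.site.environments[] | select(.name == $ENV_NAME) | .id'
```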

Creating backups

Now that you’ve set up reusable functions to fetch site details and list environments, you can focus on automating the process of creating backups. The goal is to run a simple command with just the site name and then interactively choose the environment to back up.

Start by creating a function (we’re naming it trigger_manual_backup). Inside the function, define two variables: the first to accept the site name as input and the second to set a default tag (default-backup) for the backup. This default tag will be applied unless you choose to specify a custom tag later.

trigger_manual_backup() {
  SITE_NAME=$1
  DEFAULT_TAG="default-backup"

  # Ensure a site name is provided
  if [ -z "$SITE_NAME" ]; then
    echo "Error: Site name is required."
    echo "Usage: $0 backup <site_name>"
    return 1
  fi

  # Add the code here

}

This SITE_NAME is the identifier for the site you want to manage. You also set up a condition so the script exits with an error message if the identifier is not provided. This ensures the script doesn’t proceed without the necessary input, preventing potential API errors.

Next, use the reusable get_site_details_by_name function to fetch detailed information about the site, including its environments. The response is then cleaned to remove any unexpected formatting issues that might arise during processing.

SITE_RESPONSE=$(get_site_details_by_name "$SITE_NAME")

if [ $? -ne 0 ]; then
  echo "Error: Failed to fetch site details for site \"$SITE_NAME\"."
  return 1
fi

CLEAN_RESPONSE=$(echo "$SITE_RESPONSE" | tr -d '\r' | sed 's/^[^{]*//')

Once we have the site details, the script below extracts all available environments and displays them in a readable format. This helps you visualize which environments are linked to the site.

The script then prompts you to select an environment by its name. This interactive step makes the process user-friendly by eliminating the need to remember or input environment IDs.

ENVIRONMENTS=$(echo "$CLEAN_RESPONSE" | jq -r '.site.environments[] | "\(.name): \(.id)"')

echo "Available Environments for \"$SITE_NAME\":"
echo "$ENVIRONMENTS"

read -p "Enter the environment name to back up (e.g., staging, live): " ENV_NAME

The selected environment name is then used to look up its corresponding environment ID from the site details. This ID is required for API requests to create a backup.

ENV_ID=$(echo "$CLEAN_RESPONSE" | jq -r --arg ENV_NAME "$ENV_NAME" '.site.environments[] | select(.name == $ENV_NAME) | .id')

if [ -z "$ENV_ID" ]; then
  echo "Error: Environment \"$ENV_NAME\" not found for site \"$SITE_NAME\"."
  return 1
fi

echo "Found environment ID: $ENV_ID for environment name: $ENV_NAME"

In the code above, a condition is created so that the script exits with an error message if the provided environment name is not matched.

Now that you have the environment ID, you can proceed to check the current number of manual backups for the selected environment. Kinsta’s limit of five manual backups per environment means this step is crucial to avoid errors.

Let’s start by fetching the list of backups using the /backups API endpoint.

API_URL="$BASE_URL/sites/environments/$ENV_ID/backups"
BACKUPS_RESPONSE=$(curl -s -X GET "$API_URL" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json")

CLEAN_RESPONSE=$(echo "$BACKUPS_RESPONSE" | tr -d '\r' | sed 's/^[^{]*//')
MANUAL_BACKUPS=$(echo "$CLEAN_RESPONSE" | jq '[.environment.backups[] | select(.type == "manual")]')
BACKUP_COUNT=$(echo "$MANUAL_BACKUPS" | jq 'length')

The script above then filters for manual backups and counts them. If the count reaches the limit, we need to manage the existing backups:

  if [ "$BACKUP_COUNT" -ge 5 ]; then
    echo "Manual backup limit reached (5 backups)."

    # Find the oldest backup
    OLDEST_BACKUP=$(echo "$MANUAL_BACKUPS" | jq -r 'sort_by(.created_at) | .[0]')
    OLDEST_BACKUP_NAME=$(echo "$OLDEST_BACKUP" | jq -r '.note')
    OLDEST_BACKUP_ID=$(echo "$OLDEST_BACKUP" | jq -r '.id')

    echo "The oldest manual backup is \"$OLDEST_BACKUP_NAME\"."
    read -p "Do you want to delete this backup to create a new one? (yes/no): " CONFIRM

    if [ "$CONFIRM" != "yes" ]; then
      echo "Aborting backup creation."
      return 1
    fi

    # Delete the oldest backup
    DELETE_URL="$BASE_URL/sites/environments/backups/$OLDEST_BACKUP_ID"
    DELETE_RESPONSE=$(curl -s -X DELETE "$DELETE_URL" \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json")

    echo "Delete Response:"
    echo "$DELETE_RESPONSE" | jq -r '[
      "Operation ID: \(.operation_id)",
      "Message: \(.message)",
      "Status: \(.status)"
    ] | join("\n")'
  fi

The condition above identifies the oldest backup by sorting the list based on the created_at timestamp. It then prompts you to confirm whether you’d like to delete it.

If you agree, the script deletes the oldest backup using its ID, freeing up space for the new one. This ensures that backups can always be created without manually managing limits.
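Here is the oldest-backup selection in isolation, run against mock data (the IDs, notes, and timestamps are invented), which makes it easy to confirm that `sort_by(.created_at) | .[0]` really picks the earliest entry:

```shell
# Three fake manual backups, deliberately listed out of order.
MANUAL_BACKUPS='[
  {"id":"b-march","note":"march","created_at":1709251200},
  {"id":"b-jan","note":"january","created_at":1704067200},
  {"id":"b-feb","note":"february","created_at":1706745600}]'

OLDEST_BACKUP=$(echo "$MANUAL_BACKUPS" | jq -r 'sort_by(.created_at) | .[0]')
echo "$OLDEST_BACKUP" | jq -r '.id'   # the January backup sorts first
```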

Now that there is space, let’s proceed with the code that triggers a backup for the environment. This next snippet is optional, but for a better experience, it prompts you to specify a custom tag, defaulting to “default-backup” if none is provided.

read -p "Enter a backup tag (or press Enter to use \"$DEFAULT_TAG\"): " BACKUP_TAG

if [ -z "$BACKUP_TAG" ]; then
  BACKUP_TAG="$DEFAULT_TAG"
fi

echo "Using backup tag: $BACKUP_TAG"

Finally, the script below is where the backup action happens. It sends a POST request to the /manual-backups endpoint with the selected environment ID and backup tag. If the request is successful, the API returns a response confirming the backup creation.

API_URL="$BASE_URL/sites/environments/$ENV_ID/manual-backups"
RESPONSE=$(curl -s -X POST "$API_URL" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"tag\": \"$BACKUP_TAG\"}")

if [ -z "$RESPONSE" ]; then
  echo "Error: No response from the API while triggering the manual backup."
  return 1
fi

echo "Backup Trigger Response:"
echo "$RESPONSE" | jq -r '[
  "Operation ID: \(.operation_id)",
  "Message: \(.message)",
  "Status: \(.status)"
] | join("\n")'

That’s it! The response obtained from the request above is formatted to display the operation ID, message, and status for clarity. If you call the function and run the script, you’ll see output similar to this:

Available Environments for "example-site":
staging: 12345
live: 67890
Enter the environment name to back up (e.g., staging, live): live
Found environment ID: 67890 for environment name: live
Manual backup limit reached (5 backups).
The oldest manual backup is "staging-backup-2023-12-31".
Do you want to delete this backup to create a new one? (yes/no): yes
Oldest backup deleted.
Enter a backup tag (or press Enter to use "default-backup"): weekly-live-backup
Using backup tag: weekly-live-backup
Triggering manual backup for environment ID: 67890 with tag: weekly-live-backup...
Backup Trigger Response:
Operation ID: backups:add-manual-abc123
Message: Adding a manual backup to environment in progress.
Status: 202

Creating commands for your script

Commands simplify how your script is used. Instead of editing the script or commenting out code manually, users can run it with a specific command like:

./script.sh list-sites
./script.sh backup <site_name>

At the end of your script (outside all the functions), include a conditional block that checks the arguments passed to the script:

if [ "$1" == "list-sites" ]; then
  list_sites
elif [ "$1" == "backup" ]; then
  SITE_NAME="$2"
  if [ -z "$SITE_NAME" ]; then
    echo "Usage: $0 backup <site_name>"
    exit 1
  fi
  trigger_manual_backup "$SITE_NAME"
else
  echo "Usage: $0 {list-sites|backup <site_name>}"
  exit 1
fi

The $1 variable represents the first argument passed to the script (e.g., in ./script.sh list-sites, $1 is list-sites). The script uses conditional checks to match $1 with specific commands like list-sites or backup. If the command is backup, it also expects a second argument ($2), which is the site name. If no valid command is provided, the script defaults to displaying usage instructions.
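As the number of commands grows, the same dispatch is often written as a case statement. Here is a sketch (wrapped in a function so it is easy to call with arguments; in the script, the equivalent block would sit at the bottom and read "$1" and "$2" directly):

```shell
# Equivalent command dispatcher using case instead of an if/elif chain.
# Assumes list_sites and trigger_manual_backup are defined above.
dispatch() {
  case "${1:-}" in
    list-sites)
      list_sites
      ;;
    backup)
      if [ -z "${2:-}" ]; then
        echo "Usage: $0 backup <site_name>"
        return 1
      fi
      trigger_manual_backup "$2"
      ;;
    *)
      echo "Usage: $0 {list-sites|backup <site_name>}"
      return 1
      ;;
  esac
}
```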

You can now trigger a manual backup for a specific site by running the command:

./script.sh backup <site_name>

Advanced use case 2: Updating plugins across multiple sites

Managing WordPress plugins across multiple sites can be tedious, especially when updates are available. Kinsta does a great job handling this via the MyKinsta dashboard, through the bulk action feature we introduced last year.

But if you do not like working with user interfaces, the Kinsta API provides another opportunity to create a shell script to automate the process of identifying outdated plugins and updating them across multiple sites or specific environments.

Breaking down the workflow

1. Identify sites with outdated plugins: The script iterates through all sites and environments, searching for the specified plugin with an update available. The following endpoint is used to fetch the list of plugins for a specific site environment:

GET /sites/environments/{env_id}/plugins

From the response, we filter for plugins where "update": "available".

2. Prompt user for update options: It displays the sites and environments with the outdated plugin, allowing the user to select specific instances or update all of them.

3. Trigger plugin updates: To update the plugin in a specific environment, the script uses this endpoint:

PUT /sites/environments/{env_id}/plugins

The plugin name and its updated version are passed in the request body.
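The filtering in step 1 is easiest to see against a mock response (the field names follow the jq paths used in the script; the plugin names, versions, and the "update" value for the up-to-date plugin are invented for the example):

```shell
# Invented sample of the plugins response shape the script parses.
PLUGINS='{"environment":{"container_info":{"wp_plugins":{"data":[
  {"name":"plugin-a","version":"1.0.0","update":"available","update_version":"1.2.0"},
  {"name":"plugin-b","version":"2.0.0","update":"none"}]}}}}'

# Keep only plugins with an update available, showing name -> new version.
echo "$PLUGINS" | jq -r '.environment.container_info.wp_plugins.data[]
  | select(.update == "available")
  | "\(.name) -> \(.update_version)"'
```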

The script

Since the script is lengthy, the full function is hosted on GitHub for easy access. Here, we’ll explain the core logic used to identify outdated plugins across multiple sites and environments.

The script starts by accepting the plugin name from the command. This name specifies the plugin you want to update.

PLUGIN_NAME=$1

if [ -z "$PLUGIN_NAME" ]; then
  echo "Error: Plugin name is required."
  echo "Usage: $0 update-plugin <plugin_name>"
  return 1
fi

The script then uses the reusable get_sites_list function (explained earlier) to fetch all sites in the company:

echo "Fetching all sites in the company..."

# Fetch all sites in the company
SITES_RESPONSE=$(get_sites_list)
if [ $? -ne 0 ]; then
  echo "Error: Failed to fetch sites."
  return 1
fi

# Clean the response
CLEAN_SITES_RESPONSE=$(echo "$SITES_RESPONSE" | tr -d '\r' | sed 's/^[^{]*//')

Next comes the heart of the script: looping through the list of sites to check for outdated plugins. The CLEAN_SITES_RESPONSE, which is a JSON object containing all sites, is passed to a while loop to perform operations for each site one by one.

It starts by extracting some important data like the site ID, name, and display name into variables:

while IFS= read -r SITE; do
  SITE_ID=$(echo "$SITE" | jq -r '.id')
  SITE_NAME=$(echo "$SITE" | jq -r '.name')
  SITE_DISPLAY_NAME=$(echo "$SITE" | jq -r '.display_name')

  echo "Checking environments for site \"$SITE_DISPLAY_NAME\"..."

The site name is then used alongside the get_site_details_by_name function defined earlier to fetch detailed information about the site, including all its environments.

SITE_DETAILS=$(get_site_details_by_name "$SITE_NAME")
CLEAN_SITE_DETAILS=$(echo "$SITE_DETAILS" | tr -d '\r' | sed 's/^[^{]*//')

ENVIRONMENTS=$(echo "$CLEAN_SITE_DETAILS" | jq -r '.site.environments[] | "\(.id):\(.name):\(.display_name)"')

The environments are then looped through to extract details of each environment, such as the ID, name, and display name:

while IFS= read -r ENV; do
  ENV_ID=$(echo "$ENV" | cut -d: -f1)
  ENV_NAME=$(echo "$ENV" | cut -d: -f2)
  ENV_DISPLAY_NAME=$(echo "$ENV" | cut -d: -f3)

  echo "Checking plugins for environment \"$ENV_DISPLAY_NAME\"..."

For each environment, the script now fetches its list of plugins using the Kinsta API.

PLUGINS_RESPONSE=$(curl -s -X GET "$BASE_URL/sites/environments/$ENV_ID/plugins" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json")

CLEAN_PLUGINS_RESPONSE=$(echo "$PLUGINS_RESPONSE" | tr -d '\r' | sed 's/^[^{]*//')

Next, the script checks if the specified plugin exists in the environment and has an available update:

OUTDATED_PLUGIN=$(echo "$CLEAN_PLUGINS_RESPONSE" | jq -r --arg PLUGIN_NAME "$PLUGIN_NAME" '.environment.container_info.wp_plugins.data[] | select(.name == $PLUGIN_NAME and .update == "available")')

If an outdated plugin is found, the script logs its details and adds them to the SITES_WITH_OUTDATED_PLUGIN array:

if [ ! -z "$OUTDATED_PLUGIN" ]; then
  CURRENT_VERSION=$(echo "$OUTDATED_PLUGIN" | jq -r '.version')
  UPDATE_VERSION=$(echo "$OUTDATED_PLUGIN" | jq -r '.update_version')

  echo "Outdated plugin \"$PLUGIN_NAME\" found in \"$SITE_DISPLAY_NAME\" (Environment: $ENV_DISPLAY_NAME)"
  echo "  Current Version: $CURRENT_VERSION"
  echo "  Update Version: $UPDATE_VERSION"

  SITES_WITH_OUTDATED_PLUGIN+=("$SITE_DISPLAY_NAME:$ENV_DISPLAY_NAME:$ENV_ID:$UPDATE_VERSION")
fi

This is what the logged details of outdated plugins would look like:

Outdated plugin "example-plugin" found in "Site ABC" (Environment: Production)
  Current Version: 1.0.0
  Update Version: 1.2.0
Outdated plugin "example-plugin" found in "Site XYZ" (Environment: Staging)
  Current Version: 1.3.0
  Update Version: 1.4.0

From here, we perform plugin updates for each plugin using its endpoint. The full script is in this GitHub repository.
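The update call itself boils down to a PUT request per environment. Here is a sketch; the field names in the request body are an assumption based on the plugin fields shown above (name, update_version), so check the Kinsta API reference before relying on them:

```shell
# Sketch of the plugin-update request, guarded so it only fires when the
# required variables are set. "example-plugin" and "1.4.0" are placeholders,
# and the body's field names are an assumption - verify against the API docs.
BASE_URL="https://api.kinsta.com/v2"
if [ -n "${API_KEY:-}" ] && [ -n "${ENV_ID:-}" ]; then
  curl -s -X PUT "$BASE_URL/sites/environments/$ENV_ID/plugins" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"name": "example-plugin", "update_version": "1.4.0"}' | jq
else
  echo "Set API_KEY and ENV_ID to send the update request."
fi
```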

Summary

This article guided you through creating a shell script to interact with the Kinsta API.

Take some time to explore the Kinsta API further — you’ll discover additional features you can automate to handle tasks tailored to your specific needs. You might consider integrating the API with other APIs to enhance decision-making and efficiency.

Lastly, regularly check the MyKinsta dashboard for new features designed to make website management even more user-friendly through its intuitive interface.

The post Managing your WordPress sites with shell scripts and Kinsta API appeared first on Kinsta®.
