Automating Grafana Dashboards on Azure with Terraform — Part 4: Creating an Azure DevOps Pipeline for Multiple Terraform Stacks
Throughout this series, I’ve been automating the deployment of an Azure Managed Grafana instance to simplify the process of building a telemetry dashboard for an Azure-based solution. This solution consisted of an Azure Data Explorer (ADX) cluster to store the telemetry data and a Grafana Dashboard to display that telemetry in meaningful ways.
As you can see in the diagram below, this automation system was built using three distinct Terraform providers: azurerm to provision the infrastructure on Azure, grafana to provision the dashboards to Grafana, and adx to provision the database schema to Azure Data Explorer that drives the dashboard (more on this gem of a community provider later).
[Diagram: the three Terraform root modules and their providers: azurerm (Azure infrastructure), grafana (dashboards), and adx (ADX database schema)]
We employed an end-to-end pipeline on Azure DevOps that executed three distinct Terraform Apply operations on three distinct Terraform root modules. This is similar to how the upcoming Terraform Stacks feature will work, but built using existing Terraform capabilities and a whole bunch of Azure DevOps “glue” to stick it all together.
The folder structure of the Azure DevOps Git Repo looked like this:
- .azdo-pipelines
- src
- terraform
- adx
- grafana
- infra
We kept our Azure DevOps YAML pipelines in the .azdo-pipelines directory, and we organized our Terraform into three sub-folders in the src/terraform directory, one for each of the root modules.
In the Azure DevOps pipeline I have the following Jobs:
- terraform_infra
- terraform_grafana
- terraform_adx
The terraform_grafana and terraform_adx jobs both depend on terraform_infra because that job provisions the Azure infrastructure needed by the other two. You can’t provision Grafana dashboards to an Azure Managed Grafana instance that doesn’t exist, and you can’t provision table schema and functions to an Azure Data Explorer (ADX) cluster that doesn’t exist either!
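Sketched as a skeleton (with the steps elided), the dependency wiring looks like this:

```yaml
jobs:
  - job: terraform_infra        # applies src/terraform/infra
    steps: []

  - job: terraform_grafana      # applies src/terraform/grafana
    dependsOn: terraform_infra
    steps: []

  - job: terraform_adx          # applies src/terraform/adx
    dependsOn: terraform_infra
    steps: []
```

Because both downstream jobs depend only on terraform_infra, Azure DevOps can run terraform_grafana and terraform_adx in parallel once the infrastructure apply completes.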
The trick is to get the outputs from the Azure Infrastructure job to the respective job that requires them.
As illustrated in the above diagram I output a variable called adxUri and pass that along to the terraform_adx job that is going to provision the tables and functions to the ADX cluster. The Terraform user’s identity (a Service Principal) has already been granted access to the ADX cluster using an Azure Role Assignment.
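The ADX side of that wiring isn't shown in full later in the article, so here is a sketch of how the terraform_adx job might consume adxUri. The step name tfout and the Terraform variable name TF_VAR_adx_cluster_uri are assumptions for illustration:

```yaml
- job: terraform_adx
  dependsOn: terraform_infra
  variables:
    # Assumes terraform_infra has a step named 'tfout' that published adxUri
    - name: adxUri
      value: $[ dependencies.terraform_infra.outputs['tfout.adxUri'] ]
  steps:
    - task: Bash@3
      displayName: "terraform apply (adx)"
      inputs:
        filePath: .azdo-pipelines/scripts/terraform-with-backend.sh
        workingDirectory: src/terraform/adx
        arguments: apply -auto-approve
      env:
        # Hypothetical variable name; the adx provider needs the cluster URI
        TF_VAR_adx_cluster_uri: $(adxUri)
```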
Likewise, I pass the grafanaName and the grafanaEndpoint along to the terraform_grafana job. The grafanaName is the Azure resource name and the grafanaEndpoint is the URI of the Azure Managed Grafana instance, similar to the ADX cluster URI. The URI and the name are not always easily translatable, usually because the URI contains a random string or some other detail I can’t build from the name alone, so I have to pass them both.
Once in the terraform_grafana job, I need to execute bash scripts using the Azure CLI and the amg extension to create the Service Account if it doesn’t exist, delete any existing token, and then create a new authentication token.
I modified my token-creation bash script ever so slightly to expose the token as an Azure DevOps task output variable, allowing the Terraform Apply task within the job to use it to authenticate with Grafana.
echo 'Create grafana service-account token'
GRAFANA_SERVICE_ACCOUNT_TOKEN=$(az grafana service-account token create --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME --token $GRAFANA_SERVICE_ACCOUNT_TOKEN_NAME --time-to-live 15d --query "{key:key}" | jq -r '.key')
echo "##vso[task.setvariable variable=grafanaToken;isOutput=true]$GRAFANA_SERVICE_ACCOUNT_TOKEN"
Notice I am using the absurdly cryptic Azure DevOps output variable command syntax. Once you get it working, I highly recommend not changing it, like, EVER. I mean it. Don’t do it.
Within the terraform_infra job, you should echo the Terraform outputs to verify they are properly set as output variables.
- task: Bash@3
  displayName: "Check Terraform Outputs"
  inputs:
    targetType: inline
    script: |
      echo 'Grafana Endpoint: '$GRAFANA_ENDPOINT
      echo 'Grafana Name: '$GRAFANA_NAME
  env:
    GRAFANA_ENDPOINT: $(tfout.grafanaEndpoint)
    GRAFANA_NAME: $(tfout.grafanaName)
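For reference, the terraform_infra step that publishes these output variables might look like the sketch below. The step name tfout matches the $(tfout.*) references above; the Terraform output names (grafana_endpoint, grafana_name, adx_uri) are assumptions about the infra root module:

```yaml
- task: Bash@3
  name: tfout
  displayName: "Publish Terraform Outputs"
  inputs:
    targetType: inline
    workingDirectory: src/terraform/infra
    script: |
      # Read each Terraform output, then publish it as a job output variable
      GRAFANA_ENDPOINT=$(terraform output -raw grafana_endpoint)
      GRAFANA_NAME=$(terraform output -raw grafana_name)
      ADX_URI=$(terraform output -raw adx_uri)
      echo "##vso[task.setvariable variable=grafanaEndpoint;isOutput=true]$GRAFANA_ENDPOINT"
      echo "##vso[task.setvariable variable=grafanaName;isOutput=true]$GRAFANA_NAME"
      echo "##vso[task.setvariable variable=adxUri;isOutput=true]$ADX_URI"
```

The isOutput=true flag is what makes these variables visible to other jobs via the dependencies expression syntax.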
Then you can reference them as input variables within the terraform_grafana job. Make sure to set the dependsOn attribute on the terraform_grafana job, otherwise those input variables will not resolve!
- job: terraform_grafana
  dependsOn: terraform_infra
  variables:
    - name: grafanaEndpoint
      value: $[ dependencies.terraform_infra.outputs['tfout.grafanaEndpoint'] ]
    - name: grafanaName
      value: $[ dependencies.terraform_infra.outputs['tfout.grafanaName'] ]
You can also add some nice output verifications to ensure everything is coming through in the terraform_grafana job. Some Defensive Azure DevOps’ing never hurt anybody!
- task: Bash@3
  inputs:
    targetType: inline
    script: |
      echo 'Grafana Endpoint: '$(grafanaEndpoint)
      echo 'Grafana Name: '$(grafanaName)
Now that we know which Grafana instance we are talking to we can go get a fresh Grafana authentication token for our Grafana Service Account.
- task: Bash@3
  name: grafana_service_account
  displayName: "Grafana Service Account"
  inputs:
    filePath: .azdo-pipelines/scripts/grafana-service-account.sh
  env:
    GRAFANA_NAME: $(grafanaName)
    GRAFANA_SERVICE_ACCOUNT_NAME: terraform
    GRAFANA_SERVICE_ACCOUNT_TOKEN_NAME: terraform
    ARM_CLIENT_ID: $(ARM_CLIENT_ID)
    ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
    ARM_TENANT_ID: $(ARM_TENANT_ID)
I’ve encapsulated all of the Grafana Azure CLI code into a reusable bash script that really helps out. Here is the full script:
echo 'Azure CLI authN'
az login --service-principal -u $ARM_CLIENT_ID -p $ARM_CLIENT_SECRET --tenant $ARM_TENANT_ID
az extension add --name amg

echo 'List grafana service-accounts'
if [ "$(az grafana service-account list --name $GRAFANA_NAME | jq 'length == 0')" == "true" ]; then
  echo 'Create grafana service-account'
  az grafana service-account create --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME --role Admin --query "{name:name}" | jq '.name'
else
  echo 'Service account already exists!'
fi

echo 'List grafana service-account tokens'
if [ "$(az grafana service-account token list --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME | jq 'length == 0')" == "true" ]; then
  echo 'Token does not exist'
else
  echo 'Token already exists!'
  az grafana service-account token list --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME
  az grafana service-account token delete --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME --token $GRAFANA_SERVICE_ACCOUNT_TOKEN_NAME
  az grafana service-account token list --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME
fi

echo 'Create grafana service-account token'
GRAFANA_SERVICE_ACCOUNT_TOKEN=$(az grafana service-account token create --name $GRAFANA_NAME --service-account $GRAFANA_SERVICE_ACCOUNT_NAME --token $GRAFANA_SERVICE_ACCOUNT_TOKEN_NAME --time-to-live 15d --query "{key:key}" | jq -r '.key')
echo "##vso[task.setvariable variable=grafanaToken;isOutput=true]$GRAFANA_SERVICE_ACCOUNT_TOKEN"
Now we are ready to run Terraform Apply and make all our Grafana dreams come true.
- task: Bash@3
  displayName: "terraform apply"
  inputs:
    filePath: .azdo-pipelines/scripts/terraform-with-backend.sh
    workingDirectory: src/terraform/grafana
    arguments: apply -auto-approve -var-file="./env/$(EnvironmentName).tfvars"
  env:
    ARM_CLIENT_ID: $(ARM_CLIENT_ID)
    ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)
    ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
    ARM_TENANT_ID: $(ARM_TENANT_ID)
    BACKEND_RESOURCE_GROUP_NAME: $(BACKEND_RESOURCE_GROUP_NAME)
    BACKEND_STORAGE_ACCOUNT_NAME: $(BACKEND_STORAGE_ACCOUNT_NAME)
    BACKEND_STORAGE_ACCOUNT_CONTAINER_NAME: $(BACKEND_STORAGE_ACCOUNT_CONTAINER_NAME)
    WORKSPACE_NAME: $(EnvironmentName)
    TF_BACKEND_KEY: "$(ApplicationName)-grafana"
    TF_VAR_grafana_endpoint: $(grafanaEndpoint)
    TF_VAR_grafana_auth: $(grafana_service_account.grafanaToken)
I hope you’ve enjoyed our journey of automating Grafana dashboards with Azure Managed Grafana and the power of the azurerm and grafana Terraform providers. Now, it’s your turn to take the next step. Embrace the potential of automation and unlock the future of monitoring with ‘Telemetry Transformed.’
Until then — Happy Azure Terraforming!