Knowledge Base

How do I pass user data when creating an EC2 instance with the Service Catalog?

Answer

A customer asked:

> Could you please share how to pass our user data while creating an EC2 instance using the Terragrunt module below?
> https://github.com/gruntwork-io/terraform-aws-service-catalog/tree/master/modules/services/ec2-instance
>
> which calls this module under the hood: https://github.com/gruntwork-io/terraform-aws-server/tree/v0.13.7/modules/single-server

Let's start by understanding how user data is currently configured and passed in the module. In [the ec2-instance service catalog module](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/modules/services/ec2-instance/main.tf#L97-L114) you indicated, the user data script lives in the `user-data.sh` file in the root of that same module. You will notice that `user-data.sh` is a template containing values that expect to be interpolated later, such as the following:

```bash
readonly users_for_ip_lockdown=(${ip_lockdown_users})

start_ec2_baseline \
  "${enable_cloudwatch_log_aggregation}" \
  "${enable_ssh_grunt}" \
  "${enable_fail2ban}" \
  "${enable_ip_lockdown}" \
  "${ssh_grunt_iam_group}" \
  "${ssh_grunt_iam_group_sudo}" \
  "${log_group_name}" \
  "${external_account_ssh_grunt_role_arn}" \
  "$${users_for_ip_lockdown[@]}" # Need a double dollar-sign here to avoid Terraform interpolation

volume_json=$(echo ${ebs_volumes} | base64 -d)
for name in $(echo $${volume_json} | jq -r 'keys[]') ; do
  mount_point=$(echo $${volume_json} | jq -r ".\"$${name}\".mount_point")
  device_name=$(echo $${volume_json} | jq -r ".\"$${name}\".device_name")
  owner=$(echo $${volume_json} | jq -r ".\"$${name}\".owner")
  id=$(echo ${ebs_volume_data} | base64 -d | jq -r "[.\"$${name}\"][0].id")
  mount-ebs-volume \
    --aws-region "${ebs_aws_region}" \
    --volume-id "$${id}" \
    --device-name "$${device_name}" \
    --mount-point "$${mount_point}" \
    --owner "$${owner}"
done
```

On [lines 97 to 114 of the main.tf](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/c5dd04e93dff43d00b54faac44f77c0afb0a2018/modules/services/ec2-instance/main.tf#L97-L114) file in that module, a local variable called `base_user_data` is created by calling the Terraform `templatefile` function, which renders the `user-data.sh` template above after passing in the variables the script expects.
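As a rough illustration of that pattern (the variable names below are abbreviated placeholders, not the module's exact argument list, so check `main.tf` for the authoritative version), the `templatefile` call looks something like this:

```hcl
# Illustrative sketch only -- the real module passes many more template
# variables; see main.tf lines 97-114 in the service catalog module.
locals {
  base_user_data = templatefile(
    "${path.module}/user-data.sh",
    {
      # Each key here becomes an interpolation target in user-data.sh,
      # e.g. ${enable_ssh_grunt} in the template above.
      enable_ssh_grunt  = var.enable_ssh_grunt
      enable_fail2ban   = var.enable_fail2ban
      ip_lockdown_users = join(" ", var.ip_lockdown_users)
      # JSON-encoded and base64-encoded so the script can decode it with
      # `base64 -d` and parse it with jq, as shown in the template.
      ebs_volumes = base64encode(jsonencode(var.ebs_volumes))
    }
  )
}
```

Note how the encoding matches the decoding side of the template: anything that must survive shell interpolation as structured data is serialized to JSON and base64-encoded in Terraform, then decoded inside the script.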
With that done, a new local map is created on [lines 77 to 81 here](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/c5dd04e93dff43d00b54faac44f77c0afb0a2018/modules/services/ec2-instance/main.tf#L77-L81), which represents the structure expected by the variable `cloud_init_parts`, defined [on lines 154 to 162 of variables.tf](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/c5dd04e93d/modules/services/ec2-instance/variables.tf#L154-L162). As noted in a comment there, [this doc](https://registry.terraform.io/providers/hashicorp/template/latest/docs/data-sources/cloudinit_config) explains the use of `template_cloudinit_config`, which is a generic definition for cloud-specific user data mechanisms such as AWS user data. All that said, [here's the official guide](https://github.com/gruntwork-io/terraform-aws-service-catalog/blob/master/modules/services/ec2-instance/core-concepts.md#how-do-i-use-user-data) to configuring user data within your own Terraform / Terragrunt config.
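Putting that together, a minimal sketch of passing your own user data through the module from Terragrunt might look like the following. The shape of `cloud_init_parts` (a map of `filename` / `content_type` / `content` objects) follows the `variables.tf` definition linked above; the module version, instance type, and script body are placeholders you should replace, and any other required inputs of the module are omitted here:

```hcl
# terragrunt.hcl (sketch -- replace <VERSION> and fill in the module's
# other required inputs for your environment)
terraform {
  source = "git::git@github.com:gruntwork-io/terraform-aws-service-catalog.git//modules/services/ec2-instance?ref=<VERSION>"
}

inputs = {
  instance_type = "t3.micro" # placeholder

  # Each entry becomes one part of the multipart cloud-init payload,
  # combined with the module's built-in base_user_data part.
  cloud_init_parts = {
    "custom-setup" = {
      filename     = "custom-setup.sh"
      content_type = "text/x-shellscript"
      content      = <<-EOF
        #!/usr/bin/env bash
        # Your custom user data goes here.
        echo "Hello from custom user data" > /tmp/custom-user-data-ran
      EOF
    }
  }
}
```

Because `cloud_init_parts` is merged with the module's own part rather than replacing it, the baseline setup (CloudWatch log aggregation, ssh-grunt, EBS mounting, and so on) still runs alongside your script.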