Knowledge Base

How do I get S3 bucket folder structure using Terratest?

Answer

### Requesting an approach to get S3 bucket details from Terratest methods

I am writing a Terratest Go script for automated validation of an S3 bucket, so I am requesting scripts, links to scripts, or suitable methods to achieve the requirement below, which should be fairly straightforward for validating any S3 bucket.

### What am I able to do?

- Create an S3 bucket with [func CreateS3Bucket(t testing.TestingT, region string, name string)](https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/aws#CreateS3Bucket)
- Assert that the bucket exists with [func AssertS3BucketExists(t testing.TestingT, region string, name string)](https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/aws#AssertS3BucketExists)
- Get the length of an object's content with [func GetS3ObjectContentsE(t testing.TestingT, awsRegion string, bucket string, key string) (string, error)](https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/aws#GetS3ObjectContentsE), provided the object key is known

### What am I NOT able to do, i.e. what am I seeking from you?

I would like to know:

- how many folders were created in my S3 bucket
- how many files are present in my S3 bucket
- basically, the folder structure of my S3 bucket

If I can get a clear folder structure (folder names, sub-folder names, and file names present in my S3 bucket), then I can assert it against the requirement and validate the S3 bucket functionality completely. I am not able to achieve this with [GetS3ObjectContentsE](https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/aws#GetS3ObjectContentsE), as it returned the whole object content in an unreadable format. Please help me get this validation done, as it seems straightforward and I have many test cases pipelined on it in the current sprint.

Note: I have included my code below. It calls a Terraform module to create the S3 bucket (`t.Run("reports3", func(t *testing.T)`) and then loads content into the bucket (`t.Run("s3config", func(t *testing.T)`). I am able to validate bucket creation using the `AssertS3BucketExists` function, but I am NOT getting any approach/hook to validate what folders have been created in my S3 bucket.

```go
package test

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/fatih/color"
	"github.com/gruntwork-io/terratest/modules/aws"
	"github.com/gruntwork-io/terratest/modules/random"
	"github.com/gruntwork-io/terratest/modules/terraform"
	test_structure "github.com/gruntwork-io/terratest/modules/test-structure"
	"github.com/magiconair/properties"
	"github.com/stretchr/testify/assert"
)

var (
	rootFolder                     = "../.."                                 // Root folder where terraform files should be (relative to the test folder)
	terraformFolderRelativeToRoot1 = "/modules/reporting-service-s3-bucket"  // Relative path from the root folder to the reporting-service-s3 terraform module being tested
	terraformFolderRelativeToRoot2 = "/modules/reporting-service-config"     // Relative path from the root folder to the reporting-service-config terraform module being tested

	blue            = color.New(color.FgBlue)      // Blue foreground color for printing results
	boldBlue        = blue.Add(color.Bold)
	greenBackground = boldBlue.Add(color.BgGreen)  // Green background color for printing results

	randomId    = random.UniqueId()
	stateBucket = strings.ToLower(fmt.Sprintf("test-s3backend-%s", randomId)) // Backend bucket

	p                  = properties.MustLoadFile("../config.properties", properties.UTF8) // Property file object
	awsRegion          = p.MustGetString("awsRegion")      // Reading awsRegion from the property file
	expectedBucketName = string(strings.ToLower(fmt.Sprintf("reportauto-s3-%s", randomId))[:17])
	artifactoryurl     = p.MustGetString("artifactoryurl") // Reading artifactoryurl from the property file
	repo               = p.MustGetString("repo")           // Reading repo from the property file
	// snapshotrepository = p.MustGetString("snapshotrepository") // Reading snapshotrepository from the property file

	configzipForAssetion = "config/config_v1.0.0.zip" // For comparing with the actual version
)

// Basic test of the private-s3-bucket and reporting-service-config modules that runs 'apply', 'destroy', and validation
func TestReportingServiceConfig(t *testing.T) {
	t.Parallel()
	tempTestFolder1 := test_structure.CopyTerraformFolderToTemp(t, rootFolder, terraformFolderRelativeToRoot1) // Copy the terraform folder to a temp folder
	tempTestFolder2 := test_structure.CopyTerraformFolderToTemp(t, rootFolder, terraformFolderRelativeToRoot2) // Copy the terraform folder to a temp folder
	aws.CreateS3Bucket(t, awsRegion, stateBucket) // Creating the state bucket
	time.Sleep(1 * 50 * time.Second)

	t.Run("reports3", func(t *testing.T) {
		terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
			TerraformDir: tempTestFolder1,
			Vars: map[string]interface{}{
				"aws_region":                    awsRegion,
				"reporting_service_bucket_name": expectedBucketName,
			},
			// Storing state files remotely in the backend S3 bucket
			BackendConfig: map[string]interface{}{
				"bucket": stateBucket,
				"key":    awsRegion + "/report-s3/terraform.tfstate",
				"region": awsRegion,
			},
			Upgrade: false,
		})
		defer test_structure.RunTestStage(t, "cleanup", func() {
			terraform.Destroy(t, terraformOptions)
			aws.EmptyS3Bucket(t, awsRegion, stateBucket)
			aws.DeleteS3Bucket(t, awsRegion, stateBucket)
			cleaningTemp("/tmp/TestReportingServiceConfig*")
			cleaningTemp("go.*")
		})
		test_structure.RunTestStage(t, "deploy", func() {
			terraform.InitAndApply(t, terraformOptions)
		})
		reportingservicebucketname := terraform.Output(t, terraformOptions, "reporting_service_bucket_name")
		test_structure.RunTestStage(t, "validate", func() {
			actualS3BucketName := reportingservicebucketname
			aws.AssertS3BucketExists(t, awsRegion, actualS3BucketName)
			greenBackground.Printf("Expected reporting-service-s3-bucket %s is matching with actual reporting-service-s3-bucket %s in terratest assertion ==> %t\n",
				expectedBucketName, actualS3BucketName, assert.Equal(t, expectedBucketName, actualS3BucketName))
		})

		t.Run("s3config", func(t *testing.T) {
			test_structure.RunTestStage(t, "deploy", func() {
				terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
					// Set the path to the Terraform code that will be tested.
					TerraformDir: tempTestFolder2,
					Vars: map[string]interface{}{
						"aws_region":                       awsRegion,
						"reporting_service_s3_bucket_name": reportingservicebucketname,
						"artifactory_url":                  artifactoryurl,
						"repository":                       repo,
						// "snapshot_repository": snapshotrepository,
					},
					// Storing state files remotely in the backend S3 bucket
					BackendConfig: map[string]interface{}{
						"bucket": stateBucket,
						"key":    awsRegion + "/report-config/terraform.tfstate",
						"region": awsRegion,
					},
					Upgrade: false,
				})
				test_structure.SaveTerraformOptions(t, tempTestFolder2, terraformOptions)
			})
			defer test_structure.RunTestStage(t, "cleanup_reporting_service_config", func() {
				terraformOptions := test_structure.LoadTerraformOptions(t, tempTestFolder2)
				terraform.Destroy(t, terraformOptions)
			})
			test_structure.RunTestStage(t, "deploy_reporting_service_config", func() {
				terraformOptions := test_structure.LoadTerraformOptions(t, tempTestFolder2)
				terraform.InitAndApply(t, terraformOptions)
			})
			test_structure.RunTestStage(t, "validate_reporting_service_config", func() {
				s3Content, _ := aws.GetS3ObjectContentsE(t, awsRegion, expectedBucketName, configzipForAssetion)
				print("s3Content here:", s3Content)
				if len(s3Content) > 1 {
					greenBackground.Printf("S3 bucket has content with size %d\n", len(s3Content))
				}
			})
			terraformOptions := test_structure.LoadTerraformOptions(t, tempTestFolder2)
			reportingserviceconfigversion := terraform.Output(t, terraformOptions, "reporting_service_config_version")
			greenBackground.Printf("S3 CONFIG output IS HERE with version %s\n", reportingserviceconfigversion)
		})
	})
}

func cleaningTemp(tempPath string) {
	files, err := filepath.Glob(tempPath)
	if err != nil {
		panic(err)
	}
	for _, f := range files {
		if err := os.RemoveAll(f); err != nil {
			panic(err)
		}
	}
	greenBackground.Printf("Cleared temp directory %s\n", tempPath)
}
```
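Worth noting before the answer: S3 has no real directories, only object keys; "folders" are just key prefixes ending in `/`. So the counts and structure asked for above can be derived entirely from a list of keys. A minimal sketch of that derivation (the sample keys are hypothetical, standing in for whatever a bucket listing returns):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// folderStructure derives the set of implied "folders" (every distinct key
// prefix ending in "/") and the number of files from a list of S3 object keys.
func folderStructure(keys []string) (folders []string, fileCount int) {
	seen := map[string]bool{}
	for _, key := range keys {
		// Keys not ending in "/" are files; keys ending in "/" are the
		// zero-byte "folder" placeholder objects some tools create.
		if !strings.HasSuffix(key, "/") {
			fileCount++
		}
		// Record every parent prefix: "a/b/c.txt" implies "a/" and "a/b/".
		parts := strings.Split(key, "/")
		for i := 1; i < len(parts); i++ {
			seen[strings.Join(parts[:i], "/")+"/"] = true
		}
	}
	for f := range seen {
		folders = append(folders, f)
	}
	sort.Strings(folders)
	return folders, fileCount
}

func main() {
	keys := []string{ // hypothetical keys, as if returned by a bucket listing
		"config/config_v1.0.0.zip",
		"reports/2021/april.csv",
		"reports/2021/may.csv",
	}
	folders, files := folderStructure(keys)
	fmt.Println("folders:", folders) // folders: [config/ reports/ reports/2021/]
	fmt.Println("files:", files)     // files: 3
}
```

Once the folder set and file count are plain Go values like this, they can be asserted directly with `assert.Equal` against the expected structure.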

Hello, try the `"%v"` format verb — it prints the response in a readable, formatted way. Additional points:

* the `bucketObjects` response can be truncated, which requires an additional request with the continuation token
* `Prefix` is optional and can be used to filter "subdirectories"

Simplified example (imports added for completeness; the AWS SDK `aws` package and the Terratest `aws` module share a name, so the latter is aliased here as `terraws`):

```go
package test

import (
	"fmt"
	"testing"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	terraws "github.com/gruntwork-io/terratest/modules/aws"
)

func TestListBucketObjects(t *testing.T) {
	t.Parallel()

	region := "eu-west-1"
	s3BucketName := "acme-example-stage-eu-west-1-tf-state"

	s3Client, err := terraws.NewS3ClientE(t, region)
	if err != nil {
		t.Fatal(err)
	}

	listObjectsParams := &s3.ListObjectsV2Input{
		Bucket: aws.String(s3BucketName),
	}
	for {
		bucketObjects, err := s3Client.ListObjectsV2(listObjectsParams)
		if err != nil {
			t.Fatal(err)
		}
		fmt.Printf("%v\n", bucketObjects)
		// Keep requesting pages until the listing is no longer truncated.
		if !*bucketObjects.IsTruncated {
			break
		}
		listObjectsParams.ContinuationToken = bucketObjects.NextContinuationToken
	}
}
```

Output:

```
{
  Contents: [{
      ETag: "\"aaa\"",
      Key: "stage/_global/account-baseline/terraform.tfstate",
      LastModified: 2021-04-27 09:33:06 +0000 UTC,
      Size: 1286,
      StorageClass: "STANDARD"
    },{
      ETag: "\"bbb\"",
      Key: "stage/us-east-1/mysql/terraform.tfstate",
      LastModified: 2021-04-27 09:33:47 +0000 UTC,
      Size: 1402,
      StorageClass: "STANDARD"
    },{
      ETag: "\"ccc\"",
      Key: "stage/us-east-1/vpc/terraform.tfstate",
      LastModified: 2021-04-27 09:33:25 +0000 UTC,
      Size: 8667,
      StorageClass: "STANDARD"
    }],
  IsTruncated: false,
  KeyCount: 3,
  MaxKeys: 1000,
  Name: "acme-example-stage-eu-west-1-tf-state",
  Prefix: ""
}
```

Limit the list to a "subdirectory" with `Prefix`:

```go
...
listObjectsParams := &s3.ListObjectsV2Input{
	Bucket: aws.String(s3BucketName),
	Prefix: aws.String("stage/us-east-1"),
}
...
```

Output:

```
{
  Contents: [{
      ETag: "\"bbb\"",
      Key: "stage/us-east-1/mysql/terraform.tfstate",
      LastModified: 2021-04-27 09:33:47 +0000 UTC,
      Size: 1402,
      StorageClass: "STANDARD"
    },{
      ETag: "\"ccc\"",
      Key: "stage/us-east-1/vpc/terraform.tfstate",
      LastModified: 2021-04-27 09:33:25 +0000 UTC,
      Size: 8667,
      StorageClass: "STANDARD"
    }],
  IsTruncated: false,
  KeyCount: 2,
  MaxKeys: 1000,
  Name: "acme-example-stage-eu-west-1-tf-state",
  Prefix: "stage/us-east-1"
}
```

The individual objects can also be processed by looping over the `Contents` field:

```go
for _, item := range bucketObjects.Contents {
	fmt.Printf("path: %s\n", *item.Key)
}
```

https://docs.aws.amazon.com/code-samples/latest/catalog/gov2-s3-ListObjects-ListObjectsv2.go.html
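(As a further note on "subdirectories": setting `Delimiter: aws.String("/")` on the `ListObjectsV2Input` makes S3 group keys server-side and return the top-level folder prefixes in the response's `CommonPrefixes` field.) For the assertion itself, rather than eyeballing the printed response, the collected keys can be compared against the expected structure so the test fails with a precise message. A small sketch under that assumption — the key names are hypothetical, and `missingKeys` is a helper introduced here, not a Terratest function:

```go
package main

import (
	"fmt"
	"sort"
)

// missingKeys reports which expected object keys are absent from the actual
// bucket listing, so a test can fail with a readable diff.
func missingKeys(expected, actual []string) []string {
	have := map[string]bool{}
	for _, k := range actual {
		have[k] = true
	}
	var missing []string
	for _, k := range expected {
		if !have[k] {
			missing = append(missing, k)
		}
	}
	sort.Strings(missing)
	return missing
}

func main() {
	// Hypothetical listing result and the structure the test expects.
	actual := []string{"config/config_v1.0.0.zip", "reports/summary.csv"}
	expected := []string{"config/config_v1.0.0.zip", "reports/summary.csv", "reports/detail.csv"}
	if m := missingKeys(expected, actual); len(m) > 0 {
		fmt.Println("missing from bucket:", m) // missing from bucket: [reports/detail.csv]
	}
}
```

In a real test the `actual` slice would be filled from `bucketObjects.Contents` in the listing loop, and a non-empty result passed to `t.Fatalf`.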