S3-Native State Locking
State locking is an important Terraform feature that protects your state. Terraform produces “state”, a snapshot of your deployed configuration, stored in a file that tracks resource attributes, their relationships, and more. State locking ensures only one operation can modify this file at a time, preventing conflicts and potential data corruption.
Terraform uses remote[^1] backends to store the state file on a storage system. Each backend represents a specific storage system and manages its own approach to reading, writing, and locking the state file. For a backend to support state locking, it must adhere to a simple contract[^2]: it must be able to `Lock` and `Unlock` a specific state. When Terraform starts an operation, it first acquires the lock (`Lock`), and once the operation is complete, it releases the lock (`Unlock`).
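To make the contract concrete, here is a minimal in-memory sketch. This is not Terraform's actual Go interface (which exchanges richer lock metadata and errors); only the shape of the `Lock`/`Unlock` contract is illustrated, and the class name is hypothetical:

```python
class InMemoryLocker:
    """Toy stand-in for a backend honoring a Lock/Unlock contract.

    Hypothetical sketch: the real backend interface is written in Go
    and carries lock metadata; only the contract's shape is shown.
    """

    def __init__(self):
        self._holder = None  # ID of the client currently holding the lock

    def lock(self, client_id):
        # Acquire the lock only if nobody else holds it.
        if self._holder is not None:
            return False
        self._holder = client_id
        return True

    def unlock(self, client_id):
        # Only the current lock holder may release the lock.
        if self._holder != client_id:
            return False
        self._holder = None
        return True
```

Any storage system that can implement these two operations atomically can serve as a locking backend.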
Before Terraform 1.10, the Amazon S3 (`s3`) backend relied on DynamoDB for state locking. When Terraform needed to modify the state, it created a lock by writing a lock item to a DynamoDB table, including a unique identifier for the client. Once the operation was complete, the lock was released by deleting the lock item. However, relying on DynamoDB added complexity and introduced an additional dependency for users, conflicting with the Terraform community’s preference for minimal setup requirements.
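The DynamoDB scheme described above hinges on a conditional put: the lock item is only written if no item with the same lock ID already exists. Here is an in-memory sketch of that behavior (the dict stands in for the DynamoDB table; the real backend uses a `PutItem` call guarded by a condition, and the field names here are only illustrative):

```python
class DynamoLockTable:
    """In-memory stand-in for the DynamoDB lock table (illustrative only).

    The real backend writes a lock item conditionally, so the write
    fails if an item with the same lock ID already exists.
    """

    def __init__(self):
        self._items = {}

    def acquire(self, lock_id, client_info):
        # Conditional put: fails if a lock item already exists for lock_id.
        if lock_id in self._items:
            return False
        self._items[lock_id] = {"LockID": lock_id, "Info": client_info}
        return True

    def release(self, lock_id):
        # Releasing the lock deletes the lock item.
        self._items.pop(lock_id, None)
```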
Amazon S3 Conditional Writes
Amazon S3 conditional writes help preserve data integrity by preventing overwrites of existing objects. The mechanism validates whether an object with the same key already exists in the bucket before completing the write, effectively enabling conditional write operations.
To learn more about conditional writes, refer to the Amazon S3 documentation.
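The semantics can be sketched as follows: a `PutObject` request carrying `If-None-Match: *` succeeds only if no object exists under that key; otherwise S3 rejects it (with an HTTP `412 Precondition Failed` in the real service). A minimal in-memory model:

```python
class PreconditionFailed(Exception):
    """Raised when a conditional write loses the race.

    In real S3 this surfaces as an HTTP 412 Precondition Failed.
    """


class FakeBucket:
    """Minimal model of S3 conditional writes; a dict stands in for S3."""

    def __init__(self):
        self._objects = {}

    def put_object(self, key, body, if_none_match=None):
        # If-None-Match: "*" means "only write if no object exists at key".
        if if_none_match == "*" and key in self._objects:
            raise PreconditionFailed(key)
        self._objects[key] = body

    def get_object(self, key):
        return self._objects[key]
```

The first conditional write wins; every concurrent writer using the same condition loses cleanly instead of silently overwriting.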
S3-Native State Locking
The introduction of S3 conditional writes made it possible to implement a locking mechanism without relying on DynamoDB, by offloading the locking responsibility to S3 itself.
In Terraform 1.10, we introduced an experimental feature, `use_lockfile`, to the S3 backend. This feature uses S3 conditional writes to create a lock file in the S3 bucket at the beginning of most operations. The lock file, a simple JSON object, is created using the `If-None-Match` header in the `PutObject` HTTP request. This ensures the file is only written if it does not already exist; if the object exists, S3 returns an error, preventing conflicts. This pattern, often referred to as optimistic locking[^3], allows Terraform to defer lock management to the storage system. If the write succeeds, Terraform knows it holds the lock and can proceed with the operation.
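Putting this together, the acquire/release flow around an operation looks roughly like the sketch below. The bucket is a plain dict here, and the lock file's JSON fields are Terraform internals only approximated in the test; the `.tflock` suffix mirrors how the lock file sits alongside the state object.

```python
import json


class LockConflict(Exception):
    """Raised when the lock file already exists (another operation holds it)."""


class StateLock:
    """Sketch of the S3-native lock flow; a dict stands in for the bucket."""

    def __init__(self, bucket, state_key):
        self.bucket = bucket
        # The lock file lives next to the state object, with a .tflock suffix.
        self.lock_key = state_key + ".tflock"

    def acquire(self, info):
        # Conditional write: only succeeds if the lock file is absent.
        if self.lock_key in self.bucket:
            raise LockConflict(json.loads(self.bucket[self.lock_key]))
        self.bucket[self.lock_key] = json.dumps(info)

    def release(self):
        # Releasing the lock deletes the lock file.
        self.bucket.pop(self.lock_key, None)
```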
Enabling S3-native state locking is simple: set the `use_lockfile` argument to `true` in the `s3` backend configuration.
```hcl
terraform {
  backend "s3" {
    bucket       = "mybucket"
    key          = "path/to/my/key"
    region       = "us-east-1"
    use_lockfile = true
  }
}
```
Challenges
Implementing S3-native state locking came with its fair share of challenges. The biggest and most obvious was ensuring we didn’t cause a breaking change. Aside from requiring users to reinitialize their backend, the migration from DynamoDB to S3-native locking had to be straightforward. This meant making sure the new implementation would just work with existing configurations—S3 buckets, IAM policies, and more. And trust me, there are a lot of possible combinations.
The crux of addressing this challenge was scoping the feature carefully. We had to be clear about what we were changing and, just as importantly, what we weren’t. For example, state locking in the `s3` backend has always been opt-in, unlike the `gcs` or `azurerm` backends, where state locking is enabled by default using conditional writes. This feature presented a great opportunity to simplify the configuration and align the behavior of the `s3` backend with these other backends. However, we decided to keep the opt-in behavior as is; changing it would have been too big a shift for the scope of this feature.
Because we weren’t changing the default behavior and needed to ensure we didn’t break existing configurations, we had to offer a clear migration path. We did this by supporting the simultaneous use of both DynamoDB and S3-native locking. This allowed users to migrate at their own pace, giving them the ability to enable S3-native state locking in parallel with the existing DynamoDB lock. Once users were confident in the new implementation, they could remove any DynamoDB-related arguments from the backend and rely entirely on S3-native locking. This approach added complexity to the implementation, but it was necessary to offer a smooth transition. There were plenty of edge cases to account for—like what happens if the lock file is successfully written to S3, but the DynamoDB lock fails to be acquired, and so on.
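One of those edge cases can be sketched directly: acquire the S3 lock, then the DynamoDB lock, and roll back the S3 lock if the DynamoDB acquisition fails. This is an in-memory illustration of the rollback concern described above, not Terraform's exact internals (the real ordering and error handling are implementation details):

```python
class LockError(Exception):
    """Raised when either lock cannot be acquired."""


def acquire_both(s3_locks, ddb_locks, key, client_id):
    """Acquire the S3-native lock and the DynamoDB lock together (sketch).

    Both stores are plain dicts here; the rollback mirrors the edge case
    where the S3 lock file is written but the DynamoDB lock fails.
    """
    if key in s3_locks:
        raise LockError("state is locked (S3 lock file exists)")
    s3_locks[key] = client_id
    if key in ddb_locks:
        # The S3 lock file was written, but the DynamoDB lock could not
        # be acquired, so release the S3 lock rather than leaving a
        # half-acquired state behind.
        del s3_locks[key]
        raise LockError("state is locked (DynamoDB lock item exists)")
    ddb_locks[key] = client_id
```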
Another challenge was ensuring the lock file is written to S3 in the same way as the state file. Users often rely on assumptions about the settings applied during `PutObject` requests for their state file, such as specific headers and encryption options, and they may enforce those assumptions with strict S3 bucket policies. These settings may be explicitly defined or implicitly applied through defaults at different layers[^4], and any inconsistency when writing the lock file could cause unexpected behavior. For example, if the lock file doesn’t match the encryption settings of the state file, S3 access policies might prevent it from being read or written, causing Terraform to produce an error diagnostic. This, of course, would be a breaking change from the user’s perspective.
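One way to think about the requirement: whatever write settings are computed for the state object must be reused verbatim for the lock object, so the two can never drift apart. A hedged sketch, where the field names mimic S3 `PutObject` parameters but the function itself is hypothetical:

```python
def put_args(base_settings, key, body):
    """Build PutObject-style arguments from one shared settings source.

    Illustrative only: the point is that state and lock writes derive
    from the same settings dict, so a bucket policy that matches one
    write (e.g. requiring KMS encryption) also matches the other.
    """
    args = {"Key": key, "Body": body}
    args.update(base_settings)  # e.g. ServerSideEncryption, SSEKMSKeyId
    return args
```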
We addressed these challenges by carefully scoping the feature, offering a clear migration path, and ensuring consistency in how the lock file is written to S3. The result was a feature that simplified state locking for users, removing the need for DynamoDB and reducing the setup requirements for the `s3` backend.
An oversight
After the release, I kept an eye on the Terraform repository for feedback and issues, and things were relatively quiet. However, a user eventually reported a problem with S3 Object Lock-enabled buckets. After digging into it, I realized it was an oversight on my part.
In the initial iteration, we had overlooked preserving the default behavior of the `skip_checksum` flag for the lock file written to S3. By default, when this argument is not set in the backend block, the checksum algorithm defaults to SHA256. This causes the underlying aws-sdk-go-v2 serializers to automatically append the required `x-amz-sdk-checksum-algorithm` header, but this header wasn’t being set for the lock file.
And this header is required for any `PutObject` request to an S3 Object Lock-enabled bucket. Without it, requests fail with a `400 Bad Request` error:

```
InvalidRequest: Content-MD5 OR x-amz-checksum- HTTP header is required for Put Object requests with Object Lock parameters
```
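The fix, in spirit, was to apply the same checksum default to the lock file write as to the state file write. A simplified model of the header logic (the real behavior lives in the AWS SDK's request serializers; this function is a hypothetical stand-in):

```python
def request_headers(skip_checksum):
    """Model how the checksum header is (or isn't) attached to a PutObject.

    Simplified: with skip_checksum unset/False, the SDK defaults to
    SHA256 and appends x-amz-sdk-checksum-algorithm automatically.
    """
    headers = {}
    if not skip_checksum:
        headers["x-amz-sdk-checksum-algorithm"] = "SHA256"
    return headers


def object_lock_bucket_accepts(headers):
    # Object Lock-enabled buckets reject PutObject requests lacking a
    # checksum header (or Content-MD5) with a 400 Bad Request.
    return "x-amz-sdk-checksum-algorithm" in headers or "Content-MD5" in headers
```

The bug amounted to the lock file write behaving as if `skip_checksum` were set, even when the user hadn't set it.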
Thanks to a helpful user who reported this issue, we were able to address it in Terraform 1.10.1. We appreciate the community’s help in identifying and resolving issues like this.
Looking Ahead
The release of this feature brought significant positive feedback. At AWS re:Invent 2024, I spoke with many excited users just days after Terraform 1.10 was released. This feedback validated the effort, showing how simplifying state locking by removing dependencies like DynamoDB aligned with user needs.
Terraform 1.11
In 1.11, we plan to remove the “experimental” verbiage from the `use_lockfile` argument documentation. The default value will remain `false`, keeping locking as an opt-in feature. The `dynamodb_table`, `dynamodb_endpoint`, and `endpoints.dynamodb` arguments will be marked as deprecated.
Terraform 1.12
No changes to these arguments are expected in 1.12.
Later Releases
In future releases, I anticipate that the configuration may be simplified further, potentially by removing the deprecated DynamoDB-related arguments. The `use_lockfile` argument will likely remain opt-in, following the same approach as with DynamoDB locking.
Reflecting
Implementing S3-native state locking was a long-awaited feature that we were excited to deliver. It’s one of those changes where every decision and line of code changed carries weight, as any updates to this code path impact millions of users. At the time, we carefully reviewed the S3-native state locking proposal multiple times, knowing parts of the existing code in the backend were over a decade old—a testament to its stability, but also a reason to tread cautiously.
Amazon S3’s extensive features and subtle nuances introduced unique challenges. Supporting S3-compatible providers on a best-effort basis[^5] added another layer of complexity. Yet the effort proved worthwhile.
Despite the challenges, we recognized the value of this feature. Working on it reminded me how critical it is to approach changes to core features with care and respect for the ecosystem’s diversity.
Seeing it become the second most upvoted pull request merged in the Terraform repository, I knew this feature resonated with the community. The excitement and support from users validated the effort and confirmed we were on the right track. I’d like to thank jar-b, gdavidson and yakdriver for their collaboration on this feature. Working with such skilled and supportive teammates made a big difference in delivering this feature successfully.
For more details, see https://github.com/hashicorp/terraform/pull/35661
👟 Footnotes
[^1]: The default backend is not a remote backend, but a local one. Remote backends leverage a remote storage system to store the state file, enabling collaboration as well as inheriting the storage system’s features, such as state locking, versioning, and encryption.

[^2]: State locking involves more than just `Lock` and `Unlock` operations; this is a simplification for clarity.

[^3]: Optimistic locking allows concurrent access to a resource with the assumption that conflicts are rare. Changes are made freely, but before committing, the system checks if another process has modified the resource. If a conflict is found, the operation is typically rejected. In Terraform’s case, the conflict is detected by S3 returning an error when the lock file already exists.

[^4]: The `s3` backend involves several moving parts, with both explicit configurations and implicit behaviors at play. Users can define many configurable arguments, but beyond these, the aws-sdk-go-v2, which Terraform relies on to construct the S3 manager, applies additional defaults. For example, headers like `x-amz-sdk-checksum-algorithm` are appended automatically by the SDK. Combined with other `PutObject` defaults that are applied implicitly, there is a risk that the lock file may not be written to S3 in the same way as the state file, leading to unexpected behavior if a user assumes consistency (e.g., strict S3 bucket policies).

[^5]: Support for S3-compatible storage providers is offered as “best effort”. HashiCorp only tests the `s3` backend against Amazon S3, so they cannot offer any guarantees when using an alternate provider.