Amazon S3 Security and Access Control Mastery
Master S3 security with secure-by-default settings, data perimeters, access control patterns, and advanced troubleshooting techniques from AWS re:Invent 2024.
📋 Prerequisites
- AWS account with S3 and IAM permissions
- AWS CLI installed and configured
- Understanding of AWS IAM concepts (Users, Groups, Roles, Policies)
- Basic knowledge of JSON and S3 bucket operations
- Read: AWS IAM Policy Mastery
🎯 What You'll Learn
- S3 secure-by-default settings and how to audit legacy buckets
- Building robust data perimeters with explicit deny policies
- Implementing Resource Control Policies (RCPs) at the organization level
- Four scalable patterns for S3 access control including policies, buckets, access points, and grants
- Deep dive into S3 Access Grants and AWS Lake Formation
- Advanced troubleshooting with new deny-reason strings and Access Analyzer
- SSE-KMS encryption and dual-authorization requirements
S3 Security Architecture Overview
Amazon S3 has evolved dramatically in recent years with security-first defaults and sophisticated access control mechanisms. This guide covers the latest best practices from AWS re:Invent 2024, including the new Resource Control Policies, enhanced troubleshooting capabilities, and scalable access patterns for enterprise environments.
🔒 Secure by Default
Since early 2023, all new S3 buckets ship with encryption enabled, public access blocked, and ACLs disabled.
🛡️ Data Perimeter
Explicit deny policies that block unauthorized network access and enforce organizational boundaries.
📊 S3 Access Grants
Directory-based access control that maps Identity Center groups to S3 prefixes with user-level audit trails.
S3 Secure-by-Default Settings
Since early 2023, AWS has made S3 secure by default. Every new bucket automatically ships with three critical security features enabled. However, legacy buckets created before this change require manual configuration.
🔧 Default Security Features
Check Bucket Security Settings
# Check if your bucket has secure defaults enabled
aws s3api get-bucket-encryption --bucket YOUR-BUCKET-NAME
aws s3api get-public-access-block --bucket YOUR-BUCKET-NAME
aws s3api get-bucket-acl --bucket YOUR-BUCKET-NAME
🔧 1. SSE-S3 Encryption ON
All objects are automatically encrypted at rest using AES-256 with AWS-managed keys.
Enable Default Encryption
{
"Rules": [
{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
},
"BucketKeyEnabled": true
}
]
}
🔧 2. Block Public Access ON
Prevents buckets and objects from being accidentally made public through ACLs or bucket policies.
Enable Public Access Block
aws s3api put-public-access-block \
--bucket YOUR-BUCKET-NAME \
--public-access-block-configuration \
"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
🔧 3. ACLs Disabled
Forces all access control through IAM policies and bucket policies, eliminating ACL complexity.
Disable Bucket ACLs
aws s3api put-bucket-ownership-controls \
--bucket YOUR-BUCKET-NAME \
--ownership-controls Rules='[{ObjectOwnership=BucketOwnerEnforced}]'
SSE-KMS Dual Authorization
When using SSE-KMS encryption, every S3 operation requires authorization from both S3 and KMS services. Understanding this dual-authorization model is critical for troubleshooting access issues.
SSE-KMS Required Permissions
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "S3DataPlaneAccess",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::EXAMPLE-BUCKET/*"
},
{
"Sid": "KMSKeyAccess",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": "arn:aws:kms:us-east-1:123456789012:key/01234567-89ab-cdef-0123-456789abcdef"
}
]
}
Authentication Flow
S3 Authorization Check
S3 verifies the principal has permission for the requested action (GetObject, PutObject, etc.)
KMS Authorization Check
KMS verifies the principal can decrypt/encrypt with the specified customer-managed key
Operation Success
Only if both checks pass, the operation completes successfully
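The flow above can be condensed into a toy Python model (nothing here is an AWS API; roles and permission sets are illustrative): the request succeeds only when both the S3 check and the KMS check allow it.

```python
# Toy model of SSE-KMS dual authorization: an object operation succeeds only
# if BOTH the S3 check and the KMS check pass. All names are illustrative.
S3_PERMS = {"DataScienceRole": {"s3:GetObject", "s3:PutObject"}}
KMS_PERMS = {"DataScienceRole": {"kms:Decrypt"}}  # kms:GenerateDataKey missing

def authorize(role, s3_action, kms_action):
    """Return (allowed, reason) for an SSE-KMS object operation."""
    if s3_action not in S3_PERMS.get(role, set()):
        return False, f"S3 implicit deny: {s3_action}"
    if kms_action not in KMS_PERMS.get(role, set()):
        return False, f"KMS implicit deny: {kms_action}"
    return True, "allowed"

# GetObject pairs with kms:Decrypt; PutObject pairs with kms:GenerateDataKey
print(authorize("DataScienceRole", "s3:GetObject", "kms:Decrypt"))
print(authorize("DataScienceRole", "s3:PutObject", "kms:GenerateDataKey"))
```

Note how the role above can read but not write: its S3 permissions allow `s3:PutObject`, but the missing `kms:GenerateDataKey` fails the second check — exactly the pitfall covered later in this guide.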
Building a Data Perimeter
A data perimeter uses explicit deny statements to ensure data can only be accessed from trusted networks and by authorized principals within your organization. This implements a zero-trust model for your S3 data.
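As a rough illustration of how the deny statements in the policy that follows combine, here is a toy sketch (not real IAM evaluation — it ignores the IP-address condition and other nuances, and the org/VPC IDs simply mirror the example policy):

```python
# Simplified data perimeter evaluation: deny if the principal is outside the
# organization, or if the request arrives from an untrusted network and is
# not made via an AWS service. IDs are the placeholders from the policy.
TRUSTED_ORG = "o-12345678"
TRUSTED_VPCS = {"vpc-11112222", "vpc-33334444"}

def perimeter_denial(principal_org, source_vpc=None, via_aws_service=False):
    """Return the Sid of the deny statement that would fire, or None."""
    if principal_org != TRUSTED_ORG:
        return "DenyOutsideOrganization"
    if source_vpc not in TRUSTED_VPCS and not via_aws_service:
        return "DenyUntrustedNetworks"
    return None

print(perimeter_denial("o-99999999"))                        # DenyOutsideOrganization
print(perimeter_denial("o-12345678", "vpc-11112222"))        # None
print(perimeter_denial("o-12345678", via_aws_service=True))  # None
```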
🔧 Organization and Network Perimeter
data-perimeter-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyOutsideOrganization",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::EXAMPLE-BUCKET",
"arn:aws:s3:::EXAMPLE-BUCKET/*"
],
"Condition": {
"StringNotEquals": {
"aws:PrincipalOrgID": "o-12345678"
},
"BoolIfExists": {
"aws:PrincipalIsAWSService": "false"
}
}
},
{
"Sid": "DenyUntrustedNetworks",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::EXAMPLE-BUCKET",
"arn:aws:s3:::EXAMPLE-BUCKET/*"
],
"Condition": {
"StringNotEqualsIfExists": {
"aws:SourceVpc": ["vpc-11112222", "vpc-33334444"]
},
"NotIpAddress": {
"aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]
},
"BoolIfExists": {
"aws:ViaAWSService": "false"
}
}
}
]
}
🔧 Resource Control Policies (RCPs)
Instead of copying perimeter policies to thousands of buckets, use the new Resource Control Policy feature to enforce them organization-wide from a single location.
Create Organization-wide RCP
# Create RCP at organization root
aws organizations create-policy \
--name "S3DataPerimeterPolicy" \
--description "Enforce data perimeter across all S3 buckets" \
--type RESOURCE_CONTROL_POLICY \
--content file://data-perimeter-policy.json
# Attach to organization root or specific OUs
aws organizations attach-policy \
--policy-id p-12345678 \
--target-id r-abcd \
--target-type ROOT
Four Scalable Access Control Patterns
Choose the right access control pattern based on your scale requirements and organizational structure. Each pattern has specific use cases, limitations, and implementation considerations.
🔧 1. Plain Bucket Policy
When to use: Few datasets, simple role layout, traditional IAM-centric approach.
Hard limit: the 20 KB bucket policy size cap constrains you to tens of prefixes (≈30 prefixes max).
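Because 20 KB is a hard limit, it's worth measuring a policy's serialized size before deploying it. A small sketch (the per-prefix statement template is illustrative, and real policy-size accounting by AWS may differ slightly from compact JSON serialization):

```python
import json

BUCKET_POLICY_LIMIT = 20 * 1024  # 20 KB hard limit for bucket policies

def policy_fits(policy: dict) -> tuple:
    """Return (size in bytes when compactly serialized, fits-under-limit)."""
    size = len(json.dumps(policy, separators=(",", ":")).encode("utf-8"))
    return size, size <= BUCKET_POLICY_LIMIT

def per_prefix_statement(role: str, prefix: str) -> dict:
    """Illustrative allow-statement granting one role read on one prefix."""
    return {
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::123456789012:role/{role}"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::data-lake/{prefix}/*",
    }

policy = {
    "Version": "2012-10-17",
    "Statement": [per_prefix_statement(f"Team{i}Role", f"team{i}") for i in range(30)],
}
size, ok = policy_fits(policy)
print(size, ok)
```

Real-world statements carry multiple principals, actions, and conditions per prefix, which is why the practical ceiling lands around tens of prefixes rather than hundreds.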
simple-bucket-policy.json
{
"Statement": [
{
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::123456789012:role/DataScienceRole"},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::data-lake/science/*"
},
{
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::123456789012:role/AnalyticsRole"},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::data-lake/analytics/*"
}
]
}
🔧 2. Bucket-per-Dataset
When to use: You can physically split data and create new buckets for logical separation.
Gotchas: Requires data migration and rewiring application URLs. (Quota: up to 1M buckets per account.)
bucket-per-dataset-terraform.tf
resource "aws_s3_bucket" "dataset_buckets" {
for_each = var.datasets
bucket = "${var.org_prefix}-${each.key}-data"
}
resource "aws_iam_policy" "dataset_access" {
for_each = var.datasets
name = "${each.key}DataAccess"
policy = jsonencode({
Statement = [{
Effect = "Allow"
Action = ["s3:GetObject", "s3:ListBucket"]
Resource = [
aws_s3_bucket.dataset_buckets[each.key].arn,
"${aws_s3_bucket.dataset_buckets[each.key].arn}/*"
]
}]
})
}
🔧 3. S3 Access Points
When to use: Need per-prefix policies without migrating data. Each team gets their own policy.
Gotchas: Applications must switch to Access Point endpoints instead of bucket endpoints. (Quota: 10K Access Points per account per Region.)
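Switching applications over means addressing data through the Access Point ARN rather than the bucket name; SDKs such as boto3 accept this ARN wherever a bucket name is expected. A tiny helper (all values illustrative):

```python
# Build an S3 Access Point ARN. SDKs accept this ARN in place of a bucket
# name, e.g. s3.get_object(Bucket=arn, Key="analytics/report.csv").
def access_point_arn(region: str, account_id: str, name: str) -> str:
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

arn = access_point_arn("us-east-1", "123456789012", "analytics-team-ap")
print(arn)  # arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-team-ap
```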
access-point-configuration.json
# Create Access Point
aws s3control create-access-point \
--account-id 123456789012 \
--name analytics-team-ap \
--bucket data-lake
# Attach the Access Point policy
aws s3control put-access-point-policy \
--account-id 123456789012 \
--name analytics-team-ap \
--policy file://analytics-team-policy.json
# Access Point Policy
{
"Statement": [{
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::123456789012:role/AnalyticsRole"},
"Action": ["s3:GetObject", "s3:ListBucket"],
"Resource": [
"arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-team-ap",
"arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-team-ap/object/analytics/*"
]
}]
}
🔧 4. S3 Access Grants
When to use: Large-scale data lakes with thousands of users requiring user-level audit trails.
Best for: Directory groups mapping to datasets, human-level permissions with Identity Center integration. (Quota: up to 100K grants per Access Grants instance.)
s3-access-grants-setup.sh
# Create Access Grants instance
aws s3control create-access-grants-instance \
--account-id 123456789012 \
--identity-center-arn arn:aws:sso:::instance/ssoins-1234567890abcdef
# Create grant for a directory group (GranteeIdentifier is the group's ID in Identity Center)
aws s3control create-access-grant \
--account-id 123456789012 \
--access-grants-location-id default \
--grantee GranteeType=DIRECTORY_GROUP,GranteeIdentifier=DataScientists \
--permission READ \
--s3-prefix-type Object \
--access-grants-location-configuration S3SubPrefix=science/*
S3 Access Grants and Lake Formation
For enterprise-scale data lakes, S3 Access Grants and AWS Lake Formation provide user-level access control with comprehensive audit trails. These services bridge the gap between directory services and data access.
S3 Access Grants Architecture
Identity Center Groups
Users organized in directory groups (Active Directory, Identity Center)
Access Grants
Map groups to S3 prefixes with specific permissions (READ, WRITE, READWRITE)
Credential Vending
SDK swaps user session for short-lived IAM credentials
Audit Trail
CloudTrail shows actual user, not application role
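The four steps above can be mocked in a few lines (a toy sketch only — the real credential vending call is `GetDataAccess` on the `s3control` API, and every data structure below is illustrative):

```python
# Mock of the Access Grants flow: a grant maps a directory group to an S3
# prefix; a matching request is answered with short-lived, prefix-scoped
# credentials. Everything here is illustrative, not an AWS API.
GRANTS = [
    {"grantee": "DataScientists", "target": "s3://data-lake/science/*", "permission": "READ"},
]

def get_data_access(user_groups, s3_uri, permission):
    """Return mock scoped credentials if some grant covers the request."""
    for grant in GRANTS:
        base = grant["target"].rstrip("*")
        if (grant["grantee"] in user_groups
                and s3_uri.startswith(base)
                and permission == grant["permission"]):
            return {"MatchedGrantTarget": grant["target"],
                    "Credentials": "<short-lived, prefix-scoped>"}
    raise PermissionError(f"no grant covers {permission} on {s3_uri}")

print(get_data_access({"DataScientists"}, "s3://data-lake/science/q1.csv", "READ"))
```

The key property: authorization is resolved per user and per prefix at request time, so CloudTrail records the actual requesting user rather than a shared application role.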
🔧 Lake Formation for Structured Data
AWS Lake Formation extends the Access Grants concept to structured data in the Glue Data Catalog, enabling table-level, column-level, and row-level security.
lake-formation-permissions.py
import boto3
lf_client = boto3.client('lakeformation')
# Grant column-restricted SELECT on a table (use TableWithColumns, not Table,
# for column-level security; ColumnNames and ColumnWildcard are mutually exclusive)
response = lf_client.grant_permissions(
Principal={'DataLakePrincipalIdentifier': 'arn:aws:iam::123456789012:role/AnalystRole'},
Resource={
'TableWithColumns': {
'DatabaseName': 'sales_data',
'Name': 'customer_transactions',
'ColumnWildcard': {'ExcludedColumnNames': ['ssn', 'credit_card']}  # hide sensitive columns
}
},
Permissions=['SELECT'],
PermissionsWithGrantOption=[]
)
# Row-level security with data filters
response = lf_client.create_data_cells_filter(
TableData={
'TableCatalogId': '123456789012',  # account that owns the Glue Data Catalog
'DatabaseName': 'sales_data',
'TableName': 'customer_transactions',
'Name': 'region_filter',
'RowFilter': {
'FilterExpression': "region = 'us-west-2'"  # only show rows from this region
}
}
)
Advanced Troubleshooting
AWS has significantly improved S3 troubleshooting with enhanced denial reason strings and the IAM Access Analyzer for S3. These tools eliminate the "403 mystery" that plagued S3 debugging.
🔧 Enhanced Denial Reasons
Access-denied errors now include specific information about which policy type caused the denial and whether it was explicit or implicit.
CloudTrail Error Event
{
"errorCode": "AccessDenied",
"errorMessage": "Access Denied",
"additionalEventData": {
"AuthorizationFailureReason": "EXPLICIT_DENY",
"PolicyType": "BUCKET_POLICY",
"PolicyArn": "arn:aws:s3:::my-bucket",
"DenyingStatement": "DataPerimeterDeny"
}
}
Resolution: Check the bucket policy for an explicit deny statement with Sid "DataPerimeterDeny" that's blocking the request.
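These fields are easy to pull out of a CloudTrail record programmatically; a small sketch (field names follow the example event above):

```python
import json

def summarize_denial(event: dict) -> str:
    """Condense a CloudTrail access-denied record into one line."""
    extra = event.get("additionalEventData", {})
    return (f"{extra.get('AuthorizationFailureReason', 'UNKNOWN')} "
            f"by {extra.get('PolicyType', 'UNKNOWN')}"
            f" (statement: {extra.get('DenyingStatement', 'n/a')})")

record = json.loads('''{
  "errorCode": "AccessDenied",
  "additionalEventData": {
    "AuthorizationFailureReason": "EXPLICIT_DENY",
    "PolicyType": "BUCKET_POLICY",
    "DenyingStatement": "DataPerimeterDeny"
  }
}''')
print(summarize_denial(record))  # EXPLICIT_DENY by BUCKET_POLICY (statement: DataPerimeterDeny)
```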
🔧 KMS Denial Example
KMS Access Denied
{
"errorCode": "AccessDenied",
"errorMessage": "The ciphertext refers to a customer master key that does not exist",
"additionalEventData": {
"AuthorizationFailureReason": "EXPLICIT_DENY",
"PolicyType": "KMS_KEY_POLICY",
"KeyId": "arn:aws:kms:us-east-1:123456789012:key/01234567-89ab-cdef-0123-456789abcdef"
}
}
Resolution: The KMS key policy doesn't grant the required permissions. Add kms:Decrypt to the key policy or use the kms:ViaService condition.
🔧 IAM Access Analyzer for S3
The new S3-specific dashboard in IAM Access Analyzer provides a region-wide view of all buckets that are public or shared outside your organization.
Enable Access Analyzer for S3
# Create Access Analyzer for your organization
aws accessanalyzer create-analyzer \
--analyzer-name S3SecurityAnalyzer \
--type ORGANIZATION \
--tags Purpose=S3SecurityAudit
# Generate findings for S3 resources
aws accessanalyzer start-resource-scan \
--analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/S3SecurityAnalyzer \
--resource-arn arn:aws:s3:::my-bucket
# Review findings
aws accessanalyzer list-findings \
--analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/S3SecurityAnalyzer \
--filter '{"resourceType": {"eq": ["AWS::S3::Bucket"]}}'
S3 Security Best Practices
Start with Secure Defaults
Ensure all buckets have Block Public Access enabled, encryption at rest configured, and ACLs disabled. Audit legacy buckets and upgrade them to secure defaults.
Implement Data Perimeters
Use explicit deny policies to prevent access from untrusted networks and outside your organization. Deploy these via Resource Control Policies for centralized management.
Choose the Right Scale Pattern
Use bucket policies for simple cases, many buckets for clear data separation, Access Points for per-team policies, and Access Grants for user-level control.
Enable Comprehensive Logging
Turn on CloudTrail data events for authoritative object-level auditing. Server Access Logs are still useful but CloudTrail provides richer context and integration.
Understand Dual Authorization
For SSE-KMS, remember that both S3 and KMS must authorize each request. Design your policies and key policies accordingly.
Use Access Analyzer
Regularly review the IAM Access Analyzer for S3 dashboard to identify unintended external access and validate policies against security best practices.
Common Pitfalls and Solutions
🔧 Mixing SSE-KMS Keys
Problem: Objects encrypted with the aws/s3 managed key, but the bucket default changed to a customer managed KMS key.
Solution: Use S3 Batch Operations with copy-in-place to re-encrypt existing objects with the new key.
Re-encrypt with new key
aws s3control create-job \
--account-id 123456789012 \
--operation '{"S3PutObjectCopy": {"TargetResource": "arn:aws:s3:::EXAMPLE-BUCKET"}}' \
--manifest '{"Spec": {"Format": "S3BatchOperations_CSV_20180820", "Fields": ["Bucket", "Key"]}, "Location": {"ObjectArn": "arn:aws:s3:::manifest-bucket/manifest.csv", "ETag": "MANIFEST-ETAG"}}' \
--report '{"Enabled": false}' \
--priority 10 \
--role-arn arn:aws:iam::123456789012:role/S3BatchOperationsRole
🔧 Incomplete SSE-KMS Permissions
Problem: Granted s3:PutObject but forgot kms:GenerateDataKey, causing writes to fail.
Solution: Always pair S3 write permissions with corresponding KMS permissions.
Complete SSE-KMS Write Policy
{
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:PutObject", "s3:PutObjectAcl"],
"Resource": "arn:aws:s3:::bucket/*"
},
{
"Effect": "Allow",
"Action": ["kms:GenerateDataKey", "kms:GenerateDataKeyWithoutPlaintext"],
"Resource": "arn:aws:kms:region:account:key/key-id"
}
]
}
🔧 Race Conditions in Bucket Policies
Problem: Multiple teams editing the same bucket policy leads to JSON bloat and conflicting changes.
Solution: Move to S3 Access Points so each team manages their own policy document independently.
🎉 Congratulations!
S3 Security Mastery Achievement
You now have comprehensive knowledge of S3 security and access control, including:
Secure Defaults
S3 secure-by-default settings and legacy bucket remediation
Data Perimeters
Building robust data perimeters with explicit deny policies
Resource Control Policies
Implementing Resource Control Policies for organization-wide governance
Access Control Patterns
Choosing the right access control pattern for your scale requirements
User-Level Access Control
Using S3 Access Grants and Lake Formation for user-level access control
Advanced Troubleshooting
Advanced troubleshooting with enhanced denial reasons and Access Analyzer
SSE-KMS Best Practices
SSE-KMS dual authorization requirements and best practices