Notes from the Field

Apart from the standard AWS recommendations around S3 data protection, these are some tips from hands-on engagements involving S3 backups of on-premises data center workloads.

  1. S3 resources should ideally be isolated to a specific account - possibly something like a Shared Data or Shared Services Account.
  2. Access to S3 resources should be limited to certain account admins.
  3. SCPs can optionally be used to allow-list the IP addresses that may access the buckets. Note that SCPs only constrain principals inside the organization; if a bucket's resource-based policy grants access to external principals, the SCP has no effect on that access.
  4. Consider granular permissions on individual S3 objects (object ACLs) rather than broad permissions on whole buckets. If anything must be public, grant public access only on a per-object basis.
  5. Encrypt at rest using native S3 server-side encryption (SSE-S3) or KMS-managed keys (SSE-KMS). Encrypt in transit via TLS (most backup transfer tools, such as Veritas and Commvault, support TLS).
  6. Implement Object Lock with a retention period that matches the bucket's lifecycle period (e.g., if objects transition to Glacier after 3 months, set the Object Lock retention to 3 months).
  7. Disable public access for the bucket (S3 Block Public Access) - this is an obvious one.
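To sketch point 3, an SCP that denies S3 actions from any IP outside an approved range might look like the following. It is expressed here as a Python dict for illustration; the CIDR block is a documentation placeholder, and, as noted above, this only constrains principals inside the organization.

```python
import json

# Sketch of an SCP denying S3 actions from outside an approved CIDR range.
# 203.0.113.0/24 is a documentation placeholder -- substitute your own range.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3OutsideCorpNetwork",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

One caveat: `aws:SourceIp` evaluates the caller's public IP, so requests arriving through a VPC endpoint (which carry a private address) would need a condition on `aws:VpcSourceIp` or `aws:SourceVpce` instead.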
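Point 6 can be sketched by tying the Object Lock default retention and the lifecycle transition to a single constant. The payload shapes below are those accepted by boto3's `put_object_lock_configuration` and `put_bucket_lifecycle_configuration`; the 90-day figure (roughly 3 months) is an assumed example, not a recommendation.

```python
# Assumed example: ~3 months before backups transition to Glacier.
RETENTION_DAYS = 90

# Shape accepted by boto3 s3.put_object_lock_configuration(...)
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": RETENTION_DAYS}},
}

# Shape accepted by boto3 s3.put_bucket_lifecycle_configuration(...)
lifecycle_config = {
    "Rules": [
        {
            "ID": "backups-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": RETENTION_DAYS, "StorageClass": "GLACIER"}],
        }
    ]
}

# Deriving both from one constant keeps the lock from expiring before
# (or lingering long after) the lifecycle transition.
assert (object_lock_config["Rule"]["DefaultRetention"]["Days"]
        == lifecycle_config["Rules"][0]["Transitions"][0]["Days"])
```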
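For points 5 and 7, the corresponding bucket-level settings might look like this. The payload shapes match what boto3's `put_bucket_encryption` and `put_public_access_block` accept; the KMS key alias is hypothetical.

```python
# Default encryption: SSE-KMS with a bucket key to reduce KMS request costs.
# "alias/backup-key" is a hypothetical key alias -- use your own.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/backup-key",
            },
            "BucketKeyEnabled": True,
        }
    ]
}

# Block Public Access: all four settings on for a backup bucket.
public_access_block = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}
```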

Summary and Next Steps

This is a living document - an ongoing list of best practices for S3 backups.



Need an experienced AWS/GCP/Azure Professional to help out with your Public Cloud Strategy? Set up a time with Anuj Varma.