
AWS Account Access Keys

The account access keys provide full access to the AWS resources owned by the account.
 Access key ID (a 20-character, alphanumeric string).
 Secret access key (a 40-character string).
The access key ID uniquely identifies an AWS account and can be included in authenticated
requests to Amazon S3. Sharing AWS account access keys reduces security, and creating
individual AWS accounts for each employee might not be practical. In such scenarios, use IAM
user access for employees.

IAM USER ACCESS


AWS Identity and Access Management (IAM) is used to create users under your AWS account,
each with their own access keys, and to attach IAM user policies granting them appropriate
resource access permissions.
 IAM enables a company to create groups of users and grant group-level permissions that apply
to all users in that group.
 These users are referred to as IAM users that you create and manage within AWS.
 The parent account controls a user's ability to access AWS.
 Any resources an IAM user creates are under the control of, and paid for by, the parent AWS
account.

Temporary Security Credentials


 IAM enables you to grant temporary access keys and a security token to any IAM user so that
they can access your AWS services and resources.
 These users can be managed from outside AWS. These are referred to as federated users.
Additionally, users can be applications that you create to access your AWS resources.
 An IAM user can request these temporary security credentials for their own use or hand them
out to federated users or applications.
 When requesting temporary security credentials for the users, you must provide a user name
and an IAM policy defining the permissions you want to associate with these temporary security
credentials.
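As a sketch of the last point, the parameters below are shaped like those passed to the STS GetFederationToken API (for example via boto3's `sts_client.get_federation_token(**params)`): a user name plus an inline IAM policy defining the permissions attached to the temporary credentials. The user name, bucket name, and policy actions are illustrative placeholders, not values from these notes.

```python
import json

def federation_token_params(user_name, bucket_name, duration_seconds=3600):
    # Inline IAM policy limiting the temporary credentials to read-only
    # access on a single bucket (illustrative permissions).
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket_name}",
                f"arn:aws:s3:::{bucket_name}/*",
            ],
        }],
    }
    return {
        "Name": user_name,                    # identifies the federated user
        "Policy": json.dumps(policy),         # permissions for the temp credentials
        "DurationSeconds": duration_seconds,  # lifetime of the credentials
    }

params = federation_token_params("analytics-reader", "example-bucket")
```

The caller's own long-term credentials authorize the request; the returned temporary credentials are limited to the intersection of the caller's permissions and this policy.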

Bucket Restrictions
 A bucket is owned by the AWS account that created it.
 By default, 100 buckets can be created in each AWS account. For additional buckets, submit a
service limit increase request.
 There is no limit to the number of objects that can be stored in a bucket and no difference in
performance whether you use many buckets or just a few.
 After you have created a bucket, you can't change its Region.

Empty a Bucket
 You can empty a bucket's content (that is, delete all content, but keep the bucket)
programmatically using the AWS SDK.
 You can also specify lifecycle configuration on a bucket to expire objects so that Amazon S3
can delete them.
 There are additional options, such as using the Amazon S3 console and the AWS CLI, but these
methods have limitations based on the number of objects in your bucket and the bucket's
versioning status.
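Emptying a bucket programmatically typically means listing all keys and deleting them in batches, since the Multi-Object Delete API accepts at most 1,000 keys per request. The helper below is a minimal pure-Python sketch of that batching; each yielded dict is shaped like the `Delete` parameter of boto3's `s3.delete_objects` call. The key names are made up for illustration.

```python
def delete_batches(keys, batch_size=1000):
    """Split object keys into batches of at most 1,000 keys each,
    the per-request limit of the S3 Multi-Object Delete API."""
    for i in range(0, len(keys), batch_size):
        # Each batch matches the Delete parameter of DeleteObjects
        # (boto3: s3.delete_objects(Bucket=..., Delete=batch)).
        yield {
            "Objects": [{"Key": k} for k in keys[i:i + batch_size]],
            "Quiet": True,  # suppress per-key success entries in the response
        }

batches = list(delete_batches([f"logs/{n}.txt" for n in range(2500)]))
```

For a versioned bucket, each entry would also need a `VersionId`, since a plain delete only inserts delete markers.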
S3 Default Encryption
 Amazon S3 default encryption provides a way to set the default encryption behavior for an
S3 bucket.
 The objects are encrypted using server-side encryption with either Amazon S3-managed
keys (SSE-S3) or AWS KMS-managed keys (SSE-KMS).
 When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk
in its data centers and decrypts it when you download the object.
 There are no new charges for using default encryption for S3 buckets.
 Requests to configure the default encryption feature incur standard Amazon S3 request
charges.
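The choice between SSE-S3 and SSE-KMS comes down to one field in the bucket's encryption configuration. The sketch below builds that configuration in the shape expected by the PutBucketEncryption API (boto3: `s3.put_bucket_encryption`); the KMS key ID is a caller-supplied placeholder.

```python
def default_encryption_config(kms_key_id=None):
    """Build a bucket default-encryption configuration.
    With no KMS key, use Amazon S3-managed keys (SSE-S3);
    with a key, use AWS KMS-managed keys (SSE-KMS)."""
    if kms_key_id:
        rule = {"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",       # SSE-KMS
            "KMSMasterKeyID": kms_key_id,
        }}
    else:
        rule = {"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "AES256",        # SSE-S3
        }}
    return {"Rules": [rule]}
```

Once applied, objects uploaded without an explicit encryption header are encrypted with the bucket's default.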

Object
 Key – name of the object
 Version ID - string that Amazon S3 generates when you add an object to a bucket
 Value - The content that you are storing. An object value can be any sequence of bytes
(0 bytes to 5 TB).
 Metadata – A set of name-value pairs with which you can store information regarding the
object.
 Subresources – Amazon S3 uses the subresource mechanism to store object-specific
additional information.
o acl
o torrent

Storage Classes for Frequently Accessed Objects
STANDARD—The default storage class if you don't specify the storage class when you upload
an object
REDUCED_REDUNDANCY – Designed for noncritical, reproducible data that can be stored
with less redundancy than the STANDARD storage class.

Storage Class That Automatically Optimizes Frequently and Infrequently Accessed Objects
 INTELLIGENT_TIERING storage class - Optimize storage costs by automatically moving
data to the most cost-effective storage access tier, without performance impact or
operational overhead.
 The INTELLIGENT_TIERING storage class stores objects in two access tiers:
o one tier that is optimized for frequent access
o another, lower-cost tier that is optimized for infrequently accessed data.

 For a small monthly monitoring and automation fee per object, Amazon S3 monitors
access patterns of the objects in the INTELLIGENT_TIERING storage class and moves
objects that have not been accessed for 30 consecutive days to the infrequent access
tier.

 There are no retrieval fees when using the INTELLIGENT_TIERING storage class.

 If an object in the infrequent access tier is accessed, it is automatically moved back to the
frequent access tier.
 No additional tiering fees apply when objects are moved between access tiers within the
INTELLIGENT_TIERING storage class.
Storage Classes for Infrequently Accessed Objects
The STANDARD_IA and ONEZONE_IA storage classes are designed for long-lived and
infrequently accessed data.

STANDARD_IA and ONEZONE_IA objects are available for millisecond access (similar to the
STANDARD storage class).
Amazon S3 charges a retrieval fee for these objects, so they are most suitable for infrequently
accessed data.

Use Case:
Storing backups
Older, infrequently accessed data that still needs millisecond access

The STANDARD_IA and ONEZONE_IA storage classes are suitable for objects larger than 128
KB that you plan to store for at least 30 days.

STANDARD_IA—Amazon S3 stores the object data redundantly across multiple geographically
separated Availability Zones. This storage class offers greater availability and resiliency than the
ONEZONE_IA class.

ONEZONE_IA—Amazon S3 stores the object data in only one Availability Zone, which makes it
less expensive than STANDARD_IA. However, the data is not resilient to the physical loss of
the Availability Zone resulting from disasters, such as earthquakes and floods.

GLACIER Storage Class


The GLACIER storage class is suitable for archiving data where data access is infrequent. This
storage class offers the same durability and resiliency as the STANDARD storage class.

When you choose the GLACIER storage class, Amazon S3 uses the low-cost Amazon S3
Glacier service to store the objects. Although the objects are stored in Amazon S3 Glacier,
they remain Amazon S3 objects that you manage in Amazon S3, and you cannot access them
directly through Amazon S3 Glacier.

Object Versioning
Use versioning to keep multiple versions of an object in one bucket. For example, you could
store my-image.jpg (version 111111) and my-image.jpg (version 222222) in a single bucket.
Versioning protects you from the consequences of unintended overwrites and deletions.
You can also use versioning to archive objects so you have access to previous versions.

When you DELETE an object, all versions remain in the bucket and Amazon S3 inserts a delete
marker. The delete marker becomes the current version of the object.
By default, GET requests retrieve the most recently stored version.
Performing a simple GET Object request when the current version is a delete marker returns
a 404 Not Found error
You can, however, GET a noncurrent version of an object by specifying its version ID.
You can permanently delete an object by specifying the version you want to delete.
Only the owner of an Amazon S3 bucket can permanently delete a version.
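The delete-marker behavior described above can be modeled with a toy in-memory class. This is purely illustrative (not an AWS API): a simple DELETE appends a marker rather than removing data, a plain GET 404s when the current version is a marker, and a GET with a version ID still reaches older versions.

```python
class VersionedBucket:
    """Toy model of S3 versioning semantics -- illustrative only."""

    def __init__(self):
        self.versions = {}  # key -> list of (version_id, value), newest last
        self.counter = 0

    def put(self, key, value):
        self.counter += 1
        version_id = str(self.counter)
        self.versions.setdefault(key, []).append((version_id, value))
        return version_id

    def delete(self, key):
        # A simple DELETE does not remove any data; it inserts a delete
        # marker (value None) that becomes the current version.
        return self.put(key, None)

    def get(self, key, version_id=None):
        history = self.versions.get(key, [])
        if version_id is None:
            # Simple GET: return the current version, or None (404) if
            # the current version is a delete marker.
            if not history or history[-1][1] is None:
                return None
            return history[-1][1]
        for vid, value in history:
            if vid == version_id:
                return value
        return None
```

For example, after two PUTs of `my-image.jpg` and a DELETE, a plain `get` returns nothing, but `get` with the first version ID still returns the original content.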
Object Lifecycle Management
A lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group
of objects. There are two types of actions:

Transition actions—Define when objects transition to another storage class.


Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your
behalf.
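A single lifecycle rule can combine both action types. The dict below is shaped like the `LifecycleConfiguration` parameter of the PutBucketLifecycleConfiguration API (boto3: `s3.put_bucket_lifecycle_configuration`); the prefix, day counts, and rule ID are illustrative choices, not prescribed values.

```python
# Sketch: one rule with transition actions (move to cheaper storage
# classes over time) and an expiration action (delete after a year).
lifecycle_config = {
    "Rules": [{
        "ID": "archive-then-expire",          # illustrative rule name
        "Filter": {"Prefix": "logs/"},        # applies to this key prefix
        "Status": "Enabled",
        # Transition actions: when objects move to another storage class.
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 90, "StorageClass": "GLACIER"},
        ],
        # Expiration action: Amazon S3 deletes the objects on your behalf.
        "Expiration": {"Days": 365},
    }]
}
```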

Operations on Objects
Uploading objects—You can upload objects of up to 5 GB in size in a single operation. For
objects greater than 5 GB you must use the multipart upload API.

Copying objects—Creates a copy of an object that is already stored in Amazon S3.

Upload objects in a single operation—With a single PUT operation, you can upload objects
up to 5 GB in size.
You can use the AWS SDK to upload objects. The SDK provides wrapper libraries for you to
upload data easily. However, if your application requires it, you can use the REST API directly in
your application.

Upload objects in parts—Using the multipart upload API, you can upload large objects, up to 5
TB.

 Delete a single object—Amazon S3 provides the DELETE API that you can use to
delete one object in a single HTTP request.
 Delete multiple objects—Amazon S3 also provides the Multi-Object Delete API that
you can use to delete up to 1000 objects in a single HTTP request.

Amazon S3 Analytics – Storage Class Analysis

By using Amazon S3 analytics storage class analysis, you can analyze storage access patterns
to help you decide when to transition the right data to the right storage class.

This new Amazon S3 analytics feature observes data access patterns to help you determine
when to transition less frequently accessed STANDARD storage to the STANDARD_IA (IA, for
infrequent access) storage class.
Amazon S3 Inventory
Amazon S3 inventory provides comma-separated values (CSV), Apache Optimized Row
Columnar (ORC), or Apache Parquet output files that list your objects and their
corresponding metadata on a daily or weekly basis for an S3 bucket or a shared prefix (that is,
objects whose names begin with a common string).

You can configure multiple inventory lists for a bucket. You can configure what object metadata
to include in the inventory, whether to list all object versions or only current versions, where to
store the inventory list file output, and whether to generate the inventory on a daily or weekly
basis. You can also specify that the inventory list file be encrypted.
You can query Amazon S3 inventory using standard SQL by using Amazon Athena, Amazon
Redshift Spectrum, and other tools such as Presto, Apache Hive, and Apache Spark. It's easy
to use Athena to run queries on your inventory files. You can use Athena for Amazon S3
inventory queries in all Regions where Athena is available.

Amazon S3 Bucket and Object Ownership

Buckets and objects are Amazon S3 resources. By default, only the resource owner can access
these resources. The resource owner refers to the AWS account that creates the resource. For
example:
 The AWS account that you use to create buckets and upload objects owns those
resources.

 If you upload an object using AWS Identity and Access Management (IAM) user or role
credentials, the AWS account that the user or role belongs to owns the object.

 A bucket owner can grant cross-account permissions to another AWS account (or users
in another account) to upload objects. In this case, the AWS account that uploads
objects owns those objects. The bucket owner does not have permissions on the objects
that other accounts own, with the following exceptions:
o The bucket owner pays the bills. The bucket owner can deny access to any
objects, or delete any objects in the bucket, regardless of who owns them.
o The bucket owner can archive any objects or restore archived objects regardless
of who owns them. Archival refers to the storage class used to store the objects.

Protecting Data in Amazon S3

Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and
primary data storage. Objects are redundantly stored on multiple devices across multiple
facilities in an Amazon S3 region. To help better ensure data durability, Amazon
S3 PUT and PUT Object copy operations synchronously store your data across multiple
facilities before returning SUCCESS. Once the objects are stored, Amazon S3 maintains their
durability by quickly detecting and repairing any lost redundancy.

Amazon S3 also regularly verifies the integrity of data stored using checksums. If Amazon S3
detects data corruption, it is repaired using redundant data. In addition, Amazon S3 calculates
checksums on all network traffic to detect corruption of data packets when storing or retrieving
data.
Amazon S3 further protects your data using versioning. You can use versioning to preserve,
retrieve, and restore every version of every object stored in your Amazon S3 bucket. With
versioning, you can easily recover from both unintended user actions and application failures.
By default, requests retrieve the most recently written version. You can retrieve older versions of
an object by specifying a version of the object in a request.
Introduction to Amazon S3 Object Lock

Amazon S3 Object Lock enables you to store objects using a "Write Once Read Many"
(WORM) model. Using S3 Object Lock, you can prevent an object from being deleted or
overwritten for a fixed amount of time or indefinitely. S3 Object Lock enables you to meet
regulatory requirements that require WORM storage or simply to add an additional layer of
protection against object changes and deletion.

S3 Object Lock provides two ways to manage object retention: retention periods and legal
holds. A retention period specifies a fixed period of time during which an object remains locked.
During this period, your object will be WORM-protected and can't be overwritten or deleted.

A legal hold provides the same protection as a retention period, but has no expiration date.
Instead, a legal hold remains in place until you explicitly remove it. Legal holds are independent
from retention periods: an object version can have both a retention period and a legal hold, one
but not the other, or neither.

Retention Modes

Amazon S3 Object Lock provides two retention modes: Governance and Compliance.
These retention modes apply different levels of protection to your objects. You can
apply either retention mode to any object version that is protected by S3 Object Lock.

In Governance mode, users can't overwrite or delete an object version or alter its lock
settings unless they have special permissions. Governance mode enables you to
protect objects against deletion by most users while still allowing you to grant some
users permission to alter the retention settings or delete the object if necessary.

In Compliance mode, a protected object version can't be overwritten or deleted by any
user, including the root user in your AWS account. Once an object is locked in
Compliance mode, its retention mode can't be changed and its retention period can't be
shortened. Compliance mode ensures that an object version can't be overwritten or
deleted for the duration of the retention period.

Retention Periods

A retention period protects an object version for a fixed amount of time. When you place
a retention period on an object version, Amazon S3 stores a timestamp in the object
version's metadata to indicate when the retention period expires. After the retention
period expires, the object version can be overwritten or deleted unless you also placed
a legal hold on the object version.
Legal Holds

S3 Object Lock also enables you to place a legal hold on an object version. Like a
retention period, a legal hold prevents an object version from being overwritten or
deleted. However, a legal hold doesn't have an associated retention period and remains
in effect until removed. Legal holds can be freely placed and removed by any user with
the s3:PutObjectLegalHold permission.
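The two mechanisms map to two different API payloads. The sketch below builds parameters shaped like those for the PutObjectRetention and PutObjectLegalHold APIs (boto3: `s3.put_object_retention` / `s3.put_object_legal_hold`); the bucket, key, and day count are illustrative placeholders.

```python
from datetime import datetime, timedelta, timezone

def retention_params(bucket, key, mode, days):
    """Parameters for placing a retention period on an object version.
    Mode is GOVERNANCE or COMPLIANCE; S3 stores the resulting timestamp
    in the object version's metadata."""
    assert mode in ("GOVERNANCE", "COMPLIANCE")
    return {
        "Bucket": bucket,
        "Key": key,
        "Retention": {
            "Mode": mode,
            "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=days),
        },
    }

# A legal hold, by contrast, carries no expiration date -- only a status
# that stays ON until explicitly removed.
legal_hold_params = {
    "Bucket": "example-bucket",
    "Key": "report.pdf",
    "LegalHold": {"Status": "ON"},
}
```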
