AWS Certified Developer – Associate Level

AWS SDK Boto Configuration

1) If you’re executing code against AWS on an EC2 instance that is assigned an IAM role, which of the following is a true statement?

The code will assume the same permissions as the EC2 role

2) An IAM role, when assigned to an EC2 instance, will allow code to be executed on that instance without API access keys.

True

Explanation
An EC2 instance can assume an IAM role and inherit that role's permissions. Any code executed on the instance that assumes the role can make any API call the role's policies allow. An application or the CLI running on the instance does not need API access keys, because temporary credentials are provided through the instance role.
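
A minimal sketch of this in Python with boto3 (bucket and role names are not from the original and are assumptions): on an instance with an attached IAM role, the client is created without any credentials in the code.

import boto3

# No access key or secret key is supplied; boto3 retrieves temporary
# credentials for the instance's IAM role from the instance metadata service.
s3 = boto3.client("s3")

# Succeeds if the instance role's policies allow s3:ListAllMyBuckets.
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"])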

3) You need to already know Python in order to take this course.

False

Explanation
The AWS Certified Developer – Associate certification focuses on developer concepts. This course uses Python to demonstrate certain concepts, but knowing Python is not a requirement for taking the certification. Familiarity with REST APIs and the core API calls for common AWS services is required. You do not have to be a developer to take the certification, although it is highly suggested.

4) If you are connecting to AWS from a computer, not an EC2 instance, you need to create an AWS user, attach permissions, and use the API access key and secret access key in your Python code.

True

Explanation
AWS is, at its core, an API: almost all services and resources are available via API calls, and you can make those calls without running your application on AWS. With an access key and secret access key, you can connect to the AWS API from any server or instance, even one that is not running on AWS.
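
A minimal sketch of connecting from outside AWS with boto3, assuming an IAM user with the needed permissions; the key values below are placeholders, not real credentials.

import boto3

# Placeholder credentials for an IAM user. In practice these usually live in
# ~/.aws/credentials or environment variables rather than in source code.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIAEXAMPLEKEY",
    aws_secret_access_key="exampleSecretKey",
    region_name="us-east-1",
)

print([b["Name"] for b in s3.list_buckets()["Buckets"]])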

###############################

1) While working with the AWS API you receive the following error message: 409 Conflict. What might be the cause of this error?
Bucket already exists

Explanation
S3 error responses use standard HTTP status codes; a 409 Conflict on bucket creation typically means the bucket name is already taken.
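
A sketch of how that 409 surfaces in boto3 (the bucket name below is a placeholder): create_bucket on a name that is already taken raises a ClientError whose response carries the HTTP status and the S3 error code.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
try:
    # Fails with 409 Conflict if another account already owns this name.
    s3.create_bucket(Bucket="a-bucket-name-that-is-already-taken")
except ClientError as err:
    status = err.response["ResponseMetadata"]["HTTPStatusCode"]  # 409
    code = err.response["Error"]["Code"]                         # e.g. BucketAlreadyExists
    print(f"HTTP {status}: {code}")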

2) Amazon S3 can use what type of server side encryption?
AES256

3) Which of the following is a valid S3 bucket name?
mybucket.com

Explanation
Bucket names cannot start with a . or - character. S3 bucket names can contain both . and - characters, but there can be only one . or - between labels. For example, mybucket-com and mybucket.com are valid names, but mybucket--com and mybucket..com are not.

4) You successfully upload an item to the US-STANDARD region. You then immediately make another API call and attempt to read the object. What will happen?
US-STANDARD has read-after-write consistency, so you will be able to retrieve the object immediately

Explanation
All regions now have read-after-write consistency for PUT operations of new objects. Read-after-write consistency allows you to retrieve objects immediately after creation in Amazon S3. Other actions still follow the eventual consistency model.
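
A sketch of the scenario with boto3 (bucket and key names are placeholders): a PUT of a new object followed immediately by a GET.

import boto3

s3 = boto3.client("s3")

# PUT a brand-new object...
s3.put_object(Bucket="my-example-bucket", Key="report.txt", Body=b"hello")

# ...and read it back in the very next call. Read-after-write consistency
# for new objects means the GET returns the object immediately.
obj = s3.get_object(Bucket="my-example-bucket", Key="report.txt")
print(obj["Body"].read())  # b'hello'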

5) One of your requirements is to setup an S3 bucket to store your files like documents and images. However, those objects should not be directly accessible via the S3 URL, they should ONLY be accessible from pages on your website so that only your paying customers can see them. How could you implement this?
You can use a bucket policy and check for the aws:Referer key in a condition, where that key matches your domain

Explanation
You could use a bucket policy like this:

{
  "Version": "2012-10-17",
  "Id": "example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by http://www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer. Remember that explicit denies override all other permissions.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    }
  ]
}
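
A sketch of attaching a policy like that with boto3; the dict below is trimmed to the Allow statement, and the bucket name is the placeholder used in the explanation above.

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromReferer",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "StringLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
        },
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))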

6) What is the maximum number of S3 buckets allowed per AWS account?
100

Explanation
AWS accounts are allowed 100 buckets per account (not region).

7) Your application is trying to upload a 6 GB file to Simple Storage Service and receives a “Your proposed upload exceeds the maximum allowed object size.” error message. What is a possible solution for this?
Use the multipart upload API for this object

Explanation
The largest object that can be uploaded in a single PUT is 5 GB, so multipart upload is required for objects larger than that.
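
A sketch of one way to do this with boto3 (file and bucket names are placeholders): the managed upload_file call switches to the multipart upload API automatically for large files.

import boto3

s3 = boto3.client("s3")

# upload_file splits large files into parts and uses the multipart upload API
# behind the scenes, so a 6 GB object uploads without hitting the single-PUT limit.
s3.upload_file("backup-6gb.bin", "my-example-bucket", "backup-6gb.bin")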

8) Which of the following request headers, when specified in an API call, will cause an object to be SSE?
x-amz-server-side-encryption

Explanation
See this link for more information: http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingRESTAPI.html
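
A sketch of setting that header through boto3 (bucket and key are placeholders): the ServerSideEncryption parameter on put_object is what sends x-amz-server-side-encryption.

import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-example-bucket",
    Key="secret.txt",
    Body=b"encrypted at rest",
    ServerSideEncryption="AES256",  # sent as x-amz-server-side-encryption: AES256 (SSE-S3)
)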

9) You decide to create a bucket on AWS S3 called ‘bestbucketever’ and then perform the following actions in the order listed here:
– You upload a file to the bucket called ‘file1’
– You enable versioning on the bucket
– You upload a file called ‘file2’
– You upload a file called ‘file3’
– You upload another file called ‘file2’
Which of the following is true for your bucket ‘bestbucketever’?
The version ID for file1 will be null; there will be 2 version IDs for file2 and 1 version ID for file3

Explanation
You can enable versioning on a bucket, even if that bucket already has objects in it. The already existing objects, though, will show their versions as null. All new objects will have version IDs.
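
A sketch of the scenario with boto3: enable versioning on the existing bucket, then list the version IDs; objects uploaded before versioning was turned on show a version ID of "null".

import boto3

s3 = boto3.client("s3")

# Enable versioning on the already-existing bucket.
s3.put_bucket_versioning(
    Bucket="bestbucketever",
    VersioningConfiguration={"Status": "Enabled"},
)

# file1 shows a VersionId of "null"; file2 has two version IDs, file3 has one.
versions = s3.list_object_versions(Bucket="bestbucketever")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"])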

10) Buckets can contain both encrypted and non-encrypted objects.
True

11) While hosting a static website with Amazon S3, your static JavaScript code attempts to include resources from another S3 bucket but permission is denied. How might you solve the problem?
Enable CORS Configuration

Explanation
A CORS configuration allows JavaScript served from one bucket's domain to request resources from another bucket's domain.
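
A sketch of a CORS rule applied with boto3 so JavaScript served from your website bucket can GET objects from the second bucket; the bucket name and allowed origin below are placeholders, not values from the question.

import boto3

s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-assets-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["http://mysite.s3-website-us-east-1.amazonaws.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)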

12) Server-side encryption is about data encryption at rest. That is, Amazon S3 encrypts your data at the object level as it writes it to disk in its data centers and decrypts it for you when you go to access it. There are a few different options depending on how you choose to manage the encryption keys. One of the options is called ‘Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)’. Which of the following best describes how this encryption method works?

Each object is encrypted with a unique key employing strong encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates.

Explanation
With this encryption option, Amazon S3 handles all of the encryption/decryption of objects, including the rotation of keys. Other options allow you to manage your own keys if you want, but not the method mentioned in the question.

13) Which of the descriptions below best describes what the following bucket policy does?

{
  "Version": "2012-10-17",
  "Id": "Linux Academy Question",
  "Statement": [
    {
      "Sid": "Linux Academy Question",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::linuxacademybucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.linuxacademy.com/*", "http://www.amazon.com/*"]}
      }
    }
  ]
}

It allows read access to the bucket ‘linuxacademybucket’ but only if it is accessed from linuxacademy.com or amazon.com

14) In regards to their data consistency model, AWS states that “Amazon S3 buckets in all Regions provide read-after-write consistency for PUTS of new objects and eventual consistency for overwrite PUTS and DELETES.” What does AWS actually mean when they say Read-after-write consistency for PUTS of new objects?
If you write a new object to S3, you will be able to retrieve it immediately afterwards; any newly created object or file is visible immediately, without any delay.

15) You decide to configure a bucket for static website hosting. As per the AWS documentation, you create a bucket named ‘mybucket.com’, enable website hosting with an index document of ‘index.html’, and leave the error document blank. You then upload a file named ‘index.html’ to the bucket. After clicking on the endpoint mybucket.com.s3-website-us-east-1.amazonaws.com you receive a 403 Forbidden error. You then change the CORS configuration on the bucket so that everyone has access, but you still receive the 403 Forbidden error. What additional step do you need to take so that the endpoint is accessible to everyone?

Change the permissions on the index.html file also, so that everyone has access.
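
A sketch of that missing step with boto3, granting everyone read access to the object by putting a public-read ACL on index.html (a bucket policy allowing s3:GetObject on the bucket's objects would also work).

import boto3

s3 = boto3.client("s3")
s3.put_object_acl(
    Bucket="mybucket.com",
    Key="index.html",
    ACL="public-read",  # the object itself, not just the bucket, must be readable
)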
