AWS Certified Developer – Update

Since my last update, I’m proud to say that I’ve completed my course!  The major components of the second half of this course concerned Storage and Database implementations.  Here’s a brief rundown.

How is Storage handled on AWS?

The main storage offering on AWS is S3.  This is essentially bulk object storage billed by usage.  If you’d rather use dynamic, shared storage that auto-scales, you’d set up a resource in Elastic File System.  The platform offers several different configurations based on availability and performance expectations.   Versioning and replication of buckets is supported.  Long term backups are covered by Glacier.  This is long term storage of snapshots of data at a lower fee, but binds you to a minimum time commitment.  Those of us who have Disaster Recovery responsibilities can use this to implement Grandfather-Father-Son backup strategies.
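
To make that concrete, here’s a minimal sketch of working with S3 from code, assuming the boto3 SDK and already-configured AWS credentials.  The bucket and file names are hypothetical placeholders, not anything from the course.

```python
# Minimal sketch: storing and retrieving an object in S3 with boto3.
# Assumes AWS credentials are already configured (e.g., via `aws configure`).
# The bucket name and keys below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Upload a local file as an object in the bucket.
s3.upload_file("nightly-backup.tar.gz", "my-example-bucket", "backups/nightly-backup.tar.gz")

# Download it back later.
s3.download_file("my-example-bucket", "backups/nightly-backup.tar.gz", "restored.tar.gz")

# Enable versioning on the bucket so overwritten objects are retained.
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```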

How are Databases implemented in AWS?

The conventional approach is to provision managed virtual DB instances through RDS.  You can choose your preferred engine, like MS-SQL or Oracle, and then select the instance tier you need.   For the NoSQL crowd, there’s DynamoDB, which offers low-latency databases for high-traffic services and data-analysis tools.  Certification note: the course stresses that the exam is very heavy on DynamoDB.  Calculating provisioned throughput is a prominent item on the cert, since you have a lot of fine-tuning control over read and write capacity. The key is to find that sweet spot where your throughput is sufficient without over-provisioning.
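
Since that capacity math comes up so often, here’s a small worked sketch of how read and write capacity units are calculated, following the published definitions (1 RCU = one strongly consistent read per second of an item up to 4 KB, with eventually consistent reads costing half; 1 WCU = one write per second of an item up to 1 KB).  The workload numbers are made up for illustration.

```python
# Back-of-the-envelope sketch of the DynamoDB provisioned-throughput math.
#   1 RCU = one strongly consistent read/sec for an item up to 4 KB
#           (eventually consistent reads cost half)
#   1 WCU = one write/sec for an item up to 1 KB
import math

def read_capacity_units(reads_per_sec, item_kb, eventually_consistent=False):
    # Each read consumes one unit per 4 KB chunk of the item size.
    units = reads_per_sec * math.ceil(item_kb / 4)
    return math.ceil(units / 2) if eventually_consistent else units

def write_capacity_units(writes_per_sec, item_kb):
    # Each write consumes one unit per 1 KB chunk of the item size.
    return writes_per_sec * math.ceil(item_kb / 1)

# 80 eventually consistent reads/sec of 6 KB items -> 80 RCUs
print(read_capacity_units(80, 6, eventually_consistent=True))

# 100 writes/sec of 1.5 KB items -> 200 WCUs
print(write_capacity_units(100, 1.5))
```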

While IAM, EC2, and S3 made up the lion’s share of the course, the remainder consisted of short overviews of additional services such as:

  • Simple Queue Service + Simple Notification Service – Used as a clearinghouse to trigger shared events throughout your environments (see the sketch after this list).
  • Simple Workflow Service – Used to manage back-end processing in your API or service layer.
  • Cloud Formation – A framework for creating templates that provision predetermined purpose-built sets of AWS resources.
  • Elastic Beanstalk – Basically a wizard for provisioning auto-scaling application hosting environments that are immediately ready to run code.
  • Shared Responsibility Model – An overview of the demarcation between the integrity concerns the customer is responsible for and those handled by AWS.
  • Route 53 – More of an actual overview of DNS architecture rather than anything special about its AWS implementation.
  • Virtual Private Cloud (VPC) – This covers configuration of public and private zones of resources and defining rules for interoperation between them. Basically, taking everything we’ve learned and pulling it all together into something useful at the enterprise level. The analogy used for this is to think of a VPC as a logical data center.
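
As a taste of how the SQS piece of that list looks in practice, here’s a minimal send/receive sketch using boto3.  The queue URL and message payload are hypothetical placeholders.  In a typical setup, SNS fans an event out to one or more queues like this, and workers poll each queue independently.

```python
# Minimal sketch of an SQS producer/consumer loop.
# Assumes boto3 with configured credentials; the queue URL is a hypothetical placeholder.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

# A producer drops an event onto the queue...
sqs.send_message(QueueUrl=queue_url, MessageBody='{"event": "order_created", "id": 42}')

# ...and a worker elsewhere polls for it, processes it, and deletes it.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in response.get("Messages", []):
    print("processing:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```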

Now that I’ve completed the course, it’s time to put my money where my mouth is and successfully pass the exam by the end of summer per my original commitment.  Wish me luck!

I’d love to hear any feedback on this post and invite you to share your own experiences and opinions on AWS, either your own projects or learning tracks. As always, if you have any questions or comments, please feel free to add them here or address them to john@benedettitech.com.

Thanks for looking in!