AWS Cloud Practitioner Comprehensive Guide

AWS Cost and Usage

The AWS Cost & Usage Report is your one-stop shop for accessing the most detailed information available about your AWS costs and usage. The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes.

https://aws.amazon.com/aws-cost-management/aws-cost-and-usage-reporting/

 

AWS Pricing Calculator

AWS Pricing Calculator is a web service that you can use to estimate the cost for your AWS monthly bill based on your expected usage.
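The estimate it produces is essentially "expected usage × unit rates," summed across services. A minimal sketch of that arithmetic (the rates below are made up for illustration; real pricing varies by Region, instance type, and usage tier):

```python
# Hypothetical, illustrative unit rates -- NOT real AWS prices.
EC2_HOURLY_RATE = 0.10    # USD per instance-hour (assumed)
S3_GB_MONTH_RATE = 0.023  # USD per GB-month (assumed)

def estimate_monthly_bill(instance_count, hours_per_month, storage_gb):
    """Rough monthly estimate from expected usage, in the spirit of the calculator."""
    compute = instance_count * hours_per_month * EC2_HOURLY_RATE
    storage = storage_gb * S3_GB_MONTH_RATE
    return round(compute + storage, 2)

# 2 instances running all month (730 hours) plus 500 GB of storage:
print(estimate_monthly_bill(2, 730, 500))  # 157.5
```

The real calculator does this at a far finer grain (data transfer, request counts, pricing tiers), but the principle is the same.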

 

AWS Systems Manager

AWS Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

 

AWS Budgets 

AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

 

AWS Artifact

 AWS Artifact is a self-service audit artifact retrieval portal that provides customers with on-demand access to AWS’ compliance documentation and AWS agreements. You can use AWS Artifact Agreements to review, accept, and track the status of AWS agreements such as the Business Associate Addendum (BAA).

You can also use AWS Artifact Reports to download AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and System and Organization Control (SOC) reports.

https://aws.amazon.com/artifact/

 

Amazon DynamoDB

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models. Its flexible data model, reliable performance, and automatic scaling of throughput capacity, makes it a great fit for mobile, web, gaming, ad tech, IoT, and many other applications.
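The key-value model means every item is addressed by a partition key (and an optional sort key). A toy illustration of that access pattern in plain Python (this is not the DynamoDB API, just the data model):

```python
# Toy model of DynamoDB's key-value access pattern (not the real API):
# items live in a table and are addressed by partition key + optional sort key.
class ToyTable:
    def __init__(self):
        self._items = {}

    def put_item(self, item):
        key = (item["pk"], item.get("sk"))
        self._items[key] = item

    def get_item(self, pk, sk=None):
        return self._items.get((pk, sk))

games = ToyTable()
games.put_item({"pk": "user#1", "sk": "score#2024", "points": 9001})
print(games.get_item("user#1", "score#2024")["points"])  # 9001
```

Because every read targets a specific key, lookups stay fast no matter how large the table grows, which is what lets DynamoDB promise single-digit millisecond latency at any scale.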

https://aws.amazon.com/dynamodb/

 

Access keys

Access keys consist of an access key ID and secret access key, which are used to sign programmatic requests to AWS using the CLI or the SDK.
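"Signing" here means Signature Version 4: the secret access key never travels with the request; instead it derives a signing key through a chain of HMAC-SHA256 operations. A sketch of the documented derivation (using AWS's published example secret key, not a real credential):

```python
import hmac
import hashlib

def derive_signing_key(secret_key, date, region, service):
    """Signature Version 4 signing-key derivation (per the AWS General Reference)."""
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)   # scope to a day
    k_region = sign(k_date, region)                              # scope to a Region
    k_service = sign(k_region, service)                          # scope to a service
    return sign(k_service, "aws4_request")

# The access key ID is sent with the request; the secret key never is --
# only this derived key signs the request's string-to-sign.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                         "20150830", "us-east-1", "iam")
print(len(key))  # 32 (a SHA-256 digest)
```

The CLI and SDKs do all of this automatically; you only ever configure the key pair.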

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html

 

Amazon CloudFront

To deliver content to global end users with lower latency, Amazon CloudFront uses a global network of Edge Locations and Regional Edge Caches in multiple cities around the world. Amazon CloudFront uses this network to cache copies of your content close to your end-users. Amazon CloudFront ensures that end-user requests are served by the closest edge location. As a result, end-user requests travel a short distance, improving performance for your end-users, while reducing the load on the origin servers.
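"Served by the closest edge location" boils down to routing each viewer to the edge with the lowest latency. A toy version of that selection (CloudFront actually does this via DNS, and the latency figures below are invented):

```python
# Toy model: route the viewer to the edge location with the lowest latency.
# CloudFront does this via DNS; these latency numbers are made up.
edge_latency_ms = {"Frankfurt": 18, "Tokyo": 140, "Virginia": 95}

def closest_edge(latencies):
    return min(latencies, key=latencies.get)

# A viewer in Europe measures the lowest latency to Frankfurt:
print(closest_edge(edge_latency_ms))  # Frankfurt
```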

How are AWS Global Accelerator and CloudFront related? AWS Global Accelerator and CloudFront are two separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable (e.g., images and videos) and dynamic content (e.g. dynamic site delivery). Global Accelerator is a good fit for specific use cases, such as gaming, IoT or Voice over IP.

Amazon CloudFront serves cached content only from Edge Locations and Regional Edge Caches; it does not use Regions or Availability Zones for caching.

https://aws.amazon.com/cloudfront/

 

AWS Snowball

AWS Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS Cloud. Using Snowball addresses common challenges with large-scale data transfers, including high network costs, long transfer times, and security concerns. AWS customers use Snowball to migrate analytics data, genomics data, video libraries, image repositories, and backups. Transferring data with Snowball is simple, fast, secure, and can cost as little as one-fifth the cost of using high-speed internet.
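The "long transfer times" point is easy to quantify. A back-of-the-envelope calculation (link utilization assumed at 80%; adjust to taste):

```python
def days_to_transfer(terabytes, link_gbps, utilization=0.8):
    """Days needed to push data over a network link at a sustained utilization."""
    bits = terabytes * 8e12                          # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * utilization)  # bits / effective bits-per-second
    return round(seconds / 86400, 1)

# 100 TB over a dedicated 1 Gbps link still takes roughly a week and a half --
# which is why shipping a Snowball device can be faster (and cheaper).
print(days_to_transfer(100, 1))  # 11.6
```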

Additionally, with AWS Snowball, you can access the compute power of the AWS Cloud locally and cost-effectively in places where connecting to the internet might not be an option. AWS Snowball is a perfect choice if you need to run computing in rugged, austere, mobile, or disconnected (or intermittently connected) environments.

With AWS Snowball, you have the choice of two devices, Snowball Edge Compute Optimized with more computing capabilities, suited for higher performance workloads, or Snowball Edge Storage Optimized with more storage, which is suited for large-scale data migrations and capacity-oriented workloads.

Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer dozens of terabytes to petabytes of data to AWS. It is also a good fit for running general purpose analysis such as IoT data aggregation and transformation.

Snowball Edge Compute Optimized is the optimal choice if you need powerful compute and high-speed storage for data processing. Examples include high-resolution video processing, advanced IoT data analytics, and real-time optimization of machine learning models.

AWS Marketplace is the service that provides this catalog. AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace includes software listings from categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps.

AWS Storage Gateway is the service that enables your on-premises applications to seamlessly use AWS cloud storage.

AWS Snowmobile is the exabyte-scale data migration service that allows you to move very large datasets from on-premises to AWS.

https://aws.amazon.com/snowball/

 

Penetration Testing on AWS

AWS customers are welcome to carry out security assessments and penetration tests against their AWS infrastructure without prior approval for 8 services:

1- Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers.

2- Amazon RDS.

3- Amazon CloudFront.

4- Amazon Aurora.

5- Amazon API Gateway.

6- AWS Lambda and Lambda Edge functions.

7- Amazon Lightsail resources.

8- Amazon Elastic Beanstalk environments.

AWS customers are responsible for performing penetration tests against their own AWS infrastructure, and they must ensure that their testing activities stay aligned with AWS policies.

AWS customers are allowed to perform penetration testing on both AWS-managed services such as Amazon RDS and customer-managed services such as Amazon EC2.

The difference between AWS-managed services and customer-managed services:

For AWS-managed services such as Amazon RDS and Amazon DynamoDB, AWS is responsible for performing all the operations needed to keep the service running.

The AWS-managed services automate time-consuming administration tasks such as hardware provisioning, software setup, patching and backups. The AWS-managed services free customers to focus on their applications so they can give them the fast performance, high availability, security and compatibility they need.

Examples of AWS-managed services include Amazon RDS, Amazon DynamoDB, Amazon Redshift, Amazon WorkSpaces, Amazon CloudFront, Amazon CloudSearch, and several other services.

On the other hand, customer-managed services are services that are completely managed by the customer. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.

Examples of customer-managed services include Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and AWS Identity and Access Management (AWS IAM).

https://aws.amazon.com/security/penetration-testing/

 

IAM

An IAM group is a collection of IAM users that are managed as a unit. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, you could have a group called Admins and give that group the types of permissions that administrators typically need. Any user in that group automatically has the permissions that are assigned to the group. If a new user joins your organization and needs administrator privileges, you can assign the appropriate permissions by adding the user to that group. Similarly, if a person changes jobs in your organization, instead of editing that user's permissions, you can remove him or her from the old groups and add him or her to the appropriate new groups.
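The mechanics reduce to a simple rule: a user's effective permissions are the union of the policies attached to every group they belong to. A toy sketch of that resolution (the group names and actions below are invented for illustration):

```python
# Toy model of group-based permissions: a user's effective permissions are the
# union of the policies attached to the user's groups. Names are hypothetical.
group_policies = {
    "Admins": {"ec2:*", "iam:*"},
    "Developers": {"ec2:StartInstances", "ec2:StopInstances"},
}
user_groups = {"alice": ["Admins"], "bob": ["Developers"]}

def effective_permissions(user):
    perms = set()
    for group in user_groups.get(user, []):
        perms |= group_policies[group]   # union across all of the user's groups
    return perms

print(sorted(effective_permissions("bob")))  # ['ec2:StartInstances', 'ec2:StopInstances']
```

Moving bob from Developers to Admins changes his permissions in one place, which is exactly the management win the paragraph above describes.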

IAM role

An IAM role is an IAM identity that you can create in your account that has specific permissions. IAM roles allow you to delegate access (for a limited time) to users or services that normally don't have access to your organization's AWS resources. IAM users or AWS services can assume a role to obtain temporary security credentials that can be used to interact with specific AWS resources.

You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app. Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources. For these scenarios, you can delegate access to AWS resources using an IAM role.

IAM users

An IAM user is an entity that you create in AWS to represent the person or application that uses it to directly interact with AWS. A primary use for IAM users is to give people the ability to sign in to the AWS Management Console for interactive tasks and to make programmatic requests to AWS services using the API or CLI. A user in AWS consists of a name, a password to sign into the AWS Management Console, and up to two access keys that can be used with the API or CLI. When you create an IAM user, you grant it permissions by making it a member of a group that has appropriate permission policies attached (recommended), or by directly attaching policies to the user.

An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone (or any service, application, ...etc) who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session. IAM roles are meant to be assumed by authorized entities, such as IAM users, applications, or an AWS service such as Amazon EC2.

AWS Organizations

AWS Organizations can be used to group AWS accounts, not IAM users (the employees). AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across multiple AWS accounts.

https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html

 

AWS Support Plans

AWS Infrastructure Event Management is a short-term engagement with AWS Support, included in the Enterprise-level Support product offering, and available for additional purchase for Business-level Support subscribers. AWS Infrastructure Event Management partners with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use-case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS.

AWS Personal Health Dashboard

AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.

AWS Knowledge Center

AWS Knowledge Center is not part of the Enterprise support plan. AWS Knowledge Center is available for everyone free of charge. The AWS Knowledge Center helps answer the questions most frequently asked by AWS customers. The AWS Knowledge Center does not provide guidance on a case-by-case basis.

AWS Support Concierge Service

AWS Support Concierge Service assists customers with account and billing inquiries.

https://aws.amazon.com/premiumsupport/features/

 

AWS Auto Scaling

AWS Auto Scaling is the feature that automates the process of adding or removing server capacity based on demand. Auto Scaling allows you to reduce your costs by automatically turning off resources that aren't in use. At the same time, Auto Scaling ensures that your application runs effectively by provisioning more server capacity when required.

https://aws.amazon.com/autoscaling/

 

EC2 Auto Scaling

Before cloud computing, you had to overprovision infrastructure to ensure you had enough capacity to handle your business operations at the peak level of activity. Now, you can provision the amount of resources that you actually need, knowing you can instantly scale up or down with the needs of your business. This reduces costs and improves your ability to meet your users’ demands.

The concept of Elasticity involves the ability of a service to scale its resources out or in (up or down) based on changes in demand. For example, Amazon EC2 Auto Scaling can help automate the process of adding or removing Amazon EC2 instances as demand increases or decreases.
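A rough sketch of the kind of decision a target-tracking scaling policy makes: compare the measured metric to a target and size the fleet proportionally, clamped to a minimum and maximum. (This is a simplification of the real algorithm; the numbers are illustrative.)

```python
import math

def desired_capacity(current, metric, target, min_size=1, max_size=10):
    """Target-tracking rule of thumb: scale capacity in proportion to metric/target,
    clamped to the group's configured min and max size (a simplified sketch)."""
    desired = math.ceil(current * metric / target)
    return max(min_size, min(max_size, desired))

# 4 instances averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_capacity(4, 90.0, 60.0))  # 6
# 4 instances averaging 30% CPU -> scale in to 2, cutting cost.
print(desired_capacity(4, 30.0, 60.0))  # 2
```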

Reducing interdependencies between application components is much more related to the concept of “Loose Coupling”. Loose coupling is an approach that involves interconnecting the components in a system or network so that those components depend on each other to the least extent practical. Engineers should architect their system or application such that failure in one component does not negatively affect other components. Loosely coupled components make the system resilient and allow it to recover gracefully from failure.

On-premises resources cannot scale automatically the way cloud resources can. When deploying on-premises, you have to estimate your infrastructure capacity needs up front.

Elastic Load Balancers do not scale resources. Elastic Load Balancing distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.

https://aws.amazon.com/ec2/autoscaling/

https://wa.aws.amazon.com/wat.concept.elasticity.en.html

 

Amazon ElastiCache

Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores, instead of relying entirely on slower disk-based databases.

The primary purpose of an in-memory data store is to provide ultrafast (submillisecond latency) and inexpensive access to copies of data. Querying a database is always slower and more expensive than locating a copy of that data in a cache. Some database queries are especially expensive to perform. An example is queries that involve joins across multiple tables or queries with intensive calculations. By caching (storing) such query results, you pay the price of the query only once. Then you can quickly retrieve the data multiple times without having to re-execute the query.

https://aws.amazon.com/elasticache/

 

AWS Cost Explorer

AWS Cost Explorer is a free tool that you can use to view your costs and usage. You can view data up to the last 13 months, forecast how much you are likely to spend for the next 12 months, and get recommendations for what Reserved Instances to purchase. You can use AWS Cost Explorer to see patterns in how much you spend on AWS resources over time, identify areas that need further inquiry, and see trends that you can use to understand your costs. You can also specify time ranges for the data, and view time data by day or by month.

What does the AWS Finance team do? The AWS Finance team provides data-driven analysis, strategic decision support, financial planning, and controllership to teams that plan and build data centers, design and source servers, and develop and sell cloud services at massive scale to developers and businesses all over the world.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-explorer-what-is.html

 

Amazon VPC console 

You can use the Amazon Virtual Private Cloud console to launch AWS resources, such as Amazon EC2 instances. You can use it to specify an IP address range for the VPC, add subnets, associate security groups, and configure route tables.

 

Amazon Route 53

Amazon Route 53 is a global service that provides highly available and scalable Domain Name System (DNS) services, domain name registration, and health-checking web services. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like example.com into the numeric IP addresses, such as 192.0.2.1, that computers use to connect to each other.

Route 53 also simplifies the hybrid cloud by providing recursive DNS for your Amazon VPC and on-premises networks over AWS Direct Connect or AWS VPN.
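At its core, the DNS side of Route 53 is a lookup from (name, record type) to a value, using exactly the example from the paragraph above. A toy illustration (not the Route 53 API; real resolution adds routing policies, health checks, and TTLs):

```python
# Toy DNS answer table: Route 53 ultimately translates names to IP addresses.
# Real Route 53 layers routing policies and health checks on top of this lookup.
records = {("example.com", "A"): "192.0.2.1"}

def resolve(name, rtype="A"):
    return records.get((name, rtype))

print(resolve("example.com"))  # 192.0.2.1
```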

https://aws.amazon.com/route53/

 

AWS Global infrastructure

The AWS Global infrastructure is built around Regions and Availability Zones (AZs). Each AWS Region is a separate geographic area. Each AWS Region has multiple, isolated locations known as Availability Zones. Availability Zones in a region are connected with low latency, high throughput, and highly redundant networking. These Availability Zones offer AWS customers an easier and more effective way to design and operate applications and databases, making them more highly available, fault tolerant, and scalable than traditional single datacenter infrastructures or multi-datacenter infrastructures.

In addition to replicating applications and data across multiple data centers in the same Region using Availability Zones, you can also choose to increase redundancy and fault tolerance further by replicating data between geographic Regions (especially if you are serving customers from all over the world). You can do so using both private, high speed networking and public internet connections to provide an additional layer of business continuity, or to provide low latency access across the globe.

A subnet is a range of IP addresses in your VPC.

Edge locations are not used to host applications. Edge locations are used by CloudFront to cache and distribute content to your global customers with low latency.

Amazon VPC (Virtual Private Cloud) is a virtual network that you define. Deploying the application across multiple VPCs within the same region will not help your global customers.

https://aws.amazon.com/about-aws/global-infrastructure/

 

AWS Support Plans

Included as part of the Enterprise Support plan, the Support Concierge Team are AWS billing and account experts that specialize in working with enterprise accounts. The Concierge team will quickly and efficiently assist you with your billing and account inquiries, and work with you to help implement billing and account best practices so that you can focus on running your business.

Support Concierge service includes:

** 24x7 access to AWS billing and account inquiries.

** Guidance and best practices for billing allocation, reporting, consolidation of accounts, and root-level account security.

** Access to Enterprise account specialists for payment inquiries, training on specific cost reporting, assistance with service limits, and facilitating bulk purchases.

https://aws.amazon.com/premiumsupport/features/

https://aws.amazon.com/premiumsupport/plans/enterprise/

 

Amazon CloudFront

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.

The use cases of Amazon CloudFront include:

1- Accelerate static website content delivery.

CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website.

2- Live & on-demand video streaming.

The Amazon CloudFront CDN offers multiple options for streaming your media – both pre-recorded files and live events – at the sustained, high throughput required for 4K delivery to global viewers.

3- Security.

CloudFront integrates seamlessly with AWS Shield for Layer 3/4 DDoS mitigation and AWS WAF for Layer 7 protection.

4- Customizable content delivery with Lambda@Edge.

Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency.

https://aws.amazon.com/cloudfront/

 

Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of a detailed assessment report which is available via the Amazon Inspector console or API. To help get started quickly, Amazon Inspector includes a knowledge base of hundreds of rules mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled, or vulnerable software versions installed. These rules are regularly updated by AWS security researchers.

https://aws.amazon.com/inspector/

 

AWS Approach to Infrastructure (Monolithic vs Microservices) 

In the traditional data center-based model of IT, once infrastructure is deployed, it typically runs whether it is needed or not, and all the capacity is paid for, regardless of how much it gets used. In the cloud, resources are elastic, meaning they can instantly grow (to maintain performance) or shrink (to reduce costs).

How does AWS work with monolithic architectures? AWS recommends adopting microservices architecture, not monolithic architecture. With monolithic architectures, application components are tightly coupled and run as a single service. With a microservices architecture, an application is built as loosely coupled components.

Benefits of microservices architecture include:

1- Microservices allow each service to be independently scaled to meet demand for the application feature it supports.

2- Teams are empowered to work more independently and more quickly.

3- Microservices enable continuous integration and continuous delivery, making it easy to try out new ideas and to roll back if something doesn’t work.

4- Service independence increases an application’s resistance to failure. In a monolithic architecture, if a single component fails, it can cause the entire application to fail. With microservices, applications handle total service failure by degrading functionality and not crashing the entire application.

What are parallelized tasks? An example of parallelization is using a load balancer to distribute incoming requests across multiple instances, or using multipart upload to upload large objects in parts. Adjusting capacity up or down based on demand defines AWS Cloud elasticity, not parallelization.
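The multipart-upload flavor of parallelization is just splitting one large object into independent byte ranges that can be uploaded concurrently. A sketch of the part-splitting step (sizes here are tiny for readability; S3's real minimum part size is 5 MB):

```python
def part_ranges(object_size, part_size):
    """Inclusive byte ranges for uploading an object in parallel parts
    (multipart-upload style). Each part can be uploaded independently."""
    return [(start, min(start + part_size, object_size) - 1)
            for start in range(0, object_size, part_size)]

# A 26-byte "object" in 10-byte parts -> three parts, uploadable concurrently.
print(part_ranges(26, 10))  # [(0, 9), (10, 19), (20, 25)]
```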

https://wa.aws.amazon.com/wat.concept.elasticity.en.html

http://aws001.s3.amazonaws.com/trailhead/TrailHead_ArchitectingInTheCloud.pdf

 

Amazon CloudFront

Amazon CloudFront is a global content delivery network (CDN) service that gives businesses and web application developers an easy and cost effective way to distribute content (such as videos, data, applications, and APIs) with low latency and high data transfer speeds. Like other AWS services, Amazon CloudFront is a self-service, pay-per-use offering, requiring no long term commitments or minimum fees. With CloudFront, your files are delivered to end-users using a global network of edge locations. CloudFront is integrated with other AWS services such as AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing or Amazon EC2 as origins for your applications, and Lambda@Edge to run custom code close to your viewers. 

https://aws.amazon.com/cloudfront/

 

AWS Trusted Advisor

AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices. It offers a rich set of best practice checks and recommendations across five categories: cost optimization, security, fault tolerance, performance, and service limits (also referred to as service quotas). Like a customized cloud security expert, AWS Trusted Advisor analyzes your AWS environment and improves the security of your applications by closing gaps, examining permissions, and enabling various AWS security features.

The core security checks include: (Important)

1- Security Groups - Specific Ports Unrestricted.

Checks security groups for rules that allow unrestricted access to specific ports. Unrestricted access increases opportunities for malicious activity (hacking, denial-of-service attacks, loss of data).

2- Amazon S3 Bucket Permissions.

Checks buckets in Amazon Simple Storage Service (Amazon S3) that have open access permissions. Bucket permissions that grant List access to everyone can result in higher than expected charges if objects in the bucket are listed by unintended users at a high frequency. Bucket permissions that grant Upload/Delete access to everyone create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. This check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions.

3- MFA on Root Account.

Checks the root account and warns if multi-factor authentication (MFA) is not enabled. For increased security, AWS recommends that you protect your account by using MFA, which requires a user to enter a unique authentication code from their MFA hardware or virtual device when interacting with the AWS console and associated websites.
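The spirit of check 1 above can be captured in a few lines: flag any rule that opens a sensitive port to the whole internet. A toy sketch (the watch list and rule format are invented for illustration, not Trusted Advisor's actual implementation):

```python
# Toy version of the "Security Groups - Specific Ports Unrestricted" check:
# flag any rule that opens a sensitive port to the whole internet (0.0.0.0/0).
SENSITIVE_PORTS = {22, 3389}  # SSH, RDP -- an assumed watch list

def unrestricted_rules(rules):
    return [r for r in rules
            if r["cidr"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS]

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},    # public HTTPS: fine
    {"port": 22,  "cidr": "0.0.0.0/0"},    # SSH open to the world: flagged
    {"port": 22,  "cidr": "10.0.0.0/16"},  # SSH from inside the VPC only: fine
]
print(unrestricted_rules(rules))  # [{'port': 22, 'cidr': '0.0.0.0/0'}]
```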

https://aws.amazon.com/premiumsupport/trustedadvisor/

 

Types of Cloud Computing

There are three Cloud Computing Models: 

1) Infrastructure as a Service (IaaS) - IaaS contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS provides you with the highest level of flexibility and management control over your IT resources and is most similar to the existing IT resources that many IT departments and developers are familiar with today.

2) Platform as a Service (PaaS) - PaaS removes the need for your organization to manage the underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. This helps you be more efficient, as you don't need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application.

3) Software as a Service (SaaS) - SaaS provides you with a completed product that is run and managed by the service provider. In most cases, people referring to Software as a Service mean end-user applications. With a SaaS offering, you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software. A common example of a SaaS application is web-based email, which you can use to send and receive email without having to manage feature additions to the email product or maintain the servers and operating systems that the email program runs on.

Networking services are provided as part of the IaaS model.

https://docs.aws.amazon.com/aws-technical-content/latest/aws-overview/types-of-cloud-computing.html

 

Well Architected Reliability Pillar

The term reliability encompasses the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. The automatic provisioning of resources and the ability to recover from failures meet these criteria.

"Applying the principle of least privilege to all AWS resources" - The principle of least privilege is a security concept related to access management, not reliability.

"Providing compensation to customers if issues occur" - AWS generally does not provide compensation to customers if issues occur, and doing so has nothing to do with reliability.

"All AWS services are considered Global Services, and this design helps customers serve their international users" - AWS services are either global, regional, or specific to an Availability Zone. Among all the services that AWS offers, only a few are considered global services. Examples of AWS global services include Amazon CloudFront, AWS Shield, AWS Identity and Access Management (AWS IAM), and Amazon Route 53. This statement is incorrect because not all AWS services are global.

https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/wellarchitected-reliability-pillar.pdf

 

AWS Quick Starts

AWS Quick Start Reference Deployments outline the architectures for popular enterprise solutions on AWS and provide AWS CloudFormation templates to automate their deployment. Each Quick Start launches, configures, and runs the AWS compute, network, storage, and other services required to deploy a specific workload on AWS, using AWS best practices for security and availability.

Quick Starts are built by AWS solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices. These accelerators reduce hundreds of manual installation and configuration procedures into just a few steps, so you can build your production environment quickly and start using it immediately.

https://aws.amazon.com/quickstart/

 

AWS CloudTrail

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.

https://aws.amazon.com/cloudtrail/

 

Vulnerability Reporting

The AWS Abuse team can assist you when AWS resources are being used to engage in the following types of abusive behavior:

  1. Spam: You are receiving unwanted emails from an AWS-owned IP address, or AWS resources are being used to spam websites or forums.
  2. Port scanning: Your logs show that one or more AWS-owned IP addresses are sending packets to multiple ports on your server, and you believe this is an attempt to discover unsecured ports.
  3. Denial-of-service (DoS) attacks: Your logs show that one or more AWS-owned IP addresses are being used to flood ports on your resources with packets, and you believe this is an attempt to overwhelm or crash your server or software running on your server.
  4. Intrusion attempts: Your logs show that one or more AWS-owned IP addresses are being used to attempt to log in to your resources.
  5. Hosting objectionable or copyrighted content: You have evidence that AWS resources are being used to host or distribute illegal content or distribute copyrighted content without the consent of the copyright holder.
  6. Distributing malware: You have evidence that AWS resources are being used to distribute software that was knowingly created to compromise or cause harm to computers or machines on which it is installed.

Note: Anyone can report abuse of AWS resources, not just AWS customers.

The AWS Security team is responsible for the security of services offered by AWS.

The AWS Concierge team can assist you with the issues that are related to your billing and account management.

The AWS Customer Service team is at the forefront of this transformational technology, assisting a global customer base that takes advantage of a growing set of services and features to run mission-critical applications. The team helps AWS customers understand what cloud computing is all about and whether it can be useful for their business needs.

https://aws.amazon.com/security/vulnerability-reporting/

 

The AWS Management Console

The AWS Management Console allows you to access and manage Amazon Web Services through a simple and intuitive web-based user interface. You can also use the AWS Console mobile app to quickly view resources on the go.

AWS CLI: The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.

AWS SDK: The AWS SDKs (Software Development Kits) allow you to interact with AWS services using your preferred programming language.

AWS API: The AWS APIs are the application programming interfaces exposed by each service; the Management Console, CLI, and SDKs are all built on top of them.

https://aws.amazon.com/console/

 

AWS Regions 

Each AWS Region is a separate geographic area, and each AWS Region has multiple, isolated locations known as Availability Zones. When designing your AWS Cloud architecture, you should make sure that your system will continue to run even if failures happen. You can achieve this by deploying your AWS resources in multiple Availability Zones. Availability Zones are isolated from each other; therefore, if one Availability Zone goes down, the other Availability Zones will still be up and running, and hence your application will be more fault-tolerant. In addition to Availability Zones, you can build a disaster recovery solution by deploying your AWS resources in other Regions. If an entire Region goes down, you will still have resources in another Region able to continue providing a solution. Finally, you can use the Elastic Load Balancing service to regularly perform health checks and distribute traffic only to healthy instances.
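The fault-tolerance benefit of spreading a workload across Availability Zones can be shown with a back-of-the-envelope calculation. The failure probability below is a made-up illustrative number (not an AWS figure), and the sketch assumes AZ failures are independent:

```python
# Rough availability estimate for deploying across several isolated
# Availability Zones, assuming failures are independent.

def chance_all_down(az_failure_prob: float, num_azs: int) -> float:
    """Probability that every AZ the workload uses fails at once."""
    return az_failure_prob ** num_azs

# With a hypothetical 1% chance of a single-AZ outage:
print(chance_all_down(0.01, 1))  # one AZ: 1% risk of total outage
print(chance_all_down(0.01, 2))  # two AZs: 0.01% risk
```

Adding a second AZ turns a 1% outage risk into a 0.01% risk under these assumptions, which is why multi-AZ deployment is the standard recommendation.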

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

https://aws.amazon.com/elasticloadbalancing/

 

Microservices

As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components. This means that IT systems should be designed in a way that reduces interdependencies: a change or a failure in one component should not cascade to other components. On the other hand, if the components of an application are tightly coupled and one component fails, the entire application will also fail. Therefore, when designing your application, you should always decouple its components.

Decoupling allows you to deal with your application as multiple independent components (microservices), not as a single, cohesive unit.

There is no relation between decoupling an application and tracking API calls. API calls are tracked by AWS CloudTrail.

Decoupling is the exact opposite of having a monolithic application. A monolithic application is designed to be self-contained; components of the program are interconnected and interdependent rather than loosely coupled as is the case with Microservices applications (or loosely-coupled applications). Decoupling allows the update of any microservices application component to occur quickly and independently of the remainder of the application. This allows developers to work independently to update multiple components at the same time. On the other hand, a monolithic application is a single unit and takes more time and effort to be updated.
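The difference between tight and loose coupling can be sketched in a few lines. The service names and the simulated failure are hypothetical; a list stands in for a real intermediary such as a message queue:

```python
# Tightly coupled: the caller invokes its dependency directly, so a
# failure in one component cascades to the other.
# Loosely coupled: the caller hands work to an intermediary and moves on.

def flaky_payment_service(order: str) -> str:
    raise RuntimeError("payment service is down")

def tightly_coupled_checkout(order: str) -> str:
    # Direct call: if payments fail, checkout fails too.
    return flaky_payment_service(order)

pending = []  # stand-in for a queue between independent components

def loosely_coupled_checkout(order: str) -> str:
    pending.append(order)  # hand off; payments can catch up later
    return "accepted"

try:
    tightly_coupled_checkout("order-1")
except RuntimeError:
    print("checkout failed along with its dependency")

print(loosely_coupled_checkout("order-2"))  # accepted
```

The loosely coupled checkout keeps accepting orders even while its downstream dependency is unavailable, which is the cascading-failure protection described above.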

https://aws.amazon.com/microservices/

 

Amazon Relational Database Service (Amazon RDS)

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity while automating time-consuming administration tasks such as hardware provisioning, operating system maintenance, database setup, patching, and backups. It frees you to focus on your applications so you can give them the fast performance, high availability, security, and compatibility they need.

Amazon RDS can be used to host Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server databases.

 

https://aws.amazon.com/rds/

 

Well-Architected Framework

The Well-Architected Framework identifies a set of general design principles to facilitate good design in the cloud:

1- Stop guessing your capacity needs: Eliminate guessing about your infrastructure capacity needs. When you make a capacity decision before you deploy a system, you might end up sitting on expensive idle resources or dealing with the performance implications of limited capacity. With cloud computing, these problems can go away. You can use as much or as little capacity as you need, and scale up and down automatically.

2- Test systems at production scale: In the cloud, you can create a production-scale test environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it's running, you can simulate your live environment for a fraction of the cost of testing on premises.

3- Automate to make architectural experimentation easier: Automation allows you to create and replicate your systems at low cost and avoid the expense of manual effort. You can track changes to your automation, audit the impact, and revert to previous parameters when necessary.

4- Allow for evolutionary architectures: Allow for evolutionary architectures. In a traditional environment, architectural decisions are often implemented as static, one-time events, with a few major versions of a system during its lifetime. As a business and its context continue to change, these initial decisions might hinder the system's ability to deliver changing business requirements. In the cloud, the capability to automate and test on demand lowers the risk of impact from design changes. This allows systems to evolve over time so that businesses can take advantage of innovations as a standard practice.

5- Drive architectures using data: In the cloud you can collect data on how your architectural choices affect the behavior of your workload. This lets you make fact-based decisions on how to improve your workload. Your cloud infrastructure is code, so you can use that data to inform your architecture choices and improvements over time.

6- Improve through game days: Test how your architecture and processes perform by regularly scheduling game days to simulate events in production. This will help you understand where improvements can be made and can help develop organizational experience in dealing with events.

Instead of provisioning a large compute capacity to handle the spikes in load, it is recommended to use the AWS Auto Scaling service to add or remove instances based on demand. The AWS Auto Scaling service allows you to automatically provision new resources to meet demand and maintain performance. When demand drops, AWS Auto Scaling will automatically remove any excess resource capacity, so you avoid overspending.
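The add/remove behavior described above can be sketched as a simple decision function. The CPU thresholds are assumptions for illustration, not AWS defaults:

```python
# Illustrative scaling decision, mimicking what an Auto Scaling policy
# does: add capacity when average load is high, remove it when low.

def desired_capacity(current: int, avg_cpu: float,
                     scale_out_at: float = 70.0,
                     scale_in_at: float = 30.0) -> int:
    if avg_cpu > scale_out_at:
        return current + 1      # demand spike: add an instance
    if avg_cpu < scale_in_at and current > 1:
        return current - 1      # excess capacity: remove an instance
    return current              # within bounds: no change

print(desired_capacity(2, 85.0))  # 3 -- scale out under load
print(desired_capacity(3, 20.0))  # 2 -- scale in to avoid overspending
```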

Reservations in AWS are not an appropriate choice when you need to test your production environment; AWS reservations have a minimum term of one year.

In AWS, you can test and provision your resources on-demand and pay only for what you use, with no long-term contracts. This enables you to make any changes you want to your architecture design at any time, without risk.

https://docs.aws.amazon.com/wellarchitected/latest/framework/wellarchitected-framework.pdf

 

Amazon Virtual Private Cloud (Amazon VPC)

Amazon Virtual Private Cloud (Amazon VPC) allows you to carve out a portion of the AWS Cloud that is dedicated to your AWS account. Amazon VPC enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS.

AWS Dedicated Hosts: An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts can save you money by enabling you to leverage your existing server-bound software license investments (e.g., Windows Server, Windows SQL Server, and SUSE Linux Enterprise Server) within EC2, subject to your license terms. Dedicated Hosts also give you more flexibility, visibility, and control over the placement of instances on dedicated hardware. This makes it easier to ensure you deploy your instances in a way that meets your compliance and regulatory requirements.

AWS VPN: AWS VPN is composed of two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to AWS. AWS Client VPN enables you to securely connect users (from any location) to AWS or on-premises networks.

A subnet is a range of IP addresses within a VPC.

https://aws.amazon.com/vpc/

 

DDoS 

AWS provides flexible infrastructure and services that help customers implement strong DDoS mitigations and create highly available application architectures that follow AWS Best Practices for DDoS Resiliency. These include services such as Amazon Route 53, Amazon CloudFront, Elastic Load Balancing, and AWS WAF to control and absorb traffic, and deflect unwanted requests. These services integrate with AWS Shield, a managed DDoS protection service that provides always-on detection and automatic inline mitigations to safeguard web applications running on AWS.

 

https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/

 

AWS Databases

Amazon DynamoDB is a NoSQL database service. NoSQL databases are used for non-relational data, typically stored as JSON-like documents or key-value pairs.

Amazon Redshift: Amazon Redshift is a fast, fully managed data warehouse service that is specifically designed for online analytic processing (OLAP) and business intelligence (BI) applications, which require complex queries against large datasets. It supports relational data only, NOT key-value data.

Amazon Aurora: Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database, NOT a key-value database.

Amazon RDS: Amazon RDS is a relational database service, NOT a key-value database.
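A minimal sketch of the key-value model that distinguishes DynamoDB from the relational services above. A plain dictionary stands in for a table, and the key and item contents are made up:

```python
# Key-value model: items are looked up by key, and each item is a
# schemaless, JSON-like document. (Dict stand-in for a DynamoDB table;
# the real service is accessed via an SDK.)

table = {}  # partition key -> item

def put_item(key: str, item: dict) -> None:
    table[key] = item

def get_item(key: str) -> dict:
    return table.get(key, {})

put_item("user#42", {"name": "Ana", "plan": "free", "logins": 7})
print(get_item("user#42")["name"])  # Ana
```

Note there is no fixed schema: each item can carry different attributes, unlike a row in a relational table.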

https://aws.amazon.com/dynamodb/

https://aws.amazon.com/products/databases/

 

AWS CloudFormation

AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; AWS CloudFormation handles all that for you.
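As an illustration, a minimal CloudFormation template declares a single resource and lets the service handle provisioning. The logical ID below is arbitrary, and the bucket name is auto-generated by AWS:

```yaml
# Minimal illustrative CloudFormation template: declares one S3 bucket;
# CloudFormation handles creating, updating, and deleting it.
AWSTemplateFormatVersion: "2010-09-09"
Description: Example template that provisions a single S3 bucket
Resources:
  ExampleBucket:            # logical ID, referenced within the template
    Type: AWS::S3::Bucket
```

A template like this is deployed as a "stack"; deleting the stack removes every resource it created, which is what makes the infrastructure reproducible.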

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

 

Storage Classes

The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead. It works by storing objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, Amazon S3 monitors access patterns of the objects in S3 Intelligent-Tiering, and moves the ones that have not been accessed for 30 consecutive days to the infrequent access tier. If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier. There are no retrieval fees when using the S3 Intelligent-Tiering storage class, and no additional tiering fees when objects are moved between access tiers. It is the ideal storage class for long-lived data with access patterns that are unknown or unpredictable.
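The tiering behavior described above can be modeled locally. The tier names and the 30-day rule follow the description; this is an illustration of the policy, not the service's implementation:

```python
# Local model of S3 Intelligent-Tiering: objects not accessed for 30
# consecutive days move to the infrequent access tier, and any access
# moves an object back to the frequent access tier.

FREQUENT, INFREQUENT = "frequent", "infrequent"

def tier_after(days_since_access: int) -> str:
    return INFREQUENT if days_since_access >= 30 else FREQUENT

def on_access(_current_tier: str) -> str:
    # Any access returns the object to the frequent tier.
    return FREQUENT

print(tier_after(45))         # infrequent -- unused for over 30 days
print(on_access(INFREQUENT))  # frequent -- moved back on access
```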

 

https://aws.amazon.com/s3/storage-classes/

 

Autoscaling and Elastic Load Balancing

You should attempt to build as much automation as possible in both detecting and reacting to failure. You can use services like Elastic Load Balancing (ELB) and Amazon Route 53 to configure health checks and mask failure by only routing traffic to healthy endpoints. In addition, Auto Scaling can be configured to automatically replace unhealthy nodes. You can also replace unhealthy nodes using the Amazon EC2 auto-recovery feature or services such as AWS OpsWorks and AWS Elastic Beanstalk. It won't be possible to predict every failure scenario on day one, so make sure you collect enough logs and metrics to understand normal system behavior. Once you understand that, you will be able to set up alarms that trigger automated responses or manual intervention.

Amazon ECR: Amazon Elastic Container Registry (ECR) is a Docker container registry that allows developers to store, manage, and deploy Docker container images.

Amazon Athena: Amazon Athena is an interactive query service that is mainly used to analyze data in Amazon S3 using standard SQL.

Amazon EC2: Amazon EC2 is a server-based compute service. Fault tolerance is not built in; you have to architect for fault tolerance using the services mentioned above.

AWS Lambda: Lambda is a serverless compute service. Serverless computing provides built-in fault tolerance; you don't need to architect for this capability since the service provides it by default.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html

https://aws.amazon.com/elasticloadbalancing/

 

AWS Personal Health Dashboard

AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you. While the Service Health Dashboard displays the general status of AWS services, Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.

The benefits of the AWS personal health dashboard include:

**A Personalized View of Service Health: Personal Health Dashboard gives you a personalized view of the status of the AWS services that power your applications, enabling you to quickly see when AWS is experiencing issues that may impact you. For example, in the event of a lost EBS volume associated with one of your EC2 instances, you would gain quick visibility into the status of the specific service you are using, helping save precious time troubleshooting to determine root cause.

**Proactive Notifications: The dashboard also provides forward-looking notifications, and you can set up alerts across multiple channels, including email and mobile notifications, so you receive timely and relevant information to help plan for scheduled changes that may affect you. In the event of AWS hardware maintenance activities that may impact one of your EC2 instances, for example, you would receive an alert with information to help you plan for, and proactively address, any issues associated with the upcoming change.

**Detailed Troubleshooting Guidance: When you get an alert, it includes remediation details and specific guidance to enable you to take immediate action to address AWS events impacting your resources. For example, in the event of an AWS hardware failure impacting one of your EBS volumes, your alert would include a list of your affected resources, a recommendation to restore your volume, and links to the steps to help you restore it from a snapshot. This targeted and actionable information reduces the time needed to resolve issues.

You can check your applications for vulnerabilities using other services such as Amazon Inspector.

You can get help about cost optimization using other services such as the AWS Trusted Advisor.

You can get information about the current status and availability of the AWS services any time using the AWS Service Health Dashboard that is available at this link: https://status.aws.amazon.com/

https://aws.amazon.com/premiumsupport/phd/

 

AWS Database Migration Service (DMS)

AWS Database Migration Service (DMS) helps you migrate databases to AWS easily and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from most widely used commercial and open-source databases. The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL. It also allows you to stream data to Amazon Redshift from any of the supported sources including Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, SAP ASE, and SQL Server, enabling consolidation and easy analysis of data in the petabyte-scale data warehouse. AWS Database Migration Service can also be used for continuous data replication with high availability.   

 

Amazon Simple Queue Service (SQS)

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple Availability Zones so that they are available whenever needed.
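The queueing pattern can be sketched with an in-memory stand-in. SQS itself is accessed through the AWS SDKs; this local model only illustrates the send/store/receive behavior, and the message bodies are made up:

```python
# Sketch of the SQS pattern: a sender keeps publishing even while the
# receiver is offline; messages wait in the queue until consumed.
from collections import deque
from typing import Optional

queue = deque()

def send_message(body: str) -> None:
    queue.append(body)          # stored until someone receives it

def receive_message() -> Optional[str]:
    return queue.popleft() if queue else None

send_message("job-1")
send_message("job-2")           # the receiver was "down" for both sends
print(receive_message())        # job-1 -- nothing was lost
```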

 

Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Amazon Aurora combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases. It delivers up to five times the throughput of standard MySQL and up to three times the throughput of standard PostgreSQL. Amazon Aurora is designed to be compatible with MySQL and with PostgreSQL, so that existing applications and tools can run without requiring modification. It is available through Amazon Relational Database Service (RDS), freeing you from time-consuming administrative tasks such as provisioning, patching, backup, recovery, failure detection, and repair.

Can you install a MySQL database on an EC2 instance? Yes. However, depending on how you set up the EC2 instance, you may have to manage the database and the backup processes yourself; they may not be automatic.

 

https://aws.amazon.com/rds/aurora/ 

 

AWS Enterprise Support

For Enterprise-level customers, a TAM (Technical Account Manager) provides technical expertise for the full range of AWS services and obtains a detailed understanding of your use case and technology architecture. TAMs work with AWS Solution Architects to help you launch new projects and give best practices recommendations throughout the implementation life cycle. Your TAM is the primary point of contact for ongoing support needs, and you have a direct telephone line to your TAM.

 

https://aws.amazon.com/premiumsupport/plans/

 

Amazon CloudWatch

Amazon CloudWatch is a service that monitors AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use CloudWatch to detect anomalous behavior in your environments, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
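A rough sketch of how an alarm evaluates a metric against a threshold. The threshold, the number of evaluation periods, and the datapoints are illustrative assumptions:

```python
# Sketch of CloudWatch alarm evaluation: the alarm goes into ALARM
# state when the metric breaches the threshold for every one of the
# most recent evaluation periods.

def alarm_state(datapoints: list, threshold: float,
                periods: int = 3) -> str:
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"      # breached for every evaluated period
    return "OK"

cpu = [40.0, 82.0, 91.0, 88.0]          # per-period CPU utilization (%)
print(alarm_state(cpu, threshold=80.0))  # ALARM -- three breaches in a row
print(alarm_state([40.0, 50.0], 80.0))   # OK
```

Requiring several consecutive breaches, rather than reacting to a single datapoint, is what keeps alarms from flapping on momentary spikes.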

https://aws.amazon.com/cloudwatch/

 

AWS Organizations

 AWS Organizations helps customers centrally govern their environments as they grow and scale their workloads on AWS. Whether customers are a growing startup or a large enterprise, Organizations helps them to centrally manage billing; control access, compliance, and security; and share resources across their AWS accounts.

AWS Organizations has five main benefits:

1) Centrally manage access policies across multiple AWS accounts.

2) Automate AWS account creation and management.

3) Control access to AWS services.

4) Consolidate billing across multiple AWS accounts.

5) Configure AWS services across multiple accounts.

https://aws.amazon.com/organizations/

 

Six Advantages of Cloud Computing

All of the physical security is taken care of for you. Amazon data centers are surrounded by three physical layers of security. "Nothing can go in or out without setting off an alarm." It's important to keep bad guys out, but equally important to keep the data in, which is why Amazon monitors incoming gear, tracking every disk that enters the facility. And "if it breaks, we don't return the disk for warranty. The only way a disk leaves our data center is when it's confetti."

Most (not all) data and network security is taken care of for you. When we talk about data and network security, AWS has a "shared responsibility model" where AWS and the customer share the responsibility of securing them. For example, the customer is responsible for creating rules to secure their network traffic using security groups and for protecting data with encryption.

"Increasing speed and agility" is also a correct answer because in a cloud computing environment, new IT resources are only a click away, which means it requires less time to make those resources available to developers - from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

The Physical infrastructure is a responsibility of AWS, not the customer.

AWS customers are responsible for building and operating their applications.

Security is a shared responsibility between AWS and the customer. For example, the customer has to manage who can access and use AWS resources using the IAM service.

https://docs.aws.amazon.com/aws-technical-content/latest/aws-overview/six-advantages-of-cloud-computing.html

 

AWS Multi-Factor Authentication (MFA)

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

https://aws.amazon.com/iam/details/mfa/

 

Scaling in AWS

Horizontal Scaling:

Scaling horizontally takes place through an increase in the number of resources (e.g., adding more hard drives to a storage array or adding more servers to support an application). This is a great way to build Internet-scale applications that leverage the elasticity of cloud computing.

Vertical Scaling:

Scaling vertically takes place through an increase in the specifications of an individual resource (e.g., upgrading a server with a larger hard drive, adding more memory, or provisioning a faster CPU). On Amazon EC2, this can easily be achieved by stopping an instance and resizing it to an instance type that has more RAM, CPU, I/O, or networking capabilities. This way of scaling can eventually hit a limit, and it is not always a cost-efficient or highly available approach. However, it is very easy to implement and can be sufficient for many use cases, especially as a short-term solution.

Vertical scaling is often limited by the capacity constraints of a single machine; scaling beyond that capacity usually involves downtime and comes with an upper limit. With horizontal scaling, it is often easier to scale dynamically by adding more machines in parallel. Hence, in most cases, horizontal scaling is recommended over vertical scaling.
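The trade-off can be shown with back-of-the-envelope numbers. All capacities below are made up (requests per second), and the single-machine ceiling is a hypothetical largest instance size:

```python
# Vertical vs. horizontal scaling: a bigger machine is bounded by the
# largest machine available; more machines in parallel are not.

def vertical_scale(capacity: int, factor: int, ceiling: int) -> int:
    # Upgrade the one machine, capped at the largest size offered.
    return min(capacity * factor, ceiling)

def horizontal_scale(capacity_per_node: int, nodes: int) -> int:
    # Add identical machines in parallel; no single-machine ceiling.
    return capacity_per_node * nodes

print(vertical_scale(1000, 8, ceiling=4000))  # 4000 -- hit the ceiling
print(horizontal_scale(1000, 8))              # 8000 -- keeps growing
```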

https://wa.aws.amazon.com/wat.concept.horizontal-scaling.en.html

 

Principle of Least Privilege

The principle of least privilege is one of the most important security practices and it means granting users the required permissions to perform the tasks entrusted to them and nothing more. The security administrator determines what tasks users need to perform and then attaches the policies that allow them to perform only those tasks. You should start with a minimum set of permissions and grant additional permissions when necessary. Doing so is more secure than starting with permissions that are too lenient and then trying to tighten them down.
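As an illustration, a least-privilege policy grants one action on one resource and nothing more. The bucket name is hypothetical; the policy grammar is the standard IAM JSON format:

```python
# Build a minimal IAM policy document: only the listed actions, only
# on the named resource. (Illustrative; real policies are attached to
# users, groups, or roles via IAM.)
import json

def least_privilege_policy(actions, resource):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": sorted(actions),  # nothing beyond these actions
            "Resource": resource,       # nothing beyond this resource
        }],
    }

policy = least_privilege_policy(
    ["s3:GetObject"], "arn:aws:s3:::example-bucket/*")
print(json.dumps(policy, indent=2))
```

Starting from a document like this and adding actions only as tasks require them is the "start minimal, grant more when necessary" practice described above.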

https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege

 

AWS Shared Responsibility Model

Under the shared responsibility model, AWS is responsible for the hardware and software that run AWS services. This includes patching the infrastructure software and configuring infrastructure devices. As a customer, you are responsible for implementing best practices for data encryption, patching guest operating system and applications, identity and access management, and network & firewall configurations.

The AWS Customer is responsible for all network and firewall configurations, including the configuration of Security Groups, Network Access Control Lists (Network ACLs), and Routing tables.

According to the AWS Shared Responsibility Model, AWS Customers are responsible for Client-side encryption and Server-side encryption. However, for some AWS fully managed services such as Amazon DynamoDB, server-side encryption is automatically done by AWS. Amazon DynamoDB transparently encrypts and decrypts all tables when they are written to disk. There is no option to enable or disable Server-side encryption.

AWS offers a lot of services and features that help AWS customers protect their data in the cloud. Customers can protect their data by encrypting it in transit and at rest. They can use CloudTrail to log API and user activity, including who, what, and from where calls were made. They can also use the AWS Identity and Access Management (IAM) to control who can access or edit their data.

Shared Controls are controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.

Examples include:

** Patch Management – AWS is responsible for patching the underlying hosts and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.

** Configuration Management – AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.

** Awareness & Training - AWS trains AWS employees, but a customer must train their own employees.

A computer on which AWS runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. AWS drives the concept of virtualization by allowing the physical host machine to operate multiple virtual machines as guests (for multiple customers) to help maximize the effective use of computing resources such as memory, network bandwidth and CPU cycles.

The customer is responsible for securing their network by configuring Security Groups, Network Access control Lists (Network ACLs), and Routing Tables. The customer is also responsible for setting a password policy on their AWS account that specifies the complexity and mandatory rotation periods for their IAM users' passwords.

Disk disposal (storage device decommissioning): When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Controlling physical access to compute resources: AWS is responsible for controlling physical access to the data centers.

Patching the Network infrastructure: Patching the underlying infrastructure is the responsibility of AWS. The customer is responsible for patching the Operating System of their EC2 instances and any software installed on these instances.

Customers should be aware that their responsibilities may vary depending on the AWS services chosen. For example, when using Amazon EC2, you are responsible for applying operating system and application security patches regularly. However, such patches are applied automatically when using Amazon RDS.

AWS products that fall into the well-understood category of Infrastructure as a Service (IaaS)—such as Amazon EC2, Amazon VPC, and Amazon S3—are completely under your control and require you to perform all of the necessary security configuration and management tasks. For example, for EC2 instances, you’re responsible for management of the guest OS (including updates and security patches), any application software or utilities you install on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. These are basically the same security tasks that you’re used to performing no matter where your servers are located.

AWS is responsible for the security configuration of its managed services. Examples of these types of services include Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon Elastic MapReduce, and Amazon WorkSpaces. For most of these services, all you have to do is to configure logical access controls on the resources and protect your account credentials, but overall, the security configuration work is performed by the service.

A computer on which AWS runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. Through virtualization, AWS allows the physical host machine to operate multiple virtual machines as guests (for multiple customers) to help maximize the effective use of computing resources such as memory, network bandwidth, and CPU cycles.

Patching the guest operating system is the responsibility of AWS for the managed services only (such as Amazon RDS). The customer is responsible for patching the guest OS for other services (such as Amazon EC2).

AWS is responsible for patching the underlying hosts, upgrading the firmware, and fixing flaws within the infrastructure for all services, including Amazon EC2.

https://aws.amazon.com/compliance/shared-responsibility-model/

https://aws.amazon.com/what-is-cloud-object-storage/

 

AWS Consolidated Billing

For billing purposes, the consolidated billing feature of AWS Organizations treats all the accounts in the organization as one account. This means that all accounts in the organization can receive the hourly cost benefit of Reserved Instances that are purchased by any other account. For example, suppose that Fiona and John each have an account in an organization. Fiona has five Reserved Instances of the same type, and John has none. During one particular hour, Fiona uses three instances and John uses six, for a total of nine instances on the organization's consolidated bill. AWS bills five instances as Reserved Instances, and the remaining four instances as On-Demand instances.
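The allocation for that hour can be sketched as a small calculation. This is an illustrative model of the rule, not an AWS API; the function name is made up:

```python
def allocate_reserved_hours(reserved_count, instances_running):
    """Split one hour of organization-wide usage into Reserved-billed
    and On-Demand-billed instances.

    reserved_count: matching Reserved Instances owned anywhere in the
    organization; instances_running: instances each member account ran
    during the hour.
    """
    total = sum(instances_running)
    billed_reserved = min(reserved_count, total)
    billed_on_demand = total - billed_reserved
    return billed_reserved, billed_on_demand

# Fiona owns 5 RIs; Fiona runs 3 instances, John runs 6.
print(allocate_reserved_hours(5, [3, 6]))  # (5, 4)
```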

With Consolidated Billing, you can combine the usage across all accounts in the organization to share the Reserved Instance discounts, volume pricing discounts, and Savings Plans. This can result in a lower charge for your project, department, or company than with individual standalone accounts.

Do Reserved Instances have better performance than On-Demand instances? No, there is no difference in performance between On-Demand and Reserved instances of the same type.

Can Reserved Instance discounts only be shared with the master account? No, Reserved Instance discounts can be shared with all accounts in the organization.

https://docs.aws.amazon.com/aws-technical-content/latest/cost-optimization-reservation-models/consolidated-billing.html

https://aws.amazon.com/organizations/

 

AWS Pricing

There are no startup or termination fees associated with Amazon EC2.

AWS pay-as-you-go pricing model is similar to how you pay for utilities like water and electricity. With Amazon EC2 on-demand instances, you only pay for the compute capacity you consume, and once you stop using them, there are no additional costs or termination fees.

With On-Demand instances, you pay for compute capacity by the hour or the second depending on which instances you run. No longer-term commitments or upfront payments are needed.

With per-second billing, you pay for only what you use. It removes the cost of unused minutes and seconds in an hour from your bill, so you can focus on improving your applications instead of maximizing usage to the hour. Workloads that run for irregular periods of time, such as dev/test environments, data processing, analytics, batch processing, and gaming applications, benefit in particular.

Per-second billing is available for instances launched in:

- On-Demand, Reserved and Spot forms

- All regions and Availability Zones

- Amazon Linux, Windows and Ubuntu
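The difference between per-second and per-hour billing can be illustrated with a small sketch. The $0.10/hour rate is a made-up figure and the helper is hypothetical, not an AWS API:

```python
import math

def run_cost(run_seconds, hourly_rate, per_second=True):
    """Cost of a single instance run. Per-second billing has a
    60-second minimum; per-hour billing rounds each run up to a
    full hour."""
    if per_second:
        billable = max(run_seconds, 60)
        return hourly_rate * billable / 3600
    return hourly_rate * math.ceil(run_seconds / 3600)

rate = 0.10  # hypothetical $/hour On-Demand rate
print(round(run_cost(600, rate), 4))          # a 10-minute run, billed per second
print(run_cost(600, rate, per_second=False))  # the same run, rounded up to one hour
```

With per-hour billing the 10-minute run costs a full hour; with per-second billing you pay only for the 600 seconds actually used.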

https://aws.amazon.com/ec2/pricing/

 

AWS Consolidated Billing

AWS consolidated billing enables an organization to consolidate payments for multiple AWS accounts by designating a single paying account. For billing purposes, AWS treats all the accounts on the consolidated bill as one account. Some services, such as Amazon EC2 and Amazon S3, have volume pricing tiers across certain usage dimensions that give the user lower prices the more they use the service. For example, suppose three accounts each use 50 TB of S3 storage, with the first 50 TB of usage priced at $23/TB and usage beyond that at $22/TB. Billed separately, each account pays $23*50, for a total of $23*50*3 = $3,450 (because they are 3 different accounts). With consolidated billing, the combined 150 TB is treated as one account's usage: the first 50 TB is billed at $23/TB and the remaining 100 TB at $22/TB, for a total of $23*50 + $22*100 = $3,350, a saving of $100.
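The tiered calculation can be sketched in a few lines. The tier boundary and per-TB prices are the illustrative figures from the example above, not a live price list:

```python
def s3_tiered_cost(tb_used, tiers=((50, 23.0), (float("inf"), 22.0))):
    """Price usage across volume tiers given as (tier_size_tb, price_per_tb)."""
    cost, remaining = 0.0, tb_used
    for size, price in tiers:
        in_tier = min(remaining, size)
        cost += in_tier * price
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost

separate = 3 * s3_tiered_cost(50)   # three standalone accounts, 50 TB each
combined = s3_tiered_cost(150)      # one consolidated bill for 150 TB
print(separate, combined, separate - combined)  # 3450.0 3350.0 100.0
```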

 HOW IT WORKS

After you create an organization and verify that you own the email address associated with the master (management) account, you can invite existing AWS accounts to join your organization. When you invite an account, the AWS Organizations service sends an invitation to the account owner, who decides whether to accept or decline the invitation. If they accept, their account becomes a member of that organization.

At the moment an account accepts the invitation to join an organization, the master account of the organization becomes liable for all charges accrued by the new member account. The payment method attached to the member account is no longer used. Instead, the payment method attached to the master account of the organization pays for all charges accrued by the member account.

https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_invites.html

https://aws.amazon.com/s3/pricing/

 

Amazon EBS Snapshots and Subscriptions

Creating snapshots of EBS Volumes can help ensure that you have a backup of your EBS volumes just in case any issues arise. You can use Amazon Data Lifecycle Manager (Amazon DLM) to automate the creation, retention, and deletion of EBS snapshots.

Automating snapshot management with Amazon DLM helps you to:

- Protect valuable data by enforcing a regular backup schedule.

- Retain backups as required by auditors or internal compliance.

- Reduce storage costs by deleting outdated backups.

- Create disaster recovery backup policies that back up data to isolated accounts.

Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn't require you to build, maintain, and secure your own key management infrastructure. Encryption operations occur on the servers that host EC2 instances, ensuring the security of both data-at-rest and data-in-transit between an instance and its attached EBS storage.

It is the responsibility of AWS to control and restrict access to its data centers.

To make a backup of your EBS volumes you should use the Snapshot feature. Snapshots are point-in-time copies: they reflect the exact image of the volume at the moment the snapshot was initiated (copy-on-write consistency).

It is the responsibility of AWS to regularly update firmware on hardware devices.

EBS Snapshots are incremental backups, which means that only the blocks on the device that have changed after your last snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data.
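A toy model of the incremental behavior, assuming a volume is represented as a map from block number to block contents (purely illustrative; EBS does not expose blocks this way):

```python
def incremental_snapshot(volume_blocks, previous_snapshot):
    """Return only the blocks that changed since the previous snapshot,
    which is all an incremental snapshot needs to store."""
    return {blk: data for blk, data in volume_blocks.items()
            if previous_snapshot.get(blk) != data}

snap1 = {0: "aaa", 1: "bbb", 2: "ccc"}             # first (full) snapshot
volume = {0: "aaa", 1: "XXX", 2: "ccc", 3: "ddd"}  # block 1 changed, block 3 added
snap2 = incremental_snapshot(volume, snap1)
print(snap2)  # {1: 'XXX', 3: 'ddd'} -- only the changed blocks are saved
```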

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html

 

AWS Instance Types

Spot instances: Spot instances are not the right choice when applications must run without interruption.

Spot instances provide a discount (up to 90%) off the On-Demand price. The Spot price is determined by long-term trends in supply and demand for EC2 spare capacity. If the Spot price exceeds the maximum price you specify for a given instance or if capacity is no longer available, your instance will automatically be interrupted.

Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if you don't mind if your applications get interrupted. For example, Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks. 

Reserved instances: Reserved Instances are recommended for customers who can commit to using EC2 over a 1- or 3-year term to reduce their total computing costs. Even if a project will last for more than a year, the cost benefit of acquiring Reserved Instances is not as great as the cost benefit of using Spot Instances; the Spot option provides the largest discount (up to 90%).

Reserved instances are not appropriate when the reservation length needs to be less than one year. The shortest reservation length for a reserved instance is one year.

On-demand instances: On-Demand instances are significantly less cost-effective than Spot instances.

With On-Demand instances, you pay for compute capacity by the hour or second (minimum of 60 seconds) with no long-term commitments. You can increase or decrease your compute capacity depending on the demands of your application and only pay for what you use.

The use of On-Demand instances frees you from the costs and complexities of planning, purchasing, and maintaining hardware and transforms what are commonly large fixed costs into much smaller variable costs. On-Demand instances also remove the need to buy “safety net” capacity to handle periodic traffic spikes.

Dedicated instances: Dedicated instances are used when you need your instances to be physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated instances are significantly more expensive than Spot Instances.

When your needs change, you can exchange your Convertible Reserved Instances and continue to benefit from the reservation's pricing discount. With Convertible RIs, you can exchange one or more Reserved Instances for another Reserved Instance with a different configuration, including instance family, operating system, and tenancy. There are no limits to how many times you perform an exchange, as long as the new Convertible Reserved Instance is of an equal or higher value than the original Convertible Reserved Instances that you are exchanging.
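The "equal or higher value" exchange rule can be expressed as a simple check. The dollar values and helper are hypothetical, not an AWS API:

```python
def can_exchange(original_values, new_value):
    """A Convertible RI exchange is allowed when the new Reserved
    Instance's value is equal to or higher than the combined value
    of the Convertible RIs being traded in."""
    return new_value >= sum(original_values)

# Trading in two Convertible RIs worth $100 and $50 combined:
print(can_exchange([100.0, 50.0], 160.0))  # allowed: new RI is worth more
print(can_exchange([100.0, 50.0], 120.0))  # not allowed: new RI is worth less
```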

Standard RIs: You cannot exchange Standard Reserved Instances, but you can modify them. You can modify attributes such as the Availability Zone, instance size (within the same instance family), and scope of your Reserved Instance (regional or zonal). Standard RIs provide the most significant discount (up to 72% off On-Demand) and are best suited for steady-state usage.

"Elastic RIs" and "Premium RIs" are not valid RI types.

https://aws.amazon.com/ec2/pricing/reserved-instances/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-convertible-exchange.html

 

Amazon S3

 

Companies today need the ability to simply and securely collect, store, and analyze their data at a massive scale. Amazon S3 is object storage built to store and retrieve any amount of data from anywhere – websites and mobile apps, corporate applications, and data from IoT sensors or devices. It’s a simple storage service that offers highly available and infinitely scalable data storage infrastructure at very low costs. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry. S3 provides comprehensive security and compliance capabilities that meet even the most stringent regulatory requirements. It gives customers flexibility in the way they manage data for cost optimization, access control, and compliance. S3 provides query-in-place functionality, allowing you to run powerful analytics directly on your data at rest in S3. And Amazon S3 is the most supported cloud storage service available, with integration from the largest community of third-party solutions, systems integrator partners, and other AWS services.

Amazon S3 stores any number of objects, but each object does have a size limitation. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes.
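A minimal sketch of the object size limit. The helper is hypothetical, and it treats 5 terabytes as 5 TiB for the byte count, which is a simplifying assumption:

```python
MAX_OBJECT_BYTES = 5 * 1024**4  # 5 TB upper limit per S3 object (approximated as 5 TiB)

def valid_s3_object_size(size_bytes):
    """An individual S3 object can range from 0 bytes up to 5 terabytes."""
    return 0 <= size_bytes <= MAX_OBJECT_BYTES

print(valid_s3_object_size(0))            # empty objects are allowed
print(valid_s3_object_size(5 * 1024**4))  # exactly at the limit
print(valid_s3_object_size(6 * 1024**4))  # over the limit
```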

https://aws.amazon.com/s3/

 

AWS Budgets: AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount.

AWS Cost Explorer: AWS Cost Explorer provides an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time.

AWS Elastic Load Balancer: AWS Elastic Load Balancer (ELB) is a service that distributes the incoming application traffic to multiple targets that you define.

Amazon Aurora: Amazon Aurora doesn’t support NoSQL databases. Amazon Aurora is a MySQL and PostgreSQL-compatible relational database.

Amazon Redshift: Amazon Redshift doesn’t support non-relational data. Amazon Redshift is a fully managed data warehouse service that allows you to run complex analytic queries against petabytes of structured data using standard SQL and your existing Business Intelligence (BI) tools.

Amazon S3 Glacier Deep Archive: Amazon S3 Glacier Deep Archive is an extremely low-cost storage service that provides secure, durable, and flexible storage for long-term data backup and archival. With Amazon S3 Glacier Deep Archive, customers can reliably store their data for as little as $1 per terabyte per month, a significant savings compared to on-premises solutions. S3 Glacier Deep Archive enables customers to offload the administrative burdens of operating and scaling storage to AWS, so that they don’t have to worry about capacity planning, hardware provisioning, data replication, hardware failure detection and repair, or time-consuming hardware migrations.

S3 Intelligent-Tiering: S3 Intelligent-Tiering is ideal for data with unknown or changing access patterns.

S3 Intelligent-Tiering is the first cloud object storage class that delivers automatic cost savings by moving data between two access tiers - frequent access and infrequent access - when access patterns change.
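The tier-movement rule can be sketched as a tiny function. This is an illustration of the stated behavior only; the 30-day threshold is an assumption based on the service's documented default for the infrequent access tier:

```python
def intelligent_tier(days_since_last_access, threshold_days=30):
    """Objects not accessed for a run of consecutive days move to the
    infrequent access tier; any access moves them back to frequent."""
    if days_since_last_access >= threshold_days:
        return "infrequent"
    return "frequent"

print(intelligent_tier(5))   # recently accessed -> frequent access tier
print(intelligent_tier(45))  # untouched for 45 days -> infrequent access tier
```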

AWS Marketplace: AWS Marketplace is a curated digital catalog that makes it easy for customers to find, buy, deploy, and manage third-party software and services that customers need to build solutions and run their businesses. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, business intelligence, database, and DevOps. AWS Marketplace also simplifies software licensing and procurement with flexible pricing options and multiple deployment methods. Customers can quickly launch pre-configured software with just a few clicks, and choose software solutions in AMI and SaaS formats, as well as other formats. Flexible pricing options include free trial, hourly, monthly, annual, multi-year, and BYOL, and customers are billed from one source, AWS.

Amazon EBS: Amazon EBS is block-level storage that provides storage volumes for use with Amazon EC2 and Amazon RDS. Amazon Elastic Block Store (Amazon EBS) is a storage service, NOT a database service.

AWS Organizations: AWS Organizations provides central governance and management across multiple AWS accounts.

AWS Systems Manager: AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources.

AWS Certificate Manager: AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources

AWS Storage Gateway: AWS Storage Gateway is not a caching service, it is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage.

Amazon EBS volume: An Amazon EBS volume is a durable, block-level storage device that you can attach to a single EC2 instance. You can use EBS volumes as primary storage for data that requires frequent updates, such as the system drive for an instance or storage for a database application. You can also use them for throughput-intensive applications that perform continuous disk scans.

AWS OpsWorks: AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments.

Amazon EMR: EMR is used to process vast amounts of data easily and securely. Use cases include: big data, log analysis, web indexing, data transformations (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

AWS Config: AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.

Amazon CloudFront: Amazon CloudFront gives businesses and web application developers an easy and cost effective way to distribute content globally with low latency and high data transfer speeds.

Amazon S3 is object-level storage built to store and retrieve any amount of data from anywhere – websites and mobile apps, corporate applications, and data from IoT sensors or devices. It is designed to deliver 99.999999999% durability, and stores data for millions of applications used by market leaders in every industry.

Amazon EFS: Amazon EFS is a file-level storage technology that provides massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistently low latencies.

Amazon Instance Store: An instance store provides temporary block-level storage for your EC2 instances. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content.

AWS CloudFormation: AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts.

Amazon Kinesis Video Streams: Amazon Kinesis Video Streams enables you to securely stream video from connected devices (IoT devices) to AWS for analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs.

Amazon SNS: Amazon Simple Notification Service (SNS) is a fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email. 
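The fan-out pattern can be modeled in a few lines of plain Python. This is a toy in-memory model, not the SNS API; the subscriber names are made up:

```python
class Topic:
    """Toy model of an SNS topic: one published message is delivered
    to every subscribed endpoint (Lambda function, HTTP/S webhook,
    email, SMS, ...)."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        for deliver in self.subscribers:  # fan out to all endpoints
            deliver(message)

received = []
topic = Topic()
topic.subscribe(lambda m: received.append(("lambda-fn", m)))
topic.subscribe(lambda m: received.append(("http-webhook", m)))
topic.publish("order-created")
print(received)  # both subscribers received the same message
```

The key property, mirrored here, is that the publisher sends once and every subscriber processes its own copy in parallel, which is what decouples the systems.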

AWS Trusted Advisor: AWS Trusted Advisor is an online tool that provides customers with real time guidance to help them provision their resources following AWS best practices.

IAM Groups: IAM groups are not used to manage multiple AWS accounts. An IAM group is a collection of IAM users - within the same AWS account - that are managed as a unit. IAM Groups let customers specify permissions for multiple users, which can make it easier to manage the permissions for those users. For example, customers could have a group called Admins and give that group the types of permissions that administrators typically need.

AWS Config: AWS Config is a fully managed service that provides customers with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance.

Concierge Support Team: The AWS Concierge Support Team is a specialized offering available only to customers having an Enterprise Support subscription. The Concierge Team assists customers with their billing and account inquiries.

Amazon CloudWatch: Amazon CloudWatch is used to monitor the utilization of AWS resources and services. You can use CloudWatch to visualize system metrics, take automated actions, troubleshoot performance issues, discover insights to optimize your applications, and ensure they are running smoothly.

AWS Direct Connect: AWS Direct Connect allows you to establish a dedicated network connection from your premises to AWS.

AWS Regions: An AWS Region is a physical location in the world where AWS has multiple Availability Zones. Availability Zones consist of one or more discrete data centers, each with redundant power, networking, and connectivity, housed in separate facilities.

AWS VPN: AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to AWS. AWS Client VPN enables you to securely connect users (from any location) to AWS or on-premises networks.

AWS Shield: AWS Shield does not provide security recommendations. AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.

AWS Management Console: The AWS Management Console is used to access and manage Amazon Web Services through a simple and intuitive web-based user interface. The console itself doesn’t provide any recommendations.

AWS Secrets Manager: AWS Secrets Manager does not provide security recommendations. AWS Secrets Manager is a secrets management service that enables you to store, retrieve, rotate, audit, and monitor secrets centrally. AWS Secrets Manager allows you to manage secrets such as database credentials, on-premises resource credentials, SaaS application credentials, third-party API keys, and Secure Shell (SSH) keys.

AWS Config: AWS Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications to enable security and governance. With AWS Config you can discover existing AWS resources, export a complete inventory of your AWS resources with all configuration details, and determine how a resource was configured at any point in time. These capabilities enable compliance auditing, security analysis, and resource change tracking.

AWS CloudTrail: AWS CloudTrail is an AWS service that can be used to monitor all user interactions with the AWS environment.

AWS Lambda: AWS Lambda is a serverless compute service.

Amazon SES: Amazon SES (Amazon Simple Email Service) is a flexible, affordable, and highly-scalable email messaging platform for businesses and developers.

Amazon Connect: Amazon Connect is a self-service, cloud-based contact center service that makes it easy for any business to deliver better customer service at lower cost. Amazon Connect cannot be used to send billing notifications.

AWS Direct Connect: AWS Direct Connect is a cloud service solution that is used to establish a dedicated network connection between your premises and AWS.

AWS OpsWorks: AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. Chef and Puppet are automation platforms that allow you to use code to automate the configurations of your servers.

Amazon Inspector: Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS.

EC2 Instance Usage Report: This report shows you your historical EC2 instance usage, and helps you plan for future EC2 usage. EC2 Instance Usage Reports are designed to make it easier for you to track and better manage your EC2 usage and spending.

AWS Trusted Advisor: AWS Trusted Advisor is an online tool that provides real time guidance to help you provision your resources following AWS best practices.

AWS Server Migration Service: AWS Server Migration Service (SMS) is used to migrate your on-premises workloads to AWS.

AWS Application Discovery Service: AWS Application Discovery Service helps enterprise customers plan migration projects by gathering information about their on-premises data centers.

Amazon VPC: Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment. Amazon VPC is not a managed service; you are responsible for managing almost everything when using the Amazon VPC service.

Amazon Elastic Compute Cloud: Amazon Elastic Compute Cloud (Amazon EC2) is a service that gives you complete control over your compute resources. Apart from patching the underlying host - which is the responsibility of AWS - you are responsible for managing almost everything in your server instances when using Amazon EC2.

Amazon S3 Standard: S3 Standard offers high durability, availability, and performance object storage for frequently accessed data.

Amazon S3 Standard-Infrequent Access: Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is for data that is accessed less frequently, but requires rapid access when needed.

Amazon S3 Glacier: Amazon S3 Glacier is a low-cost storage class for data that is rarely accessed; such as archived data.

AWS Customer Service: AWS Customer Service can help AWS customers with their billing and account inquiries, and it is included in all AWS support plans (Basic, Developer, Business, and Enterprise). However, because AWS Customer Service is not dedicated to specific types of inquiries, it is not as quick or as efficient as the AWS Support Concierge. The AWS Support Concierge is available only to AWS Enterprise Support subscribers and is dedicated solely to helping AWS customers with their billing and account inquiries.

AWS Operations Support: AWS Operations Support is an Enterprise support program that provides operations assessments and analysis to identify gaps across the operations lifecycle, as well as recommendations based on best practices.

AWS Personal Health Dashboard: AWS Personal Health Dashboard provides a personalized view of the health of the specific services that are powering your workloads and applications. AWS Personal Health Dashboard proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to minimize the impact of events in progress, and plan for any scheduled changes, such as AWS hardware maintenance.

Infrastructure Event Management (IEM): AWS Infrastructure Event Management (IEM) is a structured program available to Enterprise Support customers (and Business Support customers for an additional fee) that helps you plan for large-scale events such as product or application launches, infrastructure migrations, and marketing events. With Infrastructure Event Management, you get strategic planning assistance before your event, as well as real-time support during these moments that matter most for your business. AWS Infrastructure Event Management is not for day-to-day support needs.

AWS Identity and Access Management (IAM) user: An AWS Identity and Access Management (IAM) user is an entity that you create in AWS to represent the person or service that uses it to directly interact with AWS. A primary use for IAM users is to grant individuals access to the AWS Management Console for interactive tasks and/or to make programmatic requests to AWS services using the API or CLI.

AWS Consulting Partners: AWS Consulting Partners are not part of AWS support. AWS Consulting Partners are professional services firms that help customers design, architect, build, migrate, and manage their workloads and applications on AWS. Consulting Partners include System Integrators, Strategic Consultancies, Agencies, Managed Service Providers, and Value-Added Resellers.

Amazon DynamoDB: Amazon DynamoDB does not support MySQL. Amazon DynamoDB is a NoSQL database service.

Amazon Neptune: Amazon Neptune is a graph database service, not a MySQL database service. Amazon Neptune is used to build and run applications that work with highly connected datasets, such as social networking, recommendation engines, and knowledge graphs.

Amazon Cognito: Amazon Cognito allows you to add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily.

AWS KMS: AWS KMS provides a highly available key storage, management, and auditing solution for you to encrypt data within your own applications and control the encryption of stored data across AWS services.

AWS Config: AWS Config is a service that enables you to monitor, assess, and audit all changes made to your AWS resources.

Amazon Redshift: Amazon Redshift is not a MySQL database service. Amazon Redshift is a fully managed data warehouse service that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools.

Amazon DynamoDB: Amazon DynamoDB is not a MySQL database service. Amazon DynamoDB is a fully managed NoSQL database service.