Cloud: AWS Beginner's Guide

Virtual Private Cloud (VPC)

  • A VPC is your own virtual network inside AWS, similar to having your own data center inside AWS.
  • The AWS customer has full control over the resources and virtual compute instances (virtual servers) hosted inside that VPC.
  • It is secure and logically isolated from other VPCs on AWS.
  • A VPC cannot span regions, i.e. a VPC is region specific.
  • One VPC can span multiple Availability Zones (AZs) (minimum 2 or more).
  • A subnet cannot extend beyond a single AZ.
  • A VPC can have one or more IP subnets inside each AZ.

CIDR and IP Address range 

(Classless Inter-Domain Routing, sometimes called supernetting).
  • Once the VPC is created you cannot change its primary CIDR block range (you would have to create a new VPC).
  • The CIDR block size can range from /16 (largest) to /28 (smallest).
  • Different CIDR blocks in a VPC cannot overlap.
  • You can expand your VPC by adding new (secondary) CIDR IP address ranges; secondary ranges can later be deleted.
  • AWS reserves the first four IP addresses and the last IP address in each subnet.
  • Say 10.0.0.0/24 
    • 10.0.0.0 Network ID
    • 10.0.0.1 VPC router
    • 10.0.0.2 DNS related
    • 10.0.0.3 Future use
    • 10.0.0.255 Broadcast
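
The reserved addresses above can be computed for any subnet with Python's standard `ipaddress` module; this is a minimal sketch (the function name is our own, not an AWS API):

```python
import ipaddress

def reserved_addresses(cidr):
    """Return the five addresses AWS reserves in every subnet:
    network ID, VPC router, DNS, future use, and broadcast."""
    addrs = list(ipaddress.ip_network(cidr))  # all addresses, in order
    return {
        "network": str(addrs[0]),      # 10.0.0.0  - network ID
        "vpc_router": str(addrs[1]),   # 10.0.0.1  - VPC router
        "dns": str(addrs[2]),          # 10.0.0.2  - DNS related
        "future_use": str(addrs[3]),   # 10.0.0.3  - reserved for future use
        "broadcast": str(addrs[-1]),   # 10.0.0.255 - broadcast
    }

print(reserved_addresses("10.0.0.0/24"))
# Usable addresses in a subnet = total - 5 reserved
print(ipaddress.ip_network("10.0.0.0/24").num_addresses - 5)  # 251
```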

Implied Router

  • No request has to be made; it is an automatic facility.
  • The implied router automatically routes communication between subnets and to the outside internet world.

Route Tables

  • These are tables whose entries specify the destination and the target for each packet.

Security Groups:

  • Security groups are virtual firewalls that protect your virtual servers (EC2 instances).

Network Access Control Lists (NACLs)

  • First line of defense.
  • Security groups function at the virtual NIC level, whereas network ACLs work at the subnet level.

Internet Gateways

  • A VPC without an internet gateway cannot communicate with the internet.
  • It is a horizontally scaled, redundant, highly available VPC component.
  • Only ONE internet gateway per VPC.
  • It supports both IPv4 and IPv6.

Virtual Private Gateway

  • The virtual private gateway connects your VPC to your own premises, headquarters or branches through a VPN or Direct Connect.

Direct Connect

  • AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS.
  • Using AWS Direct Connect, you can establish private connectivity between AWS and your data center, office or colocation environment.
  • In many cases it can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections (VPN).
  • AWS Direct Connect makes it easy to scale your connection to meet your needs.
  • AWS Direct Connect provides 1 Gbps to 10 Gbps connections; you can easily provision multiple connections if you need the capacity.

Default VPC

  • When an AWS account is created, a default VPC is created in every AWS region.
  • It has a default CIDR, security groups, NACLs and route tables.
  • There is a default subnet in each AZ.
  • An internet gateway is also attached by default.

Custom VPC

  • This VPC is created by the account owner.
  • You decide the CIDR at the time of creation.
  • It also gets default security groups, NACLs and route tables.
  • There is no default subnet in any AZ.
  • It does not have an internet gateway; attach one if you require it.

Route Tables and Security Groups

Route Tables

  • These are tables whose entries specify the destination and the target for each packet.
  • Each subnet MUST be associated with exactly ONE route table.
  • One route table can be associated with multiple subnets.
  • If you don't specify a subnet-to-route-table association, the subnet is associated with the Main (default) VPC route table.
  • The Main table is the default route table that is created automatically when you create the VPC (you can edit it but can't delete it).
  • A subnet's association can be changed to another (custom) route table.
  • A custom route table can become the Main route table.
  • Every route table has a default (local) rule that lets all VPC subnets communicate; you cannot modify or delete it.
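
Route selection can be thought of as longest-prefix matching over the table's entries. A minimal sketch, with a hypothetical route table (the gateway ID is an invented example):

```python
import ipaddress

# Hypothetical route table: destination CIDR -> target
routes = {
    "10.0.0.0/16": "local",    # the default rule for VPC-internal traffic
    "0.0.0.0/0": "igw-12345",  # everything else goes to the internet gateway
}

def lookup(dest_ip, table):
    """Pick the most specific (longest-prefix) route matching dest_ip."""
    matches = [
        (ipaddress.ip_network(cidr), target)
        for cidr, target in table.items()
        if ipaddress.ip_address(dest_ip) in ipaddress.ip_network(cidr)
    ]
    # The longest prefix (most specific route) wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(lookup("10.0.5.9", routes))  # "local": stays inside the VPC
print(lookup("8.8.8.8", routes))   # "igw-12345": routed out via the IGW
```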

Security Group

  • A security group is a virtual firewall.
  • It controls traffic at the virtual server (EC2 instance) level; specifically it is associated with the virtual network interface, also known as an ENI (Elastic Network Interface).
  • So it is the defense in depth, basically the last line of defense in the VPC.
  • An EC2 instance must have a security group at launch.
  • Any EC2 instance in any AZ can use any security group of that VPC (as an SG is a resource of the VPC).
  • Security groups are stateful, and their rules are directional:
    • If inbound traffic is allowed, return traffic (outbound) is allowed (no rules required).
    • If outbound traffic is allowed, return traffic (inbound) is allowed (no rules required).
  • They can have only PERMIT (allow) rules.
  • DENY rules are not possible.
  • All rules are checked to find a permit rule.
  • There is an implicit deny rule at the end (by default).
  • A default security group exists in both default and custom VPCs:
    • Its inbound rule allows multiple EC2 instances assigned to the same security group to talk to each other.
    • All outbound traffic is allowed by default.

Note: Adding inbound rules can also allow multiple EC2 instances with different SGs in the same or different subnets to talk to each other.
  • A custom security group can be created in a default or custom VPC.
  • With no inbound rules, all inbound traffic is denied by default.
  • All outbound traffic is allowed by default.
  • All security group rules, inbound and outbound, can be changed (unlike a route table's local rule).
  • The default security group cannot be deleted.
  • Changes to a security group take effect immediately.
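
The allow-only-plus-implicit-deny behavior can be sketched in a few lines; the rules below are hypothetical examples, not a real AWS configuration:

```python
import ipaddress

# Hypothetical inbound rules: (protocol, port, source CIDR)
inbound_rules = [
    ("tcp", 22, "203.0.113.0/24"),  # SSH only from an example office range
    ("tcp", 443, "0.0.0.0/0"),      # HTTPS from anywhere
]

def allows_inbound(proto, port, src_ip, rules):
    """Security groups are allow-only: any matching rule permits the
    traffic; otherwise the implicit deny at the end applies."""
    src = ipaddress.ip_address(src_ip)
    return any(
        proto == r_proto and port == r_port
        and src in ipaddress.ip_network(r_cidr)
        for r_proto, r_port, r_cidr in rules
    )

print(allows_inbound("tcp", 443, "8.8.8.8", inbound_rules))  # True
print(allows_inbound("tcp", 22, "8.8.8.8", inbound_rules))   # False (implicit deny)
# Stateful: the response to an allowed inbound request needs no outbound rule.
```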

N-ACL’s

  • This function is performed on the implied router.
  • The implied VPC router hosts the NACLs.
  • They work at the subnet level.
  • NACLs are stateless.
  • We can have both PERMIT and DENY rules in a NACL.
  • A NACL is a set of rules, each with a number.
  • NACL rules are checked in order from the lowest number until a permit is found or a deny is reached.
  • You can insert rules later, so reasonable spacing of rule numbers is recommended.
  • NACLs end with an explicit deny ('*' rule) which cannot be deleted.
  • A subnet must be associated with a NACL, else the default NACL is associated automatically.
  • The default NACL allows all inbound and outbound traffic by default.
  • A custom NACL denies all inbound and outbound traffic by default.
  • Changes to a NACL take effect immediately, like an SG.
  • When is a NACL preferred over an SG? When you need DENY rules, which SGs cannot express.
  • Inbound in a NACL means coming from outside the subnet; outbound means going out of the subnet.
  • Inbound for an SG means coming from outside the instance; outbound means going out of the instance's ENI.
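
The ordered, first-match-wins evaluation can be sketched as below; the rule numbers and CIDRs are hypothetical examples:

```python
import ipaddress

# Hypothetical NACL: (rule_number, action, protocol, port, cidr),
# evaluated in ascending rule-number order
nacl_rules = [
    (100, "allow", "tcp", 80, "0.0.0.0/0"),
    (200, "deny",  "tcp", 22, "198.51.100.0/24"),  # block SSH from one range
    (300, "allow", "tcp", 22, "0.0.0.0/0"),        # allow SSH from elsewhere
]

def nacl_decision(proto, port, ip, rules):
    """Check rules from the lowest number; the first match wins.
    The trailing '*' rule denies anything that matched nothing."""
    for _, action, r_proto, r_port, r_cidr in sorted(rules):
        if (proto == r_proto and port == r_port
                and ipaddress.ip_address(ip) in ipaddress.ip_network(r_cidr)):
            return action
    return "deny"  # the final '*' deny, which cannot be deleted

print(nacl_decision("tcp", 22, "198.51.100.7", nacl_rules))  # deny: 200 fires before 300
print(nacl_decision("tcp", 22, "8.8.8.8", nacl_rules))       # allow: rule 300
print(nacl_decision("udp", 53, "8.8.8.8", nacl_rules))       # deny: '*' rule applies
```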

NAT Instance and NAT Gateway

NAT Instance

  • A NAT instance allows private-subnet EC2 instances to get to the internet (acting as a proxy).
  • The NAT instance is configured in a public subnet.
  • A NAT instance must be assigned a security group.
  • The source/destination check must be disabled on it.

NAT Gateway:

  • It is an AWS managed service.
  • A NAT gateway requires an Elastic IP address.
  • It cannot be assigned a security group.
  • AWS is responsible for its security, patching, etc.

VPC Peering

  • VPC peering is a network connection between two VPCs that enables you to route traffic between them (IPv4 or IPv6).
  • You can have VPC peering between VPCs in the same account, a different account, the same region, and even across regions, known as inter-region VPC peering (IPv6 is not supported for inter-region peering).
  • AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection and does not rely on a separate piece of physical hardware.
  • There is no single point of failure for communication, nor a bandwidth bottleneck.
  • It is a simple and cost-effective way to share resources between regions or replicate data for geographic redundancy across regions.

To establish a VPC peering Connections

  • The owner of a VPC sends a request to the owner of the peer VPC to create a VPC peering connection.
  • The peer VPC can be owned by you or by another AWS account.
  • The peer VPC cannot have a CIDR block that overlaps with the requester VPC's.
  • The owner of the peer VPC accepts the VPC peering connection request to activate it.
  • Add a route to one or more of your VPC's route tables that points to the IP address range of the peer VPC.
  • If required, update the security group rules so that instances can communicate to and from the peer VPC (within the region).

Multiple VPC  Peering

  • A VPC peering connection is a one-to-one relationship between two VPCs.
  • You can create multiple VPC peering connections for each VPC that you own.
  • Transitive peering relationships are not supported.

Note: VPC B and VPC C cannot send traffic directly to each other through VPC A. VPC peering supports neither transitive peering relationships nor edge-to-edge routing.
  • Edge-to-edge routing through a VPN connection or an AWS Direct Connect connection of the peer VPC is likewise not supported.
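
The non-transitivity rule can be modeled as simple direct-edge reachability; the VPC names are hypothetical:

```python
# Hypothetical peering connections: each is a one-to-one link between two VPCs
peerings = {("A", "B"), ("A", "C")}

def can_route(src, dst, links):
    """VPC peering is non-transitive: traffic flows only across a
    direct peering connection, never through an intermediate VPC."""
    return (src, dst) in links or (dst, src) in links

print(can_route("B", "A", peerings))  # True  (direct peering exists)
print(can_route("B", "C", peerings))  # False (no transit through VPC A)
```

To connect B and C, a separate B-C peering connection must be created.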

ELASTIC Compute Cloud (EC2)

  • The EC2 service provides resizable compute capacity in the cloud.
  • You get root/administrator access to your EC2 instance (SSH/RDP).
  • The EC2 SLA is 99.95% for each region during any monthly period (approx. 22 min of downtime per month).
  • You can provision your EC2 instance on shared or dedicated hosts (physical servers).
  • To access an EC2 instance you require a key pair.
  • When you launch a new EC2 instance you can create a private/public key pair.
  • You can download the private key only once.
  • If an instance is launched without a key pair, you will not be able to access it (RDP/SSH).
  • There is a 20-EC2-instance soft limit per account; one can submit a request to AWS to increase it.
    • SAN: block-level access, presents a raw disk (e.g. EMC devices).
    • NAS: file-level access, the file system is already created and then shared (NFS/CIFS) (e.g. NetApp devices).
  • Two types of block store devices are supported:
    • Elastic Block Store: persistent, gives you block access.
    • Instance Store: non-persistent virtual disk allocated to the EC2 instance; a root instance store volume can be max 10 GB. (If a Linux server is built on this non-persistent disk, the shutdown option is unavailable; only reboot is available.)
  • EC2 instance root/boot volumes can be EBS or instance store volumes.
  • An EBS-backed EC2 instance has an EBS root volume (its C: drive is EBS-backed).
  • An instance-store-backed EC2 instance has an instance store root volume.






Elastic Block Storage (EBS).

 Amazon EBS volume types fall into two categories:

SSD-backed volumes

  • Optimized for transactional workloads involving frequent read/write operations, high IOPS and low latency.
  • Can be used as boot volumes.

HDD-backed volumes

  • Optimized for large streaming workloads (e.g. video streaming), big data, log processing.
  • Cannot be used as boot volumes.



  • You can add volumes and modify volume size and volume type.
  • You cannot decrease an EBS volume's size.
  • For a running EC2 instance you cannot detach/reattach the EBS root volume (the C: drive cannot be detached/attached; only data volumes like D: can be).
  • EBS has 99.999% availability.

EBS optimized instances

  • EBS-optimized EC2 instances enable the full use of an EBS volume's provisioned IOPS.
  • They deliver dedicated throughput between the EC2 instance and its attached EBS volumes.
  • They are designed to work with all EBS volume types.
  • EBS optimization is available on instance types with good network performance (moderate or high).

EC2 Purchasing Option

  • Reserved Instances:
    • We do not actually buy instances; rather we reserve long-term capacity and pay for it for an instance family/configuration, any time you run one (in that AZ or region).
    • Reserved pricing is applied to any running on-demand instance that matches the reserved one.
    • You can purchase it at a significant discount.
    • Purchased reserved instances are always available.
    • When purchased with an AZ scope, capacity reservation in that AZ is guaranteed.
    • Term options are 1 year or 3 years.
    • Once purchased, it CANNOT be refunded or cancelled.
    • You can, however, sell them on the AWS Reserved Instance Marketplace (Standard only; Convertible cannot be sold).
    • You are billed for a reserved instance whether it is running or stopped.
    • Reserved instances do not renew automatically when the reserved term expires; rather, billing reverts to on-demand billing.
    • You have no control over which EC2 on-demand instance the reserved instance pricing is applied to.
    • Reserved instance benefits can only apply to on-demand instances (not Spot or Dedicated).

Note: If you are using EC2 servers for more than 6 months, use RIs (because you get a bigger discount); if you are using an EC2 instance for 2 weeks, go for Spot instances.
RIs require a time commitment; Spot instances are cheap but come with no lifetime commitment.
  • EC2 Spot Instances
    • AWS Spot instances allow customers to use compute capacity without upfront commitments at prices cheaper than on-demand instance pricing.
    • Customers bid on Spot instances; AWS off-peak pricing fluctuates, and if the price meets the bid price, the instances are allocated to the bidding account.
    • Spot instances may be terminated at any time by AWS when the market price goes higher than the bid price of the clients (who got the instances).
    • Not all instance families are available as Spot instances (T2, for example).
Use Cases

  • Use Spot if you are flexible about the time you want to run your applications.
  • Use it if your applications can be interrupted without impact (in case AWS terminates the instances).
  • Suitable for:
    • Data analysis
    • Batch jobs
    • Background processing
    • Optional tasks

Note: Amazon Machine Image (AMI).

EC2 Instance state

  • Once the instance reaches the running state, it receives a private DNS hostname and possibly a public DNS hostname (depending on whether it is configured to receive a public IP address).
  • You can stop, start, reboot and terminate your instances.
  • If you reboot an EC2 instance, it is considered as still running and does not add an additional hour to your bill (not applicable to per-second billing, which has a minimum of 60 seconds).
  • E.g. Linux basic instances are billed per second:
    • If a Linux instance is used for 50 sec, the charge is for 60 sec (the minimum charge starts from 60 sec).
    • If a Linux instance is used for 70 sec, the charge will be for 70 sec.
  • Stopping and starting adds an hour to your billing.
  • When you stop an instance, AWS shuts it down.
  • Instance-store-backed instances CANNOT be stopped; they can only be rebooted or terminated.
  • EBS-backed instances CAN be stopped; there are no charges for a stopped EC2 instance (but its EBS volumes incur charges).
  • When you stop an EBS-backed instance, any data in instance store volumes is lost.
  • When an EC2 instance is started again, it will likely run on another physical host.
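
The 60-second minimum for per-second billing can be written as a one-liner (a sketch of the billing rule only, not of actual AWS pricing code):

```python
def billed_seconds(used_seconds):
    """Per-second billing (e.g. Linux on-demand) has a 60-second minimum:
    any usage under a minute is still charged as 60 seconds."""
    return max(60, used_seconds)

print(billed_seconds(50))  # 60 (minimum charge applies)
print(billed_seconds(70))  # 70 (charged exactly as used)
```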

When you stop EBS backed Instance

  • The EC2 instance retains its private IPv4 and IPv6 addresses.
  • The EC2 instance releases its public IPv4 address back to the AWS pool.
  • The EC2 instance retains its Elastic IP address (it incurs charges even while not in use, like EBS).

EC2 Reboot

Best Practice
  • Use the EC2 reboot and not the instance OS reboot.
  • When AWS initiates a reboot, it waits 4 minutes; if the instance does not reboot, it forces a hard reboot.
  • An AWS reboot creates AWS CloudTrail logs, which are useful for troubleshooting and audit purposes.

EC2 Instance Termination.

  • By default, EBS root device volumes (created by default when the instance is created) are deleted automatically when the EC2 instance is terminated.
  • Any additional EBS (non-boot/root) volumes attached to the instance persist even after the EC2 instance is terminated.
  • You can modify this behavior for any EBS volume during instance launch or later by modifying the "DeleteOnTermination" attribute.
  • You can view an EBS root volume's "DeleteOnTermination" behavior under "Block Device Mapping".

EC2 Termination - Protection

  • This is a feature you can enable so that an EC2 instance is protected against accidental termination.
  • CloudWatch cannot terminate an EC2 instance with termination protection enabled.
  • If you want to terminate an instance that has termination protection on, you can do so by choosing an OS shutdown and configuring AWS to treat OS shutdown as instance termination.
  • EC2 termination protection can be configured at launch, while running, or while stopped (if EBS-backed).

EC2 Placement Groups

  • It's a logical grouping (clustering) of EC2 instances in the same AZ or in different Availability Zones.
  • A placement group determines how instances are placed on underlying hardware.
  • Two types of placement groups:
    • Cluster - clusters instances into a low-latency group in a single Availability Zone.
    • Spread - spreads instances across underlying hardware in multiple AZs (possible across peered VPCs).
  • There is no charge for creating a placement group (normal EC2 charges apply).

Cluster Placement Groups

  • A cluster placement group is within a single Availability Zone.
  • It is recommended when the majority of your application's network traffic is between the instances in the group (only specific instance types are supported).
  • It is recommended to launch the placement group's instances at the same time and of the same type.
  • If you try to add an instance to the placement group (in the stopped state) and can't due to availability reasons, try to stop and start all instances.
  • This may result in migration to other hosts that have availability of the specific instance types requested for the group.

Spread Placement Groups

  • A spread placement group is a group of instances that are each placed on distinct underlying hardware (so you can mix instance types).
  • Spread placement groups are recommended for applications that have a small number (max 7 per AZ) of critical instances that should be kept separate from each other.
  • If you start or launch an instance in a spread placement group and there is insufficient unique hardware to fulfill the request, the request fails.
  • You can retry later; there is no need to stop and restart as with cluster placement groups.

EC2 Placement Groups

  • A placement group name must be unique within an AWS account per region.
  • You cannot merge two placement groups.
  • An instance cannot be launched into multiple placement groups.

Elastic network interface card

  • By default, eth0 is the primary network interface.
  • You can't move or detach the primary (eth0) interface from an instance.
  • You can add more interfaces to your EC2 instance (the number of additional interfaces is determined by the instance family/type).
  • You can create only one additional Ethernet interface (eth1) when launching an EC2 instance (but no public IPv4 address is then assigned to eth0; you need to assign an Elastic IP address).
  • An ENI is bound to an Availability Zone.
  • You can specify which AZ you want the additional ENI to be added in.
  • You can specify exactly which IP address in the subnet is configured on your instances.
  • Otherwise AWS will assign one automatically from the available subnet IP addresses.
  • Security groups apply to an ENI and not to individual IPs on the interface, hence all IP addresses are subject to the ENI's security group.
  • Attaching an ENI while the instance is running is called "hot attach".
  • Attaching an ENI while the instance is stopped is called "warm attach".
  • Attaching an ENI when the instance is launched is called "cold attach".

EBS-Snapshots

  • EBS snapshots are point-in-time images/copies of your EBS volumes.
  • EBS snapshots are stored on S3; however, you cannot access them directly, only through the EC2 APIs.
  • Snapshots of EBS volumes can be taken manually or automated through lifecycle management.
  • Any data written to the volume after the snapshot process is initiated will not be included in the resulting snapshot.
  • EBS snapshots are stored incrementally.
  • EBS volumes are AZ specific.
  • Snapshots are region specific.
  • The snapshot is created immediately (it may stay in pending status until it is completed).
  • This may take a few minutes or hours to complete (for large volumes), especially for the first snapshot of a volume.
  • While the snapshot status is pending, you can still access the volume, but I/O might be slower because of the snapshot activity.
  • You can create or restore a snapshot to an EBS volume of the same or larger size than the original volume from which the snapshot was initially created (not a smaller size).
  • You can take a snapshot of an EBS volume while the volume is in use on a running EC2 instance.
  • To create a snapshot of a root (boot) EBS volume, you should (it is recommended to) stop the instance first, then take the snapshot (what about an instance-store-backed EC2 instance?).
  • Any data cached by the operating system (OS) or in memory will not be included, which means the snapshot will not be 100% consistent.
  • To create a consistent snapshot of a non-root EBS volume, pause I/O or unmount the volume if possible (or stop the instance for root volumes).
  • You can re-mount the volume while the snapshot status is pending (being processed).
  • Snapshots give low-cost storage on S3 and a guarantee to restore the full data from the snapshot.
  • You are charged for S3 storage and for the data transfer to S3 from the EBS volume you are snapshotting.
  • EBS snapshots are created asynchronously.
  • Deleting a snapshot of a volume has no effect on the volume.
  • Deleting a volume has no effect on the snapshots made from it.
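
"Stored incrementally" means each snapshot records only the blocks that changed since the previous snapshot. A toy sketch of that idea (volumes modeled as block-number-to-content maps; not how EBS stores data internally):

```python
def incremental_blocks(previous_snapshot, current_volume):
    """Return only the blocks that are new or changed since the last
    snapshot; unchanged blocks are referenced, not copied again."""
    return {
        block: data
        for block, data in current_volume.items()
        if previous_snapshot.get(block) != data
    }

snap1 = {0: "aaa", 1: "bbb", 2: "ccc"}                 # first (full) snapshot
volume_now = {0: "aaa", 1: "xxx", 2: "ccc", 3: "ddd"}  # block 1 changed, block 3 added
print(incremental_blocks(snap1, volume_now))  # {1: 'xxx', 3: 'ddd'}
```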

To migrate an EBS volume from one AZ to another

  • Create a snapshot (snapshots are region specific).
  • Create an EBS volume from the snapshot in the intended (other) AZ.

To migrate an EBS volume from one region to another

  • Create a snapshot of the volume.
  • Copy the snapshot and specify the new region.
  • In the new region, create a volume out of the copied snapshot.

EBS Encryption Key


  • To encrypt a volume or snapshot you need an encryption key.
  • These keys are called Customer Master Keys (CMKs) and are managed by the AWS Key Management Service (KMS).
  • When encrypting your first EBS volume you can use the default CMK (snapshots created using this key cannot be shared).
  • EBS encryption is supported on all EBS volume types and all EC2 instance families.
  • Snapshots of encrypted volumes are also encrypted.
  • Creating an EBS volume from an encrypted snapshot will result in an encrypted volume.
  • Encrypted volumes are accessed exactly like unencrypted ones.
  • You can attach encrypted and unencrypted volumes to the same EC2 instance, but the instance has to support encrypted volumes (a few earlier instance types did not support encryption).
  • Data encryption at rest means encrypting data while it is stored on the storage device.
  • There are many ways you can encrypt data on an EBS volume at rest while the volume is attached to an EC2 instance.

EBS Encryption at Rest


  • Use third-party EBS volume (SSD/HDD) encryption tools/software.
  • Use encrypted EBS volumes.
  • Use encryption at the OS level (using a data encryption plugin/driver).
  • Encrypt data at the application level before storing it on the volume.
  • Use an encrypted file system on top of the EBS volume.

EBS Encryption Data in Transit


  • When you encrypt data on an EBS volume, the data is actually encrypted on the EC2 instance and then transferred to be stored on the EBS volume.
  • So data in transit between the EC2 instance and an encrypted EBS volume is also encrypted.

Changing the Encryption state


  • Attach a new encrypted EBS volume to the EC2 instance.
  • Copy the data from the unencrypted volume to the new volume (and vice versa to decrypt).
  • Both volumes MUST be on the same EC2 instance.

Encrypting an existing volume


  • Create a snapshot of the unencrypted volume.
  • Copy the snapshot and choose encryption for the new copy; this creates an encrypted copy of the snapshot.
  • Use this new copy to create an EBS volume, which will be encrypted too.
  • Attach the new encrypted EBS volume to the EC2 instance.
  • You can then delete the volume with the unencrypted data.

Amazon Machine Image (AMI)

Creating AMIs

To Create your own AMI from an instance-store backed EC2 instance’s (root volume)


  • Launch an EC2 instance from an AWS instance-store-backed AMI.
  • Update the root volume as you require (software, patches, apps, etc.).
  • Create the AMI, which will upload the AMI as a bundle to S3.
  • You need to specify the S3 bucket (your own bucket) to hold the AMI bundle.
  • Register the AMI (manually) so that AWS EC2 can find it to launch further EC2 instances.
  • Since your new AMI is stored in an AWS S3 bucket, S3 charges apply until you de-register the AMI and delete the S3 stored object.

To Create your own AMI from an EBS backed EC2 instance’s (root volume)


  • After launching an EBS-backed EC2 instance, update the root volume as required (software, patches, apps, etc.).
  • Stop the instance to ensure data consistency and integrity, then create the AMI.
  • AWS registers the created AMIs automatically.
  • During the AMI-creation process, AWS creates snapshots of your instance's root volume and any other EBS volumes attached to your instance.
  • You are charged for storage as long as the snapshots are stored in S3.
  • In the EBS case there is NO need to specify one of your S3 buckets.

De-Registering AMIs


  • When you no longer need an AMI you can de-register it.
  • De-registered AMIs cannot be used to launch further instances (AWS EC2 will not find them).
  • De-registering an AMI does not impact instances created from the AMI while it was registered.

Deleting AMI.


  • De-register the AMI.
  • Delete the bundle in Amazon S3 (for an instance-store-backed EC2 instance).
  • Select the snapshot and look for the AMI ID in the Description column (for an EBS-backed EC2 instance).

Sharing & Copying EBS snapshots.


  • When you share a snapshot of a volume, you are actually sharing all the data on that volume that was used to create the snapshot.
  • You can share your unencrypted snapshots with the AWS community by making them public, or with selected AWS accounts by keeping them private.
  • You cannot make an encrypted snapshot public.
  • You can share an encrypted snapshot with specific AWS accounts by keeping it private and giving access to its keys (but not the default key).

Sharing EBS snapshots.


  • Make sure that you use a non-default/custom CMK to encrypt the snapshot.
  • Configure cross-account permissions in order to give access to the custom CMK used to encrypt the snapshot.
  • Mark the snapshot private, then enter the AWS account with which you want to share the snapshot.

Copying EBS snapshots


  • To use an encrypted snapshot shared by another account:
  • First create a copy of the snapshot (re-encrypt the shared encrypted snapshot during the copy process using your own CMK to have full control).
  • Use that copy to restore/create an EBS volume.
  • If you try to copy an encrypted snapshot without having permissions to the encryption key, the copy process fails silently.

Use cases for copying snapshots:


  • Geographic expansion.
  • Disaster recovery.
  • Migrating to another region.
  • Encryption (of unencrypted volumes).
  • Data retention and auditing requirements.

Elastic Load Balancer 


  • An internet-facing load balancer has a publicly resolvable DNS name.
  • Your domain name is resolved to the ELB DNS name instead of an EC2 IP address.
  • There are 3 types of load balancers in the AWS offering (our focus is on the Classic Load Balancer).
  • The Classic Load Balancer (ELB) supports:
  • HTTP, HTTPS, TCP, SSL
  • Protocol ports: 1-65535
  • It supports IPv4 (IPv6 and dual stack are available for the other ELB types).

ELB -Listeners


  • An ELB listener checks for connection requests.
  • You can configure the protocol/port on which your ELB listener listens for connection requests.
  • The frontend listener checks for traffic from clients to the ELB.
  • The backend listener is configured with a protocol/port to carry traffic from the ELB to the EC2 instances.
  • Registered EC2 instances are those that are defined under the ELB (this is not automatic); it takes some time to register EC2 instances under an ELB.
  • You start to be charged hourly (also for partial hours) once the ELB is active.
  • The load balancer also monitors the health of its registered instances and ensures that it routes traffic only to healthy instances, shown as "InService".
  • When the ELB detects an unhealthy instance it stops routing traffic to that instance, shown as "OutOfService".
To ensure that the ELB service can scale its ELB nodes in each AZ:

  • Ensure that the subnet defined for the load balancer is at least a /27 in size.
  • There must be at least 8 available IP addresses that the ELB nodes can use to scale.
  • Deleting the ELB does not affect or delete the EC2 instances registered with it.
  • Before you delete the ELB, it is recommended that you point Route 53 (or your DNS server) somewhere else other than the ELB.
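
The /27 sizing guidance can be checked with a quick calculation: AWS reserves 5 addresses per subnet, so a /27 (32 addresses) leaves 27 assignable. A sketch using the standard `ipaddress` module:

```python
import ipaddress

def assignable_ips(cidr):
    """Addresses a subnet can actually hand out: total minus the
    5 AWS-reserved addresses (network, router, DNS, future use, broadcast)."""
    return ipaddress.ip_network(cidr).num_addresses - 5

print(assignable_ips("10.0.1.0/27"))  # 27: comfortably above the 8 the ELB needs
print(assignable_ips("10.0.1.0/28"))  # 11: still >= 8, but below the /27 guidance
```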

ELBS Types

Internet facing:

  • ELB nodes will have public IP addresses.
  • DNS will resolve the ELB DNS name to these IP addresses.
  • It routes traffic to the private IP addresses of your registered EC2 instances (no public IP required).
  • Format: name-2043996682.us-east-1.elb.amazonaws.com

Internal ELB

  • ELB nodes will have private IP addresses.
  • It routes traffic to the private IP addresses of your registered EC2 instances.
  • Format: internal-name-2043996682.us-east-1.elb.amazonaws.com

ELB Cross Zone load balancing


  • With CZLB enabled, the ELB will distribute traffic evenly between all registered EC2 instances.
  • If you have 5 EC2 instances in one AZ and 3 in another, with CZLB enabled each registered EC2 instance will get around the same amount of traffic from the ELB.
  • Only one subnet can be defined for the ELB in an AZ.
  • If you try to select another one in the same AZ, it will replace the former one.
  • Two subnets in different AZs are recommended by AWS.
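
The 5-vs-3 example above can be worked through numerically; this is a simplified model of traffic shares (real ELB behavior involves per-node distribution, but the proportions come out as shown):

```python
def per_instance_share(az_counts, cross_zone):
    """Traffic share per instance in each AZ. Without cross-zone
    balancing, each AZ gets an equal share split among its own
    instances; with it, every registered instance gets an equal share."""
    total = sum(az_counts)
    if cross_zone:
        return [round(1 / total, 4)] * len(az_counts)
    return [round(1 / len(az_counts) / n, 4) for n in az_counts]

# 5 instances in one AZ, 3 in another (the example from the notes)
print(per_instance_share([5, 3], cross_zone=False))  # [0.1, 0.1667] - uneven
print(per_instance_share([5, 3], cross_zone=True))   # [0.125, 0.125] - even
```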

Auto Scaling


  • It's an AWS feature that allows your AWS compute capacity (your fleet of EC2 instances) to grow or shrink depending on your workload requirements.
  • Auto Scaling helps you save cost by cutting down the number of EC2 instances when they are not needed and scaling out to add more instances only when required.

Auto Scaling components

Launch Configuration:


  • It's the configuration template used to create new EC2 instances for the ASG.
  • It defines parameters like instance family, instance type, AMI, key pair, block devices and security groups.
  • It cannot be edited after creation.

AS Group:


  • It's the logical grouping of EC2 instances managed by an Auto Scaling policy.
  • An ASG can have a minimum, maximum and desired number of EC2 instances.
  • It can be edited after creation.

Scaling Policy (Plan)

Determines when and how the ASG scales out or shrinks (on-demand / dynamic scaling).
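
The min/max/desired relationship can be sketched as a simple clamp (a model of the constraint, not the AWS scaling algorithm itself):

```python
def effective_capacity(desired, minimum, maximum):
    """An ASG always keeps its instance count between its minimum and
    maximum; a scaling policy's desired value is clamped to that range."""
    return max(minimum, min(desired, maximum))

print(effective_capacity(12, minimum=2, maximum=10))  # 10 (capped at max)
print(effective_capacity(1, minimum=2, maximum=10))   # 2  (raised to min)
print(effective_capacity(5, minimum=2, maximum=10))   # 5  (within range)
```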

Auto Scaling


  • Auto Scaling can span multiple AZs within the same AWS region (not multiple regions), so it can be used to create fault-tolerant designs on AWS.
  • There is no additional cost for launching AS groups; you pay only for the EC2 instances.
  • It works well with AWS ELB and CloudWatch.
  • The Auto Scaling service always tries to distribute EC2 instances evenly across the AZs where it is enabled (if they become uneven it will auto-rebalance; 10% or one extra EC2 instance is allowed).

Auto Scaling Re-Balancing

What can cause an Imbalance of Ec2 instances:

  • You manually change the AZs where your AS is in effect (adding or removing an AZ).
  • You manually request termination of EC2 instances from your ASG.
  • An AZ that did not have enough EC2 capacity now has enough capacity, and it is one of your ASG's AZs.
  • An AZ's Spot instance market price meets your bid price.
  • You can detach a running EC2 instance from an AS group.

Auto Scaling


  • You can then manage the detached instance independently or attach it to another AS group.
  • When you detach an instance you can decrease the ASG's desired capacity; if you do not decrease it, the AS group will launch another instance to replace the one detached.
  • You can manually move an instance from an ASG and put it in the standby state.
  • Instances in standby are still managed by Auto Scaling.
  • Auto Scaling does not perform health checks on instances in the standby state (they are never marked unhealthy).
  • They do not count toward the available EC2 instances for workload/application use.
  • When you delete an ASG, its minimum, maximum and desired capacity parameters are all set to zero (this shows in the AWS Console as well), hence it terminates all its EC2 instances.
  • If you want to keep the EC2 instances and manage them independently, you can manually detach them first and then delete the ASG.

Simple Storage Service (S3)


  • S3 is storage for the internet
  • It has a simple web services interface for storing and retrieving any amount of data from anywhere on the internet
  • S3 is object-based storage, NOT block storage (audio, video, snapshots, etc.)
  • S3 has a distributed data-store architecture (objects are redundantly stored in multiple locations).
  • An object stored in an S3 bucket can be up to 5 TB in size.
  • Storage is unlimited.
  • A bucket can be viewed as a container for objects (private by default)
  • A bucket is a flat container of objects
  • It does not provide a hierarchy of objects (no actual folders)
  • You can use object key names with slashes to simulate folders in a bucket when using the AWS console.
  • You cannot create nested buckets (a bucket inside another)
  • Bucket ownership is not transferable
  • An S3 bucket is region specific
  • Cross-region replication can be enabled
  • S3 bucket names are globally unique across all AWS regions
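The "flat container" point above can be made concrete: a bucket is essentially a flat map from object keys to data, and slashes in keys only look like folders. A minimal sketch with made-up keys:

```python
# A bucket is a flat map of object keys to data; the slashes in the
# keys below only *look* like folders. Keys are hypothetical.
bucket = {
    "photos/2024/cat.jpg": b"...",
    "photos/2024/dog.jpg": b"...",
    "logs/app.log": b"...",
}

def list_prefix(bucket, prefix):
    """Emulate listing a 'folder' by filtering keys on a prefix,
    the same way the S3 console renders folders."""
    return sorted(k for k in bucket if k.startswith(prefix))

print(list_prefix(bucket, "photos/2024/"))
```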

S3 Consistency levels


  • Read-after-write (immediate or strong) consistency for PUTs of new objects uploaded to S3
  • A PUT is an HTTP request to store the data
  • Eventual consistency for overwrite PUTs and DELETEs (changes / updates to existing objects in S3).

Simple Storage Service S3


  • S3 offers 99.99% availability.
  • Amazon guarantees 99.999999999% (11 nines) durability.
  • Tiered storage is available (the Standard class is the default).

Database Type

Relational Database


  • A relational database is a data structure that allows you to link information from different tables
  • It normalizes data into structures
  • This means it requires a schema that strictly defines tables, columns, indexes, and relationships between tables
  • Virtually all relational DBs use Structured Query Language (SQL).
  • Best suited for Online Transaction Processing (OLTP)
  • Requires high-end hardware, as its performance (complex querying) depends on it
  • Examples of relational DBs are MySQL, Oracle, DB2, and SQL Server.

Non-Relational Database


  • It's the simplest form of DB; non-relational databases store data without a structured mechanism to link data from different tables to one another
  • They are high-performance and not schema-based, unlike relational DBs
  • Requires low-cost hardware.
  • Much faster writes compared to relational DBs
  • Easier to develop for
  • Best suited for Online Analytical Processing (OLAP)
  • DynamoDB is an example of a non-relational DB.

Relational Database Services (RDS)

It's a fully managed relational DB engine service where AWS is responsible for:

  • Security and patching of the database instances
  • Automated backups for the DB instance (default setting)
  • Software updates for the DB engine
  • If Multi-AZ is selected, synchronous replication between the active and standby DB instances in the same region.
  • Automatic failover if the Multi-AZ option was selected at launch
  • Every DB instance has a weekly maintenance window; if you do not specify one when you create the DB instance, AWS chooses one randomly for you (30 minutes long).

AWS is NOT responsible for:


  • Managing DB Settings
  • Building a relational DB Schema
  • DB performance tuning

Supported database engines:


  • MySQL
  • PostgreSQL
  • MariaDB
  • Oracle
  • Microsoft SQL Server
  • Amazon Aurora

Two Licensing models


  • Bring your own license (BYOL)
  • License provided by AWS. 

Up to 40 DB instances per account


  • 10 of these 40 can be Oracle or SQL Server under the License Included model.
Or

  • Under the BYOL model, all 40 can be any DB engine you need
Note: Also be aware that Amazon RDS uses EBS volumes (not instance store)

  • You can NOT read from / write to the standby RDS DB instance
  • Depending on the instance class, it may take one to a few minutes to fail over to the standby instance
  • It is recommended to implement DB connection retries in your application so failover does not create an issue.
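A minimal sketch of the retry pattern recommended above, assuming a generic `connect` callable that raises until the failover completes; the names and backoff policy are illustrative, not an AWS API:

```python
import time

def connect_with_retries(connect, attempts=5, delay=1.0):
    """Retry a DB connection so a Multi-AZ failover (the endpoint's DNS
    now points at the promoted standby) does not surface as an error.
    `connect` is any callable that raises ConnectionError until the
    endpoint is reachable again."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff

# Simulate an endpoint that recovers on the third attempt
state = {"calls": 0}
def flaky_connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("failover in progress")
    return "connected"

print(connect_with_retries(flaky_connect, delay=0.01))  # -> connected
```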
Running a DB instance as a Multi-AZ deployment can further reduce the impact of a maintenance event, because Amazon RDS applies operating system updates by following these steps:

  • Perform maintenance on the standby
  • Promote the standby to primary
  • Perform maintenance on the old primary, which becomes the new standby.
When you modify the database engine for your DB instance in a Multi-AZ deployment

  • Amazon RDS upgrades both the primary and secondary DB instances at the same time
  • In this case the database engine for the entire Multi-AZ deployment is shut down during the upgrade.

Read Replicas


  • They allow you to have a read-only copy of your production database
  • This is achieved using asynchronous replication from the primary RDS instance to the read replica
  • You use read replicas primarily for very read-heavy database workloads, not for DR.

Different types of backups:


1. Automated backups

  • They allow you to recover your database to any point in time within a retention period
  • The retention period can be between 1 and 35 days
  • Automated backups are enabled by default
  • The backup data is stored in S3, and you get free storage space equal to the size of your database; so if you have a 10 GB RDS instance, you get 10 GB of backup storage free
  • Automatically deleted when the RDS instance is deleted

2. Snapshot backups

  • DB snapshots are done manually (i.e. they are user initiated)
  • They are stored even after you delete the original RDS instance, unlike automated backups.

Data warehouse


  • A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing
  • It usually contains historical data derived from transaction data.  

RedShift


  • Redshift is AWS's fully managed, petabyte-scale data warehouse service in the cloud
  • Amazon Redshift gives you fast querying capabilities over structured data using familiar SQL-based clients and business intelligence (BI) tools
  • Queries are distributed and parallelized across multiple physical resources
  • It is suited for OLAP-based use cases
  • It can store huge amounts of data (a database) but cannot ingest huge amounts of data in real time (unlike Kinesis).
  • Example use cases:
    • Sales reporting
    • Health care analytics

Redshift can


  • Fully recover from a node or component failure
  • It automatically patches and performs data backups
  • Backups can be stored for a user-defined retention period
  • It is up to 10 times faster than traditional SQL relational DBs
  • Redshift has much faster performance than other SQL DBs
  • Data is stored sequentially in columns instead of rows
  • A columnar DB is ideal for data warehousing and analytics
  • Requires far fewer I/Os, which greatly enhances performance
  • Redshift automatically selects the compression scheme
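The row-versus-column point above can be illustrated in plain Python. A column store keeps each column as one contiguous array, so an aggregate like a sum reads only the column it needs; the table below is made up for illustration:

```python
# Row store: each record is kept together; a query over one column
# still touches every field of every row.
rows = [
    {"id": 1, "region": "east", "sales": 100},
    {"id": 2, "region": "west", "sales": 250},
    {"id": 3, "region": "east", "sales": 175},
]

# Column store (Redshift-style): each column is a contiguous array,
# so an aggregate scans only the data it actually needs.
columns = {
    "id": [1, 2, 3],
    "region": ["east", "west", "east"],
    "sales": [100, 250, 175],
}

print(sum(columns["sales"]))          # columnar: one array scanned
print(sum(r["sales"] for r in rows))  # row store: whole rows scanned
```

Both queries return the same total; the difference is how much data each layout must read, which is why columnar storage needs far fewer I/Os for analytics.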

Kinesis:


  • Streaming data is data generated and sent continuously from a large number (hundreds or hundreds of thousands) of data sources, where each record is small (usually KBytes or MBytes)
  • Kinesis is a platform for streaming data on AWS (used for IoT and big data analytics)
  • It offers powerful services that make it easy to load and analyze streaming data
  • Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources like:
    • IoT sensor data
    • Log files from customers' mobile applications and web apps
    • In-game player activity
    • Financial trading floors and stock markets

Route53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service

Three main functions:


  • Register domain names 
  • Route Internet traffic to the resources for your domain
  • Check the health of your resources 

Name Servers:


  • Servers in the Domain Name System (DNS) that help translate domain names into the IP addresses that computers use to communicate with one another.
  • Name servers are either recursive name servers (also known as DNS resolvers) or authoritative name servers.

A/AAAA Records:


  • These are called host records; they map a hostname to an IPv4 (A) or IPv6 (AAAA) address, like a business card for a host

CNAME Records:


  • It's an alternative record, or an alias for another record
  • The DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex (root domain or naked domain)

Alias Records:


  • You can use it to route queries to AWS resources like a CLB, an S3 bucket, etc.
  • An Alias record can be created for the top node of a DNS namespace, also known as the zone apex.
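The CNAME-versus-Alias rule above can be captured in a small validator sketch. This is an illustrative model of the rule, not a Route 53 API; the record-type strings and domain names are hypothetical:

```python
def validate_record(zone_apex, name, rtype):
    """Rule sketch: a CNAME is not allowed at the zone apex, but an
    Alias record (an AWS extension to DNS) is."""
    if rtype == "CNAME" and name == zone_apex:
        return False
    return True

print(validate_record("example.com", "example.com", "CNAME"))      # -> False
print(validate_record("example.com", "www.example.com", "CNAME"))  # -> True
print(validate_record("example.com", "example.com", "ALIAS"))      # -> True
```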

DynamoDB


  • Amazon DynamoDB is a fast and flexible NoSQL database service for applications that need consistent, single-digit-millisecond latency at any scale
  • It's a fully managed database and supports key-value data models
  • Data is stored on SSD storage
  • You don't need to specify the full schema upfront when creating a table.
  • You only need to declare the primary key for your table (which is unique)
  • This reduces the upfront cost of designing your data model, because you can easily modify your schema as your application's needs change
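The key-value model above can be sketched with a plain dictionary: items are schemaless except for the mandatory primary key. The key name `"user_id"` and the helper functions are hypothetical, for illustration only:

```python
# Minimal key-value table sketch: items are schemaless apart from the
# required primary key ("user_id" here, a made-up attribute name).
table = {}

def put_item(table, item, key="user_id"):
    table[item[key]] = item  # overwrite semantics, like a PutItem

def get_item(table, key_value):
    return table.get(key_value)

# Items can carry different attributes; only the key is mandatory
put_item(table, {"user_id": "u1", "name": "Ada"})
put_item(table, {"user_id": "u2", "name": "Lin", "plan": "pro"})
print(get_item(table, "u2")["plan"])  # -> pro
```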

AD Connector


  • AD Connector is a dual-Availability-Zone proxy service that connects AWS to your on-premises directory.
  • With AD Connector, you can streamline identity management
  • There must be a VPN or Direct Connect connection.
  • A small AD Connector is designed for smaller organizations of up to 5,000 users; a large one, for up to 75,000 users
  • AD Connector's performance is highly correlated to the network latency of your on-premises network and the performance of your existing Active Directory

CloudFormation


  • CloudFormation allows you to use a simple text file to model and provision all the resources needed for your applications across all regions and accounts
  • This file serves as the single source of truth for your cloud environment
  • No additional cost (you pay only for the resources created)
CloudFormation template in JSON:
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "This is a basic template to create an S3 bucket",
  "Resources" : {
    "S3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "Properties" : {
        "AccessControl" : "PublicRead",
        "BucketName" : "vallabhdarole1232222"
      }
    }
  }
}
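Because a CloudFormation template is just JSON text, it can be sanity-checked locally before deployment. A minimal sketch that parses the template above and checks for the required `Resources` section (a quick local check, not a substitute for CloudFormation's own `ValidateTemplate`):

```python
import json

# The template above, reproduced as a string so we can confirm it is
# well-formed JSON with the required top-level sections.
template = """
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "This is a basic template to create an S3 bucket",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicRead",
        "BucketName": "vallabhdarole1232222"
      }
    }
  }
}
"""

doc = json.loads(template)  # raises ValueError on malformed JSON
assert "Resources" in doc   # every template needs a Resources section
print(doc["Resources"]["S3Bucket"]["Type"])  # -> AWS::S3::Bucket
```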

CloudTrail


  • CloudTrail allows you to continuously monitor and retain account activity related to your AWS infrastructure
  • It provides a history of events that took place through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

Cloud Watch


  • Amazon CloudWatch monitors your AWS resources (like EC2) in real time
    • For example, the CPU utilization of your EC2 instances
  • CloudWatch alarms can be used as triggers in your AWS environment
    • For example, to scale an ASG.
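The alarm-as-trigger idea can be sketched as a threshold check over recent metric datapoints. This is an illustrative model (function name, threshold, and period count are made up), not the CloudWatch evaluation engine:

```python
def alarm_state(datapoints, threshold=80.0, periods=3):
    """Sketch of a CloudWatch-style alarm: go to ALARM only when the
    metric breaches the threshold for `periods` consecutive datapoints."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(p > threshold for p in recent):
        return "ALARM"  # e.g. this would trigger an ASG scale-out policy
    return "OK"

# CPU % samples; three consecutive breaches put the alarm in ALARM
print(alarm_state([40.0, 85.0, 90.0, 95.0]))  # -> ALARM
print(alarm_state([85.0, 40.0, 95.0]))        # -> OK (not consecutive)
```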

AWS Lambda


  • AWS Lambda is a highly available compute service that lets you run code without provisioning or managing servers
  • AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second.
  • Just supply your code in one of the languages that AWS Lambda supports, like Node.js, Java, C#, and Python
  • Use case example:
    • You can use AWS Lambda to run your code in response to events, such as changes to data in an Amazon S3 bucket or an Amazon DynamoDB table
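A minimal handler in the standard Python Lambda signature, invoked here with a hand-built S3-style event rather than by the Lambda service; the bucket and key names are hypothetical:

```python
def lambda_handler(event, context):
    """React to an S3 event by returning the bucket/key that changed.
    `event` follows the general shape of S3 event notifications."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {"status": 200, "processed": f"{bucket}/{key}"}

# Simulate the event Lambda would receive when an object is created
fake_event = {
    "Records": [{"s3": {"bucket": {"name": "my-bucket"},
                        "object": {"key": "photos/cat.jpg"}}}]
}
print(lambda_handler(fake_event, None)["processed"])  # -> my-bucket/photos/cat.jpg
```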

Simple Notification Services (SNS)


  • SNS is a fast, flexible, fully managed push notification service
  • It's a web service that coordinates and manages the delivery or sending of messages (from the cloud) to subscribers
  • Messages published to an SNS topic are delivered to the subscribers (endpoints or clients) immediately.

Simple Queue Service (SQS)

  • SQS is a fast, reliable, fully managed message queuing service
  • It is a web service that gives you access to message queues that store messages waiting to be processed
  • It offers a reliable, highly scalable, hosted queue for storing messages between computers
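The decoupling pattern described above can be sketched with an in-process queue standing in for SQS: producers enqueue messages, and consumers poll and process them later. The message names are made up, and `queue.Queue` is only a local stand-in for the hosted service:

```python
from queue import Queue

# In-process stand-in for an SQS queue: producers enqueue messages,
# consumers poll and process them later, decoupling the two sides.
q = Queue()

for i in range(3):
    q.put(f"order-{i}")          # analogous to sending a message

processed = []
while not q.empty():
    processed.append(q.get())    # analogous to receive + delete

print(processed)  # -> ['order-0', 'order-1', 'order-2']
```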
