AWS Developer Associate

RDS, Aurora & ElastiCache

RDS Overview

Advantages of using RDS versus deploying a DB on EC2

RDS - Storage Auto Scaling

RDS Read Replicas for read scalability

Read Replicas - Use Cases

Read Replicas - Network Cost

RDS Multi AZ (Disaster Recovery)

RDS - From Single-AZ to Multi-AZ

| Read Replicas | Multi-AZ |
|---|---|
| Scale the read workload of your DB | Failover in case of AZ outage (high availability) |
| Can create up to 5 Read Replicas | Data is only read/written to the main database |
| Data is only written to the main DB | Can only have 1 other AZ as failover |
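Because replicas serve reads only, applications must split traffic themselves: writes go to the primary endpoint, reads can be spread across replica endpoints. A minimal routing sketch (the endpoint names are hypothetical; real ones come from the RDS console or API):

```python
import itertools

# Hypothetical endpoints for illustration only.
PRIMARY = "mydb.primary.us-east-1.rds.amazonaws.com"
REPLICAS = [
    "mydb-replica-1.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.us-east-1.rds.amazonaws.com",
]

_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(statement: str) -> str:
    """Route SELECTs to a read replica (round-robin); everything else to the primary."""
    if statement.lstrip().lower().startswith("select"):
        return next(_replica_cycle)
    return PRIMARY
```

Note that replication is asynchronous, so a read routed to a replica may briefly return stale data after a write.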

RDS Custom

Amazon Aurora

Aurora High Availability and Read Scaling


Aurora DB Cluster

Features of Aurora

RDS & Aurora Security

Amazon ElastiCache Overview

ElastiCache Solution Architecture - DB Cache

ElastiCache Solution Architecture - User Session Store

ElastiCache - Redis vs Memcached Replication

| Redis | Memcached |
|---|---|
| Multi AZ with Auto-Failover | Multi-node for partitioning of data (sharding) |
| Read Replicas to scale reads and have high availability | No high availability (replication) |
| Data durability using AOF persistence | Non-persistent |
| Backup and restore features | No backup and restore |
|  | Multi-threaded architecture |

ElastiCache - Cache Security

ElastiCache Replication: Cluster Mode Disabled

ElastiCache Replication: Cluster Mode Enabled

Caching Implementation Considerations

Lazy Loading / Cache-Aside / Lazy Population

| Pros | Cons |
|---|---|
| Only requested data is cached (the cache isn't filled up with unused data) | Cache miss penalty results in 3 round trips: a noticeable delay for that request |
| Node failures are not fatal (just increased latency to warm the cache) | Stale data: data can be updated in the database but outdated in the cache |
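The pattern above can be sketched with plain dictionaries standing in for ElastiCache and RDS (a minimal illustration, not a real client):

```python
cache = {}                                  # stand-in for ElastiCache
database = {"user:1": {"name": "Ada"}}      # stand-in for RDS

def get_user(key):
    """Cache-aside: try the cache first; on a miss, read the DB and populate the cache."""
    value = cache.get(key)
    if value is None:                       # cache miss: extra round trip to the DB
        value = database[key]
        cache[key] = value                  # warm the cache for subsequent reads
    return value
```

Only the read path touches the cache; a DB update made elsewhere leaves the cached copy stale until it is evicted or expires.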


Write Through - Add or Update cache when database is updated

| Pros | Cons |
|---|---|
| Data in cache is never stale; reads are quick | Missing data until it is added/updated in the DB (mitigation: implement Lazy Loading as well) |
| Write penalty vs read penalty (each write requires 2 calls) | Cache churn: a lot of the data will never be read |
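With write-through, the cache is updated on the write path rather than the read path. A minimal sketch using dictionaries as stand-ins for the cache and the database:

```python
cache = {}      # stand-in for ElastiCache
database = {}   # stand-in for RDS

def put_user(key, value):
    """Write-through: update the DB and the cache together (the 2 calls per write)."""
    database[key] = value   # call 1: persist to the database
    cache[key] = value      # call 2: keep the cache in sync, so reads are never stale
```

In practice this is combined with lazy loading so that keys written before the cache existed (or evicted since) are still served.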


Cache Evictions and Time-to-live (TTL)
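A TTL bounds how long stale or churned data can live in the cache. A minimal sketch of TTL-based expiry (evicting lazily on read, as Redis does for expired keys; the class name is illustrative):

```python
import time

class TTLCache:
    """Minimal TTL cache sketch: entries expire `ttl` seconds after being set."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store = {}    # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() >= expires:   # expired: evict lazily on read
            del self._store[key]
            return default
        return value
```

Short TTLs keep data fresher at the cost of more cache misses; long TTLs do the opposite.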

Which caching design pattern is the most appropriate?

Amazon MemoryDB for Redis