
SIMPLE DESIGNS FOR ON-DEMAND SCALING OUT AND SCALING DOWN RESOURCES

SCALING here means being able to add and remove resources on demand without letting latency fluctuate.

  1. Add more resources and distribute the work – horizontal scale-out: shared-nothing architecture, load balancer / reverse proxy, deployment stamps, and geographic distribution for web-scale architectures (geographic proximity, blast radius)
    1. Stateful systems – data-intensive systems where compute and data are co-located; require chatty interaction with datastores
    2. Stateless systems – no per-request state on the instance, so any instance can serve any request and instances can be added or removed freely
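The scale-out point above hinges on statelessness: when no instance holds session state, a simple load balancer can send any request anywhere. A minimal round-robin sketch in Python (the balancer class and instance names are illustrative, not a specific product):

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: spreads requests across identical stateless instances."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        # Any instance can serve any request because no session state lives locally.
        return next(self._cycle)

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
targets = [lb.route({"path": "/"}) for _ in range(6)]
# targets cycles: web-1, web-2, web-3, web-1, web-2, web-3
```

Scaling out is then just appending "web-4" to the instance list; scaling down is removing it.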
  2. Persistence / data storage
    1. Choice of storage
      1. SQL/No-SQL/Graph/Object/File System
      2. Polyglot persistence – use different stores when components have different storage requirements
    2. Sharding
      1. List/Range/Consistent hashing
    3. Transaction semantics & consistency models (dirty reads, eventual consistency, strong consistency)
    4. Indexing
    5. Caching
    6. Connection pooling
    7. Scope of data access – fetch only the data you need, so reads and writes are not amplified
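Consistent hashing, listed under sharding above, keeps most keys on the same shard when shards are added or removed. A minimal sketch with virtual nodes (class and shard names are hypothetical):

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable hash of a string onto a large integer ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Maps keys to shards; adding or removing a shard only remaps nearby keys."""

    def __init__(self, shards, replicas=100):
        # Each shard gets `replicas` virtual nodes to even out the distribution.
        self._ring = sorted((_hash(f"{s}#{i}"), s)
                            for s in shards for i in range(replicas))
        self._keys = [h for h, _ in self._ring]

    def shard_for(self, key: str) -> str:
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["db-a", "db-b", "db-c"])
owner = ring.shard_for("user:42")  # same key always lands on the same shard
```

Compare with list/range sharding, where adding a shard forces a bulk re-partition.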
  3. Concurrency
    1. Concurrent executions
    2. Locks
      1. Lock free data structures
      2. Optimistic locks – assume no one else will update: read the version, then write only if the version is still the same
    3. Transaction
      1. Eventual consistency
      2. Idempotent operations – reduce the need for transactions
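The optimistic-lock idea above — read a version, write only if it is unchanged — can be sketched as a tiny versioned store (names are illustrative; real systems do the compare-and-set atomically in the datastore):

```python
class OptimisticStore:
    """Versioned key-value store: a write succeeds only if the version is unchanged."""

    def __init__(self):
        self._data = {}  # key -> (value, version)

    def read(self, key):
        return self._data.get(key, (None, 0))

    def write(self, key, value, expected_version):
        _, current = self._data.get(key, (None, 0))
        if current != expected_version:
            return False  # someone else won the race; caller re-reads and retries
        self._data[key] = (value, current + 1)
        return True

store = OptimisticStore()
_, v = store.read("balance")
ok = store.write("balance", 100, v)     # succeeds: version matched
stale = store.write("balance", 200, v)  # fails: version has moved on
```

No lock is held between read and write, so readers never block each other; contention shows up only as retries.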
  4. Asynchronous processing / non-blocking I/O (disk/network)
    1. Offload CPU-intensive tasks as background jobs
    2. Use non-blocking frameworks for I/O and network operations
    3. Async processing
      1. Accept + Poll
      2. Accept + callback
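Accept + poll and accept + callback can both be shown with Python's standard `concurrent.futures` (the `work` function is a stand-in for any offloaded task):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    return x * 2  # stand-in for a CPU-heavy or I/O-bound task

done = []

with ThreadPoolExecutor(max_workers=2) as pool:
    # Accept + poll: submit() accepts the task and returns a future immediately;
    # the caller asks for the result only when it actually needs it.
    fut_poll = pool.submit(work, 21)

    # Accept + callback: the pool invokes the callback once the result is ready,
    # so the caller never waits for it explicitly.
    fut_cb = pool.submit(work, 10)
    fut_cb.add_done_callback(lambda f: done.append(f.result()))

    polled = fut_poll.result()  # blocks only here, at the point of use
# leaving the with-block waits for all tasks (and their callbacks) to finish
```

Poll keeps control flow linear; callback frees the caller entirely but scatters the logic.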
  5. Messaging / Queuing
    1. Decouple Producer/consumer
    2. Buffering
      1. Intermittent spikes
    3. Head-of-line blocking – one slow item blocks everything behind it
      1. Convoy avoidance – change lanes as in traffic: route heavy work to one queue and light work to another
    4. SEDA, anyone? – staged event-driven architecture
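A bounded queue decouples producer and consumer and absorbs intermittent spikes while applying backpressure when full. A minimal sketch with Python's standard `queue` and `threading` modules:

```python
import queue
import threading

# Bounded queue: buffers intermittent spikes, but put() blocks (backpressure)
# once maxsize items are waiting, so the producer cannot overrun the consumer.
buf = queue.Queue(maxsize=8)
processed = []

def consumer():
    while True:
        item = buf.get()
        if item is None:      # sentinel: producer has finished
            return
        processed.append(item)

worker = threading.Thread(target=consumer)
worker.start()

for i in range(20):           # a burst larger than the buffer
    buf.put(i)                # blocks whenever the consumer falls 8 items behind
buf.put(None)
worker.join()
```

The bound is what makes this a safeguard rather than just a buffer: an unbounded queue hides overload until memory runs out.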
  6. Safeguard the system / admission control
    1. Circuit breakers
    2. Limits on resources
      1. Ex: bounded queues or data structures
    3. Bulkheads
    4. Sane defaults
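A circuit breaker, the first safeguard above, fails fast once a dependency keeps erroring instead of letting blocked calls pile up. A minimal sketch (thresholds and names are illustrative; production breakers add metrics and per-dependency state):

```python
import time

class CircuitBreaker:
    """Opens after `max_failures` consecutive errors; rejects calls until `reset_after`."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one probe call through
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)

def flaky():
    raise ValueError("downstream is down")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ValueError:
        pass  # counted as a failure

try:
    breaker.call(flaky)  # circuit is now open
    tripped = False
except RuntimeError:
    tripped = True       # failed fast without touching the dependency
```

While open, callers get an immediate error (a sane default response) rather than a slow timeout, which protects both sides.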

Things to avoid

Sync IO

Managed heaps and garbage collection pauses

Caches with low hit rate

Impedance mismatch across layers

Hidden performance issues in libraries/frameworks

Poor instrumentation

At sufficient scale there are no corner cases

Scale and performance as an afterthought

Orthogonal concerns while scaling

resiliency, availability, security, frugality, monitoring, automation