Architectural Patterns Part 2 — Event-Driven & Microservices

In Lesson 5, we explored layered architecture, MVC, and the monolith. Those patterns organize code within a single deployable unit. This lesson covers two patterns that organize code across deployable units: event-driven architecture and microservices. These are the patterns you encounter when systems grow beyond what a single team or a single server can handle.

Event-Driven Architecture (EDA)

Core Idea

In an event-driven architecture, components communicate by producing and consuming events — records of something that happened. Instead of Service A calling Service B directly (“hey, process this data”), Service A announces “new sensor reading arrived” and any interested service picks it up. The producer doesn’t know — or care — who is listening.

Think of it like a structural monitoring system. When an accelerometer detects a vibration event, it doesn’t call the analysis module directly. It publishes a reading. The analysis module, the logging system, the alert service, and the dashboard all consume that event independently.

Key Components

  • Producer: The component that generates events. A sensor gateway, a user action handler, or a computation pipeline that finished a batch.
  • Broker: The intermediary that receives, stores, and routes events. Examples: Apache Kafka, RabbitMQ, AWS SNS/SQS, Redis Streams. The broker decouples producers from consumers.
  • Consumer: The component that subscribes to events and reacts to them. Multiple consumers can process the same event independently.

  Event-Driven Architecture
  =========================

  +-----------+     +--------+     +------------+
  | Sensor    |---->|        |---->| Analysis   |
  | Gateway   |     |        |     | Engine     |
  +-----------+     |        |     +------------+
                    | Broker |
  +-----------+     | (Kafka)|     +------------+
  | User      |---->|        |---->| Alert      |
  | Actions   |     |        |     | Service    |
  +-----------+     |        |     +------------+
                    |        |
                    |        |---->+------------+
                    |        |     | Dashboard  |
                    +--------+     +------------+

  Producers publish events to the broker.
  Consumers subscribe to topics independently.
  Producers don’t know who consumes their events.
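The decoupling shown in the diagram can be sketched with a minimal in-memory broker. This is illustrative only — a real system would use Kafka or RabbitMQ — and the topic name, event fields, and threshold are invented for the example:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory event broker: routes events by topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # The producer only talks to the broker, never to consumers.
        for callback in self.subscribers[topic]:
            callback(event)

broker = Broker()
log, alerts = [], []

# Two independent consumers subscribe to the same topic.
broker.subscribe("sensor.readings", lambda e: log.append(e))
broker.subscribe("sensor.readings",
                 lambda e: alerts.append(e) if e["accel"] > 9.0 else None)

# The producer publishes without knowing who is listening.
broker.publish("sensor.readings", {"sensor_id": "B-17", "accel": 9.4})
broker.publish("sensor.readings", {"sensor_id": "B-17", "accel": 0.2})

print(len(log))     # 2: the logger saw every reading
print(len(alerts))  # 1: only the high-vibration reading triggered an alert
```

Adding a third consumer — say, a dashboard — is one more `subscribe` call; the producer line does not change.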

Advantages

  • Loose coupling: Producers and consumers evolve independently. Adding a new consumer — say, a machine-learning anomaly detector — requires zero changes to existing components.
  • Scalability: Each consumer scales independently. If the analysis engine is the bottleneck, you scale it without touching the sensor gateway or the dashboard.
  • Resilience: If a consumer goes down, events are retained in the broker. When the consumer recovers, it picks up where it left off. As long as the outage fits within the broker’s retention window, no data is lost.
  • Extensibility: New features are new consumers. The existing system doesn’t change, which means existing tests don’t break.

Disadvantages

  • Complexity: The flow of data through the system is harder to trace. There’s no single call stack to follow — events fan out asynchronously.
  • Eventual consistency: Because events propagate asynchronously, different parts of the system may have different views of the current state at any given moment. The dashboard might show data from 2 seconds ago while the analysis engine has already processed newer readings.
  • Ordering guarantees: Events may arrive out of order, especially under load. If order matters (e.g., time-series sensor data), you need explicit ordering mechanisms like partitioning by sensor ID.
  • Testing difficulty: End-to-end testing is harder because you need to spin up brokers, producers, and consumers together. Unit tests are fine, but integration tests become complex.
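The partitioning idea from the ordering bullet can be sketched as a hash-based partition assignment — a simplified version of what brokers like Kafka do when messages are keyed by sensor ID. The partition count and sensor names here are made up:

```python
import hashlib

NUM_PARTITIONS = 4

def partition_for(sensor_id: str) -> int:
    """Map a sensor ID to a fixed partition.

    All events from one sensor land in the same partition, so they are
    consumed in order *within that sensor's stream*, even though there
    is no global ordering across sensors.
    """
    # md5 is used only for a stable hash (Python's hash() is salted per process).
    digest = hashlib.md5(sensor_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# A given sensor always maps to the same partition:
p1 = partition_for("bridge-7/accel-03")
p2 = partition_for("bridge-7/accel-03")
print(p1 == p2)  # True
```

Per-partition ordering plus a stable key gives you per-sensor ordering, which is usually what time-series analysis actually needs.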

When EDA Shines in Engineering

Event-driven architecture is a natural fit for engineering systems that involve continuous data streams. Structural health monitoring, IoT sensor networks, real-time control systems, and telemetry pipelines all benefit. The pattern also works well when you need to decouple the speed of data production from the speed of data processing — sensors produce data at fixed rates regardless of how fast your analysis engine runs.
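The point about decoupling production speed from processing speed can be demonstrated with a bounded buffer: the producer emits at its own rate while a slower consumer drains at its own pace. This is a toy sketch — a real broker persists the backlog to disk rather than holding it in memory:

```python
import queue
import threading
import time

buffer = queue.Queue()  # plays the role of the broker: absorbs the rate mismatch
processed = []

def producer():
    # Sensors emit at a fixed rate, regardless of consumer speed.
    for i in range(10):
        buffer.put({"reading": i})

def consumer():
    # The analysis engine drains at its own (slower) pace.
    for _ in range(10):
        event = buffer.get()
        time.sleep(0.001)  # simulate processing cost
        processed.append(event["reading"])

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(processed == list(range(10)))  # True: nothing dropped, order preserved
```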

Key takeaway: EDA excels when data flows continuously and multiple independent systems need to react to the same events. If your system is request/response (user clicks, server responds), EDA may be unnecessary overhead.

Microservices Architecture

Core Idea

A microservices architecture decomposes a system into small, independently deployable services, each responsible for a single business capability. Each service has its own codebase, its own database (ideally), and its own deployment pipeline. Services communicate over the network, typically via HTTP/REST or messaging.

Compare this to the monolith from Lesson 5. In a monolith, the meshing engine, the solver, the post-processor, and the user management system are all in one deployable artifact. In microservices, each is a separate service that can be developed, tested, deployed, and scaled independently.
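A microservice boundary can be as simple as an HTTP endpoint. The sketch below stands up a toy “solver service” with Python’s standard library and calls it over the network — the service name, route, and payload are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SolverHandler(BaseHTTPRequestHandler):
    """Toy solver service: accepts a job, returns a result."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        job = json.loads(self.rfile.read(length))
        body = json.dumps({"job_id": job["job_id"], "status": "solved"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), SolverHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service calls the solver over the network — in the monolith,
# this was a plain function call.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/solve",
    data=json.dumps({"job_id": "mesh-42"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)  # {'job_id': 'mesh-42', 'status': 'solved'}

server.shutdown()
```

Note what the network boundary buys and costs: the solver could now be redeployed independently, but the caller must handle timeouts and connection errors that a function call never produced.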

  Microservices Architecture
  ==========================

  +----------+    +----------+    +-----------+
  |  API     |    | Meshing  |    | Solver    |
  | Gateway  |--->| Service  |--->| Service   |
  +----------+    +----+-----+    +-----+-----+
       |               |               |
       |          +----+-----+   +-----+-----+
       |          | Mesh DB  |   | Results   |
       |          +----------+   | Store     |
       |                         +-----------+
       |
       +-------->+-----------+   +-----------+
                 | User      |   | User DB   |
                 | Service   +---+           |
                 +-----------+   +-----------+

  Each service owns its data.
  Services communicate over the network.
  Each can be deployed independently.

Advantages

  • Independent deployment and scaling: You can deploy the solver service 10 times a day without touching the user service. You can scale the meshing service to 20 instances while the user service stays at 2.
  • Technology heterogeneity: The meshing service can be written in C++ for performance. The user service can be Python/Django. The solver can be Fortran wrapped in a REST API. Each team picks the best tool for their domain.
  • Team autonomy: Each service is owned by a team that can make its own decisions about internal design, database schema, and release schedule. This is the real reason most organizations adopt microservices.
  • Fault isolation: If the meshing service crashes, users can still log in, view previous results, and submit support tickets. A monolith crash takes everything down.

Disadvantages

  • Operational complexity: Instead of deploying one application, you deploy 10, 50, or 200. Each needs monitoring, logging, health checks, and auto-scaling rules. You need container orchestration (Kubernetes), service discovery, and centralized logging.
  • Network overhead: What was a function call in a monolith is now an HTTP request. Latency increases. Network failures become a category of bugs you didn’t have before.
  • Distributed transactions: If a job submission needs to create a record in the job service and reserve compute in the resource service, you can’t use a simple database transaction. You need sagas, eventual consistency, or compensation logic.
  • Testing complexity: Integration tests require spinning up multiple services with their databases. Contract testing becomes essential to ensure services agree on API shapes.
  • Team overhead: Microservices require mature DevOps practices. If your team doesn’t have CI/CD, automated testing, and monitoring, microservices will make everything worse, not better.
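The compensation logic mentioned in the distributed-transactions bullet can be sketched as a simple saga runner: each step pairs an action with an undo, and a failure triggers the undos in reverse order. The step names are invented for illustration:

```python
def run_saga(steps):
    """Execute (action, compensation) pairs; roll back on failure."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        # Undo the completed steps in reverse order.
        for compensate in reversed(done):
            compensate()
        return False
    return True

log = []

def reserve_compute():
    raise RuntimeError("resource service is down")

ok = run_saga([
    (lambda: log.append("job record created"),
     lambda: log.append("job record deleted")),   # compensation for step 1
    (reserve_compute,                             # step 2 fails...
     lambda: log.append("compute released")),     # ...so this never registers
])

print(ok)   # False
print(log)  # ['job record created', 'job record deleted']
```

Unlike a database rollback, the compensations are ordinary business operations you must design, test, and keep correct yourself — that is the real cost of losing the single-database transaction.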

Conway’s Law

“Any organization that designs a system will produce a design whose structure is a copy of the organization’s communication structure.” — Melvin Conway, “How Do Committees Invent?”, 1968.

This isn’t just an observation — it’s a force of nature in software. If you have three teams, you’ll get three services (or three major modules). If your teams are organized by technology layer (frontend team, backend team, database team), your architecture will be layered. If your teams are organized by business capability (meshing team, solver team, user-management team), your architecture will be service-oriented.

The practical implication: don’t choose microservices unless your organization is structured to support them. A single team of 5 people maintaining 12 microservices is fighting Conway’s Law, and Conway’s Law always wins.

Key takeaway: Architecture follows team structure. If you want to change your architecture, you often need to change your organization first. This is the “Inverse Conway Maneuver” — deliberately structuring teams to get the architecture you want.

When to Choose What

| Situation | Recommended Approach | Why |
|---|---|---|
| Small team (1–5), single product | Monolith | Simplicity. One codebase, one deploy, one database. Ship fast. |
| Growing team (5–15), clear domain boundaries | Modular monolith | Enforce boundaries inside the monolith. Prepare for future splitting. |
| Multiple teams (15+), independent release cadences | Microservices | Team autonomy becomes the bottleneck. Microservices solve organizational scaling. |
| High-throughput data streams, real-time processing | Event-driven | Decouples producers from consumers. Handles bursts gracefully. |
| Mixed: multiple teams + real-time data | Microservices + EDA | Services communicate via events. Common in IoT and monitoring platforms. |

Key takeaway: Microservices primarily solve an organizational problem, not a technical one. They let multiple teams ship independently. If you don’t have multiple teams, you probably don’t need microservices. Start with a well-structured monolith and extract services only when the organizational pain justifies the operational cost.

Exercise 6.1: Architecture Decision Record

Architecture Decision Record (ADR) for Structural Health Monitoring

Scenario: You’re the lead engineer for a structural health monitoring (SHM) platform. The system monitors 2,000 bridge sensors producing acceleration, strain, and temperature data at 100 Hz. Requirements:

  • Ingest and store all raw sensor data (200,000 readings/second)
  • Run real-time anomaly detection on incoming streams
  • Provide a web dashboard showing live sensor status and historical trends
  • Generate daily/weekly reports for bridge operators
  • Support adding new analysis algorithms without downtime
  • Team: 8 engineers (3 data/ML, 3 backend, 2 frontend)

Write an ADR using this template:

# ADR-001: System Architecture for SHM Platform

## Status
Proposed

## Context
[Describe the problem and constraints]

## Options Considered

### Option A: Monolith
[Pros, cons, and assessment for this scenario]

### Option B: Microservices with REST
[Pros, cons, and assessment for this scenario]

### Option C: Microservices with Event-Driven Communication
[Pros, cons, and assessment for this scenario]

## Decision
[Which option and why]

## Consequences
[What this decision means for the team and system]

Consider: data volume, team structure, real-time requirements, extensibility needs, and operational complexity your team can handle.

Discussion Guide

Option C (Microservices + EDA) is the strongest fit for this scenario. Here’s why:

  • 200,000 readings/second is a high-throughput stream — perfect for an event broker like Kafka.
  • The team is already organized by capability (data/ML, backend, frontend), which maps naturally to service boundaries.
  • “Add new analysis algorithms without downtime” is exactly what EDA enables: new consumers subscribe to the stream without modifying existing services.
  • The dashboard, reporting, and anomaly detection are independent consumers of the same data stream.

Option B (Microservices + REST) would struggle with the high-throughput requirement. REST is request/response, not stream-oriented. Option A (Monolith) would work initially but would struggle with the real-time processing demands and would make independent algorithm deployment impossible.

The key consequence of choosing Option C is operational complexity: the team needs Kafka expertise, container orchestration, and distributed monitoring. With 8 engineers, this is feasible but tight.

Quiz

A 3-person startup builds a web application with 12 microservices: one for authentication, one for user profiles, one for notifications, one for billing, and so on. They’re spending 60% of their time on DevOps infrastructure. What is the most likely architectural mistake they’ve made?

Answer

Premature microservices. A 3-person team does not have the organizational scaling problem that microservices solve. They have one team, one release cadence, and one deployment pipeline. A well-structured monolith would give them the same code organization benefits with a fraction of the operational overhead. Conway’s Law predicts this: a 3-person team should produce a system structured for 3 people, not 12 independent services. The 60% DevOps overhead is the tax they pay for fighting this reality.