Capstone Project Presentations & Course Review

Capstone Project Brief: The Challenge

The capstone project asks you to apply everything you’ve learned in this course to design a software system for a real engineering problem. You are not writing code. You are doing what architects do: making design decisions, justifying trade-offs, and communicating your reasoning clearly.

Choose one of the following prompts, or propose your own (Option E).

Option A: Cloud-Based Structural Analysis Platform

Design a cloud-based platform that allows structural engineering firms to submit finite element analysis (FEA) jobs via a web interface. The platform must support multiple concurrent clients, handle jobs ranging from 5 minutes to 48 hours, store and organize results, and provide real-time job status updates. Key constraints: multi-tenancy with strict data isolation between clients, cost optimization for variable workloads (peak at 500 concurrent jobs, off-peak at 20), and integration with at least two commercial FEM solvers (ANSYS, Abaqus).
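For Option A, "real-time job status updates" implies a well-defined job lifecycle with legal state transitions. A minimal sketch of one (the state names and transitions are illustrative assumptions, not a required design):

```python
from enum import Enum


class JobStatus(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    COMPLETED = "completed"
    FAILED = "failed"


# Allowed status transitions for an FEA job; terminal states have no successors.
VALID_TRANSITIONS = {
    JobStatus.QUEUED: {JobStatus.RUNNING, JobStatus.FAILED},
    JobStatus.RUNNING: {JobStatus.COMPLETED, JobStatus.FAILED},
    JobStatus.COMPLETED: set(),
    JobStatus.FAILED: set(),
}


def transition(current: JobStatus, new: JobStatus) -> JobStatus:
    """Move a job to a new status, rejecting illegal transitions."""
    if new not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new
```

Making the lifecycle explicit early forces decisions about who is allowed to drive each transition (web tier, worker, scheduler), which in turn shapes the multi-tenancy and isolation story.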

Option B: Structural Health Monitoring (SHM) Network

Design a distributed system for monitoring the structural health of 200 bridges across a region. Each bridge has 50–200 sensors (accelerometers, strain gauges, temperature, wind speed) streaming data at 100 Hz. The system must detect anomalies in real time (within 30 seconds), store historical data for 10 years, support trend analysis and ML-based predictive maintenance, and generate automated alerts with configurable thresholds. Key constraint: the system must continue operating if any single component fails — bridge monitoring is safety-critical.
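For Option B, the 30-second detection budget at 100 Hz points toward streaming detection rather than batch analysis. A deliberately naive per-sensor rolling z-score detector, as one illustrative sketch (the window size, threshold, and method are assumptions; a real SHM system would use a validated detection model):

```python
import math
from collections import deque


class RollingAnomalyDetector:
    """Flag readings more than `threshold` standard deviations from the
    mean of a sliding window. A toy stand-in for whatever detection
    model the real system would use."""

    def __init__(self, window_size=500, threshold=4.0):
        self.window = deque(maxlen=window_size)
        self.threshold = threshold

    def observe(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= 30:  # need enough history to estimate spread
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) > self.threshold * std:
                anomalous = True
        self.window.append(value)
        return anomalous
```

Even a sketch this simple surfaces architectural questions: detection must run close to the data (per bridge or per sensor) to stay within 30 seconds, and the detector's state must survive component failures.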

Option C: Digital Twin Platform for Infrastructure

Design a digital twin platform that maintains real-time virtual models of physical infrastructure (bridges, buildings, dams). The platform ingests sensor data, updates physics-based models, runs simulations to predict future behavior, and visualizes results through a web dashboard. Key constraints: models must update within 5 minutes of receiving new sensor data, support what-if scenario analysis (“what happens if load increases by 20%?”), and maintain full audit history for regulatory compliance.
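For Option C, what-if scenario analysis amounts to re-evaluating a model with perturbed inputs and comparing against the baseline. A minimal sketch of that interface, using a toy linear-spring deflection model (function names and the physics are illustrative assumptions, not part of the prompt):

```python
def run_scenario(model, baseline, overrides):
    """Evaluate `model` at the baseline parameters and at a perturbed
    scenario, returning both results plus the relative change."""
    scenario = {**baseline, **overrides}
    base_out = model(baseline)
    scen_out = model(scenario)
    return {
        "baseline": base_out,
        "scenario": scen_out,
        "relative_change": (scen_out - base_out) / base_out,
    }


def deflection(params):
    """Toy physics model: deflection of a linear spring, delta = F / k."""
    return params["load_kN"] / params["stiffness_kN_per_mm"]
```

For example, answering "what happens if load increases by 20%?" is a call with `overrides={"load_kN": 1.2 * baseline_load}`. The audit-history constraint then suggests persisting every `(baseline, overrides, result)` triple.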

Option D: AI-Augmented Simulation Workflow

Design a system that uses AI/ML to accelerate engineering simulations. The system should: accept geometry and boundary conditions, use ML surrogate models for rapid preliminary analysis, fall back to full FEM simulation when confidence is low, learn from completed simulations to improve surrogate accuracy, and provide uncertainty quantification for all results. Key constraints: engineers must be able to understand why the system chose surrogate vs. full simulation, and all results must include confidence intervals.
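For Option D, the surrogate-vs-FEM routing is a small, auditable decision rule. One possible sketch (the relative-uncertainty criterion and the 5% threshold are assumptions, not a prescribed method):

```python
def choose_analysis_path(surrogate_mean, surrogate_std, rel_uncertainty_limit=0.05):
    """Route a request to the ML surrogate or the full FEM solver.

    Decision rule (illustrative): accept the surrogate only when its
    relative uncertainty is within the limit; otherwise fall back to
    the full simulation."""
    if surrogate_mean == 0:
        return "full_fem"  # relative uncertainty undefined; be conservative
    rel_uncertainty = abs(surrogate_std / surrogate_mean)
    return "surrogate" if rel_uncertainty <= rel_uncertainty_limit else "full_fem"
```

Logging the computed ratio and threshold alongside each routing decision is what satisfies the "engineers must understand why" constraint: the explanation is the decision rule itself, not a post-hoc rationalization.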

Option E: Your Own Proposal

Propose your own engineering software system. It must be a system you’ve encountered professionally or academically, and it must be complex enough to require meaningful architectural decisions. Submit a 1-paragraph description for approval before beginning.

Deliverables

1. Architecture Diagram (1 page)

A single-page architecture diagram that communicates the system’s structure. The diagram must include:

  • All major components (services, databases, queues, external systems)
  • Communication protocols between components (REST, gRPC, WebSocket, message queues)
  • Data flows with direction arrows
  • Trust boundaries (where does authentication/authorization happen?)
  • Deployment targets (which cloud service model and specific service for each component)

Use a standard notation (C4 model component diagrams are recommended but not required). The diagram should be understandable by someone who has not read your justification document.

2. Justification Document (2 pages)

A written document with the following five sections:

| Section | Content | Approximate Length |
| --- | --- | --- |
| 1. Requirements Analysis | Functional and non-functional requirements. Identify the top 3 quality attributes (from Lesson 4) that drive your architecture. | ~1/3 page |
| 2. Architecture Decisions | 3–5 key decisions (e.g., monolith vs. microservices, SQL vs. NoSQL, sync vs. async). For each: what you chose, what you rejected, and why. | ~1/2 page |
| 3. Trade-Off Analysis | For each architecture decision, what did you sacrifice? What are the risks of your chosen approach? Under what conditions would you choose differently? | ~1/2 page |
| 4. Systems Thinking | Identify at least 2 feedback loops in your system (1 reinforcing, 1 balancing). Explain what behavior they drive and how your architecture manages them. | ~1/3 page |
| 5. Failure Modes | Identify 3 ways your system could fail. For each: what triggers it, what’s the blast radius, and what architectural mechanism mitigates it? | ~1/3 page |
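The Systems Thinking deliverable asks for one reinforcing and one balancing loop, and both behaviors can be demonstrated in a few lines. These toy simulations (an autoscaler chasing demand, and retries amplifying overload) are illustrative assumptions, not part of any required design:

```python
def balancing_capacity(demand, initial_capacity=0.0, gain=0.5):
    """Balancing loop: each tick, an autoscaler closes a fraction
    (`gain`) of the gap between observed demand and current capacity.
    The gap shrinks every step, so the loop is self-correcting."""
    capacity = initial_capacity
    history = []
    for d in demand:
        capacity += gain * (d - capacity)
        history.append(capacity)
    return history


def reinforcing_retries(load, overload_point, retry_factor=1.5, ticks=5):
    """Reinforcing loop: above the overload point, failed requests are
    retried, which adds load, which causes more failures. Growth feeds
    growth until something (a circuit breaker, a cap) interrupts it."""
    history = []
    for _ in range(ticks):
        if load > overload_point:
            load *= retry_factor
        history.append(load)
    return history
```

The balancing loop converges toward demand; the reinforcing loop is stable below the overload point but grows without bound above it. Your architecture section should name the mechanism (backoff, load shedding, admission control) that breaks each dangerous reinforcing loop.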

Presentation Format

Each presentation is 15 minutes total:

| Segment | Duration | Content |
| --- | --- | --- |
| Context | 2 min | What problem does the system solve? Who are the users? What are the key constraints? |
| Architecture walkthrough | 3 min | Walk through the architecture diagram. Explain each component and how they interact. |
| Key decisions | 3 min | Present 2–3 of your most important design decisions with trade-off analysis. |
| Q&A | 7 min | Audience and instructor questions. Expect questions about trade-offs, failure modes, and alternative approaches. |
Tip: The strongest presentations are those where the presenter can answer “why not?” questions confidently. If you chose PostgreSQL, be ready to explain why not MongoDB. If you chose microservices, be ready to explain why not a monolith. The ability to articulate rejected alternatives is the hallmark of genuine architectural reasoning.

Peer Review Framework

Each presentation is peer-reviewed by the audience. Use the following rubric, with each criterion weighted at 25%:

| Criterion (25% each) | Excellent (5) | Good (4) | Adequate (3) | Needs Work (1–2) |
| --- | --- | --- | --- | --- |
| Architecture Clarity | Diagram is clear, complete, and self-explanatory. All components, connections, and data flows are labeled. | Diagram is mostly clear with minor omissions. | Diagram exists but requires verbal explanation to understand. | Diagram is confusing, incomplete, or missing key components. |
| Design Principle Application | Explicitly references and correctly applies principles from the course (separation of concerns, least privilege, etc.). | Applies principles correctly but doesn’t always name them explicitly. | Some principles are applied but others are violated without justification. | Design violates fundamental principles without awareness. |
| Trade-Off Reasoning | Every major decision includes what was sacrificed and under what conditions the decision would change. | Trade-offs are discussed for most decisions. | Some trade-offs are mentioned but analysis is shallow. | Decisions are presented as obviously correct with no trade-off discussion. |
| Completeness | All deliverables complete. Failure modes, feedback loops, and security considerations are addressed. | Most deliverables complete with minor gaps. | Some deliverables incomplete or superficial. | Major deliverables missing or fundamentally incomplete. |

Course Review: What Changed in Your Thinking?

Looking back across 20 lessons, the goal was not to teach you a specific technology or framework. Technologies change. The goal was to shift how you think about software. Here are the five major shifts this course aimed to produce:

Shift 1: From Tools to Systems

Before: “I know Python, Docker, and AWS. I can build anything.”

After: “Tools are necessary but insufficient. The critical skill is designing systems where components interact correctly under real-world conditions.”

Knowing how to use a hammer doesn’t mean you can design a building. This course was about the building, not the hammer.

Shift 2: From Correctness to Trade-Offs

Before: “There’s a right answer. I need to find the best technology/pattern/approach.”

After: “Every decision sacrifices something. The question is not ‘what’s best?’ but ‘what’s best for this context, given these constraints, at this time?’”

Engineering is the art of making informed trade-offs under uncertainty. There are no right answers — only trade-offs you can justify.

Shift 3: From Single-Machine to Distributed

Before: “My code works on my machine. Deployment is someone else’s problem.”

After: “Distributed systems have fundamentally different failure modes. Network partitions, eventual consistency, and cascading failures are architectural concerns, not operational accidents.”

The network is not reliable. The clock is not accurate. The disk is not permanent. Designing for these realities is what separates scripts from systems.

Shift 4: From Implementation to Design

Before: “The hard part is writing the code.”

After: “The hard part is deciding what code to write. Architecture, API design, and data modeling determine whether the code you write will survive contact with reality.”

AI has made this shift even starker: if code generation is nearly free, the value is entirely in the design decisions that precede it.

Shift 5: From Individual to Systemic

Before: “If every component works, the system works.”

After: “Systems exhibit emergent behavior. A system can fail even when every individual component is functioning correctly. Feedback loops, delays, and interactions between components determine system behavior.”

This is the systems thinking perspective. It’s the most fundamental shift, and it applies far beyond software.

Career Perspective: The Engineer in 2026 and Beyond

The skills that will matter most for engineers working with software over the next decade are not specific technologies. They are meta-skills:

  1. Architectural judgment. The ability to evaluate designs against requirements and constraints, including designs generated by AI. This means understanding patterns, trade-offs, and failure modes well enough to say “this won’t work because…” and explain why.
  2. Trade-off communication. The ability to explain technical decisions to non-technical stakeholders. “We chose this approach because it optimizes for X at the cost of Y, which is acceptable because Z.” This is how engineering decisions get funded and supported.
  3. Systems reasoning. The ability to see feedback loops, emergent behavior, and failure cascades before they happen. This is the difference between firefighting and prevention.
  4. AI collaboration. The ability to effectively prompt, evaluate, and refine AI-generated work. This is not about knowing prompt engineering tricks — it’s about having enough domain knowledge to direct AI precisely and evaluate its output critically.
  5. Cross-domain translation. The ability to translate between engineering domains and software engineering. You understand both the physics of the problem and the architecture of the solution. This combination is rare and valuable.
Key insight: The engineers who will thrive are those who can do what AI cannot: exercise judgment under novel constraints, reason about trade-offs in context, understand organizational dynamics, and take accountability for design decisions. These are the skills this course has aimed to develop.

Final Exercise: “What Changed in My Thinking?”

Exercise: Reflect on the five shifts described above. Write brief answers to the following three questions:
  1. Which shift was the most significant for you personally? Describe a specific example from your own work where this shift would have changed how you approached a problem. What would you have done differently?
  2. Think about a system you’ve built or contributed to. If you were to redesign it with the knowledge from this course, what are the top 3 architectural decisions you would change? For each, explain: what you did originally, what you would do now, and why the new approach is better for your specific context.
  3. How has your relationship with AI coding tools changed? Before this course, how did you use AI for coding? How will you use it differently now? Be specific about which tasks you’ll delegate to AI and which you’ll keep as human decisions.

There are no wrong answers to these questions. The goal is honest reflection on how your mental models have shifted.

Recommended Reading

If this course sparked your interest, here is a curated reading path organized by progression:

Start Here

  • A Philosophy of Software Design by John Ousterhout — The clearest articulation of software complexity and how to manage it. Short, opinionated, and immediately applicable. Read this first.
  • Thinking in Systems: A Primer by Donella H. Meadows — The foundational text on systems thinking. Written for a general audience but deeply relevant to software. Explains feedback loops, leverage points, and system archetypes.

Then

  • Designing Data-Intensive Applications by Martin Kleppmann — The definitive guide to the architecture of modern data systems. Covers distributed systems, consistency models, stream processing, and batch processing with exceptional clarity.
  • Software Architecture: The Hard Parts by Neal Ford, Mark Richards, Pramod Sadalage & Zhamak Dehghani — Focuses specifically on trade-off analysis in architecture. Excellent for developing the judgment to choose between competing approaches.
  • The Pragmatic Programmer by David Thomas & Andrew Hunt (20th Anniversary Edition) — Timeless advice on software craftsmanship that spans from coding practices to career development. The chapter on “Tracer Bullets” alone is worth the read.

When Ready

  • Site Reliability Engineering by Betsy Beyer, Chris Jones, Jennifer Petoff & Niall Richard Murphy (Google SRE Book) — How Google operates large-scale systems. Free online. Focuses on the operational side: monitoring, incident response, capacity planning.
  • Building Evolutionary Architectures by Neal Ford, Rebecca Parsons & Patrick Kua — How to design architectures that can evolve over time. Introduces fitness functions for architectural governance.

Papers

  • “Out of the Tar Pit” by Ben Moseley & Peter Marks (2006) — A seminal paper on managing software complexity through functional and relational approaches.
  • “A Note on Distributed Computing” by Jim Waldo et al. (1994) — Why distributed systems are fundamentally different from local computation. Still relevant 30+ years later.
  • “How Do Committees Invent?” by Melvin Conway (1968) — The original Conway’s Law paper. Short and prescient.
  • “No Silver Bullet” by Fred Brooks (1986) — Why there is no single approach that will produce an order-of-magnitude improvement in software productivity. The accidental vs. essential complexity distinction remains fundamental.

Tools to Explore

  • Architecture Decision Records (ADRs) — Start documenting your design decisions using the format from Lesson 5. Tools: adr-tools, or simply markdown files in your repository.
  • C4 Model — A pragmatic approach to diagramming software architecture at four levels of abstraction. Tool: Structurizr.
  • Architectural Katas — Practice exercises for architectural design, similar to coding katas but focused on system design. Resource: Neal Ford’s Architectural Katas website.
  • Chaos Engineering — Deliberately inject failures into your systems to test resilience. Tools: Chaos Monkey (Netflix), Litmus (Kubernetes), AWS Fault Injection Simulator.
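The Architecture Decision Records entry above is the easiest tool to start with. As a hedged illustration, one ADR might look like the following; this mirrors the widely used Nygard-style template and may differ in detail from the Lesson 5 format, and the decision shown is a hypothetical example:

```markdown
# ADR-007: Use a message queue between job intake and solver workers

## Status
Accepted

## Context
FEA jobs run from minutes to hours; the web tier must stay responsive
while solver capacity varies with load.

## Decision
Decouple intake from execution with a durable message queue; workers
pull jobs at their own pace.

## Consequences
+ Intake stays responsive under bursts; workers scale independently.
- Job status becomes eventually consistent; a status store is needed.
- One more piece of infrastructure to operate and monitor.
```

Note that the Consequences section records costs as well as benefits — an ADR that lists only upsides is a press release, not a decision record.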

Final Quiz

Question: You’ve been asked to review an AI-generated architecture for a new engineering platform. The AI produced a detailed design with component diagrams, API specifications, and deployment configurations. What is the correct approach?

  a) Accept the architecture as-is, since AI tools are now reliable enough for production architecture.
  b) Reject the architecture entirely and design from scratch, since AI cannot do architecture.
  c) Evaluate the architecture against your requirements and constraints. Accept what fits, redesign what doesn’t, and document the reasoning for both.
  d) Run the AI prompt again with more detail and accept the improved output.
Answer

c) Evaluate the architecture against your requirements and constraints. Accept what fits, redesign what doesn’t, and document the reasoning for both.

This is the “architect as AI orchestrator” model from Lesson 18. AI-generated architectures are useful starting points, not finished products. Option (a) abdicates professional responsibility — you are accountable for the architecture, regardless of who or what generated it. Option (b) wastes the legitimate value AI provides in generating variants and identifying patterns. Option (d) assumes the problem is insufficient input rather than insufficient evaluation. The correct approach is to apply your engineering judgment: compare the generated architecture against requirements (completeness), constraints (feasibility), quality attributes (trade-offs), and failure modes (resilience). Keep what’s good, fix what’s not, and document why for both decisions. This is what it means to be an engineer.