For the past few years, we've been building complex hardware/software systems with AI-assisted tools and kept running into the same structural problem: classical Scrum optimizes for adaptability but has no intrinsic mechanism for verifiable, auditable quality at scale. The traditional V-Model gives you traceability and rigor but it's rigid, project-bound, and collapses under iteration.
We didn't want to choose between them. So we spent some time working out whether they could be unified, and we validated the approach on real hardware/software projects before publishing anything.
The result is Agile V, an open standard we're publishing under CC BY-SA 4.0. The core idea is that the "V" shouldn't be a one-time project journey; it should vibrate at the task level, continuously, with every build.
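To make the "vibrating V" concrete, here is a minimal sketch of one descend/ascend cycle run per task rather than once per project. All function names and data shapes are illustrative stand-ins, not part of the Agile V standard itself:

```python
# Hypothetical stages of one task-scoped V cycle.

def design(task):
    # Left side of the V: capture intent as checkable criteria.
    return {"task": task, "criteria": [f"{task} satisfies its requirement"]}

def build(spec):
    # Bottom of the V: produce the artifact for this task alone.
    return {"task": spec["task"], "artifact": f"build({spec['task']})"}

def verify(artifact, spec):
    # Right side of the V: evidence traces back to the design criteria.
    return {"task": spec["task"], "passed": True, "trace": spec["criteria"]}

def micro_v(task):
    """One full descend/ascend of the V at task scope."""
    spec = design(task)
    artifact = build(spec)
    return verify(artifact, spec)

# The V repeats per task, per build -- not once per project.
evidence = [micro_v(t) for t in ["parser", "driver"]]
print(evidence[0]["passed"])  # -> True
```

The point of the structure is that verification evidence is produced at the same cadence as the builds, instead of being deferred to a project-level integration phase.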
A few things we think are worth discussing:
The Red Team Protocol. The Test Design Agent reads the requirements, not the code, to build the verification suite. This prevents "success bias" — where tests only check what the code happens to do rather than what it was supposed to do. It mirrors how a separate QA team should work in practice, but enforces it structurally.
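One way to enforce that separation structurally is to make the implementation unavailable by construction: the test designer's interface accepts only requirements, never code. A minimal sketch, with hypothetical names (`Requirement`, `design_tests` are illustrations, not the spec's API):

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    id: str
    text: str

def design_tests(requirements: list) -> list:
    """Derive a verification suite from requirements alone.

    The implementation is deliberately absent from this signature:
    the test designer can only encode what the code was supposed
    to do, which is what blocks "success bias".
    """
    suite = []
    for req in requirements:
        suite.append({
            "requirement_id": req.id,   # traceability link back to the spec
            "name": f"verify_{req.id}",
            "oracle": req.text,         # expected behavior comes from the spec
        })
    return suite

reqs = [Requirement("REQ-001", "Output voltage stays within 4.9-5.1 V")]
suite = design_tests(reqs)
print(suite[0]["requirement_id"])  # -> REQ-001
```

Because every generated test carries a `requirement_id`, the same structure also yields the requirement-to-test traceability matrix for free.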
Living Compliance. In regulated industries (medical, aerospace, automotive), compliance documentation is typically a post-hoc chore that happens before an audit. We think this is the wrong model. Compliance should be an inherent output of the workflow itself: the audit trail writes itself as decisions are made.
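Mechanically, "the audit trail writes itself" can be as simple as an append-only record emitted at the moment each decision is made. A sketch, assuming a JSONL log file; the field names here are our illustration, not a mandated schema:

```python
import datetime
import json

def record_decision(log_path, actor, decision, rationale, artifacts):
    """Append one audit record when a decision is made, so the
    compliance trail is a byproduct of the workflow rather than a
    pre-audit reconstruction."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "decision": decision,
        "rationale": rationale,
        "artifacts": artifacts,  # e.g. test report IDs, commit hashes
    }
    with open(log_path, "a") as f:  # append-only: history is never rewritten
        f.write(json.dumps(entry) + "\n")

record_decision(
    "audit_trail.jsonl",
    actor="reviewer",
    decision="approve REQ-001",
    rationale="all derived tests pass",
    artifacts=["commit abc123"],
)
```

Because each record is written at decision time and the log is append-only, an audit becomes a read of existing evidence rather than a reconstruction exercise.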
Human Curation over Human Execution. The shift AI actually enables isn't automating the build — it's changing what humans are responsible for. Humans should be designing intent, reviewing evidence, and making judgment calls. Not writing boilerplate.
We're genuinely curious whether others have run into the same tension between Agile and V-Model in regulated or safety-critical work, and whether this framing resonates or misses something. Would love to hear from people who've tried to apply formal verification or traceability in fast-moving teams.
Christopher and Joshua