How to Write Software Requirements That Developers Can Actually Build From
Write clear software requirements with user stories, acceptance criteria, and edge cases. Discovery session framework included.
TL;DR
Bad requirements are among the most common reasons software projects fail. This guide gives you a repeatable framework: run structured discovery calls, write user stories with measurable acceptance criteria, systematically identify edge cases, and produce a requirement document that a developer can estimate and build from — without a single "the system should be user-friendly" in sight.
Prerequisites
- Access to stakeholders or end users who can articulate the business problem
- A text editor or collaboration tool (Google Docs, Notion, Confluence)
- Basic understanding of the domain you are specifying for
- Willingness to say "I don't know yet" and iterate
Step 1: The Discovery Call Framework
Before you write a single requirement, you need to understand the problem. Most requirement failures trace back to skipping or rushing this step. Here is a structured framework for running a discovery call that actually produces useful output.
Pre-Call Preparation (15 minutes)
Send the stakeholder a brief questionnaire at least 24 hours before the call:
1. What problem are we solving? (1-2 sentences)
2. Who experiences this problem? (specific roles/personas)
3. What happens today without this solution? (current workaround)
4. What does "done" look like? (success metric)
5. Are there any hard deadlines or constraints?
The Call Structure (60 minutes)
Use this agenda — deviate as needed, but always cover these blocks:
00:00 - 05:00 Context alignment (restate what you know)
05:00 - 20:00 Problem deep-dive ("show me how you do this today")
20:00 - 35:00 Solution exploration ("what if we...")
35:00 - 50:00 Constraint mapping (budget, timeline, integrations)
50:00 - 55:00 Priority ranking (must-have vs nice-to-have)
55:00 - 60:00 Next steps and timeline
Key Questions to Ask
These questions consistently uncover hidden requirements:
- "Walk me through the last time this happened." — Forces concrete examples instead of abstract wishes.
- "What would you do if [feature X] wasn't available?" — Reveals which features are truly essential.
- "Who else touches this data/process?" — Exposes integration requirements and stakeholders you haven't talked to.
- "What's the worst thing that could happen?" — Identifies error handling and security requirements.
- "How will you know this is working?" — Defines measurable success criteria.
Post-Call Deliverable
Within 24 hours, send a structured summary back to the stakeholder for confirmation. This is your first draft of requirements — not the final document, but the foundation.
Step 2: Writing User Stories That Work
A user story is not a requirement by itself. It is a placeholder for a conversation. But a well-written user story makes that conversation productive.
The Format
As a [specific role],
I want to [concrete action],
So that [measurable business outcome].
Good vs Bad Examples
BAD: As a user, I want a dashboard, so that I can see data.
GOOD: As a warehouse manager, I want to see today's pending
shipments sorted by deadline, so that I can prioritize
which orders to pick first.
BAD: As an admin, I want to manage users.
GOOD: As an account administrator, I want to deactivate a user
account and reassign their open tasks to another team
member, so that no work is lost when someone leaves.
The INVEST Checklist
Every user story should be:
- Independent — Can be developed without depending on other stories
- Negotiable — Details can be discussed; it's not a contract yet
- Valuable — Delivers clear value to a user or the business
- Estimable — A developer can roughly size it
- Small — Can be completed in one sprint (1-2 weeks max)
- Testable — You can write a test that proves it works
If a story fails any of these, split it or rewrite it. A story that fails "Estimable" usually means the problem is not well enough understood — go back to discovery.
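The story format itself can be linted mechanically. As a rough sketch (the `lint_story` helper and its list of generic roles are hypothetical, and a regex check is no substitute for the conversation), a quick check for the "As a / I want / so that" skeleton might look like this:

```python
import re

# Hypothetical helper: a quick lint for user-story phrasing.
# Flags stories that miss the skeleton or use an overly generic role,
# which usually signals a weak story.
GENERIC_ROLES = {"user", "person", "someone", "customer"}

STORY_PATTERN = re.compile(
    r"^As an? (?P<role>.+?), I want (?P<action>.+?), so that (?P<outcome>.+)$",
    re.IGNORECASE,
)

def lint_story(story: str) -> list[str]:
    """Return a list of problems found in a one-line user story."""
    match = STORY_PATTERN.match(story.strip())
    if not match:
        return ["Story does not follow 'As a ..., I want ..., so that ...'"]
    problems = []
    role = match.group("role").strip().lower()
    if role in GENERIC_ROLES:
        problems.append(f"Role '{role}' is too generic; name a specific persona")
    return problems

# Flags the generic "user" role:
print(lint_story("As a user, I want a dashboard, so that I can see data"))
# Passes cleanly:
print(lint_story("As a warehouse manager, I want to see today's pending "
                 "shipments sorted by deadline, so that I can prioritize picking"))
```

A lint like this catches the mechanical failures; the INVEST checklist still has to be applied by humans.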
Step 3: Acceptance Criteria — The Contract
Acceptance criteria turn vague stories into buildable specifications. They are the contract between the product owner and the development team. If the acceptance criteria pass, the story is done. Period.
Given-When-Then Format
Story: As a warehouse manager, I want to see today's pending
shipments sorted by deadline.
Acceptance Criteria:
Given I am logged in as a warehouse manager
And there are 3 pending shipments due today
When I open the dashboard
Then I see exactly those 3 shipments
And they are sorted by deadline ascending
And each entry shows: order ID, customer name, deadline time,
item count
Given I am logged in as a warehouse manager
And there are no pending shipments due today
When I open the dashboard
Then I see an empty state message: "No shipments due today"
Given I am logged in as a warehouse manager
And a shipment deadline has passed
When I open the dashboard
Then that shipment is highlighted in red
And it appears at the top of the list regardless of sort order
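Well-written Given-When-Then criteria map almost one-to-one onto automated tests. As a sketch (the `Dashboard` and `Shipment` classes below are hypothetical stand-ins for the real system), the empty-state and sorting criteria above might become:

```python
from dataclasses import dataclass, field

# Hypothetical minimal model of the dashboard behavior described above.
@dataclass
class Shipment:
    order_id: str
    deadline: str  # e.g. "14:00"

@dataclass
class Dashboard:
    shipments: list = field(default_factory=list)

    def pending_today(self):
        # Sorted by deadline ascending, per the acceptance criteria
        return sorted(self.shipments, key=lambda s: s.deadline)

    def empty_state_message(self):
        return "No shipments due today" if not self.shipments else None

def test_empty_state():
    # Given there are no pending shipments due today
    dashboard = Dashboard(shipments=[])
    # When the warehouse manager opens the dashboard
    message = dashboard.empty_state_message()
    # Then they see the empty state message
    assert message == "No shipments due today"

def test_sorted_by_deadline():
    # Given 3 pending shipments due today
    dashboard = Dashboard(shipments=[
        Shipment("B", "15:30"), Shipment("A", "09:00"), Shipment("C", "12:15"),
    ])
    # When the dashboard is opened
    listed = dashboard.pending_today()
    # Then shipments appear sorted by deadline ascending
    assert [s.order_id for s in listed] == ["A", "C", "B"]

test_empty_state()
test_sorted_by_deadline()
print("acceptance criteria pass")
```

If a criterion cannot be translated into a test this directly, it is probably not specific enough yet.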
Rules for Good Acceptance Criteria
- Be specific about data. "Shows relevant information" is useless. List exactly which fields.
- Cover the empty state. What happens when there is no data? Specs forget this case more often than any other.
- Cover the error state. What happens when the API call fails? When the user lacks permission?
- Include boundaries. What if there are 10,000 shipments? Is there pagination? A max display count?
- Make assertions testable. "The page loads quickly" is not testable. "The page loads in under 2 seconds with 100 records" is.
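The last rule is worth demonstrating. "Under 2 seconds with 100 records" is a claim you can assert in code; "quickly" is not. A sketch (the `render_dashboard` function below is a hypothetical stand-in for the real endpoint, and real performance tests need controlled environments and repeated runs):

```python
import time

# Hypothetical render function standing in for the real dashboard endpoint.
def render_dashboard(records):
    return "\n".join(f"row {r}" for r in records)

def test_loads_under_two_seconds_with_100_records():
    records = list(range(100))
    start = time.perf_counter()
    render_dashboard(records)
    elapsed = time.perf_counter() - start
    # The acceptance criterion, verbatim, as an assertion:
    assert elapsed < 2.0, f"took {elapsed:.2f}s, budget is 2s"

test_loads_under_two_seconds_with_100_records()
print("performance criterion met")
```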
Step 4: Edge Case Identification
Edge cases are where bugs live. A systematic approach to finding them saves weeks of back-and-forth during development.
The Edge Case Matrix
For every feature, walk through these categories:
Category | Questions to Ask
------------------+-------------------------------------------
Empty state | What if there's no data? Zero items?
Boundary values | What about max length? Negative numbers? Zero?
Permissions | What if user lacks access? Role changes mid-session?
Concurrency | What if two users edit simultaneously?
Timezones | Does this cross timezone boundaries?
Connectivity | What if the network drops mid-action?
Data formats | Unicode? Special characters? RTL text?
Scale | What if there are 1M records? 1 record?
State transitions | What if the item changes state during viewing?
External deps | What if a third-party API is down?
Practical Exercise
Take your most complex user story. Set a timer for 15 minutes. Walk through the matrix above. You will almost certainly find edge cases you hadn't considered. Write each one as an additional acceptance criterion, or as a separate story if it's large enough.
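The matrix rows translate naturally into a table-driven test plan. As an illustration (the `validate_item_count` validator and its limits are hypothetical), here is how the empty-state, boundary, scale, and data-format rows become concrete cases:

```python
# Hypothetical validator for an "item count" input field, used to show
# how edge-case matrix rows become concrete test cases.
def validate_item_count(value):
    """Return (ok, error) for a shipment's item count."""
    if not isinstance(value, int):
        return (False, "must be an integer")
    if value < 1:
        return (False, "must be at least 1")
    if value > 10_000:
        return (False, "exceeds maximum of 10,000")
    return (True, None)

# Each tuple is one edge-case matrix row turned into a test case:
# (input, expected_ok, matrix category)
cases = [
    (0, False, "boundary: zero"),
    (-5, False, "boundary: negative"),
    (1, True, "boundary: minimum valid"),
    (10_000, True, "boundary: maximum valid"),
    (10_001, False, "scale: over maximum"),
    ("12", False, "data format: string instead of int"),
]

for value, expected_ok, category in cases:
    ok, _ = validate_item_count(value)
    assert ok == expected_ok, f"failed: {category}"
print(f"{len(cases)} edge cases covered")
```

Writing the cases as data makes gaps visible: an empty row for a matrix category means that category has not been thought through yet.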
Step 5: The Requirement Document Format
Here is a proven template that balances completeness with readability. Developers, designers, and product owners can all work from this document.
# Feature: [Name]
## Context
- Problem statement (2-3 sentences)
- Current workaround
- Success metric
## User Stories
- US-001: As a [role], I want [action], so that [outcome]
- US-002: ...
## Acceptance Criteria
### US-001
- AC-001.1: Given... When... Then...
- AC-001.2: Given... When... Then...
## Data Requirements
- Input fields: [list with types, constraints, validation rules]
- Output fields: [list with formats]
- Storage: [where does this data live?]
## Non-Functional Requirements
- Performance: [specific targets, e.g., "< 2s response time"]
- Security: [auth method, data sensitivity level]
- Accessibility: [WCAG level, specific needs]
## Dependencies
- External systems: [list with API docs links]
- Internal services: [list]
- Data migrations: [if any]
## Out of Scope
- [Explicitly list what this feature does NOT include]
## Open Questions
- [List unresolved items with assignee and due date]
The "Out of Scope" section is arguably the most important. It prevents scope creep by making boundaries explicit. If it's not in scope, it's not in the estimate, and it's not in the sprint.
Step 6: Anti-Patterns to Avoid
These are the most common requirement mistakes, the same ones that surface in troubled project after troubled project:
1. The "Solution Spec" Disguised as Requirements
BAD: "Use a PostgreSQL database with a users table that has
columns id, name, email."
GOOD: "The system must store user profiles with name and email.
Data must persist across sessions and be queryable by
email."
Requirements describe what and why. Architecture describes how. Don't mix them unless the technology is a hard constraint.
2. The "Vague Adjective" Trap
Flag these words in any requirement document — they are almost always meaningless:
- "User-friendly" — Compared to what? Measured how?
- "Fast" — 100ms? 5 seconds? Under what load?
- "Scalable" — To 100 users? 10 million?
- "Secure" — Against what threat model?
- "Modern" — This will age badly. Be specific about technologies or patterns.
3. The "Missing Negative" Problem
Requirements that describe only the happy path. For every behavior, ask: "What should the system do when [this step] fails?" Specify failures, invalid input, and missing permissions explicitly.
4. The "Kitchen Sink" Requirement
BAD: "The system should handle all user management including
registration, login, password reset, profile editing,
role assignment, account deletion, audit logging, and
SSO integration."
This is 8 separate features stuffed into one sentence. Each needs its own story, criteria, and estimate.
5. The "Copy-Paste from Competitor" Approach
"It should work like Slack" is not a requirement. Slack has thousands of features built by hundreds of engineers over 10 years. Be specific about which behavior you want and why.
Troubleshooting & Considerations
Stakeholders who can't articulate what they want
Show them examples. Create low-fidelity mockups (pen and paper is fine). Ask them to react to something concrete rather than create from scratch. "Is it more like A or more like B?" is easier to answer than "What do you want?"
Requirements that keep changing
Some change is expected and healthy. But if core requirements change after development starts, you have a discovery problem, not a development problem. Go back to the discovery call framework and do it properly.
Developers say "I can't estimate this"
This means the requirement is not specific enough. Ask the developer: "What information do you need to estimate?" Document their questions, get answers from stakeholders, and update the requirement.
Too many requirements, not enough time
Use the MoSCoW method: Must have, Should have, Could have, Won't have. Be honest about "Won't have" — it's not "Won't have ever," it's "Won't have in this release." This forces prioritization and prevents the kitchen sink problem.
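A MoSCoW-sorted backlog is easy to produce mechanically once every story carries a priority. A minimal sketch (the story IDs and priorities below are made up for illustration):

```python
from collections import defaultdict

# MoSCoW priority order, highest first.
MOSCOW_ORDER = ["Must", "Should", "Could", "Won't"]

# Hypothetical backlog: (story id, MoSCoW priority)
backlog = [
    ("US-003", "Could"),
    ("US-001", "Must"),
    ("US-004", "Won't"),
    ("US-002", "Should"),
    ("US-005", "Must"),
]

def group_by_moscow(items):
    """Group story IDs by MoSCoW priority, in priority order."""
    groups = defaultdict(list)
    for story_id, priority in items:
        groups[priority].append(story_id)
    return {p: groups[p] for p in MOSCOW_ORDER}

plan = group_by_moscow(backlog)
for priority, stories in plan.items():
    print(f"{priority}: {', '.join(stories) or '-'}")
```

The value is not the sorting itself but the forcing function: every story must land in exactly one bucket, and "Won't" must be written down rather than left implicit.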
Prevention & Best Practices
Continuous Discovery
Don't treat requirements as a one-time phase. Run brief discovery sessions (30 minutes) every sprint to refine upcoming work. This catches misunderstandings early when they're cheap to fix.
The Three Amigos
Before development starts on any story, have a 15-minute conversation with exactly three people: a product person, a developer, and a tester. Each brings a different perspective that catches different gaps.
Requirement Reviews
Treat requirements like code — they need reviews. Have a developer read through acceptance criteria and ask: "Can I build exactly this with no ambiguity?" If not, refine.
Living Documentation
Requirements that live in email threads or Slack messages are requirements that get lost. Use a single source of truth (wiki, Notion, Confluence) and link from your project management tool. Every story card should link to its detailed spec.
Version Control for Requirements
Track changes to requirements. When a stakeholder asks "why did we build it this way?", you need to point to the requirement and who approved it. This is not bureaucracy — it's protection for everyone involved.
Template Consistency
Use the same requirement document format for every feature. Consistency reduces cognitive load for everyone reading them and makes it harder to accidentally skip a section.
Need Expert Help?
Want me to join your client call and deliver the spec? €150.
Book Now — €150. 100% money-back guarantee.