In the beginning, there was waterfall. It made perfect sense. You define your requirements first, because how can you know what to build unless you have well-understood requirements?
Then you design your system. Writing code without a plan just ends up as an unmaintainable mess.
Then comes the code. So much code that all works nicely with the system design, or at least can be forced to work with the system design.
Then comes testing. Sure, some bugs are bound to appear, but nothing that will challenge the requirements or the system design.
Then you release! Users are happy and everyone lives happily ever after.
It never really worked out that way. Software projects always ended up over budget, and developers often crunched to get releases out. A lot of the time, the software was also fundamentally unusable and got abandoned. There's some survivorship bias here, because you don't hear about those projects as much as the successful ones.
The waterfall model made sense at the time because it looks similar to other engineering or manufacturing projects.
Two important factors are at play here.
The first is that civil engineering projects also tend to go vastly over budget (looking at you, Big Dig).
The second is that software is unlike most projects. You don't build a skyscraper assuming you're going to release new versions of that skyscraper every month, or even every year. The cost of most engineering projects prohibits rapid iteration. Software isn't cheap to build, but it is a lot cheaper to iterate on than construction.
People also expect iteration on software. No one goes to a skyscraper, thinks "this would have been nicer 2 feet to the left" or "a rollercoaster at the top would be cool", and expects that to be a reasonable request. People do, however, reasonably expect new features in the software they use, or the UI to be refined over time.
After years of suffering under the waterfall model, a group of software developers realized there was a better way. Introducing Agile Software Development!
This is going to be a loaded topic, because there's a big difference between the original principles and how agile software development is implemented today. I like the original principles. I particularly like the very first one:
"Individuals and interactions over processes and tools". There's a lot of meaning in there that we'll dig into later.
Instead of spreading out the waterfall process over the course of say two years, the common way of being "agile" is condensing and repeating the waterfall process every two weeks or every month.
While most people accepted that the waterfall was broken, they still assumed that everything in the waterfall should be done in those proportions and in that order. Hell, I believed that for my first decade as a software developer.
What have we ended up with? Software projects are still over budget. Developers still crunch to get releases out. Users are often unhappy with most software that gets built.
My original interpretation of "Individuals and interactions over processes and tools" was that processes should be built around the team you have, not some team of theoretical professionals. For the most part, this still holds. Coming up with a process and forcing it on the team is the equivalent of forcing a square peg into a round hole.
I once worked at a company that mandated that story points be used across all engineering teams. My team struggled quite a bit with it, so I had them estimate in hours instead. We ended up with far more accurate estimates by treating the hours as story points. This is the "wrong" way to handle story points according to the agile consultants' doctrine, but having a doctrine at all is the exact opposite of "Individuals and interactions over processes and tools". The point of agile software development is to be pragmatic and try things until you find something that works.
The problem with my interpretation was it was incredibly limited in scope. I was so focused on the development team that I wasn't seeing everything else. The biggest part of what I was not seeing was the mentality that requirements come before writing code.
Requirements come before writing code.
That just sounds so sensible. How can it possibly be a problem?
It pushes some of the most important people in software development, the people who use the software, out to the bookends of the process.
Users are involved in the requirements gathering process as part of user research. We talk to them again when the software is close to release. In the meantime, we assume the requirements we have gathered are excellent and plan around them. We design complex systems around them. We build our test cases around them. And when what we've built proves to be unworkable, we fall for the sunk cost fallacy and try to make what we built work anyway.
Sprints are supposed to solve this with "rapid" iteration of two weeks. The problem is that a lot of meaningful software can't be built in two weeks. So instead, we split the initial development into sprints, but the first release for users could be months away.
I only recently internalized something I started doing a few years ago, but I've been using an alternative that's worked quite well: I write code as part of requirements gathering.
If we look at science:
Data is analyzed
A hypothesis is formed
An experiment is run
Data is collected
Repeat
This is sensible because hypotheses are often wrong. The process is not built around proving hypotheses right. The process is built around discovering the truth.
In software development, our truth is creating something that provides value. The only experiment that matters is when we have working software in front of users. Not clickable UI prototypes. Working software.
The notion that requirements come before writing code assumes that our hypotheses are always correct. I believe this single assumption is the biggest hurdle facing every software project today.
So how does writing code in the requirements gathering process help? If meaningful software can't be built in two weeks, how can you get working software in the hands of users quickly?
There's "working" as in the software can be released and used by people on its own. And then there's "working" as in users can use the software, but its not releasable as there's no data persistence, disaster recovery, or security of any kind.
"Working" does not have to mean ready for production. Working just means a user can do some task in the software and provide feedback. This can mean a rapidly built single page app that uses localstorage for persistence. It may have no login and only runs on a developer's machine, but that developer can take their machine to the user and collect feedback. Instead of waiting for a two week sprint to talk to users, you can talk to users every week. Or every other day. Or every day. Or multiple times a day.
That code is obviously going to be pretty trash. That's fine. Maybe a function can be salvaged here and there, but the point isn't to write code that can be used long term. The point is to make sure that when you write code for the long term, it will be the right code.
Does that mean requirements gathering is going to take a long time? Yes. This probably worries people about their timelines. I'd actually argue this will result in faster development.
Think about your own software development process. How much of it is in meetings? How much of it is asking questions on Slack and then waiting for responses from product managers? How much of it is spent pivoting mid-development cycle? How much of it is spent arguing over what users actually want? How much of it is spent staring at your screen because you're anxiously wondering if anything being built is going to matter? (Maybe that last one was just past me, but I doubt it)
A lot of time is spent on that lack of confidence, and none of it gets logged in a JIRA ticket or even talked about in a retrospective.
That time could be spent on getting confidence.
This 👆