Once you get to the point of something working, the business is going to move on ahead without giving your team any time to go back to refine and improve.
Bug fixes will be given some time, but telling management/business "we need 2 months to go clean up everything we've built over the last 6, since we now have an idea of how to structure this capability" is going to be met with a laugh and a no.
Inevitably something will break, or a new feature won't be possible because of existing limitations, and everyone will get mad that no one told them things could break without improvements, even though you told them well beforehand that the ground was shaky.
I think companies not prone to this are ones whose product is a technical one, like cloud services, where the business really is the engineering and engineering isn't a means to an end.
Personally it feels like modern management and organizational practices have prioritized feature velocity over a lot of other concerns. Business likes this because they can come in, request X number of features, and then everyone works like hell to get those features in.
Then, seeing the speed with which those first X features got implemented, they now request Y features and the cycle repeats.
But constantly measuring feature/release velocity means that things that don't directly benefit new features/releases get de-emphasized, such as encouraging developers not just to implement a feature, but to go back to their code and disentangle what they just wrote from any other code they may have stepped on. And it's even harder to get the business to agree to pause features and instead give time to go back, look at what's there, and figure out how to make it possible to add the next Y features.
There's something intoxicating about having a bunch of teams pushing out new updates, but these high velocities can make it near impossible to revisit anything. Hell, I've gone back to codebases on projects I haven't touched in only a few months to find everything riddled with spaghetti code and weird hacks to bypass systems. It works, but each release develops longer and longer bug-fixing time.
In my experience one way this manifests: on every new project, PM and management rush you to use some dead-simple user authentication, so you do. Then 6-8 months later they're asking you to add RBAC to all the features, when nothing like that was ever planned for. It's the most obvious eventuality with any of these apps sold B2B, and it's always put off; it fucks up the architecture and forces various "pivots" because management couldn't be bothered to listen and prioritize foundational stuff.
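And the foundational seam doesn't have to be expensive. A minimal sketch (Python; all names hypothetical, not any particular framework's API): even if v1 authorization is "everyone can do everything", routing every check through one chokepoint means RBAC can be added later without rewriting every feature.

```python
# Hypothetical sketch: a single authorization chokepoint that dead-simple v1
# auth and a later RBAC system can both live behind.
from functools import wraps

def current_roles(user) -> set[str]:
    # v1: everyone is effectively an admin; v2 swaps this for a real role lookup.
    return {"admin"}

def requires(role: str):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if role not in current_roles(user):
                raise PermissionError(f"{role} required")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("admin")
def delete_report(user, report_id: str):
    ...  # feature code never needs to change when roles get real
```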
Management practices are bad because managers/directors/VPs are not the first to be fired.
In a scenario where management would be the first to be fired for non-delivery of features, management would go to extreme lengths to improve developer environments, pay down tech debt, and keep people happy for longer.
100% this. Velocity above all else, in conjunction with the bad practice of treating product as the source of truth pretty much always, where pushing back on the technically infeasible (for the time frame, usually) is a no-go.
I know as engineers we have some salary privilege, but few groups get squeezed as hard by both sides of the business layers as engineers do, in my experience.
> Personally it feels like modern management and organizational practices have prioritized feature velocity over a lot of other concerns.
I agree. In the defense of business, our industry typically does a horrible job explaining concerns other than features. This isn't going to change things across the board, but I think a meaningful percentage of businesses will make better decisions with more mature presentations.
Here’s what I do: once I’ve gotten something working, I don’t put up the PR immediately. I let it sit for a day, then I look at it the next morning with fresh eyes. Inevitably, I find things to clean up, improve, and refactor.
Maybe you’re talking about getting something “working” in the larger sense, as in a full feature, made up of lots of PRs, but slowing things down just a little bit and focusing on quality in each individual PR is much easier to budget for (no one notices the extra half day) and buys you a lot of quality in the long run.
In practice it isn't even that much more likely to break, or even that much harder to add features to. The code is just much uglier and more complex than it could be, and probably slower. But it runs and keeps running.
Most of the time those business types are correct on this.
The only thing is that once the original team has moved on, if the code is too complex it can become almost impossible to change.
It's a hard balance to strike. On one hand, the code is ugly and difficult to understand, but it has that ugliness for a reason: it's solving edge cases that you don't remember, and a rip-and-replace is always expensive with no guarantee it won't devolve just as quickly.
How do you strike the balance between the dev team wanting to fix unbroken code for long-term health, and investing in new features that grow the business?
Personally, I am biased towards encapsulation as a means to handle a lot of these types of tech debt. Wrap the old stuff in an orchestration layer and build new features with the orchestration layer in the middle. It's a bit of sweeping the dirt under a rug, but it also gives you a real solid base for later coming back and cleaning up the ugliness, if it's really needed, by giving solid contracts between the consumers and the orchestration layer, and between the legacy system and the orchestration layer.
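A minimal sketch of that shape (Python; the inventory example and all names are hypothetical, not from any particular system):

```python
class LegacyInventory:
    """Stand-in for the tangled legacy module nobody wants to touch."""
    def qty(self, sku, warehouse_code, include_reserved, legacy_flags=None):
        return 42  # placeholder for the real, messy logic

class InventoryFacade:
    """The orchestration layer: a small, stable contract for consumers."""
    def __init__(self, legacy: LegacyInventory):
        self._legacy = legacy

    def available_stock(self, sku: str) -> int:
        # All the legacy quirks (flags, reserved stock) are encapsulated here.
        return self._legacy.qty(sku, warehouse_code="*", include_reserved=False)

# New features depend only on InventoryFacade, so the legacy module can later
# be cleaned up or replaced behind the same contract without touching consumers.
```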
> "we need 2 months to go clean up everything we've built over the last 6, since we now have an idea of how to structure this capability" is going to be met with a laugh and a no.
How do we break this mold? While this absolutely does happen with some management, it's not all management in my experience.
I am an engineer who's found myself in a management role, and I want my team to do exactly this--don't invest tons of up-front effort trying to guess the models and abstractions we're gonna need. Build, iterate, and we'll clean it up when we know what we don't know right now.
It is blatantly obvious to me that things will be on shaky ground; I have a keen sense of what will break and when. And I'm totally good with that! I put "architect for real" time into the roadmap.
But even still, I get pushback, sometimes a lot of it. Like the idea of shipping functional-but-ugly code is somehow totally unacceptable for some reason (even when it's obvious the "pretty" version isn't even future-proofed or appreciably better). And the excuse is usually "Well we'll never have time to fix it".
In a 20+ year career so far, I've never seen a company break the mold. At best, the company will pay lip service to software quality, but reality is that everyone up the food chain is incentivized to run as fast as they can cramming features. At worst, quality will be deliberately shunned: "Get it to barely work to the point where the customer won't outright reject it, and then ship it!"
If there is any company out there that still encourages and rewards the craftsmanship and attention to quality/detail that is embodied in that old Steve Jobs quote[1] I haven't found that company yet.
1: “When you’re a carpenter making a beautiful chest of drawers, you’re not going to use a piece of plywood on the back, even though it faces the wall and nobody will ever see it. You’ll know it’s there, so you’re going to use a beautiful piece of wood on the back. For you to sleep well at night, the aesthetic, the quality, has to be carried all the way through.”
I was going to point out that, sadly, it's not just software companies that are run like that. Honestly I wouldn't know who to point the finger at; consumers crave "features" (be it software or hardware ones) because companies have been pushing them as the No. 1 selling point.
> Like the idea of shipping functional-but-ugly code is somehow totally unacceptable for some reason (even when it's obvious the "pretty" version isn't even future-proofed or appreciably better). And the excuse is usually "Well we'll never have time to fix it".
You are thinking of it from your team's perspective. We have some shipping code, we just need to clean it up and architect it properly to do the exact same thing! Sounds reasonable, right?
Now think about it from the point of view of the customer that's paying you money for the feature. Do you think they would want a feature that has "architect for real" in the roadmap after they pay you money for it? Would you pay money for such software? [Yes, they don't need to know, but think about what if they did.]
How this actually plays out in the real world is that customers pay money based on a somewhat vague promise of features in the future. If your "architect for real" delays the launch of those features (as it likely will; these 'simple refactors' have a habit of becoming not-so-simple), then it's a bad look. So customer-focused teams try to get it right the first time and deliver a cohesive, architected feature.
I did say "try" above. Like all real software, it's impossible to be perfect. But "we'll do it for real afterwards" attitudes like yours often tend to produce friction, and this is my attempt to explain why.
Tech debt means the next feature is 5% more expensive... compounding.
I'm now on a project that has spent 2 months (10 person-months) to add a box to the checkout screen for the cashier to enter a special discount amount. No business logic, straight pass-through. Originally I thought it could be done in a week by one person: nope. Once you get into the weeds, it takes this long.
This is now 4000% more expensive (literally, do the math) because of compound interest. Tell me that isn't valuable to the customer.
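The arithmetic, spelled out (figures from the comments above; ~4.3 weeks per person-month is my assumption):

```python
import math

estimate_weeks = 1        # original estimate: one person-week
actual_weeks = 10 * 4.3   # actual: 10 person-months ≈ 43 person-weeks

print(f"{actual_weeks / estimate_weeks:.0%}")  # ≈ 4300% of the original estimate
# At the parent comment's 5%-per-feature compounding, that corresponds to
# roughly log(43)/log(1.05) ≈ 77 features' worth of accumulated debt:
print(math.log(actual_weeks) / math.log(1.05))  # ≈ 77
```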
If that were the only reason tech debt were bad, I wouldn't worry about it so much. But I'd estimate at least 25% of the bugs in our software come about because of existing tech-debt - developers make reasonable assumptions about how things work when implementing new features (or fixing other bugs), but because of hidden tech debt, those assumptions end up not being true in significant corner cases. A classic example is when code is originally written for a very specific purpose, then later gets "somewhat" generalised in order to support a wider-range of cases - but lots of the original code that only made sense for the initial implementation is left in place, instead of properly separating out the "specific-case" code and the shared logic. Years later devs (not involved in the original development) reasonably assume it's intended to act as a general purpose component and use it as such, only to find it starts misbehaving because it was never properly refactored from its original implementation.
I feel like way too many people here haven't exactly experienced a long-term project. Spending an extra 20% now to get it right always pays off on anything longer than, say, a year or two. Even spending 50% or 100% sometimes does.
In my experience it tends to work like this: a new feature is added. The client tests it and it works great. The client has a few more sub-requests to make the feature excellent for them, which take a month or two to implement. In the meantime, huge amounts of data are building up in the database. Then even newer features get implemented and suddenly the customer's system explodes in a ball of fire. Turns out they're taking those new features and doing things you never expected with other existing features, and now your system isn't performant enough. Now the entire process of adding new features for other clients has stopped while we "redneck architect" the system to the point it doesn't fall over, all while the customer is pissed.
> Now think about it from the point of view of the customer that's paying you money for the feature. Do you think they would want a feature that has "architect for real" in the roadmap after they pay you money for it? Would you pay money for such software? [Yes, they don't need to know, but think about what if they did.]
At least I am this kind of person. :-) I also believe that on HN you would find some people who are also of this breed.
The problem rather is that many customers do not value this.
TBH it really depends on the industry and the nature of the product. Like, even within the same industry, consider Video Game A vs. B:
A) You are making a single-player narrative adventure. You don't expect many post-launch updates and have no DLC plans. You don't really care if the code is spaghetti as long as the core features work: maybe in a potential follow-up you clean up a little, but not much. The art assets are the biggest factor for reuse.
This sounds like the kind of product you work with and while it hurts on an engineering level, I can understand the need to focus more on getting something out than "doing it the right way".
But you always have to keep in mind as a company that employees look out for their best interests, too (especially in the current market). Some employees are fine providing a product, but others may have other aspirations. They may have eyes on another role, and saying that they were essentially a scripter isn't good for their career velocity. Being able to describe an elegant system they architected and how they improved the performance of a product will be what gets them through their career.
B) You are making a multi-player, networked arena battler. You have 2+ years of battle passes and quarterly character roster/weapon updates. You want the flexibility to provide new modes of battle based on player feedback, and to quickly rebalance battle systems so nothing ever feels too over/underpowered. Fast iteration times post-launch are a must to properly address this timeline.
This is where those suggestions from engineering should give more pause. If you need to move quickly 6+ months down the line, maybe those 2 months of refactoring are worth looking into. Granted, this isn't at all how real modern multiplayer video games work (just throw it all at QA, it's fine), but there should be more discipline in how the code is organized and documented. Especially if you potentially need to hire new talent to ramp up post-launch.
Perhaps you're a testament to why we actually want "managers who are also engineers" in these roles - for exactly cases like these, where you have the experience to know what "done" means.
I had a sobering realization the other day. Of all the companies that I have worked for over the years, the ones that had really solid engineering practices, the kind that made me proud, didn't make any money.
That doesn't mean that it's a general rule as I am sure there are plenty of exceptions, but it was still striking. In fact, I think I can say that the places that I have worked that were the most profitable were the ones where everybody agreed that the code base was an absolute mess.
Successful businesses can paper over bad code with money, because ultimately bad code just costs the business money.
A better way of thinking about this, I think, is that code quality expresses itself first and foremost in the product's long-term quality. And code quality, like all product quality, costs money.
But not all companies need good quality. Each business niche has a different ratio of quality level to product cost. A business will do best if the engineering quality matches the business needs, and vice versa.
A company that has spent money writing beautiful code that costs more to support than that business can generate, is in just as bad a place as a business that is growing massively and burning cash papering over a low-quality engineering department with all their new sales contract money.
Good code is basically an investment in the future velocity of the project.
And vice versa. But while "take a loan to get the first version out of the door" is widely understood to be a generally good decision, "invest in the code so it can pay dividends later" appears to be some dark incantation nobody seems to understand...
Your table says code does not matter, but it does for consumer software. That's why Friendster failed and Google succeeded. Figma was made possible by WebAssembly. A lot of businesses had no "business" at the beginning; just a good product (and the code is part of that).
A quantitative way to say that is that rightsizing¹ your quality for your requirements maximizes the chances of success.
But code quality is much more flexible than business quality, so large changes in code still have a smaller impact on success. Which absolutely doesn't mean that code is irrelevant; it just means that marketing is incredibly important.
1 - Which is a different word from "maximizing" for a reason.
Bad code can lose customers and eventually fail the business. I've seen it. The code was for paying customers, and once the system was buggy enough, they left. All driven by clueless, aggressive management who read too many blogs about MVPs etc.
How about "the companies with a very profitable business can afford to have a mess" and so they continue to have a mess because they are already surviving.
I understand where you are coming from, but I disagree with this. It is the engineering team that says whether or not something is "done", and if there is tech debt that must be addressed before a feature can be implemented, then it needs to be done as part of that feature.
In the majority of cases I have encountered, the tech debt is optional to address; the risks around it have varying implications depending on the case, and there is no blanket decision that covers them. In most of these cases, it is simply a matter of explaining to the business people the risks of taking shortcuts. In the end, not every tech debt needs to be paid immediately, and not every tech debt is bad. If the debt allows you to make money today instead of 2 months from now, you might very well take it and think about it later, if what you need is money.
I'd recommend talking to business people and making sure they understand the risks of these decisions, then they will be more in favor of addressing them, in some cases even more than you would do yourself.
A couple of things I've done. One is to build refactoring into the estimate for the improvements, especially since if you did take two months to refactor and then added improvements, you might have to refactor yet again.
Another thing I do is reserve X% of sprint stories for maintenance work such as refactoring so some improvements do get made. I don't think you ever fully pay down tech debt, you just have to do as much as you can given other constraints.
> I think companies not prone to this are ones whose product is a technical one, like cloud services, where the business really is the engineering and engineering isn't a means to an end.
I currently work for a company providing "cloud services" to other software shops. Part of what drew me here was that the ethos around how we get stuff done does encapsulate this. The engineering culture is pervasive. Here's to hoping we can maintain that as we grow.
This is a hypothesis, so take what I'm saying with a lot of salt, not just a grain.
I've been exploring a tool in Ruby called packwerk (made by Shopify) that helps introduce gradual boundaries within a system.
The beauty is that while you can see violations, you can still run your code.
Now, my thought process is that identifying the core high-level boundaries is very important when designing a system, so this type of tooling will go a long way.
So far, to enforce boundaries you either have nothing, which leads to a design that's just a mess, or you have something strong like Go packages, which can slow you down.
Design is super valuable, but there are just parts of the software that have little enough business value or will never be touched again, so nobody cares.
I think design being adjusted to business and team needs makes sense, but we might not have explored how to do it. Also, I've noticed it's not something every software developer is good at, which means not everybody can do it.
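To make the idea concrete, here's a minimal sketch of the same "advisory boundaries" approach in Python (packwerk itself is a Ruby gem; the package names and layout here are hypothetical). Violations get reported, but nothing stops the code from running:

```python
import ast
import pathlib

# Hypothetical boundary map: each top-level package and what it may import from.
ALLOWED = {
    "billing": {"billing", "shared"},
    "checkout": {"checkout", "billing", "shared"},
    "shared": {"shared"},
}

def find_violations(root: str) -> list[str]:
    violations = []
    root_path = pathlib.Path(root)
    for path in root_path.rglob("*.py"):
        owner = path.relative_to(root_path).parts[0]  # package the file lives in
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                modules = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                modules = [node.module or ""]
            else:
                continue
            for module in modules:
                top = module.split(".")[0]
                if top in ALLOWED and top not in ALLOWED.get(owner, {owner}):
                    violations.append(f"{path}: {owner} -> {top}")
    return violations

if __name__ == "__main__":
    # Report but don't fail: boundaries stay advisory while they're introduced.
    for violation in find_violations("src"):
        print("boundary violation:", violation)
```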
Even worse is when a product gets mature and a team IS given a blank slate to do a rewrite, but the company insists on repeating design mistakes despite being specifically apprised of them and how they cripple the product... simply because they fear "change."
This happens at tech-centric companies that sell software. The biggest, in fact. It's depressing.
They should laugh and tell you no. Interrogate why they need 2 additional months to build it the right way. Why didn't they build it the right way to begin with, adjusting approach at every step?
In my experience, engineers (I am one too) tend to reach for quick and easy more than correct and hard, and that choice is coming from them, not the business.
I just had a ticket that was supposed to take 2 hours but took 6. A process that was created 2 contractors ago was unknown to anyone, and I had to figure it out from scratch. My PM complained that I didn't complete it in the time estimated and that some of these hours couldn't be billed.
If that's the situation, why would anyone explore the "right way to begin with" instead of what's quickest? The right way is the way the business accounts for, not what creates the best quality product and experience. Story as old as time.
"Supposed to take 2 hours"... Who said it should take 2 hours? And an estimate is not a promise. You did the right thing. Sounds like your PM is terrible.
> In my experience, engineers (I am one too) tend to reach for quick and easy more than correct and hard, and that choice is coming from them, not the business
In my experience, devs do this because they're required to meet a deadline that is shorter than it should be.
Even if they get a say in the timeline, they might botch their initial estimate. I might guess 2 months to build a thing I've barely even had a chance to look at, let alone design, because I'm still trying to finish up my last project; and then when I get into it and find a cluster bomb waiting for me, they're already planning my next quarter. Or some things need to be rushed because they're holding up 5 other projects.
It is true; the art of building complex products is to avoid shutting any doors by choosing shortcuts or acquiring tech debt that can't be paid off. If there are features that can't be implemented because of limitations, it's probably already too late.
>In my experience, engineers (I am one too) tend to reach for quick and easy more than correct and hard, and that choice is coming from them, not the business.
You get what you pay for. I tend to suggest both the easy and the "correct" way for any given feature, with estimates for each. My lead will go for easy 99% of the time. I don't know who up the chain is at fault, but that is clearly the preference.
>It's their job to stand by the truth of the work. If management can't deal with that, I'd be looking for my next place to work.
pretty easy way to end up jumping jobs every 2-3 years. I haven't found that "good management" yet, 6 years and 3 jobs later. It may not even exist in my industry.
In consulting, it is either done the first time, or it isn't.
There isn't anyone taking care of the code like a tiny bonsai tree, unless there is a consulting contract to do exactly that.
One thing I learned moving from product development to consulting, is that developers that are too deep into engineering organizations usually don't realize how much money per hour they burn doing beautiful coding.
When each hour of coding has a price tag to it, one realizes there is a certain need to map how that value turns into the expected ROI from business side.
Which, for the majority of companies whose software isn't their main business but rather a means to an end (e.g. selling shoes), means the value of perfect code as it is preached at conferences isn't exactly what will help them sell more shoes.
> One thing I learned moving from product development to consulting, is that developers that are too deep into engineering organizations usually don't realize how much money per hour they burn doing beautiful coding
Did you also learn that not doing enough just pushes the cost onto the next person touching the code, compounding until someone has to rewrite it?
There is a certain level of engineering that's obviously code wankery with no actual profit, but that bar is nowhere near "the first, easiest-to-code answer to the problem that a developer can come up with".
Spending 100% or 200% extra on designing and coding something is probably not worth it most of the time, but spending 20% over the minimum is nearly always worth it.
Sure, if your project takes 6 months and you then throw it away, fair enough, but for anything that has ongoing maintenance and development I've seen waaay too many projects fall into the MVP trap.
Let's put it this way: there are consulting gigs where people get paid by the ticket, with a fixed hour budget, for example 3 tickets, and they move on to the next customer after they are done.
Naturally when customers are willing to pay for quality time, the approach is different.
> Once you get to the point of something working, the business is going to move on ahead without giving your team any time to go back to refine and improve.
Just recently on HN: 'Software engineers hate code'[0]
It does seem once every modular part of your system works, it's enshrined as a microservice and forgotten about. Tech debt happens this way.
> Bug fixes will be given some time, but telling management/business "we need 2 months to go clean up everything we've built over the last 6, since we now have an idea of how to structure this capability" is going to be met with a laugh and a no.
There are certainly many businesses where this is true, but there are also many businesses where it isn't true at all.
Unpopular opinion: There is not nearly enough design in most software development these days. Any sort of reasonable planning and writing and spec-ing tends to be derided as "waterfall"/Big Design Up Front and therefore inherently bad.
You can blame Agile/XP but at root I think it has more to do with many developers abhorring tasks other than just writing code. Documents? Meetings? Talking to future end users? Not fun!!
Then they get frustrated they don't get time to fix their tech debt. How about not going so into debt in the first place?
By all means do some prototyping but then throw it away after a week or two. Use it to test hypotheses you've already written down somewhere. You'll have more success if you're asking for two weeks to clean up tech debt instead of two months.
Ironically, Extreme Programming (the first real iteration of Agile) was big on getting requirements, creating what some call spikes (POCs or demos of concepts), and talking to stakeholders as key priorities.
The chunking of 2-week sprints is a natural result of this, where the idea is you get together a lot in the first few days of a sprint, plan some loose but defined stuff, iterate on it, come back for a day or two in the middle, re-iterate, and then show your work and plan the next cycle. Work should be introduced in such a way that it can be chunked into small pieces like this.
This is why TDD became highly coupled to Agile/XP by the original practitioners around Agile development. Tests are your first validation of an idea, a way to write code-as-documentation and feature validation before you actually implement the thing in the product line. (Side note: TDD has been distorted too, by both "zealots" and the opposition, and has largely lost its original intent and execution.)
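A minimal example of that original intent (Python; the discount feature is a hypothetical stand-in): the test is written first and doubles as executable documentation of what the feature is supposed to do.

```python
import unittest

def apply_discount(total: float, discount: float) -> float:
    # Cashier-entered discount, straight pass-through, clamped at the total.
    return max(total - discount, 0.0)

class TestApplyDiscount(unittest.TestCase):
    def test_discount_reduces_total(self):
        self.assertEqual(apply_discount(100.0, 15.0), 85.0)

    def test_discount_never_goes_negative(self):
        self.assertEqual(apply_discount(10.0, 25.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```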
The real problem, as I have observed it, is that everyone is still doing waterfall or some version of "waterfall lite" and doesn't actually observe the intentions behind Agile. It's become completely devoid of the meaning behind the original manifesto. Hardly any place follows it in its true form, I feel.
> You can blame Agile/XP but at root I think it has more to do with many developers abhorring tasks other than just writing code. Documents? Meetings? Talking to future end users? Not fun!!
In whatever BS performance review process your company has, are engineers going to be recognized for writing documents, holding meetings, and talking to future end users?
If yes, engineers will do it. As it stands, in large companies, management just wants engineers to code their life out.
In my experience even when you want to do those things more often than not it's management that sees it as a waste of time and wants you to get to coding. So many times we've had to "pivot" in projects because management couldn't be bothered to let us plan any architecture.
I think this is the failure of the project leaders to incorporate usability into the discussion from the outset. It takes a strong will to sit there and say "We can build you what you want, but without at least one pass on usability and optimization, it's going to be fat, slow, and the end users are going to hate it."
Documents are an artifact you can point to, and getting comments and references also looks good.
Managers don't care how many hours you sit in meetings, but if you magically show leadership by driving meetings and somehow document that, then it counts.
No one talks to end users though, not engineers anyway. We just build the dumb things the PMs want.
Completely agree, but you're implicitly assuming that developers have a lot of leeway in doing this design in the first place. Many non-technical managers seem to assume that developers should not have any say in the design and functionality of the product. I think this is a huge mistake -- I think software engineers are the best people to ultimately make decisions about how the product should work (with heavy input from stakeholders and users, of course) -- but the reality is that managers often don't trust them to do this and don't cede this power to them.
The problem with meetings and design docs is that not many people will take the time to really grasp the problem. The comments are usually only superficially about the stuff you did write and rarely point out anything you missed entirely. Everyone has their own stuff going on so unless your work overlaps with theirs, no one is interested.
Considering all the design churn I see in products large and small, I'd say the pendulum often swings in the other direction. Vanity changes appear with little or no thought to accessibility, often regressing for all except the slice of rich, young, clear-sighted, able-bodied people with fast internet and a recent Mac model.
The problem with the Planned Design vs Extreme Evolutionary Design is that BOTH are appropriate for different circumstances.
I've found a balance that works well, almost by accident or necessity of the situation.
At the outset with a new product, even if the team thinks they know what they are doing and what they intend, they really do not. This is (had better be) a new product/service, and no one really knows how it will interact with the customers/users, or how the components will interact. You are about to build a MVP, then core product. THIS is the time for an extreme programming type approach. Get things working ASAP, get feedback from real users and real running systems. Most importantly, the ONLY PLAN is to THROW AWAY THIS VERSION.
Then, once you have basic experience with your users, know what they prioritize, and experience with your system's behavior, NOW is the time to plan and design a system that will be scalable and maintainable. Depending on the project, situation, and how the throw-away version went, this may be after v0.9 or after v2.1, or maybe just after base or breakeven revenue or a funding round.
The cool thing is that in the early times when speed is most critical to survival, all worries about technical debt are eliminated — it'll just be thrown away soon, and the scalability and maintainability issues are punted until you really have enough information to answer them well.
I have never had the experience of developing a product from MVP to mass market. Is ‘Throw Away then Strategize/Plan’ utilized with any regularity? Would it follow that the initial team who scraps together the MVP would be different than the Strategize/Plan team that might be hired specifically because they have the experience/background for building large scale systems (assuming expanded funding comes from market validation via the MVP)?
> Is ‘Throw Away then Strategize/Plan’ utilized with any regularity
Yes. But I'd frame it a different way. The initial implementation is the source of empirical data that you need to formulate a viable plan. You're not building something to "throw it away". You're building something so that all subsequent decisions can be made on a solid foundation.
To be concrete, I've seen teams faced with a technical decision who a) argue about it in meetings for weeks, bringing only opinions and personal experiences to the table vs. b) set up a proof of concept for all options and run benchmarks which can be used to make informed decisions. (a) takes longer and produces an inferior product - I've never seen an exception.
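For instance, a throwaway benchmark like this sketch (Python; the two options are hypothetical stand-ins for whatever competing designs are on the table) settles in minutes what the meetings argue about for weeks:

```python
import timeit

def option_a(data):
    return sorted(data)                        # candidate design A

def option_b(data):
    return list(dict.fromkeys(sorted(data)))   # candidate design B

data = list(range(100_000, 0, -1))             # representative-ish input

for name, fn in [("option_a", option_a), ("option_b", option_b)]:
    secs = timeit.timeit(lambda: fn(data), number=10)
    print(f"{name}: {secs:.3f}s for 10 runs")
```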
>> argue about it in meetings for weeks, bringing only opinions
YUP:
One test is worth a thousand opinions.
(and yes, the entire point of the throw-away version is to test every aspect of the MVP++. And sure, you may keep some chunks of it, but that is sort of based on luck. A big part of the attitude on the initial version is that no one needs to think about technical debt here - it's all a big experiment, done to generate information and data to build a solid foundation.)
It takes some strong organisational discipline to really maintain that throw away mindset - I suppose that's where it helps to make an explicit goal to throw the initial version away.
Another aspect is that (in my experience / opinion) if you're doing an MVP then it really needs to be as minimal as possible - big enough to give you evidence for your future plans but not comparable to what you eventually hope to build. Otherwise it's hard to adapt or to throw away.
The biggest thing I've found to help in any design work is making sure the engineers really understand the problem they're solving, so they can judge independently whether the plans they're making are complementary to it.
> making sure the engineers really understand the problem they're solving
Yes, absolutely. And how would software engineers gain that understanding?
Experience is the key here. Avoiding an initial implementation because you're afraid of "wasting work" is counterproductive - it cuts off the primary mechanism by which developers gain understanding! That's like saying athletes shouldn't practice because it wastes their energy for the big game.
One small note on terminology that might help: Using "MVP" implies that what you're developing is a product. It's more accurate to call this "POC" or proof-of-concept since that's exactly what it does - with that data in place, the product and engineering plans can proceed faster and more effectively knowing they have a firm tether to reality.
I arrived at the same wisdom, e.g. the "Throw Away then Strategize/Plan" process, but... how the heck do you manage to sell/explain this to people at the same or higher levels?
Imo lots of people are very disgusted by this, mainly because (a) the conservatives/waterfall-heads are horrified by the idea of launching something not thoroughly engineered, while (b) the evolutionary-design folks never want a clean rewrite from scratch; they'll cling to that "throw-away version" and try to "evolutionarily" refactor it gradually into what they now know is needed (and this always fails).
Good point - to some degree I got lucky in the organizational area; the situation where it really proved itself was where I was one of four co-founders & I was in the CTO chair.
I can see how it could degrade in the ways you mentioned (and more!) in more mature orgs. Which, to me makes it an even better idea there, but harder to politically navigate.
The Agile shop I currently work for treats UML as mandatory before any code is written. Then we proceed to ignore all designs until the next time someone has to look at the design doc again.
Does anyone else find themselves performing a UML-like design phase, but with code?
I like to create classes/methods with little/no/mock implementation to see how it will fit together. If there's something where I'm not sure how it works (third party API/lib I've not used before) I'll get more granular with it.
There's tooling to produce diagrams from code if someone really wants it. But either way my design phase is now usable code.
That being said, I still have to admit this rarely survives contact with actual implementation. It just feels better.
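Something like this sketch is what I mean (Python; all names hypothetical): the shapes exist and fit together, but nothing is implemented yet.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    total: float

class PaymentGateway:
    def charge(self, order: Order) -> bool:
        raise NotImplementedError  # third-party API, details to be filled in

class OrderService:
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def place(self, order: Order) -> bool:
        # Orchestration shape only; validation/persistence come later.
        return self.gateway.charge(order)
```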
Yeah this is sorely needed -- I did wonder why Martin Fowler was espousing this pretty out-of-touch viewpoint in 2023 when design is a major part of feature completion.
On average, I'd say we have much better design and UX now compared to 2004!
It is interesting to read perspectives from a past era to understand how we got to today’s status quo.
In my career I’ve encountered a lot of projects that have been designed to death; They start with good intentions to do things “right” and then hire a lot of people who do a lot of designs and documents and meetings and committees. Two years later, nothing is done because everyone is too busy designing and re-designing.
At the core of it all is the idea that if you’re not following all of these formal techniques and processes then you can’t possibly deliver anything good.
I think it has too much regimentation to be a good communication device, or even a good design specification. Labeled boxes, arrows, and sometimes color are more than sufficient for diagramming how a design works. The small details in a design doc are not going to stay the same when building the thing, so writing them down is just a waste of time.
(Personally, I’m a fan of doing less design, less documentation, and making the code as obvious as possible. It’s a lot of friction to have multiple sources of truth that have to be kept in correspondence with each other.)
UML was, at least partly, advertised as something that would make software developers obsolete. You just needed domain experts to wire up the right class diagrams, sequence diagrams, activity diagrams, etc., and it would output code that met the specifications. No developers needed!
Imagine you want to have a conversation with a colleague about how to approach a new project, but some smart-ass is constantly pestering you, forcing you to use a heavily formalized, technical language that requires you to put every idea into a well defined box. Caveats, questions and loosely defined entities, processes or relationships cannot be expressed.
Or "(2000)", actually, since it seems to all be a 2000 keynote which has been digitized. Probably a good time for a retrospective - 23 years ought to provide a lot of hindsight & commentary.
I'm so glad the industry has largely moved past this sort of dogmatic 'purism.' Conversations like this still happen of course, but they don't seem to happen with the frequency or volume that they did 20 years ago when this article was written.
I think it's more of an evolutionary stage in a developer's career.
Relatively inexperienced developers tend to very myopically zoom in on particular qualities that are supposedly good, and explain everything that is wrong with software and software development with the lack of this quality; and everything that is right with the presence of this quality.
Of course this phase can't last too many years, as such conceptualizations don't tend to survive contact with the realities of software development. They'll inevitably encounter undeniably good code written with complete irreverence for their holiest of ideals, and they'll themselves write bad code despite kneeling at the holy altar of clean code or memory safety or pure functions or whatever. Purism is ultimately a completely untenable position.
YAGNI should be applied to things that you're adding. Features, database columns, interface methods, etc. Everything being added must have a need, and that need needs to be immediate and real.
Refactoring is inherently not about adding anything. If you're adding things as part of refactoring, presumably it's in the service of removing even more things, such as in consolidating multiple similar abstractions into one. If you find yourself adding lots of new abstractions as part of refactoring, YAGNI applies. But "YAGNI applies" doesn't mean "don't do it", it means "make sure you actually do need it, right now, and not in some hypothetical future."
There's a way to plan ahead for likely futures without adding features, and in my mind that doesn't violate YAGNI. You're not planning for one specific hypothetical future, you're trying to make sure your application is robust in the face of likely changes. You don't actually perform those changes, but you do ask yourself how those changes would work in your current application.
I use the analogy of creating a model bridge for a train set. YAGNI would be looking at the bridge and saying "yeah but what about tanks? What if you wanted to land an airplane on that bridge?" -- until you're literally doing it, just stop. It's for model trains; the end. But going over to the bridge and jiggling it, shaking it, seeing if it makes a weird rattle, making sure it's level, pushing on it to see how it bends, and fixing it if any of those things happen -- that's not really YAGNI. That's making sure it's robust and will last into the future, however you decide to use it.
Right, if you're having to choose between two models that meet your current needs equally well, and are about the same cost/effort, then asking that question is great. If one of the models is easily extensible to heavier loads, and one is not, absolutely you should pick the extensible one. Or if you're thinking you've got a great design, asking how you'd extend it to heavier loads can validate or challenge that. Those are great considerations and asserting "YAGNI!" here would be a mistake.
To me, whether you use waterfall or agile or something in between, or whether your managers give you full independence or measure every little metric, or whether you choose to do a lot of design up front or build a MVP right away, ultimately has little impact on the future of your codebase.
Nothing replaces experience in the domain. For an example, if your domain requires N-addresses per customer and you build the system allowing only 1 address across all your systems, you will probably be stuck in a refactor/rewrite hell-hole down the line. I don't think any amount of planning or lack of planning or waterfall or agile could guarantee you figuring this out, but someone that has worked in the field for 10 years could tell you in 5 seconds.
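To illustrate with a sketch (Python dataclasses; the field names are hypothetical): the N-addresses decision is a one-line difference up front and a rewrite later.

```python
from dataclasses import dataclass, field

@dataclass
class Address:
    street: str
    city: str
    kind: str = "shipping"    # e.g., shipping vs. billing

@dataclass
class CustomerV1:
    name: str
    address: Address          # exactly one address, baked into every system

@dataclass
class CustomerV2:             # what the 10-year domain veteran asks for on day one
    name: str
    addresses: list[Address] = field(default_factory=list)
```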
Definitely agree, and I think the place that I’ve seen most errors manifest is often the data model (including your example). APIs can always have a v2, classes can be extended, front ends can be reworked… but the data model is essentially the last line where abstractions end. It will forever codify the burden of representing hopeful future states while needing to account for every past state. Database migrations can be costly, risky, and often difficult to revert.
I might also see the world this way right now because I’m currently deep in data model/migration hell with my current project due to my formerly (and possibly still presently) incomplete understanding of how the system and use cases would evolve over time.
UML != design. People simply don't want to be translating across various definition languages and just write code instead. The codebase itself, if organized well, can serve as a design definition just as well as UML does.
> … [Extreme Programming (XP), an early “agile” method] involves a lot of design, but does it in a different way than established software processes. XP has rejuvenated the notion of evolutionary design with practices that allow evolution to become a viable design strategy. It also provides new challenges and skills as designers need to learn how to do a simple design, how to use refactoring to keep a design clean, and how to use patterns in an evolutionary style.
(Emphasis mine)
So, have we met the challenge of:
* implementing a simple design first, then
* iterating (“refactoring”), the original design as more information is uncovered, then
* ensuring that the design is implemented using well-worn solutions (“patterns”)
?
I think that there are social forces that work against this ideal, specifically:
* Software projects are funded as if they’ll be “done” on a certain date, after which improving the implementation will be considered too risky/not worth it.
* Developers like to code, and designing (reading other people’s code, negotiating improvements etc.) is not what they want to do. Best to jump to a greenfield project that isn’t boring “maintenance” work.
This results in a lot of half-designed, half-done, “works well enough just don’t touch it” software.
Most software out there follows the "Big Ball of Mud" [1] essay almost verbatim. My (anecdotal) rule of thumb is that after around 10 years of this kind of development, the inefficiency grows so big that development stalls. That happens when the ratio of new features to new breakages approaches one.
Then the company adds more and more developers to the same codebase, which will not help because their productivity will be super slow. Instead they should be working on rebuilding the product.
Of course, at some point doing a huge design and making everything technically perfect from the beginning is wasteful. The main mistake to me, however, is not realizing the long-term costs and that doing it "right" is not so expensive.
One does not even need to design everything, just have enough foresight to not paint oneself into a corner design-wise.
There is an important topic missing from the article and conversation: cost and schedule. Efficiency is the real impetus for good engineering design.
The author mentions “other engineering disciplines” use design to ensure proper functionality.
But design engineering is about spending the least amount of money and time to meet the specified requirements, including production.
For example, bridges are designed and built using just the right amount of strength and durability (with a safety factor) with the least amount of effort, and not much more.
Software should be designed to meet the bare minimum requirements to be built in the most cost and schedule efficient fashion, and not much more.
Standards, languages, API’s, IDE’s, operating systems, databases, etc. should all be chosen to meet the minimum requirements with:
1. the minimum amount of time;
2. the minimum amount of cost; and
3. the maximum amount of flexibility and/or support-ability for future adaptations.
…and not much more.
This is the goal of good software engineering design.
Most modern software consists of E-type systems, and by that very nature it [d]evolves into API specifications even if your team intended something completely different. Reasonable frameworks tend to accelerate this common trend by starting off with a well-defined visitor and/or facade pattern.
The current key design principle is to colocate problem domains in confined modular partitions with those responsible. If the team leads don't do this, then the infrastructure rots with fragile products in less than 18 months.
The law states that it works in all cases, but the justification only refers to the publisher's confidence that the answer is no, not that a headline can magically change reality. Whether "design is dead" is a lot more nuanced than a simple yes or no.