A free feature in software is like a free lunch: there's no such thing. The value of something is directly related to how much it does what you need, and how much it doesn't try to do stuff you don't need. Most "free" features are things that programmers had to add for something else, or that are mostly implemented because of something else, so they figure, "hey, it's free" and just release it -- which makes it a distraction, a support cost, a potential bug magnet, and something at least a few customers will learn to use even when there's a better way to do it, thus giving you a legacy nightmare.
The idea of a "free feature" is proof that engineers shouldn't pretend to be product managers (and vice versa).
The first rule of software development is, "there are NO free features".
Every feature added increases complexity, documentation costs, potential bugs, support costs, testing time, training time, and so on. All of those "don't needs" hurt a product. If only 1% of your users are using a feature, then it may be a detriment to the other 99%, even if they don't know it.
To be a good engineer (or to engineer a good product) you need to know what to leave out. In fact, that is what good engineering and good marketing are: not trying to do it all, but knowing what to throw out and what to ship now. As Steve Jobs said, 'real artists ship'. A lot of that is because real artists know what not to try to ship.
Windows, and most products Microsoft puts out, are very heavy with features you don't need. Most of the problems with setting up and supporting Windows and Microsoft applications can be traced directly to that same problem of unneeded features and configurability (that you don't want). Think how much better the system would be if it did only what you wanted and had only one switch: 'work like I want'. The same applies to UNIX and UNIX applications, and to most applications and platforms for that matter. This is the most significant problem in software design today.
So the toughest part of any product is deciding which group you are designing for.
When you design for the "edge cases" or the exceptions, you'll have a product that only those edge users could love. If you can't pick a market and design for it, then you make a nightmare that many markets may consider, but none will love. That means the first person who comes along and makes something better for your market (more focused and targeted to them) stands a chance of eating your lunch.
A perfect example to me is X-Windows. UNIX needed a way to do windowing and user interface, so programmers started working on X-Windows. But they kept thinking about what they might need, and what it could have, and added layers on layers so you could swap out and replace anything, and it would be incredibly versatile; in the end they built a nightmare. Xlib was the basics, but it needs the Xt Intrinsics on top. Then you need Motif on top of that to define how things will actually behave. Then many windowing environments are built on that (like CDE, KDE, GNOME, and so on). It was a bottom-up approach to design that added more complexity than value, took years (decades) to add enough layers to make it usable from the top, and none of the geeks ever asked, "What do users really want?" In the end, X does almost nothing well, because it wastes so much effort trying to do everything and be all things to all people.
You also have to think of how much time, money, and opportunity was wasted getting all that done; and it wasn't useful until it was all done. In the end, what do they have? Something that takes longer to really learn to program well because there are so many layers, that is big and bloated, and that few really like or use (if given the opportunity to use anything else). You could in theory replace anything, anywhere; and it has a ton of features that no one should use, like application-level themes (themes should be consistent across the environment, not changing on a per-application basis). So it has taken 20 years to deliver something that is inferior (in usability and programmability) to everything else. Not a big win.
One of the biggest problems in software design today is this over-configuration. All programmers are bad about it, and UNIX programmers are some of the worst. You only need to look at BIND or sendmail (or even Apache) and their configuration files to understand how out of control this 'be all things to all people' stuff has gotten. These packages work well, and I use them; but it is important to remember that people use them in spite of the nightmare of configuring them, not because of it. They hugely increase the cost of usage. You could easily go into those programs, rip out about 90% of the configurability, and end up with something much easier to use.
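A hypothetical sketch of the point (this is not a real BIND or sendmail API; the function names and knobs are invented for illustration): compare an interface that punts every decision to the caller with one where the decisions were made once, for the target user.

```python
# Hypothetical example -- not a real package API, just what "rip out 90%
# of the configurability" looks like at the interface level.

def connect_configurable(host, port=25, use_tls=False, retries=3,
                         backoff=2.0, timeout=30, ipv6=False,
                         helo_name=None, queue_on_fail=True):
    """The 'be all things to all people' interface: every caller must
    understand nine knobs before they can use it safely."""
    return {"host": host, "port": port, "use_tls": use_tls,
            "retries": retries, "timeout": timeout}

def connect(host):
    """The focused interface: the tradeoffs were decided once, correctly,
    for the target user, who now has exactly one thing to specify."""
    return connect_configurable(host, use_tls=True)

print(connect("mail.example.com"))
```

The second function is strictly less powerful, and that is the feature: nothing to document, nothing to misconfigure, nothing to support.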
All these packages are popular, but they probably wouldn't be used except for two reasons: they are cheap (see: free), and once people sink all that time into learning how to use them, they are afraid, or can't afford, to learn (or try) anything else. Fortunately, ordinary users don't have to use them, or they would have died in the marketplace; only programmers and network administrators do, and they will spend far more time learning how to set things up.
Give me what I want
The hole in the philosophy is not looking at the tradeoffs I want, but instead assuming that a program can be all things to all people. I don't care if it is good for other people or their solutions: I need it to work for my requirements.
When you try to be all things to all people, you increase the complexity and make it worse for my requirements. Eventually you have to invent languages, with syntaxes and rules, just to communicate with those applications and tell them what you want. And eventually you have to be a programmer just to use a program. This has crept into many programs, like Excel, which practically requires a few languages to use, or Word with its macros, VBA scripts, and so on.
In fact, that is my problem with UNIX itself. It isn't written for users, or to solve user problems; it is the most user-hostile system I know of. It is written for, and great for, programmers only. You really have to program (at least the shell) to get it to do what you want. Even the shell (the command line itself) is a program; it's just a program where I'm issuing commands one line at a time. Great for me as a coder: sucks to be my grandma trying to use it. This is more of a problem when some people can't tell the difference, or don't know who their target audience is.
Configuration options are often a coward's way out
An engineer (or marketing person) who doesn't want to do all the studying and analysis to figure out which way is best or "right" (or better for most things) just makes it an option. Then they don't have to think about it; it works for both. Push the engineering and marketing problem onto the users or support people, and dodge the responsibility. Wimps! It is much easier than doing your job, reasoning out a position, and taking a stance about what is right. Instead they make the decision of no-decision and just try to do it all, ignoring the cost of doing that.
If that wasn't bad enough, for some engineers dodging through configurability becomes so ingrained that they start to attack the others who do make a stand (and choose something). So the 'do everything' folks attack the 'do something well' folks, because their solution isn't trying to be all things to all people. Instead of recognizing how good the app is at doing what it needs to do for its target market, they compare it against all the things it isn't supposed to do anyway.
This is where I believe some of the Mac hatred came from. It was easy, focused, and very usable. Some hated that because it represented the antithesis of everything the 'configurability above sanity' crowd stood for. They recognized that threat, and hated it.
All systems have good and bad points. And there is some good in being able to change something -- but many refuse to admit the downsides and costs associated with configurability.
There are ways to add "scaled" interfaces that hide as much of the complexity as possible, and so on. And some problems just are complex and need that complexity. I'm not talking about that; I'm talking about refusing to take out the stuff you can, because you don't want to have to think.
The point is that every setting or option is not free; each has a cost. Imagine you could reconfigure a car so that the pedals could be reversed. Some might like that -- but the costs are too high. That inconsistency means everyone must know that the pedals might be reversed on any car they get into. They would also need to know how to switch them back. And there may be no logical reason for reversing the pedals at all. In computers, though, that extra feature sells, and 'only an idiot would design it without that possibility', or so the anti-logic goes. It doesn't matter that drivers will make mistakes, since their reflexes can be trained only one way. It doesn't matter if those mistakes cost lives (or time and money). Many people want their bad configurability, and they don't care who they kill to get it. Only the reasonable can reason out that since it has so many costs, and so few returns, less choice is often better.
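Some back-of-the-envelope arithmetic makes the cost concrete (the numbers are illustrative, not from any real product): independent options multiply rather than add, so every on/off switch doubles the number of states that must be tested, documented, and supported.

```python
def configurations(n_boolean_options):
    # Each independent on/off option doubles the state space a tester
    # would have to cover to claim the product works in every state.
    return 2 ** n_boolean_options

# Ten innocent little checkboxes already mean over a thousand distinct states:
print(configurations(10))   # 1024
# Twenty means more than a million:
print(configurations(20))   # 1048576
```

Which is why nobody actually tests every combination, and why the untested ones are where the bugs live.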