INGRES was a major learning experience for me, because I was a product manager and totally focused on creating the absolute best product in the world. You know what we found? It didn't matter. Oracle kicked our asses because they focused on positioning, marketing and selling as opposed to building the best product.
There's some interesting information further down in the piece:
Oracle had its problems scaling, like any company does. But what they did really well was define a set of clear measurable corporate objectives, and push those all the way down on a quarterly basis…measuring quarterly, and making sure that everyone understood their role in participating towards whatever that year's and quarter's target was. That's what really enables an organization to run at 100 miles an hour without having any kind of low-level task management.
Clear sales objectives. Focus. Getting people to internalize company objectives:
David Marquet [...] says you can’t empower people by decree. While you might be able to ask someone to make a decision for themselves, that’s not true empowerment (or true leadership). Why? Because you’re still making the decision to ask them to make the decision. That means they can’t move, or think, or act without you. The way to empower people is by creating an environment where they naturally start making decisions for themselves. [...] Leaving space, creating trust, and having the full faith that someone else will rise to the challenge themselves.
Back in the mid-1990s, I did a lot of web work for traditional media. That often meant figuring out what the client was already doing on the web, and how it was going, so I’d find the techies in the company, and ask them what they were doing, and how it was going. Then I’d tell management what I’d learned. This always struck me as a waste of my time and their money; I was like an overpaid bike messenger, moving information from one part of the firm to another. I didn’t understand the job I was doing until one meeting at a magazine company.
The thing that made this meeting unusual was that one of their programmers had been invited to attend, so management could outline their web strategy to him. After the executives thanked me for explaining what I’d learned from log files given me by their own employees just days before, the programmer leaned forward and said “You know, we have all that information downstairs, but nobody’s ever asked us for it.”
I remember thinking “Oh, finally!” I figured the executives would be relieved this information was in-house, delighted that their own people were on it, maybe even mad at me for charging an exorbitant markup on local knowledge. Then I saw the look on their faces as they considered the programmer’s offer. The look wasn’t delight, or even relief, but contempt. The situation suddenly came clear: I was getting paid to save management from the distasteful act of listening to their own employees.
Approaching a problem with a design thinking mindset, however, certainly takes into account what a customer says, but simply as one input among many. In this approach, observation of the way people really live, development of a deep understanding of the real problems they have, and an appreciation of the “hacks” they devise to overcome them can deliver an understanding of prospective customers’ needs more accurate than what any of those prospective customers could ever articulate on their own.
And then, from that understanding, an entirely new, highly differentiated product can be delivered that surprises and delights. From a business perspective, the emotion and attachment said product inspires breaks down price sensitivity and builds brand attachment, and inspires the sort of viral marketing that can’t be bought.
What failed wasn’t the vision but the timing and the absence of a refinement process. Technologies which succeed commercially are not “moonshots.” They come from a grinding, laborious process of iteration and discovery long after the technology is invented.
The technology is one part of the problem to be solved, the other is how to get people to use it. And that problem is rooted in understanding the jobs people have to get done and how the technology can be used defensibly. That’s where the rub is. An unused technology is a tragic failure.
Companies that set up so-called "skunk works" operations tend to forget that the original Skunk Works had customers and worked closely with those customers. When it operated as a mere technology incubator it tended to produce duds. Its major successes were produced for the CIA and the Air Force.
Apple is really the moonshot idea turned on its head. It has the "skunk works" at the top and treats traditional operations as part of the product being created. [...] Product development should be "C-level", including prototyping, and the process of turning out millions of those products, marketing them and selling them, etc, should be assumed. It should also be headless and hence killable once the market for the product dries up, regardless of how long that takes.
At McKinsey, we were taught three approaches to making decisions under uncertainty:
- Day One Hypothesis
- Directionally Right, Same Order of Magnitude
- What Do You Have to Believe?
These three tools have been immensely helpful to me in my own career, especially when I’m asked to make a decision about a future business opportunity where there are some key unknowns.
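The third tool, "What Do You Have to Believe?", lends itself to a quick back-of-envelope check: start from the outcome the opportunity needs to deliver, solve for the assumption that outcome implies, and then ask whether that assumption is plausible. A minimal sketch of that logic (the function name and all figures here are hypothetical illustrations, not from the source):

```python
# Hypothetical "What Do You Have to Believe?" back-of-envelope check.
# All figures are made up for illustration.

def required_market_share(target_revenue, market_size):
    """Market share you must believe you can win to hit the revenue target."""
    return target_revenue / market_size

# Say the business case only pays off at $50M/year in revenue,
# and the total addressable market is $400M/year.
share = required_market_share(50e6, 400e6)
print(f"You have to believe you can win {share:.1%} of the market.")
# prints "You have to believe you can win 12.5% of the market."
```

The decision then turns on a judgment call the numbers make explicit: is 12.5% share a reasonable belief for this business, or not?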
If you think about what you are trying to accomplish in a meeting with someone you are managing and you plot the following:
- On the x axis - whether you clearly communicated the issue to the person
- On the y axis - whether they walk out of the meeting happy or mad at you
Dick's point is you want to optimize for the x axis, clear and crisp communication, and not worry too much about the y axis.
The important dynamic here is that a combination of very cheap off-the-shelf chips and free off-the-shelf software means that Android/ARM has become a new de facto platform for any piece of smart connected electronics. It might have a screen and it might connect to the internet, but it’s really a little computer doing something useful and specialised, and it probably has nothing to do with Google.
Now, stop thinking about it as a phone. How do the economics of product design and consumer electronics change when you can deliver a real computer running a real Unix operating system with an internet connection and a colour touch screen for $35? How about when that price falls further? Today, anyone who can make a pocket calculator can make something like this, and for not far off the same cost. The cost of putting a real computer with an internet connection into a product is collapsing. What does that set of economics enable?
One of the great descriptions of what real testing looks like comes from Valve Software, in a piece detailing the making of its game Half-Life. After designing a game that was only sort of good, the team at Valve revamped its process, including constant testing:
This [testing] was also a sure way to settle any design arguments. It became obvious that any personal opinion you had given really didn’t mean anything, at least not until the next test. Just because you were sure something was going to be fun didn’t make it so; the testers could still show up and demonstrate just how wrong you really were.
“Any personal opinion you had given really didn’t mean anything.” So it is in the government; an insistence that something must work is worthless if it actually doesn’t.
An effective test is an exercise in humility; it’s only useful in a culture where desirability is not confused with likelihood. For a test to change things, everyone has to understand that their opinion, and their boss’s opinion, matters less than what actually works and what doesn’t. (An organization that isn’t learning from its users has decided it doesn’t want to learn from its users.)
This is true in all of software development. I see it on a regular basis on the XWiki.org dev lists: heated arguments about this or that feature between committers and stakeholders, without much actual data to base our opinions on - and I'm as guilty of this as anyone else.
Given that this can be an issue even when all participants are of good faith, it's easy to understand why, in the political / government realm, things have the potential to go a lot more wrong.
Among any set of groups, almost all the groups think their own group is delivering more and the other groups are delivering less. In a company with many groups, managers generally believe their group as a whole is performing better by the relevant measures and thus should not be held to the same distribution, or should have a larger budget. Groups tend to believe their work is harder, more strategic, or just more valuable, while underestimating the contributions of other groups.
A must-read for anyone who'll have to tweak or implement a performance review system in the context of their work.