I really ought to read an OKR book, because the telephone-game version I hear about seems problematic.
For example, Austin's Measuring and Managing Performance in Organizations[0] gives a helpful 3-party model for understanding how simplistic measurement-by-numbers goes awry. He starts with a Principal-Agent pair and then adds a Customer as the third party; the net effect is that as the Principal becomes more and more energetic in enforcing a numerical management scheme, the Customer is at first better served and then served much worse.
As a side effect he recreates, or at least overlaps with, the "Equal Compensation Principle" (described in Milgrom & Roberts' Economics, Organization and Management[1]). Put briefly: give a rational agent more than one thing to do, and they will do only the one that pays them best. To avoid this you need perfectly equal compensation across their alternatives, but that's flawed too, because you rarely want an agent to divide their time into exactly equal shares.
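To make that inverted-U concrete, here is a toy simulation in the spirit of Austin's argument; it is emphatically not his formalisation, and every functional form and constant in it is my own assumption. Measurement pressure induces more total effort, but it also skews that effort toward the measured dimension, while the Customer values both dimensions:

```python
import numpy as np

def customer_value(w, base_effort=1.0, induced=0.8, base_share=0.5, skew=0.9):
    """Toy model: w is the Principal's incentive intensity on the measured metric."""
    total = base_effort + induced * w           # measurement motivates more effort...
    share = min(1.0, base_share + skew * w)     # ...but skews it toward what is measured
    e_measured = share * total
    e_unmeasured = (1.0 - share) * total
    # The Customer values both dimensions, with diminishing returns in each.
    return np.sqrt(e_measured) + np.sqrt(e_unmeasured)

for w in np.linspace(0.0, 1.0, 11):
    print(f"intensity {w:.1f} -> customer value {customer_value(w):.3f}")
```

With these (made-up) numbers, customer value rises for modest intensities and then falls below the no-incentive baseline once the unmeasured work collapses, which is exactly the shape of Austin's story.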
Then there's the annoyance that most goals set are just made the hell up. Just yanked out from an unwilling fundament. Which means you're not planning, you're not objective, you're not creating comparative measurement. It's a lottery ticket with delusions of grandeur. In Wheeler & Chambers' Understanding Statistical Process Control[2], the authors emphasise that you cannot improve a process that you have not first measured and then stabilised. If you don't have a baseline, you can't measure changes. If it's not a stable process, you can't tell whether changes are meaningful or just noise (a small sketch of this follows the quote). As they put it, more pithily:
> This is why it is futile to try and set a goal on an unstable process -- one cannot know what it can do. Likewise it is futile to set a goal for a stable process -- it is already doing all that it can do! The setting of goals by managers is usually a way of passing the buck when they don't know how to change things.
That last sentence summarises pretty much how I feel about my strawperson impressions of OKRs.
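For concreteness, here is that baseline-and-stability idea as an XmR (individuals and moving range) chart, the process behaviour chart Wheeler & Chambers build everything on. The natural process limits are the mean ± 2.66 × the average moving range; the weekly numbers below are invented purely for illustration.

```python
import numpy as np

# Hypothetical weekly output figures, invented for illustration.
weekly_output = np.array([42, 47, 44, 51, 43, 46, 49, 45, 44, 48], dtype=float)

centre = weekly_output.mean()                  # the baseline
moving_range = np.abs(np.diff(weekly_output))  # point-to-point variation
mr_bar = moving_range.mean()

# Wheeler's natural process limits for an XmR chart: centre +/- 2.66 * mR-bar.
upper = centre + 2.66 * mr_bar
lower = centre - 2.66 * mr_bar

print(f"centre line {centre:.1f}, natural limits [{lower:.1f}, {upper:.1f}]")
```

A goal set inside those limits asks for nothing the process isn't already delivering; a goal outside them asks for something this process cannot do without being changed. Either way, announcing the goal changes nothing.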
> Put briefly: give a rational agent more than one thing to do, and they will do only the one that pays them best. To avoid this you need perfectly equal compensation across their alternatives, but that's flawed too, because you rarely want an agent to divide their time into exactly equal shares.
I would argue the system is working as intended. Contrary to your assertions, you don't want employees spreading effort like peanut butter; you want to focus them on executing one or two things and getting value out of them quickly. Instead of launching 12 features a year from now, I'd rather launch one feature a month.
> you cannot improve a process that you have not first measured and then stabilised.
There is, of course, a certain amount of reasoning under uncertainty involved. One of the lessons many folks learn from A/B testing and OKRs is just how hard it is to actually make a difference, and folks need practice calibrating.
> Contrary to your assertions, you don't want employees spreading effort like peanut butter; you want to focus them on executing one or two things and getting value out of them quickly.
That's not quite what I was driving at. Optimisation happens on the measurement, and measurement is only necessary because the Agent is not perfectly observable: there is an information asymmetry between Principal and Agent.
That's why Austin's model is so helpful. There are many things that must be done in order to best satisfy the Customer. Some of those are measurable, some are less measurable. But a rational Agent looks at any basket of measurements and will optimise for one of them: the one that pays best.
It's not enough to say "just this one feature and no peanut butter please". You have to define what the one feature is. You have to provide an exact measure for it. Agents can then either optimise honestly, or go further and optimise fraudulently. If they optimise honestly, the Principal realises that they actually need a basket of values to be optimised. But then they need to apply equal compensation, because the Agent will simply ignore any measurement that doesn't maximise their payoff.
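Here is that corner solution in miniature; the task names and hourly payout rates are invented, the point is only where a rational Agent's hours go:

```python
def allocate(pay_per_hour: dict[str, float], hours: float = 40.0) -> dict[str, float]:
    # Pay is linear in hours, so the optimum is a corner solution:
    # every hour goes to the single best-paying task.
    best = max(pay_per_hour, key=pay_per_hour.get)
    return {task: (hours if task == best else 0.0) for task in pay_per_hour}

basket = {"ship features": 120.0, "fix bugs": 90.0, "write docs": 60.0}
print(allocate(basket))
# {'ship features': 40.0, 'fix bugs': 0.0, 'write docs': 0.0}
```

Only if every rate in the basket is exactly equal does any other split survive, and a one-cent difference sends the other tasks straight back to zero hours.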
I believe measurement is useful. But I also believe that connecting it to even the whiff of reward or punishment is beyond merely futile and well into being destructive.
> I really ought to read an OKR book, because the telephone-game version I hear about seems problematic.
I've read John Doerr's Measure What Matters OKR book and personally used OKRs for a few quarters. Google's re:Work site about OKRs is a short and adequate summary.
[0] https://www.amazon.com/Measuring-Managing-Performance-Organi...
[1] https://www.amazon.com/Economics-Organization-Management-Pau...
[2] https://www.amazon.com/Understanding-Statistical-Process-Con..., though I prefer Montgomery's Introduction to Statistical Quality Control as a much broader introduction with less of an old-man-yells-at-cloud vibe -- https://www.amazon.com/Introduction-Statistical-Quality-Cont...