Crises are “come as you are” events, not springboards, simulations say

My PhD research deals with resource constraints and, by implication, resource shocks such as the oil crisis of 1973 – and the current slow-motion energy crisis. What I'm trying to sort out is whether sudden constraints, such as the 1973 event, act as springboards for technological advancement. In other words, I'm trying to figure out whether we can rely on our ingenuity and opposable thumbs to get us out of the mess they're getting us into.

After some trouble with the initial simulations, I now have a bunch of data from simulated companies solving simulated resource shocks by developing new (simulated) technologies. Granted, the simulated companies are more of a "toy model" kind of firm (think one step above microeconomic equations), but at least this is a toy model with some pedigree – the fabled NK model, to be exact, as seen in Levinthal (1997), Rivkin (2000), Kauffman et al. (2000), Frenken (2001, 2006) and others. For those not so much into artificial societies: the approach is at least theoretically tolerable to, if not exactly universally accepted by, many organizational scientists and technology researchers.
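
For readers who haven't run into the NK model before, here is a minimal sketch of the basic idea in Python. This is my own illustration of the modelling tradition cited above – not the exact model behind the results below – and the function name make_nk_landscape, the parameter values and the random seed are all just for this example. A technology is a string of N binary components; each component's fitness contribution depends on its own state and on K other components, and overall performance is the mean of the contributions.

```python
# A minimal NK fitness landscape sketch (illustration only; not the exact
# model used in the simulations described in this post).
import itertools
import random

def make_nk_landscape(n, k, seed=42):
    """Return a fitness function over designs, i.e. binary tuples of length n."""
    rng = random.Random(seed)
    # Each component depends on itself plus k randomly chosen other components.
    deps = [
        [i] + rng.sample([j for j in range(n) if j != i], k)
        for i in range(n)
    ]
    # One random contribution per component per configuration of its k+1 inputs.
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]

    def fitness(design):
        # Overall performance: mean of the n component contributions.
        return sum(
            tables[i][tuple(design[j] for j in deps[i])] for i in range(n)
        ) / n

    return fitness

fit = make_nk_landscape(n=10, k=4)
print(fit((0, 1) * 5))  # performance of one arbitrary design
```

The point of the construction is that K controls how interdependent – how "rugged" – the landscape is: at K = 0 each component can be optimized in isolation, while at higher K changing one component reshuffles the contributions of several others.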

I won't go into the nitty-gritty technical details here – I'll be presenting those at PREBEM 2011 and will post the full paper here later – but for those interested in the results and conclusions, more after the jump.

1. Constraints tend to be neutral – they’re about as likely to help you as they are to shoot you in the foot.

I ran well over three million simulations with varying parameters, but I was unable to find a combination where constraints would be, on average, consistently beneficial for the development of technology. What I measured was the average change in the performance of the simulated technologies resulting from the constraint. What I got was a very slight but statistically non-significant positive bias. The variability of the results was large, however; it is therefore entirely plausible that constraints lead to good results – or that they lead to bad ones. It's very much a hit-or-miss, flip-a-coin type of proposition; optimists rejoice, pessimists knew this all along.
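
To make the "statistically non-significant" part concrete, here is a tiny sketch of the statistic in question: the mean performance change across runs and its standard error of the mean, compared against the "no change" line. The numbers below are invented purely for illustration.

```python
# Mean performance change and its standard error across simulation runs
# (illustrative numbers only).
import math
import statistics

def mean_and_sem(changes):
    """Mean of per-run performance changes and the standard error of the mean."""
    mean = statistics.mean(changes)
    sem = statistics.stdev(changes) / math.sqrt(len(changes))
    return mean, sem

# Hypothetical per-run changes in performance caused by a constraint.
changes = [0.012, -0.007, 0.004, -0.015, 0.009, 0.003, -0.002, 0.006]
mean, sem = mean_and_sem(changes)
print(f"mean change = {mean:+.4f} +/- {sem:.4f} (1 SEM)")
# If zero lies within mean +/- 1 SEM, the apparent positive bias cannot be
# distinguished from "no change" at this crude level of analysis.
print("overlaps zero:", mean - sem <= 0.0 <= mean + sem)
```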

Mean performance change (black solid line with error bars) and mean performance after constraint (red solid line).

Here is one representative simulation. As you can see, the average performance improvement is, at most, less than one percent. The result holds for all values of K, which (roughly) measures the complexity of the technology. Since the standard errors of the mean overlap the "no change" line (red dashed line), it's safe to say the result is not statistically significant. The variability (black dotted line, one standard deviation), however, is high. By the way, the red line shows that the performance optimum is found when K is near 4 – as is the case in most NK simulations, whatever the exact dynamics. Incidentally, when analyzing real-life technologies, many observers seem to estimate their K values at around 3 to 5. Coincidence, biased analysis, or a general feature of how the evolution of complex systems moves towards the performance optimum? You decide.

2. The future (of constrained innovation) is here – it’s just not very widely distributed.

As you might have guessed from the heading, it turns out that simulated constraints aren't very good springboards for technology development. Firms do develop new technologies, but constraints don't seem to be very good at motivating them to do so. Instead, under assumptions of competitive behavior that at least seem quite plausible, constraints tend to be very good at causing firms to adopt technologies that already exist.

But what if that technology happens to be inferior in every regard? Tough luck, bud. Availability trumps potentiality most of the time, unless we gift the firms with rather more foresight and patience than seems plausible. Even when there is little or no pressure to imitate successful firms, up to 80% of the technologies in use after the constraint are exact copies of technologies in use before it. Now, strictly speaking, the simulations covered cases where a) there are no short-term substitutes for the technology and b) the technologies are largely mature, but still. (Immature technologies might not suffer from constraints as much – constraints might slow their development a little, but since path dependencies are not yet fully set, there are many more ways around the problem. But that's speculation.)
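
For the curious, here is one way a "share of pre-existing designs" statistic like the 80% figure above can be computed: compare the designs in use after the constraint against the set of designs in use before it. The function name and the toy data are invented for illustration and may well differ from the paper's actual bookkeeping.

```python
# Share of post-constraint designs that are exact copies of pre-constraint
# designs (illustrative data; designs are represented as bit-tuples).
def share_of_preexisting(designs_before, designs_after):
    """Fraction of designs in use after the constraint that already existed before it."""
    known = set(designs_before)
    copies = sum(1 for d in designs_after if d in known)
    return copies / len(designs_after)

before = [(0, 1, 1, 0), (1, 1, 0, 0), (0, 0, 1, 1)]
after = [(0, 1, 1, 0), (0, 1, 1, 0), (1, 1, 0, 0), (1, 0, 1, 0)]
print(share_of_preexisting(before, after))  # 0.75: three of the four are copies
```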

In other words, you could have a pretty good idea of what a constraint – say, a radical increase in the price of energy – would probably do to the development of technology, if (and it's a big if) you were well versed in the lore of energy technology. And a bit lucky, so that no "black swan" type events materialize.

Anyway, here’s a graph to show you:

Two different settings for competitive intensity (the lower panel has the higher intensity) and the resulting share of pre-existing designs after the constraint.

The upper panel shows the average share of pre-existing designs (i.e. designs identical to those existing before the constraint) at a moderate level of competitive intensity. At realistic levels of K, the share of pre-existing designs is around 70%.

The lower panel shows the same when competitive intensity is turned up to eleven, forcing companies to imitate superior designs if their technologies fall even a tiny bit below the average performance.

Interestingly, the variability of the average share increases in the latter case. The explanation is that if a company happens to come up with a new, superior technology, the other companies are forced to adopt it. On the other hand, the chances of such a breakthrough are slim: in industries where competition is so intense that even ninja pirates would call it a day, there are likely to be only one or two technologies in existence before the constraint hits. Tough luck if the constraint hits the key component of that technology and there are no immediate alternatives, as there might be where competition is less intense.
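
To make the "competitive intensity" knob a bit more concrete, here is one way such an imitation rule could look. This is my reading of the mechanism described above, not necessarily the exact rule in the simulations: a firm keeps its own design unless its performance falls more than some tolerance below the industry average, in which case it copies the best design currently in use; higher competitive intensity means a smaller tolerance.

```python
# One plausible imitation rule under a competitive-intensity parameter
# (illustration only; the tolerance shrinks as competition intensifies).
def imitation_step(firms, tolerance):
    """firms: list of (design, performance) pairs. Returns the updated list."""
    avg = sum(perf for _, perf in firms) / len(firms)
    best = max(firms, key=lambda f: f[1])
    return [
        best if perf < avg - tolerance else (design, perf)
        for design, perf in firms
    ]

firms = [((0, 1, 1), 0.62), ((1, 0, 1), 0.55), ((1, 1, 0), 0.70)]
print(imitation_step(firms, tolerance=0.10))  # moderate intensity: everyone keeps their design
print(imitation_step(firms, tolerance=0.01))  # "up to eleven": the laggard copies the best design
```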

This brings us to today's last takeaway, which is

3. Constraints are bad news for diversity.

Take an industry, or an organization, or whatever, where the working practices (or tools, or technologies) need to be "competitive." What happens if a constraint decreases the performance of one of those working practices? Unless there is immediately visible light at the end of the tunnel – that is, unless the practice can quickly be redeveloped into a much better one – it is likely to be discarded or replaced with something else.

Herein lies the problem. Say that this first constraint eliminated half of the variety of practices in use, and those practices and their practitioners are now unavailable – laid off, dismantled, or just not giving a damn. If a second constraint hits you, your odds of having a practice, or tool, or technology, or whatever that is unaffected by it are likely to be only half of what they were before. (Strictly speaking, this holds only if constraints are equally likely to hit all components, which is probably not the case, but it suffices for illustration.)
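
Here is a crude sketch of that back-of-the-envelope argument, under the same loudly-flagged simplifying assumption that a constraint hits any single component with equal probability and that a practice survives only if it does not rely on the hit component. The numbers are invented; the point is only the direction of the effect, not the exact halving.

```python
# Probability that at least one practice survives a uniformly random
# single-component constraint (illustrative toy numbers).
def prob_some_practice_survives(practices, n_components):
    """Each practice is the set of components it relies on."""
    survivors_per_hit = [
        sum(1 for p in practices if hit not in p)
        for hit in range(n_components)
    ]
    return sum(1 for s in survivors_per_hit if s > 0) / n_components

# Before the first constraint: four fairly diverse practices over six components.
diverse = [{0, 1}, {2, 3}, {4}, {5, 0}]
# After it: half of the variety is gone.
thinned = [{0, 1}, {5, 0}]
print(prob_some_practice_survives(diverse, 6))  # 1.0 - something always survives
print(prob_some_practice_survives(thinned, 6))  # ~0.83 - component 0 is now a single point of failure
```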

Entropy (≈diversity) of technologies in use, as a function of time.

So, even though constraints do not seem to be very good at springboarding new technologies, they do seem to be very good at killing off those that do not perform well at the time. You can see this in the figure above, which effectively measures the diversity of technologies in use. Even allowing radical innovation does not really help; and notice again what happens when competition is extremely intense. You're right – it's very much an "up or out" policy in there.
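
For reference, the entropy measure named in the figure caption can be computed along the following lines. I'm assuming Shannon entropy over the shares of the designs in use, which is the standard choice, though the paper's exact definition may differ.

```python
# Shannon entropy (in bits) of the distribution of designs across firms:
# high when many different designs are in use, zero for a monoculture.
import math
from collections import Counter

def design_entropy(designs_in_use):
    counts = Counter(designs_in_use)
    total = len(designs_in_use)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

print(design_entropy([(0, 1), (0, 1), (1, 0), (1, 1)]))  # 1.5 bits: mixed industry
print(design_entropy([(0, 1)] * 4))                      # 0.0 bits: monoculture
```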

Much of the time this is a good thing – after all, the discarded technologies are inefficient – but it can turn into a trap if, when the constraint comes, there isn't enough variety among the technologies in use.

As constraints tend to be somewhat unexpected, the takeaway from this last point should be clear: don't put all your eggs in one basket. It's common sense, really, and should be familiar to anyone with even a high-school education in ecology – monocultures tend to be pretty darn susceptible to viruses and other nasty stuff.

That's it, folks – for more crunchy data, you'll have to wait for the paper. Questions and comments are welcome; the stage is yours. References below for those who really want them.

References

Frenken, K. (2001). Understanding product innovation using complex systems theory. University of Amsterdam, Amsterdam.

Frenken, K. (2006). Innovation, Evolution and Complexity Theory. Cheltenham and Northampton: Edward Elgar.

Kauffman, S., Lobo, J., & Macready, W. G. (2000). Optimal search on a technology landscape. Journal of Economic Behavior & Organization, 43, 141–166.

Levinthal, D. A. (1997). Adaptation on rugged landscapes. Management Science, 43(7), 934–950.

Rivkin, J. W. (2000). Imitation of complex strategies. Management Science, 46, 824–844.
