May 1996 (final)
TOWARDS A GENERALISED COASE THEOREM
A general theory of bounded rationality, rule-following behaviour
and the emergence of social and institutional structures
as a substitute for market transactions
Bertin Martens[1]
Paper presented at a conference on "Commerce, complexity and
evolution"
at the University of New South Wales, Sydney (12-13 February 1996)
Abstract
This paper argues that the assumptions underlying the neo-classical
economic paradigm of perfect competition are not realistic. In particular, the
assumptions of exogenously fixed production technology and consumer
preferences, and of perfectly symmetric information available at zero cost,
contradict the findings on the inherently bounded rationality of human
decision-making.
The paper builds on an information-theory approach. Bounded rationality is
reflected in rule- or algorithm-following behaviour. Establishing rules
requires an investment in information. Sticking to established rules yields
economies of scale on this investment. Behavioural rules can be
stored in the human brain but can also be "farmed out" into equipment
goods and social structures. Farming-out reduces transaction costs (the
opportunity cost of information storage and processing). Increasing control and
policing costs set limits on farming-out.
A generalised version of the Coase Theorem is derived from this approach.
It provides a more realistic basis than the neo-classical paradigm for
explaining a wider range of individual (economic) choice behaviour, explains
the emergence of rules, social structures and institutions as substitutes for
negotiated market transactions, and lays the basis for a theory of
institutions.
Keywords: production theory,
consumer behaviour, Coase Theorem, transaction costs, bounded rationality,
institutional economics, evolutionary economics
1. Introduction
Present-day economics is characterised by a deep divide between mainstream
neo-classical economic theory and the practice of business management.
Neo-classical theory is built on the perfect competition paradigm that leads to
general equilibrium and the highest possible state of welfare. It is driven by
two exogenous sets of fixed algorithms: consumer preferences and production
technology. Equilibrium is reached when all ratios of marginal costs to
marginal benefits are equalised. At that point entropy is maximised and economic activity
- agents making choices - must necessarily cease because no agent can further
improve his position. At best, economic activity keeps going in reproductive
mode, whereby agents eternally exchange the same mix of goods and services at
the same prices. In the absence of external impulses, the economic system dies
an entropy death.
I have already argued in a previous paper (Martens, 1995) that the
neo-classical perfect competition paradigm is incomplete from a systems theory
point of view. An entropy-maximising system is not self-sustainable. It needs
an entropy-decreasing force - in this case a competition-reducing force - to
keep it going. Behavioural innovation constitutes such a counterbalancing
force. This can be generated by regular re-programming of the exogenous
behavioural algorithms (consumption preferences and technology) with new ideas.
Innovation fuels competition and keeps the economic system away from entropy
death.
Re-programming of these exogenous algorithms[2] implies endogenisation of these
algorithms into the economic system. This has been attempted since the
mid-1980s by so-called endogenous growth theory (Romer, 1986, and Lucas, 1988).
While these attempts initially remained within the confines of the
neo-classical perfect competition paradigm, Romer (1990a, 1994) has shown that
innovation-based endogenous growth theory basically conflicts with the
neo-classical model because it violates the convexity requirement that is
needed to reach equilibrium. Economic theory thus needs to switch to a new
model, and indeed a new paradigm, that takes into account both optimising
behaviour in response to competitive forces and innovation in behavioural
algorithms to escape from competition.
In this paper, I will first retrace Romer's argument on the conflict between
neo-classical production theory and innovative producer behaviour and try to
push this analysis somewhat further in the direction of an information or
knowledge-based approach. Secondly, I will transplant the same arguments to
consumer behaviour, a domain that has been neglected both by endogenous growth
theory and the neo-Schumpeterian innovation school. Then I will present, in
narrative format, the outlines of a new model that is more firmly rooted in
present-day neuropsychological and information theories. It combines both
neo-classical rational optimising behaviour and boundedly rational
rule-following behaviour, both by consumers and producers. Finally, I will show
how this approach results in a generalised version of the Coase Theorem. This
may constitute the basis for a more comprehensive explanation of general
socio-economic dynamics, including social structures and institutions. That is,
more comprehensive than the neo-classical general equilibrium model.
2. Innovative producer behaviour
Since the early 1950s, mainstream economics' treatment of production is
based almost entirely on the neo-classical Solow-model (Solow, 1956). The
production process is a technological "black box" that transforms
factor inputs (capital goods and labour) into outputs (production). The
transformation ratios between factor inputs and outputs (factor productivity)
are considered exogenous to the economic process. Empirical estimation of these
transformation ratios, by Solow himself and others, showed, however, that their
capacity to explain output growth was limited. An unexplained growth residual
remained: the so-called Solow residual. Clearly, there is productivity growth
or technological progress inside the production black-box. But the
neo-classical model is unable to explain this progress and considers it to be
exogenous to the economic system.
In the 1980s, two different gateways were explored to endogenise
technological progress in the economic system. The first started from Nelson
and Winter's (1982) micro-economic evolutionary approach to economic change
that laid the foundations for most of the present neo-Schumpeterian
entrepreneurial innovation models. The second gateway was situated at a more
macro-economic level, where Romer (1986, 1987, 1990) and Lucas (1988)
transformed Arrow's (1962) learning-by-doing model into an endogenous growth
model.
Nelson and Winter and the neo-Schumpeterian school have sought inspiration
in genetic adaptation models in biology to explain innovative producer
behaviour and focus on innovation as the driving force behind economic growth.
Competitiveness is treated as an evolutionary problem: producers must adapt or
perish. The market position of individual firms continuously changes because of
innovations by competitors. A wide variety of evolutionary models has been
built around this theme, starting with Nelson & Winter (1982). A good
overview with recent examples is presented in the December 1994 issue of the
"Journal of Evolutionary Economics", including Dosi and Nelson (1994),
Ulph & Owen (1994) and Silverberg & Verspagen (1994). Aghion and Howitt
(1993) have developed a micro-macro model where economic growth, including
business cycles, is driven by innovation and creative destruction.
The almost exclusive focus of all these models on the firm as the locus of
innovation allows us to classify them as evolutionary supply-siders. Their own
models explain this supply-side bias as historical path-dependency (David,
1993) on Schumpeter's (1934) initial approach.
The general mechanism of these evolutionary models could be summarised as
follows. The core of the models is constructed around investment in R&D
that yields innovations, generated by a stochastic mechanism. These innovations
are then linked to a standard production model. They improve the quality of
output and/or increase productivity in the production process. Quality
improvements are reflected in price increases as consumers are willing to pay a
higher price for "better" products. Productivity improvements result
in production cost savings. Both improvements can be coupled to time-patterns
that simulate inertia in diffusion of innovation and spill-over to other
producers, and thus the evolution over time of relative monopoly power in the
market. Although ideas can normally be replicated at virtually zero marginal
cost, in practice they are protected by legal patents, secrecy and the
time-consuming learning processes needed to acquire them.
A key question is whether the standard neo-classical hypothesis of profit
maximising behaviour by producers remains valid in these models. The strong
version of profit maximisation, based on ex-ante rational expectations, can
certainly not be maintained as innovations are randomly generated events and
the co-evolving character of producers' competitiveness in a market excludes
forecasts about competitors' behaviour. A weaker version, based on ex-post
optimisation through competitive selection mechanisms, is more acceptable. The impact of a
weakened profit maximisation hypothesis has been discussed at length in the
literature (for instance, Dietrich, 1994) and should not necessarily de-rail
the coherence of the neo-classical train of thought.
However, innovation-based evolutionary models violate much more fundamental
neo-classical principles. They introduce imperfect competition as the driving
force for innovation. Innovation strengthens a producer's monopoly power in the
market and allows it to raise prices above the prevailing market prices for
"standard" (non-innovative) products. General competitive equilibrium
analysis no longer holds, as innovative monopoly power allows producers to
set prices above the marginal cost of production (which is anyway close to zero
for ideas). Typically, in neo-Schumpeterian innovation-driven models, prices are
determined through mark-up procedures, complemented by market share allocation
mechanisms among producers, without taking into account changes in consumer
demand (see next section).
Furthermore, ideas are non-rival goods. Unlike ordinary goods, which
are rival, ideas can be used by many users at the same time without loss of
benefits or additional costs for any of them. It is important to underline here
that this synchronic (same-time) non-rivalry of ideas pertains only to the idea
itself, and not to its material carrier (paper, diskettes, video, any
communication media). The material carrier may only produce diachronically
(over-time) non-rival series of copies of the embodied idea. For example, the
ideas embodied in the Windows operating system that runs my PC are used by
millions of people all over the world at this very moment (synchronically). But
my own registered Windows copy, which came on a floppy disk with a unique serial
number, can only be used on one PC at a time, although with a virtually
endless series of repeats over time (diachronically).
Romer (1990) has demonstrated that non-rival goods result in production
functions that have a degree of homogeneity higher than one. Consequently,
Euler's theorem on the allocation of factor income according to marginal
productivity is not valid anymore and factors are not remunerated according to
their marginal productivity. Classic production functions, for instance those
of the Cobb-Douglas type, can in principle no longer be used, since they
become meaningless for the allocation of factor income. Some neo-Schumpeterian
models try to solve this problem by splitting the economy into two sectors: one
that produces non-rival innovations and a second that produces ordinary rival
goods (with bought innovation inputs) and remains subject to classical
production functions (for example Aghion and Howitt, 1993). But this does not
solve the problem of the first sector's incompatibility with neo-classical
models.
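The point can be restated with standard textbook algebra (a worked sketch in my own notation, not Romer's). For a production function F that is homogeneous of degree k, Euler's theorem gives

\[ F(\lambda K, \lambda L) = \lambda^{k} F(K, L) \quad\Rightarrow\quad K\,F_{K} + L\,F_{L} = k\,F(K, L). \]

With constant returns to scale (k = 1), paying each factor its marginal product exactly exhausts output. When non-rival ideas enter the production function, k > 1, and marginal-productivity payments would sum to kF > F, exceeding total output; at least one input must then be paid below its marginal product.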
Innovation-driven models thus violate at least three neo-classical
principles: ex-ante rational profit maximising behaviour of producers, perfect
competition on product markets (price exceeds marginal costs) and perfect
competition in production factor markets (remuneration exceeds marginal
productivity). The neo-Schumpeterians have never claimed to be, nor wanted to
be, consistent with the neo-classical paradigm. On the contrary, they thrive on
imperfect competition which they claim - rather successfully - to be closer to
reality. Indeed, the objective of business managers is not to operate on a
perfect level playing field with their competitors but rather to differentiate
their products through price and non-price strategies. Clearly, there is not
much left of neo-classical producer theory.
A critique of the neo-Schumpeterian evolutionary supply-side models is that,
despite these violations of the neo-classical production behaviour hypotheses,
their approach is still largely based on the same neo-classical production
model with aggregate variables (profits, investments, prices and production).
The only "innovation" which they introduce is to link production to a
stochastic innovation generator, fuelled by investments. Production of material
outputs (goods) remains a function of capital (investments) and labour. The
production algorithm itself, the series of transformations in the "black
box" that generates the output, is not made explicit or explained. The process
of embodiment of the innovative idea in the production process, and the changes
that it induces in the production algorithm, are not explained. Innovation is
still measured indirectly by its results, changes in production costs and
market prices, not by its intrinsic impact on the production process.
3. The origins of economies of scale in production
Romer's (1990) explanation that the non-rival nature of ideas causes economies
of scale in production is intuitively understandable but it does not explain
the fundamental origins of economies of scale in production processes.
Ideas are expressed in information algorithms. Algorithms are rules that
separate order from chaos (Gell-Mann, 1995). The information contained in an
algorithm reduces entropy: it creates differentiation between previously
undifferentiated elements. Information cannot be stored or transmitted without
an energy/matter vehicle. It has to be stored in a material form, whether in
our brains, on a computer disk or on paper. Every "created" object -
an object worked on by man - is, by definition, moulded according to a specific
algorithm, an idea. To work on something is to shape it, to differentiate it
from a less-ordered environment from which it is extracted. In a purely
thermodynamic sense, to work means to lower entropy in a sub-system by
extracting energy from the outside world, to imprint a lower-entropy
informational algorithm on a sub-set of a higher-entropy system. Putting that
imprint takes energy, work (Georgescu-Roegen).
The essence of production is the transmission of algorithms, the application
of rules that increase order. Recent developments in the economic theory of
production (Scazzieri, 1993) look at production as a process, a
sequential/parallel chaining of rules for action, algorithmic transformations.
Innovation could then be defined as a change in the chaining of algorithmic
transformations or a change in a particular algorithm itself. Such changes are
brought about through learning processes (that embody new algorithms in labour
or in management) and the production of new equipment goods (that embody the
algorithm of an innovative idea). Different algorithms produce different
outputs altogether, not just improvements in the quality of an existing
product.
In the pre-industrial and industrial stages of economic development,
moulding of matter according to certain simple algorithms was the predominant
production activity in the economic process. As long as the algorithmic
"blueprint" remained resident in the human brain only, there were few
economies of scale in such production processes. Each human work cycle resulted
in only one copy of the algorithm. The cost of each replication of the algorithm
(labour input) was more or less constant. Matter- and energy-based activities
are subject to the laws of thermodynamics: total energy in a closed system
remains constant, while its availability decays (entropy rises) through work.
With the development of tools and machines (equipment or capital goods),
economies of scale set in. The essence of capital goods is that a piece of
equipment embodies an algorithm, generated in the brain of the designer. This
single copy of the designer's algorithm can be imprinted endlessly (many
copies) on other pieces of matter. One work cycle - the embodiment of the
algorithm in the equipment - generates a (potentially) endless series of copies
of the algorithm. The capital good acts as an intermediary storage "memory"
for the algorithm, which can be copied endlessly without changing or eroding or
destroying its content. In practice, the number of copies is limited only by
material wear and tear of the equipment good (physical depreciation). With
every copy, a tiny bit of matter is also displaced and the capital good
physically decays. Equipment or capital goods generate economies of scale
because a single investment in design of an algorithm generates many copies at
zero marginal design cost. There is no need to re-design the algorithm for each
copy. The cost of running the equipment is, of course, positive.
Thus, the distinction between capital goods and recurrent inputs in a
production process is entirely defined by the transmission mechanism of their
algorithmic information content. A capital good transfers only the content of
its embodied algorithm (possibly with a marginal amount of matter) to the
output; a recurrent input transfers both the full amount of matter that carries
the algorithm as well as the embodied algorithm itself into the output. In
other words, the distinction between rivalry and non-rivalry depends on the
extent to which the material algorithmic carrier is incorporated in the output.
A production process can then be described as a series of transformations, some
of which involve the incorporation of matter into an output product (rival
stages of production) and others which involve only algorithmic incorporations
(non-rival stages of production).
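To make the cost logic concrete, here is a small toy calculation (my own illustration, not drawn from any of the cited models): a production process as a chain of stages, where rival stages incur recurrent matter costs per unit of output, while the design cost of each embodied algorithm is amortised over all units.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    design_cost: float   # one-off cost of creating the stage's algorithm
    matter_cost: float   # per-unit cost of matter/energy incorporated
    rival: bool          # True if the material carrier ends up in the output

def average_unit_cost(stages: list[Stage], n_units: int) -> float:
    """Design costs amortise over all units; matter costs recur per unit."""
    fixed = sum(s.design_cost for s in stages)
    recurrent = sum(s.matter_cost for s in stages if s.rival)
    return fixed / n_units + recurrent

process = [
    Stage("blueprint", design_cost=1000.0, matter_cost=0.0, rival=False),
    Stage("stamping", design_cost=200.0, matter_cost=5.0, rival=True),
]

for n in (1, 100, 10_000):
    print(n, round(average_unit_cost(process, n), 2))
# Unit cost falls from 1205.0 towards the rival matter cost of 5.0: the
# purely algorithmic (non-rival) stages are the source of the scale economies.
```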
Non-rival transmission of algorithms is a fundamental source of economies of
scale in production processes. A production process that involves only
algorithmic transfers enjoys virtually endless economies of scale, limited only
by the wear and tear of the hardware. Copying public domain software over the
Internet is an example: the cost of the physical carrier of the software
algorithm, electricity, is marginal compared to the value of the software.
4. Innovative consumer behaviour
Both endogenous growth theory and the neo-Schumpeterian approach to endogenisation
of innovation and technical change have neglected the consumer side of the
innovation story. They introduce changes in the exogenous set of technology
parameters but leave exogenous consumer preferences untouched.
Neo-classical general equilibrium models assume that consumer preferences
are exogenous and fixed. A strong statement in defence of this assumption has
been made by Becker and Stigler (1976), although Becker (1991) seems to have
somewhat softened his views. Pollak (1976, 1977, 1978) has weakened the
neo-classical stance and allowed for various sources of endogenous influences
on consumer preferences: own past preferences, preferences of other consumers
and prices. He underlines that endogenously determined consumer preferences
rarely result in personal utility maximisation. Since Pollak's seminal work on
this issue, an endless series of variations on the theme of reduced consumer
sovereignty has been developed. Bikhchandani et al. (1992) introduce a theory
of fads, fashion and customs on the basis of "information cascades"
that bears a close resemblance to Arthur's (1989) lock-in mechanism. Dittmar
(1994) has eroded consumer sovereignty to the bone. Her socio-economic
investigations led to the conclusion that consumer behaviour is largely dependent
on norms and values within peer groups. A substantial body of psycho-economic
literature has developed around the theme of "socialisation" of
consumers from early childhood onwards. See, for instance, Lea (1990). In
short, the perfectly sovereign neo-classical consumer, who maximises utility
solely on the basis of his own ex-ante exogenous preferences, no longer exists
in present-day economic theory. If he did, the vast amounts spent on marketing
campaigns would make no sense.
However, economic theory has still not taken the last step in the
endogenisation of consumer sovereignty, that is the emergence of preferences
for new goods ex nihilo. How can a
consumer know (ex-ante) how much preference he will attach to all possible new and
improved goods that will be available in the future? Obviously, he cannot.
Preferences can only be revealed after the producer has put his innovative good
on the market. Ulph and Owen (1994) have tried to find a way around this by
augmenting consumer preference by the amount of quality improvement as
reflected in the price increase. But this technique does not answer the question
of how consumers can (ex-ante) determine the augmentation of their preference for
an improved good without knowing the nature of the improvements. Furthermore,
it does not allow for the introduction of new goods; only improvements to
existing goods can occur.
Preferences for new goods must necessarily emerge endogenously and
ex-nihilo. That ruins the neo-classical consumption model but stands much
closer to our perception of reality. Yet, just like the technological black box
in neo-classical production theory, the transformation of acquired
consumer goods into consumer satisfaction remains a black-box mechanism in
neo-classical consumption models.
Bikhchandani et al. (1992) come much closer to ex-nihilo emerging preferences
with their information cascade models. This brings us closer to an information-
or algorithm-based approach to the consumption process. Consumption could be
described as a match-making process between learned algorithms or
representations in the consumer's mind and the algorithms or representations
embodied in consumer goods. A consumer will buy a good when the embodied
algorithmic representation corresponds to the representation
("preference") in his mind. Consumer satisfaction occurs when the
embodied representation is copied to (matched with) the representation.
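A deliberately crude formalisation of this match-making idea (a toy sketch of my own, not proposed in the literature cited here): treat both representations as feature vectors and let a purchase occur when they are similar enough.

```python
import math

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two representation vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def buys(preference: list[float], good: list[float], threshold: float = 0.9) -> bool:
    # The consumer buys when the representation embodied in the good matches
    # the learned representation ("preference") in his mind closely enough.
    return similarity(preference, good) >= threshold

preference = [1.0, 0.2, 0.0]     # learned through social communication
good = [0.9, 0.3, 0.1]           # representation embodied in the good
print(buys(preference, good))    # True: the representations match
```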
Non-durable consumer goods can only be consumed once: both the embodied
algorithmic representation and the material carrier are "destroyed"
in the consumption process. This category contains goods that are physically
(matter/energy) taken in by the human body; examples are food and heating. All
other consumer goods are durable in the sense that the embodied algorithm can be
consumed endlessly, as long as the wear and tear of the material carrier allows it.
Only the embodied algorithm is copied to the consumer, and can be copied
repeatedly[3].
Representations in the consumer's mind are established (learned) through
communication (social interaction, information gathering). Producers produce
goods and services that correspond to these representations or they may try to
modify the representations in the consumer's mind through marketing campaigns.
Take clothing, for example. The structure of the fabric (mechanical and chemical
algorithm) slows down heat dissipation or absorption by the body which improves
physical survival chances. In that sense, and from a purely utilitarian point
of view, old-fashioned clothing will do as well as modern clothing. However,
people generally prefer fashionable clothing. Fashion means that the embodied
design (algorithmic representation) has been modified to take into account
preferences for certain colours and styles that have been established through
social communication processes, not on the basis of individual consumer
sovereignty. These socially established representations constitute the
preferences in the consumer's mind about what one should wear this season.
"Utility" or "preference" become tautological concepts
in this view. Useful is what is perceived as useful in a given individual and
social information context. Consequently, utility is fully determined within
the system and utility maximisation can not be the driving force behind
consumer behaviour anymore. Utility functions lose their status of objective
functions, which they had acquired in the neo-classical economic model.
Describing the consumption process as transfers of algorithmic
representations also allows us to widen the neo-classical consumption concept
to include "consumption of representations", apart from the consumption
of goods and services. For instance, we can "consume" behaviour,
attitudes, ideas and information emitted by other persons. We can enjoy their
emissions of representations, dislike them or simply not care. This widening of the
consumption concept has two substantial implications for the neo-classical
consumption model.
First, it introduces consumption of "pure" information. Consumers
are willing to pay entrance fees to be in the presence of persons they like or
to hear ideas and see scenes which they approve of. They don't pay for the
material carrier of the information or representation (book, audio disc,
theatre) but for the representation itself. In the neo-classical model,
"pure" information (representations) is assumed to be freely
available and symmetrical, that is identical for all economic agents. In the
algorithmic approach, information usually comes at a cost and is, in any case,
not symmetrical, because the supply of and demand for a particular piece of
information or representation is not homogeneous among economic agents.
Second, it allows for the introduction of "negative consumption",
consumption of things which we do not like but still care about. We may incur
substantial opportunity costs to stay far away from people and situations we
dislike. We may spend money on buying an opposition party newspaper just to
read how ridiculous their ideas are. We spend time protesting in the streets
against nuclear bombs or human rights violations by governments in distant
countries where we will never set foot. Neo-classic consumption models have
difficulties coping with such consumer behaviour. The introduction of
algorithmic representations as the basis for consumer behaviour facilitates the
analysis. Consumer preferences then come in three kinds: those you like, those
you hate and those you ignore; only the last kind has no impact on your
allocation of opportunity costs. The neo-classical consumption model can only
cope with the first kind.
Intermezzo
In the preceding sections, we have tried to shed some light on two
black-boxes in the neo-classical model, namely production and consumption
processes. We have shown how the underlying transformation mechanisms (inputs
into outputs, goods into consumer satisfaction), that are implicit in the
neo-classical model, can be made explicit by adopting an algorithmic approach.
Production and consumption can be described as algorithmic (rules,
representations) transfer activities. This approach, however, casts serious
doubt on the neo-classical assumption of exogenously fixed production
technology and consumer preferences. Algorithm-based behaviour is, almost by
definition, learned behaviour, established through communication processes that
are endogenous in the human interaction model. We are now confronted with the
problem that endogenisation of production and consumption parameters leaves
the economic model rudderless: where do we start, and where do we go?
Economics is fundamentally the science of human choice: what is the best
choice given the circumstances? The word "best" implicitly assumes
that there is something to be optimised: profits, income, human satisfaction,
etc. Optimisation requires a driving force, a motive for choice and action.
Endogenising consumption parameters takes away the motive for consumer
behaviour. What motivates consumption behaviour and consumer choice when the
consumer satisfaction derived from bought goods constantly changes and - worse
- depends on the choice itself? As in every detective story, we need a motive
for behaviour. The search for a new motive - beyond consumer preference - is
the subject of the next section. We leave economics for a while, and start
roaming around in information and evolution theories, biology and psychology.
5. Survival probability maximisation under bounded rationality
The starting point in the neo-classical world view is that economic agents
are, at any moment, rational optimisers. They have, at any moment and at zero
opportunity cost, perfect information on their economic environment that
permits them to maximise a utility/profits objective function subject to
exogenously given preferences/technology. Ever since the work of H. Simon on
bounded rationality, we have known that this is an unrealistic assumption in most
circumstances. Rational choice assumes the continuous re-calculation of
pleasures and pains. Such behaviour implies enormous information gathering and
processing costs as well as transaction costs to continually change
arrangements. Given the limited computing capacity of the human brain,
continuous and comprehensive optimisation is impossible. At best, intermittent
and partial optimisation can be achieved, based on a limited set of
information. Even if the brain's computing capacity were not limited, the
fundamentally unknowable universe would only allow for partial comprehension
and thus bounded rationality.
It is precisely in response to imperfect information or bounded rationality
that complex adaptive systems have developed. Gell-Mann (1995) calls them
Information Gathering and Utilising Systems (IGUS). IGUS sift through the
limited amount of available information to identify regularities (algorithms)
in an uncertain universe. Regularities separate randomness and uncertainty from
order (synchronic order) and predictability (diachronic order). The algorithms
identified by IGUS allow them to partially reduce the unpredictability of the
world. Algorithm identification is called "learning". The learned
algorithm is stored in memory and can be used repeatedly. Although algorithms
are only an imperfect approximation of reality, being able to identify and
memorise them enhances the survival probability of IGUS. Rather than simply
awaiting the course of external events and hoping that none of these will be
harmful or even lethal, IGUS can try to foresee the course of events and devise
strategies to reduce harm and increase benefits. Langlois and Csontos (1993)
have shown how this two-track approach (rule-following combined with intermittent
re-optimisation) can converge to a single behavioural track. This is what
biological, including human, behaviour is about: maximise perceived survival
probability in a world of uncertainty. You cannot be sure of finding food
anytime, anywhere, so you devise (learn) behavioural algorithms that increase
your chances of finding food.
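This two-track behaviour can be sketched in a few lines of code (a stylised simulation of my own; the drifting environment, payoffs and revision interval are illustrative assumptions, not taken from Langlois and Csontos):

```python
import random

random.seed(1)

def payoff(action: str, t: int) -> float:
    """Stylised drifting environment: "north" pays off early, "south" later."""
    p_north = 1.0 if t < 50 else 0.2
    base = p_north if action == "north" else 1.0 - p_north
    return base + random.gauss(0.0, 0.1)   # noisy observation

THINKING_COST = 0.5      # opportunity cost of re-optimising (brain time)
REVISION_INTERVAL = 20   # rule-following in between saves information costs

rule, total = "north", 0.0
for t in range(100):
    if t % REVISION_INTERVAL == 0:
        # Intermittent optimisation: sample both actions, memorise the better rule.
        total -= THINKING_COST
        rule = max(("north", "south"), key=lambda a: payoff(a, t))
    # Rule-following: apply the memorised algorithm without re-calculating
    # pleasures and pains at every single decision occasion.
    total += payoff(rule, t)

print(rule, round(total, 2))   # the rule adapts after the environment shifts
```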
Survival probability maximisation should not necessarily be interpreted
strictly in the live-or-death sense but should be considered in a context of
mostly marginal behavioural adaptations that smoothen the path of life and
steer it so as to maintain or improve perceived survival probabilities. As in
Darwinian evolution, survival maximisation does not have to be an explicit
behavioural objective. But it is always implicitly there: IGUS that do not
improve their survival probability improve their death probability - by
definition.
The qualification "perceived" is essential. A person does what he
thinks is best for him. Because of inherent uncertainty, IGUS have no means of
knowing which behavioural rule will indeed maximise absolute survival
probability. A change towards more "optimal" behaviour and higher
"perceived" survival probability is relative only to present
challenges and offers no guarantee of coping with future challenges (Hodgson,
1993). Consequently, IGUS can only maximise perceived survival probability.
6. Layering and farming-out of algorithms
Order-creating IGUS exist at various levels of complexity, from simple biological
structures to complex entities with cognitive capacities, like humans, to
complex societies of humans. Simple biological structures react mechanically to
external events. The mechanisms are designed in such a way as to collect the
necessary material and energy inputs that prevent the internal
entropy-increasing process from reaching the limits where the structure
disintegrates. Algorithms are pre-programmed and reactions to events are fixed.
If unforeseen (un-programmed) events impact on the structure, it disintegrates.
In more complex biological structures, algorithms are genetically encoded,
which allows programming of far more diversified but still fixed or innate
reaction patterns. Reaction patterns can not adapt to unforeseen incoming
information. However, in Darwinian evolution models, randomly generated genetic
mutations may result in better adapted species with increased survival chances.
The word "random" is essential here: the behaviour of the biological
structure itself does not influence these mutations. The Lamarckian
evolutionary model is not applicable.
The development of cognitive capacity and the emergence of brains mark the
gradual shift from purely Darwinian selection
(pre-programmed genetic) to Lamarckian selection (adaptive programming or
learned behaviour) (Hodgson, 1993). Biological organisms have acquired the
capacity to create a second layer of behavioural algorithms that can be learned
and memorised and even repeatedly re-programmed in the course of the organism's
lifetime. This has considerably enhanced flexibility in behaviour and increased
survival probability in a more varied range of environments and circumstances.
The effective internal complexity[4]
(Gell-Mann, 1995, p.56) of IGUS behaviour has thus increased because they are
able to identify and memorise more concise descriptions of regularities in
their environment, and react accordingly.
Recent developments in neuro-psychology clearly demonstrate the validity of
"double-layered" algorithmic behaviour. Damasio (1994) has reversed
Descartes' mind-body dualism into a unified approach, whereby the brain is
built on, but also dependent on, a bodily substrate. The innate mechanisms of
the old brain cortex are largely genetically pre-programmed and include
pleasure-pain (emotional) algorithms that ensure survival-oriented behaviour of
the body. Development of the neo-cortex has enabled learning and memorisation
of algorithms that allow us to interpret representations (symbols, images) of
the outside world. Damasio shows, however, that purely rational (learned
algorithms) behaviour is not possible without the interference of early cortex
emotional algorithms. Behavioural decisions are often incomputable on the basis
of learned algorithms because these do not provide complete information, and
emotional mechanisms intervene to cut endless reasoning short. Pre-programmed
emotional algorithms reduce brain computing time and thus economise on brain
use, making room for more decisions to be taken in a shorter time span. Damasio
defines "normal" behaviour as mostly rule-following, steered by
emotional algorithms, and sometimes rationally optimising to adopt newly
learned algorithms.
Wilson (1975) had already formulated a similar hypothesis on hierarchies of
genetic and learned algorithms. He maintains that social behaviour is
fundamentally subordinate to a limited set of biologically programmed
behavioural rules that orchestrate behavioural responses so as to ensure
survival and reproduction. Both Damasio and Wilson arrive at the conclusion
that human behaviour is based on "double-layered" programming:
genetically pre-programmed emotional algorithms and learned adaptive
behavioural algorithms. It is both Darwinian and Lamarckian and the most
efficient approach to survival given the limited information gathering and
computing capacity of the human brain.
Beyond building successively more complex and adaptive internal layers of
algorithms, a third stage in the evolutionary process of IGUS can be
identified, which could be described as "farming-out" of algorithms.
The limits to IGUS's internal information processing and storage capacity
result in a need to economise on that internal capacity and search for external
processing and storage possibilities.
We have already identified one type of "farming-out" in sections 2
and 3, namely the development of tools and equipment. As explained in section 3,
tools, instruments and machines embody fixed knowledge algorithms, designed by
the IGUS's cognitive capacity, but stored externally in a material object.
Another type of "farming-out" of behavioural algorithms is the
emergence of social (group) behavioural rules. Skaperdas (1991, 1992, 1995) has
shown how behavioural convergence and equilibrium positions can emerge from
economic interaction between individuals without common behavioural rules.
Exchange between individuals can be voluntary (a transaction) or involuntary
(theft, fight). Under certain conditions (similar perceptions of risk, not too
different fighting technology) it is better to negotiate a voluntary exchange
(certainty) than to force an involuntary exchange (uncertain fight). Perceived
survival probability is normally maximised by switching to voluntary
transactions, unless one of the parties has significantly lower risk
perceptions and/or vastly superior fighting technology. Certain situations,
such as the prisoner's dilemma, may offer an incentive for co-operation (Axelrod,
1986), while others may not.
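A stylised statement of this trade-or-fight choice, in my own notation and merely in the spirit of Skaperdas's models rather than reproducing them: agent i prefers the voluntary transaction when its certain value is at least the expected value of the fight net of fighting costs,

\[ v_{trade} \;\geq\; p_i\,v_{fight} - c_i, \]

where p_i is i's perceived probability of winning and c_i his cost of fighting. With similar risk perceptions and fighting technologies the inequality tends to hold for both parties at once, and voluntary exchange emerges; a party with much lower perceived risk or much superior fighting technology may find it violated.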
Strictly speaking, this is not necessarily a form of "farming-out"
because there is not always an external entity that takes responsibility for
storage and implementation of "group" algorithms. Such an entity exists
only when an institutional set-up is created for that purpose (a club, a
government, a board of directors, etc.). In fact, we need more analytical
material to describe what is happening here. That is the subject of the
remainder of this paper.
7. Algorithmic behaviour and transaction costs
We have identified the perpetrator of economic behaviour - the human IGUS -
and his motive - perceived survival probability maximisation in an uncertain
world. The motive led him to build ever more complex and adaptive layers of
algorithms, in defence against the inherent uncertainty of a partially comprehensible
universe. He even started to "farm out" algorithms because the limits
on his brain capacity create positive opportunity costs for information.
Processing one package of information prevents him from processing another
package: he has to make a choice.
Positive information costs give rise to an asymmetrical or non-homogenous
distribution of information among economic agents. We go through different
learning processes and acquire different sets of algorithmic knowledge, either
by choice or because of purely environmental factors. Learning processes start
at birth and gradually lock the human carrier into specific behavioural
algorithms (language, social rules, perceptions of the environment, skills).
Individuals grow up in different environments, have different experiences and
learn different rules of behaviour. Consequently, information asymmetries are
bound to occur: some agents possess algorithms that others don't have. On
the information supply side, information monopolies lead to information pricing
above the marginal cost of production - that is, above zero, since
information is a non-rival good. On the information demand side, because of
limited brain capacity, the acquisition of information and algorithms
(learning) has opportunity costs: agents have to select what they want to learn
themselves and what they prefer to leave to others (farming-out), according to
perceived opportunity costs.
The emergence of information asymmetries and positive information costs as a
result of bounded human rationality has several consequences.
First of all, it gives the final coup de grâce to the neo-classical economic universe. The unrealistic assumptions on
exogenously fixed consumer preferences and production technology were already
demolished in the preceding pages. The last key assumption, perfectly symmetric
information distribution at zero cost, is crushed under the weight of bounded
rationality.
Secondly, and more importantly, it is economical for human IGUS to stick to
already learned algorithms as long as possible. A switch to new behavioural
algorithms entails information costs and possibly costs for the acquisition of
the material carriers of the new behavioural algorithms. Alternatively, we can
say that there are economies of scale from sticking to the same behavioural
algorithms as long as possible. Discussing the costs and benefits related to
algorithms brings us to the debate on transaction costs - a key concept in a
boundedly rational economic environment.
The transaction cost debate was initiated by Coase (1937) and revived by
Williamson and the New Institutional School in the 1980s (see, for instance,
Williamson, 1993 for an overview). Coase opened the debate from a supply-side
angle: Why do firms exist[5]? Why can't
individual producers trade parts of production processes among each other to
arrive at the final product? Alternatively, why can't all firms be absorbed
into one huge company? His answer was that firms are a means to circumvent the
inherent transaction costs of market-based exchanges: the cost of acquiring
information on supply and demand, the cost of negotiating a separate deal for
each transaction, the cost of uncertainty. Firms are more cost-effective than a
network of individual producers working through open-market transactions. Firms
circumvent the market because they work on the basis of contracts that fix
quantities, qualities and prices, rather than passing every time through an
open-market transaction. These contractual arrangements save transaction
costs. On the other hand, not all firms can be amalgamated into a single
company, because the amount of information required to supervise the whole
company would be overwhelming (and very costly). It could not possibly be
processed by a single IGUS and would require decentralised decision-making
anyway, eroding the presumed benefits of integration.
Coase saw the firm as a set of contracts between individual producers who
economise on transaction costs by by-passing the market. In his view, it is
indeed possible to outwit the market. The New Institutional School that
emerged in the 1980s is founded on the view that firms are a "nexus of
contracts". For an overview, see Williamson and Winter (1993). As Demsetz
(1993) and Dietrich (1994) point out, transaction cost minimisation is a
necessary but not a sufficient condition for the emergence of firms. Total firm
cost is minimised, and profit maximised, if and only if the sum of production
and transaction costs is minimised. Dietrich (1994) maintains that the firm is
not just a "nexus of contracts" but also a
"production-distribution" unit. However, since we defined production
as the application of a series of algorithms, by machines and men, we can
safely abandon that distinction. Contracts describe agreed behavioural
algorithms.
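In compact form (my notation, not Dietrich's): for a given output, the efficient organisational arrangement o* solves

\[ o^{*} = \arg\min_{o} \big[\, C_{prod}(o) + C_{trans}(o) \,\big], \]

so that minimising transaction costs alone is not sufficient: an arrangement with low transaction costs may carry higher production costs, and vice versa.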
In a way, Coase has discovered a new source of costs that was unknown to
neo-classical economics. He has added transaction costs to production costs.
Grossman and Hart (1986) show that transaction costs can never be reduced to
zero. That would amount to fixing all possibilities in contractual rules,
leaving no space for unforeseen events. But even the most elaborate contract is
necessarily incomplete as rules have been identified in a boundedly rational
environment. Because of this inherent uncertainty, transaction costs are always
positive. The Coasian layer on top of the neo-classical cost landscape can be
very thin but never be fully erased.
But its importance goes much further than just another cost layer. The
introduction of transactions costs in the economic universe has wiped out
perfect information and, most importantly, dethroned the market as the ultimate
efficient allocator: contractual arrangements can be more efficient than
open-market transactions. The neo-classical economic universe appears to be
just a special case of the Coasian Universe, with transaction (and thus
information) costs set to zero.
8. A Generalised Coase Theorem
The original firm-related Coase Theorem can easily be generalised to include
all types of groupings of individual economic agents, not only producers, and
all types of economies of scale stemming from rule-following behaviour.
Generating behavioural algorithms requires an investment in information
acquisition, processing and storage (transaction costs). The more often the
algorithm is used, the lower the amortisation costs of the initial investment
in information. (Transaction) costs are minimised when the algorithm is applied
to as many decision-making occasions as possible. This conclusion applies to
all behavioural algorithms, independent of their carrier and means of storage
(in the human brain, in a capital good, in a contractual arrangement between
individuals, in social institutions).
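The amortisation logic can be written down directly (my notation, not in the original): if establishing a behavioural algorithm requires an information investment I and each application costs c, the average cost over n applications is

\[ \bar{C}(n) = \frac{I}{n} + c, \]

which falls monotonically in n. By the same token, switching to a new algorithm with investment I' and lower per-use cost c' only pays over a horizon of m further applications when m(c - c') > I'.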
The case of rules embodied in capital goods has already been discussed in
section 3 above. The longer a producer sticks to existing equipment goods and a
consumer to acquired durables, the higher the value of economies of scale
accumulated over time. Durable goods can endlessly replicate the embodied
algorithm at no additional cost, subject to physical wear and tear only.
Switching to new durables involves transaction costs (search and learning) as
well as acquisition costs.
The extension to all types of groupings of economic agents is also easy to
understand. In line with North's (1994) definition of an institution, a social
grouping (a social structure, an institution) can be defined as any group of
individuals who adhere to an agreed set of rules. Thus households, clans and
tribes, nations, companies, sports-clubs, etc. can be called groupings or
institutions. In this extension of the Coase Theorem, groupings or social
structures are advantageous because they reduce transaction costs among members
of the group. Social structures are a way to "circumvent the market":
rather than going through lengthy discussions and open-market negotiations
(with an uncertain outcome) every time we meet somebody, we had better stick to
an agreed set of behavioural rules.
Take language, for instance, which is a set of commonly agreed rules for the
use and interpretation of vocal and written symbols, for the purpose of
communication within a group of persons. Other social groups have different
sets of rules for the same purpose of communication. Acquiring (learning) this
agreed set of rules takes time and effort, and thus entails transaction costs.
For an individual born in a community, transaction cost minimisation implies that it
costs less (in terms of opportunity cost of time) to stick to the prevailing
language and thus stay within that community. However, a company aiming at
profit maximisation may decide on exports to other communities, using a
different language. In that case, it may be worth the effort for the company
manager to learn the other language so as to be able to communicate with the
other community and sell his products there.
If we pursued this economies-of-scale approach to information costs to
its limits, we would end up with a single firm organising all production
processes (which is tantamount to a centrally planned economy) and a single
country inhabited by people speaking a single language and adhering to the same
set of social conventions, behavioural rules and consumption patterns. Imagine
the enormous savings in transaction costs on translations, international
institutions, legal cases between companies, intercultural conflicts, etc.!
Clearly, that is not the real world. Fortunately or unfortunately, there are
factors that limit integrationist trends due to economies of scale and maintain
diversity in behavioural patterns.
A social grouping is stable and sustainable if and only if none of the
members can increase his or her perceived survival probability by switching to
another set of rules. If more attractive alternative arrangements exist in
other groupings, strong defection motives will occur and possibly lead to
disintegration. There are various reasons for defection, all related to the
motive of maximising perceived survival probability which drives individuals to
minimise personal costs (or socialise personal costs) and maximise personal
benefits (or privatise social benefits). In a social interaction environment,
this can be achieved by driving a wedge between private and public costs and
benefits.
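The stability condition can be stated formally (again my notation): a grouping G with rule-set R is stable if and only if

\[ S_{i}(R) \;\geq\; S_{i}(R') \qquad \text{for all } i \in G \text{ and all accessible alternative rule-sets } R', \]

where S_i denotes i's perceived survival probability - in effect a no-defection condition of the Nash type.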
Permeable and/or incompletely defined rules-sets facilitate the emergence
and widening of the wedge and trigger free-rider behaviour. Individuals can get
away with slightly or totally disobeying the rule without incurring individual
costs that exceed their individual benefits. Rules are often only approximate
and provide only a rough separation between order and disorder. They need
interpretation and adaptation to specific situations. A sub-set of society may
be designated or designate itself to manage the rules: the bureaucrats, the
managers, the priests. When the transaction costs of policing the
rules-set exceed the savings in transaction costs from adhering to the rules,
then the rules cease to be useful to the group and it disintegrates.
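At the group level this yields a simple viability condition (my notation): with P(R) the policing costs of rule-set R and T(R) the transaction-cost savings it generates for the members, the grouping survives only while

\[ P(R) \;\leq\; T(R). \]

Permeable or incomplete rules raise P and erode T through free-riding, pushing the grouping towards this boundary.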
It can be concluded that a Generalised Coase Theorem (a) not only provides a
more realistic basis than the neo-classical paradigm for explaining a wider
range of individual (economic) choice behaviour but (b) also explains the emergence
of rules, social structures and institutions as substitutes for negotiated
market transactions. It lays the basis for a theory of institutions.
References:
Aghion, P. and Howitt, P., "A model of growth through creative
destruction", in "Technology and the wealth of nations", D.
Foray and Ch. Freeman, editors, OECD, London, 1993.
Arrow, K. J., "The economic implications of learning by doing",
Review of Economic Studies, June 1962, vol 29, p. 385-406.
Arthur, W.B. "Competing technologies, increasing returns and lock-in by
historical events", Economic Journal, vol 99, March 1989, p. 116-131.
Becker, G.S. "A note on restaurant pricing and other examples of social
influences on price", Journal of Political Economy, vol 99 nr 5, 1991.
Becker, G.S., and Stigler, J. "De gustibus non est disputandum",
American Economic Review, May 1976.
Bikhchandani S., Hirshleifer D. and Welch I. "A theory of fads,
fashion, custom and cultural change as informational cascades", Journal of
Political Economy, vol 100 nr 5, 1992.
Damasio, A., "Descartes' error: emotion, reason and the human
brain", Avon Books, 1994.
David, P.A., "Path dependence and predictability in dynamic systems: a
paradigm for historical economics", in "Technology and the wealth of
nations", D. Foray and Ch. Freeman, editors, OECD, London, 1993.
Delorme, R. and Dopfer, K. (editors) "The political economy of
diversity: evolutionary perspectives on economic order and disorder",
Edward Elgar, Aldershot, 1994.
Demsetz, H., "The theory of the firm revisited", in Williamson, O.
and Winter, S., editors, "The nature of the firm", Oxford University
Press, 1993.
--- "Towards a theory of property rights", American Economic
Review, 57/1967.
Dietrich, M., "Transaction cost economics and beyond", Routledge,
1994.
Dittmar, H., "Material possessions as stereotypes: material images of
different socio-economic groups", Journal of Economic Psychology, 1994,
vol 15, p. 561.
Dosi G. and Nelson R. "An introduction to evolutionary theories in
economics", Journal of Evolutionary Economics vol 4 nr 3, September 1994.
Gell-Mann, M., "The quark and the jaguar", Freeman, 1995.
Grossman, S. and Hart, O. "The costs and benefits of ownership",
Journal of political economy, 94/1986.
Hart, O., "Incomplete contracts and the theory of the firm", in
Williamson, O. and Winter, S., editors, "The nature of the firm",
Oxford University Press, 1993.
Hodgson, G., "The nature of selection in biology and economics",
in "Rationality, institutions and economic methodology", U. Maki
et.al., editors, Routledge, 1993.
Langlois, R. and Csontos, L., "Optimization, rule-following, and the
methodology of situational analysis", in "Rationality, institutions
and economic methodology", U. Maki et.al., editors, Routledge, 1993.
Lea, "On socialisation", special issue of the Journal of
Economic Psychology, 1990.
Lesourne, J. "Chance, necessity and human will power: a theoretical
paradigm for evolutionary economics", in Delorme, R. and Dopfer, K.
(editors) "The political economy of diversity: evolutionary perspectives on
economic order and disorder", Edward Elgar, Aldershot, 1994.
Lucas R. E., "On the mechanics of economic development", Journal
of Monetary Economics 22 (1988), p 3-42.
Martens, B. "The introduction of complexity: towards a new paradigm in
economics", forthcoming in the proceedings of the "Einstein meets
Magritte" conference, Free University of Brussels, 28 May 1995. Available
at http://cleamc11.vub.ac.be/clea/
Matthews, R. "The economics of institutions and the sources of
growth", Economic Journal, vol 96, 1986, p. 903-918.
Nelson R. and Winter S. "An evolutionary theory of economic
change", Harvard University Press, Cambridge MA, 1982.
North, D.C. "Economic performance through time", American Economic
Review, June 1994
North, D.C. "What do we mean by rationality", Public Choice vol
77, 1993, p.159-162.
Pasinetti, L., "The economic theory of institutions", in Delorme,
R. and Dopfer, K. (editors) "The political economy of diversity:
evolutionary perspectives on economic order and disorder", Edward Elgar,
Aldershot, 1994.
Pollak, R., "Interdependent preferences", American Economic Review, June
1976.
--- "Price dependent preferences", American Economic Review, March
1977.
--- "Endogenous tastes in demand and welfare analysis",
American Economic Review, May 1978.
Romer, P. "Increasing returns and long-run growth", Journal of
Political Economy, vol 94 nr 5, 1986, p. 1002-1037.
--- "Growth based on increasing returns due to specialisation",
American Economic Review, vol 77 nr 2, May 1987.
--- "Are non-convexities important for understanding growth?",
American Economic Review, May 1990.
--- "Endogenous technological change", Journal of Political
Economy, vol 98 nr 5, 1990, p. S71-S102.
--- "Ideas gaps and object gaps in economic development", Journal
of Monetary Economics, vol 32, 1993, p. 543-573.
--- "The origins of endogenous growth", Journal of Economic
Perspectives, vol 8 nr 1, winter 1994, p. 3-22.
Scazzieri, R., "A theory of production", Oxford University Press, 1993.
Schumpeter, J. "The theory of economic development", Oxford
University Press, 1934.
Skaperdas, S., "Conflict and attitudes toward risk", American
Economic Review, May 1991.
--- "Cooperation, conflict and power in the absence of property
rights", American Economic Review, September 1992.
--- "Risk aversion in contests", Economic Journal, July 1995.
Solow, R. "A contribution to the theory of economic growth",
Quarterly Journal of Economics, February 1956, vol 70, p. 65-94.
--- "Technical change and the aggregate production function",
Review of Economics and Statistics, August 1957, vol 39, p. 312-320.
Stern, N. "The determinants of growth", Economic Journal, vol 101,
January 1991, p. 122-133.
Ulph D. and Owen R., "Racing in two dimensions", Journal of
Evolutionary Economics vol 4 nr 3, September 1994.
Williamson, O. and Winter, S., editors, "The nature of the firm",
Oxford University Press, 1993.
Wilson, E. "Sociobiology", Harvard University Press, 1975.