



THE PRECAUTIONARY PRINCIPLE AND THE SOCIAL STANDARD




Odin K. Knudsen1 and Pasquale L. Scandizzo2




           Abstract


Scientific progress offers tremendous potential benefits to society but also
presents risks. While research focuses on how to realize the benefits of any
new technology, the outside community fears the consequences that
technology may inadvertently have on social goods such as the environment,
public health and security. Balancing the benefits of the progress of science
against the risks associated with its application is one of the major public
policy challenges of the 21st century.
One approach to handling public risks arising from scientific uncertainty is
the application of the precautionary principle. In the strongest form of this
principle, a technology should not be advanced until its risks are fully known
and mitigated. This “do-no-harm” approach places the burden of proof on
the implementation of the technology. A weaker version of the principle
proposes that the risks be assessed and evaluated against the benefits
before proceeding with the technology, but that preventive action should not
be delayed merely on grounds of scientific uncertainty.
While the stronger version of the precautionary principle has been vigorously
supported by many environmental movements, critics argue that its
application would stifle technological change. Although the weaker version
has more support among policymakers, its critics argue that it does not go far
1
    The World Bank
2
    The University of Rome, “Tor Vergata”


enough: even if the probability of an adverse outcome is small, the
consequences would be unacceptable if some threshold of impact were surpassed.
In this paper, we argue that the precautionary principle is an extension of the
scientific method in the Popperian tradition and has a precedent in hypothesis
testing. Under this framework, we then explore an approach that captures the
essence of the weaker precautionary principle but also accounts for the
“unacceptable” outcome through the use of a social standard or threshold of
harm. Under this methodology, a social standard is established and
accounted for in the cost-benefit analysis. The existence of the social
standard creates an additional cost or benefit in the assessment of a project.
We illustrate the methodology with a discussion of two cases: “mad cow”
disease and the regulation of carbon emissions.




      1.    The Precautionary Principle: a Controversial Interpretation


      Highly controversial in both its formulations and its interpretations, the
precautionary principle has become a source of dispute in science, politics
and international trade. The concept was first expressed for European
environmental policies in the late 1970s and was gradually absorbed by
European law to the point of becoming the main principle of environmental
regulation in the Treaty on European Union (1992)3.
       As a guideline for lawmakers and government officials, the
precautionary principle has a German origin (Vorsorgeprinzip) and has
been extensively used in Germany since 1980 as a basis for environmental
legislation. However, Majone (2002) reports that a leading legal expert has
found no fewer than 11 different interpretations of the principle in German
law. The use of the principle by other European elites has further broadened
its interpretation and scope for law making and as a yardstick for court
decisions. In this respect, the principle does not have direct quantitative
implications. In court decisions, for example, it can be seen as a criterion to
decide whether responsibility for the liability created by implementing a
given technology under scientific uncertainty can be reasonably assigned on
the basis of lack of prudence or failure to prevent possible dangers.
       Extended to environmental health policies and research and
development guidelines, the principle has stirred endless controversy, in part
because of its ambiguities and in part because it is allegedly being used as an
instrument of trade protectionism. The fuzzy nature of the principle lies in
several ambiguities of its formulation. The basic ambiguity derives from its
mixed potential nature: on the one hand it seems to evoke a basis for
decision making, while on the other hand it suggests a norm. For example,
the World Charter for Nature (United Nations, 1982) states that “where
potential adverse effects are not fully understood, the activities should not
proceed.” In this case, which is one of its strongest formulations, the
principle can be interpreted as a prudential indication or as the legal basis for
prohibition. Interpreted literally, since information is never complete and
certain, this prescription would practically exclude any new action or

3
    See http://europa.eu.int/en/record/mt/top.html


technology. Furthermore, the explicit ‘norm’ in this case is zero or no harm,
which places no weight on balancing relative benefits. In its weaker forms,
the principle gives no guidance on how a norm could be determined in cases
where potential benefits could be high and some harm or losses could be
tolerated.
       The Rio Declaration (United Nations, 1992) states that lack of “full
scientific certainty shall not be used as a reason for postponing cost-effective
measures to prevent environmental degradation”. This statement seems to
suggest that preventive measures should be taken without delay, even when
insufficient scientific evidence is available to indicate which type of
prevention can be implemented and how. Of course, the statement could be
interpreted as a principle to invoke a temporary suspension, which in turn
would imply that prudence is exercised through the so-called learn-and-then-act
principle. As Gollier (2001) persuasively argues, however, if the principle is
interpreted as advocating a commitment of resources to prevention, i.e. as a
separate undertaking leading to the implementation of a series of projects
or a program of prevention, then the learn-and-then-act principle would
seem to run counter to the principle of prudence. In this case, however, it could
be argued that the new technology, which was the original cause for
concern, should also be stopped on the basis of the learn-and-then-act principle.
Even though the interpretation is ambiguous, therefore, the precautionary
principle can be read as ultimately discouraging action taken without sufficient
information.
        A further cause of ambiguity lies in what might be called the lack of
methodological sharpness of a concept that is mainly used to generate
policy statements and legal quarrels rather than guidelines for action. A
“precautionary stand” could perhaps be identified with a conservative
attitude toward new actions and undertakings, but how conservative should it
be? Without a quantitative criterion for prudence, one clearly has no
guidance beyond common sense. For example, a 1990 declaration on the
protection of the North Sea calls for action to be taken even if there is “no
scientific evidence to prove a causal link between emissions [of wastes onto
ocean waters] and effects”.
      Impressive as they may appear to the general public and to dedicated
environmentalists, these types of argument are regarded with great
perplexity by many scientists when used to advocate halting the application of
a new technology on the basis of the precautionary principle. According to
David Appell (2001), it is dubious whether the precautionary principle is
consistent with science, which after all can never prove a negative. “A lot of
scientists get very frustrated with consumer groups, who want absolute
confidence that transgenic crops are going to be absolutely safe”, says
Allison A. Snow, an ecologist at Ohio State University. “We don't scrutinize
regular crops, and a lot of inventions, that carefully”.
       In a well-documented article in Scientific American, Appell
reports, however, some favorable opinions from leading scientists who don't
see the precautionary principle as antithetical to the rigorous approach of
science. “The way I usually think about it is that the precautionary principle
actually shines a bright light on science”, states Ted Schettler, science
director for the Science and Environmental Health Network (SEHN), a
consortium of environmental groups that is a leading proponent of the
principle in North America. According to Carolyn Raffensperger, SEHN’s
executive director, on the other hand, the precautionary principle calls into
question the “commodification” of modern science: it should be seen as
calling for a new kind of science, more responsive to societal needs for the
prevention of disease and the maintenance of the environment.
       Raffensperger and other scientists also see an important connection
between the precautionary principle and the need for researchers to raise
their social consciousness. As in the last chapter of Monod’s (1991) famous
book “Chance and Necessity”, the precautionary principle appears to invoke
a sense of the public good and of ethics that should have priority over purely
technical considerations by scientists.
       Many difficulties in dealing with the precautionary principle from an
economic point of view have to do with the fact that principles are not
readily incorporated in economic models based on different principles. The
attempts by economists to tackle the principle are mainly based on
extensions of cost-benefit analysis and other techniques of decision making
under uncertainty (see, for example, Gollier (2001), Majone (2002)). These
attempts, however, ignore the fact that the philosophical basis of the
principle does not conform to these models, because it is more deontological
than consequentialist in nature (Knudsen and Scandizzo (2005)). In other
words, the principle does not claim to be a guide for action to select a
decision on the basis of the appraisal of its foreseeable consequences. On the
contrary, it tends to identify a course of action that “is right” regardless, to
an extent, of its immediate consequences and whose effects can only be
appraised over the long run, once its application has been sufficiently
extensive and courageous.




      2.    Precaution and Prudence in Hypothesis Testing



        Recalling history, and especially the position that has made Karl
Popper (1902-1994) the point of reference of modern science, one can
identify the principle of “falsificationism” as one form of prudent behavior
that translates itself into a precautionary principle of a sort. According to
Popper (1959), we cannot conclusively affirm a hypothesis, but we can
conclusively negate it. The Popperian approach, in fact, in the presence of a
scientific hypothesis, “shifts the burden” of proof to those who claim that
the hypothesis is true, but does so in a rather subtle manner. Any
hypothesis, in fact, it is argued, can never be proven true, because new
evidence may always force its abandonment, but it can be demonstrated
false. This suggests a prudent strategy for deciding whether or not a
hypothesis is worth adopting: we always confront a positive hypothesis
with the so-called “null” hypothesis, and the latter is given priority.
If the evidence is sufficiently strong that the null hypothesis may be rejected,
one might say that there is some degree of corroboration for the hypothesis in
question. This principle is very general, in the sense that it appears to
support prudent decision making even outside the pure realm of research
and science. In project evaluation, for example, the “best practice”
methodology recommended by the classical manual of Little and Mirrlees
(1969) is based on the idea that a project should be undertaken only if one
fails to reject the hypothesis that the “situation without the project” is better
than its “with the project” alternative. There is wide consensus among
practitioners, furthermore, that such a comparison should be performed by
weighing the evidence against the project more heavily than the evidence in
favor of the project.
       Does the precautionary principle fall within the Popperian approach?
To the extent that it may be interpreted as requiring that the burden of
proof be borne by those who hypothesize that an action is harmless, the
precautionary principle appears indeed a simple extension of hypothesis
testing. In other words, just as in pure scientific endeavor a hypothesis may
be interpreted as a perturbation of an existing paradigm of knowledge, the
“null” hypothesis is preferred to its alternative, and a simpler hypothesis is
preferred to a more complex one, so in the case of research, investment or
production it is the proposed action that perturbs the status quo and is
considered more complex. As a consequence, the hypothesis that an
unsatisfactory status quo may be modified without danger by a proposed
action should be subjected to falsification, and only after sufficiently
extensive attempts at falsification have failed may it be considered
corroborated enough to justify undertaking the action. Conversely, if it is
the status quo that is suspected to be dangerous, the hypothesis to be
falsified would be that it should prevail against an appropriate action that
would remove or reduce the danger.
       Consider more closely the statistical procedure that translates the
Popperian prescription into a decision algorithm. In each problem
considered, the question of interest is cast in the framework of two
competing claims: the null hypothesis (indicated with H0) and the
alternative hypothesis (H1). Between these two competing claims, special
consideration is given to the null hypothesis.
      If the evidence collected aims to disprove or reject a particular
hypothesis, we give priority to the null hypothesis, in the sense that it cannot
be rejected unless the evidence against it is sufficiently strong. We thus
formulate the problem in a way that assigns the burden of proof to the
hypothesis that is put forward. For example, if a new drug is being tested
and we would like to know whether it is effective against a certain disease,
we formulate the null hypothesis H0: the drug is no more effective than a
placebo, against H1: the drug is more effective than a placebo. The
experiment is thus considered in favor of the adoption of the drug if the
evidence that it provides against the null hypothesis is sufficiently strong. It
is clear that the precautionary principle can be interpreted within this context
as suggesting a similar strategy: systematically take as the null hypothesis
that a new technology may be dangerous as compared to the existing one
(the status quo) or to a next-best alternative. We would say, for example,
H0: there is more danger in using GMOs than in the traditional technology,
against H1: there is no more danger than in the traditional technology. Thus,
the choice of the dangerous endeavor for the null hypothesis implies that
danger is somewhat the “natural” state of the world, while safety is the
exception.
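       To illustrate this set-up concretely, consider a minimal sketch in Python
(ours, not from the paper) in which the null hypothesis is that a hypothetical
new technology's adverse-event rate is at least as high as a tolerated
benchmark, and adoption requires rejecting that null at a stringent
significance level. All numbers are assumed for illustration.

```python
# Minimal sketch of the precautionary test set-up described above.
# H0: adverse-event rate p >= p0 (the technology is as dangerous as tolerated)
# H1: p < p0 (the technology is safer); adoption requires rejecting H0.
from scipy import stats

p0 = 0.05                 # tolerated adverse-event rate (assumed benchmark)
n, adverse = 400, 12      # hypothetical trial: 400 exposures, 12 adverse events

# One-sided exact binomial test of H0: p >= p0 against H1: p < p0
result = stats.binomtest(adverse, n, p0, alternative="less")
print(f"observed rate = {adverse / n:.3f}, p-value = {result.pvalue:.4f}")

alpha = 0.01              # stringent significance level: the precautionary stance
if result.pvalue < alpha:
    print("Reject H0: evidence of safety is strong enough to adopt.")
else:
    print("Do not reject H0: suspend adoption until more evidence accumulates.")
```

Note that the burden of proof falls entirely on the proponent: with too few
observations the null of danger is simply not rejected, however favorable the
point estimate.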
       But it has been argued (Gollier, 2001) that not taking a decision is
itself a decision, since the status quo may be as dangerous or even more
dangerous than the action proposed. This is a well-known problem in project
evaluation, where the general prescription, in fact, is not to use the status
quo as the “alternative without the project”, but the situation resulting
from the most likely action or set of actions that would occur in the absence
of the project. In the context of hypothesis testing, if the null hypothesis
is that the proposed action is no better than inaction, the implicit assumption
is that the situation resulting from not adopting the action is less costly and
less volatile than the alternative proposed. In other words, a
concept of an evolving status quo is used rather than a static one.
       In some cases, however, a concept of the status quo (even of an
evolving one) may not be meaningful, since the decision maker may be
forced to select one of two alternatives which are both costly and volatile. If
the option to wait to gain more information is not available, the two
alternatives can still be compared by assigning the burden of the proof to the
action that appears, at least in principle, more dangerous.
       The special consideration that we give to the null hypothesis thus
embeds a precautionary principle, if we use it systematically to shift the
burden of proof to what is proposed as theory or action. The elements of
prudence embedded in this strategy are two. First, we create a model of the
world where all choices are uncertain and dangerous: the null
hypothesis states that the endeavor examined is unacceptably dangerous,
whereas the alternative hypothesis states that it can be undertaken with
reasonable confidence in its safety if and when the null is rejected. Second,
because the final conclusion, once the test has been carried out, is always
given in terms of the null hypothesis, we either “reject H0 in favor of H1” or
“do not reject H0”; we never conclude “reject H1”, or even “accept H1”. If
we conclude “do not reject H0”, this does not necessarily mean that the null
hypothesis is true, i.e. that we should reject the action (or the idea) proposed.
It only suggests that there is not sufficient evidence against H0 in favor of
H1, and that until such evidence can be marshalled, it is preferable to suspend
the adoption of H1. Rejecting the null hypothesis only suggests that the
alternative hypothesis may be true.
       Implicit in the decision to possibly reject a hypothesis are two types of
errors and the losses associated with each error. Statistical testing has refined
this decision through a rather sophisticated numerical machinery, basically
due to the combined work of Neyman and Pearson (1928) and of R. A.
Fisher (1949). In weighing the evidence against a hypothesis to falsify, the
theory of statistical testing tells us that one can incur two classes of errors:
an error of type 1, when a true hypothesis is rejected, and an error of type 2,
when one fails to reject a false hypothesis. Given a set of observations, the
weight given by the decision-maker to the two types of error determines the
dividing line between the observations that are considered in favor (or, more
precisely, not against) of the hypothesis tested and those considered against
it. This dividing line is really a standard: it is more conservative, and
therefore more precautionary in nature, the higher the weight given to the
error of type 2, the failure to reject a false hypothesis. For example, suppose
that one wants to limit the risk of failing to reject a hypothesis when it is
false to a given probability. In this case, one will interpret the evidence
against the hypothesis as a proof of its falseness whenever an alternative
interpretation would cause the risk of failing to falsify a false hypothesis to
exceed the amount fixed on a priori grounds. Similarly, if one does not want
to risk failing to prevent a damage, she will interpret the evidence of danger
as a prescription for abstaining from action whenever her information is not
sufficiently positive to deny the risk of damage with a confidence large
enough on the basis of prior criteria.
       In hypothesis testing, a type 1 error occurs when the null hypothesis is
rejected when it is in fact true; that is, H0 is wrongly rejected. In the
example of a new technology, the null hypothesis might be that the new
technology is no better, on average, than the current technology; that is,
H0: the new technology does not produce a significantly better result than
the old one on average. A type 1 error would occur if we concluded that the
new technology produced a different effect when in fact it did not. A type 2
error, on the other hand, arises when we fail to reject the null hypothesis
when, in fact, we should reject it. Applied to the precautionary principle,
this is the case of a new technology that should be adopted but is not,
because the evidence of its safety is judged insufficient to reject the (null)
hypothesis that it may be dangerous.
       The prudence (i.e. the precautionary stance) that characterizes
hypothesis testing implies that a type 1 error is more serious, and therefore
more important to avoid, than a type 2 error. As a consequence, the test
procedure is formulated in a way that guarantees a “low” probability of
wrongly rejecting the null hypothesis. While the probability of a type 2 error
is generally unknown, the probability of a type 1 error can be precisely
computed and is referred to as the significance level of the test. This, in turn,
creates the possibility of dividing the information into two mutually exclusive
subsets (one of which may be empty). These are: (i) the region of
acceptance, defined as the subset where the evidence is against the null
hypothesis and is therefore favorable to the alternative proposed; and (ii) the
region of rejection, i.e. the subset where the evidence favors the null and the
alternative is rejected. Note that acceptance and rejection here refer to the
proposed action, not to the null hypothesis. These two regions are separated
by a threshold (i.e. a dividing line defined in terms of an index of the data
used to test the hypothesis), which may be used to determine the significance
level of the test. Either the significance level or the threshold has to be
established on the basis of prior considerations. They determine one another
and reflect how strongly the scientific community or society feels that type 1
risk should be avoided.
       When uncertainty is high because of lack of information and the
variability of the phenomenon studied, scientific prudence suggests that
judgement should be suspended and the proposed action forestalled. For any
given set of data, type 1 and type 2 errors are inversely related: the
smaller the risk of one, the higher the risk of the other. As a consequence,
the significance level (i.e. the preset probability of a type 1 error) at
which one decides to operate is in practice a threshold that determines the
cost of following the procedure in terms of type 2 error.
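       The inverse relation between the two error probabilities can be seen in a
small numerical illustration (ours, with assumed parameters): for a one-sided
test on a normal mean, tightening the significance level mechanically raises
the probability of type 2 error.

```python
# Illustration of the alpha/beta trade-off for a one-sided test on a normal
# mean: H0: mu = 0 versus H1: mu = 1, with sigma = 1 and n = 25 (all assumed).
import numpy as np
from scipy import stats

mu1, sigma, n = 1.0, 1.0, 25
se = sigma / np.sqrt(n)                    # standard error of the sample mean

for alpha in (0.10, 0.05, 0.01):
    threshold = stats.norm.ppf(1 - alpha, loc=0.0, scale=se)  # rejection threshold
    beta = stats.norm.cdf(threshold, loc=mu1, scale=se)       # P(fail to reject | H1)
    print(f"alpha = {alpha:.2f}: threshold = {threshold:.3f}, beta = {beta:.4f}")

# Lowering alpha (more protection against type 1 error) shifts the threshold
# outward and raises beta: for fixed data, the two risks move in opposite
# directions, which is exactly the cost of precaution discussed in the text.
```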
       The quantitative choice requires setting a level for the probability of
type 1 error, i.e. for the risk of falsely rejecting the null hypothesis and, as a
consequence, possibly being induced into the proposed course of action. The
level of this probability is called the significance level of the test. It
determines both the probability of type 2 error (the error of not rejecting the
null hypothesis when it is false) and the operational impact of the procedure
in discriminating among competing actions or between action and inaction.
In practice, it corresponds to a dividing line between the observations in
favor of and against the proposed action. Such a dividing line is a standard
that depends on the consensus of the community that utilizes and supervises
the testing procedure. In other words, it represents a standard on the basis of
which the decision-maker may decide whether or not the data support the
call for action. When the hypothesis test concerns a proposed new action or
investment that poses possible adverse impacts, setting the level of type 1
error is in essence equivalent to applying the precautionary principle.



3.    The Precautionary Principle as a Social Standard


      To be consistent with the precautionary principle, the primary burden
on the decision-maker should be to reject the (null) hypothesis that the
action proposed introduces some new and significant dangers with
respect to the natural evolution of the world. But what is a significant level
of danger? The definition of significance is linked to the probability of type 1
error, and this in turn depends on (or determines) a standard accepted by the
members of the community organized around the rational rule of the test.
This standard, however, cannot reasonably be expected to be developed for
each individual project. Planners and regulators, as well as those who
appraise and evaluate projects, need a more general way to address the issue
of precaution and hypothesis testing.
       In order to develop a feasible solution to this problem, we formalize
the social standard (Scandizzo and Knudsen, 1980, 1996) using
the concept of a policy function. This concept does not require interpersonal
comparisons of utilities or incomes since, by directly considering social
targets and instruments, it implicitly assumes that there is a consensus on the
broad costs of the instruments and the benefits of the targets attached to
specific achievements and/or actions. This is the case, for example, of the
quadratic policy loss function, where benefits and costs are measured as
quadratic “distances” from given targets (for benefits) or initial positions
(for costs). In our case, we can assume, more generally, that social
well-being depends on the stringency of the social standard that is set as a
target to implement, as well as on the implementation difficulties created by
setting and enforcing that target. Thus, the lower the threshold of danger
below which society proposes to keep everyone, the higher is social welfare
but, at the same time, the larger is the gap between the present situation and
what would be desirable in the light of the social standard. In other words,
the more stringent the social standard, the more distant the current situation
is from this social ideal, and the more costly, potentially, is the achievement
of the social standard.
       The trade-off between the stringency of the standard and the size of
the expected distance between the threshold and the actual outcomes
captures in general terms a relationship that we are often confronted with in
decision-making. This is the case, for example, of a lower threshold of
intervention to sanction pollution (a more stringent standard to avoid
possible adverse effects) versus the increase in enforcement or compliance
costs that this implies. In statistics, as we have seen before, a more stringent
criterion for type 1 error (a larger rejection region) tends to generate a higher
level of type 2 error (rejecting what should be accepted). A larger rejection
region is equivalent to a more stringent standard. This reduces the cost of
taking decisions that would turn out to be wrong, but increases the cost of
not taking decisions that would turn out to be right.


       To give this description more rigor, we specify a social value
function as follows:

       (1)    $L = L(R, T), \qquad T \ge 0$

where R is the value of damage that is considered the maximum acceptable
by society, and T is the expected gap between the actual level of damage and
the standard, taken over the states of the world where the damage is
unacceptably high.4
      The function is increasing both in the standard R (i.e. the level of
damage that prompts public action) and in the gap T: thus, if we apply it to
an undesirable variable, such as an indicator of danger, it can be considered
a loss function. That is, the more slack the standard, the greater are the
potential losses to society from adopting a dangerous course of action. At the
same time, the smaller is the gap between the standard and the state that can
be expected to be achieved, and the lower, therefore, are the expected costs
that society must devote to meeting the standard. These costs may be just the
opportunity costs of the actions that have to be foregone to abide by the
standard, as, for example, when a technology is not adopted. Alternatively,
as in the case of controlling carbon emissions through costly technology,
they may take the form of cash outlays that have to be borne to force the
actions needed to satisfy the standard.
       Defining M as the maximum value of damage of an action, we can
imagine that all actions that may bring damages between R and M are
socially unacceptable or unsustainable. An action resulting in a damage
above the social standard causes a loss as a function of the value of the gap.


       More specifically, the expected gap T can be defined as follows:

       (2)    $T = \int_R^M y\,dF(y) - R\,(1 - F(R)) = E[y] - \Bigl(R - \int_0^R F(y)\,dy\Bigr)$,




4
 L(·) is assumed to be a well-behaved function, with $\partial L/\partial R > 0$,
$\partial L/\partial T > 0$, $\partial^2 L/\partial R^2 > 0$, $\partial^2 L/\partial T^2 > 0$.


where the second expression has been obtained by integrating the first
expression by parts, and F(y) is the cumulative probability distribution of
the damage y. It is important to realize that, as expressions (1) and (2) show,
while the gain from adopting a more stringent standard accrues to the whole
of society, the cost depends only on those states of nature that prompt the
enforcement action (or, equivalently, where the damage is effectively
produced). Using the definition in (2) and differentiating with respect to R,
we obtain:


       (3)    $\dfrac{dL}{dR} = \dfrac{\partial L}{\partial R} + \dfrac{\partial L}{\partial T}\dfrac{dT}{dR} = \dfrac{\partial L}{\partial R} - \dfrac{\partial L}{\partial T}\,(1 - F(R))$
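       For completeness, here is the short derivation (ours) behind the second
expression in (2) and the derivative used in (3), obtained by integrating by
parts and using $E[y] = M - \int_0^M F(y)\,dy$:

```latex
\int_R^M y\,dF(y) = \bigl[y\,F(y)\bigr]_R^M - \int_R^M F(y)\,dy
                  = M - R\,F(R) - \int_R^M F(y)\,dy ,
\quad\text{so}\quad
T = E[y] - \Bigl(R - \int_0^R F(y)\,dy\Bigr)
\quad\text{and}\quad
\frac{dT}{dR} = F(R) - 1 .
```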




       For example, suppose that we confront the problem of adopting or not
adopting an action (a project or a technology) that may be potentially
harmful, on the basis of the evidence available. Assume that the danger
belongs to a known general class (for example, carbon emissions). We assume
two possible states of nature: $\theta_1$ = unsustainable danger and $\theta_2$ = safety or
sustainable danger. Evidence on the states of nature is summarized in one or
more observations of the random variable y. As in hypothesis testing, we
define two possible actions: $a_1$ = do not adopt if the observation of the
random variable falls in the danger zone, i.e. y > R; $a_2$ = adopt in the
alternative case, i.e. $y \le R$. We ask ourselves what is the value of the
rejection threshold (the social standard) R that minimizes the loss function
specified in (1).
       By equating to zero the first derivative of the function L(·), as given
in (3), we obtain:
       (4)    $\Pr(y > R) = 1 - F(R) = \dfrac{\partial L/\partial R}{\partial L/\partial T}$




       Expression (4) states that, in order to minimize the loss, the
probability of being above the standard should equal the ratio between the
expected marginal gain from tightening the standard (the numerator in (4))
and the expected marginal loss arising from the costs of enforcing the
standard. These costs arise because the standard will not be automatically
enforced: a certain number of outcomes will tend to deviate from it, yielding
the gap T (the denominator in (4)). Thus, if this ratio is greater than or equal
to one, the standard should be tightened up to the point where this
benefit-cost ratio equals the probability that the random variable falls outside
the zone of acceptance. As we tighten the standard, the probability that any
outcome will fall in the non-acceptance zone will increase. In general, we can
reasonably expect the benefit-cost ratio to decrease, since expected benefits
will decrease and expected costs will increase as we make the standard harder
and harder to satisfy. If the ratio in (4) remained greater than or equal to one
as we tightened the standard, we would be led to the extreme precautionary
prescription that the standard should be set in such a way that the probability
of observing an event not complying with the standard equals one. In this
case, whatever the observation y, we would always reject the technology.
This is equivalent to the strongest form of the precautionary principle.
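       To make the first-order condition in (4) concrete, the following sketch
(ours, with an assumed loss function and damage distribution) solves it
numerically and checks it against the closed form derived in Box 1 below.

```python
# Solve the first-order condition (4), 1 - F(R) = (dL/dR)/(dL/dT), for the
# linear loss L = A + a*R + c*T and damage y uniform on [0, M] (all assumed).
from scipy.optimize import brentq

M, a, c = 10.0, 1.0, 3.0             # maximum damage and marginal weights

F = lambda R: R / M                  # uniform CDF on [0, M]
foc = lambda R: (1 - F(R)) - a / c   # here dL/dR = a and dL/dT = c

R_star = brentq(foc, 0.0, M)         # root of the first-order condition
print(f"optimal standard R* = {R_star:.3f}")
# Analytic check: Box 1's quadratic example with b = 0 gives
# R = (c - a)/(c - b) * M = (3 - 1)/3 * 10 = 6.667, matching R*.
```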


      Since the probability in (4) must be less than one, the ratio
$\dfrac{\partial L/\partial R}{\partial L/\partial T}$ at the optimum point will be
lower than one. This simply means that saving costs by relaxing the standard
is a poor substitute for having a higher degree of security or, in more general
terms, that a more than proportional reduction in type 2 error is needed to
compensate for an increase in type 1 error. As the stringency of the standard
increases, its marginal benefit will decrease, while the marginal cost of
upholding it will increase.
       As shown in detail in the appendix, the marginal gain from tightening
the standard is a function of the weight assigned to type 1 error, i.e. the
error of adopting the wrong decision by being too “lax” with the standard.
For the same reasons, the marginal cost of enforcing the standard is a
function of the weight attached to type 2 error, i.e. of rejecting an action that
should have been accepted, which, as a consequence, generates costs in the
form of lost opportunities, regret, and enforcement costs. For a given level of
social welfare, the trade-off between the two types of errors will be
represented by an indifference curve, which will be convex toward the origin
and whose slope at any point will equal $\dfrac{\partial L/\partial R}{\partial L/\partial T}$.


       Figure 2 shows the problem graphically, as the choice of a combination
of values for the standard and the expected gap. Each indifference curve
depicts the combinations of these two variables for a given level of social
loss, and such a level is lower the closer the curve is to the origin. A higher
slope for the curve implies a more prudent value judgement on the need
to reduce danger and, therefore, a higher weight on type 1 error as compared
to type 2 error. The convex frontier between R and T represents the
relationship defined by equation (2). Its slope is given by the probability of
an outcome falling in the rejection area, $1 - F(R)$, so that, for R = 0, its slope
is always 1. Choosing the standard is equivalent to picking a point along the
lowest indifference curve achievable under the constraint given by the
definition of the expected gap in equation (2). Uncertainty (for example, in
the form of increased variance) in the outcome y has the effect of increasing
social losses. As variance increases, the frontier between R and T expands
outward, driven by the probability density function of y, but always crossing
the x axis at E(y), as is easily seen by putting R = 0 in equation (2). The
optimum standard R* and corresponding expected gap T* increase to R**
and T**, indicating that as uncertainty increases, the optimum standard
becomes less stringent in order to control the increase in the expected gap in
achieving it. As a consequence, the losses to society increase, as shown by
the outward shift of the loss curves.
       It is important to recognize that the choice of the standard and, by
implication, of the acceptance and rejection zones implies both a value
judgement and a probabilistic appraisal. The value judgement concerns the
nature and the size of the possible danger and, as a consequence, the
importance of type 1 and type 2 errors. Thus, the possibility of a catastrophic
event or a strong commitment to the integrity of the environment will
translate itself into a higher marginal value assigned to the standard (i.e. a
higher weight on type 1 error vis-à-vis type 2 error). The probabilistic
assessment, on the other hand, is necessary to determine the value of the
standard, given its relative marginal value, from the size of the admissible
range of error (the gap between the standard and the expected outcome).




       In many cases, the value judgement supersedes the probability
assessment, in the sense that the marginal values are driven by strong
convictions or fears, while little or nothing is known about the probability
distribution of the relevant variables. When catastrophic losses are feared,
for example, we may expect a very high ratio between marginal benefits
and costs. The resulting standard will be very stringent under a wide range
of possible probability distributions, so that lack of knowledge about the
latter will have little consequence for the recommended course of action. On
the other hand, if the ratio between the marginal benefit and the marginal
cost is small, the probabilistic assessment becomes more crucial. In other
words, the farther we move from the stronger form of the precautionary
principle, the more important it becomes to appraise the facts, rather than to
impose value judgements on one's actions.
       In sum, we can see the precautionary principle as arising from a
continuum. At one extreme, the precautionary principle is present in its
strong form: danger is seen as an overriding issue and probability is
irrelevant, in the sense that the social standard requires 100% compliance,
with no tolerance for outcomes falling outside the permitted zone. At the
other extreme, danger is seen as a cost that can be countervailed by the
benefits of exposure to it. Here, both the evaluation of the benefit-cost
ratio and the probabilistic assessment of risks are important.




           Box 1: Examples of Loss Functions and Standards
           For example, assume that the loss function has the following simple
form:

           $L = A + aR - \dfrac{b}{2M}R^2 + cT$

       Minimizing the loss according to expression (4), and assuming that the
distribution of y is uniform between 0 and M, we obtain:

           $R = \dfrac{c-a}{c-b}\,M = 2\,\dfrac{c-a}{c-b}\,E[y]$,

           where $E[y] = M/2$ is the expected value of y.
       Since R cannot exceed M, $b \le a$; and the closer the weights of the
linear and quadratic terms are to one another, the closer the threshold level
will have to be to the maximum possible danger M.5
           Another example is given by the Cobb-Douglas function:

           $L = -A\,R^{\alpha}T^{\beta}$

       Assuming again that the distribution function for the random damage
y is uniform on the interval [0, M], taking the first derivative with
respect to R, applying definition (2) and equating to zero, we find:

           $R = \dfrac{\alpha}{\alpha + 2\beta}\,M = \dfrac{2\alpha}{\alpha + 2\beta}\,E[y]$

      In this case, which is a proper minimum if the parameters are all
positive, the optimum standard is always less than the maximum
sustainable damage, and is smaller, ceteris paribus, the lower the absolute
value of the elasticity of the loss function with respect to the standard, as
compared with the elasticity with respect to the gap.



5
 For the uniform distribution U(0, M), the expression for the gap is
$T = \dfrac{(M - R)^2}{2M}$. Given this functional form, it is easy to check that
the expression for R corresponds to a minimum loss, whatever the values of the
parameters a and b, provided that $c > b$.
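       As a quick numerical check (ours) of the two closed forms in Box 1, one
can also minimize each loss directly over R; the assumed parameter values
satisfy the conditions stated above.

```python
# Numerical check of Box 1's closed-form standards under y ~ U(0, M), where
# the gap is T(R) = (M - R)^2 / (2M). Parameter values are assumed.
from scipy.optimize import minimize_scalar

M = 10.0
T = lambda R: (M - R) ** 2 / (2 * M)

# Quadratic loss L = A + a*R - b*R^2/(2M) + c*T(R); closed form (c-a)/(c-b)*M
A, a, b, c = 0.0, 1.0, 0.5, 3.0
L_quad = lambda R: A + a * R - b * R ** 2 / (2 * M) + c * T(R)
res = minimize_scalar(L_quad, bounds=(0.0, M), method="bounded")
print(res.x, (c - a) / (c - b) * M)            # both ~ 8.0

# Cobb-Douglas loss L = -R^alpha * T^beta; closed form alpha/(alpha+2*beta)*M
alpha, beta = 1.0, 1.0
L_cd = lambda R: -(R ** alpha) * T(R) ** beta
res = minimize_scalar(L_cd, bounds=(0.0, M), method="bounded")
print(res.x, alpha / (alpha + 2 * beta) * M)   # both ~ 3.333
```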



4.    Some Practical Examples: Mad Cow Disease and the Carbon Emission Program

4.1 Mad Cow Disease


       Many critics of the precautionary principle fault it on the grounds that
it does not take into account probabilities or cost-benefit ratios. In part, the
argument arises from the fact that traditional risk analysis is based on the
calculation of expected loss. For example, Gollier (2001) considers the case
of mad cow disease (MCD) in Europe and examines the appropriateness of
action in the case where being a victim of the disease is equivalent to a
financial loss of 50 times GDP per capita. Assuming that the risks are well
diversified in the economy and the victims are fully compensated for the
reduction in their life expectancy, one can calculate the risk for British
citizens over the next 20 years on the basis of some evidence pointing to an
average probability equal to $10^{-4}$ of contracting the disease. The ensuing
expected loss, equal to 50p, would amount to a loss of income of roughly
0.5% of GDP. Thus, Gollier concludes, “…if there existed a method to
eliminate (MCD) risk for human beings in one shot, it would be efficient to
implement it only if it cost less than 0.5% of GDP.”
      The above argument, however, implicitly assumes that the benefits, in
the form of the expected monetary gains of the program, should be assigned
the same weight as the expected costs. It does not take into account that the
two weights might differ because of the different evaluations that society
may give to the risk of exposing consumers to death because of insufficient
precautions (type 1 error) versus the risk of penalizing producers because
of excessive precautions (type 2 error). The question that the British
government confronted, when it decided to undertake preventive and
regulatory measures, was not whether the costs of the program (0.1% of
GDP) were matched by corresponding, certain benefits. It was rather
whether, by bearing these costs, there was a reasonable degree of confidence
that danger (in the form of type 1 error) could be reduced to an acceptable
level. The value attached to the reduction of the danger is not the expected
value of the program in terms of reduction of deaths, but the value attributed
ex ante to the lower probability of encountering the danger minus the value
attributed to the higher probability of wasting good meat. Benefits, therefore,
by lowering the probability that the consumer encounters infected meat,
consist in the reduction of expected danger and are realized in all states of
the world. Costs, for their part, by increasing the probability that some
non-infected meat does not reach the market, have to be borne only in the
states exceeding the social standard, that is, for the outcomes that fall in the
zone of danger. The crucial relation, in this respect, is equation (4), which
can be rewritten as6:
       (8)    $\dfrac{\partial L}{\partial T}\,(1 - F(R)) = \dfrac{\partial L}{\partial R}$
        The British MCD prevention program consisted in the application of
a series of controls on meat production, and in the enforcement of a series of
standards to be met and documented on the health of the animals and other
key features of the production process. The program is not perfect, and
“leakages” to the market could occur. Also, the estimates of the probability
of death have a variance attached to them. Simulation models of the disease
could give some estimates of this variance and therefore of the probability
distribution of the potential costs of mad cow disease to GDP. Society could
either impose a complete ban on beef production (i.e. apply the strong
precautionary principle) or set a standard that bounds social costs with
some probability.
       Referring back to equation (8), $\dfrac{\partial L}{\partial T}(1 - F(R))$
represents the value given to type 2 error (the cost), which is composed of
two parts: (i) a part depending on the value that society assigns to each unit
increase in the expected gap between the outcome and the standard and (ii) a
part depending on the probability of an outcome exceeding the standard. If
the latter were, for example, 0.20, the program would be acceptable if the
value assigned to the reduction of danger from a tightening of the standard
(the reduction of type 1 error) per unit of reduction of the standard were at
least 1/5 of the incremental costs per unit of increase of the expected gap. In
other words, assume that the costs, in terms of “good meat” not reaching the
market because of the program, were as high as 0.5% of GDP (the upper
bound for a benefit-cost ratio of 1 without the “safety” or precautionary
bound). The expected value of these costs, under the program, would be 0.1%
of GDP. If the monetary consequences of a reduction of danger (type 1
error) had the same social weight as the monetary consequences of a
production loss (an increase in the quantity of good meat expected to be

6
    See also the appendix for further elaborations on this theme.


wasted), the program would have to generate benefits (in the form, for
example, of a reduction in mortality rates) at least equal to 0.1% of GDP. By
the precautionary principle, on the other hand, the positive monetary
consequences of the program (the reduction of type 1 error) should be
weighted more heavily than the negative consequences (the increase of type 2
error). For example, if the weight assigned to type 1 error were twice that of
type 2 error, the benefits of the program from the reduction of the danger
deriving from “bad” meat reaching the market could be as little as 0.05% of
GDP and the program would still be justified.
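       A back-of-the-envelope computation (ours), using the illustrative figures
above, reproduces the effect of the precautionary weight:

```python
# Weighted benefit test for the MCD example; all figures are the paper's
# illustrative numbers, expressed as shares of GDP.
p_exceed = 0.20            # probability of exceeding the standard, 1 - F(R)
cost_good_meat = 0.005     # potential cost of wasted good meat (0.5% of GDP)

expected_cost = p_exceed * cost_good_meat      # 0.1% of GDP, as in the text
for weight_type1 in (1.0, 2.0):                # relative weight on type 1 error
    required_benefit = expected_cost / weight_type1
    print(f"type 1 weight {weight_type1:.0f}: "
          f"benefits must be at least {required_benefit:.2%} of GDP")
# weight 1 -> 0.10% of GDP; weight 2 -> 0.05% of GDP, matching the example.
```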


4.2 The Carbon Emission Program in the EU

       Consider now the case of carbon emissions. Greenhouse gas (GHG)
emissions have been held responsible for so-called “global warming”. In
particular, the large rise in CO2 emissions7 linked to human activities
observed during the twentieth century has been correlated with the marked
increase in average temperatures. Even if the idea of global warming itself is
still questioned, many policy interventions have been launched in the past 15
years to face this issue. Among them, the Kyoto protocol, which came into
force on 16 February 2005, is by far the most prominent and most studied
case.
       The Kyoto protocol aims to reduce the CO2 emissions of developed
economies during the period 2008 to 2012 by at least 5 percent from their
aggregate 1990 level. In an independent policy move, the European Union is
pushing to reduce its CO2 emissions 8 percent below the 1990 level.
        The Kyoto protocol's regulatory philosophy is based on three different
“flexible mechanisms”: Emission Trading (ET), the Clean Development
Mechanism (CDM) and Joint Implementation (JI). ET is considered the most
important mechanism of the three. It requires setting up an international
emission trading system, thus creating a market for CO2 that will enable
private agents to make the most efficient use of emission rights and to limit
the costs of compliance by looking at market price signals (the cost of
emission rights).
        Power generation is a key sector for any potential CO2 reduction
initiative. Power plants produce about 30% of global emissions (Figure 3).
A few thousand power plants in Europe emit as much CO2 as
7
 CO2 accounts for about 80% of total GHG emissions.


millions of transport vehicles. As a consequence, regulating the power sector
is clearly a priority. A “cap-and-trade” scheme has been in place since
1 January 2005 in the EU for about 5,000 energy and industrial plants
accounting for close to 50% of total EU emissions. The scheme
sets an annual limit on the aggregate amount of CO2 those plants can emit
(quotas). Quotas are allocated free of charge based on historic emissions
(the “grandfathering” principle).
       Every year, emissions have to be calculated per plant, and each firm is
required to own an equivalent amount of “polluting rights”. Accounts are
settled per company. In the first phase of program implementation, if a
company exceeds its quotas, it has either to acquire unused quotas in the
market, or to pay a penalty of €40 per ton or twice the quota market price,
whichever is higher.
        While at the margin each producer not complying with the regulation
faces an opportunity cost of €40 per ton, the willingness to pay exhibited by
producers in the market has been oscillating around €20 per ton. This figure
incorporates type 2 error in two main ways. First, it reflects the fact that not
all producers face the same opportunity costs in terms of alternative means
of reducing emissions through cleaner technologies. Thus, not all producers
are equally harmful when they fail to satisfy the cap, so that a social cost
arises from imposing an excessively tight upper bound on producers whose
emissions would allow higher production with the same harmful effects as
other producers under the cap. Second, the figure reflects the limited
capacity of the regulating agency to monitor and sanction and, as a
consequence, the fact that many producers may not comply. What is the
benefit that would justify these costs? If we assume that the standard
incorporates a social value judgement on the desirability of avoiding type 1
error, the probability of exceeding the quota (and the ensuing danger of
type 2 error) becomes the key element in evaluating costs and benefits. It is
this probability, in fact, that determines the optimum ratio between the
marginal value of the quota and the marginal cost of increasing the gap. If
this probability were as high as 25%, for example, and the marginal cost of
increasing the area of non-compliance were measured by the €20 per ton
market price, the program would be justified if the marginal benefit of
tightening the quota were at least €5 per ton. But even if this condition were
not satisfied, and the marginal benefit were below €5 per ton, a sufficiently
larger weight assigned to type 1 error could make the program acceptable
under the precautionary principle.
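       The same arithmetic can be sketched for the carbon example (ours, with
the figures assumed above), including the extra weight on type 1 error
introduced in the appendix as (1 + a):

```python
# Equation (4) applied to the EU carbon example, with an optional weight
# (1 + a) on type 1 error as in condition (A.6); all numbers are assumed.
p_exceed = 0.25          # probability of exceeding the quota, 1 - F(R)
marginal_cost = 20.0     # euros per ton: producers' observed willingness to pay

for a in (0.0, 1.0, 3.0):                # a = 0 reproduces the 5 euros/ton case
    required_mb = p_exceed * marginal_cost / (1 + a)
    print(f"type 1 weight {1 + a:.0f}: marginal benefit must be at least "
          f"{required_mb:.2f} euros per ton")
```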




   5.    Conclusions

       We have interpreted the precautionary principle as a simple extension
of the hypothesis testing methodology in two main ways. First, in all cases
where the action proposed bears significant uncertainty, and this has adverse
consequences (Arrow and Fisher, 1974; Bohm, 1975), the null hypothesis
should be formulated not only for the expected values of the action, but also
for their variances or possible variability. Moreover, this should be done in
such a way that the burden of proof is turned against the proposed action,
by defining as “null” the hypothesis that the variability of the action examined
is larger than that of the alternative. Second, in all cases where the action
proposed may generate a danger, the null hypothesis should be formulated to
entail that the expected danger from the action is greater than that of the
alternative (the status quo or the situation without the action). As in
hypothesis testing, a standard must be established that allows a demarcation
for rejection of the null hypothesis.
      In setting the standard, two different effects must be balanced: (i) the
increase in safety from tightening the standard (widening the area where the
hypothesis of present or future danger cannot be rejected), and (ii) the
increase in cost due to the widening of the danger area. Because the latter
depends not only on the unit cost, but also on the probability that the
observed variable falls in the danger zone, the optimal marginal benefit-cost
ratio will be less than or equal to one. Therefore, the application of the
precautionary principle changes the criteria of cost-benefit analysis by
implicitly adding a “shadow” benefit to the tightening of standards.




       The methodology of hypothesis testing, by placing the burden of
proof on disproving the assumption of unacceptable danger, is per se a
natural embodiment of the precautionary principle. In order to apply this
methodology to broad classes of projects and programs, however, so that it
can shape legislation, regulation and current practice, the determination of a
threshold of action is required. This threshold may be seen as expressing a
social standard of safety for broad categories of danger. The lower its value,
the wider is the area where precautions of one form or another (inaction,
prevention or regulation) would be recommended. For simple functions, the
standard should be tightened to the point where the marginal benefit-cost
ratio equals the probability of an unacceptable outcome. For example, if this
probability is 0.05, that is, only a 5% chance of an unacceptable outcome,
then the marginal benefit of tightening the standard can be as low as
one-twentieth of the marginal cost of meeting the standard. This threshold
effect is implicitly embodied in the weaker form of the precautionary
principle.




      References


Appell, D. (2001), “The New Uncertainty Principle”, Scientific American,
January.
Arrow, K.J. and Fisher, A.C. (1974), “Environmental Preservation,
Uncertainty, and Irreversibility”, Quarterly Journal of Economics, 88, pp.
312-319.
Bohm, P. (1975), “Option Demand and Consumer's Surplus: Comment”,
American Economic Review, 65 (4), pp. 733-736.
Commission of the European Communities (2000), “Communication on the
Precautionary Principle”, 02 February, Brussels. See http://www.gdrc.org/u-
gov/ugov-mediate.html
Dixit, A.K. and Pindyck, R.S. (1994), Investment under Uncertainty,
Princeton University Press, New Jersey.
Fisher, R.A. (1949), The Design of Experiments, Oliver and Boyd, London.
Gollier, C. (2001), “Precautionary Principle: the Economic Perspective”,
Economic Policy, October 2001, pp. 303-327.
Knudsen, O. and Scandizzo, P.L. (2005), “Bringing Social Standards into
Project Evaluation under Dynamic Uncertainty”, Risk Analysis.
Little, I.M.D. and Mirrlees, J.A. (1969), Manual of Industrial Project
Analysis, OECD Development Centre, Paris.
Mae-Wan Ho (2000), “The Precautionary Principle is Coherent”, ISIS Paper,
October 31.
Majone, G. (2002), “What Price Safety? The Precautionary Principle and its
Economic Implications”, The Journal of Common Market Studies, 40 (1),
pp. 89-109.
Monod, J. (1991), Le Hasard et la Nécessité, Gallimard, Paris.
Neyman, J. and Pearson, E.S. (1928), “On the Use and Interpretation of
Certain Test Criteria for Purposes of Statistical Inference, Parts I and II”,
Biometrika, 20, pp. 174-240, 263-294.
Papoulis, A. (1965), Probability, Random Variables and Stochastic
Processes, McGraw-Hill.
Popper, K.R. (1959), The Logic of Scientific Discovery, Hutchinson, London.
Popper, K.R. (1974), “Replies to My Critics”, in P.A. Schilpp (ed.), The
Philosophy of Karl Popper, pp. 963-1197, Open Court, La Salle.
Scandizzo, P.L. and Knudsen, O. (1980), “The Evaluation of the Benefits of
Basic Need Policies”, American Journal of Agricultural Economics, 62 (1),
pp. 46-57.
Scandizzo, P.L. and Knudsen, O. (1996), “Social Supply and the Evaluation
of Food Policies”, American Journal of Agricultural Economics, 78 (1), pp.
137-145.
United Nations (1982), “World Charter for Nature”, General Assembly, 28
October. See http://www.un.org/documents/ga/res/37/a37r007.htm
United Nations (1992), “Rio Declaration on Environment and
Development”, 13-14 June 1992, Rio de Janeiro (U.N.
Doc./CONF.151/5/Rev.1).




Appendix


The relationship between the loss function and errors of type 1 and 2



       The loss function can be specified directly in terms of the two types
of errors as $L = L(e_1(R), e_2(T))$, where $e_1$ and $e_2$ are the monetary
consequences, in terms of expected monetary gains or losses, of the two
types of error:

      $e_1 = v_1\pi_1 = v_1\,\Pr(y \in Y_a \mid y \le R)\,\Pr(y \le R)$
      $e_2 = v_2\pi_2 = v_2\,\Pr(y \notin Y_a \mid y > R)\,\Pr(y > R)$



       $Y_a$ indicates the subset of values of the random variable y that would
be harmful.
      Differentiating totally with respect to R, we obtain:

       (A.1)    $\dfrac{dL}{dR} = \dfrac{\partial L}{\partial e_1}\dfrac{\partial e_1}{\partial \pi_1}\dfrac{d\pi_1}{dR} - \dfrac{\partial L}{\partial e_2}\dfrac{\partial e_2}{\partial \pi_2}\dfrac{d\pi_2}{dT}\,(1 - F(R))$



      Equating the expression in (A.1) above to zero, we find:


       (A.2)    $1 - F(R) = \left(\dfrac{\partial L}{\partial e_1}\,v_1\,\dfrac{d\pi_1}{dR}\right)\Big/\left(\dfrac{\partial L}{\partial e_2}\,v_2\,\dfrac{d\pi_2}{dT}\right)$




       which prescribes that the standard should be fixed in such a way that
the probability of falling in the rejection area equals the ratio between the
value given to the marginal reduction in the probability of type 1 error and
the value of the marginal increase in the probability of type 2 error.
       Consider now the impact of a program that tightens the standard by a
certain percentage dR/R. Using (A.1), we obtain:


       (A.3)    $dL = \left[-\dfrac{\partial L}{\partial e_1}\,v_1\,\dfrac{d\pi_1}{dR} + \dfrac{\partial L}{\partial e_2}\,v_2\,\dfrac{d\pi_2}{dT}\,(1 - F(R))\right]dR$



       Therefore, the net benefit of the program is given by the reduction of
the loss due to a lower probability of type 1 error minus the increase in the
loss due to a higher probability of type 2 error. The program will be
economically justified if:

       (A.4)    $\dfrac{de_2}{de_1} \ge \dfrac{d\pi_2/dT}{d\pi_1/dR}\,\dfrac{v_2}{v_1}\,(1 - F(R))$

        If $v_1 = v_2$, the condition for accepting the program is that the marginal
rate of substitution between the two types of errors be greater than the ratio
between their marginal variations multiplied by the probability that the random
variable falls in the rejection zone. If we gave the same weight to type 1 and
type 2 errors, $\dfrac{\partial L}{\partial \pi_1}\Big/\dfrac{\partial L}{\partial \pi_2} = \dfrac{d\pi_2}{d\pi_1} = 1$,
the condition for the program to be acceptable would be:


       (A.5)    $\dfrac{d\pi_1}{dR} \ge \dfrac{d\pi_2}{dT}\,(1 - F(R))$




       that is, the reduction in type 1 error as a consequence of the
tightening of the standard should be greater than or equal to the increase in
type 2 error as a consequence of the increase in the average gap, multiplied
by the probability of falling in the danger zone.
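       The probabilities $\pi_1$ and $\pi_2$ can be made tangible with a Monte Carlo
sketch (ours). The appendix treats them abstractly; here we add the
assumption that the standard is applied to a noisy observation of the true
damage, so that both errors can occur.

```python
# Monte Carlo estimate of pi_1 and pi_2 when the standard R is applied to a
# noisy observation s of the true damage y; distributions and numbers assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y = rng.lognormal(0.0, 1.0, n)           # true damage
s = y * rng.lognormal(0.0, 0.3, n)       # observed indicator with noise
D = 3.0                                  # harmful set Ya = {y > D} (assumed)

for R in (2.0, 3.0, 4.0):                # candidate standards applied to s
    pi1 = np.mean((y > D) & (s <= R))    # type 1: harmful outcome passes
    pi2 = np.mean((y <= D) & (s > R))    # type 2: harmless outcome rejected
    print(f"R = {R}: pi1 = {pi1:.4f}, pi2 = {pi2:.4f}")

# A laxer standard (higher R) raises pi1 and lowers pi2: the margin that
# conditions (A.5)-(A.7) weigh against each other.
```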



      On the other hand, if we gave a higher value to type 1 error, as in our
interpretation of the precautionary principle, $\dfrac{de_2}{de_1} = (1 + a)$, $a > 0$,
and the condition for acceptance would become:


       (A.6)    $\dfrac{d\pi_1}{dR} \ge \dfrac{d\pi_2}{dT}\,\dfrac{1 - F(R)}{1 + a}$


      The greater the constant a, the stronger the form of the precautionary
principle that would be applied.
      We could also have a case where the precautionary principle is not
applied, but the economic consequences attached to a reduction in the
probability of type 1 error are greater than those associated with an increase
in type 2 error, i.e. $v_1 > v_2$. In this case, the condition of acceptance would
be:


       (A.7)    $\dfrac{d\pi_1}{dR} \ge \dfrac{d\pi_2}{dT}\,(1 - F(R))\,\dfrac{v_2}{v_1}$


      The consequences for program acceptance would be similar to those of
applying the precautionary principle, but for different reasons.



