Epistemic versus Pragmatic Justification of Risk Analysis

A key assumption underlying analyses of risk is that analyzing risks helps generate justified beliefs about risks. Hence, by analyzing risks, we become wiser in our dealings with risks. However, what exactly is it to have a justified belief, and, moreover, is proper justification of risk beliefs within reach? These probing questions are the topics of this article, in which two different perspectives on the justification of risk beliefs are examined, labeled epistemic and pragmatic justification. While epistemic justification amounts to showing that risk beliefs signify knowledge, pragmatic justification amounts to showing that risk beliefs serve as useful signposts for how to act. Arguably, much of the difficulty of providing a cogent basis for risk analysis originates from our conception of justification as meaning only epistemic justification, which blinds us to the pragmatic route to justification. A point advanced in this article is that the chances of justification will be significantly improved by broadening our perspective to include epistemic as well as pragmatic justification of risk beliefs.


Introduction
Why analyze risk? The obvious answer to this highly pertinent question is that risk analysis makes us better off. We will fare far better if we base our decisions on a careful examination of what lies ahead than if we simply hurry on without duly considering the things that may affect us in different ways. Damage may be incurred and opportunities may be lost if we move forward without duly analyzing what is to come. This answer, however, raises a second question: What exactly is it that makes risk analysis worth the candle? Again, the answer seems obvious. Risk analysis provides a set of justified beliefs; our beliefs about the future stop being arbitrary guesses and start becoming reasonable ideas. Analyzing risk means introducing a process of justification whereby only those beliefs that are reasonably justified become believable.
However, rather than clarify matters, this answer triggers a third question: What does it mean to have a justified belief? Once again, the answer seems obvious. A belief is justified if it fulfills some pre-defined criteria of acceptance.
This answer, however, only invites a fourth and, in this context, final question: Exactly what kind of acceptance are we looking for in our justification of beliefs?
Although the literature is rapidly expanding, risk analysis as a discipline is still in its infancy. An impressive number of books and articles have been written about risk analytical techniques. What is missing is a corresponding examination of the basic assumptions of risk analysis. With some notable exceptions [2], these are at best hinted at in some methodological discussions of the validity and reliability of specific techniques. The fundamental questions of whether risk analysis provides justified beliefs, and, moreover, of what it means to produce a justified belief, have hardly been touched upon. The vast majority of those advancing different risk analysis techniques simply take it for granted that the techniques will produce justified beliefs.
Having said that, times are changing. Initially a primarily mathematical-statistical discipline basically focused on financial and technological dysfunctionalities, risk analysis is rapidly expanding in terms of its toolbox and frame of analysis. Sociological, environmental and psychological perspectives are playing an increasingly prominent role in the field, leading to the introduction of new tools and approaches that provide alternative as well as complementary frameworks of analysis [3,4,5,14,29]. This mixture of complementarity and rivalry can most clearly be seen in cognitive psychologists' analyses of risk. On the one hand, cognitive psychologists seem to subscribe to the conventional description of humans as goal-aspiring beings that make calculations on the basis of the desires and beliefs that jointly determine their course of action. However, in their view the analysis cannot stop here. Humans' desires and beliefs are themselves in need of explanation if we are to grasp the processes determining people's acts [18,35]. At the same time, therefore, cognitive psychologists disagree with the standard portrayal of humans as rational beings who choose the best course of action on the basis of a well-ordered set of desires and beliefs. From their point of view, the tenet of rationality is nothing but an ideal construct; it provides a norm for how to act but does not explain how humans do in fact act. For this reason, they favor a change of perspective from the normative conception of homo economicus towards the behavioral conception of homo psychologicus for the purposes of descriptive choice modelling, on the grounds that the latter conception, among other things, incorporates mental shortcomings in dealing with risks [15,17,19].
In The Black Swan, Nassim Nicholas Taleb makes further headway into the field of risk analysis by showing how philosophy mixes with psychology [37,38]. We are not just misled by a great number of perceptual and inferential biases that help foster a distorted view of the world; as limited beings in an unlimited world, we are also unable to know large segments of it. Risks signify such a blind spot in our sphere of knowledge. Since risk relates to the future, whereas the empirical evidence in support of risk claims relates to the past, analyzing risk is, on Taleb's account, senseless: we cannot know risks. What makes our ignorance alarming is that we tend to shunt this fact aside, thus remaining loyal to the idea that risks are knowable without duly recognizing that our heuristically impregnated analytical instruments only lead us astray.
Without being explicit on the issue, Taleb seems to subscribe to the standard account of knowledge, according to which knowledge signifies justified true belief. Knowing risks amounts to the production of justified true beliefs about risks. On this account, Taleb's main objection is that risk analysis is bound to fail since we are unable to justify our risk-related beliefs in terms of truth. If we try, we will inevitably try in vain.
Such a conclusion is highly discomforting for anyone who subscribes to the idea of risk analysis as a rational activity. If risk analysis fails to generate knowledge, why analyze risk in the first place? Given that Taleb's The Black Swan is a personal essay rather than a standard academic text [31], it may be tempting just to dismiss him. Why bother with an exposition filled with anecdotes, characterizations and slogans, in which the concepts are ill-defined, the assumptions rarely questioned and the discussion flawed and biased? However, dismissing Taleb outright would be a mistake. For one thing, since the book does not pretend to be an academic contribution to the debate, it should be read with care and generosity. More importantly still, although its tone and style clearly differ from those of a standard academic text, its content is thought-provoking. In The Black Swan Taleb demonstrates how discussing the value of analyzing risks inevitably slides into a highly existential debate about human finitude.
Although Taleb has strong views on the issue, he makes hardly any serious attempts to clarify them. He instead simply keeps on telling us how the impossibility of analyzing risks derives from the insurmountable problem of generating justifiable beliefs about them. The purpose of this article is to take some highly tentative steps toward reducing this gap in his book by digging deeper into the very meaning of a justified belief. Needless to say, the prospect of fostering justified beliefs about risks crucially hinges on our conceptualization of a justified belief.
Basically speaking, we can justify beliefs about risks in one of two ways: from within or from without. Justification from within involves an acceptance of the standard idea of justification as intrinsically linked to knowledge. Here, the strategy for refuting skeptics like Taleb will be to demonstrate that knowledge is still within reach. Although such a line of defense may take different forms, the guiding idea is that risk analysis is a knowledge-producing activity in which the acceptance of a belief results from the fact that the belief signifies knowledge. A justification from without, in contrast, involves a separation of justification from the concept of knowledge. Whatever we try, Taleb is perfectly right in stating that justification can never bring about knowledge. However, failure to produce knowledge need not imply a corresponding failure to produce justified beliefs. Hence, the crux of this second line of refutation of skeptical thought about risk analysis is a lowering of the requirements of acceptance. Even if the reasons for a belief are not strong enough for it to be characterized as knowledge, a belief may be sufficiently justified if it is reasonably believable.
Hereafter, these perspectives will be labeled "epistemic" and "pragmatic" justification respectively, referring to the different standards involved for accepting the content of a belief. As the analysis proceeds, it will be shown that Taleb's objections lose their force once we make the shift from an epistemic to a pragmatic justification for beliefs. When favoring epistemic justification, we are opting for the highest standards of approval so that beliefs become knowledge. When favoring pragmatic justification, we are simply opting for the standard needed for the reasonableness of the beliefs to be accepted.

Epistemic Justification
Risk analysis is generally associated with searching for knowledge. We analyze risk in order to gain insight into a terrain that would otherwise remain unknown to us. Another word for knowledge is episteme. Hence, to emphasize risk analysis as a knowledge-producing activity is to make a plea for an epistemic justification whereby the value of analyzing risk derives from the epistemic merits of doing so. Needless to say, whether the outcome is knowledge crucially hinges on the very meaning of knowledge. For that reason, we first need to be somewhat more precise about what it means to know risks before entering into a debate about our chances of knowing them.
Roughly speaking, knowledge can be split into procedural, personal and propositional depending on its source and nature [2]. Procedural knowledge is knowledge of how to do something such as how to perform a task to eliminate pain. Personal knowledge is first-hand knowledge referring to something we are acquainted with, such as our experience of pain [30]. Propositional knowledge is knowledge expressed by a declarative sentence stating what there is, such as the fact that Jill suffers from pain. Unlike procedural knowledge, personal knowledge and propositional knowledge are factual, but whereas personal knowledge is subjective, propositional knowledge is intersubjective.
The guiding tenet of any risk analysis is to provide guidance on how to act. Those responsible for analyzing risks generate insights relevant to those responsible for managing risks. Risk analysis, then, is intersubjective in nature. Moreover, risk analysis is also factual in nature. Although the conclusion of a risk analysis may involve multiple layers of uncertainty, it is nonetheless a statement stating what there is in the world. Jointly, these characteristics suggest that if we are looking for an epistemic justification of risk analysis, our task is to show how risk analysis satisfies the requirements of propositional knowledge [2].
According to the standard account of propositional knowledge, knowledge amounts to justified true belief. To know that Paris is the capital of France, we must believe that Paris is the capital of France. Moreover, we must be able to justify our belief that Paris is the capital of France. Finally, it must also be the case that Paris is the capital of France. For this reason, the standard account is also called the tri-partite conception of knowledge, referring to the three knowledge conditions which are held to be individually necessary and jointly sufficient as conditions of knowledge [25,10,32].
Although the great majority of scholars explicitly or implicitly subscribe to the standard account of knowledge, it has been subject to fierce criticism, especially from those who, inspired by the famous Gettier examples, claim the three requirements to be non-exhaustive. Beliefs may be justified as well as true without necessarily being genuine cases of knowledge [13]. From a risk analysis perspective, however, the problem is quite the reverse. Rather than being too permissive, the framework is too restrictive. To see this, however, we first have to be somewhat more specific in our description of the truth condition.
The truth condition in the standard scheme is generally cashed out in terms of a correspondence conception of truth. A belief is true if it corresponds with, that is, squares with, fact. This interpretation of the truth condition spills over to the justification condition. Justification amounts to showing that the belief corresponds with fact. As factual beliefs can only be justified by appeal to experiences, the resultant effect of these interpretations is a knowledge scheme requiring truth-documentation of the belief in terms of documenting a word-to-world match. Schematically, this means that person A knows proposition P if, and only if:
1. A believes P;
2. P is true because P corresponds with fact;
3. A justifiably believes P because A justifiably believes that P corresponds with fact.
Arguably, of the three knowledge conditions the belief condition is the least problematic. A belief is nothing but a hypothesis held to be valid at a given moment in time [40]. Problems start cropping up with the truth condition, though, with the trickiest issue relating to the very meanings of the words "correspondence", "fact" and "truth". Defining "correspondence" is not a simple task [24], describing facts is hardly possible without running into tautologies [36], and attempting to explicate truth involves entering a battle where there is no end in sight [9,20]. For the purpose of this article, however, we do not need to look into these basic questions, as the intention is not to address the whole spectrum of the correspondence conception of knowledge, but just to highlight its justificationary part. Therefore, we can also leave the second knowledge condition aside and concentrate on the third condition, which designates the requirements for the justification of beliefs.
Moreover, with regard to the third condition, it is not the general problem of justification that is the topic of this analysis. Rather, it is just that tiny slice of the problem that has a direct bearing on the justification of risk beliefs. Hence, my concern is the challenges deriving from the special features of this kind of belief.
So, what are the special features of risk beliefs? Briefly put, risk beliefs are existential beliefs. They are beliefs stating what there is in the world. Moreover, risk beliefs are future-tensed beliefs. They say something about a world yet to come. Beliefs about what there is can only be justified by an appeal to experiences telling what there is. Experiences, however, only refer to the past, not the future. In order to justify beliefs about what is to come, therefore, we need to add a transferability clause. Experiences of the past also have application for the future. The transferability clause is commonly known as the principle of induction, which is nothing but premise-transcending reasoning. From the known and experienced parts of the world, we are licensed to make inferences about the unknown and unexperienced parts of the world.
Yet, recalling a lesson from Hume, inductive reasoning is not conclusive [16]. Observing thousands of white swans provides no guarantee that the next swan observed will be white too. In fact, according to Hume, inductive reasoning is not even probabilistic. Since the number of actual observations is infinitely small compared to the universe of theoretically possible observations, observations of the past do not even allow for statements expressing the likelihood of a given future event [16]. From a risk analysis perspective, this situation is disquieting. Because the conclusion of a risk analysis is itself risky, analyses of risk will invariably fail to pass the threshold of knowledge.
Given the challenge of the strictness of the knowledge scheme, how can the idea of risk analysis as a knowledge-producing activity be preserved? As the root of the problem is the scheme's knowledge conditions, the strategy to resolve the problem will be to redefine the conditions. The three most obvious ways of doing so are to: a) skip the justification condition; b) skip the truth condition; or c) lower the stipulated requirements of justification. Here I will briefly comment on each of these three possible approaches to resolve the problem.


Skipping the Justification Condition
The first approach is to drop the justification condition and to define knowledge as true beliefs. Knowing a risk amounts to truly believing it is coming with no further proviso that the belief has to be justified. At first glance, this approach does not look particularly inviting as it runs counter to the basic idea of knowledge as a cognitively fueled process. If the only thing that matters is the truth content of the belief, in-depth investigations become indistinguishable from lucky guesses. A belief automatically rises to the rank of knowledge if it happens to be true. Admittedly, those favoring a true-belief conception of knowledge seem to be aware of this problem. Their strategy in terms of resolving it is to define truth as something intrinsic. To rank a belief as knowledge, the belief cannot be a lucky guess. It must be believed to be true. The generation of belief involves the faculty of reflection in which true beliefs are separated from false ones [32,33].
This approach, however, raises the question of how to distinguish true belief from justified true belief given that the assignment of a truth-value is the outcome of a reflective, truth-conferring process. The root of the problem is that reflection is just another word for justification. We look into the pros and cons, and end up concluding what to believe. Rather than bolstering the argument that we can do without the justification condition, the justification condition seems to have been smuggled in by the backdoor.
A more indirect way of defending a true-belief conception of knowledge is to stress the difference between the intrinsic and the instrumental conditions of knowledge. Truth is intrinsic to knowledge. You become knowledgeable once you have detected the truth. Justification, in contrast, is instrumental to knowledge. It is something that helps make you knowledgeable. As the argument then goes, once we have reached the truth, the processes leading us there no longer matter. The fact that we by accident come to know that Mt. Everest is the highest mountain on earth makes us no less knowledgeable than if we had come to know it by physically measuring its height. In the current debate about the value of knowledge, this is a key point of concern among those, like Jonathan Kvanvig [21], who hold the truth condition to have a swamping effect on the justification condition. The value of a true belief "swamps" the value of the belief having been produced in a truth-conferring way. In the debate, the swamping argument is usually introduced to substantiate the claim that knowledge in terms of justified true belief adds no value to the mere possession of a true belief, and hence that knowledge is no more valuable than true belief. Here, I have rigged the argument somewhat by showing how it can legitimate a non-justificationary account of knowledge as well. If the overriding goal of knowledge is truth, reaching the truth swamps any further condition stating that the truth also has to be the product of a reliable truth-conferring process [27].
Arguably, however, there is something awkward about this line of reasoning. In order to know something, it is not enough to be in possession of a true belief. The truthfulness of the belief must also be cognitively accessible to the holder of the belief. That is, the belief has to be the outcome of a truth-conferring process documenting the correctness of the belief for the believer. In their account of the causal nexus between beliefs and acts, Donald Davidson [8] and Jon Elster [11] have shown how a rational construal of the nexus rests on three assumptions. First, there have to be some beliefs; second, the beliefs have to be efficacious; and third, the beliefs have to be intentionally efficacious. An instance of a violation of the first assumption would be a hunter hitting a rabbit by inadvertently pulling the trigger before even having the rabbit in the crosshairs. An instance of a violation of the second assumption would be a hunter hitting a rabbit after having it in the crosshairs, but having inadvertently pulled the trigger due to some sudden noise. Finally, an instance of a violation of the third assumption would be a hunter consciously aiming his rifle and hitting a rabbit, but inadvertently pulling the trigger after becoming emotionally overwhelmed at seeing it in the crosshairs. In a similar vein, making knowledge synonymous with true beliefs opens up three irregular chains of belief-formation in which the impact of truth-conferring processes is nullified. First, there may not be any reasons in support of the belief; second, the reasons may not be genuinely supportive; or third, the reasons may be genuinely supportive without being supportive in a standard, reasonable way. In Meno, Plato stresses how justification makes us confident about the truthfulness of our beliefs [27]. A key problem with skipping the justification condition is that it then becomes hard to see how we can be confident about the truthfulness of our beliefs. If we are unable to track down the truth-content of our beliefs, we are deprived of the ability to rationalize their content, which, among other things, makes us more susceptible to stimuli causing us to reject true beliefs and accept false ones.

Skipping the Truth Condition
Arguably, since the problem with the knowledge scheme is not the justification condition per se, but the claim that justification has to be truth-conducive, we seem to be grappling with the wrong aspect of the dilemma by decoupling the justification condition from the concept of knowledge. A more promising route is to delink the truth condition from the concept of knowledge. We may become knowledgeable through the justification of beliefs even if the justification fails to establish the truth-content of the belief. This is exactly the essence of the second approach. To know is to have a justified belief, without also requiring the belief to be true. Only those beliefs that are rationally believable designate knowledge. Rational beliefs, however, are no longer wedded to the search for truth. A belief may be trustworthy without the reasons in support of the belief being strong enough to sanction its truthfulness. The benefit of this move is that the idea of knowledge as a cognitive activity is preserved. Justification still makes sense in our search for knowledge even though we may have to wait forever to determine whether the belief is true.
In fact, this argument can be stretched a little further. If knowledge involves conclusive justification of beliefs, it necessarily follows that any theories about the world, including any theories about risks in the world, are bound to fail since the theoretical statements expressing what there is exceed the facts documenting what there is. I suspect most of us will find this too high a price to pay to cling to an indubitable conception of knowledge as originating from the search for truth. Rather than dismissing the empirical branch of the sciences as non-knowledge, a far more reasonable response will be to dismiss the idea of knowledge as linked to truth. Here, we may note the remark of Karl Popper [26], one of the most ardent proponents of a truth-conducive empirical epistemology. Truth is nothing but a regulative idea guiding our quest for knowledge. The overriding aim is to distinguish true beliefs from false ones, but absolute truth is beyond reach, and if we nevertheless come across something true, its truthfulness will be unknown to us [26]. Rather than bolster the idea of truth as indispensable in the production of knowledge, this remark of Popper's illustrates the insurmountable problems of linking knowledge to truth. How can a truth-conception of knowledge be rationalized if truth is useless as a criterion of knowledge?
Tempting as it may be to skip the truth condition, and hence to favor a pared-back justified-belief account of knowledge, this strategy to preserve the epistemological virtues of risk analysis raises the problem of construing knowledge by appealing to a knowledge condition that in itself is in need of further support to be genuinely supportive. Justification, as noted in the preceding section, is instrumental rather than intrinsic to knowledge. We justify our beliefs in order to document the truth-content of our beliefs. Therefore, if we try to do without the truth-condition, the question inevitably comes up: Why favor justified beliefs in the first place? Truth signifies the telos in our search for knowledge. If we try to do without the truth-condition, we will no longer be capable of rationalizing the value of justified beliefs as there will no longer be a truth-condition to appeal to in order to legitimize the value of such beliefs. The crux of the matter is that justification only makes sense within the context of a guiding idea stating why justified beliefs are preferable to non-justified ones. What makes the truth-condition all-important is that it provides the rationalization needed by portraying justification as truth-conducive. In fact, the characterization of justification as truth-conducive only helps emphasize the derivative and parasitic linkage of justification with truth. If the truth-condition is skipped, the justification-condition inevitably collapses too.
Here, proponents of a justified-belief account of knowledge may dismiss this objection by stating that it falls short of documenting the indispensability of the truth condition. At best, it has been shown that we cannot do without some guiding ideas in our justification of beliefs. Thus, the objection licenses the introduction of ideas that, in the words of Bas C. van Fraassen, "fall short of truth", such as "empirical adequacy" [39] and "problem solving effectiveness" [22]. A belief signifies knowledge if it matches our observations, or if it provides insight into how to deal with the world. However, none of the alternative epistemological signposts introduced has gained a wider audience. More importantly still, embarking upon this line of reasoning to dismiss the objection raised is to miss the point. For a justified-belief account of knowledge to have the slightest hope of getting off the ground, it must be convincingly shown that justification suffices to raise a belief to knowledge. Stressing the dispensability of truth in the search for knowledge is not enough. For the argument to be successful, it must be shown that no guiding idea whatsoever is needed for accepting justified beliefs as knowledge.
Briefly put, then, rather than softening the dilemma, preferring justification to truth in a minimized bipartite account of knowledge only seems to sharpen it. For justification to make sense, there must be some idea that explains why justification is needed in the generation of knowledge. Without this, the justification condition also slips out of our hands. However, once we start looking for some guiding idea, we violate the very conceptualization of knowledge as justified belief. The pith and gist of this construal of knowledge is precisely that there is nothing more to it than justified beliefs.

Modifying the Truth Condition
Assuming that justification and truth are indispensable conditions of knowledge while absolute truth is unattainable in the production of knowledge, it necessarily follows that the only way to resolve the dilemma of opposing priorities is to favor a modified conception of knowledge. A belief may be granted the status of knowledge even if we are unable to guarantee the truth of the belief. The third approach to the problem of harmonizing the standard account of knowledge with the prospective features of risk analysis trades on this idea. In our acceptance of risk beliefs as knowledge, the truth-content of the beliefs no longer needs to be conclusively proven. It suffices that it squares with the best evidence available. By lowering the requirements of justification in terms of truth, it necessarily follows that current risk beliefs may be brought into disrepute with new insights in the field. Therefore, knowledge about risk is nothing but tentative knowledge. Nevertheless, for the search for knowledge to make sense in a risk context, the switch from absoluteness to tentativeness seems to be a price we have to pay if we still want to portray risk analysis as knowledge-producing.
The question that comes to mind is whether this price is too high. Is it in fact possible to lower the truth-requirements of beliefs without jeopardizing their knowledge status? The root of the problem is that once some criteria of sufficiency are introduced, other questions immediately come forward: What is the magical level of justification beyond which beliefs become knowledge? How can we measure whether beliefs fail or succeed in passing that magical level of justification? And how can the existence of such a level of justification itself be justified [6]?
With respect to the first question, Roderick M. Chisholm [7] has suggested a 13-point scale ranging from positive certainty (certainly true) to negative certainty (certainly false), where the different levels of the scale designate different levels of certainty about the truth-content of beliefs. According to this scale, you have to be 100% sure about the truth of a belief to characterize it as indubitable. In contrast, if you characterize a belief as probable, it is justified without being evident, obvious or certain. Such a scaling makes explicit what the word "tentative" implicitly suggests: knowledge comes in degrees rather than absolutes. Our current knowledge may be overturned in the future; the levels of the scale reflect our expectation of this happening, and this in turn determines which term to use to describe the knowledge status of the belief.
Accepting that knowledge about risk is conjectural and scalable in nature may be an inviting move to preserve the epistemological status of risk analysis. However, for the move to be truly supportive, the distinctions on the scale have to be measurable. There must be some evidence proving that probable, indubitable, etc. is the correct way of terming the risks. The tricky thing we cannot know for sure is whether we have made a correct assessment of the likelihood of the truth of the belief, thus whether we have termed it correctly. We may fail. Moreover, even if we have not currently failed, we may fail to make necessary adjustments in the future when new evidence requires changes in the location of the belief on the scale. These brief remarks suggest that introducing a knowledge scale could in fact complicate rather than simplify the situation, as we would have to deal not only with the uncertain features of risk beliefs, but also with the uncertain features of higher-order beliefs involving risky guesses about the likely domain of risk beliefs.
Worse still, what rationale would underpin the scale adopted? For example, what would the reasons be for setting the level of certainty at 65% rather than at 75% for a belief to be labeled as probable? The problem of rationalizing scales of knowledge is a spin-off from the more fundamental problem of rationalizing any line of demarcation between knowledge and non-knowledge when that line falls short of absolute truth. Obviously, rationalization requires appeal to some norm. However, it is hard to see what such a norm should be without becoming trapped in the pitfalls of arbitrariness. One may find an indubitable conception of knowledge too restrictive. But there is at least a rationale for such a view, according to which the ultimate characterization of a belief as knowledge coincides with the ultimate requirement that the truth of the belief be conclusively proven. Once knowledge is delinked from the requirement of conclusive justification, we have to locate the threshold of acceptance elsewhere. Clear-cut alternatives, however, are not obvious. When shifting from a strong towards a weaker concept of knowledge, we have to struggle with the fact that no alternative level of acceptance rises to prominence [6,27]. Furthermore, weaker concepts of knowledge offer no sensible prescription for how to choose between two rival beliefs when both signify knowledge. If we have to decide which one to prefer, the obvious choice will be the best justified belief. However, according to weaker concepts of knowledge, we should remain agnostic on the issue. As both beliefs have been granted the highest level of approval, differences in degree of justification are nullified once the threshold of knowledge has been passed [6].
Hence, the problem with lowering the requirement of knowledge is that it leads to the introduction of an acceptance zone where we are prevented from discriminating between competing beliefs that are differently located in the zone. Although we clearly prefer the most justified belief, weaker concepts of knowledge fail to identify the reasonableness of such a choice once both of the beliefs have been granted the status of knowledge.

Pragmatic Justification
The preceding section has shown that, although the path is not completely blocked, we nonetheless enter rather thorny territory if our strategy for refuting critics of risk analysis is to demonstrate that such analysis is highly rational in that it generates knowledge. Rather than end the debate, this way of refuting skeptics like Taleb only seems to add further fuel to it. A proper question to ask is therefore whether we have been taking the right approach. Are there ways of rationalizing risk analysis that do not involve a commitment to epistemic rationalization? I think so. The starting point for such an alternative route to justification is the recognition that justification is not only a question of deduction but also of reflection. We may become conscious of risks without necessarily knowing them. By focusing on reflection rather than knowledge-production, it follows that the domain of acceptability is significantly enlarged, making justification a lot easier. We no longer need to be fixated on the idea that the refutation has to be epistemological in nature to effectively refute skeptical thoughts about risk beliefs. Assuming that the overriding aim is to counter the claim that risk analysis is senseless, doubts about the epistemological merits of risk analysis lose their force, since the justification of risk beliefs is no longer tied to the stricter standards of knowledge. For strategies of refutation to work, it suffices to show that risk beliefs pass the lower standards of soundness.
What makes this alternative line of defense worth pursuing is that it coincides with a philosophical perspective originating in the works of Aristotle. In the Nicomachean Ethics [1], Aristotle introduces a distinction between scientific knowledge and prudence in order to distinguish between the intellectual virtues involved in knowing and those involved in acting, respectively. As he stresses, knowledge is universal in nature. A conclusion is the outcome of a deductive inference originating in some invariant laws or principles. This makes knowledge non-deliberative, as "no one deliberates about things that cannot be otherwise" [1]. Prudence, in contrast, is particular in nature. It is "action about things that are good or bad for a human being" in a given situation at a given time. Hence, prudence is deliberative, "since the things achievable in action permit of being otherwise" and humans must be "able to deliberate finely" to be successful in their acts. Briefly put, whereas knowledge is explicating, prudence is amplifying. While in the former case we learn to recognize what is already on the table, in the latter case something new is brought to it.
One may question Aristotle's strict definition of knowledge as indubitable truth in which correct reasoning amounts to demonstrative reasoning of beliefs. Our point of concern is his claim that humans entertain two sets of beliefs, one theoretical, referring to what there is in the world, and one practical, referring to how to act in the world, and that the justification for the two sets differs. Rational acceptance of theoretical beliefs means verification, whereas rational acceptance of practical beliefs means substantiation. In the former case, justification has to be conclusive, while, in the latter case, it just has to be supportive. The importance of the science/prudence distinction lies in the fact that failure to prove the knowledge-content of a belief does not automatically make it rationally unbelievable. On the contrary, it is still highly rational to stick to the belief if it succeeds in legitimating ideas of how to act, even if it fails to legitimate ideas of what there is. Failure to satisfy the stricter requirements of knowledge only makes it epistemically unbelievable. It is still pragmatically believable.
As risk analysis deals with acts rather than facts, the differentiation between the two varieties of cognition is of considerable interest. In our discussion so far, the terms of reference have been the search for knowledge. However, as the writings of Aristotle suggest, the proper term of reference is the search for how to act. We therefore commit a logical fallacy if we reject analysis of risk as senseless due to our inability to generate knowledge about risks. Since the purpose is to envisage a proper course of action, the outcome need not be some ultimately justified beliefs. In order for the intellectual virtue of analyzing risks to be legitimate, it is sufficient that the outcome is some adequately rationalized beliefs.
To make further progress in the development of a pragmatic justification for risk analysis we nevertheless need to be somewhat more specific about the kind of justification that characterizes justifications of acts. Here, Herbert Feigl's [12] proposal for distinguishing between justification by vindication and justification by validation may be particularly helpful. Without explicitly referring to Aristotle, this distinction of Feigl's nicely corresponds to the distinction between episteme and prudence outlined in the Nicomachean Ethics. While justification by validation means derivation of beliefs from some basic principles or ideas, justification by vindication involves the selection of some basic principles or ideas for the derivation of beliefs. Hence, whereas validation is inferential, vindication is foundational. In the former case, justification shows how a belief conforms to some pre-defined norms of inferences. In the latter case, justification amounts to rationalizing the norms of inferences used to assess the content of a belief.
At first glance, nothing seems to have been gained by Feigl's distinction. We are still struggling with the problem of how to rationalize the merits of risk analysis. A little reflection nevertheless suggests that Feigl has provided some valuable insight regarding where to look to establish such a rationale. Validation amounts to the justification of beliefs in our search for knowledge (justificatio cognitionis). Whether a belief is acceptable or not depends on whether the belief satisfies some predefined standard of approval. Vindication amounts to the justification of beliefs in our search for how to act (justificatio actionis). Here, the value of a belief depends on whether it generates beneficial acts in the realization of our goals [12].
These remarks suggest that pragmatic justification, belonging as it does to the sphere of vindication, means justification in terms of a cost-benefit analysis in which the benefit has gained the upper hand. Henceforth, establishing a pragmatic line of defense consists of showing that analysis of risk is useful in our handling of risks. By posing the question in such a way, supportive answers are not difficult to find. For one thing, analyses of risk are indispensable in providing a proper map of the terrain. If we do not analyze risk, we will not have a map at our disposal that helps localize risks in the terrain. Let me hasten to add that on this point, Taleb strongly disagrees. As our mapping of risks is bound to be flawed and faulty, his message is that it is better to have no maps at all than to have the wrong ones. However, what Taleb does not take into account is how belief-formation is intrinsically linked to decision making. Our decisions relate to the future, and when we start deliberating on how to act, a part of our deliberation is to figure out what the world will look like. We simply cannot do without ideas about the world in our dealings with the world. Whether Taleb likes it or not, beliefs about risk are an indispensable part of any decision.
In view of this, the key point of concern is not whether we possess some beliefs but rather whether our beliefs make sense. It is from this latter perspective that risk analysis serves as a practical device. It helps justify beliefs that otherwise would have been nothing but guesswork. The outcome may be a false belief, but, for the time being, we at least have some reason to believe that it is not false. If no risk analyses are carried out, we are forced to remain agnostic on truth and falsehood. This way of framing the issue helps illuminate the contrastive structure of the topic, that is, its fact-and-foil structure. When justifying risk analysis, it is not the isolated strengths and weaknesses of risk analyses that matter, but the pros and cons of analyzing versus not analyzing risk. This adds a second reason in favor of a pragmatically motivated justification of risk analysis. Most of us will probably prefer justified beliefs to non-justified ones in our dealings with risks.
Advocating pragmatic justification of risk analysis does not mean a rejection of the merits of truth. Truth is still highly valued. However, raising the flag of pragmatism signifies a shift in motivation. Truth is no longer valued for its own sake but rather for its usefulness in spurring purposeful acts. Compared to an epistemic line of justification, truth has shifted character from being an intrinsic to being an extrinsic property of risk analysis. Here we may also note, as Karl Popper [26] stresses, that a practitioner and a theoretician differ in the conditions of their search for truth. A theoretician searching for truth can carry on searching forever. It is in principle a never-ending story that can only come to an end through the theoretician's recognition of the impossibility of fully reaching the truth. A practitioner searching for how to act is in a different situation. There is a pressing need to act, with even inaction constituting a kind of action. Although truth is championed by theoreticians and practitioners alike, practitioners have to deal with the fact that the time at their disposal is limited. Their search for truth has to stop somewhere, and often they have to stop rather quickly. Thus, in accepting a belief they have to make a trade-off between time and truthfulness in which the priority of the former forces a modification of the latter. Rather than searching for true beliefs, the practitioner has to opt for reasonable beliefs and to prefer those beliefs that are closest to the truth given the evidence available at that moment in time.
Some may be tempted to press this line of reasoning even further. Because practitioners are looking for beliefs that work, it does not matter whether the beliefs are true or not. False beliefs may suffice as long as their falsehood does not prevent the decision makers from acting wisely. As I see it, this is to push the argument a bit too far. After all, the success-rate of our acts will be significantly higher if we base our acts on a correct map rather than on a false map. Although completely true beliefs are, to paraphrase Weber [41], nothing but ideal constructs, beliefs that to an increasing degree match the world will also make us increasingly aware of the pleasures and pains in the world. Truth still matters, but truth is no longer the one and only thing that matters.
In a nutshell, then, a shift from an epistemic towards a praxis platform for risk analysis involves two changes. The first change is a modification of the truth requirement. The second change is the inclusion of a normative requirement. Praxis relates to decision making, and decision makers have a duty to act. While theoreticians, as just noted, are under no obligation to reach a final verdict on the truth-content of a belief, practitioners are obliged to reach a conclusion on how to act. This difference makes the standard of appraisal of praxis more complex. For a theoretician, truth is the sole criterion for selection. For a practitioner, truth is only one ingredient, albeit an important one, in a more integral concept of usefulness.
Increases in complexity might appear to run counter to the claim that a shift from epistemic to pragmatic justification of risk beliefs means a lowering of the standards of acceptance. Rather than one clear-cut standard, we now have to struggle with a mixed one. However, this is to miss the point. When locating risk analysis in the domain of praxis, the effect is a widening rather than a narrowing of its justificationary grounding. To act prudently involves a trade-off between truth and timeliness in which the deontological dimension of timeliness serves as a ban on further truth searching if further research prevents timely acts. Herein lies the third element in favor of a pragmatic justification of risk analysis. Pragmatic justification of risk beliefs is rooted in the idea that beliefs about risk are inherently risky, and so the best thing we can do is to make the beliefs less risky. Hence, pragmatism helps rationalize why it is perfectly rational not to strive for ultimate justification of beliefs.
Before ending this section, it should be noted that pragmatic justification is not a novel idea. In The Theory of Probability, Hans Reichenbach [28] introduces the term to demonstrate "the usefulness of the inductive procedure for the purpose of acting". The crux of his argument is that, irrespective of whether the future is predictable or not, we will be better off if we adjust our actions on the basis that the future is predictable rather than on the basis that it is not. If the future is predictable, we can, thanks to our preparations, take advantage of the predictable features of the world. If the future is not predictable, we have acted in vain, but we will be no worse off than if we had not prepared for a predictable world. To me, this contrasting mode of justification is vital to grasping the essence of risk analysis. More than anything else, analysis of risk is instrumental to producing purposeful acts.
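Reichenbach's reasoning is, in effect, a weak-dominance argument from elementary decision theory. The following minimal sketch makes that structure explicit; the payoff numbers are purely hypothetical and chosen only to encode the ordering his argument assumes (preparing pays off if the future is predictable, and leaves us no worse off if it is not):

```python
# Hypothetical payoffs illustrating Reichenbach's dominance argument.
# Rows: our policy; columns: the true (unknowable) state of the world.
payoffs = {
    ("prepare",        "predictable"):   1.0,  # preparations pay off
    ("prepare",        "unpredictable"): 0.0,  # preparations wasted, but no worse off
    ("do_not_prepare", "predictable"):   0.0,  # advantage forgone
    ("do_not_prepare", "unpredictable"): 0.0,  # nothing gained, nothing lost
}

def dominates(a, b, states):
    """True if policy a is at least as good as b in every state
    and strictly better in at least one (weak dominance)."""
    at_least_as_good = all(payoffs[(a, s)] >= payoffs[(b, s)] for s in states)
    strictly_better = any(payoffs[(a, s)] > payoffs[(b, s)] for s in states)
    return at_least_as_good and strictly_better

states = ["predictable", "unpredictable"]
print(dominates("prepare", "do_not_prepare", states))  # prints True
```

Under any payoffs respecting this ordering, preparing weakly dominates not preparing, which is all the argument needs: no probability assignment over the two states is required.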
The proper question to ask when digging into its grounding is therefore: Do risk analyses help produce purposeful acts? Or, to be a little more precise: Is analysis of risk preferable to non-analysis of risk in terms of generating purposeful acts? This way of framing the question clearly suggests an answer in the affirmative. Although our analysis may prove wrong, I strongly believe our prospect of detecting and addressing upcoming risks will, by and large, be substantially improved if we act upon, rather than ignore, analysis of what is to come.

Conclusions
Let me briefly summarize my key points. The topic of my discussion has been the justificationary grounding of risk analysis. Two different platforms for justification have been examined, one epistemic, related to the search for knowledge, and one pragmatic, related to the search for purposeful acts.
According to the standard account of propositional knowledge, knowledge amounts to justified true belief. Hence, epistemic justification of risk analysis amounts to demonstrating that risk beliefs constitute justified true beliefs. However, epistemic justification of risk analysis raises a number of challenges, one of the greatest of which relates to the fact that risk-related beliefs cannot be conclusively justified.
In this article, three different approaches from within have been briefly reviewed, all of them designed to justify the merits of risk analysis within the context of risk analysis as knowledge-producing. As we have seen, all of these approaches are beset with difficulties, the main problem being the status of truth in the search for knowledge. For this reason, the third approach may be the most promising one, as it allows for a modification of the truth condition without calling into question the status of justified beliefs as knowledge. At the same time, once the truth-requirement is no longer absolute, we have to deal with the tricky question of where else to draw the line demarcating adequately justified risk beliefs from inadequately justified risk beliefs.
The multitude of problems facing approaches from within calls for an approach from without, such that the foundation of risk analysis is no longer linked to the concept of knowledge. On the basis of works by Aristotle, I have briefly sketched an alternative platform for justification where the ultimate criterion is the practical cash value of analyzing risk. Unlike with an epistemological approach, in the context of pragmatism knowledge is no longer the sole criterion. The tenet is prudence, and if lack of time prevents us from looking deeper into a matter, it will be perfectly reasonable for us to stop looking into it if we have to act in a timely manner. Hence, justification still matters, but the requirement for justification is lowered, making it a lot easier to justify risk analysis.
Given that foundational issues are challenging issues, a lot of conceptual and philosophical groundwork needs to be done before we can fully engage in a debate about the justificationary grounding of risk analysis. That said, my brief review of the subject clearly suggests that we are heading in the wrong direction if we assess the merits of risk analysis from a purely epistemological perspective. Risk analysis is primarily a tool for promoting reasonable acts, meaning that its merits must also be judged by whether it promotes purposeful acts. In fact, this latter perspective not only has to be included but, as I see it, also seems to be the most promising one in the search for a justificationary grounding for risk analysis as a legitimate analytical activity.