The European Food Safety Authority, although funded by the European Union, operates independently of legislative and executive institutions (Commission, Council, Parliament) and EU Member States. It performs risk assessments that form the basis of scientific advice and communication on risks associated with the food chain.
EFSA has generally had a good reputation, but recent developments cause us concern. The title of the 2018 EFSA Conference, to be held in September, is “Science, Food, Society”, and the main theme of the first day strikes us as odd, and even worrisome: “Where science meets society: putting risk assessment in context”, which is supposedly necessary because “values are becoming more influential than facts in shaping public opinion” and “risk managers need to balance facts and values effectively.” The description goes on in that vein, positing that we are in need of “restoring the credibility of and trust in risk assessment, by placing it in a societal context.”
Although those statements are certainly politically correct, they are logically and scientifically dubious, as they set up a confusion, or even a conflict, between “facts” and “values”. Moreover, one does not, in fact, restore the credibility of, and trust in, risk assessment by placing it in a “societal context”. That does just the opposite.
In spite of convictions to the contrary among deconstructionists, we maintain that evidence, the foundations of which are data and science-informed analyses (risk assessment), should guide the choice of policies to prevent or minimize undesirable outcomes (risk management). Put another way, the search for objective truths remains the necessary premise undergirding sound policy decisions, from the siting of nuclear power plants to the oversight of new varieties of plants or microorganisms.
The insistence of some scholars (mostly sociologists) that scientific results — particularly in risk assessment — are always provisional, value-laden, or even biased, may drive us down the slippery slope of total relativism. (If this seems to be a contradiction in terms, we agree!) At the bottom of that unrestrained cant lies the sort of deconstruction of science practised by Clinton administration Undersecretary of Agriculture Ellen Haas, who had previously headed an anti-technology advocacy group: “You can have ‘your’ science or ‘my’ science or ‘somebody else’s’ science. By nature, there is going to be a difference.” Translation: “I don’t care about data, or the consensus in the scientific community. My opinions are just as valid, and I feel free to bend the evidence according to my political agenda.” Such views epitomise the “post-truth age”.
Daniel Patrick Moynihan famously said that everyone has the right to his own opinions but not to his own facts. In that vein, we applaud an ongoing development in the arena of Science and Technology Studies (STS), where Cardiff University scholars Harry Collins and Robert Evans are attempting to restore some rational boundaries and a sensible division of labour between experts and the wider public in balanced, science-informed, democratic deliberations about policymaking, including, especially, risk management. Collins and Evans theorize a Third Wave of STS, aiming – to put it bluntly – to recover STS scholars from the hangover of the Second Wave, i.e. its confused democratism and its insistence on the involvement of laypeople in risk management, and even risk assessment – a detrimental trend which has been with us for too long.
As for the world of food, we should stick to the basic principles outlined in the Procedural Manual of the Codex Alimentarius Commission, the FAO-WHO international body which promulgates rules on food safety: “There should be a functional separation of risk assessment and risk management, in order to ensure the scientific integrity of the risk assessment, to avoid confusion over the functions to be performed by risk assessors and risk managers and to reduce any conflict of interest.” “Risk assessment policy should be established by risk managers in advance of risk assessment, in consultation with risk assessors and all other interested parties. This procedure aims at ensuring that the risk assessment is systematic, complete, unbiased and transparent.”
The same basic concepts are strongly reaffirmed by the European Scientific Advice Mechanism: “There should be a functional separation between risk assessment and risk management – this is broadly the case in the EU, but it should be more rigorously applied, both at EU and Member State levels.” (p. 8) “The strict separation between risk assessment and risk management is internationally recognised as best practice…This separation is important to avoid real or perceived political influence in scientific processes, to ensure independence and objectivity, and to provide clarity on accountability for decision-making.” (p. 28)
Therefore, the search for evidence must follow recognized, rigorous scientific procedures and protocols, and the risk assessment should not be influenced by the shifting tides of public opinion. EFSA’s chief, Dr. Bernhard Url, has warned about “Facebook Science”, i.e. political decision-makers being offered evaluations arrived at in response to pressure from surveys, referenda, or even online petitions. Those drivers may yield erratic results, inasmuch as they often rely on fake news and “alternative facts” promulgated by self-interested entities or emotion-motivated non-experts. Because Internet websites and social networks are too often sources, or even echo chambers, of misinformation, we dub this phenomenon “Fakebook Science”.
Indeed, politicians seeking easy consensus can elect to reject scientific evidence in favour of the blandishments of influential pressure groups. Put another way, risk managers may reject the scientifically derived opinions of risk assessors. Three egregious examples are: (1) the unscientific regulation virtually everywhere of so-called “Genetically Modified Organisms”, or “GMOs”; (2) the recent debate about the reauthorization of the herbicide glyphosate in the European Union; and (3) the politically motivated EU ban on neonicotinoid pesticides. In all these cases, to the detriment of farmers and consumers, politicians decided either to deny or to ignore the overwhelming scientific consensus, instead opting for over-regulation or prohibition.
These three instances of duplicity and dubious actions are allegedly based on a pillar of our democratic framework – that when a large number of people express their preference for certain options, their will must prevail. But this conviction reveals a profound lack of understanding of the essence of democracy: Majority rule is a necessary but not sufficient condition of democratic collective choices. Otherwise, we risk a tyranny of the majority.
More to the point, science is not democratic: the public don’t get to vote on whether a whale is a fish or a mammal, the temperature at which water boils, or the value of pi. Famously, the epistemologist Paul Feyerabend, on the final page of his book, “Against Method”, voices a formidable boutade that should make any scientist’s skin crawl and blood pressure rise: “It is the vote of everyone concerned that decides fundamental issues such as […] the truth of basic beliefs such as the theory of evolution, or the quantum theory” (emphasis in original).
In this sense, not all “values” are equal: Policy-makers and decision-makers should adhere to both constitutional and scientific principles. The three examples alluded to above suggest some corollary principles of regulation: Similar things – the products of old and new genetic modification – should be regulated in similar ways; the degree of government intrusion (i.e., regulation) should be proportional to the actual risk; governments should attempt to protect not against every conceivable or speculative risk, but against unreasonable risks; and following science-informed risk assessments, risk managers should issue appropriate, legitimate authorizations or approvals.
In the final analysis, the principle to be preserved – and the ultimate goal of public policy – is the economic freedom of a fairly regulated market. That is a value cherished in democracies. Instead, in the three cases explicated above, political and ideological considerations denied both science and economic freedom.