In an age prickling with socio-technological controversy, questions about our reliance on scientific expertise in situations of uncertainty are more pertinent than ever. Indeed, in the 1980s alone, we experienced a succession of public health crises (the contaminated blood scandal, BSE, listeriosis, dioxin) and environmental disasters (asbestos, Seveso, and so on). Our traditional collective risk management model came under serious scrutiny. Various explanations were put forward to account for this model's failures (overestimation of benefits and underestimation of negative externalities, the inescapable mismatch between scientific output and the actionable input required by policymakers, and so on). Meanwhile, another, alternative model came to the fore. Based on the precautionary principle, it urged decision-makers to weigh up hypothetical risks, thus placing expert judgment, impartiality, and representativeness at the heart of these debates.
However, entrusting experts to conduct a comprehensive analysis of the consequences of technological choices is not without its problems. Consumers' associations have raised concerns that innovative technologies that have not been thoroughly tested could expose society to unacceptable risks if the economic benefit is deemed high enough. Conversely, private operators warn that products posing minimal risk could be prohibited on the grounds of a lack of utility, arguing that it should be up to the market to decide (Chevassus 2001). Ultimately, this new approach to analyzing technological decisions leaves us wondering how neutral experts really are when assessing the benefits (and risks) that a new technology presents to society. We might worry that their analyses are not entirely impartial, and that every expert consciously or unconsciously leans toward certain interests—including (and why would they not?) their own (Roqueplo 1997).
So, if we accept that experts will hold different opinions from laypeople by virtue of their greater technological knowledge and scientific expertise, we must also accept that this difference is not necessarily attributable to a failure of objectivity—that is, a tendency to make decisions based on something other than the social utility of the new technology at hand—but potentially to their affiliation with a particular social group and/or scientific community. While an extensive empirical literature on errors of judgment suggests that beliefs held by experts and the general public diverge in predictable ways, a number of studies have proposed that these disparities can be put down to self-serving bias and how it tends to interfere with scientific objectivity. This psychological theory attempts to explain differences in decision-making between individuals by the human propensity to believe that actions and choices that benefit our own group will benefit society as a whole.
At a moment when Europe and the rest of the world are battling multiple public health crises and civil society is growing more and more uneasy about certain technologies espoused by political leaders (in Europe, we might point to the heated and divisive debates surrounding nuclear technology, for example), this article proposes to examine the technological choices of a sample of French experts to determine whether self-serving bias can be detected. We administered a technology outlook survey, based on the Delphi method, to a sample of more than 1,200 experts in France. The survey presented respondents with around 1,150 hypothetical technologies that may emerge in the future. We used empirical methods (a logistic regression model) to test whether respondents' technological choices were dependent on their level of expertise. Here, we positioned ourselves upstream of most existing studies aimed at understanding variations in risk assessments of different technological options. Our goal was to determine whether, at the point at which research agendas and emerging technological opportunities are being outlined, priorities and expectations are aligned across the sample.
Our results indicate that priority setting is influenced by the respondent's own areas of interest, suggesting that expert opinions are subject to forms of bias that challenge the notion of objectivity—the main argument for seeking their advice. Moreover, we can observe a behavioral difference between experts and laypeople depending on the maturity of the technology presented to them. Lay respondents seem just as interested in how innovative ideas are developed and brought to market as in basic research, whereas experts are not. Consequently, a single individual may reach a different conclusion depending on his or her knowledge of the subject at hand. Those who already work in innovative fields tend to be more supportive of emerging technologies, whereas maturity makes no difference to those with little relevant background knowledge. This would appear to support the hypothesis that self-serving bias is at work.
In conclusion, by upholding the hypothesis that expert judgments are affected by self-serving bias, this study suggests that the collective irrationality of certain technological choices is not necessarily a reflection of individual irrationality, but rather of a behavioral bias that stems from sectoral interests: experts themselves, while rational actors, may not always be objective. We would venture that all of our socio-technological debates and controversies mask an undercurrent of tension between groups of experts with differing views, each influenced by their own distinct interests. This urges us to rethink the collective organization of expertise, so we can transcend the limitations of individual expert judgment and ensure that decision-makers have access to sound advice when weighing up technological choices.
At times of uncertainty, we turn to science—but this raises profound questions around choices that pit social interests against technological advancement. We see this in the divisions and controversies between the US and Europe in relation to numerous environmental and public health concerns, one conspicuous example being genetically modified organisms (GMOs) (Joly and Marris 2001). Today's debate essentially revolves around different understandings of risk management and technological innovation, and of the role that science and expertise should play in these matters.
Historically, the relationship between science and politics has been governed largely in accordance with the technocratic model (Millstone and van Zwanenberg 2002). To ensure an objective analysis of the risks associated with a given technological choice, policymakers rely exclusively on scientific advice. The general public, so this reasoning goes, do not make rational choices. Laypeople (i.e., nonexperts) are prone to a variety of cognitive biases—such as the small numbers fallacy and availability bias (Rabin 1998)—do not understand probabilistic reasoning (Tversky and Kahneman 1981), and lack the general cognitive capacities to understand and analyze the situations put before them. In this context, policy decisions can (and must) be made solely on the basis of scientific judgment (Weingart 1999), held up as the gold standard for neutrality and rationality. Decision-makers can then proceed on an evaluation of concrete risks (conducted by scientific experts) and not merely perceived risks (identified by society at large). [1]
Risk evaluation is deemed to be the sole preserve of scientific experts. Meanwhile, the technocratic model also tells us that responsibility for analyzing the potential benefits of new technologies and for managing any associated risks should lie with economic decision-makers (companies with an interest in that technology) and/or political actors (on behalf of citizens affected by it). It is therefore managers who are entrusted with evaluating a new technology's acceptability and deciding whether it should be waved through or stopped in its tracks. Their assessments must be based on clear, unambiguous scientific data.
This traditional risk management model came under serious scrutiny in the 1980s, when we experienced a succession of public health crises (e.g., the contaminated blood scandal, BSE, listeriosis, dioxin) and major environmental disasters (asbestos, Seveso, and so on). It became apparent that managers had underestimated the potential repercussions of certain technological developments. There are a number of plausible explanations for these failures. First, we might conclude that allowing corporations and the market to rule on the benefits of emerging technologies resulted in an overly rose-tinted view, minimizing or even concealing (in the case of asbestos, for example) the problems and negative externalities they could pose for local communities. Thus, decision-makers' individual rationality became divorced from collective rationality, due to a divergence of interests between experts and the rest of society. Second, it has often been argued that the technocratic model assumes science to be capable of producing unambiguous conclusions that are immediately intelligible to the general public and directly transferable to policymaking. A number of studies leave us with the impression that the rhythms of scientific research and technological innovation have recently fallen out of sync, resulting in scientific controversy and disputes and hampering an objective, case-by-case assessment of an emerging technology's purported benefits (Kourilsky and Viney 2000). As a result, managers may have inadvertently allowed scientific findings of dubious validity to crowd out contradictory evidence. Over time, the positivist view of science has been displaced by a more constructivist analysis (Latour 1988). We no longer see science as a method for revealing universal truths, independent of the social system in which they were produced. Rather, we see it as a social construct (Beck 1992), placing question marks over the scale and universality of benefits promised by a new discovery or technology.
Regardless of the explanation to which we subscribe, it is clear that there is a pressing need for a change of approach. Evaluating the social benefits of various innovative technologies developed by science should be a task for individuals capable of analyzing their (possibly opposing) effects across the entire population. It is then up to the decision-maker to determine which option (and therefore which interests) is to take priority. That is why an alternative model, based on the precautionary principle, has come to the fore, urging expert decision-makers to give more consideration to hypothetical risks. In this model, the legitimacy of expertise rests on the premise that "the conscientious judgment of those recognized as competent in a given domain is society's most credible foundation for action" (Roqueplo 1997, 21). [2] However, entrusting experts to conduct a comprehensive analysis of the potential consequences of technological choices is not without its problems. Consumers' associations have raised concerns that innovative technologies that have not been thoroughly tested could expose society to unacceptable risks if the economic benefit is deemed high enough. Conversely, private operators warn that products posing minimal risk could be prohibited on the grounds of a lack of utility, arguing that it should be up to the market to decide (Chevassus 2001). Ultimately, this new approach to analyzing technological decisions leaves us wondering how neutral experts really are when assessing the benefits (and risks) that a new technology presents to society. We might worry that their analyses are not entirely impartial, and that all experts consciously or unconsciously lean toward certain interests—including (and why would they not?) their own (Roqueplo 1997).
While it may seem impossible to predict whether the technologies favored by experts will produce the optimum social outcome in the future, one way around this difficulty in the short term is to establish whether their choices betray a bias toward their own group. Let's say we accept that experts will hold different opinions from laypeople, not only by virtue of their greater technological knowledge and scientific expertise, but also because technology specialists are often called upon to give an opinion on subjects at the very outer edge of scientific discovery, thus "frequently voicing a belief informed by knowledge and experience rather than truth" (Comets 2005). However, this difference is not necessarily attributable to a failure of objectivity—that is, a tendency to make decisions based on something other than the social utility of the new technology at hand—but potentially to their affiliation with a particular social group and/or scientific community.
While an extensive empirical literature on errors of judgment suggests that beliefs held by experts and the public diverge in predictable ways (on the disconnect between how economists and the general public view the economy, see, for example, Blendon et al. 1997, Caplan 2002), a number of studies have proposed that these disparities can be put down to self-serving bias and how it tends to interfere with scientific objectivity. This psychological theory attempts to explain differences in decision-making between individuals by the human propensity to believe that actions and choices that benefit our own group will benefit society as a whole. Indeed, "considerable empirical evidence shows that people tend to accept positive (as well as normative) beliefs slanted to serve their self-interest" (Caplan 2002). Scientists' overriding interest is securing funding for their areas of research. We might therefore suppose that a skilled expert working in a particular research field is likely to support a decision that benefits that field, even if it stretches rationality. This may lead to errors of judgment despite his or her extensive subject knowledge (Babcock and Loewenstein 1997). In this scenario, the scientific expert is no longer neutral.
At a moment when Europe and the rest of the world are battling multiple public health crises and civil society is growing more and more uneasy about certain technologies espoused by political leaders (in Europe, we might consider the heated and divisive debates surrounding nuclear technology, for example), this study proposes to examine the technological choices of a sample of French experts to determine whether self-serving bias can be detected. This is no small issue. The fact is that, if scientific experts are not neutral, there ought to be a debate on involving the general public in technological choices through democratic means, and on the optimum composition of expert committees. [3]
We administered a technology outlook survey, based on the Delphi method, to a sample of more than 1,200 experts in France. The survey presented respondents with around 1,150 hypothetical technologies that may emerge in the future. We used empirical methods (a logistic regression model) to test whether respondents' technological choices were dependent on their level of expertise. The main original contribution of this research, then (besides the large number of observations gathered), is that it allows us to study potential differences in opinion at the point when possible future technologies are being posited, i.e., "upstream" of any consideration of the social acceptability of the risks involved. Here, we are positioning ourselves upstream of most existing studies aimed at understanding variations in risk-related judgments, with a view to determining whether, from this early stage when research agendas and emerging technological opportunities are being outlined, expert and lay respondents come down on the same side. This will allow us to ascertain whether their priorities and expectations with respect to new technologies are in alignment or, if not, to offer an additional basis for explaining the lack of mutual understanding between the two groups.
The remainder of this article is set out as follows. The first section explains the definition of "expert" and "expertise" adopted for the purposes of this research. The second section describes the behavioral model and Delphi database used in the empirical analysis, and the third section discusses our econometric parameters and overall findings. Finally, we conclude by placing these results in a broader perspective.
Knowledge and expertise
Before getting into the question of expert impartiality toward new technologies, we must first be clear about what we mean by "expert" and "expertise" in this context. Paradiso (1996, 3) defined an expert as "an individual… whose legitimacy is not self-proclaimed but conferred by some competent authority; he or she is chosen on the basis of recognized knowledge and skill; and his or her work… is aimed at providing the authority with the necessary information to reach a judgment or decision." This definition highlights the importance of knowledge both in the conferral of expert status and in the practice of expertise itself. It is recognized knowledge that guides both the expert's appointment and subsequent behavior. In the literature on the differences between judgments made by experts and laypeople, the traditional view is that an expert is defined by the level of knowledge he or she is recognized or assumed to possess, often approximated by a certain level of education or professional position (Lazo, Kinnell, and Fisher 2000, Savadori et al. 2004, Thomson et al. 2004). However, calling someone an expert on the basis of technological knowledge alone opens the door to a range of critiques. On a conceptual level, expertise is not synonymous with experience (Collins and Evans 2002). On an empirical level, Bolger and Wright (1994), based on a review of more than sixty studies of the standard of expert evaluation, identify two serious flaws with the use of such a broad definition.
– Established experts are assumed to have strong specialist skills, extensive scientific knowledge, and certain internalized rules and "good practices" that allow them to apply their knowledge to real-life situations (Shanteau 1992). However, it is not unusual for such experts to be asked about other fields close to their own specialism, much to the detriment of their predictive acuity (Rowe and Wright 2001).
– Nor is it unusual for experts to vary significantly from laypeople in terms of sociodemographic characteristics (Savadori et al. 2004), making sampling bias a real concern.

Based on the literature, then, it seems that comparing expert and layperson response profiles and, subsequently, drawing conclusions about predictive success, is far from straightforward.
For this reason, we chose to adopt a narrower definition for our own analysis. Specifically, all of the experts included in our sample were selected purposively by a supervisory authority. In this respect, they conform to Paradiso's (1996) definition. The selection process was a collaborative effort between the French Ministry of Higher Education and Research (MESR), the market research organization SOFRES, and the Bureau d'Économie Théorique et Appliquée de Strasbourg (BETA) (Strasbourg Bureau of Theoretical and Applied Economics). The sampling frame was provided by the Télélab (MESR) and France Technologie datasets, covering the public and private sectors respectively. In total, 3,388 experts were invited to take part. We received usable responses from 1,253 individuals (a response rate of around 37 percent).
Next, in order to address the two critiques described above, we determined that to be classed as an expert, in addition to being certified by a supervisory authority an individual must be professionally active in the field of study relevant to each question. Through our Delphi survey (see above), experts were asked about a range of subjects falling under their general research field and not only those in which they had specific expertise. Thus, a life sciences expert might be asked to give a view on anything from the molecular processes involved in brain development and growth to the use of novel genetically modified crops as food. Each expert was therefore asked to rate his or her own knowledge of the relevant subject. This provided us with a self-assessed competence measure for each of the technological fields encompassed by the study. Any expert rating their subject knowledge as "very substantial" is likely to be actively involved in specifically relevant or closely related research. A rating of "substantial" suggests that the expert has been involved in such research in the past and continues to follow new developments closely by associating with researchers or reading relevant publications. A rating of "limited" indicates that the expert does little more than read articles in the general or academic press or keep in touch with a handful of specialists. A rating of "nil" tells us that the expert has no knowledge whatsoever of the field in question. Our study begins from the hypothesis that only those with "substantial" or "very substantial" knowledge of the relevant subject can be classed as genuine experts. This means that individuals could be treated as experts on their own specialist fields but as laypeople when asked about subjects beyond their primary interests.
Out of 78,486 observations (subject–expert pairs) gathered, only 10 percent came from experts who considered themselves competent in the relevant area (indicated by a knowledge rating of "substantial" or "very substantial"). The majority came from individuals with no knowledge of the subject on which they were asked to express a view. Table 1 below presents the aggregated statistics for self-reported knowledge and the typology of expertise adopted in our analysis.
Table 1. Self-reported knowledge rating (experts from all fields)

| Experts' self-reported knowledge rating | Percentage of all observations (%) | Classification |
|---|---|---|
| Very substantial | 3% | Expert |
| Substantial | 7% | Expert |
| Limited | 27% | Novice |
| Nil | 63% | Layperson |
Our chosen methodology allows us to counter both the first and second critiques: individuals are not classed as experts unless they work or have worked in the relevant area, and the same individual might be classed as an expert on one subject and a layperson with respect to another. This means that the response profiles of experts and laypeople are not biased by differences in institutional affiliation or age. Furthermore, the laypeople in our analysis are, in reality, specialists (their level of education and reasoning skills are both more advanced than the average for the general population) being asked about subjects that they know very little or nothing about. In this context, the existence of self-serving bias will be all the more interesting to ascertain, given that any behavioral differences attributable to education level should be negligible. This means that the only difference between experts and laypeople is the amount of information they have at their disposal. [4] Traditional economic models of belief and preference formation show that more information reduces response variance but should have no effect on the average (Pesaran 1987, Sheffrin 1996).
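By way of illustration, the sketch below shows how such an observation-level reclassification could be coded. The data and column names are our own hypothetical stand-ins, not those of the original dataset.

```python
import pandas as pd

# Hypothetical observations: one row per expert-subject pair. The same
# individual (expert_id 17) is classed differently depending on his or her
# self-assessed knowledge of each subject.
obs = pd.DataFrame({
    "expert_id": [17, 17, 17],
    "subject_id": [204, 731, 1048],
    "knowledge": ["very substantial", "limited", "nil"],
})

# Classification rule from Table 1.
labels = {"very substantial": "expert", "substantial": "expert",
          "limited": "novice", "nil": "layperson"}
obs["class"] = obs["knowledge"].map(labels)
print(obs)
```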
Based on this original definition of "expert" and "layperson," we proceeded to test for the existence or nonexistence of self-serving bias, starting from the following premise: if an expert's opinion of a given technology's importance for the future is independent of his or her degree of subject-specific knowledge, we can reasonably conclude that a) experts and laypeople have similar perspectives on the future, and b) experts do not display self-serving bias. Conversely, if knowledge has a significant effect on opinion, we can conclude that, from the point when future scenarios are being outlined (prior to the risk assessment stage), experts tend to favor options that support their own interests—substantiating the hypothesis of self-serving bias.
Determinants of expert opinion: Theory and indicators
The data used to test these hypotheses empirically were derived from a technology outlook survey, based on the Delphi method, carried out in France between 1994 and 1995. [5] The Delphi method is specifically designed to capture subjective evaluations. Experts were asked for their unrehearsed opinions on a range of technological innovations, before moving on to a second survey where they had the opportunity to revise their views based on the results from the first phase. This two-phase process allowed us to tease out both incongruous behavior (in the first phase) and more consensual scenarios (in the second phase) (Munier and Rondé 2000).
We selected 1,150 subjects pertaining to future technological developments that were being researched in France at the time or were expected to be by 2020. Each subject was allocated to one of fifteen broad technological fields, then presented to a sample of experts (engineers, business leaders, and researchers) representing both public and private research. More than 78,000 initial observations were collected and passed to the BETA laboratory for analysis. The data covered the expert's level of knowledge of a given subject, how important he/she viewed it to be, the probable time frame for product realization (ranging from before 2000 to after 2021), the accuracy of the expert's prediction, whether he/she believed that international cooperation would be needed to bring the project to fruition, and the potential obstacles ahead (technological, regulatory, financial, informational, and organizational).
Respondents were drawn from three distinct backgrounds: business (mostly R & D professionals), university research, and major public research bodies. While our main objective was to test the hypothesis of self-serving bias, various authors (Granovetter 1992, Sanbonmatsu et al. 1997, Rabin 1998) have shown that individual judgments are highly dependent on socioprofessional context. For this reason, we introduced certain variables into our empirical study to classify respondents by professional background, so we could control for their impact on opinion. Table 2 below groups respondents' professional backgrounds into four categories.
Table 2. Institutional affiliation (experts from all fields)

| Organization type | Percentage of sample (%) |
|---|---|
| Private companies | 40% |
| Public research bodies | 22% |
| Public agencies | 4% |
| Universities | 31% |
| Not stated | 3% |
To identify each respondent's professional background, our Delphi survey also collected data on his or her specific role. It divided respondents into two categories: those involved in R & D activities and everyone else. Overall, the sample was noticeably skewed toward R & D professionals (70 percent).
We also gathered information about the size of each expert's home institution, as summarized in Table 3. We can observe that experts from smaller entities are not underrepresented, since 54 percent worked for an organization with fewer than 500 employees.
Table 3. Number of employees at respondent's home institution (experts from all fields)

| Number of employees | Percentage of sample (%) |
|---|---|
| Fewer than 10 people | 10% |
| Between 10 and 100 people | 24% |
| Between 100 and 500 people | 20% |
| Between 500 and 2,000 people | 22% |
| More than 2,000 people | 24% |
Furthermore, in line with Perkins (1981) and Chaiken and Maheswaran (1994), who argue that life experience has an impact on individual judgments, we also controlled for respondent age, used here as a proxy for professional experience. Table 4 below presents a breakdown of the sample by age bracket. Note that two-thirds of respondents were between forty and sixty years old and just 3 percent were under thirty.
Table 4. Respondent age (experts from all fields)

| Age group | Percentage of sample (%) |
|---|---|
| < 40 | 23% |
| 40–60 | 65% |
| > 60 | 12% |
In addition to these individual factors, we included a number of control variables to categorize each technology presented in the survey. This allowed us to account for Sanbonmatsu et al.'s (1997) finding that value judgments on a given subject are positively determined by the contextual information available on that subject compared to others. For our purposes, we might assume that the weight of available information on the technology at hand, relative to the expert's overall knowledge of the relevant field, will influence his or her assessment. A priori, we might also assume that the more mature a technology (i.e., the closer to commercial release), the lower the uncertainty and, hence, the greater the available information. Our Delphi instrument grouped technologies for future research into four phases in the R & D cycle. Thus, options presented to respondents could be at any of the following stages:
- scientific exposition: the principle or phenomenon is explained scientifically at a theoretical level;
- proof of concept: technical research and development have led to a first working application of the idea (such as a prototype);
- first commercial application: the new technology is shown to be commercially viable; the first product to emerge from the R & D process is brought to market;
- widespread use: products developed on the back of the technological advance are manufactured and sold en masse.
Table 5 below summarizes the breakdown (by percentage) of innovations by development phase.
Table 5. Technological innovations by development phase (%)

| Development phase | Percentage of technological innovations |
|---|---|
| Scientific exposition | 16% |
| Proof of concept | 37% |
| First commercial application | 18% |
| Widespread use | 29% |
Finally, while our primary interest is exploring the impact of individual knowledge on scientific judgment, we would concur with the literature on decision-making in conditions of uncertainty: expert judgment is influenced to some extent by both individual characteristics and factors relating to the option being considered. We therefore constructed our model as depicted in Figure 1, paying particular attention to the explanatory power of knowledge level as a factor in individual opinion.
Figure 1. Explanatory variables in expert judgments
Having set out our working hypotheses and the data obtained, we now turn to look at the econometric factors retained and summarize our key findings.
Econometric factors
Model
Our aim was to estimate the probability of an expert ascribing significant importance to a technology, based on a set of explanatory variables relating to the expert him/herself (home institution, age, sex, role, organization size), the technology under consideration (e.g., its technological maturity), or both (e.g., expert i's prior knowledge of subject j).
We therefore carried out a regression analysis using the following model: [6]

$$\mathit{IMP}_{ij} = F\left(\beta_0 + \beta_1 \mathit{AGE}_i + \beta_2 \mathit{ORG}_i + \beta_3 \mathit{EMP}_i + \beta_4 \mathit{ROLE}_i + \beta_5 \mathit{KNOW}_{ij} + \beta_6 \mathit{MAT}_j\right) + \varepsilon_{ij} \quad (1)$$

with

ij = 1 to 58,010 expert–subject pairs, [7]

i = 1 to 1,253 experts,

j = 1 to 1,150 subjects.
An overview of the variables AGE, ORG, EMP, ROLE, KNOW, and MAT is presented in Table 6 below.
As F follows a logistic function, [8] equation (1) can be converted to a Logit-type model. The probability that a respondent will judge an option important is given by:

$$P(\mathit{IMP}_{ij} = 1) = \frac{\exp(x_{ij}'\beta)}{1 + \exp(x_{ij}'\beta)} \quad (2)$$

where $x_{ij}$ is the vector of explanatory variables for pair ij and β represents the estimated coefficient vector. The estimator was calculated using the maximum likelihood method. [9]
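By way of illustration only, the following sketch estimates a logit of this form with Python's statsmodels. Since the original Delphi dataset is not reproduced here, the data frame is synthetic and all column names are our own assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the expert-subject observations (hypothetical data).
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "AGE":  rng.integers(1, 7, n),   # age bracket, 1-6
    "ORG":  rng.integers(1, 5, n),   # organization type, 1-4
    "EMP":  rng.integers(1, 6, n),   # organization size, 1-5
    "ROLE": rng.integers(1, 3, n),   # 1 = R & D role, 2 = other
    "KNOW": rng.integers(1, 5, n),   # self-assessed knowledge, 1-4
    "MAT":  rng.integers(0, 2, n),   # technology maturity dummy
})
# Simulate the dependent variable with a negative KNOW effect for the demo.
xb = 3.0 - 1.0 * df["KNOW"] + 0.1 * df["AGE"]
df["IMP"] = (rng.random(n) < 1 / (1 + np.exp(-xb))).astype(int)

# Equation (2): logit estimated by maximum likelihood.
result = smf.logit("IMP ~ AGE + ORG + EMP + ROLE + KNOW + MAT", data=df).fit()
print(result.summary())
```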
Table 6. Overview of model variables

| Variable | Indicator | Values |
|---|---|---|
| Dependent | IMP (perceived importance of technology) | IMP = 1 if the respondent regards the technology as "important" or "very important"; IMP = 0 if the respondent regards it as "somewhat important" or "not important" |
| Explanatory | AGE (respondent age) | AGE = 1 if the respondent is under 30 years old; AGE = 2 if between 30 and 39; AGE = 3 if between 40 and 49; AGE = 4 if between 50 and 59; AGE = 5 if between 60 and 69; AGE = 6 if over 70 |
| Explanatory | ORG (respondent's organization) | ORG = 1 if the respondent works for a private company; ORG = 2 if a public research body; ORG = 3 if a university; ORG = 4 if a government agency |
| Explanatory | EMP (size of respondent's organization) | EMP = 1 if the organization has fewer than 10 employees; EMP = 2 if between 10 and 100; EMP = 3 if between 100 and 500; EMP = 4 if between 500 and 2,000; EMP = 5 if more than 2,000 |
| Explanatory | ROLE (respondent's job category) | ROLE = 1 if the respondent is in an R & D role; ROLE = 2 otherwise |
| Explanatory | KNOW (respondent's level of knowledge) | KNOW = 1 if the respondent is currently involved in subject-specific research; KNOW = 2 if the respondent has worked in the relevant field in the past and keeps informed; KNOW = 3 if the respondent reads articles on the subject in the general press; KNOW = 4 if the respondent has no knowledge of the subject |
| Explanatory | MAT (maturity of technological innovation) | MAT = 0 if the technology is in the scientific exposition or proof-of-concept (development) phase; MAT = 1 if the technology is in the commercial application phase or already available on the market |
Table 7. Logistic model estimators for the overall sample

| Variable | Coefficient (standard error) |
|---|---|
| Constant | 9*** (0.120) |
| Age | 0.046*** (0.015) |
| Org | 0.189*** (0.015) |
| Emp | -0.058*** (0.010) |
| Role | -0.053 (0.034) |
| Know | -2.92*** (0.020) |
| Maturity | -0.065*** (0.012) |
| N | 58,010 |
| Log-likelihood | -20,143 |
| Chi² | 29,232 |

*** Indicates that estimators are significant at the 1% level.

Results and discussion
Overall sample analysis
As summarized in Table 7, we obtained statistically significant results for five of our variables. First, KNOW was shown to play a significant part in expert judgments. This is the first meaningful result of our study, since it indicates that there is a significant divergence in individuals' opinions depending on their knowledge of what they are being asked about, i.e., between experts and laypeople. Our survey therefore confirms that the phenomenon identified by Furman and Erdur (1999) in the environmental sphere also applies to technology when all research fields are taken together: there is a link between an individual's level of knowledge and his or her views as to the most promising technological innovations. Moreover, the coefficient on KNOW is negative, indicating that the more knowledge a respondent possesses on a given technology (i.e., the lower the value of the KNOW indicator), the more important he or she regards it to be. This is a fundamental finding, since it reveals a tendency for experts to give more weight to fields and subjects in which they themselves are involved—providing the first empirical evidence to support the self-serving bias hypothesis. Experts and laypeople do not necessarily ascribe importance to the same things: how much a respondent knows about the technology in question has a direct influence on this assessment. A gap may therefore emerge between social demand for research and innovation and what scientists are actually supplying.
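To convey the magnitude of this effect, the sketch below plugs the Table 7 coefficients into the logistic formula for an illustrative respondent profile (AGE = 3, ORG = 2, EMP = 3, ROLE = 1, MAT = 0); the profile is our own assumption, chosen purely for illustration.

```python
import numpy as np

# Coefficients from Table 7 (overall sample).
coef = {"const": 9.0, "AGE": 0.046, "ORG": 0.189, "EMP": -0.058,
        "ROLE": -0.053, "KNOW": -2.92, "MAT": -0.065}

def p_important(know):
    """Predicted probability of rating a technology important, varying KNOW."""
    xb = (coef["const"] + coef["AGE"] * 3 + coef["ORG"] * 2
          + coef["EMP"] * 3 + coef["ROLE"] * 1 + coef["KNOW"] * know)
    return 1 / (1 + np.exp(-xb))

print(round(p_important(1), 3))  # active researcher in the field: ~0.998
print(round(p_important(4), 3))  # no knowledge of the subject:    ~0.084
```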
The AGE and ORG variables are positive, showing that both older respondents and those working in the public sector are more likely to give a high importance score to any given option. In other words, older experts or those from a public sector background are more supportive of the investment of time and resources required to make advances in any technological field, while their younger and/or private sector counterparts are much more selective when deciding which fields are worthy of investment and future research.
The EMP variable is negative, indicating that experts from small organizations are more likely to regard an option as important. Despite their more limited resources, this group does not seem any more liable to endorse a particular subset of fields.
As for job category (the ROLE variable), this does not appear to have any effect on opinion. This means that individuals involved in R & D activities do not hold significantly different views from colleagues in other roles (sales, consulting, etc.). The nature of respondents' work is therefore not found to be a differentiating factor between experts and laypeople. More than technical competence, it looks very much like the primary influence on an individual's judgments is the quantity of relevant knowledge they possess.
Finally, the technological MATURITY of each option has a negative effect on perceived importance. Respondents view upstream research in emerging fields (low maturity) as more important than applied research on more established concepts (high maturity).
This first regression model therefore demonstrates that expert judgments are influenced by individual characteristics, as well as those of the options presented to them. We do not claim that this influence extends to variations in perceived risk (which we have not measured here); rather, it bears on the very nature of the kind of technological advance that society deems desirable: individual factors have an effect on the perceived importance of future technological developments, raising questions for those who defend the neutrality of "sound science." It would appear that the priorities identified by laypeople (even well-educated laypeople, such as those in our sample) do not coincide with the kinds of technologies that most interest experts. While it is possible that lay respondents (who are also citizens and taxpayers) are expressing their own deeply held values rather than a relatively well-reasoned analysis of objective data, this first regression model suggests that the disconnect between expert and lay opinion has another explanation: expert objectivity is attenuated by self-serving bias.
To shed further light on these differences and move toward an explanation, we conducted an additional empirical analysis of three subsamples: responses provided by experts, novices, and laypeople.
Analysis of three subsamples with varying knowledge levels
Table 8 presents the results for each subsample. The expert column combines all observations where respondents rated their knowledge level as 1 or 2, i.e., they currently work in the subject area concerned or have done so in the past and continue to take an interest. The novice column relates to observations where respondents rated their knowledge level as 3, i.e., they keep up to date with the subject area by reading articles in the general press. The third subsample groups opinions given by respondents who can be classed as laypeople, having no knowledge of that particular subject. Since each respondent was asked about multiple subjects, his or her responses will fall into different categories. In other words, the same expert could fall into all three subsamples, depending on the question to which the observation relates.
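A minimal sketch of this subsample estimation, reusing the synthetic data frame from the earlier sketch (the split follows the KNOW coding in Table 6):

```python
import statsmodels.formula.api as smf

# Split observations by knowledge class, as in Table 8.
classes = {"experts":   df["KNOW"] <= 2,
           "novices":   df["KNOW"] == 3,
           "laypeople": df["KNOW"] == 4}
for name, mask in classes.items():
    # KNOW is dropped within each subsample, matching Table 8, which
    # reports no KNOW coefficient for the individual knowledge classes.
    res = smf.logit("IMP ~ AGE + ORG + EMP + ROLE + MAT",
                    data=df[mask]).fit(disp=0)
    print(name, res.params.round(4).to_dict())
```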
In this analysis, we find some of the same results repeated across the three subsamples: the ORG and EMP variables are found to be statistically significant in each. Whatever their existing level of subject knowledge, respondents' opinions on future technological development in France are influenced by their professional environment (number of colleagues and organization type).
Moreover, it is intriguing to note that experts and novices display very similar behavior. According to our analysis, novices are more likely to agree with experts than with laypeople. More specifically, both the expert and novice subsamples point toward the same findings for the AGE, ORG, EMP, and MATURITY variables as those reached for the sample as a whole. The only difference to emerge concerns the ROLE variable: the negative coefficient is insignificant for the expert subsample but significant for the novice subsample. This finding suggests that novices involved in R & D ascribe more importance to the options presented than those in other professional roles (consulting, marketing, and so on), whereas this is not the case for experts.
Laypeople stand out from the other groups in two respects: the influence of respondent age and of technological maturity. In fact, it appears that young laypeople, all else being equal, place importance on a broader range of technologies (regardless of field) than their older counterparts, but this is not true for young experts or novices, as shown by the sign of the coefficient for the AGE variable.
Finally, technological MATURITY is not a significant predictor for the layperson subsample. These respondents do not give more weight to innovations at a very early stage of development than to those at a later stage in the R & D cycle. This last point is important, because it implies that laypeople, unlike experts, are just as interested in the development and commercialization of innovations as they are in basic research. Our study thus corroborates Mayer's (2003) work in the United Kingdom. While experts place most value on discovery research, laypeople (even those with a significant level of technical and scientific competence, as in our sample) tend to favor technological innovations that will improve their daily lives, both in the long and short term. This finding also underscores the fact that the same individual might have different opinions depending on his or her subject-specific knowledge. Those who work with a particular emerging technology (and who can therefore be classed as experts) tend to place greater weight on other incipient areas of research (i.e., technologies at the discovery and exploration stages). However, they are less influenced by technological maturity when they do not have a firm grasp of the subject area in question (and are therefore classed as laypeople). Again, this tends to support the idea that self-serving bias is at work.
Table 8. Logit regression results for the three subsamples

| Variable | Experts | Novices | Laypeople |
|---|---|---|---|
| Constant | 1.1402*** (0.2689) | 1.0139*** (0.1377) | -3.0675*** (0.1965) |
| Age | 0.1504*** (0.0407) | 0.0849*** (0.0208) | -0.0801*** (0.0294) |
| Org | 0.2161*** (0.0422) | 0.1716*** (0.0210) | 0.2969*** (0.0302) |
| Emp | -0.0959*** (0.0277) | -0.0418*** (0.0142) | -0.0954*** (0.0215) |
| Role | -0.0594 (0.0876) | -0.2262*** (0.0417) | -0.0502 (0.0756) |
| Maturity | -0.1058*** (0.0320) | -0.1103*** (0.0166) | -0.0111 (0.0200) |
| N | 5,610 | 15,090 | 37,310 |
| Log-likelihood | 5,386 | 18,015 | 11,843 |
| Chi² | 80.93 | 202.24 | 114.38 |

Coefficients with standard errors in parentheses. *** Indicates that estimators are significant at the 1% level.

Conclusion
Following our analysis of some 58,010 observations, our overriding conclusion is that—contrary to Caplan's (2002) findings in a study of economists, and to those of Dahl and Ransom (1999), who worked with Mormon priests—self-serving bias can be shown to influence technological choices among the expert community in France. Indeed, we have seen that respondents' chosen research priorities correspond to their own professional interests (areas of exploratory research within their own specialist field). In other words, the more sophisticated an expert's knowledge of a particular subject, the more likely that he or she will regard that subject as vital for the future of society. The collective irrationality of certain technological choices is therefore not necessarily attributable to individual irrationality, but rather to the existence of behavioral biases linked to sectoral interests. Experts themselves may not be objective in their judgments, lending support to Shanteau's (1992) point that expert status is typically awarded by peers, whereas ideally it should be dependent on the objectivity of one's judgments.
Our results indicate that experts and laypeople (even highly educated laypeople, like the respondents in our study) express different views about which technological innovations will matter most for the future. Still, it should not be forgotten that, as Tesh (1999) observes, laypeople tend to seek expert advice. The little information they possess generally comes from the literature that experts/specialists have chosen to produce. More generally, the lack of agreement between experts and laypeople probably masks a tension between groups of experts with different opinions, pulled in different directions by sectoral interests.
To conclude, if the boundary between scientific fact and value judgment is fading, the notion of "sound science" becoming ever more diaphanous, it might be time for a rethink of the collective organization of expertise. Expertise should transcend the limitations of individual experts; otherwise, decision-makers may be deprived of a sound, reliable basis for technological choices.
Notes
[1] Nonetheless, we would point out that, according to the "social theory of risk" (Krimsky 2003, Marris 1999, Slovic 1987, 1993), the risk assessments performed by the public are indeed rational, even if their rationality departs from the "quantitative" rationality of the standard model. In fact, ordinary people seem to evaluate risk quite successfully. Thus, there is a positive correlation between the number of deaths caused by a particular hazard and the evaluation of risk reached by an ordinary "unschooled" citizen, although people do tend to overestimate low-probability risks and underestimate higher-probability risks.
[2] Translator's note: Quotation is our translation from the original French source. Unless otherwise stated, all translations of cited foreign language material in this article are our own.
[3] For example, the public could be invited to participate in formulating the overarching working hypotheses that scientists should be working on, as is already the case in some European countries (Cuhls 2000). Such a model would bring us closer to the ideal conditions for coconstruction (Callon, Lascoumes, and Barthe 2001), where scientific research and decision-making are informed by an explicit sociological framework and subject to sociopolitical validation, thus establishing a mutually supportive relationship between science and society.
[4] Insofar as individuals are only asked about their own area of research and not others, we can assume that they all share the same basic technical competence and training.
[5] While this study was carried out some years ago, it remains the only empirical investigation relating to emerging technologies undertaken at the national scale in France, and the only one to encompass such an extensive and diverse range of technological fields. Given this scope, it offers a useful basis for exploring decision-making behavior among French experts.
[6] In a second model, we incorporated fixed technological effects in order to account for specificities linked to particular technological fields. We created fourteen dummy variables (one for each field) and each subject was linked to its broader technological field. We then tested the significance of this second model by applying a constraint to force the sum of fixed effects to equal zero. The additional information from the various fields raised the overall significance of the model very slightly, but the estimators remained unchanged. Moreover, we found that, regardless of the technological field in question, the variable KNOW remains significant at 1 percent and retains the same sign as in the model based on equation (1). This confirms our initial hypothesis. Insofar as each field was associated with a highly diverse group of experts (in some fields, respondents from the private sector, younger respondents, and innovations/inventions were underrepresented), our analysis is based only on the predictions for the entire population, all technological fields included.
[7] Note that our sample is smaller than the Delphi dataset. This is because we excluded any incomplete observations, i.e., those where at least one of our chosen explanatory variables was missing. This left us with 58,010 observations out of the 78,486 collected. Likewise, the number of expert–subject pairs (ij) is not equal to the product i × j. In fact, not all experts were asked about every subject.
[8] Alternatively, we could have used a Probit-type model. However, Amemiya (1981) shows that both types of model produce similar results. More specifically, we can derive the estimators for a Probit model by multiplying those of a Logit model by π / √3 ≅ 1.8 (Greene 1990, Ghosh 1991).
[9] As specified by Gourieroux (1984), the log-likelihood function is strictly concave, which guarantees that the maximum likelihood estimator corresponds to the global maximum.