“For Sciences en Marche, there is an urgent need to rebalance the funding of research centers by strongly reinforcing the recurrent payments allocated by research funding bodies, and therefore reducing the proportion of funding attributed by competitive calls for proposals […]. Without a major financial effort, which we would estimate at two billion euros per year, we will see a continuation of the slow decline of French research and higher education and the increasing precarity of staff. The necessary financial means could be obtained through a reform of the Research Tax Credit[1] (Crédit Impôt Recherche)”[2]
“Gathered in Paris on September 19, the bibliometric commissions for Arts, literature, languages, and societies of foreign cultures, made up of presidents of the relevant sections of the National Council of Universities (Conseil national des universités; CNU), presidents of visiting committees, representatives of learned societies, and representatives of the National Center for Scientific Research (Centre national de la recherche scientifique; CNRS) decided not to publish any ranking of journals on the website of the Evaluation Agency for Research and Higher Education (Agence d’évaluation de la recherche et de l’enseignement supérieur; AERES) for Wave D. It was agreed that the determination of those publishing or creating material would be left to the judgment of the visiting committees, who had been chosen for their expertise and were therefore capable of judging in good conscience the quality of a publication or creation, however it might have been funded.”[3]

The extracts cited above bear witness to the protests generated by recent reforms of higher education and research. They are a sign of the acts of resistance provoked by the systematization of certain policy instruments, such as the organization of funding by projects, or rankings, which place research centers and researchers in competition and comparison with one another. Whether they are expressed in the streets, through pronouncements made by bodies representing the scientific community, or in research centers, these protests criticize the perverse effects of these instruments on research work and knowledge production. Research is not exceptional in this regard: together with the health sector, it constitutes one of the front lines of the “new bureaucratic revolution,” [4] which involves the creation of agencies separate from the state that reform the means of funding and evaluating professional activities. In each of these sectors, these reforms have provoked forms of resistance that are visible in the street, online, or in the press.

From very early on, the analysis of forms of resistance was integrated into the scope of research on policy instruments. Michel Foucault opened up this field of study as early as 1978, insisting that governmentality and the instruments that operationalize it always encounter resistance, as does any form of power. [5] This invitation to include forms of resistance in the analysis of policy tools was renewed by Pierre Lascoumes and Patrick Le Galès. [6] Since then, these forms of resistance have been documented. [7] They can take the form of gaming tactics, which aim to manipulate figures and thereby reduce the ability of those instruments that rank or measure activities to govern behavior. [8] They can also be expressed through stalling practices, which limit the progress brought about by the instruments. [9] They can sometimes even succeed in subverting the instruments and placing them in the service of activist projects engaged in contesting the conditions of governance. [10] These forms of resistance are not distributed equally: they are often the work of actors who hold dominant positions, as we can see from the opposition to streamlining tools in hospitals [11] and the circumvention of school catchment areas. [12] Yet much remains to be done in order to discover the forms that they take, the actors who participate in them, and the influence that they have on the instruments themselves: ten years after the publication of the book that launched, in France, the program of research on policy instruments, Pierre Lascoumes and Jean-Pierre Le Bourhis noted that “the question of forms of resistance is one of the less observed and less analyzed effects of techniques of government.” [13]

Two questions seem to us to be particularly unexplored: that of the emergence of forms of resistance, and that of their political consequences for the architecture of the instruments. Part of the interest of work on forms of resistance lies in emphasizing that the actors involved have room for maneuver when faced with the administrative mechanisms they encounter in their everyday lives. Resistance is thus one means of appropriation [14]: it makes it possible to counteract the instrument’s intention to direct behaviors by attempting to limit its effects. However, these forms of resistance usually appear as an automatic response provoked by these mechanisms, which obscures the question of their emergence. The political consequences of these forms of resistance also remain insufficiently studied: although protests are often examined as they unfold, they are rarely considered in terms of their ability to affect the architecture of instruments and to take the fight to the arenas in which those instruments were made. The ability of these forms of resistance to repoliticize the instruments, and even to reform them, remains obscure.

In order to explore these two questions, this article will focus on protests, that is, forms of resistance that publicly condemn the instruments, propose alternatives, and make use of collective mobilizations, such as rallies, petitions, and spontaneous events, that call the instruments into question in the public sphere. [15]

This analysis considers the case of research policies, whose particular characteristics offer the opportunity to examine the origins of protests and their influence on instruments. In fact, since the mid-2000s, this sector of public action has been marked by the extensive use of instruments of competition and comparison, such as competitive calls for proposals and rankings. Although they were put into wide use in the 2000s, these instruments were not new, and had sometimes been employed in the French research system well before the neo-managerial turn in public action. However, it was only in the mid-2000s that a critique of these instruments took organized form. This delay between the creation of these instruments and the emergence of protest movements allows us to bring to light the mechanisms that transform a discreet, previously overlooked instrument into a contested instrument. Some of these critiques were actually taken into account and resulted in adjustments being made to the instruments, or even their complete cancellation. Others, in contrast, were ineffective. Research policies therefore allow us to document the varied effects of critiques of these instruments.

We will examine the emergence of the critiques and their influence by studying the interrelated dynamics of social movements and policy instruments. Although we will pay close attention to protest movements and their consequences, [16] these are not sufficient in themselves for understanding how an instrument comes to be contested, nor the conditions in which critiques can influence the mechanisms of public action. Studying both the development of protests and the trajectory of instruments will allow us to show, on the one hand, that the instruments often precede the critiques and that the emergence of the critiques follows changes in the uses of the instruments. On the other hand, this combined analysis makes it possible to emphasize that the critiques can play a role in the changes made to instruments, but that their influence also depends on the characteristics of the instruments, the resources they allocate, their dissemination, and their place in governmental and professional activities.

To pursue these aims, we will first show that the emergence of critiques of project-based funding and rankings was preceded by transformations in the uses of these instruments that exacerbated the inequality of the distribution of symbolic and financial credits. This created fertile terrain on which critiques could emerge to call the instruments into question. Next, an analysis of the effects of critiques will show that they result in a repoliticization of instruments, but that they do not always succeed in reforming them. The differing abilities of protests to affect, or even remove, the instruments depend on the characteristics of both the social movements and the instruments.

Methodology: Two studies of the reforms of the instruments of research governance

To support these investigations, we draw on two studies, carried out by the authors of this article, examining project-based funding and the tools of research evaluation. These two studies were carried out independently of one another. Although protests were not initially their focus, the importance of these protests was brought to light by the work done in the two respective fields.
The data relating to these protests and their effects were then collected more systematically, from a comparative and longitudinal perspective. Indeed, by viewing the instruments over the long term, it was possible to perceive the conditions of emergence of protests, revealing the history—now forgotten—of these mechanisms before they had been called into question. The comparative approach allows us to document the recurrent features and particularities that mark the trajectory of the relations between instruments and critiques.
This analysis draws on written, printed, and oral sources. The written sources come from the archives of institutions involved in research governance, such as interministerial bodies, the Ministry of Higher Education and Research, research organizations, and agencies. The printed sources include public declarations published in the press or, for the contemporary period, online, produced by collectives taking a position on the conditions of research governance. The oral sources are made up of interviews conducted with key actors involved in these debates about the instruments (representatives of agencies, protest groups, and research organizations).

From discreet instruments to contested instruments

Whereas critiques of institutions often die out when the work of the institutions becomes routine, the opposite trajectory can be seen for project-based funding and rankings, which became more contested the more they were mobilized. Much like the police protests against traffic enforcement cameras and the reconfiguration of professional work that they entailed, [17] the critiques of these instruments developed progressively within the research community. We intend to explain this delay between the creation of the instruments and the protests that they provoked. It derived first of all from the gradual reconfigurations of the political project implicit within these instruments. The instruments form part of the mechanisms producing inequalities: whether it is a matter of selecting certain research projects—and not others—or of producing rankings that hierarchize the academic system, they reinforce or objectivize inequalities in funding, status, and reputation. This inegalitarian project was not fixed once and for all. Like other instruments, [18] these underwent significant modifications in the course of their various uses by those who created them and those who used them. Until the 1990s, these uses notably aimed to guarantee the instruments’ political acceptability: the instruments’ creators and users strove to limit the inequalities that they gave rise to and to keep these instruments away from the sites of protest that are characteristic of research policies. From the 1990s, the instruments were marked by major reconfigurations.
These reconfigurations did not directly give rise to the protests: although they were informed by these changes, the protests against the instruments were not just an automatic response to an inegalitarian project that had formerly been consensual and had now become contentious; they were also connected to the emergence and success of activists who presented these instruments as problematic and worked to put their reform on the public agenda.

Constructing the political acceptability of the instruments

9The acceptability of project-based funding and instruments for ranking research activity did not appear immediately: these instruments generated debate when they were introduced. These debates, however, remained confined to governmental arenas: the creators of the instruments worked politically to configure the instruments in such a way as to make them acceptable. This work was then taken up by the instruments’ users, who sought to limit what they considered to be potentially harmful effects. Creators and users therefore worked to circumscribe the instruments’ jurisdiction, [19] define the degree of competition that they organized or the publicity of their results, and keep them away from sites of social protest. Taken together, these three restrictions allowed them to limit the controversies that these instruments generated.

One of the first dimensions of this political work consisted in reducing the jurisdiction of the instruments by specifying the research domains that they could govern. For project-based funding, this restriction was brought about in 1961 with the creation of the Scientific and Technical Research Development Fund (Fonds de développement de la recherche scientifique et technique), which was the first instrument supporting the funding of research projects in French research centers. The creation of this instrument of intervention provoked controversy in the world of research governance of the time. [20] The Fund was in fact placed under the responsibility of the new General Delegation for Scientific and Technical Research (Délégation générale à la recherche scientifique et technique; DGRST), which had been created to coordinate research policies and was able, through collaborative actions, to directly support research teams placed under the supervision of ministries or organizations. The ministerial representatives intervening in research and the management teams of organizations, notably the management of the National Center for Scientific Research (Centre national de la recherche scientifique; CNRS), saw the Fund as a potential threat that might encroach on their own research policies. Faced with this potential opposition, the representatives of the DGRST constructed the political acceptability of the Fund by limiting it in three respects. First, the Fund was limited to financing areas of research in which ministries and research organizations had no involvement: it aimed to promote the emergence of new research disciplines or the funding of relations between research and industry. Next, the sums attributed to the Fund were capped at 10 percent of the total invested by the state in civil research.
Finally, the support provided by the DGRST via the Fund was temporary and could not continue indefinitely: the collaborative actions that had been undertaken ultimately had to be transferred to research organizations, result in the creation of a new research organization, or, in the case of failure, come to an end. These three limitations explicitly aimed to guarantee the political acceptability of the Fund with regard to the institutions already involved in the governance of scientific research: the documents presenting the work of the Fund, published by the DGRST, repeatedly stress the supplementary character of this instrument, and the principle that it was not intended to intrude on or substitute for the supervision of research centers.

Initially confined to the DGRST, project-based funding was extended from the mid-1960s. First used by the DGRST, it was then taken up by organizations, which made it an instrument of their own research policies. This dissemination had the effect of extending the jurisdiction of project-based funding: it was no longer reserved for emerging disciplines or projects situated at the interface of research and industry, but was now applied to all disciplines considered a high priority by research organizations or, at the National Institute of Health and Medical Research (Institut national de la santé et de la recherche médicale; Inserm), to all the disciplines within its perimeter. This dissemination increased the social support for project-based funding: the managers of organizations, formerly opposed to the monopoly held by the DGRST, now supported it. The gradual extension into new research domains should not be overstated, however. In practice, the funds mostly remained concentrated in the high-priority research domains and did not spread to all research domains. From the 1970s to the 2000s, the extension of the jurisdiction of project-based funding was therefore gradual and remained limited in scope.

We also find this limitation of the jurisdiction of the instruments in the case of the evaluation of research. This was due to the institutional conditions in which research evaluation was conducted. Until the creation of the Evaluation Agency for Research and Higher Education (Agence d’évaluation de la recherche et de l’enseignement supérieur; AERES) in 2007, the processes of evaluation of research centers were managed by disciplinary or multidisciplinary commissions specific to each of the organizations, mostly appointed by the academic community. The university-based teams, which did not depend on any organization, started being evaluated later, by a body created in 1981 within the ministry, the Scientific and Technical Mission (Mission scientifique et technique), which was also divided into departments by discipline. Each commission then adapted its methods to the practices appropriate to each research domain: while certain sections of the national committee of the CNRS or Inserm had decided to grade research groups, or to base their work on bibliometric indicators, these choices were not imposed on the whole academic community. In this case, the potential controversy was limited by avoiding the imposition of a uniform method of evaluation on all domains, which might have provoked protests wherever the procedures reflected the dominance of modes of evaluation not considered legitimate in a given discipline.

Another mechanism for constructing the acceptability of the instruments was the limitation of the inequalities that they produced, or of the publicity given to them. In the case of project-based funding, this took the form of practices that aimed to limit the concentration of the credits produced by the competitive allocation of funds. The creation of competition did not immediately pose a problem. At the start of the 1960s, the vast majority of requests for funding were successful and, for those that were rejected, the large sums invested by the state in research allowed the unsuccessful research teams to find other subsidies. Throughout the 1970s, in a context of diminishing funding, the competition between teams remained under control. Initially, the funds that could be requested through project-based funding were sometimes capped, like the free research contracts administered by Inserm. The awarding committees then became attentive to the equity of the distribution of credits: as they possessed information on the resources available to the research centers (whether “recurrent” budgets or project-based funds), they generally used this information to avoid concentrating funds on projects proposed by teams that were already among the most abundantly financed. This solution was adopted, for example, by the Carcinogenesis and Pharmacology of Cancer committee (comité Cancérogenèse et pharmacologie du cancer), which, at the end of the 1970s, even made this an explicit principle of its policy. Its president, Georges Mathé, announced that “the number of contracts granted to teams from the large institutes will be limited.” [21] This limitation of the concentration of credits was accompanied by the methods of selection adopted by several committees, which reduced the size of the budgets awarded to successful candidates in order to increase the number of teams selected. By this means, favorable success rates were maintained, even at a time of shrinking budgets.
At the start of the 1970s, members of the committee for Biological Membranes: Structures and Functions (Membranes biologiques: structures et fonctions) explicitly made this choice. Although the committee president recommended a “strict selection process for projects” because of the large number of project proposals, the committee awarded funds to fifty-five out of sixty-two projects [22]: a success rate of more than 88 percent. These mechanisms limited the inequality of the allocation of funds to applicants.

We find this same mechanism in the case of rankings and the attribution of grades to research centers. This time, it was based on the secrecy that surrounded the use of quantification in the evaluation of research. The ministry’s Scientific and Technical Mission began to attribute grades to research centers during the 1980s, but these were not made public, and it was not uncommon for the main interested parties to be unaware of the grades they had been given, or even that such a procedure existed. At the start of the 2000s, there were debates over how these grades should be publicized, but the ministerial cabinet opposed the publication of the quantified evaluations of research centers:


I asked for permission to publish all of the rankings of the doctoral schools on the Scientific and Technical Mission website. And I got permission, it was made public. But for the research centers it was out of the question, and the same for teaching too. You could say there were times when the debate was a bit fierce. […] The cabinet was completely against it. [23]

Until the creation of the agencies, these instruments were therefore kept discreet by being confined within the ministerial services.

A third mechanism for the construction of the political acceptability of the instruments relates to the care exercised by some of their promoters to separate them from sites of social protest, such as that of the condemnation of the precarity of contract workers. At the start of the 1960s, project-based funding was one of the factors driving the increase in precarity: it was not only used to finance temporary research bursaries for doctoral students, but also to pay the technical staff of research centers. Condemned by the unions, [24] these practices were gradually brought to an end. At first, measures were adopted to integrate precarious workers into the staff of the institutes: for example, batches of posts were reserved for “DGRST scholarship holders” in recruitment competitions. From the 1970s, as the academic job market became increasingly limited, members of several committees excluded the funding of salary costs from the sums allocated to applicants. The Budget Directorate itself declared at the start of the 1980s that personnel costs would be excluded from the sums allocated to projects. A ministerial note stressed that “it is clear that for the Budget Directorate […], the FRT [Research and Technology Fund; Fonds de la recherche et de la technologie] aims to facilitate the creation of research programs, excluding the funding of personnel costs and building overheads.” [25]

Until the mid-2000s, rankings and project-based funding therefore kept a “low profile” [26]: their limited visibility was not a consequence of their technical nature, [27] but of an effort to construct their acceptability. When we speak of acceptability, this does not mean that they were considered legitimate and accepted by all, but that their uses did not provoke organized critique in the public sphere. [28] This acceptability was the product of political work carried out by the creators and users of the instruments, which circumscribed their perimeter, limited their publicity and the inequalities to which they gave rise, and separated them from sites of social protest. The production of this acceptability therefore took the form of “processes of reform put in motion, […] in discrete domains, and unaccompanied by any debate about their appropriateness,” [29] just as in the case of the policies by which water services were privatized in Latin America. It did not, however, mobilize a coalition of elites, or a “coherent, militant, and conscious” elite, in the words of William Genieys, [30] dominating research policies over the long term. It was, rather, the product of disparate and separate groups of actors, who acted in pursuit of different interests, but whose activities had in common a desire that the instruments that they produced should not provoke (too much) controversy, or that they should be made compatible with uses that they considered legitimate. The removal of these limitations, in a context of budgetary crisis, called into question this order established between the 1960s and 1990s.

Contesting instruments

The contestation of project-based funding and of rankings took hold at the same time, in the mid-2000s, when a series of legislative documents and investment plans appeared that would profoundly alter the conditions in which research policies would be governed. [31] This questioning of the instruments can be explained by the changes in their uses, but also by initiatives brought about by researchers, who established themselves as activists in order to protest against the direction being taken by research policies.

Project-based funding was the first instrument to shed the mechanisms that had guaranteed its political acceptability. The division of work between the Ministry of Higher Education and Research and research organizations was first reformed between the late 1980s and the early 1990s. After the change of political majority in 1993, the new minister of higher education and research, François Fillon, sought to regain control of research policy, notably in the life sciences. Representatives of the ministerial structure then used project-based funding to exert more influence on the policy of research organizations. The creation of the National Research Agency (Agence nationale de la recherche; ANR) redefined once again the division of the work of governance: the ministerial budgets devoted to project-based funding were concentrated in the new structure and made formally autonomous from the ministry. The ANR’s mandate was to centralize project-based funding, and it thus modified preexisting power relations: struggles took place between the management of the agency, the ministry, and research organizations to establish who had the legitimacy to define the structural orientations of the calls for project proposals. [32]

The creation of the AERES in 2007 had the same effects: it called into question the division of the work of evaluation that had been established between the ministry and the research organizations, which found that a part of their remit had been removed. In effect, the sections of the ministry no longer directly oversaw the evaluation of the research centers for which they were responsible, but instead deliberated on the basis of reports assembled by the visiting committees of the agency. This questioning of the sections’ jurisdiction over the evaluation of research centers was all the more profound in that the AERES was, in its first months, supported by researchers engaged in a reforming project that treated the evaluation carried out in research organizations as a negative example, to be avoided. [33] The contestation of the instruments was therefore initially driven by institutional transformations: the jurisdiction of the collegiate bodies that controlled the distribution of symbolic and financial credits was curtailed, to the advantage of agencies that, by their method of recruiting members, exclusively by appointment, seemed to reestablish the state’s control of research policies. Numerous studies have recorded interviews in which the AERES is depicted as “the strong arm of the ministry,” “Big Brother,” a “state inspection body,” or even “a servant of Satan.”

Another mechanism was related to redefinitions of the project promoted by the instruments. The conditions of use of the calls for project proposals were reformed: representatives of the Ministry of Higher Education and Research argued, from the start of the 1990s, for a more selective and structuring funding policy. These new aims were made explicit in the instructions given to committee members, encouraging them to select only 10 percent of projects. With some rare exceptions, the ministerial directive on selectivity was followed by the members of research committees: at the start of the 1990s, for example, only 15 percent of projects proposed in life sciences programs were selected, far below the success rates of earlier periods.

A parallel process, although it came later, marked the use of ranking instruments. The counting of the total number of publishing researchers, for example, preceded the creation of the AERES by at least ten years: in 1997, the expertise files of the Ministry of Higher Education and Research contained two rubrics, placed side by side: the “declared number of researchers and teacher-researchers,” alongside their “actually active number.” [34] But this information only became public when it was transferred to the AERES. The public revelation of this information transformed the potential uses of these devices: whereas they had previously served only as decision-making aids in the oversight of research centers, their accessibility now allowed anyone to use them as a benchmarking tool, or even for “blaming” and “shaming” the research centers and teams with the lowest performance. The members of the agency were aware that making these categories public gave purchase to critique:


There was a huge problem. The agency’s policy was open access, the evaluations had to be online, with the grades. […] In fact, everyone, right down to the concierge, anyone at all, could see on the agency’s website how my professional activity had been evaluated overall. […] That obviously provoked an outcry. [35]

One final development of the instruments relates to their application to all research domains, an extension initiated by the creation of the two agencies. In the case of project-based funding, the introduction of “open” programs, that is, non-thematic ones, created the possibility for all disciplinary fields to submit research proposals. The “Big Investment Plan” (“Programme investissements d’avenir”; PIA) again extended the instrument’s jurisdiction, recursively: not only did this program itself function through calls for project proposals, but the successful institutions also often used this instrument to enact their own policies. The creation of the AERES similarly led to the systematization of the use of rankings across all research domains. Certain humanities disciplines that had until then been exempt from the practices of ranking and project-based funding were now subjected to these new instruments.

These changes facilitated the emergence of critique, but were not sufficient to bring it about. The challenging of the instruments came about through mobilizations that condemned the direction of research policies. These movements did not come only from preexisting groups, such as the unions mobilized on this matter, but also took the form of collectives focused on this single issue. In 2003, the researchers’ movement that would later become Save Research (Sauvons la recherche) emerged from the Cochin Institute, led by biologists. The researchers instigating the movement were not precariously employed but were in fact well established in the profession: they were heads of teams and research directors in organizations, with established reputations in research. They were not, at that moment in their careers, members of a union. Their engagement was rooted in their daily experience of the precarity of early career researchers and their awareness of the difficulties that certain teams faced in obtaining funding. Although these difficulties were present in all research domains, they were felt particularly keenly in biology because of the recent increase in the cost of research and the very difficult situation facing those entering the academic job market:


[W]hat happened was very personal. I saw all around me, in the lab, young people who were very promising researchers, who were writing their doctoral dissertations and would say: “When I finish my dissertation I’m stopping, because we can see that there’s no future for us now, for people like us.” And it seemed to me that if I didn’t say anything—at that time I wasn’t politically engaged or involved in a union, but in the past I was always concerned with these things—I said to myself, if I don’t say anything that means that, to a certain extent, I tacitly approve of this, that, to a certain extent, I would be complicit with it. [36]

28The establishment of these collectives did not immediately lead to the formation of a visible protest movement. In the case of Save Research, ten months passed between when the state decided to cancel the attribution of credits in the budget for research organizations and the explosion of protests in the street. However, from spring 2003, the biologists of the Cochin Institute organized “show protests,” as they were described by one of their instigators, which aimed to illustrate the precarity of the financial situation of the research centers. In March 2003, in Paris, a coffin, surrounded by mourners and accompanied by Mozart’s Requiem, was carried by researchers up the rue Soufflot towards the Panthéon to symbolize the funeral of research. The event, gathering together a few dozen researchers, attracted the interest of the media, who had been informed in advance by the organizers, and was the subject of several articles in the national press. At the same time, the organizers created a website to support their mobilization and to gather texts critiquing the state of the French academic system. These events remained isolated, however, and did not lead to a wide-scale mobilization in research centers. This mobilization occurred only after the promises made by President Jacques Chirac before the constitutional bodies on January 7, 2004, [37] when he reaffirmed his commitment to investing in research. This declaration, in a context of budget cuts, acted as a catalyst: the online petition launched in December 2003, which had attracted only limited visibility until then, saw a sudden rise in the number of signatories. A few months later, 12,000 researchers had signed the petition, among whom were several heads of research centers and established figures in the profession.

29The challenging of the policy instruments was not at the heart of the demands initially presented to the authorities, which instead concerned an increase in credits and posts for research and the holding of consultations aimed at rethinking the governance of research policies. Only the petition at the origin of the Save Research movement pointed out, fairly obliquely, the consequences of the incentive measures put in place by the Ministry of Higher Education and Research (Ministère de l’enseignement supérieur et de la recherche; MESR) to promote project-based research funding, in particular for early career researchers. However, these movements provided fertile ground for the formalization of critiques of the instruments of funding and research evaluation, as we can see from the occupation of the ANR in 2008, at the end of a day of action organized by Save Research, Save Universities, and certain unions. From that point, other social movements were formed—such as Sciences en Marche, established in 2014—to produce a critical reflection on research policies; besides questioning the proportion of the research budget attributed to project-based funding, the movement’s critiques also targeted the Research Tax Credit. Rankings were also condemned by collectives that grouped together members of learned societies and bodies representing the academic world.

30Understanding the emergence of the critiques requires us to take account of the long history of the instruments and their uses: protests against these instruments were not only a consequence of the creation of the ANR and the AERES. They were the result of the gradual erosion of the political work that had aimed to limit the inequalities produced by these mechanisms. These transformations, which significantly extended the jurisdiction of the instruments and the inequalities that they produced, constituted fertile ground for the emergence of protests against them. These critiques were all the more significant as they influenced the instruments themselves, and in some cases were even able to bring about their removal.

The variable effects of critique on the instruments

31Were the instruments affected by the action of the protest coalitions? To what extent, and under what conditions? To answer these questions, we shall draw on two elements from sociological works on social movements and the effects of protests. The first involves considering the consequences of mobilizations with regard to the plurality of forms that they take. While acknowledging the difficulty of establishing a causal link between a mobilization and a change, [38] several works emphasize the great variability of the consequences of social movements: they can take the form of political changes, cultural developments, or even changes in the lives of activists. Our understanding of the effects of a mobilization must therefore adopt a “variable geometry,” [39] rather than being limited to the categories of “failure” and “success,” which are hardly capable of doing justice to the diversity of possible effects. The second element of these works is an insistence on the contingency and multiplicity of factors that make up the influence wielded by these social movements, whose struggles always belong to particular historical contexts. [40] By adopting these perspectives, we intend to understand the influence of the social movements on the instruments by showing that they worked in different ways. They resulted in the emergence of debates within institutions, and the repoliticization of the instruments. Their ability to reform the instruments was, however, variable, and depended on the characteristics both of the social movements and of the instruments.

The establishment of arenas for the repoliticization of the instruments

32Critical works focusing on the instruments have shown that they participate in a technicization of political issues [41]: once they are integrated in technical decision-making, the political motivations embedded within them are concealed. The calculation of payroll expenditure is a good example of this process: its high degree of technicality makes it hard for union activists, who do not possess the necessary expertise, to contest its figures. [42] Project-based funding and the instruments of ranking demonstrate another configuration: one of the consequences of the mobilizations was the repoliticization of these instruments, whose political motives were thereby exposed and systematically discussed in public debates. The repoliticization at work here is the counterpoint to the work undertaken by governmental elites, through the instruments, to “create a useful smoke screen to conceal less admissible aims, to depoliticize fundamentally political questions, to create a minimum level of consensus for reform by capitalizing on the apparent neutrality of instruments that are presented as being modern.” [43] For the actors involved in this work, it is then necessary to reveal the absence of neutrality in these instruments, to identify how they participate in the allocation of public resources, create inequalities, produce winners and losers, and participate in the distribution of power between individuals and institutions of research governance.

33This repoliticization took place in arenas of consultation, which had been recognized or established by the authorities in the wake of the protests in order to gather the proposals of the academic community. The creation of these arenas may at first have been a direct consequence of the mobilizations, like the Estates General for Research (États généraux de la recherche) organized in the fall of 2004, which was created in response to one of the demands of Save Research from the very beginning of the movement.

34The arenas of repoliticization of the instruments may also have been more indirect consequences of the mobilizations against research policy decisions. Some of them were set up after changes in government, to mark a break with previous practices and directions in governance. Nicolas Sarkozy’s presidency had been marked by several mobilizations against the reforms that were enacted, as well as by a very tense climate, following several speeches by the president on research. Three months after the election of François Hollande, the establishment of the Conference for Higher Education and Research (Assises de l’enseignement supérieur et de la recherche) was the scene for a consultation with the academic community prior to a reform of this sector of public action. Finally, the mobilizations may have had consequences for the work carried out by the organizations involved in research governance: the protests against the AERES drove its leaders to develop several consultative procedures to try to strengthen the legitimacy of the agency and the instruments it used. The agency then increased the number of commissions aimed at reflecting on the instruments it produced. [44]

35The establishment of these arenas resulted in the integration of new actors who had not been involved in the earlier mobilizations. Some of them were even able to take up positions of responsibility in these consultations, even though they had publicly adopted a position against some of the initiatives of the social movements: for example, the president of the Initiative and Proposition Committee (Comité d’initiative et de proposition; CIP) of the Estates General for Research, Étienne-Émile Baulieu, had published a piece in Le Figaro encouraging the leaders of research centers not to follow the call by Save Research for them to resign their positions. The Estates General did not, therefore, align completely with the movement, and the instigators of the movement held the same opinion, as they sent a briefing letter to activists emphasizing that the movement was autonomous in relation to the consultation procedures:


The creation of the CIP was considered desirable by SLR [Save Research], which is heavily involved in it, to the extent that SLR wanted to bring about this great national debate. But the CIP and SLR are not the same body, for two reasons. First, SLR has other demands (notably regarding the 2004 emergency measures for employment), and its engagement in the CIP does not in any way prevent it from continuing to act in this regard. Second, there are some figures in the CIP who do not agree with SLR. For this national debate to be rich and fruitful, it is essential that these different viewpoints have the opportunity to express themselves at every level, including the CIP and the local and regional committees. [45]

37The organization of the debates also had the effect of extending the scope of reflections beyond the watchwords of the mobilizations. The reflections produced by the Estates General concerned all aspects of research policy, from the evaluation of personnel and their status to funding structures and the relations between research and society. However, the construction of alternatives to the instruments was limited by the consultation process. These framing effects were all the more clear as the political authorities were present at the initial creation of the arenas of debate. [46] At the Conference for Higher Education and Research of 2012, the ministerial cabinet decided on the composition of its steering committee and the list of interested parties who would be interviewed. In particular, it directed the agenda and the content of the consultations, avoiding debate of certain subjects, such as the—clearly decisive—question of the means made available for the research budget:


What you need to see is that, by defining the topics of discussions, it was clearly a way of excluding others. […] On questions of research, there was no possibility of talking about resources! You couldn’t say: “we need 100 million euros more, or a billion euros more,” it wasn’t our place to say that. You had to work more on subjects such as the way the institution functioned. [47]

39The repoliticization of the instruments took place within these arenas. Although the question of project-based funding was not part of the official demands of the Save Research movement, the Estates General led to the formalization of a general reform program, gathered together in a hastily published volume. [48] This work analyzed the appropriateness and the contemporary conditions of the use of project-based funding: for example, its “perverse effects” were emphasized. Because it had reached “disproportionate proportions,” it contributed to undermining the role of research centers as the basic units of the production of knowledge, and to the weakening of the research policy of research organizations. Its spread also had consequences for research work: it led to the “loss of autonomy of the research community in defining its research priorities.” Counter-proposals were made: without entirely rejecting the use of project-based funding, the authors of the document argued that it should be limited to 30 percent of the operational resources of research centers, and that it should be opened to “spontaneous projects” produced directly by research centers. Finally, they insisted that the instrument should be placed under the control of representatives of the academic community, within a “minimal structure that would not itself be involved in the management of its resources.” With regard to the methods of research evaluation, the participants identified three possible options, without choosing between them. [49] This absence of a decision underlines that the proposals produced by the consultations were marked by an “ambiguous consensus,” [50] which characterizes most reforms of public service: they are polysemous in their design, and so they are subject to multiple interpretations according to the various interests and views of their creators.

40A few years later, the Conference for Higher Education and Research was marked by intense controversy regarding the processes of evaluation related to the creation of the AERES. Although there was no consensus in the steering committee of the Conference for Higher Education and Research regarding the future role of calls for project proposals, the ANR was not the main recipient of criticism, which was more directed towards the AERES and its methods of research evaluation:


There was a group that was extremely hostile to the AERES within the steering committee, which wanted simply to eradicate the AERES and remove it from the face of the Earth. [51]

42The final report from the Conference for Higher Education and Research emphasized that “the current procedures of the AERES are too heavy-handed, bureaucratic, and superfluous.” The grades for evaluation in particular were challenged: the spokesman, stressing that “the harmful effects and the time that is wasted on obsessing over these grades have already gone on for too long,” proposed to eliminate evaluation grades and to replace them with “reasoned assessments, a collection of advice and recommendations.” [52] By proposing a shift from grades to written assessments, the committee was constructing alternatives to the existing instruments.

43These processes of repoliticization could also be more local, focused on particular disciplinary spaces. This was the case for certain commissions established by the AERES to rank academic journals. In the field of literature, language, and art, the ranking programs provoked major controversy. The protesters argued that the specificities of literary publishing were grounds to refuse these rankings. They pointed out that the work of constructing and organizing research centers throughout the second half of the twentieth century had been accompanied by the large-scale development of journals by research centers and universities. These journals, numerous and without major hierarchies among them, are the site of publication of the reference articles in their respective disciplines, where recognized authority figures publish their work. These characteristics provided arguments for those who objected to rankings to claim that they were entirely ineffective in evaluating the importance of a publication. In interviews, several of them referred to the major publications of Tzvetan Todorov or Roland Barthes, published in “rags, literal rags” (interview, vice-president of a section of the CNU), to show, on the one hand, the dangers of hierarchizing journals, and, on the other, the inability of rankings to reflect the quality standards of the literary world.

44The mobilizations therefore had two initial effects on the instruments. First, they contributed to the establishment of institutionalized, officially recognized sites for the discussion of the instruments. These reflections gave rise to an expansion of the groups participating in the controversies and, often, of the subjects under debate. Through critiquing the instruments, proposing alternatives, and subjecting them to discussion in the public sphere, these sites brought to light the political project that was implicit in these devices.

Eliminating (or not) the instruments

45However, the critiques had varying effects on the instruments. Some led to their elimination: following the protests against the ranking of literary journals and the grades produced by the AERES, these devices were abandoned by the authorities. Project-based funding, in contrast, offers an example where the critiques had only a limited influence. The instrument was reformed following the proposals of the Conference for Higher Education and Research, but it was only subject to adjustments, [53] which called into question neither the longevity of its use nor the extent of the inequalities it produced in the allocation of funds to research teams.

46Multiple factors could be suggested to explain the varying consequences of the social movements on the instruments. These have been set out by the literature on the political effects of social movements: the organization of a movement, its ability to obtain political channels for its demands, the framing of the problem around which it is mobilized, the forms of action that it uses, the political context, and the resources of the activists all influence the effectiveness of mobilizations. [54] Émilien Schultz considers that the tone of the protests against the ANR—where greater emphasis was placed on individual experience than on a reasoned reflection on the general reforms of research policies—was one of the main reasons for their lack of influence. [55] These factors affect the reception of the mobilizations, but they cannot explain the varying influence observed here. The critiques operated in the same political context: although opposition to project-based funding was initiated earlier, it was recurrent and was still active at the same time as the protests against research ranking instruments. Similarly, the framings used to condemn the perverse effects of the instruments followed the same logic: they emphasized the negative consequences of these two instruments for the production of research as a whole, and more specifically for certain forms of production considered more vulnerable—basic research in the case of project-based funding, and the humanities in the case of ranking instruments. In both cases, critiques were initiated and often led from relatively prestigious parts of research disciplines and by collectives created for this purpose. Finally, the critiques rarely drew on a considered vision of what the overall reform of research should be, and were often grounded in the subjective experiences of the protesters. The varied effects of critiques on project-based funding and rankings depended essentially on their ability to undermine the support and credibility that members of the profession, and those who governed them, provided to these instruments.

47The unequal influence of the protests is due, first of all, to the academic profession’s greater or lesser ability to organize and display a common front against the instrument. This common front was particularly difficult to organize in the case of project-based funding. Very early on, the promoters of the protest movements had to contend with activists who were more favorably disposed to it. Some of the biologists involved from 2004 in Save Research advocated the creation of a funding agency that would make more systematic use of project-based funding. They saw this as an opportunity to reform a research system that they found too hierarchical: by allowing all tenured researchers to seek funding, the instrument would help liberate them from the power of directors of research centers. Other activists, occupying similar posts, were far more critical of project-based funding and campaigned instead for its abandonment. [56] These differing views were still present at the time of the Estates General for Research. The same biologists who were favorable to project-based funding again mobilized for the creation of a funding agency. They came into conflict with physicists and researchers from the humanities and social sciences, who were far more reticent about the prospect of creating such an institution. A third way was taken by activists who were critical but pragmatic, and who thought that this proposal would be acceptable to the representatives of the authorities, whom they knew to be working towards the creation of a funding agency. These activists sought to influence the organization of the agency in question and the funds that it would use:


But its very existence [that of a funding agency] had been promoted directly by the biologists, by these notorious forty-somethings who dream of becoming mandarins, and there was strong resistance from others, in particular the physicists, who said: “Let’s not get involved in that, let’s not talk about it, that’s going to come back to haunt us.” I wasn’t so sure at that time because I knew, like others, I had precise information about the preparation of the ANR, in July 2004. While we were at work, people in government, and in particular Sarkozy who was then in the Finance Ministry, were working on the preparation of the ANR. […] Knowing that, it seemed a lesser evil to say: “Okay, we need a funding structure, but we’ll try to limit it.” [57]

49These internal divisions within the profession regarding the utility and legitimacy of project-based funding did not diminish during the first years in which the ANR was operational, or with the increase in calls for project proposals linked to the launch of the PIA in 2009. They were also found within Sciences en Marche. Its main founders had to address the same divisions: certain activists proposed taking a stand condemning project-based funding, while others were opposed to this. The desire to maintain the unity of the movement led them to avoid taking a position on the ANR and project-based funding, and to call instead for a rebalancing of recurrent and project-based funding:


There were people who wanted us to take a position against the ANR. But there were some who just wanted us to not talk about it, or even to have a very pro-ANR position. We avoided talking about the ANR mainly because we were aware that it was a very divisive issue. […] So that was why we tried not to dwell too much on issues where we knew that, on top of the existing problems in mobilizing the community, if we mobilized them around proposals that were divisive, it was going to get even worse. [58]

51This fragmentation of positions with regard to project-based funding can be explained partly by the fact that the instrument provided financial resources: the need to obtain such resources, which was increasingly pressing in certain disciplines, made it even more difficult to establish unanimous mobilizations against project-based funding. The calls issued by Save Research to boycott the evaluations of the ANR came up against the reticence of certain researchers to obstruct the functioning of the agency when they had a funding application under way. In this respect, disciplines were in unequal situations. Some disciplines have time frames and methods of knowledge production that are ill suited to the requirements of project-based funding (individual work, long-term projects). [59] The members of these disciplines were all the more likely to contest this instrument collectively. In biology, however, the difficulty of financing research activities is particularly acute and the organization of work is relatively well suited to what calls for project proposals require (teamwork, projects of limited duration). These characteristics made it all the more difficult to mobilize unanimously against project-based funding.

52In literary studies, the opposition to instruments of evaluation did not suffer from any such divisions. The instigators of the critique soon managed to organize a common front against journal rankings. A number of resources particular to the literary field facilitated this undermining, which others—in economics, for example—attempted without success. In literary studies, even though some universities were in favor of rankings, the production of a consensus against the instrument was facilitated by the fact that the protest emerged from the most prestigious parts of the discipline and was expressed by bodies, such as professional associations and learned societies, that did not need to build their legitimacy as representatives of the field. The protest was also led by representatives of the literature sections of the CNRS and CNU, which possessed both academic and political legitimacy. It was further facilitated by the fact that opposition to rankings was already established. Injunctions to produce rankings did not date from the 2000s: attempts at creating lists of journals, undertaken by representatives of the ministry more than ten years before the creation of the AERES, had been abandoned, already marking a victory for their opponents. The success of the opponents of rankings was also built on the fact that the ranking instruments were ill suited to the publishing landscape of literary studies: they condemned the bizarre results of the rankings by appealing to a certain image of the publishing industry and of publishing practices in literary studies, on which most representatives of the discipline agreed.

53The situation of rankings in economics provides a counterexample. There, these tools could not easily be presented as a heteronomous project undertaken by a managerial state to control researchers. On the contrary, their legitimacy had already been established from within the discipline itself: economists frequently made use of rankings to choose the journals in which they would publish. Unlike in literary studies, although objection to rankings existed within economics, it came from actors who occupied less prominent positions in the discipline. The fact that the instrument was more integrated within the practices and representations of members of the discipline limited the power of the protests. [60] The instruments’ permeability to critique was also connected to their dissemination and the place they occupied within the range of devices used for research governance. Since an institution persists only when it is perpetuated by use, [61] project-based funding is all the more difficult to eradicate as it has long made up part of the toolkit used by numerous institutions for research governance. Research organizations such as Inserm and the CNRS have used this device since the 1960s. Philanthropic foundations and patients’ associations also use it to intervene in research. It is also strongly supported by the Finance Ministry, which, since the 1980s, has seen it as a way of controlling the use of funds in research centers, by not leaving the use of these credits entirely at the discretion of the management of research organizations.

54Project-based funding has also been used by the European Union since the start of the 1980s to intervene in research, [62] and by countries such as the United States, the United Kingdom, or Germany. This dissemination of the instrument beyond national borders is beneficial for its promoters: it allows them to present it as an inevitable tool for research policies, used for a long time by countries and supranational institutions that are taken as models in this respect. [63]

55The struggles between the promoters and critics of the instruments cannot be seen as an opposition between, on one hand, representatives of the state seeking to use these devices to control the power of the profession and, on the other hand, researchers who would unanimously condemn this attempt at reform. Not that the instruments disrupted an established professional unity: the research profession has for a long time been divided over its relation to reforms and political power. [64] But the contemporary uses of the instruments reconfigured the internal relations of solidarity within the profession and acted differently in different disciplines. In certain configurations, these disciplines coalesced around a common position against an instrument, at least within their representative bodies. In other cases, the instruments introduced new divisions and new relations of solidarity, such as for the protesting biologists, who were opposed to their colleagues who were in favor of project-based funding, and allied themselves with representatives of the humanities and social sciences. These new divisions represented obstacles to the constitution of a common front against an instrument of research policy.

56The absence of a unified front was all the more detrimental to the effectiveness of the critiques as the divisions among those in charge of institutions of research governance concerned, above all, which institutions would legitimately use these instruments to govern: to return to the example of project-based funding, the struggle was less for or against this instrument than a competition between the leaders of research organizations, the ANR, and the ministry to establish who could legitimately use it. Furthermore, protest movements could not always rely on changes in government to bring about change. In fact, research policies do not constitute, or only to a limited extent, a point of opposition between the main political parties. The presidency of François Hollande was marked by symbolic gestures directed at the academic community, but also by the perpetuation of a number of policies of hierarchization of the academic system, such as the pursuit of the PIA and the transformation of the governance of universities, [65] which had been initiated by Nicolas Sarkozy in a context of major protests. Finally, change was all the more difficult to initiate as the unequal allocation of credits involved several instruments; removing any one of them would not have sufficed to bring about a major restructuring of the contemporary direction of research policies.

57* * *

58Formerly discreet, now contested, project-based funding and rankings have a unique history. Whereas most institutions become a matter of routine over time, these two instruments, in contrast, saw the emergence of critiques that challenged them long after their introduction into research governance in France. This unusual trajectory shows that neither of these devices was, by its very nature, contrary to the interests and values of an academic profession that would be unanimously resistant to their introduction. From the 1960s to the 1990s, the project of inequality that was implicit in the instruments concerned only limited parts of the academic field and aimed principally to foster the emergence of new disciplines, to support others for a limited period, or to produce evaluations exclusively for the use of the administration, without restructuring the whole system of distribution of financial and symbolic credits. This project, which related both to a program of reforms and to the power relations organizing the scientific policy of the day, was manifested in the instruments and in their appropriations: their creators incorporated within them a certain number of rules aimed at limiting the inequalities that they would produce, as well as their publicity, while their users, within the commissions responsible for selecting projects, extended this work by making sure that they did not give excessive support to research centers that were already well funded. From the 1990s, and in particular the 2000s, the political project behind the instruments aimed much more clearly to differentiate and hierarchize the French academic system and to change the power relations that organized it. The limitations that had been placed on the transformative power of the instruments were then steadily removed. In this new context, activists mobilized to protest against the instruments and their contemporary use. 
These mobilizations therefore emerged in the wake of reconfigurations of the instruments’ political project. This case study, building on a still small number of other studies, [66] invites a more systematic exploration of the effects of instruments, and of their developments, on social movements. Like many institutions, [67] instruments can shape the constitution of social groups by provoking the emergence of protest groups, establishing new solidarities, dissolving older ones, or reinforcing those already in place. In this respect, devices that place researchers in competition and comparison with one another are not only tools for reforming public policies; they also profoundly reshape the alliances and divisions within professional groups.

Conversely, our results invite a more systematic reflection on the effects of critiques on the instruments: by revealing the political project implicit within them, critiques can erode their legitimacy, repoliticize them, and even, as with the rankings of journals in literary studies, lead to their removal. Understanding this variable influence involves considering together the characteristics of the instruments, their integration within existing practices, and the greater or lesser ability of social movements to gather support for their cause. Project-based funding, because it provides crucial financial resources for research activity in a context of shrinking budgets, and because it is spread across several public policies, is more difficult to contest than rankings, which are confined to particular disciplines. The familiarity and repeated use of the instruments also have important consequences: economists’ systematic use of rankings to choose where to publish made the device routine for them, whereas it was alien to the practices of literary studies. The political work of the protesters is a further dimension to be explored: it was in order to maintain the unity of their movements that the representatives of Save Research and Sciences en Marche avoided raising project-based funding in their main demands.

Taking account of social movements in the analysis of instruments has three advantages. First, it illuminates the social processes that lead an instrument to be considered problematic. The literature on public problems has long shown [68] that a “problem” is not a given but results from social and political mobilizations that establish a practice or event as problematic. The same can be said of instruments: resistance to them should not be read only as a tactic for preserving a flexibility threatened by devices conceived from the outset as tools for controlling professional activity. Such resistance results from social processes, of which collective action is one particular manifestation, that establish the instruments as problematic. Second, taking account of social movements avoids assuming that the depoliticization achieved by technicizing the fields in which the instruments operate is systematic and permanent: although in some cases the technicality of an instrument can be an obstacle to the social movements that challenge it, [69] in others, mobilizations can succeed in revealing its political aims and mounting a critique that undermines its acceptability. Finally, attention to social movements invites us not to limit the analysis of resistance to the enactment of public action. Resistance does not arise exclusively in the services in which the instruments operate; it can also be found in the street, in the form of collective mobilizations. Without overstating the overall consequences of critiques, taking them into account invites us not to separate the production of public action from its implementation.
The “mysterious ways” [70] that mark public action can sometimes (re)connect the street with the arenas in which the instruments are configured.


  • [1]
    The aim of the Research Tax Credit is to support private companies’ R&D investment.
  • [2]
Press release, “Résultats de l’appel à projets générique 2015: la France continue de négliger ses chercheurs,” accessed June 10, 2019. Translator’s note: Unless otherwise stated, all translations of cited foreign language material in this article are our own.
  • [3]
11th to 15th and 18th sections of the CNU, accessed June 10, 2019.
  • [4]
    Patrick Le Galès and Allen J. Scott, “Une révolution bureaucratique britannique? Autonomie sans contrôle ou ‘freer markets, more rules,’” Revue française de sociologie 49, no. 2 (2008): 301–30.
  • [5]
    Michel Foucault, Security, Territory, Population: Lectures at the Collège de France, 1977-78, ed. Michel Senellart, trans. Graham Burchell (London: Palgrave Macmillan, 2007). On this field of research and its influence on sociology and public action, see also Pascale Laborier and Pierre Lascoumes, “l’action publique comprise comme gouvernementalisation de l’État,” in Travailler avec Foucault: Retours sur le politique, ed. Sylvain Meyet (Paris: L’Harmattan, 2005), 37–62.
  • [6]
    Pierre Lascoumes and Patrick Le Galès, eds., Gouverner par les instruments (Paris: Presses de Sciences Po, 2005).
  • [7]
    For a summary of the literature on the subject, see Jean-Pierre Le Bourhis and Pierre Lascoumes, “En guise de conclusion: Les résistances aux instruments de gouvernement: essai d’inventaire et de typologie des pratiques,” in L’Instrumentation de l’action publique, ed. Charlotte Halpern et al. (Paris: Presses de Sciences Po, 2014), 493–520.
  • [8]
    See, for example, in the academic sphere, Wendy N. Espeland and Michael Sauder, “Rankings and Reactivity: How Public Measures Recreate Social Worlds,” American Journal of Sociology 113, no. 1 (2007): 1–40; see also, in the field of public security, Jean-Hugues Matelly and Christian Mouhanna, Police: des chiffres et des doutes (Paris: Michalon, 2007); Emmanuel Didier, “L’État néolibéral ment-il? ‘Chanstique’ et statistiques de police,” Terrain 57 (2011): 66–81.
  • [9]
    Anaïk Purenne and Jérôme Aust, “Piloter la police par les indicateurs? Effets et limites des instruments de mesure de la performance,” Déviance et société 34, no. 1 (2010): 7–28.
  • [10]
    Isabelle Bruno and Emmanuelle Didier, Benchmarking: L’État sous pression statistique (Paris: Zones, 2013).
  • [11]
    Nicolas Belorgey, “Réduire le temps d’attente et de passage aux urgences: une entreprise de ‘réforme’ d’un service public et ses effets sociaux,” Actes de la recherche en sciences sociales 189, no. 4 (2011): 16–33.
  • [12]
    Lorenzo Barrault, Gouverner par accommodements: Stratégies autour de la carte scolaire (Paris: Dalloz-Sirey, 2013).
  • [13]
    Le Bourhis and Lascoumes, “En guise de conclusion,” 493.
  • [14]
    Renaud Crespin, “Des objets techniques aux objets-frontières: appropriation et dissémination des instruments d’action publique,” Sciences sociales et santé 32, no. 2 (2014): 57–66.
  • [15]
    Le Bourhis and Lascoumes, “En guise de conclusion,” 508.
  • [16]
    For a summary of the literature concerning the consequences of social movements on public policies, see Claire Dupuy and Charlotte Halpern, “Les politiques publiques face à leurs protestataires,” Revue française de science politique 59, no. 4 (August 2009): 701–22. For examples of works on the role of social movements in trading activity, see especially Philippe Steiner and Marie Trespeuch, eds., Marchés contestés: Quand le marché rencontre la morale (Toulouse: Presses Universitaires du Midi, 2015); Sophie Dubuisson-Quellier, “A Market Mediation Strategy: How Social Movements Seek to Change Firms’ Practices by Promoting New Principles of Product Valuation,” Organization Studies 34, no. 5/6 (2013): 683–703.
  • [17]
    Fabrice Hamelin, “Gouverner les conduites automobiles: l’ambivalence du recours à l’automatisation du contrôle des infractions à la vitesse autorisée,” Gouvernement et action publique 1, no. 1 (2015): 111–31.
  • [18]
    For example, the “Elisa” tests for detecting HIV (Renaud Crespin, “Connaître ou informer: la carrière sociale des tests ELISA/VIH dans deux enquêtes épidémiologiques en France et aux États-Unis,” Sciences sociales et santé 24, no. 4 [2006]: 53-89) or the “scorecards” that were used to direct the Bologna Process (Pauline Ravinet, “La coordination européenne ‘à la bolognaise’: réflexions sur l’instrumentation de l’espace européen d’enseignement supérieur,” Revue française de science politique 61, no.1 [February 2011]: 23–49).
  • [19]
Andrew Abbott uses this term to refer to the territory on which a professional group’s monopoly of activity is established (Andrew Abbott, The System of Professions [Chicago: University of Chicago Press, 1988], 18–20). We consider that, like professions, instruments have jurisdictions in which their application is recognized. Whereas in the sociology of professions it is professionals who engage in disputes over jurisdiction, in the case of instruments it is the representatives of the institutions that mobilize them who compete to define the legitimate perimeter of their use.
  • [20]
    Jérôme Aust and Emmanuelle Picard, “Gouverner par la proximité: allouer des fonds à des projets de recherche dans les années 1960,” Genèses 94, no. 1 (2014): 7–31.
  • [21]
    AN 1986 0037/31, DGRST, Action complémentaire coordonnée: “Cancérogenèse et pharmacologie du cancer,” minutes of the second meeting, July 13, 1976.
  • [22]
    AN 1981 0479/14, DGRST, Action complémentaire coordonnée: “Membranes biologiques: Structures et fonctions,” minutes of the meetings of June 7, June 28, and July 1, 1971.
  • [23]
    Interview with a director of the Scientific and Technical Mission during the 2000s.
  • [24]
    AN 1976 0215/14, Note from the director of the National Center for Space Studies (Centre national d’études spatiales; CNES) on the status of researchers and technicians paid by research contracts, June 1, 1966.
  • [25]
    AN 1990 0693/14, Policy of the Research and Technology Fund for 1984, minutes of the meeting chaired by Roland Morin, February 15, 1984.
  • [26]
    Philippe Bezes, “The Hidden Politics of Administrative Reform: Cutting French Civil Service Wages with a Low-Profile Instrument,” Governance 20, no. 1 (2007): 23–56.
  • [27]
    Bezes, “The Hidden Politics of Administrative Reform.”
  • [28]
    For a definition of the concept of acceptability, see Pierre-Louis Mayaux, “La production de l’acceptabilité sociale: privatisation des services d’eau et normes sociales d’accès en Amérique latine,” Revue française de science politique 65, no. 2 (April 2015): 237–59.
  • [29]
    Mayaux, “La production de l’acceptabilité sociale.”
  • [30]
    William Genieys, “Nouveaux regards sur les élites du politique,” Revue française de science politique 56, no. 1 (February 2006): 121–47, here 124.
  • [31]
    Christine Musselin, La grande course des universités (Paris: Presses de Sciences Po, 2017).
  • [32]
    On this point, see Émilien Schultz, “Construire une économie de la recherche sur projets: l’installation de l’Agence nationale de la recherche en France et ses conséquences dans les domaines de la génomique végétale et de la chimie durable” (PhD diss. in sociology, supervised by Michel Dubois, Paris-Sorbonne University, 2016), 65-125.
  • [33]
    On this point, see Clémentine Gozlan, “Réinventer le jugement scientifique,” (PhD diss. in sociology, supervised by Christine Musselin, Sciences Po Paris, 2016).
  • [34]
    AN 2008 0487/2, “Contrats Quadriennaux 1997–2001.”
  • [35]
    Interview with a delegate coordinator, AERES, 2007.
  • [36]
    Interview with one of the instigators of Save Research.
  • [37]
    This refers to the series of promises that the French president makes at the start of each new year to different groups, beginning with a televised address to the nation.
  • [38]
    In particular, see Marco Giugni, “Was It Worth the Effort? The Outcomes and Consequences of Social Movements,” Annual Review of Sociology 24 (1998): 371-93; Didier Chabanet and Marco Giugni, “Les conséquences des mouvements sociaux,” in Penser les mouvements sociaux, ed. Éric Agrikoliansky (Paris: La Découverte, 2010), 145-61.
  • [39]
    Chabanet and Giugni, “Les conséquences des mouvements sociaux.”
  • [40]
Giugni, “Was It Worth the Effort?”; Edwin Amenta et al., “The Political Consequences of Social Movements,” Annual Review of Sociology 36 (2010): 287–307.
  • [41]
    Pierre Lascoumes and Patrick Le Galès, “Introduction: l’action publique saisie par ses instruments,” in Gouverner par les instruments, 11–44.
  • [42]
    Bezes, “The Hidden Politics of Administrative Reform”; Philippe Bezes, “Rationalisation salariale dans l’administration française: un instrument indiscret,” in Gouverner par les instruments, 71–122.
  • [43]
    Lascoumes and Le Galès, “Introduction: l’action publique,” 26-27.
  • [44]
    Clémentine Gozlan, “L’autonomie de la recherche scientifique en débats: évaluer l’‘impact’ social de la science?”, Sociologie du travail 57, no. 2 (2015): 151-74; Gozlan, “Réinventer le jugement scientifique.”
  • [45]
Briefing letter from the Initiative and Proposition Committee, March 26, 2004, accessed June 10, 2019.
  • [46]
    These effects were documented in works analyzing the integration of protest groups in the regulation of technological risks. See Sezin Topçu, La France nucléaire: L’art de gouverner une technologie contestée (Paris: Seuil, 2013).
  • [47]
    Interview, member of the steering committee of the Conference for Higher Education and Research, 2012.
  • [48]
    Les États généraux de la recherche: 9 mars-9 novembre 2004 (Paris: Tallandier, 2004).
  • [49]
    Les États généraux de la recherche.
  • [50]
    Bruno Palier, Gouverner la sécurité sociale (Paris: PUF, 2002).
  • [51]
    Interview with a member of the steering committee of the Conference for Higher Education and Research, 2012.
  • [52]
    Les États généraux de la recherche, 70.
  • [53]
On this point, our conclusions align with those of Schultz, “Construire une économie de la recherche sur projets,” 430–34.
  • [54]
    For a summary of the works on this subject, see Giugni, “Was It Worth the Effort?”; Amenta et al., “The Political Consequences of Social Movements.”
  • [55]
    Schultz, “Construire une économie de la recherche sur projets,” 375-35.
  • [56]
    These varied attitudes towards project-based funding and the ANR within the protest groups were also found at the level of research centers. See Johan Giry and Émilien Schultz, “L’ANR en ph(r)ase critique: figures et déterminants de la critique d’un dispositif de financement,” Zilsel 2 (2017): 63–96.
  • [57]
    Interview with one of the founders of Save Research.
  • [58]
    Interview with a promoter of Sciences en Marche.
  • [59]
    This remark does not relate exclusively to the humanities and social sciences: Richard Whitley shows that the supposedly “exact” sciences are marked by very variable methods of knowledge production and organization (Richard Whitley, The Intellectual and Social Organization of the Sciences [Oxford: Oxford University Press, 2000]).
  • [60]
We find here the mechanisms of congruence and incongruence, or fit and misfit, emphasized by the literature on European integration, which shows that a piece of European legislation is less likely to be abandoned if it coincides with existing national law. See Maria Green Cowles, James Caporaso, and Thomas Risse, eds., Transforming Europe: Europeanization and Domestic Change (Ithaca, NY: Cornell University Press, 2001).
  • [61]
    Pierre François, ed., Vie et mort des institutions marchandes (Paris: Presses de Sciences Po, 2011).
  • [62]
    Laurence Jourdain, Recherche scientifique et construction européenne: Enjeux et usages nationaux d’une politique communautaire (Paris: L’Harmattan, 1996).
  • [63]
    In other rarer cases, the “international” dimension can be helpful for the supporters of social movements, however, and can help to erode the acceptability of the instruments. The European Quality Agency, of which the AERES is a member, had criticized the use of grades by the French institution in its report of 2015. Similarly, the use of rankings had been the subject of major debates about their legitimacy and robustness, which led to their reform (see David Pontille and Didier Torny, “The Controversial Policies of Journal Ratings,” Research Evaluation 19, no. 5 [2010]: 347–60).
  • [64]
    Christophe Charle, La République des universitaires, 1870-1940 (Paris: Seuil, 1994); Pierre Bourdieu, Homo Academicus, trans. Peter Collier (Cambridge: Polity, 1988).
  • [65]
    The Act of July 22, 2013, called the Fioraso Act, reformed certain aspects of the Liberties and Responsibilities of Universities Act (Loi relative aux libertés et responsabilités des universités, LRU), in particular by modifying the composition of university councils. It did not, however, make changes to the main features of the LRU, such as the transfer of power from the state to institutions and the strengthening of the powers of their executives.
  • [66]
    See, in particular, Jingyue Xing, “La tarification des services d’aide et d’accompagnement à domicile comme résistance à un instrument d’action publique,” Droit et société 90, no. 2 (2015): 393-412; and the work of Pierre-Louis Mayaux on the political acceptability of the privatization of water supplies in two Latin American cities, which shows that the methods used for imposing this new system for managing the water supply (either a brutal approach or, alternatively, a gradual one) explain the intensity of local mobilizations by the system’s users. Pierre-Louis Mayaux, “La production de l’acceptabilité sociale.”
  • [67]
    Theda Skocpol, “Bringing the State Back In: Strategies of Analysis in Current Research,” in Bringing the State Back In, eds. Peter B. Evans, Dietrich Rueschemeyer, and Theda Skocpol (Cambridge: Cambridge University Press, 1984), 3–38.
  • [68]
    Joseph Gusfield, La culture des problèmes publics: L’alcool au volant et la production d’un ordre symbolique (Paris: Economica, 2009), originally published in 1981; Herbert Blumer, “Social Problems as Collective Behavior,” Social Problems 18, no. 3 (1971): 298–306.
  • [69]
    Bezes, “The Hidden Politics of Administrative Reform.”
  • [70]
    Pierre Lascoumes and Patrick Le Galès, Sociologie de l’action publique (Paris: Armand Colin, 2007).

This article examines the protests provoked by recent reforms in research governance, using the example of two policy instruments: academic rankings and project-based research funding. These instruments were implemented long before they began to be criticized: how can we explain the lag between their creation and the emergence of protest movements? Conversely, have critiques influenced the trajectory of these instruments, and if so, to what extent? We demonstrate that the emergence of critiques and their influence on policy instruments can be explained at the intersection of the characteristics of contemporary social movements and the reconfigurations of the instruments themselves.

  • policy instruments
  • project-based funding
  • research assessment
  • social protests
  • research policy
Jérôme Aust
Jérôme Aust is a researcher at Sciences Po (CSO/CNRS). He works on higher education and research policy. He managed a project funded by the National Research Agency (Agence nationale de la recherche; ANR) entitled “Gouverner la science” (“Governing science”). In this project, he was particularly interested in the historical development of the funding of projects in France from the 1960s and the characteristics of the elites who have governed research policy since the start of the Fifth Republic. His publications include Bâtir l’université: Gouverner les implantations universitaires à Lyon (Paris: L’Harmattan, 2013), and, with Emmanuelle Picard, “Gouverner par la proximité: Allouer des fonds à des projets de recherche dans les années 1960,” Genèses 94, no. 1 (2014): 7–31.
Centre de sociologie des organisations, 19 rue Amélie, 75007 Paris
Clémentine Gozlan
After defending a doctoral dissertation in 2016 devoted to contemporary reforms of research evaluation, Clémentine Gozlan was employed as a temporary teaching and research fellow at Versailles Saint-Quentin-en-Yvelines University. As a post-doctoral researcher at the Center for the Sociology of Organizations (Centre de sociologie des organisations; CNRS/Sciences Po Paris), her research is currently concerned with protest movements involving researchers. Her work combines the sociology of science, the sociology of public action, the sociology of professions, and the sociology of social movements. Her publications include: “L’autonomie de la recherche scientifique en débats: évaluer l’‘impact’ social de la science?”, Sociologie du travail 57, no. 2 (2015): 151–74; and “Les sciences humaines et sociales face aux standards d’évaluation de la qualité académique: enquête sur les pratiques de jugement dans une agence française,” Sociologie 7, no. 3 (2016): 261–80. A work derived from her doctoral dissertation will be published by Éditions de l’ENS.
Centre de sociologie des organisations, 19 rue Amélie, 75007 Paris
Uploaded on 31/07/2019
Electronic distribution by Presses de Sciences Po © Presses de Sciences Po. All rights reserved for all countries. It is prohibited, without the prior written consent of the publisher, to reproduce this article in whole or in part (including by photocopying), to store it in a database, or to communicate it to the public in any form and by any means whatsoever.