1 Chapter VI of The Great Transformation (Polanyi 1944) continues to prove enlightening in terms of understanding the internal tensions of a market economy. Returning to Polanyi’s brilliant idea elucidated in this chapter, Geneviève Azam (2007) analyzes information as a fictitious commodity. Within a knowledge economy, information becomes a commodity with properties similar to those attributed to land, labor, and currency in a market economy. Information, unlike these other fictitious commodities, has not been produced and accumulated for the purpose of being sold. It is integrated, like labor, into human relationships. It is the result of collective and cumulative processes, and not all knowledge can be encompassed in the marketable dimension of information. Information has a substantive dimension that evades the logic of capital, but it is at the same time subject to pressure from the order of capital, which aims to realize the commodity fiction. The creation of intellectual property rights is essential for the appropriation of information. In this context, the invention and generalized use of instruments designed to measure the production of knowledge are required in order to achieve its commodification.
2 The various considerations elicited by this proposition lead us to observe that the pressure currently exerted on information production activities, whose purpose is to formally subjugate them to an objective and quantifiable assessment, results from the specific needs of the process of commodifying information, particularly as it applies to information produced for the public good. Attempts to subjugate lecturer-researchers’ activities to these new productivity standards constitute a new phase in implementing the rules that govern the knowledge economy or, in different terminology, cognitive capitalism.
3 The rules of good governance advocated by international institutions (particularly those that espouse the European Commission’s good governance directives for research activities) are complementary to this new form of domination. Sustained by an obsessive concern for control, supervision, and competitiveness, good governance criteria make it possible to hand control of research activities over to utilitarianism and technocracy. They thereby help to entrench the process of knowledge commodification more firmly.
4 As a result, we can question the sustainability of the new productivity evaluation practices as applied to information production activities, insofar as they tend to guide the commodification process to its completion. If the transformation of information into fictitious commodities becomes a core objective of the new dynamic of capital accumulation, does this not imply a strong internal tension between the completion of the process of information commodification and those conditions that elude this commodification process, but are essential to producing new information?
The Commodification of Knowledge: A Second “Enclosure” Movement?
5 In this short essay, we will not discuss whether the information economy is a new core accumulation sector within capitalism that replaces the industrial sectors of the twentieth century, nor indeed whether it represents a new phase of capitalism, a “new great transformation,” leading to cognitive capitalism (Moulier-Boutang 2007). Capitalism has experienced a profound transformation, which seems to focus on information as the central locus of capital accumulation (the information economy theory). This transformation could signify the arrival of a new accumulation regime, “in which the primary object of accumulation is information, which tends to be subject to direct valuation and whose production extends beyond the traditional bounds of business” (Corsani et al. 2008). This accumulation regime is based on a sustained rhythm of innovation. Competition between firms is consequently displaced to this new ground, thereby creating substantial pressure to expand the application of intellectual property rights to information production activities as much as possible.
6 The increasing intangibility of labor and its extension across the whole field of social relations are the most important manifestations of this process (Paulré 2008). As Azam rightly emphasizes, it is a question of radical change, not only in the organization of production, but in information itself. Information is progressively being detached from its status as a common good and transformed into a commodity or, in other words, an economic resource fully governed by property rights. This process leads to the erasure of boundaries between patentable and non-patentable information products and, in terms of knowledge, between a common and a private good.
7 Information’s transformation into a commodity—into an economic product—requires that it be measured and that its scarcity be organized. Since the process was set up to broaden the scope of property rights to include intellectual activity products as part of the agreements that established the WTO in 1994, intellectual property rights have been de-territorialized like those of other marketable products. This broadening of property rights to knowledge and the living being made it possible to simultaneously regulate scarcity. Information thus became a commercial economic product, or a global public product, subject to exploitation in the interest of the private accumulation of capital. Boyle analyzes this phase, in which the public domain of both the physical environment and knowledge is enclosed, as a second “enclosure” movement (Boyle 2003, 2008). The harnessing of these new property rights seems essential for the generalized reproduction of the new accumulation regime. The organization of research thereby becomes one of the primary issues of the new development dynamic.
Evaluate, Quantify, Control: The Hegemony of the Quantitative
8 Isabelle Bruno (2008) provides an insightful analysis of the initiation, at the turn of the twenty-first century, of a system of production, exchange, and valuation of knowledge inspired by the general model of competition. In order to direct research and innovation activities towards economic competitiveness, the European Research Area was launched at the Lisbon European Council in 2000. This entity aimed to disseminate and intensify a performance-based management culture in Member States’ national research systems. Its primary mechanism is the new liberal governmentality, defined by “good governance” criteria. Bruno demonstrates that the Lisbon strategy initiated a new approach to intergovernmental cooperation. In the context of the open method of coordination, performance measurement techniques (benchmarking) are the tools used to master this system, which operates on the basis of “incentive, emulation between peers, and multilateral surveillance” (Bruno 2008), but without recourse to legal constraints. Bruno reminds us that “it is by quantifying national performance and publishing classifications, [that this method] submits state leaders to a management-by-objectives-based discipline” (Bruno 2008, 13).
9 Instead of legal integration, good governance methods propose harmonization and convergence based on quantifiable data. The European Research Area thus becomes a priority area for the application of methods that advocate competitiveness as an absolute objective and display an obsession with control that is just as pervasive as this performance culture.
10 The Directorate General responsible for research within the European Commission does not hesitate to describe this new doctrine as a “cultural revolution,” one that advocates competition as the guiding principle of research. It presents the “researcher-entrepreneur” as the central figure of this field of economic activity—and all of this in conformance with the principles of good governance.
11 Under the rules of good governance, in which authority derives its legitimacy not from elective mandates but from cooptation and from the outcomes of exercising authority, an art of governing through objectivity and neutrality must be developed. Quantifying and grading performance together promote standards that tend to be substituted for law and explicit rules. This enables governance to develop ad hoc implementation systems that respond, at least superficially, to the criteria of objectivity and neutrality. The combination of liberal utilitarianism and bureaucracy can therefore fully occupy and govern the new areas of capital accumulation. As François Fillon announced at the Université d’Orsay on June 1, 2007:
It is a matter of enabling these [research institutions] to fully adopt a performance mentality whereby taking performance quality into consideration becomes in itself an act of responsibility, [where] the question of governance naturally arises.
13 Following Polanyi, we note in fact that, far from being self-instituted and self-regulated, the research market is actually instituted, supervised, and guided by research leaders. To do this, however, these leaders need management tools. The quantitative assessment of a scientific researcher’s output thus acquires a fundamental importance, comparable to that once held by the chronometer in completing the subjugation of labor to capital. As a result, indicator engineering aimed at measuring the performance of intangible investment has been developed. This is combined with a desire to analyze the entire knowledge production chain by means of metrological evaluation in order to identify links that are not positively evaluated. Infometrics and bibliometrics, rationalized within a discipline such as scientometrics, will, by placing the value on writing (the visible aspect of this activity), provide governments with the means to control knowledge production activities remotely. Combined with elements of the new competition doctrine, these indicators will above all normalize the behaviors of participants in this “production chain” in order to achieve “cognitive convergence.”
14 Müldür, from the European Commission’s DG for research, claims that, failing an absolute evaluation of knowledge production activities, the comparative method currently seems “the only coherent method possible for evaluating the results of efforts in this field” (Müldür 2000, 191). In fact, since 1999, the OECD has supplied a quantitative evaluation of the information-based economy and, since 2001, the Commission has published an Innovation Scoreboard, which supports the standardization of innovation policies (Bruno 2008, 200). The next phase, building on these synthetic indexes established at the national level, is to issue comparative performance indicators, both by institution and by individual, thereby establishing the conditions for the performance evaluation of “researcher-entrepreneurs.” The Shanghai classification provides one example of this at the institutional level. In the United States, it has also become common to classify researchers from the same discipline according to ad hoc criteria.
On Accumulating “Citation Capital,” or When the Number Devours Science
15 The quantification of research activities is a decisive moment in the process of transforming information into a commodity. It makes it possible to organize a knowledge market, evaluate its productivity, and create conditions for a new symbolic accumulation. Performance indicators have proliferated spectacularly over the past few years in response to this need for measurement. Several commercial companies now supply scientific publication databases that make it possible for universities, research institutions, and researchers to compare themselves with others. Although developed on the basis of databases from the scientific disciplines, these databases count as research activity only work published in indexed journals. Thomson-Reuters is the most frequently used of these databases worldwide. Publications in journals figuring in the SCI, SSCI, and AHCI are indexed according to researcher and research institution. Other competing companies offer alternative collections, or collections specialized in certain fields. The assessment market is also rapidly expanding.
16 In these assessments, standardization is based purely on articles published in selected peer-reviewed journals. This journal monopoly is primarily the result of the value created by “peer review,” which is supposed to objectively guarantee a homogeneous quality. As with all commodities (including in this field), homogeneity is a necessary condition for ensuring the fluidity of transactions and the pertinence of cardinal rankings. Nevertheless, since the publication of an article selected by peers does not necessarily mean that it will be of use to others, a second inventory is developed to supplement the first, namely an inventory of references to an article in other peer-reviewed articles. In other words, it is not enough just to publish or produce articles for the journal market; this product also has to prove useful as input for other knowledge producers when placed on the market. As such, the primary purpose of research becomes its consumption by other research activities. Since it comes full circle, so to speak, the citation market enjoys self-sustaining growth. This is why the “impact factor” is attributed a growing importance, as it enables researcher-entrepreneurs to accumulate citation capital.
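The journal-level “impact factor” invoked here has a simple standard definition: the citations received in a given year by a journal’s articles from the two preceding years, divided by the number of citable items the journal published in those two years. A minimal sketch of that calculation, using entirely hypothetical figures:

```python
# Sketch of the standard two-year journal impact factor.
# All figures below are hypothetical, chosen only for illustration.

def impact_factor(citations_in_year, items_published, year):
    """Citations received in `year` to items published in the two
    preceding years, divided by the number of citable items
    published in those two years."""
    cited = sum(citations_in_year.get((year, y), 0) for y in (year - 1, year - 2))
    published = sum(items_published.get(y, 0) for y in (year - 1, year - 2))
    return cited / published if published else 0.0

# Hypothetical journal: (citing_year, cited_year) -> citation count
citations = {(2008, 2007): 60, (2008, 2006): 40}
published = {2007: 50, 2006: 50}

print(impact_factor(citations, published, 2008))  # 100 citations / 100 items = 1.0
```

The sketch makes the essay’s point concrete: the indicator rewards a journal not for what its articles contain, but for how quickly they are consumed as inputs by other articles.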
17 Estimates of the number of scientific journals in existence worldwide vary widely. Conservative estimates suggest 40,000 scientific journals, while the most ambitious estimate gives a figure of 135,000 active titles. We estimate that the annual number of citations to articles that have appeared in these journals is approximately fifteen times the number of articles published. Thomson-Reuters data indicates that three million indexed articles were published in the United States between 1998 and 2008, receiving 42 million citations. As the number of journals tends to increase, the number of articles published annually and, even more significantly, the annual volume of citations will grow exponentially.
18 In 2001, it was estimated that an average of two million articles were published annually in approximately 20,000 scientific journals (across all disciplines). Each year, Thomson Scientific in Philadelphia enters information on thirty million citations from 1.3 million articles published in a wide range of fields. A significant number of these articles are never cited. However, because an article published in an indexed journal has a higher signal value on the information market than a book, even without being cited, the bubble of scientific journals, and consequently the bubble of indexed articles, continues to expand. These assessments have now become generalized, and search engines calculate the impact of each researcher based on indices such as Hirsch’s h-index and/or Egghe’s g-index.
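For readers unfamiliar with these two indicators: the h-index is the largest h such that h of a researcher’s articles each have at least h citations, while Egghe’s g-index is the largest g such that the g most-cited articles together have at least g² citations. A minimal sketch with hypothetical citation counts (note that in this sketch the g-index is capped at the number of articles):

```python
# Sketch of the h-index (Hirsch) and g-index (Egghe), computed
# from a researcher's per-article citation counts.
# The citation counts used below are hypothetical.

def h_index(citations):
    """Largest h such that h articles each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
    return h

def g_index(citations):
    """Largest g such that the g most-cited articles together
    have at least g * g citations (capped at the article count)."""
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

cites = [10, 8, 5, 4, 3, 0]  # hypothetical citation counts
print(h_index(cites))  # 4 (four articles with at least 4 citations each)
print(g_index(cites))  # 5 (top 5 articles total 30 citations, and 30 >= 25)
```

Both indices depend only on the distribution of citation counts, not on the content of the work, which is precisely what allows them to function as the “citation capital” discussed below.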
19 Academic productivity is, in fact, regularly evaluated on the basis of ratios. The effectiveness of such ratios in evaluating the “value” of an article is no greater than that of the ratios used for the Dow Jones or CAC 40 indices. In this respect, there are certain similarities between the functional aspects of both the financial and knowledge markets. Financial markets function according to evaluations based on judgment, opinion, or belief (Orléan 2005). These evaluations do not reflect real values, and the ex ante objectivity hypothesis of the base value is unfounded. The situation for scientific evaluations is quite similar.
20 In an article that appeared in the midst of the university lecturers’ strike in February 2009, the author shows future researcher-entrepreneurs which avenues to pursue in order to increase the value of their knowledge production activity. Increased academic productivity, as measured by citation impact indicators, no longer depends only on the effort involved in producing information. It requires an upstream strategy to optimize this effort and to maximize downstream results as measured by classifications in each discipline (at a national or international level). By following the suggestions on websites such as academicproductivity.com, we can create the portrait of a ranking “champion,” who is not necessarily someone who devotes most of his or her time to research. In fact, the site advises researchers “not to research, but to publish papers.”
21 Perron looks at the critiques formulated by Peter Lawrence, a zoologist at Cambridge University, and reminds us that,
as soon as bibliographic indicators are taken as performance indicators and decision-making tools, they cease to be a measurement tool and become a final objective guiding the behavior of those involved. It is, in fact, a remarkable scenario in which scientific observation has completely disrupted the environment it is observing: it is bibliographic performance that becomes a priority objective, and no longer scientific discovery.
23 In order to maximize this publication objective, it is essential to avoid publishing books. It is better to publish shorter and shorter papers, to pare ideas down to the smallest possible publishable research unit, and to endlessly recycle one’s thesis. In fact, to facilitate this recycling process, it is necessary, from the outset, to conceive of the thesis as a collection of “standard papers.” In addition to this production strategy, one must also pursue a publication strategy. This can sometimes take up more time than that spent churning out one article after another, each one reconstituted from a single “master” article.
24 Finally, a researcher’s productivity is measured based on a fictitious price. The world of research adapts to this price creation requirement, constrained by the choice between “publishing or perishing.” With researchers in continental Europe adopting this rather Anglo-Saxon approach to research activity, and with the increasing arrival of researchers from emerging nations, who adapt to this system with remarkable ease, the scientific publication market is literally exploding. This is what led Richard Monastersky to speak of “the number that’s devouring science” in the October 2005 edition of the Chronicle of Higher Education (Monastersky 2005).
25 This close resemblance between evaluation modalities in the financial markets and in knowledge production activities enables us to better understand how the new accumulation regime of cognitive capitalism could experience evolutions similar to those of the financial markets. We note, moreover, the proliferation of private evaluation agencies, mostly modeled after international credit-rating agencies (Standard & Poor’s and others). The evaluations provided by these agencies are no more objective or reliable than those of their financial market counterparts.
26 These agencies evaluate public research institutions, as well as the researchers affiliated with these establishments, without differentiating between research activities for the public good and those for the private good. One of the characteristics of good governance, or liberal governance, is thus at play here: the public-private hierarchy disappears. It will not come as a surprise to see the appearance of financial services aimed at insuring intangible investment risks, as well as derivative products devised and developed on the basis of citation capital. We can, therefore, imagine a market in which knowledge would be exchanged at fictitious prices, and where we would proceed to keep accounts of its accumulation. Uncertain values, detached from their substance and de-contextualized, will be exchanged on these markets, but these values will make it possible to maintain the market illusion of having, once again, succeeded in transforming a public good into a private good.
In Conclusion: Toward an Ignorant Society?
27 The fiction of knowledge as a commodity thus has real effects, forming the basis for the knowledge market (Azam 2007, 112). In this sense, the current reforms to universities and research organizations, as well as to the status of lecturer-researchers in France, for example, do not represent a departure from the past, but instead represent a crucial step in terms of integration into the European Research Area while, at the same time, consolidating the basis for a certain type of cognitive capitalism. Is this project sustainable, however? To include knowledge in the market mechanism like labor is to subjugate it to market laws and the demands of capital appreciation. Polanyi correctly maintains that a market that operates on the basis of these fictitious commodities puts society itself in peril. As with knowledge and labor, the market economy perpetuates itself as a result of that which cannot be commodified, which it cannot fully integrate into the commodified space. Total appropriation of knowledge would signify its sterilization in the short-term, as some scientists claim, due to the proliferation of insignificant clone articles in scientific journals.
28 Knowledge is intrinsically a collective good, because it is for the most part the product of cumulative and collective processes. Even if private appropriation of knowledge is possible under a regime of private capital accumulation, this does not mean that submitting the conditions necessary for producing knowledge to a mindset of capital appreciation does not have negative consequences on creative potential. Referring to La Fontaine’s fable of the wolf and the dog, Philippe d’Iribarne clarifies the effectiveness of this formal non-submission: “University professors are in a deplorable situation. . . . But they are free. They design their courses as they see fit, conduct research they deem necessary. And although some do not do much, the very fact that they are not reprimanded is proof that the majority, who do work hard, do so of their own will, without constraints.” 
29 It is legitimate to question the reasons that led to the implementation of this steamroller of a system, the productive efficiency of which is debatable to say the least. The first reason probably lies in the irrationality of capital, in its desire to pursue its drive to expand and to subject the whole of society to market principles even if, ultimately, this subjugation may be less effective than previous arrangements. After all, the financial crisis of 2008 reminded us of the narrow limits of the rationality of capitalism’s key actors. There is no reason why those governing cognitive capital should be more rational than those governing financial capital. Perhaps by allowing this commodification of the living being to play out to its logical conclusion, we could exhaust the dynamic of capital accumulation—but this would probably lead to the end of humanity itself!
30 The second reason may be found in the disdain shown for anything that is not measurable or quantifiable, as is frequently the case with common-good products. This disdain is often expressed through the demand for equality of conditions achieved by means of a generalized leveling. This demand is frequently manipulated by liberal populists, as in Nicolas Sarkozy’s January 22, 2009 speech launching “a national innovation and research strategy.” After exclaiming loud and clear that “frankly, without evaluation, there is no performance,” Sarkozy delivered a fatal blow to the university presidents and directors of research organizations who were listening, dumbfounded, at the Elysée Palace:
Someone needs to explain it to me! How we have more researchers with permanent appointments, fewer publications and, I’m sorry, I don’t mean to be mean, but with a comparable budget, a French researcher publishes 30% to 50% less than a British researcher in certain fields. Obviously, if you do not want to acknowledge this, I thank you for coming, but there is a certain clarity, this is heating up . . . We can continue, we can write . . . [etc.]
32 The disdain on display in these words is in line with popular and atavistic stereotypes of civil servants and researchers, in short, of those well-off people who operate outside the market. Manipulated by market populists, the legitimate aspiration for equality of conditions that is at the heart of democracy creates an environment that favors the legitimization of the entirely quantifiable and the entirely measurable, as defined by performance criteria. The effectiveness of these criteria lies, above all, in bringing about subjugation. In this sense, administrative standardization, under the guise of technical neutrality and scientific objectivity, is now the primary vector of ignorance in society.
This formulation refers to concepts developed by Marx in an unpublished chapter of Das Kapital, where he explains that, at the beginning (of capitalism), the domination of capital over labor remains formal and external to the labor process, which remains, on the whole, organized according to modalities that existed prior to the appearance of capitalism. It is only with the appearance of large-scale industry that the control of capital over labor will exert itself from within the labor process, through the objectification of rhythms and the discipline enabled by machines. Marx explains that the subjugation of labor to capital, which was once formal, now becomes real (editorial note).
By visiting, for example, quadsearch.csd.auth.gr/index.php?lan=1&s=2 and entering your name in the search box, you can see your “impact value” according to these two indicators.
This article (Chamayou 2009) is discussed in the next issue to appear of this journal.
Le Monde, February 19, 2009. This article is considered in the third part of this issue.