This article aims to contribute to the current debate on methods in the sociology of science, and particularly to the evaluation of the anti-differentiationist thesis, according to which science is indistinguishable from other human activities, from pseudoscience, and from non-science – a thesis criticized as “ordinarism” by Mario Bunge (2001). The article is based on an empirical study of measurements of the speed of light produced between 1676 and 1983. In the introduction, we correct two errors concerning the relationship between Mertonian and post-Mertonian sociology of science: first, that constructivist sociology compensates for Mertonism’s lack of interest in scientific knowledge; and second, that Mertonism is differentiationist by nature, while constructivism is anti-differentiationist (Section 1). We can then ask whether post-Mertonian constructivist sociology has not abandoned the study of whole areas of science which make it appear a highly differentiated activity. The different methods used for determining the speed of light are presented in Section 2. The empirical data are then compared with each of the key statements of constructivism, which are rewritten as required (Section 3). These rewritten statements finally serve as the basis for an outline of an internalist sociology of science, illustrated in particular by a review of the concept of “scientific competition” (Section 4).
1 – The Sociology of Mertonian Science and Constructivism
The post-Mertonian sociology of science, which I shall term “constructivist sociology” in view of its predominant orientation, set itself the objective of accounting for the construction of scientific knowledge. For constructivism, science is a linguistic and social construction like any other. Much has been written on construction, but much less about the non-differentiation of activities. Researchers are human. They are fallible, make mistakes, are driven by self-interest, and are motivated by glory and honor. They watch over their territory, wage war against their opponents, and occasionally raise armies to prevail. On this view, the rational quest for objective knowledge would be a myth, created to dupe the general public. As Karin Knorr-Cetina notes, “No interesting epistemological difference could be identified between the pursuit of truth and the pursuit of power” (Knorr-Cetina 1995, 151).
From the central thesis:
0. There is no difference between science and other social activities,
four statements are derived:
1. The researcher produces literature as does any writer;
2. There is no objective truth – facts are constantly constructed and deconstructed;
3. The researcher negotiates the evidence and the facts as would a politician;
4. The researcher wins the consent of his peers through persuasion.
We find some or all of these arguments in Latour and Woolgar (1979), Knorr-Cetina (1981), Knorr-Cetina and Mulkay (1983), Restivo (1985), Lynch (1993), Lenoir (1997), Restivo (2005), and Rehg (2009). Constructivist sociology of science has come under fire from, among others, Bunge (1991/2, 2001, 2012), Boudon (1994), Gingras (1995), Bouveresse (1999), Dubois (2001), Raynaud (2003) and Boghossian (2006). Among the objections raised are the questionable choice of units of time and place (observations of laboratories, over the very short term), the underestimation of the role of nature and reason (which occur little or not at all in the production of scientific knowledge), confusion between science and technology, and radicalism, examples of which are the “Great Divide,” the “Flat Earth,” and so on.
Faced with the magnitude of these criticisms, some sociologists of science attempt today to revive the Mertonian program. The sociology of science is, so to speak, in midstream. Unfortunately, Mertonian positions are not always correctly perceived. I hope to show this with a report on a recent work, in which Mathieu Quet writes:
“Mertonian sociology remains deeply ‘differentiationist.’ It declares that science is fundamentally different from other cognitive activities. Therefore, the Mertonian approach refuses to analyze the cognitive content of science, abandoning this task to epistemology.”
We can accept this diagnosis, with the exception of the logical connective “therefore.” In reality, the Mertonian program has also produced results opposed to the specificity of scientific activity. While the constructivist sociology of science, and in particular the sociology of scientific knowledge, has argued that science is an undifferentiated activity, it is not impossible for the contrary position, taking into account the specific characteristics of scientific knowledge, to be defended on an empirical basis.
Let us examine the relationship between the Mertonian program and the specificity of science. Certain research has argued that science is a differentiated activity in the sense, for example, that there is some correspondence between the type of work performed in a laboratory and its social organization (Shinn 1980). Nevertheless, other studies have come to the opposite conclusion. Thus, the criteria of the scientific professions (Storer 1966) apply to other professions, such as medicine, law and architecture. Neither are bureaucracy, oligarchy, or “adhocracy” (Whitley 1984) specific to scientific organizations. As for the unequal distribution of scientific productivity (Price 1963), it has long been known that this is of a nature comparable to that which affects revenues. There is, then, no necessary relationship between the idea of a specificity of scientific activity and the Mertonian approach, which, according to common opinion, eschews the analysis of content.
Symmetrically, there is no reason to think there is a stable relationship between the undifferentiated conception of science and the sociology of scientific knowledge. This association is at best a contingent historical relationship, promoted by constructivism.
Having shown that the divide – scientific activity as differentiated under Mertonism versus undifferentiated under constructivism – is anything but necessary, we can now ask whether constructivism has not left unexamined certain aspects of science which would have made it appear a highly differentiated activity.
The following study, based upon the historical sociology of physical optics, focuses on determinations of the speed of light made between 1676 and 1983. Given the question being asked, this field is exemplary for several reasons:
1) In the physical sciences, measurements are a regular product of the researcher’s work. A sociology of measurement is, therefore, well placed to identify certain features of scientific activity. Measurements of the speed of light readily lend themselves to a study of this type because any such determination is always expressed as just two numbers – the measurement, and its uncertainty.
2) Although c is a constant, this article is not a study of the physical constant itself, which would yield results too specific to be generalized to other areas. The speed of light, which is now a constant by definition, only acquired the status of a primary constant in the 1940s, when the hypothesis of variation in c was rejected by Birge (1941) and Dorsey (1944). This means that 87% of the series of measurements of c – 268 published values in 307 years – are measurements of a common concept, not measurements of a physical constant. Moreover, as we shall see, there is nothing to distinguish the behavior of physicists concerning the measurements between the periods before and after c acquired this status as a constant, either primary or by definition. The adoption of methods producing the least uncertainty, and the consequent rejection of methods subject to a large uncertainty, is a typical feature of the attitude of researchers, whether in regard to a defined constant or not. The same choices govern series of measurements relating to variations such as the precession of the perihelion of Mercury, the decreasing diameter of the Sun, and the increasing distance between the Earth and the Moon.
3) One of the pitfalls of the post-Mertonian sociology of science lies in the choice of time scales. Scientific results are constructed in the long term. Studying the activity of science over the short term – laboratory practices, for example – only gives an account of the phases of research in which the researchers do not yet have the knowledge they seek. This is not uninteresting in itself, but the sociologist thus obliterates any reference to the rationality and objectivity that direct scientific developments over the long term. As any determination of c is expressed as just two numbers – the measure itself and its uncertainty – it is possible to study measurements of the speed of light over the long term, and thereby to correct for this flaw.
4) We have available to us a nearly complete record of the set of measurements made between that of Ole Rømer in 1676, and the laser measurements made by Woods, Shotton, and Rowley in 1978. While the findings of a study are always questionable where empirical data are truncated or filtered, in this case, no selection is necessary, and all data are included.
This highly favorable situation concerning determinations of the speed of light is propitious for yielding particularly stable and reliable empirical results.
2 – Methods for the Determination of c
The history of successive determinations of the speed of light can be summarized in broad strokes. Between antiquity and 1676, arguments were advanced for both a finite and an infinite speed of light. Rømer reported his first estimate of the speed of light in 1676, which would then be progressively updated with the introduction of new methods – terrestrial optical measurements devised by Fizeau, Foucault, and Michelson, and then electromagnetic measurements designed by Weber, Hertz, Essen, and Froome. Laser measurements, developed at the National Bureau of Standards, led to a qualitative leap forward. The stability of the value obtained in successive experiments from 1972 to 1978, c = 299,792.45898 ± 0.0002 kms-1, led to the transformation of the primary constant into a constant of definition; on this basis, the seventeenth General Conference on Weights and Measures of 1983 took the speed of light in vacuum as the foundation for the definition of the meter.
The determinations of c will be considered in the long term and across multiple teams of participants. To position itself within the sociology of scientific knowledge, the study will focus on the values of c and their corresponding uncertainties. It is useful to recall here that a measurement is a set of operations whose object is to determine the value of a quantity. Two hundred determinations of c have been collated in the literature – optical and astronomical determinations: Newcomb, 1886; Birge, 1934; Cohen, 1939; Boyer, 1941; Dorsey, 1944 and Kulikov, 1949 and 1964, who includes an almost complete list of values up to the date of publication; Taton, 1978; Tobin and Lequeux, 2002; Bobis and Lequeux, 2008; Bogaert, 2011; electromagnetic determinations: Bergstrand, 1956; Dupeyrat, 1958; Strong, 1958; Mulligan, 1952 and 1957; Atten and Pestre, 2002; laser determinations: Mulligan, 1976 and Eichler, 1993. For each determination of c, we have collated the following:
1) The speed of light in vacuum. The value of c was obtained sometimes directly, sometimes converted from the speed of light in air for terrestrial measurements – a correction of +67 kms-1 in Albert Michelson’s experiments. In the Rømer series, it is obtained from the delayed emergence of a satellite of Jupiter, the time taken by light to travel the diameter of the Earth’s orbit (twice 8’12”). In the Bradley series, it is deduced from the aberration constant (about 20″.495 of arc). In the Weber series, it is derived from Maxwell’s formula. Wherever sources quote different values, they have been corrected from the original publications.
2) The uncertainty framing these values. No measurement is perfectly accurate. The expression of a measurement therefore requires the candidate value to be accompanied by a declaration of its uncertainty. The theory of error, born in the coeval works of Laplace and Gauss in 1810, was progressively applied to physical measurements, until none were considered valid without such a declaration. The first mention of an error on the Bradley series is attributable to Lindenau, Lundahl, and von Struve in 1842. The theory of error gave birth to a science – metrology – of which I shall just recall the main concepts (for details, see JCGM 2012). Metrology distinguishes two notions: random error, which is unpredictable, and systematic error, which is constant across a series of measurements. Accuracy is a combination of precision and trueness: precision refers to the absence of random error, and trueness to the absence of systematic error. Uncertainty frames measurement errors. We distinguish between Type A uncertainty, which is estimated using statistical methods (often the probable error), and Type B uncertainty, which is estimated by other means. Random errors are normally framed by a Type A evaluation, while systematic errors are more difficult to detect. It has therefore happened that the margin of uncertainty at a given time did not encompass the true value of c. In addition, uncertainties are often subject to discussion. Where an uncertainty has been corrected, the correction is noted in the ‘author’ column (see below, Table 1). Where the uncertainty was not estimated, we have not recalculated it: several attempts have been made to show that, lacking this information, such measurements are not amenable to statistical processing. The values with no associated uncertainty are representative of a “pioneer” state of research. Where the uncertainty is available, it is expressed as an absolute uncertainty in kms-1.
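The Type A evaluation mentioned above can be illustrated with a minimal sketch, assuming a small set of repeated readings (the numbers are invented for illustration and belong to no historical series):

```python
import math

# Hypothetical repeated readings of a speed (km/s); invented values,
# used only to illustrate a Type A evaluation of uncertainty.
readings = [299795.2, 299790.1, 299793.8, 299791.5, 299794.0]

n = len(readings)
mean = sum(readings) / n
# Sample standard deviation (n - 1 in the denominator).
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))
# Type A standard uncertainty: standard deviation of the mean.
u_a = s / math.sqrt(n)
# The "probable error" favored by early metrologists is the half-width
# of the 50% confidence interval, about 0.6745 times the standard error.
probable_error = 0.6745 * u_a

print(f"{mean:.1f} ± {u_a:.1f} km/s (probable error ± {probable_error:.1f})")
```

The probable error was the standard way of framing random error in nineteenth- and early twentieth-century physics; modern practice reports the standard uncertainty instead.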
3) The team that made the determination. The original publications are not cited in the bibliography for reasons of space. More recent work can be sourced using the dates and names of authors (Table 1). For earlier works, we judged it useful to complement this information with the title of the work.
4) The method used. No fewer than twelve different methods have been used to measure the speed of light. Strong (1958, 451-467) and Bortfeldt (1992, 3-37) provide overviews of these methods. Each gives rise to a “series” that bears the name of its inventor. Within a series, the initial method may have undergone considerable improvements and refinements. By definition, the series in competition at any time t constitute the “research front.” The determinations can be grouped into broad categories: astronomical methods (1, 2), terrestrial optical methods (3, 4, 7), the method of permittivity (5), radio methods (6, 8, 11), measurement of gamma radiation (9), and laser methods (12). We have preferred to present them in chronological order.
1. The method of Ole Rømer, employed between 1676 and 1909, was the first to fix a value for the speed of light. Io, the innermost of the Galilean moons, orbits Jupiter at a near-constant speed. The interval between the moon entering and exiting the planet’s cone of shadow depends on whether the Earth is moving towards or away from Jupiter. The maximum delay is equal to the time taken for light to cross the diameter of the Earth’s orbit. If the orbital properties of the Earth are known, the speed of light can be deduced. In the seventeenth and eighteenth centuries, this method yielded results vitiated by errors resulting from a poor understanding of the Earth’s movement.
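As a rough numerical check of this reasoning, here is the arithmetic of the Rømer method using the modern value of the astronomical unit, which Rømer did not have (a sketch for illustration only):

```python
# Order-of-magnitude check of the Rømer method: light crosses the
# diameter of the Earth's orbit in twice 8'12", i.e. 984 s. The
# astronomical unit below is the modern value, unknown in 1676.
AU_KM = 1.496e8              # astronomical unit, km (modern value)
delay_s = 2 * (8 * 60 + 12)  # twice 8 min 12 s = 984 s

c_est = 2 * AU_KM / delay_s  # diameter of orbit / crossing time
print(f"c ≈ {c_est:,.0f} km/s")  # about 304,000 km/s
```

The much lower seventeenth-century values, around 214,000 kms-1, stem largely from the poor contemporary knowledge of the dimensions of the Earth’s orbit mentioned above.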
2. The method of James Bradley, in use between 1729 and 1962, is based on the measurement of stellar aberration – the apparent displacement of a star in the direction of motion of the observer. In the context of classical mechanics, this is interpreted as the composition of the velocity of light from the star with that of the observer. A star in direction θ appears in the direction θ’. For a star at the zenith, (θ - θ’) = v / c · sin θ’. The main component of this aberration is the annual aberration resulting from the Earth’s orbit. It is a fundamental constant, denoted κ. Equating the Earth’s orbit to a circle of radius a, the speed of the Earth is written v = 2πa / T, where T is the duration of the year in seconds. Rewriting the aberration formula gives κ = 2πa/cT. The aberration constant has been calculated from a large number of passes across the zenith; it is of the order of 20″.495. Taking an orbital speed for the Earth of v = 30 kms-1, the speed of light is deduced.
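The deduction of c from κ = 2πa/cT can be checked numerically; the values of a and T below are modern round figures, used only to verify orders of magnitude:

```python
import math

# Deducing c from the aberration constant via κ = 2πa / (cT),
# with κ converted from arcseconds to radians.
kappa = 20.495 * math.pi / (180 * 3600)  # aberration constant, radians
a = 1.496e8                              # radius of Earth's orbit, km
T = 365.25 * 86400                       # length of the year, s

v = 2 * math.pi * a / T  # orbital speed of the Earth, ≈ 30 km/s
c = v / kappa            # speed of light, km/s
print(f"v ≈ {v:.2f} km/s, c ≈ {c:,.0f} km/s")
```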
3. The method invented by Hippolyte Fizeau, first used in 1849 and last used in 1902, was the first direct terrestrial optical measurement. A light beam is made intermittent by passing it through the gaps of a rotating toothed wheel. The beam is directed by collimating lenses onto a mirror several kilometers distant. When the light returns, it meets the disc at the same point in space, through which it either passes or not, depending on whether it encounters an empty or filled gap. The disc’s speed is progressively increased until the time taken for light to cover the optical path is equal to the time taken for a full gap in the wheel to be replaced by the next one. Knowing the size and speed of the disc, the speed of light is deduced. Marie-Alfred Cornu improved this method by replacing the naked eye with a mechanical recording device.
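A worked example of the toothed-wheel arithmetic, using the parameters commonly quoted for Fizeau’s 1849 experiment (treat the figures as illustrative):

```python
# First eclipse occurs when the round-trip time of the light equals the
# time for a gap to be replaced by the next tooth: 2D/c = 1/(2Nf),
# hence c = 4·D·N·f. Parameters are those commonly quoted for
# Fizeau's 1849 run between Suresnes and Montmartre.
D = 8633  # distance from wheel to mirror, m
N = 720   # number of teeth on the wheel
f = 12.6  # rotation rate at first eclipse, rev/s

c = 4 * D * N * f  # m/s
print(f"c ≈ {c / 1000:,.0f} km/s")  # about 313,274 km/s
```

The result, roughly 313,000 kms-1, matches the value historically attributed to Fizeau.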
4. Léon Foucault’s method, which yielded results from 1862 to 1941, is a development of an idea originally attributed to François Arago, in which a light beam is directed onto a rotating mirror which then reflects the beam onto a set of plane mirrors before making the reverse path. The image of the light spot will stabilize when the time taken for light to traverse the entire optical path is equal to the time for one face of the mirror to replace the previous one. The speed of light is deduced from the rotational speed of the mirror when the image is stable. In 1920, Albert Michelson proposed several improvements to this arrangement using polygonal mirrors.
5. The method developed by Wilhelm Weber and Rudolf Kohlrausch was employed between 1856 and 1906 to determine the speed of light from the simple relationship between electromagnetic and electrostatic units. The full meaning of the empirical finding only became clear in 1864, when Maxwell advanced the hypothesis that light is an electromagnetic wave. One of Maxwell’s equations yields ε0μ0c² = 1. The speed of light is thus deduced from the dielectric permittivity ε0 and the magnetic permeability μ0 of the vacuum.
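The deduction of c from the Maxwell relation can be verified with the modern values of the two vacuum constants (Weber and Kohlrausch themselves worked from the measured ratio of units):

```python
import math

# Deducing c from ε0·μ0·c² = 1, i.e. c = 1/sqrt(ε0·μ0),
# using modern values of the two vacuum constants.
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, H/m

c = 1 / math.sqrt(eps0 * mu0)
print(f"c ≈ {c:.0f} m/s")  # ≈ 299,792,458 m/s
```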
6. The method invented by Heinrich Hertz, used from 1888 to 1958, derives from his experimental discovery of electromagnetic waves. Electrical pulses supply an electric dipole oscillator. A zinc mirror is placed facing the oscillator, at a distance which is varied until a standing wave is produced. By moving a resonator between the mirror and the oscillator, the nodes (extinction) and antinodes (sparks) of the standing wave are detected. The distance between two nodes gives the half-wavelength λ/2. The frequency ν of the oscillator being known, the velocity of the Hertzian wave, which is equal to the speed of light, is deduced from λν = c. This method became more precise only in the twentieth century, with improvements proposed by Louis Essen, to permit the frequency of the waves to be adjusted through a resonant cavity, and Keith Davy Froome, to adjust the wavelength using an interferometer.
7. The method developed by August Karolus, giving results between 1928 and 1967, can be seen as a reinterpretation of the Fizeau and Foucault methods, in which the toothed wheel or the rotating mirror is replaced by a Kerr cell, using the property of carbon disulphide to become birefringent when subjected to an electric field. The plates of the Kerr cell are supplied with an alternating current of high frequency, which rapidly modulates the intensity of the light beam – approximately 20,000,000 times per second. The speed of light is then measured by the conventional procedure. In 1951, Erik Bergstrand improved the sensitivity of the device using a photomultiplier.
8. The method proposed by Captain Carl I. Aslakson of the U.S. Coast and Geodetic Survey gave the results listed here between 1949 and 1956. It is a determination of c in the microwave domain using SHORAN (SHOrt RAnge Navigation) radar. An aircraft C emits waves of different frequencies νA and νB. Two ground bases A and B receive and return these signals to the aircraft. The altitude of the plane and the distance between the bases being known, the distances of the airplane from the two bases, AC and BC, are known. The speed of the electromagnetic wave is then calculated from the delay between the two pulse trains.
9. The method of Marshall R. Cleland and P.S. Jastram (1951) uses coincidence counters for measuring the speed of propagation of γ-rays emitted by a 64Cu source. One of the two counters is fixed to the source. The other counter is moved on an optical bench. The speed of propagation of the γ-rays is derived from the difference in transit times.
10. The Rank method (1952-1957) uses interference applied to multiple waves. The absorption lines of hydrogen cyanide (HCN) are subject to two measurements: their wavenumber n (the reciprocal of λ) is determined with a Fabry-Perot interferometer, and their frequency ν is measured independently. The speed of light is then deduced from the ratio c = ν/n = λν.
11. The Florman method (Edwin F. Florman 1955) studies the interference produced by two sources of radio waves. Two fixed receivers R1 and R2 are positioned, and two transmitters E1 and E2 are first placed on the segment R1R2. The equal phase line is then detected, along which a transmitter can move without affecting reception by the two receivers. The phase difference measurement is used to calculate the wavelength and the phase velocity of the electromagnetic wave.
12. Kenneth M. Evenson’s method, which gave results between 1972 and 1978, was the first to take advantage of the invention of the laser in the 1960s, and is the most recent addition to the list of methods. The basic idea is to measure separately the wavelength and frequency of a narrow emission line, such as the red He-Ne line at 633 nm. The wavelength is measured by interferometry. The laser frequency is stabilized by a “frequency synthesis chain,” which allows a frequency ν1 to be locked to a harmonic frequency ν2, itself locked to a harmonic frequency ν3, and so on, up to the known frequency of a crystal oscillator. This technique of frequency synchronization reduced the relative uncertainty from 10⁻⁹ to 10⁻¹¹ in just a few years. Since the wavelength and frequency are known, their product gives the speed of light.
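The principle of the laser determinations – an independently measured wavelength multiplied by an independently measured frequency – can be sketched with rounded figures close to those reported for the methane-stabilized He-Ne line near 3.39 µm (illustrative values, not the exact published digits):

```python
# c as the product of an independently measured wavelength and frequency.
# Figures are rounded values close to those reported for the
# methane-stabilized He-Ne line near 3.39 µm; treat them as illustrative.
wavelength = 3.392231e-6  # m, from interferometry
frequency = 88.376182e12  # Hz, from the frequency synthesis chain

c = wavelength * frequency
print(f"c ≈ {c:.0f} m/s")  # close to 299,792,458 m/s
```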
In the 1950s, when around a dozen methods were being used to measure the speed of light, the comparison of the results provided by each method sparked a debate about what was actually being measured. It was noted that the measured velocities were not always of the same nature. The distinction introduced by Hamilton in 1839 between “phase velocity” and “group velocity” was brought to bear on the problem (Dupeyrat 1958, 559). Indeed, the various methods for determining the speed of light can be classified according to whether they measure phase velocities or group velocities, and according to the frequency domain in which they operate.
As can be seen from this table, most of the historical determinations of the speed of light focused on the visible range, to which research returned after a long detour through the exploration of electromagnetism.
Successive determinations of the speed of light are shown in Table 1. They can be represented on a graph, with the speed of light measured and its uncertainty on the ordinate.
Determinations of the Speed of Light
Convergence of values of the speed of light from 1840 to 1980
3 – Rectification of Statements P1… P4
Has the constructivist sociology of science, which aimed to account for scientific knowledge, achieved its objective? It has not, on this point, given the expected results. One could even say that, far from realizing its program, it has drawn attention to secondary features of scientific activity, abandoning those which actually constitute it. This can be demonstrated by examining each of the statements P1… P4 derived from thesis 0 above.
P1. The researcher makes a set of factual statements whose quality is explicitly linked to the states of the external world, which is sufficient to distinguish them from any other literary inscription.
One of the well-known theses of the constructivist sociology of science is that science is a “system of literary inscriptions.” This thesis goes beyond the idea that the scientist spends a significant portion of his or her time writing. It means that there is no difference in nature between a scientific text, a novel, and a poem. Examination of the texts relating to determinations of the speed of light shows that the experimenter records a measurement in order to establish a hypothesis or a result, linking it to an evaluation of the states of the objective world, which immediately distinguishes it from all literary inscriptions.
Consider the measurements made by Henri Perrotin between 1897 and 1904 using Fizeau’s toothed-wheel method (Figure 2).
A page of Perrotin’s notebook, photo by Marc Heller
We find in the notebook of May 1898 a list of measurements on the left-hand page, accompanied by qualitative observations on the right-hand page, the transcript of which reads:
March 14, 1898 (16) 295.48 (5) 279.18 Avg. 287.3. March 24 (12) 298.56 (11) 285.57 Avg. 292.1. April 20 (19) 297.80 (4) 281.55 Avg. 289.7. April 22 (19) 300.81 (10) 283.32 Avg. 292.1. April 27 (8) 300.88 (1) 274.71 Avg. 287.8. May 9 (15) 301.62 (9) 278.16 Avg. 289.9.
May 13, 1898 No. 1. Second and third rather good… No. 2. 1st rather good, others passable except the last. No. 3. Void series, no measurements. No. 4. 2nd and last mediocre, others fair. No. 5. Passable series. No. 6. Point undulating, very spread out, nothing clear. Glitch in the electro-recorder on the tenths pen.
The quality of measurements is judged on the instrumentation and the states of the world. This quest for adjustment of factual statements to the facts is a characteristic of science. If we call “referentiality” this property of the scientific text of referring to reality, then the scientific text is a “reference text with a systematic evidential vocation” (Berthelot 2003, 48). This criterion immediately distinguishes a scientific inscription from a literary one.
P2. Scientific facts are subject to a review process with a unidirectional and irreversible tendency. They are not constructed and deconstructed at will.
In Laboratory Life, Latour and Woolgar classify scientific statements into five types: conjectures or speculations (type 1), descriptions (type 2), non-definitive assertions (type 3), specialized facts (type 4), and facts (type 5). Researchers “construct” a fact by transforming a type 1 statement into a type 5 statement. They “deconstruct” a fact by degrading a type 5 statement into a type 1 statement. “Public science” would then go about erasing all traces of the construction process, particularly its less rational elements, such as negotiation, persuasion, and on-the-fly corrections, thus obscuring the versatile nature of scientific statements. Let us compare this thesis to the empirical data. The study of the determinations of the speed of light shows that there was both “construction” and “deconstruction” only in the preliminary, speculative phase of research. The room for maneuver that authors enjoyed between the Aristotelian argument for an infinite speed and the Arabic position of a finite speed was extinguished by Rømer. From the moment a numerical estimate was proposed, the determination of the speed of light entered an irreversible process, the essential properties of which are:
1) A speculative argument gives way to a quantitative determination. Take the first determination, attributed to Rømer. Delays in the emergence of Jupiter’s satellites had been observed at the Académie des Sciences since 1671. The first known document is a note written by Cassini on August 22, 1676, in which the emergence of November 16 is forecast to be ten minutes late. The handwritten note is lost, but two copies survive (Bobis and Lequeux 2008). It is on this basis that Rømer defended the thesis of the “successive propagation of light,” which Cassini refused, not understanding the origin of the observed delays. In 1678, while writing his Treatise on Light, published in 1690, Huygens, a supporter of Rømer’s theory, gave it a numerical estimate: “The speed of light is more than six hundred thousand times greater than that of sound” (1690, 9), which would be 230,000 kms-1. An infinite speed of light was defended for the last time in 1707 by Fontenelle. This was not a case of nostalgia for the days when thinkers could discourse freely on nature, relying on neither observation nor calculation. Fontenelle doubted Rømer’s idea because of the inadequacy of its observational data. He argued that neither Rømer nor Maraldi II had actually detected any variation in the delay of the emergence of Jupiter’s satellites: the Earth and Jupiter having elliptical orbits, the delay of emergence should not be constant. Not knowing that this effect was, in fact, negligible – owing to the low eccentricity of the Earth’s orbit – and masked by the inaccuracy of the measurement of time, Fontenelle concluded, “It appears therefore that we must renounce, though perhaps with regret, the clever and attractive hypothesis of successive propagation of light” (1707, 80). It is important to note that Fontenelle made no return to the qualitative arguments of Kepler, Descartes, or Hooke. From 1676 onwards, the history of the speed of light entered a special regime, in which speculation no longer had any validity.
2) Raw quantitative determinations, with no uncertainty, are superseded by measurements with a margin of uncertainty. Thus, from the moment when determinations of stellar aberration – the Bradley series – were accompanied by an estimate of the uncertainty, the values produced by Rømer were seen as mere indications of an order of magnitude, representative of a past state of research.
3) Any new method is abandoned when it produces values with a considerably greater uncertainty than those obtained on the research front. In 1955, Florman produced a result with an assigned uncertainty of ± 3.1 kms-1, compared to just ± 0.3 kms-1 in Bergstrand’s 1951 measurements. Also in 1951, Cleland and Jastram obtained an uncertainty of ± 15,000 kms-1 against Bergstrand’s ± 0.3 kms-1 in the same year. How could these two new methods ever expect to stay in the race? The choice of the least uncertainty is not unique to physical optics – it is a metrological principle. Bevington and Robinson write that when an experiment is repeated, “results gradually and asymptotically approach what we may accept with some confidence to be a reliable description of events” (2003, 1).
4) When a method is able to significantly reduce uncertainty on the research front, the methods that produce values outside the range of uncertainty are abandoned. The Foucault method supplanted the Rømer, Bradley, and Fizeau series in 1927 (± 20 kms-1). The Hertz method overshadowed the Foucault, Weber, and Karolus series in 1947 (± 3 kms-1). The Bergstrand method outperformed the Hertz, Aslakson, Rank, Cleland, and Florman series in 1957 (± 0.16 kms-1). The Evenson series definitively supplanted all other series in 1972 (± 0.0011 kms-1), with a hundredfold reduction in uncertainty on the research front, a result further improved in 1978 to ± 0.0002 kms-1. This selection of methods is at the origin of the convergence of measurements towards the true value y = c, and explains the asymptotic distribution of the measurements (Table 1, Figure 1, above). While one method overrides another because it is accompanied by a smaller uncertainty, we also see a parallel trend towards smaller error bars on the measurements themselves (Table 2, Figure 3). This direct relationship between the convergence of values and the reduction of uncertainty establishes the unidirectional and irreversible tendency of revisions of scientific knowledge.
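The selection rule running through points 3) and 4) – a series joins the research front only if it lowers the smallest uncertainty achieved so far – can be sketched as follows, using the dates and uncertainties quoted above:

```python
# Sketch of the selection rule: a series takes over the research front
# only if it reduces the smallest uncertainty achieved so far.
# Dates and uncertainties (kms-1) are those quoted in the text.
candidates = [
    ("Foucault", 1927, 20.0),
    ("Hertz", 1947, 3.0),
    ("Bergstrand", 1951, 0.3),
    ("Florman", 1955, 3.1),
    ("Bergstrand", 1957, 0.16),
    ("Evenson", 1972, 0.0011),
    ("Evenson", 1978, 0.0002),
]

front = float("inf")  # smallest uncertainty achieved so far
for name, year, delta in sorted(candidates, key=lambda x: x[1]):
    if delta < front:
        front = delta
        print(f"{year}: {name} takes the front (± {delta} kms-1)")
    else:
        print(f"{year}: {name} is abandoned (± {delta} ≥ ± {front} kms-1)")
```

Run on these data, the rule reproduces the sequence described above, including the rejection of the Florman series against the 1951 Bergstrand front.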
Progress in Reducing the Uncertainty of c
Reducing uncertainty in the measurement of c 1880-1980
P3. Hypotheses, and the series that carry them, are differentiated by experiment and by the collegial exercise of rational criticism, the foundation of the normative structure of science. They are not negotiated.
It is difficult to read, in the sequence of methods for determining the speed of light, any result of disorderly negotiation. The data show that a measurement method m1 replaces a method m2 if it provides an uncertainty Δ1 smaller than Δ2. It is on this basis that the hypotheses of a secular variation of the speed of light, advanced in the 1930s, were eliminated. The first of these, proposed by Maurice Gheury de Bray in 1927, postulated a linear decrease of 4 km/s per year, despite Marie-Alfred Cornu’s earlier protestations that his result had been misrepresented. Frank Edmonson (1934) hypothesized a sinusoidal variation of c with a period of about 40 years. Wold (1935) proposed a linear model. These ideas were swept aside in the 1940s, when the uncertainty was reduced by a factor of ten (Birge 1941, Dorsey 1944).
As methods are refined, the values asymptotically approach the true value. It cannot be excluded that, at a time when uncertainty is high, consecutive measurements may vary in the same direction, within the confidence interval, thus giving the illusion of a downward trend. It was mainly on the basis of the Bradley series that such a conviction was acquired. A comparison of the series shows that this decrease is entirely fortuitous – the Foucault, Karolus, Rank, and Evenson series all show upward trends, while those of Rømer, Weber, and Hertz show no trend at all.
Once the uncertainty is reduced, it is impossible to go back. Let us apply these hypotheses to the data of the Evenson series (uncertainty between ± 0.0002 and ± 0.0011 km/s). According to Gheury de Bray (1927), c should have decreased by 24 km/s between 1972 and 1978. This value is 16,000 times greater than the fluctuation recorded between the two dates. According to Edmonson (1934), the speed of light in vacuum is given by the equation c = 299,885 + 115 sin[(2π/40)(t − 1901)], whence c1978 = 299,909.097. The difference of +116.64 km/s from the experimental value does not fall within the margin of uncertainty. For Barry Setterfield (1981), the speed of light decreases exponentially. His data are badly biased. First of all, he sets Rømer’s value at 301,300 ± 200 km/s, while Kulikov (1964, 57), from which he took Rømer’s values, actually gives 215,000 km/s. Secondly, Rømer’s result was given without any uncertainty. In the Adversaria, Rømer wrote no more than that light takes one minute to travel 1,091 terrestrial diameters. Taking his unilateral declaration in the Journal des Sçavans of December 7, 1676, that the diameter of the earth is 3,000 (Parisian?) leagues, a speed of about 212,000 km/s can be calculated. In the Comptes-Rendus de l’Académie des Sciences, Fontenelle indicates that Rømer found a speed of light of 48,203 common French leagues per second, or about 214,000 km/s. Historians are inclined to retain this latter value, whose unit is specified, or a value rounded to 215,000 km/s without uncertainty. Setterfield’s first data point is thus located 86,300 km/s away from his curve. The ever-reducing uncertainty thus invalidates any hypothesis proposing variations of c.
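The order-of-magnitude argument against Gheury de Bray comes down to a few lines of arithmetic; the recorded fluctuation of 0.0015 km/s is an assumption on our part, read off the span of the Evenson-series uncertainties quoted above:

```python
# Arithmetic behind the rejection of the secular-variation hypotheses,
# using figures quoted in the text. The fluctuation of 0.0015 km/s is
# an assumed reading of the Evenson-series uncertainty span
# (0.0002 to 0.0011 km/s, i.e. of the order of a thousandth of a km/s).
predicted_drift = 4.0 * (1978 - 1972)   # de Bray: 4 km/s per year, over 6 years
recorded_fluctuation = 0.0015           # km/s, between the 1972 and 1978 values
ratio = predicted_drift / recorded_fluctuation
# predicted_drift = 24 km/s; ratio = 16,000, as stated in the text
```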
P4. A researcher may use strategies of persuasion (as opposed to conviction); they will not permanently gain the consent of their peers if they fail to meet the standards of scientificity.
Consider the long Bradley series, composed of 60 consecutive determinations between 1729 and 1927, to which we have added the determinations from 1927 to 1961 for the purposes of this discussion. Since this method was so widely used, one might expect that ever more refined protocols were developed, and that astronomers would have been able to reduce the uncertainty drastically. Nevertheless, all the values in this series come with a considerable uncertainty: greater than ± 250 km/s in 1842, and still ± 90 km/s in 1961.
The best value for the aberration constant (κ = 20″5120 ± 0″0031), equivalent to c = 299,560 ± 45 km/s, was obtained by Konstantin A. Kulikov, calculating the weighted average of 28,542 observations made at the Pulkovo Observatory between 1915 and 1929. Such a small uncertainty seems doubtful, given that all later measurements using this method have a much greater uncertainty. Let us examine the methodology. A system of conditional equations is proposed, where each line expresses the true latitude φ0 as a function of the observed latitude φ along with a number of corrective terms: dβ (the declination), dμδ (proper motion in declination), dN (nutation), dκ (the difference from the adopted value of the aberration constant κc), and π (the stellar parallax). The system is solved by eliminating the unknowns φ0 and dβ, giving dκ, from which we deduce the constant κ = κc + dκ (Kulikov 1964, 89-91). Despite the corrections made, Kulikov’s result appears off by −232 km/s. The uncertainty was thus considerably underestimated.
Should we then conclude that the Pulkovo astronomers forced their peers to admit the viability of their program using strategies of persuasion? To understand why this hypothesis has little meaning, we must return to a point of metrology. The Pulkovo Large Zenith Telescope ZTF-135 (D = 135 mm, f = 1760 mm), designed by Freiberg in 1904 on the Wanschaff-Zeiss VZT-1 model of 1898, was among the first-class instruments of the time (Sobolev 2005, 171). The theoretical angular resolution of an instrument, limited by diffraction, is written θ ≥ 1.22 λ0/D or, with D in millimeters, θ ≥ 122/D seconds of arc for λ0 = 510 nm. Neglecting turbulence and geometric aberrations, this telescope had a resolution of θ ≥ 0″904 (about one second of arc), a value nevertheless 292 times greater than the uncertainty given by Kulikov (± 0″0031). The apparent precision of the result published by Kulikov is a statistical artifact of the 28,542 observations made at the Pulkovo Observatory. The point is that very precise measurements are not necessarily accurate. They can be weakly scattered around a mean which is not the true value. No amount of correction can eliminate all deviations, and if these deviations are of a systematic nature, they will not be detected by statistical treatment. This describes the absolute limits of this method, which relied exclusively on repeated observations to reduce uncertainty. There is no strategy of persuasion here. The only striking human factor in the attitude of the Pulkovo astronomers is their excessive confidence in the capacity of 28,000 observations to produce an accurate result.
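Both points, the resolution limit and the precision/accuracy distinction, can be illustrated with a short sketch; the practical rule θ ≥ 122/D is the one quoted above, and the simulated systematic bias of −0.05 (in arbitrary units) is chosen purely for illustration:

```python
import math
import random

# 1. Resolution argument. Using the practical rule quoted in the text,
# theta >= 122 / D seconds of arc (D in mm), the single-observation
# resolution of the ZTF-135 dwarfs Kulikov's quoted uncertainty.
def resolution_arcsec(d_mm):
    return 122.0 / d_mm

theta_ztf = resolution_arcsec(135)      # ~0"904 for the ZTF-135
ratio = theta_ztf / 0.0031              # ~292 times Kulikov's +/- 0"0031

# 2. Precision vs accuracy. Averaging many observations shrinks the
# statistical scatter as 1/sqrt(N), but a systematic offset survives
# intact. The bias of -0.05 (arbitrary units) is purely illustrative.
random.seed(0)
true_value, bias = 0.0, -0.05
observations = [true_value + bias + random.gauss(0, 1.0) for _ in range(28000)]
mean = sum(observations) / len(observations)
# mean converges on (true_value + bias), not on true_value:
# no amount of repetition detects a systematic deviation.
```

The second half is the whole argument in miniature: 28,000 repetitions reduce the statistical scatter to a few thousandths, while the offset of −0.05 passes through the average untouched.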
So why did the Pulkovo astronomers plan such a Herculean task? In 1915, at the beginning of the observation campaign, the authoritative measurement of c was that of Simon Newcomb in 1882 (± 30 km/s), which did not quite halve the uncertainty of the value obtained by Nyrén and Wagner at Pulkovo in 1879 (± 50 km/s). In addition, the International Latitude Service had advocated the use of the Wanschaff Zenith Telescope in each observatory of its network, including Pulkovo, and this instrument was perfectly suited to the study of the aberration constant. Astronomers therefore had every incentive not to abandon the method used at Pulkovo from Otto von Struve to Oleksandr Orlov, of whom Zimmerman was a student (Rikun 2005, 41-42). It was hoped that Newcomb’s value would be bettered by multiplying observations and by the increased resolution of the ZTF-135 (D = 135 mm, θ ≥ 0″904) compared to the Wanschaff VZT-1 (D = 85 mm, θ ≥ 1″435). There was nothing irrational about this methodological design in 1915. However, by 1929, at the end of the observation campaign, the situation had completely changed, as other measurements of c had been published in the meantime. In 1927, Albert Michelson reduced the uncertainty to ± 4 km/s, so much so that, in the race against uncertainty at the research front, the Foucault series (± 30 km/s, then ± 4 km/s) overtook the Bradley series (± 50 km/s).
Let us compare the Foucault and Bradley series more precisely (Figure 4). In the Foucault series, Michelson, aided by Pease and Pearson, achieved ± 11 km/s in 1933 – a mediocre result due to the use of a partial vacuum tube, sensitive to temperature variations. Raymond Birge obtained ± 4 km/s in 1941. Overall, uncertainty using the Foucault method was reduced by a factor of 125 over 79 years (1862-1941). By comparison, in the Bradley series, Romanskaja obtained ± 100 km/s (± 0″007) in 1941, from 14,783 observations made on the Large Zenith Telescope at Pulkovo. Guinot did no better: ± 120 km/s (± 0″008) in 1959, and ± 90 km/s (± 0″006) in 1961. Overall, uncertainty in the Bradley series was reduced by a factor of just three in 145 years (1816-1961). The two rates of reduction, 1.58 versus 0.02 per year, easily tell the series apart. A method whose uncertainty is reduced rapidly, and whose values behave asymptotically – the Foucault series – will inevitably surpass a method whose values fluctuate with virtually no reduction in uncertainty, as in the Bradley series. While no research program is likely to be abandoned upon the appearance of the first critical review (Lakatos 1994), scientific success is reflected in the arrival at the research front of series that progressively chase others out. We can place the series side by side and determine the date on which one overtook the other. The two series in fact intersect in 1927, when Michelson gave a value of c ± 4 km/s. The method of stellar aberration could not hope to stay in the race, and was forced to abdicate in favor of other series.
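The figures 1.58 and 0.02 appear to be the reduction factor of each series divided by the elapsed time; a minimal check of that reading (the interpretation is an assumption on our part, though it reproduces both figures):

```python
# A reading of the "1.58 versus 0.02" comparison: reduction factor
# divided by elapsed time for each series. This reproduces the two
# figures exactly, but the interpretation is assumed, not spelled out
# in the text.
foucault_rate = 125 / 79     # factor 125 over 79 years (1862-1941)
bradley_rate = 3 / 145       # factor 3 over 145 years (1816-1961)
```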
Comparison of the Foucault and Bradley series
Nevertheless, the graph shows that research on the aberration constant κ went on into the 1970s. Is this a sign that astronomers were intransigently sticking to their preferred method, regardless of the progress of terrestrial measurements? Nothing of the sort. In the 1920s, it was still widely reported in textbooks that the speed of light could be calculated from the aberration constant and the speed of the earth. Ten years later, we read: “The speed of light is well known; it is sufficient to deduce the aberration constant experimentally in order to determine the solar parallax” (Dufour 1934, 121). Between these two dates, astronomers took a methodological decision. In the light of Michelson’s measurement, they decided to leave measurements of c to physicists, and went on instead to try to refine the values related to the aberration constant. Solar parallax, π⊙, is the angle at which the radius of the earth is seen from the sun. Noting a the distance between the centers of the earth and the sun, and R0 the equatorial radius of the earth, we have sin π⊙ = R0/a. Since a is given by the aberration formula a = κTc/2π, it follows that π⊙ = arcsin(2πR0/κTc). The importance of π⊙ is that an uncertainty of 0″01 on the solar parallax causes an uncertainty of about 30 earth radii in the value of one astronomical unit, and hence in the estimation of all distances in the universe.
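The chain of formulas above can be run through numerically; the inputs below are round modern values assumed for illustration, not the historical determinations discussed in the text:

```python
import math

# The solar-parallax chain described above: a = kappa*T*c / (2*pi),
# then sin(pi_sun) = R0 / a. Inputs are round modern values, assumed
# here for illustration only.
ARCSEC = math.pi / (180 * 3600)       # one second of arc, in radians

kappa = 20.496 * ARCSEC               # aberration constant (rad)
T = 365.25 * 86400                    # orbital period of the earth (s)
c = 299792.458                        # speed of light (km/s)
R0 = 6378.1                           # equatorial radius of the earth (km)

a = kappa * T * c / (2 * math.pi)     # earth-sun distance: ~1.496e8 km (1 au)
pi_sun = math.asin(R0 / a)            # solar parallax: ~8.79 arcsec

# Sensitivity: since a ~ R0 / pi_sun, an error d_pi on the parallax
# shifts a by about R0 * d_pi / pi_sun**2.
shift_in_earth_radii = (0.01 * ARCSEC) / pi_sun ** 2
# roughly 27 earth radii for an error of 0".01, i.e. "about 30" as stated
```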
The determination of c in 1927 thus had a significant impact in astronomy. It ended a research program two centuries old. It brought about new developments, based on the introduction of a precise measurement of the speed of light, even if it was still to be improved further.
Such developments had been imagined earlier, but the knowledge of the time was not sufficient to implement them. In a note to the Comptes-Rendus de l’Académie des Sciences in 1862, Foucault described the consequences that his determination might have for calculations of stellar aberration and parallax. It seems quite natural for Foucault to get involved in this way. In 1850, he had cleared the first hurdle by showing that light travels faster in air than in water, a fundamental result that refuted Newton’s emission theory and put an end to a century and a half of debate. The emission theory offered an alternative explanation of stellar aberration, which was thus invalidated. But Foucault was then faced with a second obstacle – the ether – which was not to be cleared until after his death. The ether theory was based on the idea that light waves must propagate through some subtle medium. It predicted that the speed of light would be c − v when the earth moved in one direction relative to the ether, and c + v when moving in the opposite direction, as it does in its orbit after a period of six months. The speed of light could, therefore, not be constant. Furthermore, this would only enter the domain of the measurable when the uncertainty of the measurements dropped below the threshold of ± 30 km/s, the orbital velocity of the earth. With an uncertainty of ± 500 km/s, Foucault’s value was insufficient. A reinterpretation of stellar aberration and solar parallax could therefore only be undertaken after 1927, when Michelson measured c at 299,796 ± 4 km/s. It is the reduction of the uncertainty that allowed these developments.
4 – Sketch of an Internalist Sociology of Science
The study of successive determinations of the speed of light between 1676 and 1983 establishes four new statements P1… P4. These statements define an alternative approach in the sociology of science by integrating certain principles obtained from epistemology and the history of science:
P1. The researcher produces sets of factual statements whose quality is explicitly linked to the states of the external world, which is sufficient to distinguish them from any other literary inscription.
P2. Scientific facts are subject to a review process with a unidirectional and irreversible trend. They are neither constructed nor deconstructed at will.
P3. Hypotheses, and the series based upon them, are differentiated by experiment and the collegial exercise of rational criticism, the foundation of the normative structure of science. They are not negotiated.
P4. A researcher may use strategies of persuasion (as opposed to strategies of conviction); they will not permanently gain the consent of their peers if they fail to meet the standards of scientificity.
The actual content of these statements explains why constructivism has always asked the anthropologist to disdain native discourse (see Latour and Woolgar 1988, 45-46, 88). Were the anthropologist to take these native discourses into account, he would no longer be able to claim that the researcher “negotiates” results or “persuades” his peers through literary inscriptions. Criticism has been leveled at the kind of anthropology that ignores the language of the tribe studied (Lemaire 1983). We will not add anything on this point.
The rewritten statements do not constitute a program, in the sense of the manifestos published over the last 30 years – the “Strong Program,” the “Empirical Program of Relativism,” and “Actor-Network Theory.” They merely state some empirically based principles, which the sociology of science today should not ignore.
The sociology of scientific knowledge has produced, owing to its relativist and constructivist orientation, results incompatible with the new statements P1… P4. How, then, should a sociology of science consistent with these statements be characterized? Referring to a classical distinction in the history of science, we might call it an internalist sociology of science. Post-Mertonian sociology of science has attempted to determine scientific knowledge by its social context in a causal manner (Bloor), to determine how scientific statements are stabilized by local procedures within the core set (Collins), or to show that statements are indistinguishable from research funds, the researchers themselves, or the oxygen cylinders in the lab, since they all belong to the same actor-network (Latour). Even when such studies turn away from macro-social determinants to explain science in its immediate context of production, they are not internalist. First, they draw attention to circumstances which often bear little relation to the distinctive character of science, from power outages to the negotiation of scientific evidence. Secondly, they postulate that within the core set, scientific knowledge is determined by cognitive, technical, material, social, and political factors, an explanation inconsistent with the definition of internalism given by Canguilhem. This sociology of science is, therefore, an externalism, in the sense that scientific facts are explained by the context – immediate or global – in which scientific activity unfolds. But in presenting this activity as an activity like any other, in which ad hoc adjustments to procedures, error, and persuasion play an essential role, constructivist and relativist sociology has only drawn attention to quite secondary issues in science.
By comparison, an internalist sociology of science should aim to examine the more specific aspects of scientific activity that characterize science as science.
I will give an illustration using the concept of scientific competition. Let us start from statement P2: scientific knowledge is subject to a trend of revision which is both unidirectional and irreversible, as a consequence of the fact that the reduction in uncertainty and the convergence of values are concomitant. The convergence of parallel methods is often taken as a sign of the “robustness” (Wimsatt 1981) of the value of the speed of light – the more independent the methods are of each other, the more significant the agreement between their results. Physicists are well aware of this fact – recall that one of the first implementations of the concept of robustness is attributed to Edward Morley, Albert Michelson’s collaborator in his most famous experiments. Nonetheless, robustness does not imply an indiscriminate use of methods. At any given moment in research, only a certain number of methods may contribute to the robustness of the result. Their contribution requires that they be accompanied by an uncertainty equal to, or smaller than, that of the best known value, which drastically reduces the spectrum of methods included. Thus we find a chain linking robustness, reduction of uncertainty, and scientific competition.
Scientific competition is defined by Lemaine, Matalon, and Provansal thus: “It is about achieving the goal first, to obtain a satisfactory result in a particular field before anyone else” (1969, 140). It is founded on the aspiration of the researcher to be recognized as having primacy in the treatment of a question. The researcher weighs “on one hand, the chances of a solution existing and, on the other, the chances of reaching the goal first, given the difficulty of the problem and the intensity of competition” (1969, 141). Competition results either in a struggle for supremacy, where competitors cannot leave the field to which they have been assigned, or in peaceful coexistence, where some researchers are free to explore new territories. The invention of a new domain of research thus reduces the intensity of competition. In the conclusion of a study of 1,000 U.S. researchers, Hagstrom (1974) showed that 60% of them had been overtaken by a competitor at least once in their careers. Competition has both adverse and positive aspects. It leads researchers to be more circumspect in the dissemination of results. But it may also increase productivity, by placing an issue at the center of debate, and dissemination, by sharing new ideas and stimulating intensive work. This study was extended by Mulkay and Edge (1976), looking at groups of radio astronomers at Cambridge and Jodrell Bank. The authors emphasize the interdependence between competition, estimations of the chances of success, and investment in new areas of research.
First, recall that there are traditional forms of competition in science, namely applications for funds and striving for scientific prizes. The benefits of these are not shared.
There is another form of scientific competition, both more widespread and more imperceptible, in which the scientist, in the act of producing knowledge, is a competitor of his peers. This competition is better characterized as emulation (Latin aemulatio), for two main reasons: one terminological, the other factual. First reason: the word “emulation” has a more accurate meaning than competition. While competition is any rivalry between people sharing a common goal, the means being undefined, emulation refers to the aim of matching, and if possible surpassing, a model (Latin æmulor: imitate, take as a model, strive to equal, or compete with). It is emulation that we observe at the research front, not some undefined form of competition. Second reason: the data for measuring the speed of light reveal three registers of competition: competition between series, wherein research teams attempt to obtain better results than teams using other methods; competition within a given series, where the goal is to obtain better results than other teams using the same method; and competition with oneself, attempting to improve on one’s own previous results. The last of these is shown in the way researchers continue to perfect their experimental devices and publish several successive values of c (Table 1: Michelson [7 values], Nyrén, Doolittle, Cornu, Perrotin, Froome, etc.). Given that the three registers describe the same phenomenon between different sorts of participants, and that the third register cannot be considered competition in the ordinary sense of the term, the term “scientific competition” is inadequate to account for the most common form of contest within the scientific edifice.
Furthermore, it has been shown that scientific competition does not exclude scientific cooperation (Lemaine et al. 1976, Rudwick 1985, Hull 1990). What do our data have to say on this point? Consider two relations used in determinations of the speed of light:
(1) κ = 2πa/cT (stellar aberration constant, Bradley series)
(2) ε0μ0c² = 1 (permittivity and permeability of the vacuum, Weber series)
The relations (1) and (2) were long used to infer c from the measurement of other variables. However, neither of these two series emerged victorious. As a consequence, from the moment new values for c appeared at the research front with an uncertainty smaller than those produced by these two series, the equations themselves were read in the opposite direction. It is knowledge of c which clarified the values of the aberration constant and the solar parallax, and which fixed the values of ε0 and μ0 as universal constants. This diffusion of the value of c throughout physics has affected all areas where the speed of light occurs – relativistic equations, black-body radiation, the quantization of energy, the Cherenkov effect, the Doppler effect, etc. Without these relationships, the value of c would have had no effect on these areas. Because these relationships exist, all researchers have taken full and unchallenged advantage of the knowledge produced. How is this type of activity to be qualified?
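The reversal of reading direction for relation (2) amounts to a single line of arithmetic, sketched here with the post-1983 exact value of c and the classical defined value of μ0:

```python
import math

# Relation (2) read "in the opposite direction": once c is known with
# small uncertainty, epsilon_0 is deduced from mu_0 and c, rather than
# c being inferred from electrical measurements.
mu_0 = 4 * math.pi * 1e-7        # H/m (the classical defined value)
c = 299792458.0                  # m/s (exact since 1983)
epsilon_0 = 1.0 / (mu_0 * c ** 2)
# epsilon_0 ~ 8.854e-12 F/m, fixed as a universal constant by c
```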
Notions associating aspects of cooperation and competition have been described in the social sciences under the names of “competitive partnership,” in which partners pool their efforts to achieve a common goal, each trying to assert their interests opportunistically, and “coopetition,” where competitors agree to cooperate on common areas of their business, leaving their areas of specialization, where their profits are made, subject to only limited competition. In the first case, the individual manages to capture a portion of the benefit that would otherwise have gone to his partner. In the second, classical competition persists in the area of specialization. Competitive partnership and coopetition therefore assume the existence of a private benefit, not to be shared with competitors. This is why scientific activity – which includes aspects of both competition and cooperation – is reducible to neither. Indeed, whatever his or her involvement in a scientific domain, each and every researcher receives the global benefit of the knowledge produced at the research front. This redistributive emulation is irreducible to the kind of competition which prevails in the market or the political arena.
This article aimed to contribute to the debate on the methods of the sociology of science, via a study of successive determinations of the speed of light. One of the hallmarks of science is to produce measurements, and a sociology of measurement has been able to reveal certain characteristics that make science a highly differentiated activity.
Methods for determining the speed of light fit the leitmotifs of the constructivist sociology of science only with difficulty. The theme of literary inscription is so general that it fails to capture the specific characteristics of the scientific text. The unidirectional trend in the revision of values establishes that scientific statements are not constructed and deconstructed at will. Commitment to a method, which may well depend on strategies of persuasion, disappears as soon as the method produces too much uncertainty in the light of contemporary standards. Finally, the convergence of values, and the systematic selection of methods with ever-diminishing uncertainty, tell against the emphasis on scientific negotiation. Literary writing, games of construction and deconstruction, persuasion, and negotiation simply do not provide a good characterization of scientific activity.
Each statement P1… P4 was rewritten to match the historical facts relating to the successive determinations of the speed of light. Relying on these rewritten statements, we have presented an outline of an internalist sociology of science which sets out to examine those aspects that make science, science.
We may remind the skeptic who doubts the results presented, and the sketch drawn from them, that these results are not limited to the speed of light. The same demonstration could be made from determinations of the magnetic moment of the electron, the obliquity of the ecliptic, or the evolution of certain ratios of quantities. Consider, for example, the reduced series of values of the ratio of the mass of the Sun to that of Jupiter (Table 4):
Progress in Reducing the Uncertainty of the Sun/Jupiter Ratio
Successive values of the ratio of the mass of the Sun to that of Jupiter show a convergence and a reduction in uncertainty perfectly comparable to those obtained from measurements of c. These data could therefore also be used to establish P1 (reference of statements to the outside world), P2 (unidirectional trend of revision), P3 (collegial exercise of rational criticism), and P4 (normative orientation of science). This is a sign of the robustness of the results set out in this article.
Acknowledgments
I would like to thank Mario Bunge (McGill University, Montreal), Gérard Dolino (LIPhy, Grenoble), Michel Dubois (GEMASS, Paris), Yves Gingras (UQAM, Montreal), Emilien Schultz (University of Paris-Sorbonne) and the anonymous referees who commented on a draft version of this article. Responsibility for the final version is mine alone.
I differ here from the current diagnosis that “for Merton, sociology has nothing to say about the content of science” (Ragouet 2002, 167). Merton, author of a program for the sociology of knowledge, did not doubt the development of the specialty: “It seems that the sociology of knowledge has wedded these tendencies (fact-finding and theories) in what promises to be a fruitful union. Above all, it focuses on problems which are at the very center of contemporary intellectual interest” (Merton 1973, 40). Perhaps the above diagnosis relies on a phrase by Storer in the introduction to the same volume: “Mulkay and King have suggested that he has erred in not taking more direct account of the substantive content of science in his own formulations” (Storer, in Merton 1973, xxviii).
When Hertz studied standing waves with a resonator, he was measuring a phase velocity. The speed is determined by the product of the wavelength and the frequency c = λν. In this case, there is no intermittence, and verification is performed on the stationary and monochromatic character of the wave.
When Fizeau measured the time of flight of a wave between Montmartre and Suresnes, he based his determination on the behavior of the head of the wave train, assimilable to a perturbation. He was thus measuring a group velocity. The speed is determined by the ratio of a distance and a time c = x / t. In this case, the beam is made intermittent, and the measurement is made on progressive waves with mixed frequencies.
Phase velocity and group velocity are only equal in a non-dispersive medium, such as a vacuum. In a dispersive medium, the head of the wave train is deformed. The precursor waves which compose it, having a lower amplitude, are not registered by the detector. The signal is thus received with a delay, which leads to an underestimation of the speed of propagation of the electromagnetic wave. This difference has a limited effect on the measurement of the speed of light: it has been estimated that the group velocity could be less than the phase velocity by a factor of 10⁻⁵, around 3 km/s. In addition, when we say that the speed of light cannot exceed c, reference is being made to the group velocity, as the phase velocity can be lower or higher than c. In some cases, it is infinite. Everything depends on the dispersion relation.
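The distinction can be made concrete with a toy dispersion relation; this is an illustration of the general definitions vφ = ω/k and vg = dω/dk, not a model of the optical media discussed in these notes:

```python
# Sketch of the phase/group distinction for a dispersion relation
# omega(k): v_phase = omega/k, v_group = d(omega)/dk. In vacuum
# (omega = c*k) the two coincide; the plasma-like relation below is a
# toy example of dispersion, assumed purely for illustration.
c = 1.0  # work in units where c = 1

def omega_vacuum(k):
    return c * k

def omega_plasma(k, wp=0.5):
    # dispersive relation: omega**2 = wp**2 + (c*k)**2
    return (wp ** 2 + (c * k) ** 2) ** 0.5

def velocities(omega, k, dk=1e-6):
    v_phase = omega(k) / k
    v_group = (omega(k + dk) - omega(k - dk)) / (2 * dk)  # central difference
    return v_phase, v_group

vp_vac, vg_vac = velocities(omega_vacuum, 1.0)  # both equal c
vp_pl, vg_pl = velocities(omega_plasma, 1.0)    # vp > c > vg, with vp*vg = c**2
```

For this toy relation the phase velocity exceeds c while the group velocity stays below it, which is the point made above: only the group velocity is bounded by c.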
This thesis, which enjoys a consensus in the contemporary sociology of science, is expressed by Latour and Woolgar (1979, 45, 71, 76), Knorr-Cetina (1981, 94), Knorr-Cetina and Mulkay (1983, 9-10), Lynch (1993, 94), Restivo (1985, 85), Lenoir (1997, 27) and Rehg (2009, 72).
Interestingly, this classification differs considerably in the French version of Laboratory Life, which gives the following: conjecture (type 1), circumstantial modal statement (type 2), modal statement (type 3), affirmation (type 4), and fact (type 5).
Reports of the versatility of scientific statements can be found in Latour and Woolgar (1979, 179, 237), Knorr-Cetina (1981, 4, 41), Knorr-Cetina and Mulkay (1983, 61, 120), Restivo (1985, 109, 120), Lynch (1993, 172), Lenoir (1997, 5, 47-48), Restivo (2005, 491), and Rehg (2009, 116-117).
The thesis of negotiation is put forward by Latour and Woolgar (1986, 157), Collins (1981, 4), Knorr-Cetina (1981, 51, 66), Lynch (1993, 108, 115), Knorr-Cetina (1995, 152-154), Restivo (2005, 253, 462, 581) and Rehg (2009, 75).
These 28,542 observations were made by N.V. Zimmerman, N.I. Dnieprovskii, A.D. Drozd, G. Maksimov, S.V. Romanskaja, and V.R. Berg. The first made 3,000 observations (Rikun 2005, 42), Romanskaja and Berg about 20,000 (Pulikov 1949) or, according to other sources, Romanskaja alone 23,500 (Vasil’evich 2011). O.N. Kramer oversaw the calculations at the Mathematical Institute of the Soviet Academy of Sciences. The latitude observation program seems to have been initiated in 1915 by Zimmerman, seconded by Orlov and Baklund, the latter then director of the observatory (Bulletin of the Central Nicolas Observatory 1916, Kolchinskij 1977, Zhukov and Sobolev 2002).
The hypothesis of “persuasion” (insofar as it differs from “conviction,” which proceeds by rational argument) is regularly advanced by sociologists of science such as Latour and Woolgar (1979, 157, 214), Knorr-Cetina (1981, 51, 66), Knorr-Cetina and Mulkay (1983, 9-10), Pickering (1992, 92), Lenoir (1997, 27), Restivo (2005, 58), and Rehg (2009, 72).
Temperature, pressure, humidity, and atmospheric turbulence all perturb the observations. The geometry of the device varies with temperature, which changes both the optical tube length and the sighting angle, the latter through bending of the optical tube. Directionality is sensitive to mechanical wear, especially over a period of fifteen years. Added to this are calculation approximations: graphical interpolation of the pole coordinates, the grouping of values on averaged dates of the year, a failure to take into account the personal equations of the researchers, and the fact that observations were made by several different observers (Kulikov 1964, 90).
From 1881, Michelson made a number of attempts to measure the ether wind by interferometry. These experiments, often described as “negative” in the literature, actually furnished ambiguous results. The first article ends with the words, “The result of the hypothesis of a stationary ether is thus shown to be incorrect” (1881, 128), giving the advantage to the ether of Stokes. However, the following year, Lorentz and Potier showed that Michelson’s article included certain miscalculations, which he accepted. The second article begins with a review of the device used in 1881. Once the defects had been corrected, the experiments were performed anew. Michelson and Morley found a shift of 0.01 (instead of 0.4) times the distance between interference fringes. They argued that the movement of the solar system through the ether had not been studied and, therefore, that the hypothesis of the ether had still not been refuted: “If there be any relative motion between the earth and the luminiferous ether, it must be small” (1887, 341). Fresnel’s hypothesis of the ether was rejected in favor of that of Stokes. In 1897, Michelson carried out new experiments and again failed to detect the ether wind at altitude, as predicted by Stokes. He then returned to the hypothesis of Fresnel. As noted by Lakatos (1994, 106), the Michelson-Morley experiment did not attain the status of a refutation of the existence of the ether until 25 years after its publication.
Commenting on an article by Buchdahl, Canguilhem writes, “Externalism is a way of writing the history of science by conditioning a number of events – which we continue to call scientific by tradition rather than by critical analysis – by their relationships with economic and social interests, with technical practices and requirements, with religious or political ideologies (…) Internalism, held by the former to be idealism, is to think that there is no history of science unless one moves within the scientific work itself to analyze the processes by which it seeks to meet the specific standards that allow it to be defined as science, and not as a technique or an ideology” (1968, 15).
The best-known promoters of the study of the core set wrote, “Another part of the programme is to relate the sort of work presented here to the wider social and political structure […] The consensual interpretation of day-to-day laboratory work is only possible within constraints coming from outside that work” (Collins 1981, 7).
As Kulikov said of the observations made between 1915 and 1929, “The speed of light was long determined from the aberration constant, or from the light equation, until it became the object of direct terrestrial measurements. When these experimental determinations of the speed of light became sufficiently perfected, its relationship to the aberration constant and solar parallax no longer served the original goal but, conversely, was used to determine those very values” (1964, 55).