Wednesday, January 28, 2015

Particle Physics and Negative Knowledge

"Negative Knowledge and the Liminal Approach

Having argued that to assure success HEP experiments turn toward the care of the self, I now want to add that they also turn toward the study of “liminal” phenomena, things which are neither empirical objects of positive knowledge nor effects in the formless regions of the unknowable, but something in between. Limen means “threshold” in Latin. The term has been used in the past to refer to the ambiguous status of individuals during transitional periods of time (Turner 1969). I shall use the term to refer to knowledge about phenomena on the fringe and at the margin of the objects of interest. High energy physics incorporates liminal phenomena into research by enlisting the world of disturbances and distortions, imperfections, errors, uncertainties, and limits of research into its project. It has lifted the zone of unsavory blemishes of an experiment into the spotlight, and studies these features. It cultivates a kind of negative knowledge. Negative knowledge is not nonknowledge, but knowledge of the limits of knowing, of the mistakes we make in trying to know, of the things that interfere with our knowing, of what we are not interested in and do not really want to know. We have already encountered some forces of this kind in the background, the underlying event, the noise, and the smearing of distributions. All of these are limitations of the experiment, in the sense that they are linked to the features of the detector, the collider, or the particles used in collisions. High energy collider physics defines the perturbations of positive knowledge in terms of the limitations of its own apparatus and approach. But it does not do this just to put the blame on these components, or complain about them. Rather, it teases these fiends of empirical research out of their liminal existence; it draws distinctions between them, elaborates on them, and creates a discourse about them. It puts them under the magnifying glass and presents enlarged versions of them to the public. In a sense, high energy experimental physics has forged a coalition with the evil that bars knowledge, by turning these barriers into a principle of knowing.

In Christian theology, there was once an approach called “apophatic theology” that prescribed studying God in terms of what He was not rather than what He was, since no positive assertions could be made about His essence. High energy experimental physics has taken a similar route. By developing liminal knowledge, it has narrowed down the region of positive, phenomenal knowledge. It specifies the boundaries of knowledge and pinpoints the uncertainties that surround it. It delimits the properties and possibilities of the objects that dwell in this region by recognizing the properties of the objects that interfere with them and distort them. Of course, if one asks a physicist about “negative knowledge” he or she will say that the goal remains to catch the (positive, phenomenal) particles at loose, to measure their mass and other (positive, phenomenal) properties, and nothing less. All else is the ways and means of reaching this goal. There is no doubt that this goal is indeed what one wishes to achieve, and occasionally succeeds in achieving, as with the discovery of the vector bosons at CERN in 1983 (Arnison et al. 1983a,b; Bagnaia et al. 1983; Banner et al. 1983). My intention is by no means to deny such motivations or their gratification, but what is of interest as one works one’s way into a culture is precisely the ways and means through which a group arrives at its gratifications. The upgrading of liminal phenomena—the torch that is shone on them, the time and care devoted to them—is a cultural preference of some interest. For one thing, it extends and accentuates what I call HEP’s negative and self-referential epistemics. For another, the majority of fields, among them molecular genetics, do not share this preference. And, lastly, it is quite remarkable how much one can do by mobilizing negative knowledge.

There are two areas in which the liminal approach is most visible: the area of errors and uncertainties and the area of corrections. Let me start with the latter. The idea of a correction is that the limits of knowing must enter into the calculation of positive knowledge. For example, “meaningless” measurements can be turned into meaningful data by correcting them for the peculiarities and limitations of the detector. “What you really want to know,” as a physicist summed up this point, “is, given that an event is produced in your detector, do you identify it.” Corrections can be characterized as “acceptances”, or as “efficiencies.” An acceptance tells physicists “how many events my detector sees of what it should see”; it is the number of observed events divided by the number of produced events. An overall acceptance calculation requires a detector response model: it requires that all the physics processes that end up in a detector are generated, and a full detector simulation is created to ascertain what the detector makes of these processes. In UA2, the detector response model also included such components as a simulation of the underlying event and detector performance measures, such as its geometrical acceptance (which describes how many events are lost through the incomplete coverage of detectors with dead angles and cracks), its resolution (which refers to the smearing of distributions described earlier), and its response curves (which determine the reaction to energy inputs that deviate from those of the test beam used to determine the basic calibration constants).”

Karin Knorr Cetina, Epistemic Cultures: How the Sciences Make Knowledge, pp. 63-65.

Epistemic Cultures: How the Sciences Make Knowledge, by Karin Knorr Cetina. Cambridge, MA: Harvard University Press, 1999. xix + 329 pp.
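The acceptance arithmetic quoted in the excerpt is simple enough to sketch. In the toy example below all numbers are invented for illustration; as the text stresses, real figures would come from a full detector simulation, not from hand-picked constants:

```python
# Toy sketch of an "acceptance" calculation: observed events / produced events.
# The event counts here are invented; in practice both numbers come from a
# detector response model and a full simulation, as the excerpt describes.

def acceptance(observed_events: int, produced_events: int) -> float:
    """Fraction of produced events the detector actually sees."""
    if produced_events <= 0:
        raise ValueError("need a positive number of produced events")
    return observed_events / produced_events

# Suppose a simulation generates 100,000 events and the simulated detector
# registers 62,000 of them (losses from dead angles, cracks, smearing, etc.).
produced = 100_000
observed = 62_000
print(f"acceptance = {acceptance(observed, produced):.2f}")  # acceptance = 0.62
```

Dividing out the detector's blind spots in this way is exactly the move the passage calls turning negative knowledge into a correction: the "meaningless" raw count becomes meaningful once the limitation is quantified.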

Karin Knorr Cetina offers the reader a valuable comparative look at the “epistemic cultures” (the arrangement and mechanisms by which we come to know what we know) in two fields of science—high-energy physics (HEP) and molecular biology (MB). She believes there is a “diversity” among epistemic cultures, which reveals “disunity” within the sciences—hence the comparative approach. Her analysis is not focused on the construction of scientific knowledge, but rather the “machineries of knowledge construction”—the practices that go into the making of scientific knowledge—and the “cultures” that surround and give symbolic meaning to such practices. Following a chapter that examines the constitution of differing types of laboratories, Knorr Cetina offers up a series of comparative chapters that deal with the configuration of “reality,” the technological machinery, and the social arrangements entailed in the two fields. A final chapter in the form of an imagined dialogue between author and reader, which worked less well for me than most of the book, tries to set what has been learned about these two epistemic cultures within what Knorr Cetina sees as a broader trend toward knowledge-based societies in the Western world. A brief summary of Knorr Cetina’s analysis, which is based on many years of participant observation and interviews with selected practitioners, can hardly do justice to the complexities of her findings. Nor is the reading always easy, in part due to the science itself. Nonetheless, a review of some of the author’s key points is revealing of the richness of this important study.

Knorr Cetina views laboratories as reconfigurations of natural and social orders, ones in which scientists have also become “specific epistemic subjects” as they are shaped and transformed with regard to the kinds of technologies and techniques they use. She views HEP laboratories in terms of technologies of “correspondence” that “stage” real-world phenomena, while MB laboratories involve technologies of “treatment and intervention” in which the objects of research are processed partial versions of these phenomena. Thus, the detectors of particle physics laboratories are sign-processing technologies that examine particle beams within “a closed circuitry” in which measurements are inside the staged experiment, not outside as in the molecular biology laboratory, which is open to “natural” objects. Detectors mediate between the experiment and their data representation of the phenomenon. Because detectors vary widely, interact with each other, and are tied to “background” processes, HEP laboratories often focus on negative knowledge areas of imperfection and uncertainty as ways to narrow down the region of positive, phenomenal knowledge. This also accounts for why experimental physicists often focus so much attention on analyzing the experiment itself, rather than the objects of it. By comparison, the more open MB laboratory seeks to maximize empirical contact through experimental manipulation of objects to develop positive knowledge. In contrast to the large-scale nature of HEP experiments, molecular biology is a benchwork science dealing with small objects in small laboratories. When problems arise, it adopts a “blind variation natural selection” approach that tries out various alternative procedures until one is successfully found to “fit” and is hence selected. This is in sharp contrast to the more “scientific” self-analytical problem investigation of HEP experiments. Finally, MB uses the sensory body of the scientist as an information-processing tool in a way that has been eliminated by the HEP detector.

Knorr Cetina moves from the discussion of what goes on experimentally in the two types of laboratories to an analysis of the symbolic classifications that particular physicists and molecular biologists superimpose on their technical universes and that reveal relationships between objects and subjects. HEP is dominated by technology that determines what physicists can do, yet these machines turn into symbolic organisms. Thus, detectors “see” and are “in/sensitive” and can have “reactions” and “responses” while “interacting” with particles. They also “age,” “act up,” and “get sick” and can either “cooperate” or “misbehave.” Scientists blame the “background,” not the detector, which is a “friend,” while they “fight” the background, trying to “kill” it. Physicists seem less imaginative about themselves, referring symbolically to themselves in terms of the objects with which they work—hence the “electron group” or the “top working group.”

Whereas machines are organically symbolized in HEP laboratories, in MB, living organisms are transformed into production systems and molecular machines. MB does not deal with naturally occurring plants and animals; rather, it cultures its own entities for study. Thus, mouse breeding for experiments is “standardized” and “rationalized” as part of a “well-oiled production line,” and while cell lines are certainly “cared for,” it is out of concern for economy of time and resources, not a sense of morality. Mice, cell lines, bacteria, and vectors are production devices themselves—autonomous units in which biological tasks are performed not for the host’s own needs but for human objectives. Substances created by such biological machines are mass-produced, uniform, and pure. It is in this sense that the term “genetic engineering” sustains the idea of biological machines and suggests a technological view of molecular biology.

Knorr Cetina’s third area of comparative analysis deals with the collective, communitarian structure of HEP experiments and the far more individualized nature of the MB laboratories. HEP experiments—because they depend so centrally on singular and very large experimental devices, involve hundreds of financially independent “institutes,” and have a “life” that may run for upwards of twenty years—tend as a result to be cooperatively managed by content rather than by social hierarchies. Thus, funding, research work, and publication are largely divorced from individual scientists and are instead collectively focused. Because no one individual or even a small group can do all the work, “naming” and “epistemic agency” have shifted to the experiment, and publications are authored alphabetically with all the many hundreds of participants being listed. Conferences are attended by a variety of “spokespersons” who reflect the collaborative nature of the work. Horizontal “object-centered” organizational structures characterize HEP experiments, as does the free flow of open “discourse” through widely circulated “status reports” and “confidence pathways” that help link people together. Knorr Cetina characterizes all this as entailing a “post-traditional communitarian structure.”

By direct contrast, MB laboratories entail a dual organizational format in which individual units focused around single researchers do the direct research, while a laboratory leader provides overall direction and is the focal point for the laboratory as a whole. In MB, where there is no dominating technical apparatus as in HEP, the individual scientist remains the epistemic subject, using skills and expertise in small “lifeworlds” that are largely separate from each other. The laboratory, which is a grouping of otherwise spatially and often territorially separate activities, is personified in the leader who must position and represent the entity as a whole to those in the outside scientific community. Competitive tensions may evolve, for while most work is conducted by Ph.D. students, post-docs, and permanent position scientists, it is generally only the laboratory leader who travels to conferences to represent the laboratory and hence reaps the benefits. There can also be tensions between those responsible for their own research projects and those who do necessary, but often unrecognized, “service” work within a laboratory. Primary authorship thus becomes much more of a competitive issue, for it is through such recognition that one advances through a more hierarchical career path than is generally evident in HEP, hopefully to become a laboratory leader at some point in the future. Knorr Cetina points out that there is a more well-defined “logic of exchange” in MB laboratories, in contrast to the communitarian principles evident in HEP experiments. Finally, she briefly notes that gender seemed to be more of an issue in MB laboratories than in HEP experiments, which she characterized as more typically being “mono-gendered,” albeit with a look more male than female.

In sum, this is a sophisticated study that, because of its comparative lens, provides the reader with useful insights into the two fields, insights that might otherwise be harder to discern if examined separately. Knorr Cetina’s study of the “cultures” within which scientific knowledge is constructed is a useful addition to science studies and contributes to our overall understanding of the multiple ways science is practiced. It further suggests the tight fit between techniques, technological instrumentation, and scientific knowledge creation in contemporary laboratory work. Epistemic Cultures should be an STS standard for some years to come.

—Stephen Cutcliffe, Lehigh University

Science, Technology, & Human Values
Vol. 26, No. 3 (Summer, 2001), pp. 390-393

See also my interview with Prof Karin Knorr Cetina for Folha de SP, 2/5/2010, Mais!

The Ghost in the Machine

Replications: A Robotic History of the Science Fiction Film by J. P. Telotte. Urbana & Chicago: University of Illinois Press, 1995. 222pp. Review by: Vivian Sobchack

The central argument of this gracefully written and unpretentious volume is that “the image of human artifice, figured in the great array of robots, androids, and artificial beings found throughout the history of the science fiction film, is the single most important one in the genre.” Such replications of human function and being and the dramas of identity they generate can be tracked through time, marking “the interactions between the human and the technological that lie at the very heart of science fiction” (5). Historically played out in sf cinema through the figures of robots, androids, cyborgs and other technological “doubles” of the self, this central “fantasy of roboticism” (9) articulates increasing human ambiguity about the ambiguity of being human in the face of our own ever-increasing capacity for artifice. Telotte attributes this ambiguity to both the desire and fear that surround our relations with technology and its creative/destructive power to mirror, interrogate, extend, transform, and dissolve our human being. The power of technological replication “exercises its seductive potential by...offering to make us more than we are—to grant us a nearly divine sway over life and death” while also “fundamentally devaluing human nature” (17) insofar as it becomes indistinguishable from or inferior to human artifice. Broadly tracking the history of the genre, Telotte notes the paradox of, on the one hand, the structural reversibility between the human and its technologically-constructed double and, on the other, the functional asymmetry and historical oscillation whereby we project ourselves into and as the technological “other” until our replications become “more human than human.” Thus, the human reasserts itself—revalued in the technological. Indeed, Telotte sees the genre’s teleology as “headed less toward showing the human as ever more artificial than toward rendering the artificial as ever more human, toward sketching the human, in all its complexity, as the only appropriate model, even for a technologically sourced life” (23).

These rather general statements point to both the strengths and weaknesses of Replications. Its strength is that it presents a solid and chronological trek through the genre’s history that is clear and cogent in its readings of those paradigmatic texts selected to embody the volume’s central thematic. Its weakness is that, despite citation of major cultural theorists such as Haraway, Foucault, and Baudrillard and insightful readings of specific films, the volume tells us little new about the genre because its attempts at historical and cultural specificity are relatively cursory and in the service of a universalizing and fuzzy humanism. Certainly, Telotte is aware of the limits of his study and explicit in telling us that it “stops short—as history, as explication of the genre, and as cultural commentary” (194). What, then, do we get instead? A modest “vantage” point on our “technological doubles” via readings of a selective group of films that are intelligently glossed in terms of their specific thematics, but unfortunately put into only the most general relation to the historical and cultural conditions of their production and reception.

Chapter 1, “Our Imagined Humanity,” raises some general theoretical issues and discusses the “fantasy of roboticism” in relation to literary and dramatic texts such as Shelley’s Frankenstein, Poe’s “The Man That Was Used Up” and “Maelzel’s Chess-Player,” Capek’s R.U.R., and, more significantly, in relation to science fiction writers Edgar Rice Burroughs, Isaac Asimov, Jack Williamson, Stanislaw Lem, and William Gibson. The aim of this “overview of a robotic mythos” (51) is to contextualize the film analyses to follow and, in a limited way, it does so. However, it also sets the book’s mode of generalization about technology, culture, and history, a mode that, through default, assumes the sameness of cultural difference as it asserts the genre’s historical specificity.

Chapter 2, “The Seductive Text of Metropolis,” quickly introduces the work of early film-makers and goes on to focus on Lang’s Metropolis (1926) as paradigmatic of the ambivalence surrounding technology found in “early cinematic images of human artifice” (58). Telotte’s insightful analysis articulates the homology between the film’s seductive images and “special effects” and its narrative: “Metropolis seems self-conscious about how these images can make us desire the very technological developments whose dangers it so clearly details. It is almost as if Lang, in order to keep his ‘special effects’ from becoming too seductively ‘special,’ had decided to foreground seduction itself, especially through his central image of human artifice, to lay bare its workings” (59). Unfortunately, however, the cultural specificity of Metropolis is almost completely elided (much as is Lem’s work in Chapter 1). That the film is German, what it might have to do with the history of technology and its culture and, indeed, what it might have to do with ours (given the book’s predominant, if unmentioned, emphasis on American cinema) are issues that become subordinated to a general point and trajectory that lose substance as they are put in the service of a rather general argument.

Chapter 3, “A ‘Put Together’ Thing: Human Artifice in the 1930s,” considers films that foreground “violent efforts to redefine the human body as some sort of raw material, waiting to be reshaped, reformed by a scientific capacity for artifice” (86). Focusing on Frankenstein (1931), Island of Lost Souls (1933), and Mad Love (1935)—which problematize the generic boundaries between horror and sf—Telotte is able to productively analyze the “image of the body under dissection, rendered as a thing to be explored, mastered, and reshaped” (74). These “mad scientist” movies are not merely “modern versions of the Promethean myth,” but “operate more in the Pygmalion mold, as they address what it means to fashion or refashion the human” (87) and dramatize the effect of the modern scientific spirit as the devaluation and subjection of the human. Reflecting a doubling of creator and created in which “the human [is] at odds with itself” (88), these generic hybrids dramatize both the desire for god-head and the overwhelming anxiety that we “might too readily assist in our own grotesque reconfiguration” (89).

Chapter 4, “A ‘Charming’ Interlude: Of Serials and Hollow Men,” addresses sf serials of the 1930s and 1940s. At a time when Hollywood features gave us “little evidence of...technological fascination” (94), serials not only frequently featured robots and automata, but also were, themselves, machine-like constructions standardized in design and narratively predictable. The “imaginative worlds” of Flash Gordon and Buck Rogers both “depend on and stand for the forces of dynamic power, control, and undifferentiation—forces that explicitly in the course of their narratives, but implicitly in their every use, promise to turn the individual into a component part in some large machine, part and product in a serial process” (98). However, Telotte notes, the robots of the serials were relatively “empty” threats, “hollow” men who played out human rather than technological will—more like The Wizard of Oz’s Tin Man than the darker and more complex technological doubles to follow.

Chapter 5, “Science Fiction’s Double Focus: Alluring Worlds and Forbidden Planets,” deals with the fantasy of roboticism in the “golden age” of the 1950s through a focus on Forbidden Planet (1956). Not only does the film feature “Robby the Robot” (a replicant prominently functioning as a replicator), but it also “fashions a world practically full of doubles...doubled characters, repeated actions, and most importantly a thematic concern with duplication or imitation” (114) that emerges from and ultimately destroys both its central figure Dr. Morbius and the entire planet. For Telotte, Forbidden Planet is paradigmatic of a growing cultural awareness not only of the seductions of simulation, but also “a lack in the double, a danger in the simulacrum that justifies the warning its title sounds” (114).

Chapter 6, “Lost Horizons: Westworld, Futureworld, and the World’s Obscenity,” addresses the increasing conflation and confusion of the human and its simulacrum in 1970s sf. Gone are the differences marked by mechanical robots and in their place—our place—are less easily detected androids. With recourse to Baudrillard’s notion of obscenity as complete “displayability,” Telotte glosses this shift as corresponding to the increasing collapse of “private life and public spectacle” in a “media-suffused environment” (132) which translates “‘private scenes,’ the space of desire, into public space” (136) such as Disneyland and, in sf, Delos. Thus, Westworld (1973) and Futureworld (1976) both model and critique “the culture of schizophrenia that much of modern life and especially our artifice seem to promote” (139).

Chapter 7, “Life at the Horizon: The Tremulous Public Body,” suggests that sf film in the 1980s recasts this schizophrenic vision of artifice by figuring its “subversive character” in a master trope that projects into our technological double the desire for “freedom and expression, even as it is pressed to be the perfect, servile subject of society” (149). Focusing on Blade Runner (1982), Robocop (1987), Cherry 2000 (1988), and Total Recall (1990) as texts which “respond to the blurred boundaries, the lost horizons foregrounded by our artifice,” Telotte argues that these films both “speak of their own constructed nature and of the sort of public images of the self that the movies typically project” (165) and also affirm “how much of the human inevitably remains...despite our long history of repressing, denying or ‘de-realizing’ the self” (164).

Chapter 8, “The Exposed Modern Body: The Terminator and Terminator 2,” uses the chapter’s eponymous films (1984, 1991) as “fitting caps” for the volume’s “discussion of human artifice” (171). Both films reveal the constructedness of being, and also urge that we not judge human beings by their “covers.” The “new gloss on the nature of the self in a postmodern and inevitably technologized environment” is that the body as it appears is “infinitely variable, deceptive, and regenerative” (177). Telotte, by way of Robert Romanyshyn, concludes that these and other recent films about human artifice provide both symptoms of and occasions for critical distance. They not only show us how we reduce being “to the status of things,” but also allow us, “by reexamining that distant and superficial view of things...by peeling back the artificial surface and looking into our depths,” to “recognize how much we have ‘lost touch with things’...and begin to reclaim the self” (183).

Replications is an accessible volume that might make a very good introductory text for undergraduates who haven’t much experience with interpreting either films or science fiction. Because of its brevity and over-arching generality, however, it may be less than satisfying for those who are looking for a closely tracked archaeology or complexly developed genealogy of “the robotic mythos” and its relationship to specific changes in American culture’s romance with technology, to science fiction as a popular film genre, and to the very technologies of representation that give the motif its visible appearance on screen.

-Vivian Sobchack, University of California at Los Angeles.

In: Science Fiction Studies, Vol. 23, No. 2 (Jul., 1996), pp. 299-302

See also: http://urania-josegalisifilho.blogspot.de/2012/05/the-exposed-modern-body-terminator-and.html

Tuesday, January 27, 2015

The Exploration of Space by Dr. Edwin P. Hubble*

Desiring to give those of our readers who may not have heard the broadcast the opportunity of reading so clear a statement of such a large phase of astronomical research by so eminent an authority, we asked and were readily granted permission to reprint the text. Possibly those who did hear the broadcast will, like ourselves, welcome the privilege of reading and studying it at leisure. We, therefore, present it here with much satisfaction.


Astronomy is the study of the universe - the study of its structure and its behavior. From our home on the earth we look out into the dim distances, and we strive to imagine the sort of world into which we are born. We are confined to the earth. Our knowledge of outer space is derived from light waves and other radiations which come flooding in from all directions.

From time immemorial men studied the heavens with their unaided eyes. Finally, about three centuries ago, the telescope was invented. With the growth and development of these giant eyes, the exploration of space has swept outward in great waves. Today we explore with a telescope 100 inches - more than 8 feet - in diameter. It has the light gathering power of more than 200,000 human eyes. We observe a volume of space so vast that it may be a fair sample of the universe itself. Men are already attempting to infer the nature of the universe from a study of this sample.

The explorations fall into three phases. The first phase led long ago to a picture of the solar system - the sun with its family of planets, including the earth, isolated and lonely in space.

Then, after several centuries had passed, a picture of a stellar system began to emerge. This was the second phase. The sun was found to be merely a star, one of several thousand million stars which, together, form our stellar system - a swarm of stars drifting through space as a swarm of bees drifts through the air.

From our position within the system, we look out through the swarm of stars, past the boundaries, into the universe beyond. The conquest of this outer space is the third, and most recent, phase of the explorations.

The outer regions are empty for the most part. But, here and there, scattered at immense intervals, we now recognize other stellar systems, comparable with our own. These stellar systems, these lonely drifting swarms of stars, are the true inhabitants of the universe.

They are so remote that, in general, we cannot distinguish their individual stars; the swarms appear merely as vague, cloudy patches of light, and were called by the name “nebulae”, the Latin word for “clouds”.

A few of the nebulae appear large and bright; these are the nearest swarms. Then we find them smaller and fainter, in constantly increasing numbers, and we know that we are reaching out into space farther and ever farther until, with the faintest nebulae that can be detected with the greatest telescope, we reach the frontiers of the Observable Region.

This glimpse of space, thinly populated by drifting swarms of stars, has been revealed by great telescopes, and in particular by the greatest of all those in actual operation, the 100-inch reflector of the Mount Wilson Observatory. It was the 100-inch that first detected individual stars in a few of the nearest nebulae, and identified among them several types of stars that are well known in our own system. Since the real brightness, or candle-power, of such stars had already been measured in our own system, their apparent faintness indicated their distances and, consequently, the distances of the nebulae in which they lay.

Once the essential clue of the distances was found, the mystery of the nebulae was quickly solved. They are, in fact, huge stellar systems, like our own system, and they appear small and faint only because they are vastly remote.

Some nebulae are giant systems and some are dwarf, but the range in candle-power is not great. For statistical purposes, they can all be considered as equally luminous. Therefore, their distances are correctly indicated by their apparent faintness. This property has been used to survey accurately the whole of the Observable Region out as far as telescopes can reach.

The scale of the survey is so immense that a special unit of distance is employed in the reports. This unit is the light year - namely, the distance light travels in a year going at the rate of 186,000 miles each second. The number of miles in a light year is six million million - in other words, six followed by 12 ciphers. Light reaches the earth from the moon in about one and one-third seconds, from the sun in about eight minutes, and from the nearest star in about four and one-half years. This last figure is typical. The average distance between neighboring stars in our system is several light years. The diameter of our system (which is one of the giant nebulae) is about 100,000 light years.
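Hubble's round figures are easy to check. The sketch below (not part of the original talk) recomputes the light year and the quoted travel times from his own numbers:

```python
# Checking Hubble's light-year arithmetic with his own round numbers.
SPEED_MILES_PER_SEC = 186_000            # speed of light, as given in the text
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # one year of seconds

light_year_miles = SPEED_MILES_PER_SEC * SECONDS_PER_YEAR
print(f"one light year ~ {light_year_miles:.2e} miles")  # ~5.87e12: "six million million"

# Travel times quoted in the text, recovered from round mean distances:
moon_miles = 239_000       # mean Earth-Moon distance (round figure)
sun_miles = 93_000_000     # mean Earth-Sun distance (round figure)
print(f"moon: {moon_miles / SPEED_MILES_PER_SEC:.2f} seconds")        # about 1.3 s
print(f"sun:  {sun_miles / SPEED_MILES_PER_SEC / 60:.1f} minutes")    # about 8.3 min
```

The product comes out near 5.9 million million miles, which Hubble rounds up to "six followed by 12 ciphers," and the moon and sun times match his one-and-one-third seconds and roughly eight minutes.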

The faintest nebulae that can be detected with the greatest telescope are, on the average, about 500 million light years away. We intercept and photograph today the light which left these stellar systems far back in a remote geological age. This light has been sweeping through space for millions of centuries at the speed of 186,000 miles each second. Truly, as we look out into space, we look back into time.

With the largest telescope, we can look out into space about 500 million light years in all directions. Thus, the Observable Region is a vast sphere, about 1000 million light years in diameter, with the observer at the center.

Throughout this sphere are scattered about 100 million nebulae, each a stellar system at some stage of its evolutionary history. These nebulae average about 10,000 light years in diameter and about 100 million times the brightness of the sun. The average distance between neighboring nebulae is about two million light years.

A rough model of the Observable Region might be represented as follows. Assume that the sphere, 1000 million light years across, is reduced to a sphere with a diameter of one mile and a half. Then the 100 million nebulae are reduced to the size of golf balls, and they are scattered through the sphere at average intervals of about 30 feet. On this scale, the earth could not be seen with a microscope, not even with an electron microscope.
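As a check on the scale model (not part of the original talk), one can shrink Hubble's figures by the stated factor and see what the nebulae become. The nebula size does come out close to a golf ball; the mean separation lands at the same order of magnitude as the quoted 30 feet:

```python
# Rough check of Hubble's scale model: shrink a sphere 1000 million light
# years across to one 1.5 miles across, using the round numbers in the text.
MILES_PER_LY_MODEL = 1.5 / 1_000_000_000  # model miles per real light year

INCHES_PER_MILE = 63_360
FEET_PER_MILE = 5_280

nebula_diameter_ly = 10_000    # average nebula diameter, per the text
nebula_spacing_ly = 2_000_000  # average separation of neighbors, per the text

nebula_in = nebula_diameter_ly * MILES_PER_LY_MODEL * INCHES_PER_MILE
spacing_ft = nebula_spacing_ly * MILES_PER_LY_MODEL * FEET_PER_MILE
print(f"nebula ~ {nebula_in:.2f} inch across")  # ~0.95 inch: roughly golf-ball size
print(f"spacing ~ {spacing_ft:.0f} feet")       # ~16 ft, same order as the quoted 30 ft
```

The agreement is rough rather than exact, as one would expect from a model built entirely of round numbers.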

The nebulae are scattered singly, in groups, and even in clusters, but this irregularity is a minor detail. When very large volumes of space are compared, they are found to be remarkably alike. On the grand scale, the Observable Region is very much the same everywhere and in all directions - in other words, it is homogeneous.

This feature could not be predicted. It is the first characteristic definitely established for our sample of the Universe.

Only one other general feature has been found. Light reaching us from the nebulae has lost energy in proportion to the distance it has traveled. The fact is established, but the explanation is still uncertain.

* One of a series, delivered by American scientists, on the New York Philharmonic-Symphony program, sponsored by United States Rubber Company. Reprinted by permission of the United States Rubber Company. Courtesy: Maria Mitchell Observatory. Provided by the NASA Astrophysics Data System.

In: Popular Astronomy (Vol.54, p. 183, 1946).