By Ken Baake (ken.baake@ttu.edu)
A sign posted in the Ashland, Kansas, courthouse in the style of an Old West wanted poster reads, “Wanted dead: Sericea lespedeza”. The sign urges landowners to remove any outbreaks of the plant, a kind of flowering clover, noting that “This noxious weed can replace grass, reduce carrying capacity, and reduce income”. In comparison, a Web page from the College of Agriculture at Auburn University in Alabama describes a variant of Sericea lespedeza that has been modified through plant breeding to provide “numerous advantages as forage crop” for livestock.
So which is it—noxious pest that crowds out real food for domestic animals or nutritious grazing crop? As with many plants and animals in the science of ecology, beauty is partly in the eye of the beholder, and partly determined by where they are doing the beholding. A wild clover spreading from roadside to pasture in Kansas might be seen as “highly competitive and invasive”, as the Kansas State University agriculture extension service Web bulletin states. But those same successful reproductive traits, when harnessed through selective breeding in Alabama, can make it “a highly economical forage crop to grow”.
Ecology is replete with metaphors such as “invasive” that impose value judgments on the species that populate the Earth, revealing that scientific classifications often depend on the worth of something to human beings. The word ecology comes from the Greek oikos (household), and is thus itself a metaphor, one that depicts the flora and fauna of the earth as part of the furnishings of human society. Today’s science of ecology examines the interrelationships between living entities and their environments, both living and non-living, as if we were all living in the same house.
I first became aware of the highly metaphorical and anthropocentric nature of ecology when I participated in a working group of ecologists, rhetoricians, and other scholars at the National Center for Ecological Analysis and Synthesis in Santa Barbara in 2004. The topic for the four-day event was Ecological Metaphors: Their Cultural Resonance and What We Can Do About It. The conference was organized by Brendon Larson, an ecologist and interdisciplinary scholar of science and society who is particularly interested in the language of ecological metaphors. The assumption that brought us all together is that metaphors in ecology are prolific and often necessary to understand the complex interactions of species, but that those metaphors can be troublesome—inviting imprecise comparisons.
“Invasive species” and “alien species” are among the most common value-laden metaphors we encounter in ecology. These terms suggest an animal or a plant, such as Sericea lespedeza, that has been introduced into one ecosystem from another that is geographically distant. The species is cast in military terminology as an invader that gains a foothold in a new land, like an army storming the beaches, crossing the Alps, or sweeping down from the steppes of Central Asia.
Invasive species—as defined by Mark Woods and Paul Moriarty and summarized by Larson in an unpublished paper—are those that meet these five criteria:
- Was introduced by humans
- Evolved somewhere else
- Lives outside its historical range
- Degrades its new environment
- Is not integrated into an ecological community
Perhaps the most infamous plant to earn the “invasive species” label is kudzu, introduced from Japan in 1876 at the Centennial Exposition in Philadelphia and later planted throughout the American South to aid in erosion control. It became so successful—growing sometimes at a foot per day—that it took over trees, telephone poles, and anything else in its path. Another well-known invasive species is the zebra mussel, which made its way to the North American Great Lakes in the 1980s from the inland seas of Eurasia by stowing away in the ballast of ships. These mussels are about the size of a fingernail and cluster in ways that can clog boat motors or crowd out other species in the lakes.
The most dreaded invader of recent years has been the northern snakehead, a fish imported from China that turned up in ponds in Maryland and Virginia. The story of this fish grew to horror-movie status as a “Frankenfish”: it is said to have a rapacious appetite for native species, to be able to survive out of water for several days, and to be difficult to kill. “Please kill this fish by cutting/bleeding”, the Maryland Department of Natural Resources implored outdoors enthusiasts who might encounter it on a fishing trip.
These words of alarm about invasive species can ring incongruous to scientists schooled in Darwinian evolutionary biology, which asserts that nature has no goal and that species are constantly evolving, changing, and moving around. Thus, ecologist Larry Slobodkin asks a difficult question: “What can it mean, 150 years after Darwin, to say that some species or communities are good and some are bad?”
I asked a similar question when working earlier this year on a stream restoration project in New Mexico, part of which involved removing “alien” Siberian elm trees from the stream banks so that “native” cottonwood could have the space. As far as I could tell, the main thing that made the cottonwood preferable to the elm is that it was living in the region before the elm showed up. Its nativeness was its main attribute. But go back far enough in evolutionary history and you will find that cottonwoods at one time also were new to the region.
Perhaps the main threat of creatures like the Northern Snakehead to the ponds of Maryland or Virginia is that they are robust, successful, and, let’s be honest, frighteningly ugly with their gaping jaws. Just like cockroaches, anything that thrives and looks radically different threatens us at our core human level. As I write this I am thinking of the words John Lennon sang in the old Beatles song about hunting tigers in the jungle: “If looks could kill, it would have been us instead of him.”
So, is the concept of invasive species a variation on the instinct among human societies to distrust the newcomer, the illegal alien? Larson suggests this could be the case, writing that the term invasive introduces militaristic and xenophobic thought patterns to conservation studies and creates an overly simplistic view of the environment as a battleground between good and evil. Politicians successfully play on this fear by stirring up anxiety of “illegal aliens” who flock to our country and supposedly take away jobs and social services. Are ecologists doing the same fear mongering in order to win public support and financing for eradication projects?
I am not suggesting that ecologists should not call attention to the fragility of our environment and the need for humans, as stewards, to recognize that human-based norms should drive our stewardship policy. Nor am I expressing affection for harmful proliferators such as the Zebra Mussel. I am merely positing that we should acknowledge that despite how the metaphor of invasion focuses our thoughts, there is nothing intrinsically good or bad about any species when considered over the course of evolution. If we decide that diversity of plant and animal life is desirable and that ravenous species threatening such diversity should be controlled, we must recognize that we are making value judgments, not neutral scientific judgments.
References
Ball, D.; Mosjidis, J. (undated) AU grazer—a Sericea lespedeza that tolerates heavy grazing. Auburn University. <www.aces.edu/dept/forages/ForageAUGrazer03.htm> Consulted 6 September 2006.
KSRE. (undated) Sericea lespedeza awareness campaign. Kansas State Research and Extension. <www.oznet.ksu.edu/sericea/sericeainfo/Awareness/awareness.htm> Consulted 6 September 2006.
Larson, B. (undated) The war on invasive species. Unpublished essay.
Larson, B. 2005. The war of the roses: demilitarizing invasion biology. Frontiers in Ecology and the Environment 3(9):495-500.
MDNR. (undated) Have you seen this fish? Maryland Department of Natural Resources. <www.dnr.state.md.us/fisheries/fishingreport/snakehead.html> Consulted 6 September 2006.
Slobodkin, L.B. 2001. The good, the bad and the reified. Evolutionary Ecology Research 3:1-13.
Woods, M.; Moriarty, P.V. 2001. Strangers in a strange land: the problem of exotic species. Environmental Values 10(2):163-191.
Ken Baake (ken.baake@ttu.edu) is an associate professor in the Texas Tech English Department. He is the author of Metaphor and Knowledge: The Challenges of Writing Science, published in 2003 by SUNY Press. Current research interests include the rhetoric of science, including the ways in which different understandings of the world (e.g., scientific, metaphorical, mythological) interact and clash in the development of environmental policy.
Editorial: Overextending metaphors
by Geoff Hart (ghart@videotron.ca)
The world is a complex place, and understanding that complexity can be a thorny challenge. Different communicators solve that problem in a range of interestingly different ways. Mathematicians, for instance, create abstractions that describe the world by combining the symbols of mathematics into equations. To those of us who work with words, this is clearly analogous to creating words from combinations of letters, and sentences from combinations of words—though the words and sentences of mathematics are entirely incomprehensible if you don’t speak the language. For many aspects of reality, particularly in sciences such as physics, an equation provides a sufficiently complete and accurate description of reality that those who speak the language understand the message more clearly than would be possible with any other form of communication. Those of us who aren’t mathematicians, however, need something a bit more concrete to help us wrap our minds around a complex concept.
Abstraction is the process of simplifying something complex by eliminating details until only the essentials remain. Just as mathematicians abstract their world using equations, nonmathematicians also use abstractions—but more familiar ones. One effective approach to abstraction involves the creation of a metaphor that compares something we don’t yet understand with something that we do understand. By demonstrating the similarities between the known and the unknown, the metaphor builds a bridge between the two. Crossing that bridge provides the necessary understanding. (Indeed, my example of bridging a gap is one such metaphor.) Seen from the perspective of how human memory works, metaphors let us connect new information with information that already exists in the brain, and such connections are a powerful tool for creating meaning.
One problem with metaphors is that they can be carried too far: because a metaphor is only a simulation of reality, it does not precisely or fully match that reality, and each mismatch can potentially lead to misunderstanding. Consider, for example, the trash can used to delete files in most graphical user interfaces. The Macintosh interface designers who chose this metaphor to describe how users discard files chose an obvious and effective metaphor because just about everyone understands how a trash can works. But unfortunately, a great many users took that metaphor places its designers never intended. When this interface choice was first made, many Macintosh owners used their computer at home or in a small graphics studio rather than in a large corporate workplace, and thus used their experience with trash cans to make the following assumption: “When I throw something in the trash, it’s going to stay there forever, or at least until I can persuade someone to take out the trash.” Unfortunately, the first implementation of the Macintosh trash can automatically emptied the trash when you shut down your computer. That was clearly a problem for anyone who expected the discarded files to still be there waiting for them the next day when they turned on their computer.
So many people complained about losing precious files (never mind that these files should never have been in the trash in the first place) that Apple changed the interface. Version two of the trash can accounted for this problem by leaving deleted files in the trash until someone specifically told the computer to empty the trash. That’s a great idea, except by then, the world had moved on and more Macintosh users were using their computer in the workplace, where a janitor could be relied on to empty the trash each night after the workers went home. Since that wasn’t the way the software actually worked, the inevitable consequence was that files accumulated in the trash until they took over the entire computer; in other cases, people deleted files that were potentially embarrassing, not realizing the files were still there to be discovered by anyone who went poking around in the trash. Clearly, another small interface failure; unlike a spouse or roommate, the Macintosh operating system doesn’t remind you to empty your trash periodically.
A future iteration of the interface will presumably strike the right balance between versions one and two by retaining information in the trash until you specifically delete it, but also by periodically providing a gentle reminder to empty the trash. This example illustrates an important rule for successful use of metaphors: you must strive to understand the consequences of the metaphor by asking yourself what users will think when they encounter it, and thus, how they can be expected to behave. Where some behaviors will prove damaging, we need to clearly communicate the problem and its solution in our documentation. Better still, we need to report the problem to the designers of a product so they can take appropriate measures to protect users from their own instincts.
Another significant problem with metaphors is that they rely on certain assumptions, and those assumptions bias how we think about reality. One of the most famous (some might say infamous) relates to a favorite device of science fiction writers: time travel. Science fiction writer René Barjavel, in pondering the implications of time travel, wondered about what quickly became known as the grandfather paradox: What would happen if you traveled back in time to a date before your parents were born, and killed one of your grandparents? Clearly, this means that one of your parents would never have lived, and thus could not have conceived you; the result, a few years into the future, is that you would never exist to return and kill that grandparent. But because you did not kill the grandparent, your parent would be born, leading to your birth and your subsequent desire to travel back in time and become a murderer. Round and round we go until we give up in frustration and choose a convenient way to avoid the problem—declare that time travel is impossible.
Whether or not time travel really is impossible, that would be an unfortunate choice, because paradoxes are crucially important in science: they reveal when we don’t understand a process nearly as clearly as we thought we did. If we did understand fully, there would be no paradox. The grandfather paradox presupposes that we understand how the physics of time travel would really work, namely that there is an indestructible connection between the past and the future and that changing the past would inevitably change the future. Should we stop there, no one would ever examine time travel in more detail to see whether other possibilities exist, and that would rob us of a much richer understanding of our world. One consequence might be the elimination of the branch of physics that examines the “many worlds” hypothesis, in which a whole new universe is hypothesized to spring into existence as soon as we change the past. In the case of the grandfather paradox, this means that two universes (one in which you are born and one in which you are not) would move forward through time from that point onwards. In writing a story, I once proposed a different metaphor: that time is more like a VHS tape, and that if you go back and change something, this is no different from recording over an old program you’ve already watched. The future (the part of the tape after the new recording) isn’t changed because you haven’t overwritten it yet.
Both metaphors may be entirely incorrect (as seems likely based on our modern understanding of physics), but their correctness is not the important issue here: what’s important is how each metaphor biases the way we think and predetermines the kind of analysis we’re prepared to consider. Thus, a second rule of successful use of metaphors is that we must take great pains to understand the constraints they place on our thoughts. If we’re aware of those constraints, we can attempt to work around them; if not, we won’t make that effort, and that may prevent us from making crucial new discoveries.
A third problem arises if we oversimplify our description of reality and thus neglect key issues. Consider, for example, the issue of fighting forest fires. Because mature forests develop over time spans longer than the typical human life, it’s natural for us to think of them as eternal. Because we now understand the value of “untouched” nature, the inevitable consequence is that we want to preserve old forests and protect them against fires. This belief is epitomized in the public consciousness by Smokey the Bear and the “only you can prevent forest fires” slogan. Although it’s true that human-originated fires are a serious problem, and should often be fought, the often part is neglected. In particular, the limited worldview offered by Smokey the Bear ignores the fact that fires are a crucial part of natural ecosystems and that some forest ecosystems only develop after fires, and will eventually disappear from the landscape if natural fires are not allowed to burn.
The more general point is captured by the cliché that “the only constant is change”. Ecosystems, including forests, aren’t truly stable; instead, they exhibit what is known as metastability, in which what seems stable from the outside is actually changing continuously. In a forest, old trees die, unlucky trees are felled by lightning or windstorms, and new trees sprout to take their place. Rather than perfect stability, a mature forest is in equilibrium: individual components change, but the overall ecosystem stays close to its current state. Yet these equilibrium states also change; if the environment changes, or if disturbances such as fire are prevented, natural processes will lead the ecosystem to change into something new, and a new equilibrium will develop. For example, in the absence of fire, boreal jack pine forests will be replaced by shade-tolerant deciduous trees that grow in the limited light beneath the forest canopy. As the older trees die, they are replaced by younger deciduous trees, which produce so much shade when mature that the pines can no longer survive.
The problem with describing ecosystems as stable is that it conceals the important concept of dynamic equilibrium, and the consequence that any equilibrium will eventually shift to a new type of equilibrium. This means we can never preserve a specific ecosystem in its current state forever, and that we probably should not try. Instead, it is more important to preserve the conditions that allow a given site to evolve naturally from one equilibrium state to another (“succession”), while altering conditions elsewhere to permit the development of the desired ecosystem. Communicating more of the complexity provides the necessary bounds on the metaphor, permits a more complete understanding, and lets us choose wiser management strategies.
A third rule for successful use of metaphors is thus that we must identify critical points of failure—places where the metaphor is insufficiently complete that it leads our audience astray—and must provide the missing complexity that will prevent this misunderstanding. We must recognize that the purpose of a metaphor is to facilitate understanding, but once that understanding exists, we must build on it to provide any missing details that explain the true complexity.
As scientific communicators, we often resort to metaphors because of their power to facilitate understanding. But to use metaphors successfully, we must be conscious of the problems I’ve identified in this essay: we must identify mismatches with reality, implicit and explicit assumptions, and places where the metaphor is too simplistic. Understanding these three problems lets us help our audience to understand the mismatches between the metaphor and reality, remind them of the assumptions behind the metaphor so that they can challenge those assumptions and make conceptual breakthroughs, and recognize where we have oversimplified a complex reality. That oversimplification is only acceptable if it provides an initial understanding that we can subsequently build upon to create a deeper, richer understanding.
***
In a non-metaphorical sense, you might be interested in an article I recently published on knowledge and technology transfer. You can find it on my Web site:
Hart, G.J.S. 2006. Technology and knowledge transfer: science and industry working together. KnowGenesis International Journal for Technical Communication 1(2):28-31. <www.geoff-hart.com/resources/2006/techtran.htm>
Goals of scientific use of metaphor: risks and rewards
by Candice McKee (candie_mckee@sbcglobal.net)
The debate over the use of metaphors in science isn’t new, but their use has not become much better understood. Many of my academic colleagues still believe that scientists claim to “not use metaphors”. As Ken Baake notes, this is simply a misconception. Most modern scientists know that the use of metaphors is necessary yet risky, because the references attached to metaphors are specific to each individual reader. My colleagues in the English department, most often literature experts, tend to understand the use of metaphor primarily in terms of their literary studies, which include examinations of Hobbes, Locke, and Sprat. These studies are important in their historical context; however, modern studies of metaphor use in science provide us with a richer understanding of how metaphors are used by scientists and how scientists understand the use of metaphor in their own work.
In the literary world, metaphors open doors to knowledge, invoke powerful feelings, and develop esthetic prose. This approach isn’t risky because the reader is known to be on a specific kind of journey—to understand and derive something personal from the prose. In scientific writing, the reader is on a different journey—to understand the advantages, disadvantages, limits, and qualifiers of the information. The scientist is also on a journey—to provide clear, accurate, and concise information about current scientific theories or new, cutting-edge theories that might change science as we know it. Scientists face large challenges when they attempt to use metaphors in as narrow a way as possible because if the reader, whether expert or lay, attaches meaning to the metaphor that does not apply, the wrong meaning may be communicated and the scientist may quickly lose credibility.
In 1998, Johnson-Sheehan noted in his essay Metaphor in scientific discourse (in Battalio 1998) that by studying the use of metaphors in scientific writing we would better understand their use in other genres. Drawing on a repertoire of studies, he asserted that the use of metaphor in scientific writing is limited because scientists understand the risk of using metaphors. Johnson-Sheehan noted that when scientists do use metaphors, they use them to construct and create scientific knowledge, to support descriptions of tasks, and to help tell a story about the world’s causal structures.
As scientists begin to analyze scientific logic and data obtained from experiments and studies, they need a way to connect abstract concepts with real-world ideas, because not everyone thinks in pure abstraction. Metaphors help writers encourage other scientists to think about current ideas in new ways, sometimes leading to what Kuhn described as “paradigm shifts”. Furthermore, this offers scientist writers a method by which to look at new ideas and to further develop them. Writers often assign an initially vague metaphor to a concept in an effort to leave room for further development as they develop a stronger understanding. When scientists use such a vague metaphor to construct scientific knowledge for an abstract concept, they open the door for other scientists to offer additional metaphors to support or expand on the same concept.
Metaphors are functional in scientific writing, used to support the scientist’s research and ideas—not employed solely for poetic license or to demonstrate writing skill. However, our understanding of these metaphors may change, just as our perceptions of history change, leaving some metaphors to appear “poetic” after the passage of time even when the original intention was functional. For example, scientists in the 17th and 18th centuries might describe an experiment in which the length and width of a hole are of extreme importance. Instead of using mathematical measurements, some scientists used commonly known items to describe such measurements, as in “the length and width of a Taylor’s [tailor’s] needle”. To the modern reader, this metaphor holds little or no functional meaning because few people are familiar with the length and width of a tailor’s needle. Metaphors, then, may not retain their intended functionality over time. As new metaphors are created, others disappear.
Scientists seek to describe and define the causal structure of the world—what we call “reality”. Those of us with a rich background in philosophy know that “reality” can be subjective; moreover, those of us with an understanding of cross-cultural communications understand how readily “reality” changes from culture to culture. Scientists seek to describe and define the world as it really is—but without language, humans have no way of doing so. Many scientists understand that “reality” may be subjective for any number of reasons. Thus, they try to limit that subjectivity by limiting metaphorical references and telling readers exactly how they intend their metaphors to apply. In addition, scientists often revise such references to help clarify the intended meaning. This may occur when a scientist assigns a vague metaphor to a new concept, then redefines that metaphor as their understanding becomes clearer. As scientists come to increasingly understand the causal structure of the world, “reality” is redefined and clarified by means of increasingly sophisticated metaphors.
Because of the risks of using metaphors, their use in science will continue to provide scientists, writers, and researchers with a contentious point of debate. But when used skillfully, scientific metaphors will continue to inform us about our world, our “reality”, and ourselves.
References
Baake, K. 2003. Metaphor and knowledge: the challenges of writing science. SUNY Press, Albany, N.Y.
Battalio, J. (Ed.), 1998. Essays in the study of scientific discourse: methods, practice, and pedagogy. Ablex, Stamford, Conn.
Gross, A. 1996. The rhetoric of science. Harvard University Press, Cambridge, Mass.
Gross, A.; Harmon, J.; Reidy, M. 2002. Communicating science: the scientific article from the 17th century to the present. Oxford University Press, Oxford, U.K.
Kuhn, T. 1996. The structure of scientific revolutions. 3rd ed. Univ. Chicago Press, Chicago, Ill.
Candice McKee (candie_mckee@sbcglobal.net) is a senior member of the Oklahoma State University (OSU) STC student chapter and is working on her PhD at OSU. She is past president of the Oklahoma chapter and currently the manager of the International Student Technical Communication Competition (ISTCC).
Book review: The Chicago guide to writing about numbers
Miller, J.E. 2004. The Chicago guide to writing about numbers. University of Chicago Press, Chicago, IL. [ISBN 0-226-52631-3. 312 p., including index. USD$17.00 (softcover).]
by David E. Nadziejka (David.Nadziejka@vai.org)
Previously published in Technical Communication 52(4):480-481.
The Chicago guide to writing about numbers provides a comprehensive discussion of why and how to use numbers in written documents. The first of the three parts of this book, “Principles”, provides a set of 12 principles for writing about numerical data, as well as a general discussion of causality and significance. The second part, “Tools”, provides four chapters on quantitative comparisons, tables, figures, and examples and analogies. The final part, “Pulling it all together”, covers distributions and associations; data and methods; methods for writing introductions, results, and conclusions; and guidelines for presenting numerical data in oral presentations.
Throughout the book, Miller uses examples in the form of “poor, better, best” writing to illustrate how a particular wording can be improved, with a concise explanation of exactly what the improvement is. This is an effective and useful way to add clarity.
The audience Miller aims at is not clearly specified; the outside back cover states the book was tested “with students and professionals alike”. She assumes the reader has “a good working knowledge of elementary quantitative concepts such as ratios, percentages, averages, and simple statistical tests” (p. 7). The question of audience comes up because of the basic level at which Miller starts the discussion; for example, from the first chapter: “An important aspect of ‘what’ you are reporting is the units in which it was measured. There are different systems of measurement for virtually everything we quantify” (p. 13).
Although Miller presents accepted principles—always give units with numbers, spell out numbers (and their units) that begin a sentence, clearly define terms—she provides a great deal of explanation and rationale. Writers who have little experience with numerical data will likely need this fundamental approach, but I think much of it is material that technically knowledgeable readers do or should already understand. I found it difficult to maintain my concentration through the “Principles” part: there was too much explanation of things I already know and accept.
That said, Chapter 3, on causality and significance, is one for readers of any background. Novice writers will surely need this discussion on causality, bias, and association. Statistical and substantive significance are defined and explained, with useful examples that illustrate the differences between prose suitable for technical readers and that suitable for the general public. Although the discussion is necessarily limited, even writers in a technical field will benefit from the succinct review of kinds of significance and how to write about them.
Also of benefit to virtually any writer is Miller’s discussion in Chapter 4 of the types of variables (continuous versus categorical), their characteristics, and how they can and cannot be manipulated. It’s too easy to forget that you are dealing with categorical variables when the groups happen to be named “1” through “6”, and then to erroneously “average” those numbers.
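That categorical-variable pitfall can be made concrete with a small sketch; the region codes and data below are my own hypothetical illustration, not an example from Miller’s book:

```python
# Hypothetical survey responses coded by region:
# 1 = Northeast, 2 = Southeast, 3 = Midwest, 4 = Southwest, 5 = West, 6 = Other.
from collections import Counter

regions = [1, 3, 3, 6, 2, 3]

# Tempting but wrong: the codes are labels, not quantities, so their
# arithmetic mean describes no respondent and no region.
bogus_mean = sum(regions) / len(regions)

# Appropriate: treat the codes as categories and count or tabulate them.
counts = Counter(regions)
modal_region = counts.most_common(1)[0][0]

print(bogus_mean)    # 3.0 -- arithmetically valid, semantically empty
print(modal_region)  # 3   -- the most frequent category (the mode)
```

The point is exactly Miller’s: the computer will happily average the labels, so the writer, not the software, must remember which operations the variable type permits.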
The “Tools” part covers the types of charts and tables: when to use them and how to prepare them. The “how to prepare” instructions are provided in great detail, including when and how to indent row headings of tables and how to order the variables listed. The chapter on charts covers the usual pie charts, bar charts, scatter plots, and so on, with detail on when each is appropriate and a 4-page summary table. The examples discussed in each of these chapters are suitably diverse for a novice writer, but the complexity of the examples is limited.
The “Pulling it all together” part of the book focuses on the writing of your document and contains a large number of “poor, better, best” examples. Miller gives guidelines about how to select an appropriate level of technical detail. There is a cursory mention of writing general-interest articles (with an accompanying example) and a longer discussion with examples of the structure of—and instructions on how to write—a standard scientific paper. The 25-page chapter on using numbers in oral presentations provides good advice and guidelines, some of which I see ignored regularly in presentations by scientists, engineers, and (I must admit) technical communicators.
The information in this part of the book generally seems more advanced than that in the first two. In a way, this is natural given that the discussion is about the complex process of writing clearly, but certain passages seem well beyond the grasp of a novice writing about numbers. For example, discussing a scientific paper’s Methods section, Miller writes: “If your data are from a random sample, they usually come with sampling weights that reflect the sampling rate, clustering, or disproportionate sampling, and correct for differential nonresponse and loss to follow-up . . . ” (p. 213).
Overall, this book contains useful information on writing about numbers; I found very few principles or details that I would disagree with. The question is who will benefit from reading the book. My thought is that this is primarily a book for writers with little or no background in dealing with numerical data in their prose; it may also be useful for undergraduates in science or engineering. Although experienced technical writers or editors may glean a new idea or two from Miller’s book, for the most part I believe they will simply be nodding in agreement at lessons already learned.
David E. Nadziejka (David.Nadziejka@vai.org) is the biomedical editor at the Van Andel Research Institute in Grand Rapids, MI, and an STC fellow. He has been a science and engineering editor for 25 years and has taught technical communication at the Institute of Paper Chemistry, Argonne National Laboratory, and Illinois Institute of Technology.
Book review: Creating more effective graphs
Robbins, N.B. 2005. Creating more effective graphs. Wiley-Interscience, Hoboken, NJ. 402 p., including index. [ISBN 0-471-27402-X. USD$64.95 (softcover).]
by Denise Kadilak (Denise.Kadilak@Blackbaud.com)
Previously published in Technical Communication 52(4):485-486.
Do you need to create a graph? Are you unsure what type of graph will work best with your data? Are you afraid of alienating your readers with an ineffective graph? If so, you may find Naomi B. Robbins’ Creating more effective graphs helpful.
Creating more effective graphs is a quick reference guide on selecting easy-to-read charts and graphs for any context. The book examines a multitude of graphing options in relationship to various data scenarios and briefly explains why some graphs work and others do not.
Robbins, president of NBR, a graphical data presentation consulting and training company, moves her readers from the simple—pie charts, dot plots, and bar charts—to the complex—trellis displays, mosaic plots, and linked micromap plots. The journey is slow and gentle. Robbins avoids large leaps, using a variety of examples at every level and explaining and displaying advantages and disadvantages of one graph type over another. While moving readers through this host of sample graphs, Robbins’ focus—the data—never wanes: Is the data clearly presented? How long will it take the reader to discern the results? Is there a better graphing alternative?
The arrangement of the book mirrors Robbins’ concern for clarity. The book is composed of a series of sample graphs, each accompanied by brief overviews of the data represented and explanations of why one version works better than another. Robbins invites you to take your time, to study each graph and come to some conclusion as to what it represents before turning the page. Once you turn the page, a better graphic representation of the same data usually appears, clearly demonstrating the limitations of many common graphs and the profound effect the choice has on the user’s interpretation of the data.
Influenced by the work of William S. Cleveland and Edward Tufte, Robbins draws heavily on the psychology of information design. In The visual display of quantitative information (Graphics Press, 1983), a classic on statistical graphics, Tufte argues that a good graphic design allows viewers to quickly comprehend a large array of ideas. Cleveland, in The elements of graphing data (Wadsworth, 1985), argues that the effectiveness of a graph depends on how well you make use of graphical perception principles. Understanding visual and graphical perception is key, Cleveland contends, to creating a user-friendly graph.
Robbins’ examples demonstrate both principles. She repeatedly asks you to time yourself when interpreting a graph, to compare the time it takes to read one graph versus another graph, and to notice if you glean more information from one graph versus another.
For example, when graphing the area of U.S. states, Robbins first uses a strip plot, which displays the distribution of data points on a numerical axis. With the plot, it is easy to discern the range of areas and the location of most of the values. On the next page, however, she displays the same data using a dot plot. The increased clarity of the dot plot is astounding. The dot plot is about three times larger than the strip plot, allowing a good deal more space in which to display all fifty states. Because of the increased space, the dots representing the data do not overlap, making it much easier to see each state’s area.
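The kind of dot plot Robbins describes is straightforward to build; a minimal sketch in Python with matplotlib follows, using a handful of approximate state areas (rounded figures chosen for illustration, not Robbins’ actual dataset):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Approximate areas in thousands of square miles (illustrative values).
areas = {"Rhode Island": 1.5, "Ohio": 44.8, "Texas": 268.6, "Alaska": 665.4}

# Order by size rather than alphabetically, as Robbins recommends.
states = sorted(areas, key=areas.get)

fig, ax = plt.subplots()
# Dots only: the quantitative variable on the horizontal axis,
# the state names on the vertical axis, per Cleveland's design.
ax.plot([areas[s] for s in states], range(len(states)), "o")
ax.set_yticks(range(len(states)))
ax.set_yticklabels(states)
ax.set_xlabel("Area (thousands of square miles)")
fig.savefig("dot_plot.png")
```

Because each value is read off a common horizontal scale, this layout supports the positional judgments Cleveland found people decode most accurately.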
Of the dot plot, Robbins explains, “Dot plots were introduced by Cleveland (1984) after extensive experimentation on human perception and our ability to decode graphical information. Since the judgments the reader makes when decoding the information are based on position along the common horizontal scale, these plots display data effectively. Since it would be very difficult to fit the names of the states on the horizontal axis, dot plots place them on the vertical axis, and the quantitative variable, area in thousands of square miles, on the horizontal axis” (p. 69).
She takes this same example a step further, demonstrating how to improve on the dot plot by arranging information in order of size rather than alphabetically. She explains, “As in any form of communication, we must know our audience and tailor what we say to be appropriate for that audience, the readers of the chart” (p. 71).
Robbins also offers helpful advice on the various graphing software products available and a checklist of possible graph defects.
The book’s only fault is Robbins’ weak prose style, which affects primarily the preface and introduction. She does well writing short, to-the-point explanations of the various graphs, but the comparatively text-heavy preface and introduction prove awkward for her. Her sentences lack variety, and the excessive use of pronouns with no antecedents makes her writing difficult to understand at times. Fortunately, Robbins’ solid knowledge of her topic and sound visual aids overcome these flaws.
Creating more effective graphs answers all the basic questions of graphing: What constitutes an effective graph? How do I choose a graph? How do I recognize an ineffective graph? Novice and experienced graph designers alike will benefit from reading this book.
Denise Kadilak (Denise.Kadilak@Blackbaud.com) is a technical writer with Blackbaud, Inc., a software company in Charleston, South Carolina, and a senior member of the STC Northeast Ohio chapter. She holds a master’s degree in English and is currently working toward a certificate in computer science at The University of Akron.
Parting thoughts
“You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing—that’s what counts. I learned very early the difference between knowing the name of something and knowing something.”—Richard Feynman
—
“It is the theory which decides what we can observe.”—Albert Einstein
—
“If you are ever forced to take a chemistry class, you will probably see, at the front of the classroom, a large chart divided into squares, with different numbers and letters in each of them. This chart is called the table of the elements, and scientists like to say that it contains all the substances that make up our world. Like everyone else, scientists are wrong from time to time, and it is easy to see that they are wrong about the table of elements. Because although this table contains a great many elements, from the element oxygen, which is found in the air, to the element aluminum, which is found in cans of soda, the table of elements does not contain one of the most powerful elements that make up our world, and that is the element of surprise.”
—”Lemony Snicket”, The ersatz elevator
—
“While sophisticated technologies… might let us see nature, observe the stars, and even watch the news more clearly, we mustn’t let them deprive us of the icons and metaphors we use to describe the things in our lives that are less tangible and more allegorical, less a reality and more a model. For without the ability to model, we don’t have any science at all.”—Douglas Rushkoff, Too clear for comfort
—
“Homo sapiens is, in the end, not that fleet an animal intellectually. We take our time, so that any new paradigm has to work on us like a shaggy-dog story… whose point is its windy telling and retelling… Like evolution among the species, well-entrenched norms can be completely upended—it just takes time.”—Jack Hitt, A gospel according to the Earth
—
“Sometimes a new scientific idea can be like the punch line of a very long joke: you need to keep the whole setup in mind to appreciate the humor, but it’s worth the effort.”—Jaron Lanier, Raft to the future
—
“Must a scientific description of life be so lifeless? … How to reconcile one day’s sense that a mathematical model could represent a deep and universal rule with another day’s realization that a model told us far too little about the creatures we watched just hours before?”—Aaron E. Hirsh, Signs of Life