In this issue:
- Linguistic lightning: the power of the right word
- Editorial: Just how significant is it?
- Book review: Pocket Book of Technical Writing for Engineers and Scientists
- Book review: The ACS Style Guide: Effective Communication of Scientific Information
- Fun on the Web
- Parting thoughts
- Contact and copyright information
by Bob Johnson ([email protected])
Previously published in Science Editor 29(1):24-25. January/February 2006.
“The difference between the almost right word and the right word is really a large matter—it’s the difference between the lightning bug and the lightning.”—Samuel Clemens (Mark Twain), Letter to George Bainton, 10/15/1888
OK, Sam, how did you do it?
To my ear, it is always easier to detect the wrong word, the word that clunks when it should ring, the one that obfuscates rather than clarifies, the one that replaces a stout, strong, time-honored linguistic building block with a limp piece of voguish fudge (finalizing a task is no improvement over finishing it). And certainly, no word should sound a false alarm.
Of all the pernicious pieces of technical miswriting of the past decade, the laurel for the worst, in my view, rests on the head of the wretch at Microsoft who dredged up the following warning for the Windows 98 operating system: “This program has performed an illegal operation and will be shut down.”
Coupling the word illegal with the passive voice construction will be shut down conjures the image of the FBI at your door, axes drawn, about to beat it in, handcuff you, and seize your computer. I wonder how many elderly newbies in Iowa keeled over at the keyboard the first time they saw that one flash on their screen. Changing illegal to incompatible and taking be out of will be shut down might have prevented some ambulance runs.
The thoughtful writer shifts the burden off the reader. She or he does the work of supplying precise meaning so that all the reader has to do is read, not reread, rationalize the meaning, or recover from a heart attack. This implies that the writer (1) commands a sufficient stock of synonyms—or access to one—from which to choose (see “Oxford has a word for it”, below) and (2) takes time to review, reflect, and rewrite—for clarity if not for eloquence.
Trade in your blunderbuss for a .44 magnum
Writers should take dead aim at their readers. Many (I’ll say it—lazy) writers habitually reach for catch-all words that convey a broad sense of their meaning rather than searching for more accurate words that help the reader grasp it. Such words are what Sir Ernest Gowers (The Complete Plain Words) called “blunderbuss words”—words that spray pellets of meaning like a shotgun, hoping that some will hit the target. This kind of author uses words “in the front rank of the armoury” rather than “troubling to search in the ranks behind for one that is more likely to hit the target in the middle”.
Here are six examples from the recent scientific press that make Gowers’ point, focusing mainly on the weak verbs have and show, with suggestions for improvement:
- “The juvenile gamont specimens of a single clonal population, produced during asexual reproduction, should thus have [display] very little genetic variability.”
- “According to results of a recent study of planktonic species, fluctuations of this concentration may have [exert] minimal influence on foraminiferal Mg/Ca.”
- “Transmission electron microscopy and negative staining of the helical ribonucleoprotein capsid show [reveal] a herring-bone appearance.”
- “Electron probe measurements showed [detected] Mg/Ca variability within a single chamber.”
- “These results show [suggest; imply; indicate; demonstrate; prove] that Mg concentrations of high-Mg species are somehow controlled by temperature.”
- “The NCBI site contains several major resources. The most well known [best known] among these is probably GenBank.”
Sam would have loved Sir Winston
As an example of the power of the right word in the right place, consider Sir Winston Churchill’s speech of 18 June 1940 to the House of Commons. Many have termed it the greatest speech of the 20th century. In his summation, Sir Winston exhorted:
“Let us therefore brace ourselves to our duties, and so bear ourselves that if the British Empire and its Commonwealth last for a thousand years, men will still say, ‘This was their finest hour.’ ”
Listen to a RealAudio clip at <www.earthstation1.com/pgs/churchill/dos-wc047.wav.html>.
Try substituting any other word for finest in the last sentence. “This was their best hour?” “Most noble hour?” “Most courageous hour?” Nothing else serves as well as finest. You can hear the lightning crackle.
Oxford has a word for it
The excellent Oxford Thesaurus of English (2004) can help you locate the right word. Did you know that there are no fewer than 48 synonyms for the adjective peculiar? How many can you think of? Test yourself, then visit <www.askoxford.com/worldofwords/thesauri/?view=uk> to see how you did.
Thinkmap’s Visual Thesaurus
Thinkmap’s Visual Thesaurus (www.visualthesaurus.com) is also an excellent digital resource to suggest and sort out crisscrossing connotations. Type a word in the text box, click “Look it up”, and VT flies to its task, popping up clusters of connotations like flowerets in blossom around key synonyms from its 145 000-word vocabulary. It will even pronounce unfamiliar terms and display both British and American spellings.
The legal eagle is also a Word Hawk
The New York Times (29 August 2005, A1) described Supreme Court nominee John Roberts as “a cheerfully ruthless copy editor” who has “demanded verbal rigor from his colleagues and subordinates, refusing to tolerate the slightest grammatical slip”.
The paper concluded, “If Judge Roberts is confirmed, and his word-consciousness follows him to the court, it will put him in the upper tier of justices who have put a premium on the English language”.
Querily we roll along
The Word Hawk requests your opinion: What do you think about the usage “unhoused” over “homeless” (http://www.paloaltoonline.com/weekly/morgue/2005/2005_08_24.homesidea.shtml)? Useful connotation? Political hypercorrectness? (Save your Google search: 14 700 hits for “unhoused” and 12 900 000 for “homeless”.) Please reply to <[email protected]>.
News knocks numb nouns
The Palo Alto Daily News of 10 August 2005 noted the following all-caps city warning on some newsracks:
NEWSRACK ORDINANCE COMPLIANCE VIOLATION WARNING AND FIXTURE IMPOUNDMENT NOTICE: CORRECTIVE ACTION REQUIRED.
Nine nouns in 12 words. The News also printed a suggested fix penned by visiting Stanford linguistics professor Arnold Zwicky: “This newsrack violates city ordinances and will be removed unless the violation is fixed.” Ahem! That’s 14 words, professor—but at least you eliminated a five-noun string.
Chuckle of the month
“Hix Nix Stix Flix” is a headline out of the past from Variety, Hollywood’s show-business review. It meant that movies about rural life did not sit well with farm folk. William Safire parodied it with his “Hix Nix Blix Fix” column in the 24 October 2002 New York Times about the Bush administration’s refusal to accept a North Korean nuclear nonproliferation scheme floated by Hans Blix, the head of the International Atomic Energy Agency.
Bob Johnson ([email protected]) writes the “Word Hawk” column for Science Editor, the bimonthly publication of the Council of Science Editors (CSE). Following university training as a biologist, Mr. Johnson discovered a love for writing and editing, holding senior positions at Annual Reviews, Frost & Sullivan, SRI International, and Applied Biosystems. He is a member of the Board of Editors in the Life Sciences (BELS). He also has a degree in French. In 2000, Mr. Johnson was language arts editor for the statewide California High-School Exit Examination (CAHSEE). He was a reviewer of the AMA Manual of Style, 10th Edition (2007), and the CSE’s Scientific Style and Format, Seventh Edition (2006). Currently self-employed, his recent work includes editing two books: Is It You, Me, or Adult A.D.D.? (Pera 2008) and the two-volume Encyclopedia of Love in World Religions (Greenberg 2007).
by Geoff Hart ([email protected])
This editorial will seem a bit distant from the subject of this newsletter (scientific communication), but bear with me for a few hundred words and I hope the relevance will become clear.
A curious little article appeared in 2003 in the ordinarily staid and sober British Medical Journal: “Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials” (BMJ 327:1459-1461, http://www.bmj.com/cgi/content/full/327/7429/1459). In this paper, Gordon Smith and Jill Pell note, tongues firmly in cheek, that their goal was to determine whether parachutes were effective in preventing “major trauma related to gravitational challenge”—what the rest of us might call, in the vernacular, damage caused by striking the ground at high velocity after falling from an airplane. In their words:
“As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.”
Like all the best satire, their paper provides an important critique—here, one that cuts right to the heart of the modern scientific method. Currently, the gold standard for research is a placebo-controlled double-blind experimental design. The “placebo” part means that you study both what you’re testing and a placebo that should, in theory, have no effect. The “double-blind” part refers to the fact that you conceal details of the treatment from both the researchers and the test subjects so that both are “blind” to the experimental conditions and therefore cannot consciously or subconsciously affect the results. For a result to be considered real, the treatment must produce better results than the placebo under these conditions. To learn more, see the Wikipedia article on this topic (http://en.wikipedia.org/wiki/Placebo-controlled_study).
Such trials are critical in several areas of science, most notably in medicine, and have greatly advanced our knowledge by preventing many false or potentially misleading findings from entering the scientific literature. However, for various compelling reasons, it’s not always possible to design such trials—as Smith and Pell so incisively observe. But not everyone accepts that argument, and some people who should know better insist on this standard of proof. It’s worth noting that some of the most important scientific discoveries, including most early genetics and “natural history” (animal and plant) studies, did not use this approach at all, yet managed to produce important results. Perhaps more seriously, blinded placebo trials don’t eliminate bias and error in how scientists frame their research question (i.e., you won’t see what you’re not looking for), and they don’t eliminate problems with how researchers interpret their results.
One such problem relates to the modern dependence on the science of statistics, and specifically on the requirement for statistical replication, to provide reassurance that our observations are real. Replication involves observing the same process repeatedly under controlled conditions based on the belief that something that happens repeatedly is likely to have a predictable cause rather than arising solely from random chance. The statistical aspect of the approach relies on the mathematics of probability to provide an indication of how likely it is that we observed something purely by random chance. To learn more, see the Wikipedia article (http://en.wikipedia.org/wiki/Replication_(statistics)).
Again, this approach represents a tremendous step forward. Its key insight may be that it quantifies the likelihood that a result could have occurred purely as a result of random chance, thereby increasing our confidence that a result is real. Although the principle is sound and important, we must remember two key limitations:
- First and most serious, statistical significance does not tell us whether a result is real or just a coincidence. It tells us only how probable it is that a result at least this extreme would arise by chance alone.
- Second, statistical tests rely on an arbitrary decision about what probability level defines a significant finding.
Most science journals consider a result to be statistically significant if the likelihood of it occurring purely by chance is less than 5%. This sounds impressive until you think about what it means: at this level of significance, as many as 1 in 20 published results represent nothing more than bad luck on the part of the researchers rather than a valid result that points towards something real. Those are great odds if you’re a gambler or a professional athlete in any major sport, but not so great if you believe that our world follows consistent, predictable rules that should produce consistent, predictable results. In such a world, wouldn’t a 1% or 0.1% chance of error provide more confidence?
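The 1-in-20 arithmetic is easy to see in a quick simulation (a hypothetical sketch, not part of the original editorial): draw pairs of samples from the same population, test each pair for a “significant” difference, and count the false alarms.

```python
import random
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(42)
n, trials = 100, 2000
crit = 1.96  # two-sided 5% critical value for a large-sample test
false_positives = 0
for _ in range(trials):
    # Both samples come from the SAME population, so any "significant"
    # difference between them is pure chance.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if abs(welch_t(a, b)) > crit:
        false_positives += 1

rate = false_positives / trials
print(rate)  # hovers near 0.05: about 1 test in 20 cries wolf
```

Even with no real effect anywhere, roughly 5% of these tests come back “significant”—which is exactly what the 5% criterion promises, no more and no less.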
Please note: In no way am I suggesting that statistical replication is a bad idea, or that a 5% probability level is inadequate. In reality, many additional controls greatly reduce the likelihood of errors being published, including the fact that all research results are evaluated by a journal’s editor and (most often) at least two peer reviewers in light of their knowledge of the body of research. That corpus has been validated by many other researchers who have repeated the research under different conditions, and who have tested each other’s findings to ensure that they present a consistent picture. Studies that fail to produce statistically significant results are difficult to publish because there is no way to know whether the researchers simply screwed up; 1 in 20 may have. Statistically significant positive results that contradict accepted knowledge are similarly difficult to publish, at least until other researchers have replicated these results to “prove” that they are real.
Andrew Gelman and Hal Stern explored this problem and several related issues in their paper The difference between “significant” and “not significant” is not itself statistically significant (The American Statistician, November 2006; http://www.stat.columbia.edu/~gelman/research/published/signif4.pdf). This is a subtle paper, and not an easy read, but it makes some important points.

I’m not the only one to have observed that any particular threshold for statistical significance is arbitrary. Neither am I alone in pointing out that statistically significant results can have no practical significance; for example, a change of blood pressure of 1 point in response to some medication may be statistically significant, but for someone who must lower their systolic blood pressure from 160 to 120, it’s not a meaningful result.

Their more important point is that a small and not necessarily meaningful change in the measured values can change a statistically insignificant result (with a 5.1% significance level) into a statistically significant result (with a 4.9% significance level). It’s not the arbitrariness of the 5% criterion that is important here, but rather the fact that the change in the data that is required to move a result from significant to non-significant may not be meaningful in any practical sense. This makes it risky to compare two results based on their reported significance levels: the fact that one result has a significance level of 4.9% and another has a significance level of 5.1% does not make the former result more meaningful. This is particularly true if the two studies that produced these results reveal a small and a large actual difference: the small significant result may not be meaningful, whereas the large non-significant result may be very meaningful indeed. It’s often more important to compare the magnitudes of two results (do both studies say the same thing?), not just their significance levels.
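Gelman and Stern’s point can be made concrete in a few lines of code. The numbers below are invented for illustration, and a standard-normal test statistic is used for simplicity: two hypothetical studies straddle the 5% threshold, yet the difference between them is nowhere near significant.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Two hypothetical studies of the same effect, each reporting an
# estimate with standard error 1.0 (illustrative numbers only).
est_a, se_a = 2.0, 1.0   # study A
est_b, se_b = 1.9, 1.0   # study B

p_a = two_sided_p(est_a / se_a)   # ~0.046 -> "significant"
p_b = two_sided_p(est_b / se_b)   # ~0.057 -> "not significant"

# But is A's estimate actually distinguishable from B's?
z_diff = (est_a - est_b) / math.sqrt(se_a**2 + se_b**2)
p_diff = two_sided_p(z_diff)      # ~0.94: no evidence the studies differ

print(round(p_a, 3), round(p_b, 3), round(p_diff, 3))
```

Study A clears the 5% bar and study B misses it, yet the two estimates differ by a trivial amount relative to their uncertainty: treating A as “real” and B as “nothing” reads far more into the threshold than the data support.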
So: how does all this relate to the task of scientific communication? First, as in the Smith and Pell paper, it means that we must exercise some judgment in assessing and communicating the methods used to produce experimental results. Sometimes the traditional techniques of a trained and objective observer, combined with a reliance on extensive experience with how the world works, can reveal important results, even if the methodology is not state of the art. Indeed, most engineering proceeds based on the assumption that there’s no need to test design alternatives that we know will prove ineffective just to prove that effective alternatives really work better than the failures. Of course, we should never ignore an opportunity to test a conclusion using the most rigorous methods available, but neither should we insist on those methods when there are clear ethical or practical problems that prevent their use. If a research result appears important and has important consequences, it may be necessary to find a way to draw attention to that result, yet without blinding ourselves or our audience to potential flaws in the methodology that produced it.
Second, as in the Gelman and Stern paper, we must not rely exclusively on simple numerical values (levels of statistical significance) to judge whether a result is important. We must always remember the arbitrary nature of statistical levels of significance, and never assume that something is meaningful simply because it is statistically significant; neither must we assume that something is wrong and unimportant if it fails to meet such criteria. We must, of course, rigorously examine research results; problems arise most often when we use that rigor to obviate the need to actually think about the results.
For us as communicators, this can lead to thorny challenges. Because we are most often not experts in a particular subject, we must work closely with the experts to ensure that we understand what we must communicate: Is it meaningful even if the methodology was flawed or the result was not statistically significant? Is it important even if the methodology was scrupulous and the results statistically significant? How much confidence should we place in the results? In short, how significant (in all senses of the word) is the result that we’re being asked to communicate?
[Finkelstein, L. 2008. 3rd ed. McGraw-Hill, New York, NY. ISBN 978-0-07-319159-1. 384 p., including index. $40.00 USD (softcover).]
Previously published in Technical Communication 55(3):297, August 2008.
by Wayne L. Schmadeka ([email protected])
If you are a student of engineering or science and want a textbook that provides straightforward and practical solutions to a wide range of technical writing needs, Leo Finkelstein’s Pocket Book of Technical Writing for Engineers and Scientists may be for you.
Finkelstein’s writing is always straightforward, sometimes colorful, sometimes amusing. He manages to make a potentially dry topic somewhat lighthearted while providing practical guidance and insight. For example, while defining technical writing, Finkelstein writes that “Grammar and style often go to the heart of the author’s credibility. In other words, if you write like an idiot, the reader may well perceive you as an idiot and your organization as a collection of idiots” (p. 8).
Finkelstein’s coverage of the topic is rather comprehensive. He includes chapters that describe how to write technical definitions, descriptions of mechanisms and processes, proposals, various technical reports, instructions and manuals, abstracts and summaries, and various business communications. He also includes chapters about recognizing and responding to the ethical issues inherent in technical writing, citing sources, using visuals, publishing electronically, developing and delivering presentations and briefings, and writing in teams. The author provides an abundance of examples, which are often accompanied by a variety of excellent photos, diagrams, tables, graphs, and charts that illustrate and clarify concepts, processes, and procedures.
The chapters on technical reports describe, step by step, how to write progress, feasibility, recommendation, laboratory, project, and research reports. Each of these chapters leads off with a description of the report’s purpose, continues with a description and explanation of the steps in writing the report, and follows up with multiple fictitious but realistic examples on diverse engineering and scientific topics. Each concludes with a checklist for the writer and an exercise that reinforces and reviews the chapter content.
Although this book is straightforward, easy to read, and relatively comprehensive, it does have some shortcomings. Most notably, the four-page index is inadequate, especially for a textbook of nearly 400 pages. Also, there is no information about writing for publication in journals, which is an important topic for many scientists. There is nothing about document-management topics such as version control, file-naming conventions, archival procedures, and change-history documentation. The explanation of copyright is limited to the impact and anticipated influences of the Web: there is no definition of copyright, no explanation of its protections or the limitations of the fair-use doctrine, and no description of the requirements and process for copyright registration.
This well-written and well-illustrated book provides essential information about the purpose and writing of a wide variety of engineering and scientific documents. It is a valuable training tool and reference for students and practitioners of engineering and science, as well as for technical writers studying or working in these fields.
Wayne L. Schmadeka ([email protected]) is an STC senior member and serves on the faculty of the Professional Writing Program, University of Houston-Downtown. He founded and ran an educational software engineering firm for 12 years, has extensive experience developing varied forms of documentation, and consults with engineering firms to increase the effectiveness and reduce the cost of their documentation.
[Coghill, A.M.; Garson, L.R. (eds.) 2006. 3rd ed. Oxford University Press, New York, NY. ISBN 978-0-8412-3999-9. 444 p., including index. $59.50 USD.]
Previously published in Technical Communication 55(1):79-80, February 2008.
by David E. Nadziejka ([email protected])
The ACS Style Guide has been reorganized and revised since the second edition, and the book is now divided into two major parts. Part 1, “Scientific communication”, covers general and procedural topics, including ethics, the writing of scientific papers, the editorial process, writing style and usage, Web-based electronic submission of manuscripts, peer review, copyright, and markup languages. Most of this material is new. Part 2, “Style guidelines”, arranges most of the style manual material into nine chapters, including the invaluable chapters on chemistry. Dropped from the new edition are chapters on posters, letters to the editor, and press releases; peer review (containing several dozen firsthand views of authors and editors about how they approach such review); and oral presentations. Given that this edition is subtitled “Effective Communication of Scientific Information”—rather than “A Manual for Authors and Editors”—I think it’s regrettable that the oral presentations chapter is gone. In my experience, well-presented talks are far too rare.
I’m not convinced of the value of Part 1. Except for Chapter 4 (on writing style and word usage), the chapters in Part 1 seem too brief and general for most writers or editors. Chapter 1, on ethics in scientific publication, is eight pages of introductory discussion plus a six-page appendix reproducing the ACS Ethical Guidelines to Publication of Chemical Research. Chapter 5, on electronic submission of manuscripts, contains six pages of discussion plus a three-page appendix listing publishers and grant agencies with their respective URLs and submission software, and a two-page appendix on text and image formats used for seven specific submission systems. The discussion of the submission process may be helpful to new authors, but with technology today, as the chapter author points out, “the information that follows will likely age rapidly” (p. 59). Peer review is covered in the six pages of Chapter 6, which outlines the process and the responsibilities of author and reviewer.
Chapter 8 covers markup languages and the “datument” (the latter meaning a container for data and its associated description). It presents a brief outline of HTML and XML and an example of an XML datument describing a property of aspirin. Also discussed briefly are datument validation, vocabularies, XML authoring and editing tools, and the datument as an information component of science. I know a little HTML and a little about XML, but this chapter was too brief and the datument descriptions too detailed for me to grasp. I understood the XML elements such as molecule and atomArray and their parent and child relationships (p. 91), but I couldn’t determine after several readings what the feature called “namespace” was (p. 92). For people already versed in XML this chapter may be useful, but for me it was far too condensed.
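For readers who, like the reviewer, stumble on “namespace”: a namespace is simply a URI bound to a set of element names, so that identically spelled tags from different vocabularies cannot collide. The fragment below is a simplified, invented sketch in the spirit of the chapter’s aspirin datument—the namespace URI is the one Chemical Markup Language uses, but the markup is not reproduced from the book.

```python
import xml.etree.ElementTree as ET

# A simplified CML-style fragment (invented for illustration): the
# xmlns attribute binds a namespace URI to these element names, so
# <molecule> here is unambiguously CML's molecule, not someone else's.
datument = """
<molecule xmlns="http://www.xml-cml.org/schema" id="aspirin">
  <atomArray>
    <atom elementType="C"/>
    <atom elementType="O"/>
  </atomArray>
</molecule>
"""

root = ET.fromstring(datument)
CML = "{http://www.xml-cml.org/schema}"  # ElementTree's namespace-prefix form
atoms = root.findall(CML + "atomArray/" + CML + "atom")
print(root.tag, len(atoms))
```

Note that once the namespace is declared, a parser reports the element’s full name as the URI plus the tag, which is why the search path must carry the namespace too.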
The chapters of Part 2—plus Chapter 4 (on style and usage) from Part 1—form the style manual portion of the book. The chapters treat grammar, style, numerals and units, chemical nomenclature, chemistry conventions, references, figures, tables, and chemical structures. This material has minor changes and revisions from the second edition. Among those changes are somewhat fewer ACS-specific details and a new recommendation not to hyphenate numeral-unit adjectives in phrases such as “20 mL sample.” These chapters are well presented, with examples of each general guideline or rule. The chapters specifically on chemistry are, to me, the reason for buying this book, because they convey a wealth of information that I consult regularly in my editing. I expect Part 2 will be just as useful to me in this edition as it was in the second edition.
Two points bothered me throughout the book. First, I have a strong interest in tables, and as I read, I couldn’t understand an apparent inconsistency in the capitalization of first words in table rows. For example, Tables 12-3, 12-4, and 12-5 use lower case in the stub column for such compounds as arabinose, cytosine, and tryptophan. Yet the stub in Table 13-1 has the name of each chemical element capitalized, despite directions on the previous page to treat the names of elements as common nouns. Chapter 16, which covers table preparation, has no recommendation on capitalizing the first word in a table row, although the example table (Fig. 16-1) shows caps on the first word in each line except the one denoting “control”.
Second, in Chapter 4 and the chapters of Part 2, an arrowhead icon is used to call attention to “rules”. This should be an aid to both browsing and reading. However, the editors give insufficient attention to distinguishing between actual rules and other varied types of information that carry this mark. For example, the first arrowhead in Chapter 9 marks the statement “The number of the subject can be obscured when one or more prepositional phrases come between the subject and the verb” (p. 105). True, but these words state a problem, not the rule for dealing with it. Another arrowhead points out that “Hyphenation of double surnames is discussed on p. 139” (p. 152). Further, some useful points are left unmarked, such as “Messages sent by electronic mail are considered personal communications and are referred to as such” (p. 316). I hope the next edition will make appropriate distinctions.
This edition aims to include more information about electronic information processing and publishing, but the chapters in Part 1 covering these topics are too basic to be helpful, to me at least. The style manual materials, however, remain as strong as ever. I’ll have this third edition on my shelf, and if you work in writing or editing chemistry, you’ll want it too. But most of my use—and I suspect yours also—will be in the pages of Part 2.
An online edition of the book is being planned by ACS, but is not yet available.
David E. Nadziejka ([email protected]) is the biomedical editor at the Van Andel Research Institute in Grand Rapids, MI, and an STC fellow. He has been a science and engineering editor for 25 years and has taught technical communication at the Institute of Paper Chemistry, Argonne National Laboratory, and Illinois Institute of Technology.
Zombies, you say?
What could be more appropriate for the pre-Hallowe’en season than a few links to the science of zombies? First, of course, you must understand how your enemy thinks if you hope to defeat—um… him? her? it? Harvard to the rescue: A Harvard Psychiatrist Explains Zombie Neurobiology (http://io9.com/5286145/a-harvard-psychiatrist-explains-zombie-neurobiology). Try using the phrase “ataxic neurodegenerative satiety deficiency syndrome” next time the conversation lags. Unfortunately, Canadian researchers are less optimistic that this will help: Science Ponders ‘Zombie Attack’ (http://news.bbc.co.uk/2/hi/science/nature/8206280.stm). Hey, if the Beeb says it, it must be true! Given the disagreement in the scientific community over our odds of survival, perhaps it’s time to rely on a more pragmatic approach. For that, turn to the survivalist literature:
- Rob Sacchetto’s The Zombie Handbook: How to Identify the Living Dead and Survive the Coming Zombie Apocalypse
- Max Brooks’ The Zombie Survival Guide: Complete Protection from the Living Dead
Best of luck! Hope to see you in the December issue.
Archive of interviews with famous physicists
The Niels Bohr Library and Archives (http://www.aip.org/history/nbl/) of the American Institute of Physics has the mission “to help preserve and make known the history of modern physics and allied sciences”. To support this endeavor, they’ve begun building a database of interviews, both written and taped, with important physicists (http://www.aip.org/history/ohilist/transcripts.html). One of the interesting things about the interviews is that they don’t just cover the science; as in the case of Mildred Allen (born in 1894), there’s a lot of human history and coverage of an astounding amount of social, scientific, and technological change. Although there are odd omissions, such as Richard Feynman and Albert Einstein, I assume that these gaps will be filled as time and resources permit.
“Science is not some monolithic entity. It is made up of thousands of individual scientists, all trying to advance their own knowledge and, at the same time, their own careers. To become established, to find a job in which to do science, each scientist has to do work that meets with the approval of the scientific community.”—Keith Devlin, The Wolfram Controversy
“The most important discoveries will provide answers to questions that we do not yet know how to ask and will concern objects we have not yet imagined.”—John N. Bahcall, astrophysicist (1935-2005)
“In all affairs it’s a healthy thing now and then to hang a question mark on the things you have long taken for granted.”—Bertrand Russell
“Nature teaches more than she preaches. There are no sermons in stones. It is easier to get a spark out of a stone than a moral.”—John Burroughs, naturalist and writer (1837-1921)
“Formerly, when religion was strong and science weak, men mistook magic for medicine; now, when science is strong and religion weak, men mistake medicine for magic.”—Thomas Szasz
“Science is built with facts as a house is with stones—but a collection of facts is no more a science than a heap of stones is a house.”—Jules Henri Poincaré (1854-1912)
“For a list of all the ways technology has failed to improve the quality of life, please press three.”—Alice Kahn
The Exchange is published four times per year on behalf of the Scientific Communication special interest group of the Society for Technical Communication (www.stcsig.org/sc/).
Copyright for material published in the Exchange belongs to the author. For permission to reprint an article, please contact the author. We welcome comments and letters to the editor.
Submissions: To submit an article, please contact the editor. By submitting an article, you grant a license to this newsletter to publish the article in print and online. In your cover letter, please indicate whether the article has been published elsewhere, and confirm that you hold copyright to the text.
Editor and Publisher of the Exchange newsletter:
Geoff Hart ([email protected])
Scientific Communication SIG Manager:
Kathie Gorski ([email protected])
SciCom SIG Webmasters: