by Diane Evans (diane_evans@hotmail.com, diane_evans@merck.com) and C. Shannon Brown (csbrown2@gmail.com)
Previously submitted to fulfill the requirements of ENG6460 (Studies in Digital Media: Genre and Web Text Evaluation) at Utah State University
The atmosphere in the Acme Products conference room was somber as the project manager began the meeting: “I just don’t understand. Our company has been in business for over 100 years. We hired the best Web developers available, spent a fortune on usability tests, advertised the roll-out of our new site, and know that people are visiting. But, they only seem to spend a few minutes at a time, and very few of them purchase anything or even send us an email for more information. What is wrong?”
The team’s technical writer stepped in, “Let’s look at what the Web site says, not just what it does. If we look closely at both our site’s content and our expectations for the site, we should be able to find the answer.”
This company has just discovered a major shortcoming in Web site usability tests. Although usability tests can reveal many defects in a site’s architecture or navigation, the real difficulty this site is having is in generating interest or action. The primary response the site gets from visitors is indifference—or worse! Thus, the site content’s biggest problem has nothing to do with usability; the content is just not up to its task. It is not saying the right things in the right way to do what we want the site to do.
In a fundamental sense, the site is failing as communication. Its failure to connect with readers is not unique to the Web; newsletters and radio spots face the same difficulty in reaching their audiences. Although the Web offers unique possibilities and limitations as a medium, the basic elements of writing involve the same groups of people or organizations: the audience, the authors, and the stakeholders. Each of these groups has a set of expectations; if those expectations diverge, problems will result in any medium.
Fixing this site consists of identifying each of these groups and determining what they expect from the site:
- What groups of people is each part of the site expected to reach? What do these people expect from the site?
- What do the site’s stakeholders expect and assume about the site, its users, and the subject material?
- What does the site author expect the different parts of the site to do (e.g., sell, inform, entertain)?
Usability tests will not help us answer these questions. Our analysis requires different tools: careful reading of the site and meticulous investigation of what the company really, really wants to accomplish with the site.
This article will explain how to get from this starting point (identifying problems in content communication on the Web) to a finished site content evaluation. We will detail a standard way to address whether site content says the right things in the right way.
Where to start
Where and how you start evaluating makes a difference in what the evaluation will find. If you start at the top levels of the site, for example, and work to find specific pages deeper within the site, the evaluation will reveal navigational and architectural problems: the evaluation is concentrating on the act of getting from the outside in. As we’ve already pointed out, however, the problem with this company’s site involves something else, the effectiveness of the site’s components in generating interest or prompting action. Where is this failure taking place? Where are the readers and visitors not becoming interested? Where are they not taking action?
Actually, the visitors’ decisions about this site are based in large part on their own expectations and goals. If, for example, a visitor is interested in mountain biking, a page about hiking in the Rocky Mountains might not get much reaction, regardless of how well it’s written. Our evaluation, then, needs to start where our visitors’ interests and expectations should intersect with what our site delivers.
Often, a Web site attempts to meet several different visitor expectations and interests. Some of our visitors come to buy product X; others want information about our board of directors. Our evaluation should take this variety of visitor interests and expectations into account. We have to find where our site reflects the broadest possible range of interests, at the level of individual pages within the site, beginning at the bottom of the site hierarchy, where visitors figure out whether our site has anything specifically for them.
Clarity of purpose
After isolating a single page or set of related pages to evaluate, we must determine why that area of the site even exists in the first place. We might have to dig to find this purpose, since it may be ill-defined or vague. If so, the page’s writing will reflect this lack of purpose. A lack of clarity is one possible reason why the site is failing to generate interest or action.
Even if the purpose is clearly articulated and understood, poor execution might muddy the page’s clarity of purpose. The evaluator’s task regarding a page’s purpose, then, is two-fold: to find a clear purpose for the page and judge how well the page’s execution maintains this clarity of purpose.
Some pages present a straightforward purpose/evaluation scenario. A weather.com page for a local forecast, for instance, has a clear purpose: to show a weather forecast for my region. If we want to evaluate how well this particular page does its job, we can measure success rates in a controlled usability study.
Evaluating other pages may not be so easy. A page for a museum exhibit, for example, might have several purposes at the same time. One of its goals is to inform, another to persuade. The end result in many cases would be attendance: the page or group of pages is trying to persuade page visitors to become museum visitors. In other cases, though, the page will make an impression on readers without convincing them to come to the museum. This might be an acceptable outcome for some of the museum site stakeholders. This page’s purpose/evaluation scenario involves assigning the appropriate weights to different purposes and evaluating the page’s content compared to these purposes.
Regardless of its complexity, a Web page’s purpose is the key to gauging its overall effectiveness. At the end of our evaluation, it is what we measure against. We have to know what a page or site is trying to achieve before we can judge how well that page does its job.
Purpose is not the only consideration, however. Nor is getting to this purpose as easy as asking the page owner’s opinion. In fact, digging for a clear purpose can make the page more complicated than it seemed at first. Imagine our museum exhibit page, for example. Obviously the page has at least one clear purpose—to get page visitors to become exhibit visitors. At the same time, if the museum exhibit has corporate sponsors, these parties have an interest in the page beyond whether it attracts visitors. At the very least, these sponsors want their role acknowledged. At most, they might have an interest in how the page’s presentation of the exhibit reflects on their participation. Part of the page’s purpose in this case is to promote the exhibit in a way that reflects well on the sponsors. In fact, any stakeholder in the exhibit will have some unique requirement of the page’s content. Any evaluation of purpose will need to take these stakeholders into account.
Acme Products Site: purpose evaluation — The Acme Products company selected a set of pages to evaluate—pages describing the company’s products. The evaluators discovered several purposes on each page—a single page described each product and offered purchasing information for only the product described. The committee recommended the addition of a single page for descriptive information that would include a comparison chart for the different products. A separate “shopping cart” should also be included, allowing a visitor to purchase multiple items and separating the description from the selling.
Audience
As the previous section makes clear, an audience of end readers is not the only group to consider when evaluating a page’s effectiveness. But end readers are crucial to the success of the page. Someone within the site’s ownership can tell you who they expect to be interested in the page’s subject matter. The evaluation should not, however, stop at simply identifying the audience. Evaluators must explain what this audience expects from the page and what visitors want and need to see in order for the page to meet these expectations.
Part of visitor expectation comes from who these individual visitors are—their histories, preferences, and desires all play a role in what they’re looking for. But visitors also have expectations due to the page’s environment and appearance. Both types of visitor expectation play a part in other media as well. The expectations visitors have from the page’s environment are especially relevant for a Web page. The relative newness of Web media, coupled with the new organizational possibilities that hyper-linking provides, makes material on the Web difficult for site visitors to predict. Meeting visitor expectations in this regard is crucial to getting visitors to engage with the page.
Contrasting expectations on the Web with media outside of the Web illustrates how a Web page can be unique. In television, for example, a viewer can usually predict what will happen next. At 8:00 PM, a certain TV program will begin. The first minute or two will tell you what the program is called, who is starring in the show, and something about the show. Later on, the program will be interrupted by commercials. Each commercial is likely to be 30 seconds or 60 seconds in length; after a number of these commercials, your program will begin where it ended before the commercial.
On a Web site, however, it’s difficult to know what to expect. Even if the home page tells you who owns the site and what the site is about, there is no way to know for sure what will happen when you click on a link. You may go to a commercial, another topic, or even a different Web site. You may not be able to find your way back to where you left off before you clicked.
If we take this contrast between the Web and TV further, we can see just how unique the Web is. While watching our 8PM show, imagine we know not just that we’re watching a TV program but also that we’re watching a crime drama. Now we have even more specific expectations. We won’t be expecting a laugh track, for instance, and we’ll get very confused if we hear one. Likewise, we expect certain kinds of characters in a crime drama: police detectives, suspects, potential witnesses, and victims and their families are expected to be present and important to the storyline. The detective’s friends or neighbors are not expected to be as important in this kind of program as they might be in a sitcom, for example.
Back in the Web world, we still have expectations as viewers or readers. Our “programs” change more quickly, however, and in different ways. Navigating a single site can take you through several different kinds of pages with dramatically different purposes in a matter of minutes. For example, if you visit Amazon.com, it doesn’t take long to go from a front page filled with menus, through search result pages, to pages for specific books, to a shopping cart, a form completion page, and finally a confirmation page. Just as a TV network, like PBS or NBC, can have a variety of different kinds of shows, Amazon.com has a variety of different pages with unique roles. And just as television viewers have different expectations of varied TV show types, so do site visitors have different expectations of varied Web page types. To take these myriad viewer expectations into account, site evaluators must look closely at the site at the level of individual pages. The site’s navigational architecture helps visitors get around the site, but individual pages are the elements that speak to specific reader or visitor needs. Part of what makes a page speak to a reader is its ability to meet reader expectations.
Visitors are also individuals, though, with individual levels of expertise and interest in the subject matter, individual assumptions and desires regarding the page, and individual attitudes. Evaluators must find out not only who these people are but also what they want and expect. The page owners can identify who they think visits their site, but they might not have an idea of audiences at the page level. Even after identifying visitors at the page level, however, evaluators still need to translate these identities into the specifics of what these visitors bring to the page and what they expect out of it.
Let’s imagine, for example, we identify our page visitors as experts in quilting. We can say that these visitors expect this page’s writing to use certain words, such as “batting” and “log cabin” without including a definition of each term. These visitors don’t need the same amount of background information that non-experts would need. But do we know if these experts are coming to our page to buy something, to get information, for entertainment, or for some other reason our page owners aren’t aware of? Evaluators have to figure this out by analyzing the information at their disposal about the visitors and the page’s environment. If our page doesn’t support what our visitors want, then our visitors will not have much use for it.
Acme Products Site: audience evaluation — From a close reading of pages meant to describe the company’s products, our evaluators discovered too many company-specific terms included in the first paragraph of each page. With these terms, seldom heard outside of a sales meeting, used so quickly on these pages, it was no surprise that a site visitor would quit reading by the second sentence. Although a separate glossary had been included on the site upon recommendation of an earlier usability study, the glossary was only available from the home page and required several clicks to access from the product pages.
The evaluators also discovered that the audience was not who the site owners expected. Because the average shopper in the store was 40 to 50 years old, the company expected the audience to be the same age. But, after one person wrote on his blog what a wonderful product the site was selling, a large number of 19- to 30-year-olds followed his link to check it out. These were also the audience members least likely to make a purchase due to the large amount of reading required on each page. Acme Products’ site was wasting free publicity!
Stakeholders
Stakeholder expectations have an impact on a page’s success as well. Your page may be a hit with visitors but a failure with stakeholders, especially if stakeholders think their interests have not been taken into account. The process for taking stakeholder expectations into account is the same as the one we use for visitor expectations: find out who these people are and what they want from the page.
The dynamics of dealing with stakeholder expectations differ from those of dealing with visitor expectations. Stakeholders are best served by being involved in the design and execution of the page. Documenting stakeholder interests before writing begins reminds both writers and stakeholders of the expectations that exist for the site; it keeps each page’s purpose clear instead of letting the author lose focus during construction or letting stakeholders make endless new demands.
Documenting stakeholder interests and expectations after the fact can also help authors and evaluators. If, in fact, the existing page completely neglects stakeholder expectations, the evaluators may find themselves back at the beginning of development, starting from scratch to include these interests.
Acme Products Site: stakeholder evaluation — Our evaluators discovered that no written documentation existed for stakeholder expectations. The only instruction given the authors was, “Make it look like our catalog.” Every Monday, the authors met with a small group of stakeholders (not the whole group) to display screenshots of work accomplished the previous week. The evaluators discovered that no salespeople had been included in these meetings because they were too busy to attend. Hence, with very little face-to-face input and no written record of their interests, the sales team ended up with virtually no say in the final site.
Authors
The authors of a Web site are generally independent from both the audience and the stakeholders. These Web developers may include multiple programmers, writers, and editors. The authors must know what the expectations of the stakeholders are, as well as how best to present this information to an audience.
If the stakeholders have not provided written expectations, and if the authors do not understand the audience, the site may suffer as a result. A clear requirements document that captures stakeholder expectations, along with an audience analysis, goes a long way toward ensuring the authors’ success.
Acme Products Site: author evaluation — The actual authors of the Acme Products Web site were a group of Web developers hired just for creating the site. Although talented in Web design, they knew little about the products being sold and simply included the catalog description for each item.
With no clear requirements document, the authors depended on the Acme CEO for guidance. Little thought had been given to the needs of other stakeholders. Because an audience analysis had not been conducted, the authors assumed that all Web users would be similar to the CEO.
Generalizing the evaluation: from pages to the site
Perhaps you have the time and energy to give a detailed evaluation for every single page in your site. Of course, the site owners expect this level of attention to the parts of their site, and you feel it is part of your job to cover all the bases. Nonetheless, the site requires not just diagnosis but a real remedy. And this remedy comes much more quickly and effectively if the evaluation reveals patterns of problems within the site rather than the problems of each page individually. Fortunately, evaluators have clues to these patterns in the site’s structure, which they should verify and explain as part of their final report.
Imagine we’ve decided to do a detailed written evaluation of each and every page in the site. In the process of covering all this ground, evaluators will find commonalities in their evaluations. All the pages within a certain section might exhibit, for example, variations on a common miscalculation regarding visitor expectations. A general overview of the site’s pages will show patterns of errors across certain levels of the site. Evaluators can find these patterns without producing a detailed written evaluation of each page. But this does not mean that evaluators pay less attention to individual pages. Rather, evaluators analyze pages and site structure closely in order to verify that these patterns are real and happen in exactly the way described in the evaluation. Hence, evaluators do pay attention to each individual page—they just approach their evaluations of different pages in different ways.
Evaluations always start at specific pages. As explained earlier, this page should be in the site’s interior. After evaluating a page, we should look again at this page’s relationship to the rest of the site. How do visitors get to it? Is it part of a family of similar pages all at the same level in the site’s hierarchy? If so, is it similar to other pages in this family in all but a few respects? The page could be located in an area of common interests—overviews of art history, for example—with each page representing a more specific interest—an overview of impressionism. If this page’s problem is its clarity of purpose (to suggest just one possibility), the other pages may have the same problem. Check it out. You may have just discovered a systematic approach to making the site better. This gives the site owners a powerful plan of attack, much more compelling than addressing the site page by page.
Acme Products Site: generalizing the evaluation — The evaluators discovered that, once three clicks deep in the site, the quality and accuracy of the writing began to deteriorate. Most pages at this level showed a lack of close editorial control; for example, “doe snot” appeared in several places instead of “does not”. Likewise, several product descriptions included too much information about product use and not enough about why the product should be purchased. Finally, all product pages showed a lack of consideration for window- and comparison-shopping visitors: prices were not visible until the visitor created an account and viewed the shopping cart, making comparison shopping difficult and requiring browsing visitors to reveal information about themselves just to get a price.
Conclusion
Will this evaluation help the Web site do a better job? Not yet—the site still needs to be rewritten, perhaps even redesigned. But the point of the evaluation was to do the preparation work the writers should have done before the site’s design and execution. This preparation is crucial to make the different parts of the site meet their purposes. Otherwise, the writers and designers are just guessing at what will work.
Obviously, this kind of evaluation is more difficult after the fact. But it’s still the right approach to take for a Web site that is failing to communicate. In other words, if a site is not connecting with visitors—if it’s getting visitors to the right pages but failing to interest them or persuade them to act—then site owners need to look closely at what their different pages are supposed to do and whether the writing on those pages is up to the task. Only after the evaluation identifies the purpose, audience, and stakeholders, and determines the expectations of the page’s visitors, stakeholders, and writers, can anyone judge what constitutes effective writing for a Web site.
Diane Evans (diane_evans@merck.com, diane_evans@hotmail.com), a software quality engineer working in a biotech, lives near Seattle, Washington with her husband and menagerie. Her current project is spoiling her one and a half grandchildren. C. Shannon Brown (csbrown2@gmail.com) designs, writes, and edits marketing and technical content for Web, print, and presentations at Audiovox Accessories Corporation in Indianapolis, Indiana. He holds an M.S. in English (Technical Writing) from Utah State University.
Editorial: annual conference report
by Geoff Hart (ghart@videotron.ca)
As I did last year, I’ll be using this space to report on the sessions I attended at this year’s annual conference in Philadelphia. Not all were directly relevant to scientific communication, but with a few twists of perspective, I found that all the speakers had something useful to say about how we do our work. You can obtain copies of the speaker handouts at the conference Web site.
Keynote speech: Howard Rheingold
Howard Rheingold (hlr@well.com, http://www.rheingold.com/) started his career back in the days when the Xerox Palo Alto Research Center (PARC) was still working on the first word processor, leading to his book Tools for Thought. Since then, he’s been a kind of knowledge butterfly, flitting among the more attractive intellectual flowers of our age and doing a lot of thinking about them, while simultaneously cross-pollinating a great many ideas.
One of his recent enthusiasms began with observations of Finnish teens and their cell phones; apparently, the Finnish word for cell phone is the diminutive form of the word hand, which says something interesting about the value they place on their phones and how close they keep them at all times. Rheingold noted that “We are human because we use communication to do things together in new ways.” He speculated that this ability to use communication to organize ourselves is what helped our primitive forebears survive when other hominids died out. Listening to his description of how these kids were constantly networked reminded me of the buzz among computer scientists about the notion of “pervasive computing”, in which computers will be embedded in everyday objects and found everywhere in the world around us. Possibly these folk should get out of the lab and keep an eye on the real world, since in many ways, we’re already there. Cell phones may not yet be the world’s largest pervasive computing network, but with most new phones now permitting Web browsing, they’re clearly version 1.0 of that world. Consider a device like Apple’s iPhone, which combines an iPod (something that seems as firmly attached to teens like my children as any of the devices used by the Borg in Star Trek), a small computer (including a nifty Web browser), and a decent cell phone with (as of the latest release) GPS capability—and all of it integrated seamlessly with their desktop Macs now that Apple is transitioning from their Mac.com service, which already permits sharing of calendars and other information, to MobileMe. Other competitors aren’t quite there yet, but they’ll catch up eventually.
One of many interesting things about this sea change in how we’re using technology is just how much it changes about how our world works. For instance, kids in Finland and Japan tend to flock together, drawn to interesting happenings by means of text messaging transmitted over their phones. This is a fairly innocent, if still intriguing, social phenomenon, but it also has both darker and more practical sides. The darker side is what science fiction author Larry Niven referred to as a “flash crowd” in his 1973 story of the same name: cheap communication technology combined with essentially instantaneous transportation led to the “flash” emergence of crowds of people, often leading to riots and looting. The practical side is a sudden ability of otherwise powerless people to self-organize in a great hurry. Examples include the mass demonstrations against Philippines president Joseph Estrada in 2001, similar political demonstrations in Korea and Spain, and the Muslim protests against the infamous Mohammed cartoons in Denmark. This phenomenon and related thoughts led Rheingold to write Smart Mobs. I’ve seen this sudden appearance of communities described as a “collective emergent response”: something that emerges, often without any prior knowledge, from the collectivity.
There’s been an interesting sequence of communication revolutions over time. Probably the first preservation of collective memory in fixed form, preserved across time and space rather than purely as oral history, would have been cuneiform writing on clay tablets. Subsequent development of paper and ink improved the ease of the process, but probably did not change it much beyond that, and the development of the printing press had a similar effect: a quantitative change (vastly improved reproduction speed) rather than anything truly qualitative. In all these cases, fixing information in tangible form remained the province of experts. But now, widespread literacy combined with the creation of the Internet has produced an unimaginable acceleration of this process: not only has publishing become open to anyone with access to a computer (even if only via a public library’s shared terminals), but those same people are now distributing their creations to ever larger global audiences, accompanied by huge amounts of collaboration on and reworking of the information via blogging, wikis, Facebook, Del.icio.us, and others. All of these trends have facilitated knowledge sharing and collaboration, giving rise to the modern technological and social explosions, which have been accelerating faster in recent decades than they had in all previous centuries. In pondering this, I found it interesting and ironic that at the same time this has been happening, phenomena such as cell phone culture and blogs are once again reinvigorating the old notion of oral history and reinventing how we communicate.
Many companies and groups have been taking advantage of this paradigm shift, and nowhere more obviously to technical communicators than in the concept of “open source” software. IBM, for instance, has created an “open commons”, in which they have released many of their patents into the public domain to spur innovation and create a market for their open-source products, such as Linux. In so doing, they turned a potentially serious competitor for their main software business into a lucrative source of consulting and services income. Open-source software is all about harnessing the power of communities and the information they’re eager to share; the successes of Linux and the Mozilla Foundation, developers of the free Firefox Web browser and Thunderbird e-mail software, are familiar examples. Sites such as eBay work because of their “reputation management system”, which helps buyers find vendors they can trust and vendors find buyers by demonstrating their trustworthiness. Wikipedia, which combines peer creation and review of knowledge with open access to that knowledge, is perhaps the most familiar example to technical communicators, but there are other examples. ThinkCycle, for example, uses the open collaboration concept to harness the minds of people all over the world to solve thorny design challenges. Many other endeavors are using “borrowed” computing time from computers all over the world to perform calculations that would be impossible for any one agency: science-related examples include SETI@home, a project to analyze interstellar radio signals in the hope of discovering other civilizations, and the protein folding project, a tool for analyzing protein structures and speeding up the development of new drugs while improving our understanding of crucial biological processes.
There are several key characteristics that permit communities to form and collaborate. The tools must be easy to use, enable connections, be open to all, promote collaboration, be self-instructing (easy to learn), and leverage the power of self-interest. The last point is particularly interesting because, as in economic theory, selfish (self-interested) individuals working together can accomplish great things for everyone. In this context, you may be interested in “the cooperation project”, which is designed to encourage interdisciplinary study about cooperation and collective action. Technical communicators will be familiar with this concept in the guise of Web 2.0. Michael Wesch of Kansas State University has explicated this brilliantly in a 5-minute video (http://www.youtube.com/watch?v=NLlGopyXT_g) that shows how things are changing; for all the hype, Web 2.0 really does represent something new and exciting.
All this ferment has also led to what is commonly known as a “creative commons”, most familiar in the form of the group of the same name. One goal of this group is to provide information creators with a way to provide more nuanced access to their information than is permitted by conventional copyright, thereby facilitating collaboration and conversation and co-creation using an author’s materials. A direct example of how this can affect scientific communicators is discussed at some length in a recent Scientific American article on “Science 2.0”. If you have any experience in this area, and particularly in how it affects scientific communication, please drop me a line to discuss the possibility of writing about your work for this newsletter.
All of these trends will develop increasing importance for us in years to come. Communication is changing at a phenomenal rate, and although our traditional means of communication remain important and valid tools, clinging too tightly to them will stop us from taking advantage of many new possibilities. If you’re currently taking advantage of any of these trends to improve your scientific communication, drop me a line; I’m sure most readers of this newsletter will be interested to learn what you’re doing.
Distance education
Science has always been about global communication, and for those of us who must communicate complex science to the public, the notion of distance education should be quite familiar. However, I attended this session to get a sense for how educational technology is changing, since many of us are also educators and may have things we can learn from the academics in our midst.
David Lumerman and Robert Krull (krullr@rpi.edu) discussed various aspects of their experiences with distance education in the form of a case study, revealing both the challenges and the promise of these methods. One common theme was that despite advances in the technology, the human aspects of the communication were still most important, and any strategy that neglects these aspects will fail or have greatly compromised effectiveness. Though most of us still communicate in static form (print, online help, Web pages), this will gradually or perhaps even suddenly change, as noted by Howard Rheingold during his keynote. It will also change as our roles transform from one-way information delivery to more interactive roles, in which we have much to learn from what educators are doing. Distance educators are already grappling with these changes.
Many distance education programs involve a mixture of online and in-person learning, often with both proceeding simultaneously. Technological limitations (e.g., excessive transmission delays, too-slow site responses) and a lack of adequate interactivity were the biggest problems reported by the presenters (each accounting for 33% of the total); latency (delays) was a particular problem for audio and video, with occasional delays of 10 seconds or even longer. Other problems included difficulty in achieving effective interactions (15%) and human networking (13%), and in dynamically defining learning roles (6%). The best learning experiences were achieved when everyone participated, but it was difficult to manage “handoffs” (taking turns speaking) and to juggle different streams of information (audio, video, chat) simultaneously. Using graduate students as moderators and facilitators during lectures helped keep the interactions on track, but did not entirely resolve these problems. For example, remote students could click a button to sound a chime that would notify everyone that they wanted to say something, but this was not particularly effective (the chime was often missed). Students preferred face-to-face interactions and conference calls over videoconferencing, preferred phone calls over chat software, and preferred chat software over whiteboard software, though this may have resulted from limitations in the available technology rather than inherent problems in the approach. The study also revealed the importance of testing technology carefully under realistic conditions to ensure that it performs as advertised.
In this study, teachers tended to be most strongly concerned with reliable delivery of information, whereas students were more concerned with their ability to control the multiple channels of information they were receiving and engage in peer-to-peer learning. A majority of the on-campus students (mostly grad students in Krull’s case study) tended to be engaged in peer learning during class, whereas in previous reports, variable but often large proportions of the students weren’t paying close attention to the class, being distracted by the ability to engage in other online activities (e.g., Twitter, music downloads). Distance students were more tolerant of problems than on-campus students, perhaps because they had no alternatives, but interestingly, a significant number of on-campus students chose to take the course online rather than in person, perhaps because doing so allowed them to multitask and accomplish other activities. Both groups favored synchronous interaction with their peers over asynchronous interaction.
Jennifer Cote (jennifer_cote@credence.com) and Mariann Foster (mariann_foster@credence.com) discussed how their company, which produces quality control equipment used by engineers, transitioned from classroom-based learning to online learning. Since they initially had no experience with this form of instruction, they chose a contractor to produce their learning management system and create the final instructional materials, but they retained control over the actual course content because of their expertise in creating this material. To acquire some expertise, both took William Horton’s course “E-learning by design”. One problem with the initial approach that they developed was a lack of incremental reviews; rather than approving information at several stages (e.g., storyboard, prototype), they only reviewed the final product, leading to considerable amounts of rework when the contractor did not successfully interpret their needs. Once they had been through the process with their contractor and began to feel comfortable with the technology, they gradually began taking over more of the production themselves. Over time they developed various useful heuristics for their lessons. For example: learning objectives + test that those objectives were attained + test of what was absorbed + “do” (actually perform the activities, which sometimes was equivalent to the test part of the heuristic). One useful rule of thumb they adopted based on an uncited Elearn.com article: “Keep lessons no longer than a sitcom.” They also noted the importance of keeping lessons interactive, because without interaction, you might as well just send someone a PDF of the information you want them to learn. This fits with what I’ve read about adult learning, in which engagement can be significantly increased through interaction even when, unlike in the case study by Lumerman and Krull, the interaction is not with humans.
In developing lessons, they found storyboarding techniques particularly useful. A typical storyboard involved combining screen captures with narration text in a Word document, thereby showing how the two related. In my own work, I began with this technique but discovered a more effective alternative that combines the information in DreamWeaver to create a working prototype that is easy to revise and republish before committing efforts to something more final, like a Flash presentation. I’ve found that this makes the actual images and interactions clearer than is possible with a static storyboard, particularly when you’re dealing with managers and subject-matter experts (SMEs) who aren’t good visual thinkers. A big advantage of storyboarding, whatever approach you use, is that it provides a quick way to perform stakeholder reviews, including reviews by SMEs, before actually committing time and effort to produce a working prototype. One cool trick they discovered was to use text-to-speech software (now built into most operating systems) to read the narration, since this quickly produces a usable prototype of the narration; the problem with recording actual voices during the early design stages is that narration takes a long time to do right, and changes in the script would force lesson developers to re-record the narration with each change—a poor use of their time. Rather than hiring professional actors to do the voiceovers, they used their own colleagues, and found that students appreciated the resulting diversity of voices. I’ve used this approach successfully too, and it illustrates that you can achieve surprisingly good results without the time and expense of hiring professionals.
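As an aside for readers who want to experiment with the text-to-speech trick, here is a minimal sketch of how a narration prototype might be scripted. It assumes the third-party pyttsx3 Python library (a wrapper around the voices built into most operating systems) and hypothetical file names; the presenters did not name any particular tool or workflow.

    import pyttsx3  # third-party library wrapping the operating system's built-in voices

    def prototype_narration(script_path, audio_path):
        """Synthesize a draft narration script into an audio file so the lesson
        can be reviewed and revised before recording any human voices."""
        with open(script_path, encoding="utf-8") as f:
            script = f.read()
        engine = pyttsx3.init()
        engine.setProperty("rate", 150)          # slow the default speaking rate slightly
        engine.save_to_file(script, audio_path)  # queue the synthesized narration
        engine.runAndWait()                      # block until the audio file is written

    if __name__ == "__main__":
        # Hypothetical file names; the output format depends on the platform's speech engine
        prototype_narration("lesson01_narration.txt", "lesson01_narration.wav")

When the script changes, regenerating the prototype takes seconds rather than a re-recording session, which is exactly the advantage the presenters described.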
How scientists communicate
I often find that when you know something far too well to have any critical distance from it, listening to someone else discuss it provides many interesting insights. In this session, Joseph Harmon (harmon@cmt.anl.gov) of the Argonne National Laboratory, coauthor of Communicating Science: The Scientific Article from the 17th Century to the Present, presented a summary of the trends he and his coauthor observed during an intensive study of how scientific communication has changed over the past 400 years. He reported three clear trends and an emerging trend.
The first trend involves the increased use of visuals: 88% of modern journal papers contain at least one graphic, versus only 39% in the 17th century, probably due to the technical difficulty of creating such materials during the early history of science writing and the current ease of doing so. During this time, visuals have evolved from illustrations with varying degrees of photorealism (the dominant form in the 17th century) to data-driven graphics. For example, as late as the 19th century, Harmon found only a single Cartesian graph in his sample of the research literature, versus 60% of modern papers; data tables, being easier to create, were more common than graphs initially, but still only appeared in 10% of papers, versus 50% of papers today. (And these figures seem low based on the papers I edit, probably due to the different populations being sampled.) Modern graphs have also become increasingly complex, often with multiple graphs in a single figure, juxtaposed to facilitate comparisons of the simultaneous trends in several parameters. In the 21st century, graphics are increasingly moving online, as “online supplemental material”, where they can make lavish use of color (still difficult and expensive to use for printed matter) and include sound files, video, and interactive tools such as modeling software and databases. Amidst these changes, I noted the evolution from first-person, anecdotal evidence provided by nominally credible authorities in a field to an increasingly heavy reliance on quantitative data and replication of results. Lost in this evolution is much of the qualitative information that is often equally important, but more difficult to “sell” to journal reviewers.
The second trend is the emergence of English as the international language of science, accompanied by certain changes in writing style. From the 17th century to the present, there has been a continuing evolution from active to passive voice, from personal references to the excision of any personal component in the writing, from anecdotal and qualitative data to replicated and quantitative data, from broad statements to increasingly qualified (hedged) statements, from long and dense sentences to shorter sentences with fewer clauses, and from simple descriptive phrases to complex compound adjectives. Nominalization (using verbs as nouns) and the creation of complex acronyms and abbreviations have also greatly increased in frequency. Formerly poetic and visually descriptive phrases have been largely lost from the literature. What interests me about these changes is how they have been inspired by attempts to make scientific writing focus on objective science rather than subjective impressions, ignoring the highly subjective, often irrational (highly emotional) personalities of the real scientists who produce this information. In many cases, the joy of reading the material has been lost, and the resulting materials become of interest only to scientists, irrespective of the inherent interest of the topic. For those of us who must transform science into popular science, this is a problem; it can be very difficult to persuade authors to abandon these hard-learned habits and adopt a style that will communicate effectively with a new audience, particularly since this form of communication is often devalued by the scientific community.
The third trend is that the schema for a scientific document has become increasingly standardized, with increasing use of headings (versus the scrawled marginalia of the 17th century), integration of graphics and tables directly within the text (versus the “plates” at the end of a paper or the center of a book that were traditionally used in the 17th century), greatly increased use of literature citations (often dozens per paper today, rather than a few scrawled marginalia in the 17th century), and insertion of equations within the text but on separate lines for ease of reading (rather than inline within paragraphs). Perhaps more significantly, the rhetorical structure has changed from an almost folksy description of the author’s voyage of personal discovery to the rigidly structured AIMRDR schema (Abstract, Introduction, Methods and Materials, Results, Discussion, and References). These sections, in turn, have their own schemata. For example, the Introduction usually follows a pattern of describing the research domain, framing the research problem to be solved, and proposing a possible solution to be tested by research. This schema was adopted by only 60% of papers in the 17th century, versus more than 85% today. In contrast, the Conclusion section must fulfill the promises made in the Introduction by presenting answers to the questions raised at the start of the paper, while also discussing the wider significance of these results and calling for specific future research; though only 15% of papers have all three components, 60% have at least one of them. (Again, this seems low in my experience.) The Methods and Materials section has its own schema: preparation for the experiment, details of the experimental procedures, and generation and analysis of the data. The goal of this section is to provide a “warrant” for the Results (i.e., to justify their validity). The Results and Discussion sections, which are often combined, specifically present the results of the study (often visually or by means of tables), then attempt to explain the meaning of these results, often supported by citing results from other studies, and provide any necessary qualifications of the results (uncertainties, future research, etc.). Interestingly, though the Abstract is now a key component of all journal articles, and often read instead of the full paper (a guilty admission of most scientists), it is a relatively recent innovation (possibly as late as the 20th century); it presents the overall article in microcosm.
Journal articles are increasingly evolving onto the Web. Although the current form of the article itself may be identical to the printed form, length is no longer a restriction, so potentially huge amounts of supplemental supporting material may be provided. Visual and auditory information are increasingly available, as are interactive tools such as modeling software and databases such as the Encyclopedia of Life and many specialized genetics databases. And as my mention of Science 2.0 earlier in this article reveals, we are only seeing the beginnings of this evolution.
Interestingly, although the unified modern journal paper schema is highly efficient (see, for example, The scientific method: technical communicators learning from scientists), it sacrifices some of the pleasure of reading that came with the variety of older texts to achieve this efficiency. Some of this loss arises from the focus on abstract science rather than concrete human endeavor. Some of this arises from the modern peer review process, which began in the late 1700s when John Hill publicly criticized the many ludicrous research findings that were making their way into the research literature, controlled only by the diligence (and personal biases) of the publishers. Today, truly rigorous peer review provides a much higher degree of quality control, though the human fallibilities (prejudice, competition for research funding, personal animosities) have by no means been completely eliminated.
Pictures and profits
In this presentation, designer Patrick Hofmann (patrick@designph.com) presented examples of how he has redesigned information to take advantage of the power of visual communication. One of the more interesting things about Hofmann’s approach is not so much the graphics, but rather how he manages to consistently think outside the box, something all of us should strive for. (The title of his presentation refers to how much money he has been able to save for clients through his designs; here, I’ll focus on the design strategies rather than the financials.)
In one example, Hofmann was responsible for developing training aids for a company that produced laser projection machines for leather cutters. The handheld control for these machines was a typical engineering nightmare, requiring complex combinations of button presses with a three-button control, and errors in learning and applying these shortcuts were expensive in terms of lost time and wasted leather. The factory workers in this environment were recent immigrants from multiple ethnic backgrounds, with weak English skills and few words and phrases in common, suggesting the need for visual aids. Although strongly discouraged by his employer from visiting the actual users of the product, he nonetheless obtained permission to visit the factory and observed something crucial: that most of the workers had created their own visual aids to help them remember the correct combinations. By observing these aids and how they were used in the workplace, he was able to design a comparable work aid that made the work much easier and more effective for all the workers. In scientific communication, we might follow much the same approach by studying how scientists use equipment in the laboratory or the field.
In another example, Hofmann redesigned instructions for Sprint Canada’s new telephony services, many of which were specifically intended to be sold to recent immigrants. Here, the goal was to eliminate as many translation costs as possible, and Sprint was willing to test their many audiences to confirm that the results were successful. Rather than hiring test participants via a recruiter, Hofmann discovered that test participants could be hired much more cheaply (by an order of magnitude) from a temp agency such as Kelly Services. Such agencies offer a very powerful tool for usability testing: they maintain detailed records on all of their employees, including their education, skills and skill levels (e.g., computer experience), ethnicity, language skills, and so on, thereby allowing testers to be as specific as they want in recruiting test participants. By testing various combinations of purely abstract (pictures) to purely literal (text) instructions, Hofmann arrived at a combination of words and pictures that communicated effectively (an 85% success rate in task completion tests), allowing them to reduce their languages to two (English and French, Canada’s two official languages). This is precisely the kind of inexpensive approach we could use to test the effectiveness of scientific technology transfer.
In a third example, Hofmann set out to help Hewlett-Packard reduce their documentation costs for an installation guide. This effort occurred during the redesign of a computer terminal, with the goal of developing a wordless setup guide, much like what Ikea provides for their products. Problems with the traditional approach included the need to localize 200-page manuals into more than 16 languages, with obviously huge translation costs and many less-obvious costs, such as the need to create a separate part number for each manual and maintain and manage inventories of these manuals. To assist in the redesign of both the product and the manual, they brought in their inexpensive video cameras from home and recorded details of how engineers performed the setup, including obtaining documentary evidence of how dangerous some aspects of the installation were (e.g., sharp edges of parts caused blood loss in some instances), and how frustrating others were (e.g., audible cursing during the assembly of difficult parts). The video made a strong case to management for modifying the design, such as covering sharp parts and labeling other parts directly so that the label information could be excluded from the documentation. One interesting insight was that although some linguistic groups read information from left to right and others from right to left, all groups read from top to bottom on a page and from top page to bottom page; binding the manuals at the top edge thus neatly eliminated the problem of having two different sets of manuals for the two different reading patterns. Again, this example illustrates how observing real users of a product and how they use the product can provide important design insights.
Hofmann provided a few additional useful tricks. Using small text boxes (2 × 3 inches) in storyboards is a useful tool for forcing yourself to create concise descriptions. In some cases, and particularly for abstract concepts, words are more effective than visuals, or must be added to visuals for clarity. As Jakob Nielsen has noted, any feedback is better than none, even if that feedback can only be provided by your officemates or family members—or even yourself. (This is a particularly useful tactic when, as happens distressingly often, employers forbid their technical communicators to contact users of their product.) As Hofmann’s examples illustrate, guerrilla tactics such as field visits to users are a powerful way to gain insights, often for not much money. When it’s not possible to arrange these visits, sometimes thinking laterally reveals the solution. In one case where it was necessary to test products with a Chinese audience, there was no budget to travel to China to conduct tests, but enough of a budget to purchase inexpensive webcams that could be couriered to each test participant. This solved the problem nicely; combining the video feeds with chat software provided an elegant solution to not being able to be physically present during testing.
Information visualization
In this session, Phylise Banner (pbanner@skidmore.edu) provided a philosophical take on how we humans process visual information, with an emphasis on teaching us how to think visually and think about visual thinking rather than providing predigested design solutions. (She introduced her presentation by recommending the book Visual Intelligence, by Ann Marie Seward Barry, which nicely complements what she was about to discuss.) This is an approach I strongly favor, since I share her belief that it’s better to learn how to think through a problem than to memorize rote responses that often have limited applicability.
Visualization is the process of transforming observations into communication, even though the communication is inherently fictional; after all, ink on paper or dots on a computer screen are not the same thing as the object they portray. The process of communicating visually is complicated by what viewers bring to the dialogue: they not only perceive the dots, but also infer information about how and why a visual was created and what the designer was attempting to communicate through their design.
Visual perception is tightly related to how the brain processes visual signals from the eyes. Some features of this processing are consistent both between and within cultures. For example, if you draw any closed, curved shape, then add a circle inside the shape and near one of the borders, then attach a triangle to the outside of that shape, adjacent to the circle, it is nearly impossible to create an image that doesn’t resemble a bird—even though it’s unlikely that any real bird looks anything like what you’ve drawn. Similarly, if you draw two horizontal lines side by side (- -), add a vertical line below them and between the two horizontal lines ( | ), and then add a third horizontal line ( – ) below the vertical line, then enclose this image within any shape, it’s essentially impossible to produce a design that doesn’t resemble a human face. However, our interpretation of many other images is strongly shaped by our history (education, experience) and our cultural interpretations of certain images. For example, different cultures assign different meanings to the same color; white is the color of death in China, but the color of purity and innocence in Western culture. Those who indulge in cross-cultural communication need to be keenly aware of the risks of using symbols without a deep understanding of the other culture.
Visual perception is always composed of three factors: a distal stimulus (what you’re looking at), a proximal stimulus (how that image is detected by your eyes), and a percept (what you imagine those sensory signals to represent). When we see something, we attempt to match our internal representation of the image to the image’s context, and in so doing, use our prior experience to help us assign meaning to the image. This process involves classification, an attempt to discover order or patterns in nature. An important role of design is to facilitate this process by using familiar visual symbols in such a way that the matching process becomes easier. Maps are also ways to link internal knowledge with the outside environment. In that context, I’ve always liked Richard Saul Wurman’s treatment of the word map as an acronym; I paraphrase his explanation as “Making Able to Perceive”.
Because every graphic is an interpretation of reality, successful communication requires that the designer and the viewer share enough knowledge to establish a connection between their world views. (This is why “art appreciation” courses exist: our culture is sadly impoverished in visual literacy, and the education these courses provide enables even naïve viewers to understand something of what an artist was trying to say or achieve.) An often neglected component of visual information is the emotion it is intended to evoke (and sometimes emotions that are unintentionally evoked). As technical communicators, constrained by the modern Western scientific mindset, we tend to forget about this and in so doing, fail to take advantage of the power of affect (emotional response) in a well-crafted visual. Some additional insights can be gained from the book Imagination and the Meaningful Brain by Arnold Modell.
Unclogging brain bandwidth by reducing cognitive load
In this session, Jane Bozarth (info@bozarthzone.com) discussed how information can be presented in such a way as to avoid overloading the recipient’s ability to receive, process, and understand it (i.e., their “brain bandwidth”). Overload occurs when too much information is presented in too little space or time. To avoid this problem, a useful design trick is to identify the 20% of the information that is most important and eliminate the remaining 80%. (The actual numbers are less important than their relative magnitudes.) The goal is to reduce the “cognitive load”, a term that can be translated simplistically as how hard the brain must work to deal with incoming information; heavier loads represent a more difficult task.
Humans have two main sensory channels for receiving information from the environment: an auditory-verbal channel and a visual-pictorial channel. Both channels have a limited capacity, and a powerful design strategy involves dividing information between the two channels rather than overloading a single channel. Consider, for example, how difficult it is to read a book while someone is talking to you: both streams of information compete for limited space in the auditory-verbal channel. Contrast this with how easy it is to examine a technical drawing while someone explains what you should be looking at and why it is relevant: the oral information enters the brain via one channel while the visual information enters via another, thereby avoiding competition for the limited bandwidth. This is why, for example, reading lengthy Powerpoint slides to your audience is less effective than simplifying the slides and letting your voice carry most of the content: the information is divided between two channels rather than crammed into one. An additional complication is that we tend to read faster than people talk; in addition to the two forms of words (written and spoken) vying for space in the same auditory-verbal channel, the lack of synchronization between the two complicates the task of processing the information. I’ve used this knowledge successfully in my own presentations by producing short bullet points that take only a second or two to read; by the time I’ve had a sip of water or taken a deep breath, my audience has finished reading and is ready to pay attention to what I have to say, their minds already primed by the bullet point.
To understand how cognitive load affects communication, it’s helpful to distinguish between working memory (sometimes referred to as representational or short-term memory) and long-term memory. Working memory is where you hold information while you work to understand and respond to it, whereas long-term memory is where you store pre-existing information and is the source of connections between old memories and the new information being held in working memory; once those connections are established, the information can be transferred to long-term memory, where it becomes permanently available. The goal of design is to help your audience make this transfer. High cognitive loads are a problem because they can overload working memory, leading to a loss of information (just like overfilling a cup of coffee) and leaving too little time to process the information and move it into long-term memory.
Many cognitive processes interfere with the process of receiving and processing information. One of the better-known examples is referred to as the split-attention effect: when you are forced to divide your attention between two sources of information, you can’t devote your full attention to either. This can be a relatively simple problem, such as when you must glance back and forth between printed documentation and the computer screen or when you must constantly refer back to a key or legend to understand a graphic, or something considerably more complex and dangerous, such as talking on your cell phone while driving. In both cases, we have only limited attention, and being forced to divide it among too many streams of information simultaneously can compromise the communication. This is why, for instance, Powerpoint presentations with sound tracks, multiple graphical animations, text-heavy slides with animated text fly-ins, and the speaker’s voice become impossible to comprehend: there are too many signals vying for too little attention.
A famous design dictum is that the design is complete when there is nothing left to remove; I’m familiar with this from Antoine de Saint-Exupéry, who observed that “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.” Edward Tufte is another well-known advocate of visual minimalism. Diagrams are an interesting example of efficient communication, since the process of abstraction (preserving only the key details) both reduces the cognitive load (fewer distracting details to ignore) and takes advantage of the powerful visual processing channel.
Another problem relates to what Bozarth named compellingness: when something is so distracting that it incessantly draws your attention, you focus on the distraction rather than the information you should really be focusing on. Apparently, a major cause of airplane accidents on the ground is that pilots become so distracted by all their instrument displays that they forget to actually look out the window and watch where they’re going. (Having recently purchased a new Toyota Prius, I can testify that this is a real effect: the urge to keep an eye on the fuel-consumption display is nearly irresistible, even after several weeks with the car.)
Cognitive loads can be intrinsic, and related to the difficulty of the concept being communicated; we have no control over this, other than through our efforts to simplify and clarify. Loads can also be extrinsic, and related to extraneous details; we have considerable control over this aspect of a design. Loads can also be what Bozarth described as germane, meaning that they relate to the relevance of a communication effort to the audience and thus to their motivation to pay attention; we have some control over this because we can design simple, attractive communications that are directly relevant to the audience’s needs and desires, and can engage them by turning dictation into dialogue (e.g., by creating a game). This can pose problems when designing a communication for both beginners and experts; the two audiences have different needs and desires, and thus require different communication strategies (e.g., simple, unchallenging exercises for beginners versus complex, demanding exercises for experts). Various strategies exist for meeting both needs in documentation (e.g., providing “more info.” buttons and optional complex problems for the experts), but in the classroom, it’s hard to strike the right balance; sometimes all you can do is to try enlisting the experts to mentor the beginners.
Conceptual diagrams for science communication
In this presentation, Joanna Woerner (jwoerner@umces.edu) and Caroline Wicks (caroline.wicks@noaa.gov) discussed their efforts to create conceptual diagrams that use symbols such as icons to present the essential details of a concept. Their goal is to synthesize and abstract complex information using the power of graphics to simplify the communication. They distinguished between conceptual diagrams and cartoons (which rely on humor and context), box-and-arrow drawings (e.g., flowcharts), and data graphics. One goal of creating conceptual diagrams is to help clarify thought processes, and thereby improve understanding, identify gaps in a body of knowledge, generate ideas, reveal priorities, identify key elements, and help groups synthesize information along the way to reaching a consensus. This can work, in part, because the same information may be presented in mutually complementary ways, as when text and graphics combine to make the meaning of an image clearer. Because of the simplification process, conceptual diagrams are a useful way to bridge the gap between scientists and the general public, thereby creating a shared vision.
Symbols can be used to represent both tangibles (e.g., aquatic organisms) and intangibles (e.g., flows, processes). Because visual symbols are inherently equivocal, standardizing them goes a long way toward ensuring accurate communication: as with the letters we use in written language, symbols communicate very efficiently once they have been learned. To support this goal, the speakers are part of a group striving to create a standard symbol and image library that others can use in their own conceptual diagrams. The collection currently comprises more than 1500 icons (soon to reach a total of 3000) and more than 2000 photos, and everyone is welcome to contribute new images.
As part of the presentation, Woerner and Wicks provided examples of nested diagrams, such as smaller, more detailed images called out from a larger image that provides less detail but more context (sometimes referred to as blowouts), and sequential flows that show spatial or temporal changes. They also presented examples of hybrid diagrams that layer data of various forms on top of a conceptual diagram. To make the process of diagram creation more concrete, they led us through two class exercises in which volunteers were asked to play a version of Pictionary that they call “Conceptionary” with the people sitting at their table. The goal was to create crude prototype diagrams that could communicate often-subtle scientific concepts, such as acid rain causing the death of trees or the bleaching of coral reefs. Most volunteers successfully communicated their concept to the rest of the table in less than 2 minutes, even though no one admitted to being a professional artist, which revealed the power of this technique to create understanding and consensus in real time among the participants in a meeting. Even I succeeded, and my lack of artistic skills is legendary!
Knowledge transfer between academics and practitioners
Knowledge transfer (often called “technology transfer” in the sciences) is of keen interest to readers of this newsletter, as bridging the gap between scientists and the general public is an increasingly important part of our work. Thus, we have much to learn from any other group that faces the same challenge.
Joel Kline (jkline@lvc.edu) presented the results of his study of communication between university professors (academics) and technical communication practitioners in New Zealand. New Zealand has a two-level university system, in which research universities focus on academic work and train graduate students, whereas polytechnic institutes train practitioners. As in North America, many professors of technical communication have never actually practiced their trade, and therefore don’t adequately understand the nature of the work and the challenges that practitioners face. On the flip side of the coin, practitioners don’t appreciate the work done by academics because they often see it as too detached from the real world and irrelevant to their concerns. This is a great shame because, as I’ve always believed and as Kline confirms, both groups benefit greatly when they make the effort to discuss their mutual concerns and learn from each other. From the academic side of the divide, the problem is that there appears to be no unified model of knowledge transfer in technical communication, and possibly no clear perception of the benefit of engaging in it; from the practitioner side, there’s no clear return on the time investment required to listen to the academics, and also no clear model for how this dialogue should occur. As a result, each community is asking research questions that don’t interest the other community. Kline refers to this as the WhoGARA problem: “who gives a rat’s ass?”
Interactions between academics and practitioners come in various forms: face to face, via publications, and online. In my own experience, all of these forms fail for a variety of reasons. Face-to-face meetings most often arise in learning environments, such as when a practitioner attends university to obtain a degree, or at conferences. But relatively few practitioners seek a degree, even though salary surveys show that this can improve their future earning potential, and you’ll often see a single conference become two conferences in which academics talk to academics, practitioners talk to practitioners, and neither group crosses the floor to talk to the other because their perceived interests don’t overlap. Publications are another obvious common ground, except for the “common” part of that name: academics see little value in publications such as Intercom because the theoretical sophistication is usually low and there is no career-related incentive (e.g., tenure) to publish in such venues; practitioners, on the other hand, recognize the potential value of articles that appear in peer-reviewed journals, but simply lack the time to extract that value from papers that are often forbidding, theoretically dense, and turgidly written. These stereotypes raise a nearly impenetrable barrier between the two communities. Last but not least, there are the virtual communities established online. These suffer from all of the above problems, plus problems unique to the medium, combined with a lack of time to sort through the often-heavy traffic in such forums. All three modes of interaction suffer from a form of learned contempt for the other community: as in the feud between the Montagues and the Capulets, the original reasons for the feud may be long forgotten, but the hostility persists.
Those who responded to Kline’s survey of New Zealand practitioners provided some interesting insights into the nature of the problem. They did not favor any specific form of virtual interaction with academics, though e-mail and discussion groups received the highest ratings (used by 30% and 25% of practitioners, respectively). Peer-reviewed journals were an unpopular source of information: Technical Communication received the highest rating, but not a truly high one (only 28% had “ever” read a paper in the journal), and other journals received even less attention. Though low, this figure is not much lower than that for the most highly rated non-peer-reviewed publication, Intercom (only 33%). Clearly, publications are not a high priority for practitioners, and academic resources (the teachers and their journals) were rated lowest of all possible sources of information; they received the lowest “useful or very useful” rating and the highest “never or rarely useful” rating. Practitioners rated colleagues and seminars (including conferences and 1-day workshops) as their most useful sources of information. Professional associations were seen as another valuable source of knowledge. Colleagues, seminars, and associations collectively accounted for nearly 60% of the sources of knowledge cited. Despite these pessimistic results, practitioners acknowledged—at least in theory—the value of contacts with the academy, but most felt a significant disconnect between the two worlds. Echoing my own experience, they generally did not believe that creating an online community to facilitate this dialogue would be effective. As a result, Kline notes that “we cannot simply provide a technological channel between the communities and expect it to work”.
I attended Kline’s presentation because I’ve been doing knowledge transfer work for nearly 20 years, and because understanding this work is an interest near and dear to my heart. For scientific communicators, many of whom engage in knowledge transfer between audiences separated by a gap as wide as that between the academics and practitioners in Kline’s study, the lessons of this study are clear. First and foremost, steps must be taken to break down the barriers that separate the two communities: communication cannot happen if neither party can hear the other’s voice, or is willing to listen when they do hear it. In scientific knowledge transfer, as in technical communication, the technical communicator’s role must become that of mediator or translator between the two communities, helping each to understand the other and helping to find ways of making each party’s message audible and comprehensible. Various possibilities suggest themselves to me:
- Academics should be rewarded equally for theoretical and practical research, particularly when the practical research occurs in the workplace, in close cooperation with practitioners.
- Practitioners should be given strong incentives to join the academic world through opportunities such as adjunct professorships and joint research projects.
- Papers published in peer-reviewed journals should include a section entitled “implementation” or “practical considerations” that could make practitioners more interested in reading the papers; if the implications of the research are clear, more practitioners will be willing to invest the time to read the full paper.
- Conversely, “popular” publications such as Intercom should consider routinely publishing the abstracts or “implications” sections of relevant journal articles, giving practitioners a reason to consult the journal to learn more.
Simple steps such as these would be a strong start towards bridging the gap between academics and practitioners, whether in scientific or technical communication.
Closing keynote speech: Richard Saul Wurman
I’ve been a fan of Richard Saul Wurman (rsw@wurman.com) ever since I stumbled across a copy of Information Anxiety, which ignited my passion for information design. Though now in his 70s, Wurman appears as energetic as ever, and every bit as syncretic; in a long, entertaining, rambling presentation, he flitted between concepts like a bee visiting flowers to collect pollen, creating innumerable useful cross-pollinations along the way. (Indeed, I’m not surprised to have found his speech much like his books: nuggets of important information floating in a sea of fascinating distractions.) It’s hard to unite these disparate thoughts into a coherent narrative, since much of what he said was almost a “greatest hits” collection of bon mots and followed his own observation that “nobody has ever wanted anything I’ve ever done”; Wurman does it anyway because it interests him, and in satisfying his own curiosity, he’s produced more than 80 books that proved very interesting indeed to a great many readers. Rather than trying to assemble his talk into a narrative, I’ll simply present a few of the thoughts that struck home as I listened:
- Using round-trip translations between two languages, whether via Google or Babelfish, reveals a key problem: we’re probably not communicating as clearly across cultures as we think. Though the problems of machine translation are well known, it’s less well known that similar problems afflict communication between humans.
- We’re taught to focus on successes rather than investigating failures. As scientists know well, the failures are often more revealing. Similarly, asking questions (questioning conventional wisdom) is often more insightful than answering questions (perpetuating conventional wisdom).
- Wurman’s book Follow the Yellow Brick Road may be of particular interest to technical communicators, as it deals with giving and receiving instructions.
- Learning involves remembering something that interests you; to improve learning, we must therefore find ways to make our information interesting and relevant.
- Important innovations are not always immediately obvious; Wurman noted that pagination of books was not invented for nearly a century after Gutenberg printed his first Bible.
- Considering things from unconventional angles produces stunning conceptual breakthroughs. For instance, we like to think of ourselves as individuals, but based solely on the number of microorganisms living in and on our bodies (many of which are essential for our survival), it would be more accurate to think of ourselves as a zoo.
- Focusing on how things are used can produce design breakthroughs. For example, most road atlases are presented in alphabetical order, by state or province, to facilitate looking things up, and in scales determined by the size of the pages (i.e., large and small areas are both presented using the same page size) rather than based on user-centered considerations. Yet we don’t drive in alphabetical order, and presenting each map in this manner makes it unnecessarily difficult to drive across map borders. In his own road atlas of the United States, Wurman solved these problems by breaking the entire country into a series of overlapping maps with consistent scales. He also placed the table of contents on the book cover in the form of a map, with each region labeled using the corresponding page of the book. This innovation has since been adopted in a great many city atlases.
- Something scientists must never forget: assigning numbers to something does not always assign meaning to the numbers.
- Jokes work by subverting our expectations, and insights and breakthroughs work in much the same way.
- We can often define things more clearly by examining their opposites. For example, consider the differences between copyright, the right to copy, and copyleft.
Wurman concluded with an overview of his “19.20.21 project”, which is designed to explore 19 cities around the world that will have more than 20 million people in the 21st century.
Book review: Rhetoric in(to) science: style as invention in inquiry
Previously published in Technical Communication 53(1):98-99. February 2006.
Brodie Graves, H. 2005. Rhetoric in(to) science: style as invention in inquiry. Hampton Press, Cresskill, N.J. 284 p., including bibliographical references and indexes. [ISBN 1-57273-535-X.]
by Jackie Damrau (jdamrau3@airmail.net)
The main nugget of Heather Brodie Graves’ Rhetoric in(to) science comes in a subheading: “Research is a talent; writing is a skill” (p. 241). She does an admirable job of correlating the key points of rhetoric with scientific writing through her observations on laboratory research. She proves that technical communicators do not need to interpret the actual scientific findings but should have enough knowledge to ask intelligent questions.
Brodie Graves began her study of rhetoric in science by “focusing on how they [scientists] used language to conceptualize, understand, and develop a coherent explanation for their experimental data that would reveal the ways in which rhetoric is part of this interpretive process” (p. 2). She spent laboratory time listening to conversations with physicists and the principal writer to document how the rhetorical style works in scientific inventions.
Rhetoric in(to) science provides a scientific overview relating to physics and semiconductors; reviews the theory of rhetorical invention found in the philosophies of Aristotle, Francis Bacon, Robert Boyle, and Joseph Priestley; and then discusses the different ways that analogy, metaphor, and metonymy affect scientific rhetorical writing. Brodie Graves says that she wants to “show how these elements… also have a wider function and contribute to how we make sense of the unknown” (p. 20).
Analogy is the first element of style that Brodie Graves uses to show how scientists perceive new information in terms of complexity, yet have to simplify that information when preparing journal articles for an audience that may be unfamiliar with their specific area. Scientists then begin using metonymy, by which they are “translating an abstract theory into an extended metaphor, or developing an analogy to represent a complicated process” (p. 30). The analogies cited in Rhetoric in(to) science are “used for predictive purposes” (p. 102).
Metaphors help in “mapping a concrete object from a familiar (or source) domain onto a less well-known concept from an abstract (or target) domain” (p. 146). Brodie Graves cites two types of metaphors discovered during her research: “one based on the assumption that metaphor creates a different kind of meaning than literal language, and the other that sees metaphor as a fundamental conceptual system that is central to human thought” (p. 153). Scientists quoted in the book use conceptual metaphors by referring to animate or inanimate parts of nature.
Brodie Graves cites Kristina Rolin’s definition of metonymy as “a linguistic structure where one metaphorical substitution is displaced by another metaphorical substitution of a word” (p. 181). Three types of metonymies exist: sign, reference, and concept. A sign metonymy brings words such as electron together with concepts; thus, an electron is referred to as “the entity that orbits the nucleus of an atom” (p. 208). Referential metonymies link a concept to a thing or event within a specific scientific model or structure, whereas conceptual metonymy relates two or more concepts in some specific way.
Brodie Graves does a superb job of excerpting her transcriptions that show the principal writer working through the results of experiments. She says, “When writing is used as a mode of thinking, style takes on greater importance because it facilitates the representation of thought and, with it, insight” (p. 238). Her view, though, is true of any technical communication. The use of rhetoric can help us to relay the technical to the less technically minded audience.
Jackie Damrau (jdamrau3@airmail.net) has more than 20 years of technical communication experience. She is a fellow and member of the STC Lone Star community and two SIGs. Jackie also serves on the Leadership Community Resource Committee and the Nominating Committee.
Book review: Scientific style and format: the CSE manual for authors, editors, and publishers
Style Manual Committee, Council of Science Editors. 2006. 7th ed. Reston, VA: Council of Science Editors. [ISBN 978-0-9779665-0-9. 658 pages, including index.]
by Geoff Hart (ghart@videotron.ca)
Previously published in Technical Communication 54(1):119-121.
Editors accumulate style guides the way others accumulate back issues of National Geographic because despite heroic efforts by style guide authors, no guide can possibly cover everything. For example, no style guide includes a usage guide as comprehensive as Garner’s Modern American Usage (Oxford University Press, 2003; reviewed in the February 2005 issue of Technical Communication). That being the case, those who embark on the perilous task of creating a subject-specific style guide must focus on the essentials and leave generalities to more general works. To respect that constraint, we reviewers of style guides must temper our disappointment when a guide fails to include everything we’re hoping for.
The long-awaited 7th edition of the Council of Science Editors style guide valiantly attempts to cover the essentials in 32 chapters. Here’s a selective overview: Part I covers publishing (elements of scientific publications, policies and practices, copyright), Part II covers general style conventions (symbol use, word formation, prose style, abbreviations, units of measure), Part III covers 12 specific genres (including chemistry, physics, medicine, genetics, and geology), and Part IV covers publication elements (journal styles, references, tables, figures, manuscript preparation).
At 658 pages, the guide clearly cannot be all things to all people. Recognizing this, the authors have provided extensive supplementary references in each chapter, including the authorities behind their recommendations, plus an 11-page general bibliography with logical subheadings and a significant list of Internet-based resources. There are omissions, such as a comprehensive catalog of journal style guidelines (for example, www.akademisyen.com/author), but they’re forgivable; no index of Internet resources can ever hope to be comprehensive or fully up to date.
A primary goal of the authors was to gather information from myriad sources in a single place, both as a resource and to encourage convergence among the disparate style guides for each scientific discipline. Another goal was to simplify complex rules that cover too many exceptions, as illustrated by the thorough, logical discussion of all things numerical in Chapter 12. For example, the authors provide the welcome recommendation to use numerals for all numbers representing measurements or counts instead of inconsistently using words for single-digit numbers. There’s even a decent overview of statistics that would benefit many scientists I’ve worked with, not just editors. Here and elsewhere, sound justification is provided for recommendations so readers can understand style choices instead of memorizing seemingly arbitrary rules. (Rules are easier to remember and apply in daily practice when they make sense.) However, science is complex; here, as in many other chapters, you’ll need to have at least a basic background in science to grasp certain recommendations. A similar caveat applies to the comprehensive discussion of taxonomy and nomenclature in Chapter 22: you won’t become a taxonomist, but if you’ve previously grappled with this complex discipline, you’ll learn many things you never fully grasped before, including puzzling inconsistencies between fields.
It isn’t possible to do justice to a book this large in a brief review, so I’ve chosen to illustrate the guide’s strengths and weaknesses through a representative example: Chapter 21 (“Genes, chromosomes, and related molecules”). I would have focused on my own specialties—ecology, environmental biology, and plant physiology—but there are no such guidelines. The authors clearly state that they set out to emphasize common themes rather than providing a guide for every branch of science. This is an acceptable choice for a general reference, but a high level of unnecessary detail in some areas leads to seemingly arbitrary omissions such as my fields of study.
Chapter 21 describes gene nomenclature well enough that I wish I’d seen it before grappling with this material on the job; I’ll save considerable time in future edits of genetics manuscripts. However, it also digresses into an explanation of how to determine whether you’ve really discovered a new gene; every genetics writer and editor needs to know the nomenclature, but the latter is in no way a matter of style and comes at the expense of a clear discussion of more relevant resources such as GenBank and BLAST searches. The lengthy discussion of nomenclatural conventions and detailed tables for major model species from yeasts to mice is, to be sure, a valuable resource for these species. But basic editorial concepts are missing. The explanation of sequence lengths (measured in base pairs, bp) is clear and thorough but doesn’t discuss whether long segments should be measured in kbp (a common choice consistent with bp) or kb (the authors’ shorter preference) or how to use nt for nucleotide positions (nt before a number represents a position, but after a number it represents a sequence length). The description of transformation vectors is too short, and there’s no discussion of frequently misused genetic terms such as expression, transcription, and translation. These kinds of bread-and-butter omissions are symptomatic of a recurring lack of focus.
Did the authors choose the right essentials to cover? Often, they did not. The subject-specific guides (Part III) provide a wealth of information applicable across many scientific fields. The basics are clear and concise, and accompanied by copious literature citations for those who need more details. However, help in several key subject areas simply isn’t present.
Specific sciences were omitted because of the emphasis on general applicability and space constraints (the book is set in legible but uncomfortably small type to keep its size manageable). But several chapters whose topics are covered better by unabridged dictionaries, style guides such as Chicago, and guidelines to authors available on publishers’ Web sites also should have been omitted to make room for issues unique to science. For example, there’s little use for chapters on punctuation (Chapter 5, though the list of specialized uses for punctuation marks should be preserved), spelling and word formation (Chapter 6, though the list of prefixes that don’t require hyphens is useful and should be enhanced by expanding the too-brief list of prefixes that do require hyphens on p. 75), grammar (Chapter 7), capitalization (Chapter 9), typography (Chapter 31), and correcting proofs (Chapter 32).
The list of frequently confused words is great to see but could be expanded by eliminating the incomplete discussion of irregular plurals (for example, discussion of octopi–octopuses–octopodes is missing—but perhaps that’s why we have dictionaries). Similarly, it’s a questionable choice to devote 85 pages to literature citations (Chapter 29) when science publishers are famously idiosyncratic in their approach. (Indeed, every publisher seems to take it as a personal challenge to develop unique permutations of the possible ways to arrange and punctuate references.) Though it’s laudable to gather this detailed information in one place to help authors understand what is necessary and why, most publishers cover literature citations adequately in a fraction of the space in their online style guides.
This redundant coverage leads to an occasional lack of focus on style issues and to important omissions. For example, Chapter 7 (“Prose style and word choice”) does not explain appropriate use of active and passive voice—a concern if you write or edit for journals. At a minimum, it should include basic guidelines, possibly accompanied by a survey of journal preferences to illustrate how prose style is changing. Indeed, no mention is made of “voice” or “active voice” whatsoever in that chapter—nor anywhere else as far as I can tell from the index or the tables of contents. At nearly 30 pages, the index is clear and well-laid out but suffers from insufficient use of keywords and synonyms, making it difficult to find topics without browsing. If you’re willing to browse, a table of contents for each chapter can help, but the 32+ pages occupied by these tables of contents would have been more usefully allocated to an expanded index.
A good review should be critical, so don’t take my discussion of the book’s flaws as damning. Scientific style and format remains a valuable addition to any science writer or editor’s library. It does not eliminate the need for specialized references such as university-level textbooks, but fills a large gap in the existing scientific references. However, its own significant gaps in coverage, the inclusion of information that’s not relevant to writers or editors, and redundant overlaps with more comprehensive general guides should be addressed in preparing the next edition.
Explaining the need for donors is also scientific communication
Montreal-area technical communicator Emru Townsend was recently diagnosed with leukemia, and has been struggling to defeat this often-fatal disease ever since, showing an astounding mixture of humor and courage.
Emru and his friends and family have also made heroic efforts to get the word out about the need to register potential donors for bone marrow transplants, particularly from non-white ethnic groups, which are badly underrepresented in the national donor registries of most countries.
If you’d like to learn more, and particularly if you’d like to learn about the painless process of registering—and potentially donating your bone marrow if you’re fortunate enough to match someone who needs a donor—visit Emru’s Web site.
Got announcements?
Here at the newsletter, we’re always eager to include information submitted by our readers. So if you’ve got anything to announce, something to brag about, or something you think other readers might be interested in, please send it along. We’ll be happy to find space for you.
Parting thoughts
“Popular, palatable views of the world and how it came to be do not constitute science or truth. But decent science education requires that we share the truth we find—whether or not we like it.”—Lynn Margulis
“If you’re not part of the solution, you’re part of the precipitate.”—Steven Wright
“The human mind treats a new idea the same way the body treats a strange protein; it rejects it.”—Peter B. Medawar, scientist, Nobel laureate (1915-1987)
“In the search for new laws, you always have the psychological excitement of feeling that possibly nobody has yet thought of the crazy possibility you are looking at right now.”—Richard Feynman
“… the history of technology has never been a mere offshoot of the history of science. In fact, science plays a less important role… than economics and politics. The very notion of science as an activity distinct from technology or philosophy is a recent one; the term scientist is a nineteenth-century invention… But by the late 1800s, scientific research was itself becoming industrialized. Research laboratories were idea factories, organizing large groups of scientists to create knowledge that satisfied the needs of corporate sponsors. In other words, science became a branch of technology, instead of the other way around.”—Alex Soojung-Kim Pang, Iron, Coal, Burgers, and Beer
“Thinking in terms of systems, rather than individual artifacts, also helps us to see that technology is always susceptible to influence by its users. While this is important for understanding its past, it is absolutely critical for shaping its future… This, ultimately, is the strongest reason for taking technology as seriously as science or culture: not just because it influences us, but because we can influence it.”—Alex Soojung-Kim Pang, Iron, Coal, Burgers, and Beer
“Whereas the Enlightenment profited largely from the disposition of a very powerful descriptive tool, that of matters of fact—which were excellent for debunking quite a lot of beliefs, powers, and illusions—it found itself totally disarmed once matters of fact, in turn, were eaten up by the same debunking impetus [criticism and social construction]. After that, the lights of the Enlightenment were slowly turned off, and some sort of darkness appears to have fallen on campuses. My question is thus: Can we devise another descriptive tool that deals this time with matters of concern and whose import will no longer be to debunk but to protect and to care? Is it really possible to transform the critical urge to an ethos that adds reality to matters of fact and does not subtract from it?”—Bruno Latour, Why has critique run out of steam?
Contact and copyright information
The Exchange is published on behalf of the Scientific Communication special interest group of the Society for Technical Communication. Material in the Exchange can be reprinted without permission if credit is given to the author and a copy of the reprint is sent to the editor. Please send comments, letters, and articles to the editor.
Editor and Publisher of the Exchange newsletter:
Geoff Hart (ghart@videotron.ca)
Scientific Communication SIG Manager:
Kathie Gorski (kgorski@execpc.com)
SciCom SIG Webmasters:
Matt Hunt (matthew.hunt@acm.org) and Scott Hughes (RaySHughes@Eaton.com)
© 2008, Society for Technical Communication (901 North Stuart St., Suite 904, Arlington, Virginia 22203-1822 U.S.A., 703-522-4114, 703-522-2075 fax, www.stc.org).