Technical Writing: Argument-Writing Issues
T. R. Girill
Society for Technical Communication/Lawrence Livermore National Lab.
The three lead literacy standards of the Common Core (Standards 1
through 3, each with half a dozen lettered subparts) stake out the
three broad genres that literate science students should be able
to write: informative/explanatory text (standard 2), narratives
(stories, standard 3), and arguments (standard 1). Of these three,
arguments are perhaps the most ambiguous. What counts as success
with argument literacy? The answer greatly affects student learning
goals and hence the skills they must master to achieve those goals.
Argument as Debate
One interpretation of CCSS Standard 1, playing off its reference
to "claims and counterclaims," is that argument means DEBATE, a
structured verbal discussion (of a proposition) in turn modeled
after parliamentary dispute or courtroom litigation.
This debate interpretation was featured, for example, in the
September 2013 issue (Vol. 80, No. 5) of NSTA's The Science
Teacher. One article in that issue talks about "teaching historical
scientific controversies," a second discusses "defending scientific
arguments" through support or rebuttal, while a third suggests
showing video debates on scientific topics (such as global warming)
and having students then practice debating similar topics. There
even exist online resources, such as argumentinterchange.org,
designed to map public-policy debates conducted on blogs or
other online forums.
Viewing argument as debate, however, tends to politicize rather than
clarify technical issues. It also has two PEDAGOGICAL weaknesses:
(1) Casting arguments as debates stresses public SPEAKING skills
rather than the writing skills that are the main thrust of the
Common Core literacy standards.
(2) Debate ultimately involves a scored competition with winners
and losers, rather than a thoughtful, ongoing analysis of evidence
old and new.
Argument as Logic
A second interpretation of CCSS Standard 1, based on its mention of
"relevant data and evidence" along with "discipline-appropriate form,"
is that argument here means reasoning, which ranges from the formal
validity of proofs (e.g., in geometry) to the qualitative assessment
of research sources and analogies. While mathematical logic and
rigorous statistics await most science students only in college, their
proxies turn up much earlier, in many aspects of writing about science.
As science fair projects make explicit, students most need to
write effective arguments when defending their methodological
plans and when analyzing their own results.
When students explain their methods in lab reports or in the more
formal documentation for a science fair project, their most common
mistake is to omit key steps or crucial enabling details (how much,
how long). But another common weakness is failure to provide
EVIDENCE for (to argue for) the conclusion "my experimental design
is adequate." Such evidence could include:
(1) An inventory of relevant VARIABLES and the imposed controls on
all but the experimental variable,
(2) An inventory of relevant RISKS and the precautions taken to
manage those risks,
(3) A discussion of SAMPLE SIZE (and, where relevant, subject
selection and randomization) sufficient for statistical significance.
Without some version of this adequacy argument, a methods section is
not doing its full job. Providing a checklist (or yes-no binary
rubric) that itemizes such possible evidence can help unsure students
learn HOW to construct this good-methods argument.
Merely describing one's results (data recorded or observations made)
is not enough. Readers expect (and science fair projects are rated
on) the (hopefully interesting) INFERENCES that each student draws
from their results. Are their data points examples of (or counter-
examples of) one or more general principles or laws? Do the data
show trends (e.g., in plant growth or pollution levels) and what do
those trends imply? If there are unexpected or negative results,
what do they imply about the assumptions made during the work?
Are there errors or omissions that explain (entail) those unexpected
results? "Infer," "imply," and "entail" all signal that arguments
are being made--which is why this section is sometimes called
"conclusions." (These are usually probabilistic rather than
deductive arguments, but no less important because of that.)
Method/Effect Distinction Still Applies
The Gates Foundation regards such science-argument crafting as
important enough to be one focus of its "Literacy Design
Collaborative," through which (middle-school) teachers share
sample exercises. Here is one example:
Should animals be kept in zoos?
After reading the informative texts provided,
write an essay [= an argument] that addresses
the question and support your position with
evidence from the texts.
Just as with teaching how to write instructions and descriptions
(CCSS Standard 2), however, distinguishing between method and
effect is crucial when teaching HOW to write arguments. Many students
cannot tackle the Gates exercise, nor learn skills from it, until you
expose (e.g., using the kind of checklist/scaffold suggested in the
Methods Section above) the specific TECHNIQUES by which skilled
scientists argue such issues.