Beginning Our Study of Multimodal Composing

Written by Lanette Jimerson
September 22, 2011

We began our work in October 2010, when we held our initial meeting in Davis, California. After reviewing our charge from NWP and the MacArthur Foundation, we started with what seemed to be a very simple problem: How do we talk about the array of multimodal products that authors are creating today? How do we create a shared language for talking about what is valuable in digital videos, VoiceThreads, Word docs, websites, Twitter streams, and the other very diverse kinds of texts? We wanted to create a framework that would account for, honor, and accommodate these new ways of composing texts.

We began trying to answer these questions by applying existing writing assessment rubrics to some sample multimodal texts. We used Washington State's Content/Organization/Ideas and Conventions Rubrics (HsCoSSS), NWP's Analytic Writing Continuum (AWC), and the 6+1 Trait rubrics. We found that although elements of these rubrics translated to the new products, the digital videos, VoiceThreads, and kinetic type Flash productions we looked at could not be fully assessed with them: some categories of those traditional writing assessments did apply, but just as many did not.

We continued our work by reading and responding to pieces by the New London Group, Cheryl Ball, Anne Wysocki, Paul Prior, Anne Herrington, Charles Moran, Eve Bearne, and Carl Whithaus. We were also able to look at the page proofs of Because Digital Writing Matters. We used these readings to see how teachers and researchers were talking about multimodal writing.

Finally, we wanted to recognize that there were other “fellow travelers” at work on questions of how to assess and value multimodal writing. We examined the TDR Final Film Rubric, the Digital Storytelling Rubric, and Bearne’s What Does Getting Better at Multimodal Composition Look Like. These rubrics captured more of the multimodal aspects of the compositions we were examining, but they did not fully describe the dynamic processes through which the student samples we were looking at had been composed. The meeting ended with a shift from creating a rubric (or series of rubrics) to creating a framework or set of domains. We were not looking only at assessment but at the interactions among assessment, curriculum, and pedagogy. We wanted to create tools that would ultimately enhance learning.