Going Multi-Modal, part 1

Let's face it -- our students are working, and will work, in a digital, multimodal world. Whether they're doing things for work or for community groups, they're designing posters, infographics, brochures, videos, music, and a whole host of things most English profs don't think of as "writing." Except, surprise! They are! It's called multimodality, and it's been part of rhet-comp studies for quite a while... because, face this, too: if we're going to teach "writing," we also have to teach students how to turn their writing into something they consider useful in everyday life.

Freaking out yet?

I know what you're thinking: "I didn't go to film school. What do I know about teaching videos?" or "Pictures are nice, but what do they have to do with a research essay?"

Good questions. I'll tackle the second one first.

Simply put, pictures are just as rhetorical as any other form of communication. And they often make arguments in more profound ways than alphabetic writing does. Think of any photo of frontline workers during the current pandemic, or the famous picture of the falling man on 9/11. Or wildfires in California. Or clean canals in Venice. Those are the things students (and we!) see every day, so analyzing them -- understanding who created them, for what purpose, and how they are put together -- is critical to creating informed students and citizens.

And we writing instructors don't have to be art critics to say how a picture affects us -- how the artist/creator/composer tapped into pathos. Or used their credibility to sell the message in the image (ethos). Or marshalled statistics into a powerful headline (logos).

Learning a little about things like "bigger = more important" and "diagonal lines mean motion" doesn't hurt, but that takes 30 seconds. See? You just learned something!

Now add motion to those pictures (video and film) -- the first question -- and you can talk about things like why something is shot in close-up, who the camera is following and why, and how the filmmaker leads us to specific information or actions. And you don't even need to know terms like "key light" or "gaffer" -- although that's fun jargon.

And again, we can analyze rhetorically how a moving image affects us. Which means we're not only rhetorical analysts of alphabetic text but also of visual text.

More importantly, these are, as I mentioned, the texts students see every day and will be creating, and we do them a disservice if we don't give them ways to analyze them. Better yet, give them practice in creating them for specific purposes and audiences, just like alphabetic texts.

So now that you're convinced :-), I'll be posting more about how to approach teaching multimodal composition. First, however, a couple of great things to read:

  1. Pamela Takayoshi and Cynthia Selfe have been writing about multimodal composing for a while. This is an easy read, very approachable introduction to it: http://techstyle.lmc.gatech.edu/wp-content/uploads/2012/08/Takayoshi-Selfe.pdf

  2. Melanie Gagich has a wonderful student (and instructor) friendly piece in Vol 3 of Writing Spaces (which is itself a great resource for articles aimed at students but which are grounded in theory) that I use in the classroom with great success: "An Introduction to and Strategies For Multimodal Composing" https://writingspaces.org/sites/default/files/1gagich-introduction-strategies-multimodal-composing.pdf

More later about how to approach teaching multimodal units when you don't have a multimodal clue.

BTW: the image uses more than a couple of different modes within it... Credit: https://www.flickr.com/photos/44550450@N04/21587635011

