
Learning and communication in digital

multimodal landscapes
An inaugural professorial lecture by Carey Jewitt

People communicate and interact through gesture, gaze, and shifting their posture and position, as well as through language. Many of the texts and artefacts people engage with in learning environments also go beyond language: a mix of image, colour, texture, movement, music, writing, and spoken word. Research that looks beyond language, though once marginalized, is increasingly recognized as essential to understanding communication and interaction, particularly in digital environments that create new challenges for, and place interesting demands on, social science research methods.

In this lecture, Professor Carey Jewitt explores the educational terrain of multimodal
communication and the challenges of how to research and understand it. She draws upon
her work with colleagues at the Institute of Education and beyond, and demonstrates the
potential of attending to the unspoken as much as to the spoken in helping to understand
communication and learning in the changing digital landscape.

The author shows how the design and use of technologies play a key role in communication and learning, and how their use can shape practices and potentials and change resources. Finally, she points towards the ways in which, through the use of technologies, teachers and students gain access to different semiotic resources, and how their situated use of these resources shapes learning.

Carey Jewitt is Professor of Learning and Technology at the Institute of Education, University of London, and is the head of its Culture, Communication, and Media Department.

Institute of Education Press


20 Bedford Way
London
WC1H 0AL
ioe.ac.uk/ioepress



Learning and communication in digital
multimodal landscapes
Carey Jewitt

Professor of Learning and Technology

Based on an Inaugural Professorial Lecture delivered at the Institute of Education, University of London, on 1 November 2012

Institute of Education Press


Professorial Lecture Series
First published in 2013 by the Institute of Education Press,
20 Bedford Way, London WC1H 0AL
www.ioe.ac.uk/ioepress

© Carey Jewitt 2013

British Library Cataloguing in Publication Data:
A catalogue record for this publication is available from the British Library

ISBN 978-1-78277-018-3

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.

The opinions expressed in this publication are those of the author and do not necessarily reflect the views of the Institute of Education, University of London.

Typeset by Quadrant Infotech (India) Pvt Ltd

Printed by ImageData Group


Biography
Carey Jewitt is Professor of Learning and Technology, and Head of
the Culture, Communication, and Media Department, at the Institute
of Education, University of London. Her research interests are the
development of visual and multimodal research methods, video-based
research, and researching technology-mediated interaction in the
school classroom. She is Director of MODE: Multimodal Methodologies
for Researching Digital Data and Environments, a node of the National
Centre for Research Methods, funded by the ESRC (mode.ioe.ac.uk).
Carey's recent publications include The Sage Handbook of Digital
Technology Research (2013), with Sara Price and Barry Brown; The
Routledge Handbook of Multimodal Analysis (2009); and Technology,
Literacy, Learning: A multimodal approach (Routledge, 2008).
Learning and communication in digital
multimodal landscapes
Carey Jewitt

In this paper, I explore the educational terrain of multimodal communication and the challenges of how to research and understand it, with a focus on digital
technologies. It is organized into five sections. First, I draw on my work over the
past 15 years to outline what I mean by multimodality. Second, I outline six reasons for using a multimodal approach: it is holistic; provides a new lens for research; is sensitive to notions of power and of difference; helps to identify new semiotic resources; and provides innovative research methods. The third
section focuses on the changing digital landscape. The fourth section shows
how digital technologies provide teachers and students access to multimodal
configurations of visual, embodied, and spatial resources, and discusses how
these reshape curriculum knowledge and classroom practices, including
reading, writing and multimodal design, hypothesis and problem solving,
exploratory embodied learning, and spatial thinking, in ways significant for
teaching and learning. I conclude with a few comments on future directions.

What is multimodality?
People communicate not just by using language. Fifteen years ago that was a marginal and relatively contested position to hold, especially among linguists and educational researchers. That it is useful to look beyond language, or at least around it, to understand communication and interaction is now increasingly common and fairly widely accepted, particularly in relation to researching digital
environments. In reality, however, the extent to which this actually happens varies
across disciplines, and language remains the primary analytical focus for many
researchers and educational practitioners. People interact with others and objects
through the use of gesture and gaze, facial expression, by shifting their body posture and position, and by moving around the spaces that they are in. Many
of the texts and artefacts with which people engage when they communicate or are in learning environments also go beyond language: textbooks, maps, forms,
websites, digital objects, models, and equipment are generally a mix of images,
colour, texture, and writing, as well as dynamic animations involving movement,
music, and the spoken word. This places significant challenges and demands on social science research methods, to which my work responds, most recently with
colleagues on MODE, a project on multimodal methodologies for researching
digital environments, with Jeff Bezemer, Gunther Kress, Sara Price, and others,
funded by the Economic and Social Research Council (ESRC) and a node of the
National Centre for Research Methods (mode.ioe.ac.uk).
Multimodality, the approach that informs my work, attends
systematically to the social interpretation of a range of forms of making
meaning. It provides concepts, methods, and a framework for the collection
and analysis of visual, aural, embodied, and spatial aspects of interaction and
environments (Jewitt, 2009; Kress, 2010). While modes of communication, such
as gesture, have been recognized and studied extensively (e.g. McNeill, 1992),
multimodality attends to all aspects of communication and investigates the
interaction between communicational means. That is, it takes a more holistic
view of interaction and communication that goes beyond paying primary
attention to one aspect of communication resources or specific modes. For
researchers from linguistics this means moving beyond language, but in other
disciplines it may mean moving beyond a focus on image, and so on.
Speech and writing continue to be significant but are seen as parts
of a multimodal ensemble. Multimodality emphasizes the importance of the
social context and the resources available to people to make meaning, with attention to people's situated choice of resources; for example, their use of the resources of gaze: a direct or averted gaze, a held, short, or intermittent gaze,
or a fixed or roving gaze. It sees the way something is expressed as central
to what it can mean. Thus it opens up possibilities for recognizing, analysing,
and theorizing the different ways in which people make meaning with the full
range of resources available to them in a specific context and moment.

Why use a multimodal approach?


There are many reasons for, and benefits to, taking a multimodal approach to
look beyond language. Below, I highlight six that have been key to my work to
date.


Holistic
Multimodality systematically describes the communicative work of all modes.
A multimodal approach counters the partiality of looking at just one form of communication, language, and in doing so it brings attention to the specific and different communicative work of other modes. It situates what is written or said alongside all the other modes of communication used (image, gesture, gaze, body posture, space, and so on) and starts from the point that all make
a contribution to meaning. Multimodality provides a framework for the
systematic description of modes and their semiotic resources. This enables the
multimodal ensemble of communication to be interrogated and helps to get
at the contradictions, sometimes even conflicting discourses, between what is
spoken or written and expressed in other modes, in powerful ways.
I showed this in my early work on sexual health information leaflets for young men (Figures 1–4). I was evaluating a London-based sexual health service for young men at the time, and the men working in the clinics were not using any of the posters or leaflets with which they had been provided: they said that they hated the look of them but they could not articulate why. This was problematic, as images have a central role in people's perceptions of health promotion materials; indeed, an audience's response may be entirely image-based, and identification with the imagery is a basic prerequisite to
effectiveness. I analysed the leaflets using the visual social semiotic method of
Reading Images (Kress and van Leeuwen, 1996). I analysed how male sexuality
was managed at a visual level, and this showed that the information encoded
in the images would be unacceptable to many sexual health professionals
and young people; for instance, that the context for sex is either heterosexual
reproduction or infection. The leaflets upheld heterosexual norms and revealed
messages about gender that were not apparent in the written text, including
portraying men as sexually less complex than women, sexually dangerous, and
predatory, and failing to acknowledge young men's concerns or emotional
lives.
My analysis showed, for instance, that the use of setting in images in
the leaflets is associated with the level of male sexual control and indicates
whether or not sex has taken place. In Figures 1–4, for example, the setting
indicates that women and men are represented as having control of different
sexual domains. Women are represented in the images as possessing sexual
control in medical settings, home/domestic settings, and in public venues, and
natural outdoor settings (e.g. Figures 1–3), and men are depicted as having
sexual control in the urban (outdoor) settings (e.g. Figure 4). Men on the street are represented as sexually dangerous, but once in domestic settings they are
shown to relinquish control to women (Jewitt, 1997, 1998; Jewitt and Oyama,
2001).

Figure 1: Image from gonorrhoea leaflet (Health Education Authority)

Figure 2: Postcard from the If He Won't Use a Condom campaign (Health Education Authority)


Figure 3: Front page of What has HepB got to do with Me? leaflet (Group B)

Figure 4: Poster from the Explore the Possibilities campaign (City and East London Health Promotion)


The written elements of leaflets like the one shown in Figure 4 present a positive
discourse of men as sexually responsible, while the visual depiction is one of
predatory risk: a young man in a sports car in an urban environment with
a comatose-looking young woman (Jewitt, 1997). Remember that these are
leaflets for young men. This work shows how multimodality's holistic approach can make visible significant discourses that are hidden or left ambivalent, somehow fluid and unarticulated, in the non-verbal multimodal interaction between people, or in the visual or multimodal elements of texts and artefacts.
Looking beyond language can make apparent these contradictions.

Multimodality provides a new lens


Looking at communication and learning through a multimodal lens literally
changes what comes to be seen and considered as data. It redraws the
boundary around what it is necessary and possible to analyse. Multimodality
provides an inclusive research lens.
The image in Figure 5 is from the Rhetorics of the Science Classroom
project led by Gunther Kress and Jon Ogborn, funded by the ESRC, and on
which I was the lead researcher (Kress et al., 2001). It was one of the first large
multimodal empirical research projects. This new lens enabled us to look at how teachers and students orchestrate a range of resources to create scientific narratives (in this case, blood circulation) through the use of an image on the board, talk, interaction with (and manipulation of) a three-dimensional model of the human body, gesture, the use of one's own body, and the use
of a textbook. We explored the different functions of these resources, and the
ways in which students took these up in their written work, concept maps, and
drawings.
Multimodality provides an inclusive research lens that enables the technologies and objects that are so much a part of our everyday world that we no longer notice them, or silent embodied engagement, to be seen differently. Looking at all the modes together brings them into the frame: into the research picture. This attention to modes in relation to digital texts,
mediated interaction, and environments suggests that multimodal inventories
can be of use both in understanding the potentials and constraints that different
technologies place on their use, and in how the users of a technology notice
and take up those resources in different ways. This can inform the redesign of
technological artefacts and environments, as well as how they are introduced
into a set of practices.


Figure 5: A science classroom from the Rhetorics of the Science Classroom project (Kress et al., 2001)

Power
A multimodal approach is sensitive to exploring power relations, and how these
are materially instantiated through the different kinds of access that people
have to communicational modes. It makes visible the uneven distribution of
modal resources across social groups and shows how these differences mark
power, as well as the ways in which people resist these markers of power.
An example of this is shown in Figures 6 and 7, which are from the
Production of School English project with Gunther Kress, Ken Jones, Anton
Franks, John Hardcastle, and others, on which I was the lead researcher (Kress
et al., 2004). The students are working in small groups to analyse a poem, and
the teacher joins each group for around five minutes. The ways in which the students and teacher interact with the objects on the table (the pens, the dictionary, the printed poem), and their use of gaze, gesture, and posture, all produce very different pedagogic relations. In the instance represented by Figure 6, the teacher stands and leans informally across the table; she holds the dictionary; the students and teacher look at one another; there are expansive, open gestures; the students write; and the talk is distributed across the teacher and students, with multiple questions and answers.

Figure 6: An English classroom from the Production of School English project (Kress et al., 2004)

In the instance in Figure 7, the teacher sits at the table, she holds the pencil and the poem, the student's and teacher's gazes do not meet, and the posture is closed: arms folded, with no gestures.


Figure 7: An English classroom from the Production of School English project (Kress et al., 2004)

Difference
Multimodality enables the investigation of how modes are taken up differently
within specific environments and by different actors. Taking a multimodal
approach to communication and learning provides tools to look at change over
time and change across contexts, including technologies. It is a method that is increasingly being taken up within research on digital technologies, as it draws attention
to how technologies reshape modal practices. A multimodal understanding
of how digital technologies reshape modal practices moves beyond intuitive
ideas about what a technology can do, to provide a detailed analysis of the
resources of digital technologies, how these are used in situ, and what they can
and cannot do.


Figure 8: An English teacher using an overhead projector from the Production of School English project (Kress et al., 2004)

From one perspective, the changes in technology over the past decade may
not appear to have changed the relationships between teacher and students
and the school as an institution. From another perspective, technologies can
change classroom interaction in significant ways. For instance, drawing on
work with Gemma Moss and colleagues (Moss et al., 2007) on the roll-out of
interactive whiteboards (IWBs) in UK secondary schools, we can compare the
teaching of school English with an overhead projector (Figure 8), and with an
IWB (Figure 9) to explore how the use of a technology can shape the ways in
which a teacher moves around the classroom, how students are grouped, and
the kinds of texts that come into the classroom. These changes impact upon
the pedagogic relations and textual practices of the classroom.


Figure 9: An English teacher using an interactive whiteboard (Moss et al., 2007)

New resources
Multimodality can contribute to the identification and development of new digital resources and new uses of existing resources, particularly in digital environments. In addition to creating inventories of modes and semiotic resources and analysing how these have been, are, and can be used in a range of specific contexts (an inventory of the past and the present), multimodality can also contribute to imagining future resources and their uses.
Digital technologies have been key in reshaping modal resources.
Digital synthesizers and other digital technologies, for example, have reshaped
the possibilities of the human voice to create new resources and contexts for
the use of human voices in digital artefacts, public announcements, music,
and so on (van Leeuwen, 2005). This digital reshaping of voice has in turn had
an impact on the non-digital use of voice: for example, by providing different
tonal or rhythmic uses of the non-digital voice not previously imagined.
Similarly, the Emergent Objects project (www.emergentobjects.co.uk) brought together computer scientists in robotics and dancers to use embodiment in the field of performance with humans and robots in order to prototype and
develop a robotic agent, named Zephyrus, which is designed to promote
expressive interaction between the device and human dancers, in order to achieve
performative merging (Wallis et al., 2010). The significance of the work is
to bring further knowledge of embodiment to bear on the development of
humantechnological interaction in general and to extend the resources of
movement in both (Bayliss and McKinney, 2007; Wallis et al., 2010).

Innovative methods
Multimodality can contribute to innovative research methods. Through work
with Kress and colleagues in the Science and English classroom, we developed
methods for transcribing, sampling, and theorizing multimodal interaction
in the classroom, as well as addressing substantive questions about the role
of image, gesture, movement, and action with objects in the teaching and
learning of school science. We applied and further developed these multimodal
methods to understanding school English in the socially contested classrooms
of urban super-diverse schools (Kress et al., 2004). I also developed a multimodal
framework to explore how digital technologies reshape knowledge, literacy,
and practices of learning and teaching (Jewitt, 2006). This work led to a number
of methodological books, including The Routledge Handbook of Multimodal
Analysis (Jewitt, 2009) and The Sage Handbook of Digital Technology Research
(Price et al., 2013).
A holistic view of interaction and communication places significant
demands on research methods with respect to digital texts and environments
where conventional concepts and analytical tools (e.g. talk-aloud protocols, or
ethnographic field notes) may need rethinking (Price and Jewitt, 2013a). How
can research methods effectively capture and analyse the flow of materials
in online social interactions and other digital environments (Price and Jewitt,
2013b)? If digital environments are not fixed or frozen in time, how do we
archive them or make a corpus of data? Multimodality makes a significant
contribution to existing research methods for the collection and analysis
of data and environments within social research. For example, it enables
modal changes to elements in a multimodal configuration on screen, and their subsequent meanings, to be mapped across different digital platforms as it is
blogged, re-blogged, tweeted, and texted. These changes include: changes in
colour and in content through framing, cropping, and rescaling; new image-
writing relations through the use of captions; the addition of voiceover; new meanings created through insertion into a larger multimodal layout; and juxtaposition with new elements, as well as the material affordances and
features of different technological platforms that reshape what can be done
with a text.
The development of methodologies is a primary aim of the MODE
project (mode.ioe.ac.uk). We are trying to stretch multimodal methods, engage
with a wide range of digital data and environments, and bring multimodality
into contact with a range of ways of thinking to challenge multimodality and
to develop interdisciplinary ways of working.
Having distilled the transformative role of multimodality into these
six key points, I now want to discuss how adopting this theoretical stance can
make sense of communication and learning in new and exciting ways.

Learning and communication in a digital multimodal landscape


The question of how we can understand and research multimodal
communication and interaction has been central to my work across a range of
settings over the past 20 years or so. The use of digital technologies in learning
environments raises both methodological and substantive questions for
communication and learning. Understanding the communicative potentials
of different technologies is a key aspect of multimodal studies and my work,
because they make a wide range of modes available, often in new and interesting
combinations and relationships to one another, and they frequently unsettle
and remake genres in ways that reshape or even entirely remake practices
and interaction. Technologies differ in how they make available multimodal
affordances and resources, although there is increasing convergence across
platforms; an example of this is how the movement of the body features in the
use of mobile phones, online game environments, and Wii games, but all in
distinct ways.
This shapes what knowledge can be represented and how it is
represented. That is, technologies change what it is that we see, how we see it, and what we can do. At an extreme end of this remaking are the new
forms of knowledge that the use of technology has made available, such as
microbiology and genetic engineering. Multimodal features of technologies
also, however, change the mundane: the way everyday texts and objects look
and feel, the way people interact, and their practices, in significant ways for
communication and learning. The effects of technologies on learning in school contexts have been the primary focus of my work, and are thus the focus for the rest of this paper.
The design and use of digital technologies also enable people to make
meaning in new ways, and this is another aspect of multimodal research and
my work. The multimodal features of technologies have consequences for how
knowledge is shaped and the ways people interact, their practices; in other
words, the ways that they do many things: for example, making learning
resources for a lesson, giving feedback, teaching in the classroom, reading
and writing, and so on, as well as more broadly shaping social relations
and identities. Digital technologies are thus a key site of theoretical and
methodological interest for me, and others, within multimodal research.
It is perhaps important to make clear that I am not suggesting that technology determines people's meaning making; rather, the features of technologies (old and new) provide different kinds of constraints and possibilities for meaning making: technologies, like other tools, shape what we do. In addition, the communicative potentials that shape
knowledge and the practices that people engage with are, I think it important
to note, fundamentally connected. The distinction here is to a large extent
analytical: the multimodal design of these digital interfaces and interactive
environments, texts, and their communicative potentials is itself a practice.
This boundary between text and practice is perhaps especially blurry in the
context of digital technologies, where the user is often making the text by
their selections and journey through a digital environment. The distinction
is nonetheless a useful one for the purposes of this paper, in that it helps
to understand the constraints and potentials that a technology places on
communication and interaction.
In the next section I will briefly discuss some examples drawn from
my work and that of colleagues. Each example focuses on a practice common
to learning: reading, writing, teachers' design of resources, exploring and
building a hypothesis, physical exploration, and gathering and interpreting
data on field trips. These practices occurred prior to digital technologies
but, as I will show, have been transformed by the multimodal character and
features of technology. Each example will explore a specific digital technology: computer applications, online resources, interactive whiteboards, physical digital technologies, and mobile technologies. Each example will draw attention
to particular modes foregrounded by the technological design and its situated
use. I will discuss these examples along with a commentary on multimodality
with particular attention to the role of the visual, embodied modes (gaze, gesture, posture, and so on) and modal aspects of space (traversals, pathways,
distance, direction and orientation, the bounding of spaces in different ways,
composition, layout, and other organizational structures of space). These
modes are sets of material resources and organizing principles that have been
shaped and conventionalized through their daily social use by people and
communities over time. The work of multimodal research is to unpack these
resources and organizational principles, and how they have been used, through
the detailed analysis of their materialization, to get at their communicative functions, and to use these to describe, critique, and design their use in relation to social research questions.

How the design and use of digital technologies reshape knowledge and practice
Through my work I have asked how the design of digital technologies reshapes
knowledge in learning contexts. This discussion needs to be read in the
knowledge that sites of display are always socially shaped and located: the new
always connects with, slips, and slides over the old. The ways in which modes
of representation and communication appear on the screen are therefore
still connected with the page, present and past, and similarly, the page is
increasingly shaped and remade by the possibilities of the screen. There are
screens that look page-like and pages that look screen-like (e.g. Dorling
Kindersley books).

Reading: multimodality, digital novels, and other texts


The visual has long been a feature of textbooks and learning environments,
from the illustrations in printed textbooks to the posters on the classroom
wall. Digital texts, online resources, DVDs, virtual worlds, and simulations have
expanded the role of the visual in learning resources and environments. The
increased use of image in digital texts has a significant impact on the way
character, stories, and other concepts across the curriculum can be represented.
In the digital version of the novel Of Mice and Men, for example, each chapter starts with a short video of a key moment in the story, and each screen page combines an image alongside the written novel, with the image taking up more than half the screen in the majority of cases (Figure 10) (Jewitt, 2002). The images serve to fill in the characters, the relationships
between them, and their setting through what they wear, their posture, gaze, gesture, and body movement, and their mood through the use of sombre or bright lighting and colour. This multimodal reshaping gives the characters voice and movement, and presents information not in the novel.
The degree of multimodal representation of characters in such texts
serves to indicate the importance of characters within a story. In the case of
the digital version of the novel Of Mice and Men, this serves to reposition them,
with the marginal black and female characters gaining new import for the
contemporary reader.

Figure 10: A screen shot from the digital version of the novel Of Mice and Men

Bookmarking tools enable the reader to make notes on their reading; an interpretative guide appears at various points to give additional information to the reader on the context of the novel and the author's intentions; and
hyperlinks embedded in the writing and visual objects displayed on the
screen link to definitions of colloquial terms, to images of the characters, and to recordings of songs sung by the characters in theatre and musical productions of the book. This kind of digital reshaping of the characters and their
relationships is significant for the interpretation of the novel. In addition, such
multiple visual representations serve to make the concept of character more
abstract, moving beyond a question of the individual reader's interpretation or the author's intention.

Figure 11: A screen shot of the character file of Curley's wife

While the original story is represented, the screen can hold less text than the
printed page and this, combined with the narrative guide, the hyperlinks, the
video excerpts, and character files, restructures or breaks up the narrative and
disconnects ideas that previously ran across one page to fragment the narrative
across screens. This creates a different narrative pace and structure, re-contextualizes the story, and constrains its interpretation. In this process the relationship between the written and visual elements on the screen becomes complex, with the writing appearing in blocks that move across the screens to
reveal an emotional subtext of the story. Students working with a digital text, such as this one, need to move between studying different layers and domains
of knowledge. An example of this is shown in the two images in Figures 11
and 12: the fictional level in Figure 11 shows a handwritten envelope addressed
to the character, while in Figure 12 the factual level is shown, where the
envelope links to a letter written by Steinbeck to an actor playing the character.

Figure 12: A screen shot of an opened hyperlink in the character file of Curley's wife

In digital texts and environments, the visual is often to the fore, and writing
is itself a highly visual element. The genres and practices of reading such
multimodal texts remain relatively open for the time being. Reading or,
perhaps more aptly, watching them introduces new resources and practices
for navigating, constructing, and understanding texts and provides different
routes into and pathways through texts. In this multimodal environment it
is clear that to persist in thinking of reading primarily in terms of writing is
problematic.


Reading involves engagement with different modes and, as already
discussed, this gives different access to knowledge, as the modes carry
different kinds of potential meanings: each describes in different ways, each
enables sequential events to unfold over time differently, each has different
potentials for conveying emotion and affect. Reading in a digital context
requires understanding meanings across different modes and understanding
the principles and values of the design. Thus choice of mode by the designers
of a digital resource and by its reader can be seen as a choice of the level, detail,
or type of engagement. This also enables students to bring different genres
of engagement to their interaction. In the case of the digital novel discussed
earlier, students navigated character in modally different ways. Several students
strung together all the music files, temporarily transforming the novel into a
musical. Others watched it in the form of a film or animated the images via
their movement through the text, momentarily overlaying the novel with the
genre of comic. With the other examples discussed, similarly, student choice of
modal movement through the text is key in determining how learning takes
place.
The layering of information via hyperlinks, and the structuring of
pathways into digital texts, is a common feature. The organizational structure
visually models the need to move between studying and connecting across
different kinds of information or experience. In the case of the students
working with the digital novel, reading involved moving between studying
character at the fictional level of the novel, and the hyperlinked factual level
that served to place the characters and the novel itself (its language, its
geographical location, and its focus on the American Depression) in a historical
context. Modal resources are differently configured in these different domains/
layers, and this multimodal organization indicates that two different kinds of
engagement are required of the student. In many cases I have examined, the
domain constructed via the visual, at the level of display, demands students'
imaginative engagement (hypothesizing and planning), while the domain
constructed at the level of language demands engagement with the factual
information or outcomes. The ideological expectation that students should
move across these is embedded in the multimodal orchestration of the digital
text. This involves students in making new connections across layers, objects,
and other organizational features of online environments to navigate pathways
across them. These pathways may be linear-sequential, non-sequential, or
multiple non-linear pathways that disrupt the notion of left-to-right sequential
patterns. Multiple pathways can be seen in the earlier programming language
examples and in response threads on YouTube and in other social media
(Adami, 2010). The work of the reader is to make coherent texts in a potentially
incoherent space. This positions reading digital texts as a process of connection,
rearranging, and modifying spaces and elements: a fluid and dynamic process
that blurs the boundaries between reading and writing. Multimodality provides
ways to explore and contrast these traversals.

Writing: multimodality and digital design


Writing as a practice is also significantly transformed by digital technologies.
This goes beyond the immediate impact of word-processing features on writing
(e.g. spell check, editing, and formatting tools) to the visual and dynamic
character of writing in digital environments. It also extends to new
configurations of image and writing on screens, and to changing relationships
between speech and writing. This point is illustrated by the photograph by
Lungile Madela (Figure 13), part of a project by Marion Walton and colleagues
on the phone messaging and mobile media sharing relationships of a group of
young mobile phone users in Khayelitsha, South Africa (Hassreiter et al., 2011).
The ways in which these young people are using their mobile phone resources
brings image and word together in interesting combinations that unsettle and
remake genres and interactional practices. It is important to understand the
communicative potentials of different technologies and their situated use. In
Lungile's context, where Internet connectivity is zero and the phone is used
as a standalone device, multimodality can help in this task.
I have seen increasingly diverse practices using spoken-writing as
a modal resource across a range of digital environments. For example, the
use of speech in digital contexts is strongly shaped by the multimodal and
technological features of the technologies used. Walker's (2008) study of
students' use of mobile technologies for field trips found that restricting the
audio record time resulted in students having different kinds of conversations
with one another, as they had to rehearse their spoken entries, reflect on
them, and then record them, leading to a division of roles and different
approaches to scripting and rehearsal.


Figure 13: 'Without me u nothin', mobile phone photograph by Lungile Madela


We saw many examples of multimodal practices in which writing was
substituted with talk in our research on the use of learning platforms in primary
and secondary schools in England (Jewitt et al., 2010) and in the evaluation
of the Home Access project (Jewitt and Parashar, 2011). This included
multimodal stories made by parents and children, PowerPoint presentations
with embedded spoken narratives, podcasts by teachers and students, and the
use of embedded audio comments to give feedback to students on their work.
This use of digital technologies engaged some parents with their children's
learning in new ways, including those who would otherwise not have engaged.
Writing persists in digital learning environments, but often in short
bursts of activity, including the annotation of texts, the filling in of a missing
word, the collecting up of thoughts written on Post-it notes, brainstorming,
and mind-maps. The facilities of the IWB, for instance, do not easily support
coherent and extensive writing. Writing on prepared PowerPoint slides and
writing in real time in the classroom are emerging as two discrete forms of
writing within pedagogic practice with an IWB. The work of the teacher is
represented in typewriting, and the collective work/voice of the class is scribed
in handwriting. The typed writing carries authoritative evidence (e.g. the
canonical text), while the handwriting attends to personal responses,
interpretations, and meaning: thus maintaining the permanent fixedness of
the canon against the ephemeral and temporary character of
interpretation. How these forms of writing are configured in relation to each
other is then a clue to the work that is expected of the student (Jewitt et al.,
2011a).
The changing digital landscape of the classroom has also shaped what
texts are presented, how texts are presented, and what can be done with
them. With the use of IWBs, image, colour, and layout have, alongside writing,
become more central and have changed how teachers design and use learning
resources. The IWB enables connection to a wide range of texts, sources, and
so on. This diversifies the kinds of texts that enter and circulate across the
English classroom. This serves to connect English with the technologies and
experiences that students engage with out of school. One effect of this is to
create connections across the previously distinct boundaries between education
and other spaces, such as the commercial sector and the everyday lives
of students. This changes the communicational landscape of the classroom.
In the contemporary classroom, texts are integrated with images downloaded
from the Internet in a teacher-made PowerPoint across several slides. The
changes in the relationship between image, speech, and writing that I have

commented on are embedded in the practices of the classroom. It is now
common for teachers to start a lesson with an image or a short digital video
(from YouTube, the BBC Learning Zone, and other online video resources):
malleable and flexible episodes to be inserted into teacher discourse. Teachers
frequently use PowerPoint presentations to present their argument, annotate
texts visually, or connect to a webpage. The use of image is also
prevalent in students' work, with the use of clipart, digital photographs, and
short videos made by students or downloaded from the Internet. This reshapes
the work of the teacher and the student. The contemporary teacher is involved
in the pedagogic design of digital multimodal texts (Jewitt et al., 2009).
The different temporal and spatial features of digital technologies have
consequences for the practices of teachers and students. For example, with
no extended time where the teacher writes on the board, or without the need
to erase the contents of the board, moments where teachers have their backs
to the class are eradicated. One consequence of this is the removal of spaces
for students to behave badly, but also the reduction of regular informal, open
classroom spaces for students to think, reflect, and chat: spaces that can now
be filled with curriculum. Another is the bundling up of information into
bite-size chunks related through layout to other chunks. The modularization of
knowledge is a gradual move affecting all media, and one that marks a more
general move to deliver content across a range of media formats, including
mobile phones. It is also driven by a pervasive response to a managerial
discourse of effectiveness, as well as the pressures of examination, and
concerns about student attention spans and engagement. This reshaping of
knowledge into small units structures how young people and teachers engage
with curriculum knowledge in the classroom (Jewitt et al., 2007a; Jewitt, 2011).
Student-made texts can be incorporated into the active pedagogic
space of the classroom by scanning them for immediate display on the IWB.
These then become an object of discussion, to be manipulated and annotated:
a shared, malleable text that opens up new
possibilities for the configuration of authorship and authority in the classroom.
The teacher's annotation and marking of the student texts on the IWB
transforms what is usually a semi-private activity into a public one. This makes
explicit both the marking criteria and process.
These changes in digital learning environments raise new decisions for
teachers and students, with implications for curriculum, teaching, and learning.


Hypothesis and problem solving: multimodality, digital games and programming


Multimodal shaping takes place across a range of digital texts and curriculum
areas. For instance in computer games and other animated interactive texts, the
distribution of modes is a key part of meaning making. While the multimodal
action rolls on, the combination of movement, elaborated visuals, and writing
is used to indicate a character's status. For example, the decision of when and
what to represent in writing and/or speech can shape game character and
narrative. Writing and speech can be used to give voice and expression to
some characters in a game and not others, and students move through games
by using the characters' access to speech and movement as a multimodal clue
to their potential to help solve the puzzles and tasks in the game. A character's
access to language indicates (is read as part of) their game value, that is,
their value in achieving the object of the game: to collect resources to move
through to the next level of the game. The characters that have the most
modes of communication are the key to game success, especially those with
the potential to speak when approached by the player/avatar (Jewitt, 2005).

Figure 14: The representation of the rule 'Move the object to the right when the right
arrow control button is pressed' in ToonTalk, an animated programming language

For example, the range of representational modes made available in different
programming systems has an impact on how the mathematical concepts of
rule, condition, and action are constructed. The Playground project (directed
by Richard Noss and Celia Hoyles) built computer environments for 4- to
8-year-olds to play, design, and create games using two different programming
languages: ToonTalk, an animated programming language (Figure 14), and
Imagine Pathways, a graphical version of Logo (Figure 15). These shaped the
potentials for learning with them. With Ross Adamson, I analysed the different
modal selections and combinations that these programming languages made
available, and the impact of these modal choices (including still image, gesture,
posture, speech, music, writing, and new configurations of the elements of
these) on the emergence of the mathematical entities rule, condition, and
action as the students programmed (Jewitt and Adamson, 2003).

Figure 15: The representation of the rule 'Move the object to the right when the right
arrow control button is pressed' in Imagine Pathways, a graphical version of Logo

We showed that the choice of representational modes in the design of each
program is central to the potentials for user engagement. Modes (e.g. image,
animated movement, and writing) provide the maker of an application and the
user of it with different features for making meaning, in this case, for engaging
with aspects of programming and building games. We suggested that in order to
understand the rule-building practices of students engaged with each of these
systems, a better understanding of the kinds of resources these applications
provide is required. In short, we need to understand what it is that students
are working with and how these multimodal resources might contribute to the
shaping of the learner, the learning environment, and what it is that is to be
learnt. Our focus was on how these realize the entity rule in different ways that
have an impact on game building and the subjectivity of the user. In order to
show this, we looked in detail at the representation of the rule 'Move the object
to the right when the right arrow control button is pressed' (shown in Figures
14 and 15). The representations were varied, as can be seen. The visual objects
varied from symbolic to visually named; the spatial arrangements varied from
sequential, linear, left-to-right structures to multi-directional; genre varied from
equation to cartoon to animation; the representation of realism varied from
scientific to everyday. The modal representations of rule (and its constituent
elements condition and action) in Imagine Pathways and ToonTalk differ in
important ways. Important, because they constitute the entity rule in different
ways and provide the user with different resources for thinking about rule and
about her/himself as a learner in relation to the system.
These resources lead then to different shapes of knowledge and
kinds of work for the learner. (They also provide new opportunities for
self-identification, such as the use of the robot as a pseudo-avatar.) For example,
two students (aged 7 years) were working with the resources of Playground to
build a simple game (the full case study is reported in Jewitt, 2006). The game
concerns a small creature on a planet that is being chased by an alien and fires
bullets to try to kill it. The planet landscape includes bars where the bullets
might bounce and miss or hit the creature. The students first use pen and paper
to design their game. They then work in Playground, using ready-made visual
elements and backgrounds, colour, movement, and sound to make the game.
In the students' written game design, the action of the bullets' bounce
is represented as a matter of movement and change of direction when
something is touched. The multimodal environment of Playground differs,
and raises two key questions for the students in their design: 'What is it that
produces bounce?' and 'What is it that bounces?'
Initially, the students programmed the sticks to bounce. It was the
visual experience of playing the game that led them to realize their mistake:
they played the game and the sticks bounced off! The students, however,
used gaze and gesture to solve the problem. The students created different
kinds of spaces on the screen through their gesture and gaze with the screen
itself and their interaction with, and organization of, the elements displayed
on the screen. These spaces marked distinctions between the different kinds
of practices with which the students were engaged. In their creation and use
of these spaces, the students set up a rhythm and distinction between game
planning, game design, game construction, and game playing. The students
gestured on the screen to produce a plan of the game: an imagined-space
overlaying the screen, in which they gesturally placed elements and imagined
their movement, and used gesture and gaze to connect their imagined
(idealized) game with the resources of the application as it ran the program.
The temporary and ephemeral character of gesture and gaze as modes enabled
their plans of the game to remain fluid and ambiguous.
The role of gesture was central to understanding their unfolding
programming process in three ways. First, gestures gave a way to see how they
coordinated visions and disagreements and built hypotheses through gestural
tracing and overlay to explore trajectories of movement. Second, examining
the students' use of gesture to identify points of gestural vagueness, wiggles,
and trailing off helped to identify areas of difficulty and ambivalence. Third,
the students' use of gesture gave insight into their hypotheses.
These and other technological platforms, including digital simulation
applications (such as Interactive Physics for learning science), enable students
to manipulate elements on screen and involve them in embodied interaction
with the screen (gesture, gaze, and so on). Embodiment is also a key resource
in many digital spaces with represented bodies, like avatars, which offer a form
of virtual embodiment. Such environments offer new ways to embody a set of
identities outside one's own physical being, where the virtual avatar or visual
artefacts act as a tool through which identity and experience can be shaped.
Moving off-line, into a physical digital environment, which uses digitally
augmented objects that can be engaged with physically, reconfigures and
brings additional features and modes to embodied engagement. Multimodality
provides a set of resources to describe and interrogate these remappings, for
example to get at the interaction between the physical and the virtual body.

Exploratory embodied learning: multimodality and physical digital environments


The need to better understand the modal features of embodiment connects
with advances in computing and the potentials for bodily interaction offered
by complex digital technologies, such as tangible, multi-touch, sensor, and
mobile technologies with new forms of interaction. These technologies
offer new opportunities for physically interacting with objects and digital
representations, foregrounding the role of the body in interaction and
learning more than traditional desktop computing does. These are reaching
the marketplace through systems such as the Nintendo Wii, the Xbox Kinect,
multi-touch tables, and the touch interaction of the iPad. These multimodal
technologies enable bodily-based physical experiences in new ways.
Multimodality, with its emphasis on examining the use of multiple
semiotic resources for meaning making, helps to examine the differential
use of semiotic resources by students interacting with a tangible learning
environment: how objects are manipulated and handled, the role of physical
touch and other physical sensations, and the role of body position, gaze,
manipulation, and speech in shaping interaction are all resources on which to
draw. More generally, physical digital environments prompt the emergence of
sensory modal resources, such as physical (or haptic) feedback. For example,
the Wii gives sensory feedback via wristbands and body straps, and visually
through the use of virtual avatars. Another focus of multimodality is
how multimodal action flows and unfolds in time, particularly in terms
of pace, rhythm, and interaction structure, and the implications of this for
interaction and the processes that facilitate knowledge. The embodied and
the spatial resources that these physical digital environments make available
are intertwined. The spatial design of these technologies positions the screen/
surface to the users in a range of ways, and these require the user to engage in
physical–digital mapping, with interesting implications for what it means to
collaborate and play together.
In digital environments, embodied interaction practices are a central
part of how students communicate ideas, build hypotheses, explore, and
collaborate for learning. Here, I focus briefly on an example of students
learning scientific concepts and embodied learning in a physical digital
environment (Figure 16), and on how this supported forms of interaction and
enabled new action, physical, perceptual, and bodily experiences (Price and
Jewitt, 2013b), which in turn led to new practices for learning. This analysis
illustrates the different ways that 12 pairs of students, aged 10–11 years,
used and orchestrated the multimodal resources of a light-table, particularly
in terms of bodily posture, pace, and structure of activity and talk. How the
students worked with the table has implications for the process of meaning
making, in an independent exploration activity that involved learning about
the science of light. The analysis suggests that position choice affects how the
action and activity evolve. Different positions give different opportunities for
interaction; for example, where to look (gaze), point of view, and ease of access
to tangible objects at the side of the table and on the table itself. Positioning
themselves opposite each other made it equally easy for each student to
pick up a new object from the side, to manipulate any objects already on the
table, and to work simultaneously. This led to clashes of action and ideas and
to repositioning one another's blocks while creating a configuration on the
tabletop, in contrast to pairs who were adjacent to one another.


Figure 16: An image of students learning scientific concepts and
embodied learning in a physical digital environment

When positioned opposite each other, however, the time spent focusing on
observing each other's actions, rather than simultaneously doing, was less,
since their gaze was split between what they each were doing themselves and
what their peer was doing. Point of view also affected interaction, in that the
same point of view made sharing and gesturing to one another straightforward,
and some pairs exploited this mutual point of view in explanations.
Just as pace and rhythm were a feature in the practices of students
and teachers with screen-based digital environments, such as the online
digital novel and the IWB, pace of interaction and rhythm were shaped by
the students' embodied interaction with the light-table. Those opposite one
another were significantly faster paced in their interactions. This potentially
exposed the students to more experiences, but it also reduced the amount of
considered reflection time and a purposeful approach, and called for different
forms of systematic activity.
A multimodal approach offers a way of describing and classifying
embodied forms of interaction, which goes beyond looking primarily at
language, or specific forms of action, as the medium for providing insight
into interaction, by extending the analysis to include body positioning and
gaze, and the integration of modes. By taking this approach, and examining
multimodal action flow, we can see how embodied action can be played out
differently in a learning interaction with pairs of students.
Specifically, it illustrates how body positioning, gaze, and different
ways of manipulating the tangibles change the pace, rhythm, and structure of
interaction, and the kinds of participation that students take. For example, the
analysis shows the ways in which the representational resources on
the table are taken up and used differently by students. Some forms of action
fostered slower forms of interaction, clearer turn-taking, and building on one
another's ideas, while others engendered a rapid pace of interaction, with less
clear turn-taking and less coordinated structure, being more fragmented and
discontinuous. This is important in helping to understand how bodies are used
differently in physical–digital environments such as these, and the implications
they might have for the learning process.
The analysis also shows the place of talk in activity in different ways.
In particular, it demonstrates that meaning making for pairs of students can
take place just as well through action, experimentation, observation, and
demonstration. In achieving this, however, the analysis suggests that bodily
positioning, perspective, gaze, and turn-taking, as well as action through
manipulation, play an important role: in other words, embodiment shapes
multimodal action flow.
The ways in which digital technologies reshape physical spaces are
significant for learning environments, as they place people in new physical
and thus social relationships to one another and to digital artefacts. This
reorientation can be seen in the case of Wii games, where the users' direction
of gaze and action are orientated to the screen rather than to their opponents.
The surgical operating theatre, the site of Jeff Bezemer's research,
demonstrates another aspect of MODE: surgeons undertaking keyhole surgery
in screen-based digital environments orient their gaze, body posture, and
team configurations, and are required to engage in physical–visual mapping
(Bezemer, 2013). Further, the visual display of the surgeon's work inside the
patient's body cavity makes new information available in ways that have an
impact on both what can be seen and learnt, and who can see it. Bezemer's
work shows how multimodal research can map the interactional impact of
digital technologies being inserted into older established social environments,
such as the surgical operating theatre.


Space and spatial thinking: multimodality and spatial orientation


The intertwining of body and space is pronounced in the context of mobile
and GPS technologies that serve to locate the body in time and space in
interesting ways for thinking about learning, a focus of the MODE project as
it unfolds. They exploit our physical space and perceptual interaction with the
environment, and may enhance the physical experience of a space through
making contextually relevant information available in situ. The affordance of
technology to create bridges and connections between different physical and
virtual locations, times, and environments is relevant to notions of space, place,
and time. GIS technologies provide ways to investigate time and space from
new vantage points and scales, as well as ways to visualize data from previously
unexplored perspectives. For example, Walking Through Time is a smartphone
app, developed by Edinburgh Art School, that connects a person's current
location with historical maps from the past. Users can select a time period,
walk along old streets that no longer exist, and go on walks scripted by local
historians. Similarly, GeoSciTeach, an app that uses GIS, developed by Sara
Price, myself, and colleagues, supports trainee teachers, initially in school
science, and allows data to be collected and tagged to specific locations (Price
et al., 2012; geosciteach.wordpress.com). This means that data, photographs,
classifications of leaves, data on the temperature, etc., may be automatically
linked to places in both space and time, including data produced by the
action of living organisms or the environment, e.g. sunlight, temperature,
wind patterns, and precipitation (biotic data). The data collected can be used
to make sense of relevant scientific phenomena and can be manipulated to
model and make predictions. It can be used to engage with ideas visually
through maps, models, and two- and three-dimensional representations.
What I am trying to show here is that understanding space and
embodiment is central to understanding contemporary digital technologies
and how we move around and communicate in the world and learn.
Multimodality provides resources to explore these types of digital remapping
and extending of the physical in a range of digitally remediated contexts,
looking at layers, links, the functions of different media and spaces, the use
of spatial metaphor and classifications to construe experience, and how this
shapes knowledge in that domain.


Concluding comment
Communicational resources have changed significantly over the past decade,
bringing music, image, and video into our everyday repertoires. Nearly all
students in the UK now have home access to the Internet and routinely carry
a mobile phone with digital camera, video, and MP3 player: new media are
pervasive. These changes have expanded the multimodal resources available
to students, multiplied the reading paths to be navigated, and introduced
practices of re-mixing and redesign of communicational forms. They raise
questions about the form and functions of writing and image in the classroom,
and highlight the complexity of digital writing and reading practices (and
speaking and listening).
I have shown that how knowledge is represented and experienced (the
choice of communicational modes and technologies) is crucial to
understanding knowledge construction. I have also pointed towards how
the situated use of these modal resources and digital technologies shapes
practices of teaching and learning in the multimodal environment of the
contemporary classroom. In particular, I have shown the potential of examining
the visual, the body, and embodied modes, as well as spatial modal resources,
for understanding how digital technologies reshape knowledge and practices
of learning and teaching.
No one can really know or predict what will happen with digital
technologies and how they will unfold over the next decade or so. I think it is
clear, however, that as the cost of digital complexity reduces, we will see the
mainstreaming of now elite and costly interactional forms such as eye tracking
and gesture-based interaction; we will see new functionalities, new forms of
interaction, and new whole-body digital devices and experiences. In short,
the multimodal interaction potentials of digital technologies will increase and
develop. We need new methods to research these, and multimodality is a very
good starting point from which to develop them.
As this work develops, I hope to bring multimodality into more
contact with digital art and design, to generate new questions through that
collaboration, and to take multimodality into new and interesting directions.

Acknowledgements
I would like to thank Jeff Bezemer, Gunther Kress, Sara Price, and Richard
Noss for their insights and comments on this paper, and the many research
participants whose practices provided the focus of the work discussed in this
paper.

References
Adami, E. (2010) Contemporary Patterns of Communication: The case of video
interaction on YouTube. Saarbrücken, Germany: Lambert Academic Publishing.
Barton, S. and Jewitt, C. (1995) Talking about Sex. In H. Curtis, T. Hoolaghan and
C. Jewitt (eds), Sexual Health Promotion in General Practice. Abingdon: Radcliffe
Medical.
Bayliss, A. and McKinney, J. (2007) Emergent Objects: Design and performance
research cluster. In T. Inns (ed.), Designing for the 21st Century: Interdisciplinary
questions and insights. Aldershot: Gower Publishing, pp. 150–65.
Bezemer, J. (2013) Gesture Beyond Conversation. In C. Jewitt (ed.), The Routledge
Handbook of Multimodal Analysis, second edition. London: Routledge.
Hassreiter, S., Walton, M. and Marsden, G. (2011) Degrees of Sharing: Public
voices, impression management and mobile video production in a participatory
media project for teens in Makhaza, Khayelitsha. Project report produced for
Nokia Research, February 2011.
Jewitt, C. (1997) Images of Men: Male sexuality in sexual health leaflets
and posters for young people. Sociological Research Online, 2 (2). Online.
www.socresonline.org.uk/2/2/6.html (accessed 15 March 2013).
–– (1998) A Social Semiotic Analysis of Male Heterosexuality in Sexual
Health Resources: The case of images. International Journal of Social Research
Methodology: Theory and Practice, 1 (4): 263–80.
–– (2002) The Move from Page to Screen: The multimodal reshaping of school
English. Journal of Visual Communication, 1 (2): 171–96.
–– (2005) Multimodal "Reading" and "Writing" On Screen. Discourse: Studies in
the Cultural Politics of Education, 26 (3): 315–32.

33
Carey Jewitt

–– (2006) Technology, Literacy, Learning: A multimodality approach. London:
Routledge.
–– (2008) Multimodal Classroom Research. AERA Review of Research in
Education, 32: 241–67.
–– (ed.) (2009) The Routledge Handbook of Multimodal Analysis. London:
Routledge.
–– (2011) The Changing Pedagogic Landscape of Subject English in UK
Classrooms. In K.L. O'Halloran (ed.), Multimodal Studies. Routledge Studies in
Multimodality Series. New York: Routledge.
–– (2012) Technology and Reception as Multimodal Remaking. In S. Norris
(ed.), Multimodality in Practice. New York: Routledge, pp. 97–114.
–– and Adamson, R. (2003) The Multimodal Construction of Rule in Computer
Programming Applications. Education, Communication and Information, 3 (3):
361–82.
–– and Kress, G. (eds) (2003) Multimodal Literacy. New York: Peter Lang.
–– and Oyama, R. (2001) Visual Meaning: A social semiotic approach. In T. van
Leeuwen and C. Jewitt (eds), A Handbook of Visual Analysis. London: Sage,
pp. 134–56.
–– and Parashar, U. (2011) Technology and Learning at Home: Findings from
the evaluation of the Home Access Programme Pilot. Journal of Computer
Assisted Learning, 27 (4): 303–13.
––, Bezemer, J. and Kress, G. (2011a) Annotation in School English: A social
semiotic historical account. In S. Abrams and J. Rowsell (eds), Teachers College
Record Annual Yearbook: Rethinking identity and literacy education in the 21st
century. New York: Teachers College Press.
––, Bezemer, J., Jones, K. and Kress, G. (2009) Changing English? The impact of
technology and policy on a school subject in the 21st century. English Teaching:
Practice and critique, 8 (3): 21–40.
––, Clark, W. and Hadjithoma-Garstka, C. (2011b) The Use of Learning Platforms
to Organize Learning in English Primary and Secondary Schools. Learning,
Media and Technology, 36 (4): 335–48.
––, Clark, W., Hadjithoma-Garstka, C., Banaji, S. and Selwyn, N. (2010) Benefits of
Learning Platforms and Integrated Technologies. Coventry: Becta.

––, Moss, G. and Cardini, A. (2007a) Pace, Interactivity and Multimodality in
Teacher Design of Texts for IWBs. Learning, Media and Technology, 32
(3): 302–18.
––, Triggs, T. and Kress, G. (2007b) Screens and the Social Landscape: Digital
design, representation, communication and interaction. In T. Inns (ed.),
Designing for the 21st Century. London: Gower and Ashgate.
Kress, G. and van Leeuwen, T. (1996) Reading Images. London: Routledge.
Kress, G. (2010) Multimodality. London: Routledge.
––, Jewitt, C., Ogborn, J. and Tsatsarelis, C. (2001) Multimodal Teaching and
Learning: Rhetorics of the science classroom. London: Continuum.
––, Jewitt, C., Bourne, J., Franks, A., Hardcastle, J., Jones, K. and Reid, E. (2004)
English Urban Classrooms: Multimodal perspectives on teaching and learning.
London: RoutledgeFalmer.
McNeill, D. (1992) Hand and Mind: What gestures reveal about thought. Chicago:
University of Chicago Press.
Moss, G., Jewitt, C., Levacic, R., Armstrong, V., Cardini, A. and Castle, F. (2007) The
Interactive Whiteboards, Pedagogy and Pupil Performance Evaluation (Research
report 816). London: DfES.
Price, S. and Jewitt, C. (2013a) Interview Approaches to Researching
Embodiment. To be presented at Computer Human Interaction (CHI), May
2013, Paris.
–– and Jewitt, C. (2013b) A multimodal approach to examining embodiment
in tangible learning environments. Proceedings of the Seventh International
Conference on Tangible, Embedded and Embodied Interaction. Barcelona,
February 2013.
––, Davies, P., Farr, W., Jewitt, C., Roussos, G. and Sin, G. (2012) Fostering
Geospatial Thinking in Science Education Through a Customisable Smartphone
Application. British Journal of Educational Technology. DOI: 10.1111/bjet.12000.
––, Jewitt, C. and Brown, B. (2013) The Sage Handbook of Digital Technology
Research. London: Sage.
van Leeuwen, T. (2005) Introducing Social Semiotics. London: Routledge.
–– and Jewitt, C. (eds) (2001) A Handbook of Visual Analysis. London: Sage.

Walker, K. (2008) Mobile audio capture in a learning ecology. Paper presented
at the 10th International Conference on HCI for Mobile Devices and Services,
Amsterdam, September.
Wallis, M., Popat, S., McKinney, J., Bryden, J. and Hogg, D. (2010) Embodied
Conversations: Performance and the design of a robotic dancing partner.
Design Studies, 31 (2): 99–117.