Learning and communication in digital multimodal landscapes
An inaugural professorial lecture by Carey Jewitt
People communicate and interact through gesture, gaze, and shifts in posture and
position, as well as through language. Many of the texts and artefacts people engage
with in learning environments also go beyond language: they are a mix of image, colour,
texture, movement, music, writing, and spoken word. Research that looks beyond
language, though once marginalized, is increasingly recognized as essential to
understanding communication and interaction, particularly in digital environments,
which create new challenges for, and place interesting demands on, social science
research methods.
In this lecture, Professor Carey Jewitt explores the educational terrain of multimodal
communication and the challenges of how to research and understand it. She draws upon
her work with colleagues at the Institute of Education and beyond, and demonstrates the
potential of attending to the unspoken as much as to the spoken in helping to understand
communication and learning in the changing digital landscape.
The author shows how the design and use of technologies has a key role in
communication and learning, and how their use can shape practices and potentials
and change resources. Finally, she points towards the ways in which, through the use of
technologies, teachers and students have access to different semiotic resources, and how
their situated use of these resources shapes learning.
ISBN 978-1-78277-018-3
What is multimodality?
People communicate not just by using language. Fifteen years ago that was a
marginal and relatively contested position to hold, especially among linguists
and educational researchers. That it is useful to look beyond language, or at least
around it, to understand communication and interaction is now increasingly
common and fairly widely accepted, particularly in relation to researching digital
environments. In reality, however, the extent to which this actually happens varies
across disciplines, and language remains the primary analytical focus for many
researchers and educational practitioners. People interact with others and objects
through the use of gesture and gaze, facial expression, by shifting their body
posture and position, and by moving around the spaces that they are in. Many
of the texts and artefacts with which people engage when they communicate or
are in learning environments also go beyond language: textbooks, maps, forms,
websites, digital objects, models, and equipment are generally a mix of images,
colour, texture, and writing, as well as dynamic animations involving movement,
music, and the spoken word. This places significant challenges and demands on
social science research methods to which my work responds, most recently with
colleagues on MODE, a project on multimodal methodologies for researching
digital environments, with Jeff Bezemer, Gunther Kress, Sara Price, and others,
funded by The Economic and Social Research Council (ESRC) and a node of the
National Centre for Research Methods (mode.ioe.ac.uk).
Multimodality, the approach that informs my work, attends
systematically to the social interpretation of a range of forms of making
meaning. It provides concepts, methods, and a framework for the collection
and analysis of visual, aural, embodied, and spatial aspects of interaction and
environments (Jewitt, 2009; Kress, 2010). While modes of communication, such
as gesture, have been recognized and studied extensively (e.g. McNeill, 1992),
multimodality attends to all aspects of communication and investigates the
interaction between communicational means. That is, it takes a more holistic
view of interaction and communication that goes beyond paying primary
attention to one aspect of communication resources or specific modes. For
researchers from linguistics this means moving beyond language, but in other
disciplines it may mean moving beyond a focus on image, and so on.
Speech and writing continue to be significant but are seen as parts
of a multimodal ensemble. Multimodality emphasizes the importance of the
social context and the resources available to people to make meaning, with
attention to people's situated choice of resources; for example, their use of the
resources of gaze: a direct or averted gaze, a held, short, or intermittent gaze,
or a fixed or roving gaze. It sees the way something is expressed as central
to what it can mean. Thus it opens up possibilities for recognizing, analysing,
and theorizing the different ways in which people make meaning with the full
range of resources available to them in a specific context and moment.
Holistic
Multimodality systematically describes the communicative work of all modes.
A multimodal approach counters the partiality of looking at just one form of
communication (language) and in doing so it brings attention to the specific
and different communicative work of other modes. It situates what is written
or said alongside all the other modes of communication used (image, gesture,
gaze, body posture, space, and so on) and starts from the point that all make
a contribution to meaning. Multimodality provides a framework for the
systematic description of modes and their semiotic resources. This enables the
multimodal ensemble of communication to be interrogated and helps to get
at the contradictions, sometimes even conflicting discourses, between what is
spoken or written and expressed in other modes, in powerful ways.
I showed this in my early work on sexual health information leaflets
for young men (Figures 1–4). I was evaluating a London-based sexual health
service for young men at the time, and the men working in the clinics were
not using any of the posters or leaflets with which they had been provided:
they said that they hated the look of them but they could not articulate why.
This was problematic, as images have a central role in people's perceptions of
health promotion materials; indeed, an audience's response may be entirely
image-based, and identification with the imagery is a basic prerequisite to
effectiveness. I analysed the leaflets using the visual social semiotic method of
Reading Images (Kress and van Leeuwen, 1996). I analysed how male sexuality
was managed at a visual level, and this showed that the information encoded
in the images would be unacceptable to many sexual health professionals
and young people; for instance, that the context for sex is either heterosexual
reproduction or infection. The leaflets upheld heterosexual norms and revealed
messages about gender that were not apparent in the written text, including
portraying men as sexually less complex than women, sexually dangerous, and
predatory, and failing to acknowledge young men's concerns or emotional
lives.
My analysis showed, for instance, that the use of setting in images in
the leaflets is associated with the level of male sexual control and indicates
whether or not sex has taken place. In Figures 1–4, for example, the setting
indicates that women and men are represented as having control of different
sexual domains. Women are represented in the images as possessing sexual
control in medical settings, home/domestic settings, public venues, and
natural outdoor settings (e.g. Figures 1–3), and men are depicted as having
sexual control in the urban (outdoor) settings (e.g. Figure 4). Men on the street
are represented as sexually dangerous, but once in domestic settings they are
shown to relinquish control to women (Jewitt, 1997, 1998; Jewitt and Oyama,
2001).
Figure 3: Front page of the 'What has HepB got to do with Me?' leaflet (Group B)
The written elements of leaflets like the one shown in Figure 4 present a positive
discourse of men as sexually responsible, while the visual depiction is one of
predatory risk: a young man in a sports car in an urban environment with
a comatose-looking young woman (Jewitt, 1997). Remember that these are
leaflets for young men. This work shows how multimodality's holistic approach
can make visible significant discourses that are hidden, or left ambivalent,
fluid, and unarticulated, in the non-verbal multimodal interaction
between people, or in the visual or multimodal elements of texts and artefacts.
Looking beyond language can make apparent these contradictions.
Power
A multimodal approach is sensitive to exploring power relations, and how these
are materially instantiated through the different kinds of access that people
have to communicational modes. It makes visible the uneven distribution of
modal resources across social groups and shows how these differences mark
power, as well as the ways in which people resist these markers of power.
An example of this is shown in Figures 6 and 7, which are from the
Production of School English project with Gunther Kress, Ken Jones, Anton
Franks, John Hardcastle, and others, on which I was the lead researcher (Kress
et al., 2004). The students are working in small groups to analyse a poem, and
the teacher joins each group for around five minutes each. The ways in which
the students and teachers interact with the objects on the table (the pens,
the dictionary, the printed poem) and their use of gaze, gesture, and posture,
all produce very different pedagogic relations. In the instance represented by
Figure 6, the teacher stands and leans informally across the table, she holds the
dictionary, the students and teacher look at one another, there are expansive
open gestures, the students write, and the talk is distributed across the teacher
and students, with multiple questions and answers.
In the instance in Figure 7, the teacher sits at the table, she holds the pencil and
the poem, the student and teacher gazes do not meet, and the posture is closed:
arms folded, with no gestures.
Difference
Multimodality enables the investigation of how modes are taken up differently
within specific environments and by different actors. Taking a multimodal
approach to communication and learning provides tools to look at change over
time and change across contexts, including technologies. It is an approach that is
increasingly being taken up in research on digital technologies, as it draws attention
to how technologies reshape modal practices. A multimodal understanding
of how digital technologies reshape modal practices moves beyond intuitive
ideas about what a technology can do, to provide a detailed analysis of the
resources of digital technologies, how these are used in situ, and what they can
and cannot do.
From one perspective, the changes in technology over the past decade may
not appear to have changed the relationships between teacher and students
and the school as an institution. From another perspective, technologies can
change classroom interaction in significant ways. For instance, drawing on
work with Gemma Moss and colleagues (Moss et al., 2007) on the roll-out of
interactive whiteboards (IWBs) in UK secondary schools, we can compare the
teaching of school English with an overhead projector (Figure 8), and with an
IWB (Figure 9) to explore how the use of a technology can shape the ways in
which a teacher moves around the classroom, how students are grouped, and
the kinds of texts that come into the classroom. These changes impact upon
the pedagogic relations and textual practices of the classroom.
New resources
Multimodality can contribute to the identification and development of new
digital resources and new uses of existing resources, particularly in digital
environments. In addition to creating inventories of modes and semiotic
resources and analysing how these have been, are, and can be used in a
range of specific contexts (an inventory of the past and the present),
multimodality can also contribute to imagining future resources and their uses.
Digital technologies have been key in reshaping modal resources.
Digital synthesizers and other digital technologies, for example, have reshaped
the possibilities of the human voice to create new resources and contexts for
the use of human voices in digital artefacts, public announcements, music,
and so on (van Leeuwen, 2005). This digital reshaping of voice has in turn had
an impact on the non-digital use of voice: for example, by providing different
tonal or rhythmic uses of the non-digital voice not previously imagined.
Similarly, the Emergent Objects project (www.emergentobjects.co.uk) brought
Innovative methods
Multimodality can contribute to innovative research methods. Through work
with Kress and colleagues in the Science and English classroom, we developed
methods for transcribing, sampling, and theorizing multimodal interaction
in the classroom, as well as addressing substantive questions about the role
of image, gesture, movement, and action with objects in the teaching and
learning of school science. We applied and further developed these multimodal
methods to understanding school English in the socially contested classrooms
of urban super-diverse schools (Kress et al., 2004). I also developed a multimodal
framework to explore how digital technologies reshape knowledge, literacy,
and practices of learning and teaching (Jewitt, 2006). This work led to a number
of methodological books, including The Routledge Handbook of Multimodal
Analysis (Jewitt, 2009) and The Sage Handbook of Digital Technology Research
(Price et al., 2013).
A holistic view of interaction and communication places significant
demands on research methods with respect to digital texts and environments
where conventional concepts and analytical tools (e.g. talk-aloud protocols, or
ethnographic field notes) may need rethinking (Price and Jewitt, 2013a). How
can research methods effectively capture and analyse the flow of materials
in online social interactions and other digital environments (Price and Jewitt,
2013b)? If digital environments are not fixed or frozen in time, how do we
archive them or make a corpus of data? Multimodality makes a significant
contribution to existing research methods for the collection and analysis
of data and environments within social research. For example, it enables
modal changes to elements in a multimodal configuration on screen, and their
subsequent meaning, to be mapped across different digital platforms, as it is
blogged, re-blogged, tweeted, and texted. These changes include: changes in
colour and in content through framing, cropping, and rescaling; new image-
writing relations through the use of captions; the addition of voiceover; new
contexts has been the primary focus of my work, and is thus the focus for the
rest of this paper.
The design and use of digital technologies also enable people to make
meaning in new ways, and this is another aspect of multimodal research and
my work. The multimodal features of technologies have consequences for how
knowledge is shaped and the ways people interact, their practices; in other
words, the ways that they do many things: for example, making learning
resources for a lesson, giving feedback, teaching in the classroom, reading
and writing, and so on, as well as more broadly shaping social relations
and identities. Digital technologies are thus a key site of theoretical and
methodological interest for me, and others, within multimodal research.
It is perhaps important to make clear that I am not suggesting that
technology determines people's meaning making; rather, we suggest that the
features of technologies (old and new technologies) provide different kinds
of constraints and possibilities for meaning making: technologies, like other
tools, shape what we do. In addition, the communicative potentials that shape
knowledge and the practices that people engage with are, I think it important
to note, fundamentally connected. The distinction here is to a large extent
analytical the multimodal design of these digital interfaces and interactive
environments, texts, and their communicative potentials is itself a practice.
This boundary between text and practice is perhaps especially blurry in the
context of digital technologies, where the user is often making the text by
their selections and journey through a digital environment. The distinction
is nonetheless a useful one for the purposes of this paper, in that it helps
to understand the constraints and potentials that a technology places on
communication and interaction.
In the next section I will briefly discuss some examples drawn from
my work and that of colleagues. Each example focuses on a practice common
to learning: reading, writing, teachers' design of resources, exploring and
building a hypothesis, physical exploration, and gathering and interpreting
data on field trips. These practices occurred prior to digital technologies
but, as I will show, have been transformed by the multimodal character and
features of technology. Each example will explore a specific digital technology:
computer applications, online resources, interactive whiteboards, physical
digital technologies, and mobile technologies. Each example will draw attention
to particular modes foregrounded by the technological design and its situated
use. I will discuss these examples along with a commentary on multimodality
with particular attention to the role of the visual, embodied modes (gaze,
gesture, posture, and so on) and modal aspects of space (traversals, pathways,
distance, direction and orientation, the bounding of spaces in different ways,
composition, layout, and other organizational structures of space). These
modes are sets of material resources and organizing principles that have been
shaped and conventionalized through their daily social use by people and
communities over time. The work of multimodal research is to unpack these
resources and organizational principles, and how they have been used, through
the detailed analysis of their materialization to get at their communicative
functions and use these to describe, critique, and design their use in relation
to social research questions.
gesture and body movement, their mood through the use of sombre or bright
lighting and colour. This multimodal reshaping gives the characters voice and
movement, and presents information not found in the novel.
The degree of multimodal representation of characters in such texts
serves to indicate the importance of characters within a story. In the case of
the digital version of the novel Of Mice and Men, this serves to reposition them,
with the marginal black and female characters gaining new import for the
contemporary reader.
Figure 10: A screen shot from the digital version of the novel Of Mice and Men
of the book. This kind of digital reshaping of the characters and their
relationships is significant for the interpretation of the novel. In addition, such
multiple visual representations serve to make the concept of character more
abstract, moving beyond a question of the individual reader's interpretation or
the author's intention.
While the original story is represented, the screen can hold less text than the
printed page and this, combined with the narrative guide, the hyperlinks, the
video excerpts, and character files, restructures or breaks up the narrative and
disconnects ideas that previously ran across one page to fragment the narrative
across screens. This creates a different narrative pace and structure, and re-
contextualizes the story and constrains its interpretation. In this process the
relationship between the written and visual elements on the screen becomes
complex, with the writing appearing in blocks that move across the screens to
reveal an emotional subtext of the story. Students working with a digital text,
such as this one, need to move between studying different layers and domains
of knowledge. An example of this is shown in the two images in Figures 11 and
12: the fictional level (Figure 11) shows a handwritten envelope addressed to
the character, while the factual level (Figure 12) shows that the envelope links
to a letter written by Steinbeck to an actor playing the character.
Figure 12: A screen shot of an opened hyperlink in the character file of Curley's wife
In digital texts and environments, the visual is often to the fore, and writing
is itself a highly visual element. The genres and practices of reading such
multimodal texts remain relatively open for the time being. Reading, or
perhaps more aptly, watching them introduces new resources and practices
for navigating, constructing, and understanding texts and provides different
routes into and pathways through texts. In this multimodal environment it
is clear that to persist in thinking of reading primarily in terms of writing is
problematic.
Figure 14: The representation of the rule 'Move the object to the right when the right arrow
control button is pressed' in ToonTalk, an animated programming language
'rule', 'condition', and 'action' are constructed. The Playground project (directed
by Richard Noss and Celia Hoyles) built computer environments for 4- to
8-year-olds to play, design, and create games using two different programming
languages: ToonTalk, an animated programming language (Figure 14), and
Imagine Pathways, a graphical version of Logo (Figure 15). These shaped the
potentials for learning with them. With Ross Adamson, I analysed the different
modal selections and combinations that these programming languages made
available, and the impact of these modal choices (including still image, gesture,
posture, speech, music, writing, and new configurations of the elements of
these) on the emergence of the mathematical entities 'rule', 'condition', and
'action' as the students programmed (Jewitt and Adamson, 2003).
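To make the contrast concrete, the same rule can also be written out as an explicit condition-action pair in ordinary, linear code. The sketch below is a minimal illustration in Python, not the actual syntax of ToonTalk or Imagine Pathways; the event name and object structure are invented for the example.

```python
# A rule bundles a condition (when to act) with an action (what to do),
# mirroring the 'rule, condition, action' entities discussed above, but
# in a purely written notation rather than a visual or animated one.

def make_rule(condition, action):
    """Return a rule: apply the action to the object when the condition holds."""
    def rule(event, obj):
        if condition(event):
            action(obj)
    return rule

def right_arrow_pressed(event):
    # Condition: the right arrow control button was pressed.
    return event == "right_arrow"

def move_right(obj):
    # Action: move the object one step to the right.
    obj["x"] += 1

move_on_right_arrow = make_rule(right_arrow_pressed, move_right)

creature = {"x": 0}
move_on_right_arrow("right_arrow", creature)  # condition met: the object moves
move_on_right_arrow("left_arrow", creature)   # condition not met: nothing happens
print(creature["x"])                          # 1
```

Even in this bare form, the choice of notation shapes what the learner attends to: the written version foregrounds the logical structure of the rule, where the visual versions in Figures 14 and 15 foreground objects, space, and movement.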
Figure 15: The representation of the rule 'Move the object to the right when the right arrow
control button is pressed' in Imagine Pathways, a graphical version of Logo
to the right when the right arrow control button is pressed (shown in Figures
14 and 15). The representations were varied, as can be seen. The visual objects
varied from symbolic to visually named; the spatial arrangements varied from
sequential, linear, left to right structures to multi-directional; genre varied from
equation, to cartoon to animation; the representation of realism varied from
scientific to everyday. The modal representations of 'rule' (and its constituent
elements 'condition' and 'action') in Imagine Pathways and ToonTalk differ in
important ways. Important, because they constitute the entity 'rule' in different
ways and provide the user with different resources for thinking about 'rule' and
about her/himself as a learner in relation to the system.
These resources then lead to different shapes of knowledge and
kinds of work for the learner. (They also provide new opportunities for self-
identification such as in the use of the robot as a pseudo avatar.) For example,
two students (aged 7 years) were working with the resources of Playground to
build a simple game (the full case study is reported in Jewitt, 2006). The game
concerns a small creature that is chased by an alien on a planet and fires bullets
to try to kill the alien. The planet landscape includes bars where the bullets
might bounce and miss or hit the creature. The students first use pen and paper
to design their game. They then work in Playground, using ready-made visual
elements and backgrounds, colour, movement, and sound to make the game.
In the students' written game design, the action of the bullets' bounce
is represented as a matter of movement and change of direction when
something is touched. The multimodal environment of Playground differs,
and raises two key questions for the students in their design: 'What is it that
produces bounce?' and 'What is it that bounces?'
Initially, the students programmed the sticks to bounce. It was the
visual experience of playing the game that led them to realize their mistake:
they played the game and the sticks bounced off! The students, however,
used gaze and gesture to solve the problem. The students created different
kinds of spaces on the screen through their gesture and gaze with the screen
itself and their interaction with, and organization of, the elements displayed
on the screen. These spaces marked distinctions between the different kinds
of practices with which the students were engaged. In their creation and use
of these spaces, the students set up a rhythm and distinction between game
planning, game design, game construction, and game playing. The students
gestured on the screen to produce a plan of the game: an imagined-space
overlaying the screen, in which they gesturally placed elements and imagined
their movement, and used gesture and gaze to connect their imagined
(idealized) game with the resources of the application as it ran the program.
The temporary and ephemeral character of gesture and gaze as modes enabled
their plans of the game to remain fluid and ambiguous.
The role of gesture was central to understanding their unfolding
programming process in three ways. First, gestures gave a way to see how they
coordinated visions and disagreements and built hypotheses through gestural
tracing and overlay to explore trajectories of movement. Second, examining
the students' use of gesture to identify points of gestural vagueness, wiggles,
and trailing off, helped to identify areas of difficulty and ambivalence. Third,
the students' use of gesture gave insight into their hypotheses.
These and other technological platforms, including digital simulation
applications (such as Interactive Physics for learning science), enable students
to manipulate elements on screen and involve them in embodied interaction
with the screen (gesture, gaze, and so on). Embodiment is also a key resource
in many digital spaces with represented bodies, like avatars, which offer a form
of virtual embodiment. Such environments offer new ways to embody a set of
identities outside one's own physical being, where the virtual avatar or visual
artefacts act as a tool through which identity and experience can be shaped.
Moving off-line, into a physical digital environment, which uses digitally
augmented objects that can be engaged with physically, reconfigures and
brings additional features and modes to embodied engagement. Multimodality
provides a set of resources to describe and interrogate these remappings, for
example to get at the interaction between the physical and the virtual body.
environment – how objects are manipulated and handled, the role of physical
touch and other physical sensations, and the role of body position, gaze,
manipulation, and speech in shaping interaction – are all resources on which to
draw. More generally, physical digital environments prompt the emergence of
sensory modal resources, such as physical (or haptic) feedback. For example,
the Wii gives sensory feedback via wristbands and body straps, and visually
through the use of virtual avatars. Another focus of multimodality is that
of how multimodal action flows and unfolds in time, particularly in terms
of pace, rhythm, and interaction structure, and the implications of this for
interaction and the processes that facilitate knowledge. The embodied and
the spatial resources that these physical digital environments make available
are intertwined. The spatial design of these technologies positions the screen/
surface to the users in a range of ways, and these require the user to engage in
physical–digital mapping, in ways that are interesting for what it means to
collaborate and play together.
In digital environments, embodied interaction practices are a central
part of how students communicate ideas, build hypotheses, explore, and
collaborate for learning. Here, I focus briefly on an example of students
learning scientific concepts and embodied learning in a physical digital
environment (Figure 16), and on how this supported forms of interaction and
enabled new action, physical, perceptual, and bodily experiences (Price and
Jewitt, 2013b), which in turn led to new practices for learning. This analysis
illustrates the different ways that 12 pairs of students, aged 10–11 years,
used and orchestrated the multimodal resources of a light-table, particularly
in terms of bodily posture, pace, and structure of activity and talk. How the
students worked with the table has implications for the process of meaning
making, in an independent exploration activity that involved learning about
the science of light. The analysis suggests that position choice affects how the
action and activity evolve. Different positions give different opportunities for
interaction; for example, where to look (gaze), point of view, and ease of access
to tangible objects at the side of the table and on the table itself. Positioning
themselves opposite each other made it equally easy for each student to
pick up a new object from the side, to manipulate any objects already on the
table, and to work simultaneously. This led to clashes of action and ideas and
to repositioning one another's blocks while creating a configuration on the
tabletop, in contrast to pairs who were adjacent to one another.
When positioned opposite each other, however, the time spent focusing on
observing each other's actions, rather than simultaneously doing, was less,
since their gaze was split between what they each were doing themselves and
what their peer was doing. Point of view also affected interaction, in that the
same point of view made sharing and gesturing to one another straightforward,
and some pairs exploited this mutual point of view in explanations.
Just as pace and rhythm were a feature in the practices of students
and teachers with screen-based digital environments, such as the online
digital novel and the IWB, pace of interaction and rhythm were shaped by
the students' embodied interaction with the light-table. Those opposite one
another were significantly faster paced in their interactions. This potentially
exposed the students to more experiences, but it also reduced the amount of
considered reflection time and a purposeful approach, and called for different
forms of systematic activity.
A multimodal approach offers a way of describing and classifying
embodied forms of interaction, which goes beyond looking primarily at
Concluding comment
Communicational resources have changed significantly over the past decade,
bringing music, image, and video into our everyday repertoires. Nearly all
students in the UK now have home access to the Internet and routinely carry
a mobile phone with digital camera, video, and MP3 player: new media are
pervasive. These changes have expanded the multimodal resources available
to students, multiplied the reading paths to be navigated, and introduced
practices of re-mixing and redesign of communicational forms. They raise
questions about the form and functions of writing and image in the classroom,
and highlight the complexity of digital writing and reading practices (and
speaking and listening).
I have shown that how knowledge is represented and experienced (the
choice of communicational modes and technologies) is crucial to
understanding knowledge construction. I have also pointed towards how
the situated use of these modal resources and digital technologies shapes
practices of teaching and learning in the multimodal environment of the
contemporary classroom. In particular, I have shown the potential of examining
the visual, the body, and embodied modes, as well as spatial modal resources,
for understanding how digital technologies reshape knowledge and practices
of learning and teaching.
No one can really know or predict what will happen with digital
technologies and how they will unfold over the next decade or so. I think it is
clear, however, that as the cost of digital complexity reduces, we will see the
mainstreaming of now-elite and costly interactional forms like eye tracking
and gesture-based interaction; we will see new functionalities, new forms of
interaction, and new whole-body digital devices and experiences. In short,
the multimodal interaction potentials of digital technologies will increase and
develop. We need new methods to research these, and multimodality is a very
good starting point from which to develop these.
As this work develops, I hope to bring multimodality into more
contact with digital art and design, to generate new questions through that
collaboration, and to take multimodality into new and interesting directions.
Acknowledgements
I would like to thank Jeff Bezemer, Gunther Kress, Sara Price, and Richard
Noss for their insights and comments on this paper, and the many research
participants whose practices provided the focus of the work discussed in this
paper.
References
Adami, E. (2010) Contemporary Patterns of Communication: The case of video
interaction on YouTube. Saarbrücken, Germany: Lambert Academic Publishing.
Barton, S. and Jewitt, C. (1995) Talking about Sex. In H. Curtis, T. Hoolaghan and
C. Jewitt (eds), Sexual Health Promotion in General Practice. Abingdon: Radcliffe
Medical.
Bayliss, A. and McKinney, J. (2007) Emergent Objects: Design and performance
research cluster. In T. Inns (ed.), Designing for the 21st Century: Interdisciplinary
questions and insights. Aldershot: Gower Publishing, pp. 150–65.
Bezemer, J. (2013) Gesture Beyond Conversation. In C. Jewitt (ed.), The Routledge
Handbook of Multimodal Analysis, second edition. London: Routledge.
Hassreiter, S., Walton, M. and Marsden, G. (2011) Degrees of Sharing: Public
voices, impression management and mobile video production in a participatory
media project for teens in Makhaza, Khayelitsha. Project report produced for
Nokia Research, February 2011.
Jewitt, C. (1997) Images of Men: Male sexuality in sexual health leaflets
and posters for young people. Sociological Research Online, 2 (2). Online.
www.socresonline.org.uk/2/2/6.html (accessed 15 March 2013).
Jewitt, C. (1998) A Social Semiotic Analysis of Male Heterosexuality in Sexual
Health Resources: The case of images. International Journal of Social Research
Methodology: Theory and Practice, 1 (4): 263–80.
Jewitt, C. (2002) The Move from Page to Screen: The multimodal reshaping of school
English. Journal of Visual Communication, 1 (2): 171–96.
Jewitt, C. (2005) Multimodal "Reading" and "Writing" On Screen. Discourse: Studies in
the Cultural Politics of Education, 26 (3): 315–32.