Communication in the 21st century has become multimodal. A variety of primary modes,
including text, narration, movement, colour and sound, are also “translated” and delivered
in secondary modes through electronic devices. One of the genres that is characterised by
multimodality is audio-visual presentation. In students’ presentations, speech is normally the
primary mode, supported by text, movement, still pictures and colour. In oral presentations,
physical gestures, gaze and other body language function as part of multimodal ensembles.
This article focuses, in particular, on the hand gestures that form part of such multimodal ensembles.
In scholarly literature, the word “gesture” is used to imply that the actor has some
voluntary control over a movement and its meaning. Gestures are “actions” demonstrating
“features of manifest deliberate expressiveness” (Kendon, 2004:14, 15), which involve the
hand and arm movements humans make when they speak (Seyfeddinpur, 2011:148; Roth,
2001:368).
Much of the theoretical literature on gestures deals with how they are cognitively processed.
De Ruiter (2007) distinguishes three main “architectures” that account for different viewpoints
on the processing of manual gestures: Window Architecture (Beattie, 2003) assumes that
gestures come straight from the mind, without mediation by language. Language Architecture
assumes that the language a person speaks affects their gestures. Postcard Architecture implies
that words, speech and gesture arise together from an underlying propositional representation
that has both visual and linguistic aspects (Tenjes, 2001:317). Kendon (1980; 2004) and
McNeill (1992:2) have consistently emphasised the unity of speech and gesture.
Postcard Architecture resonates with the view in recent and current studies of multimodality that there are no distinct semiotic systems in the human brain, but rather one integrated
repertoire of linguistic and semiotic practices from which communicators constantly draw
(Garcia, 2009; Garcia & Li, 2014; Canagarajah, 2011; Mazak, 2017). Sign makers make
meaning by drawing on a variety of modes that combine with others in “ensembles” (Kress
& Van Leeuwen, 1996; 2001; Kress, 2010). However, there is still no general consensus on
whether gestures primarily explicate the thought processes of the speaker, or intentionally
communicate information to the audience or interlocutor, or both. For the purpose of this
article it is deemed sufficient to recognise that gestures constitute part of multimodal ensembles,
in which oral discourse is the primary mode of communication; that gestures do have
communicative value; and that people use gestures in accordance with their communicative
goals (Müller, Bressem & Ladewig, 2013:713).
The majority of scholars today view language and gestures as semiotic systems of which
the signs have form and meaning. It is also generally accepted that gesture studies need their
own vocabularies in order to talk about their mode-specific formal characteristics. Below, an
overview is given of the most cited gesture typologies and the nomenclatures for describing
some of the formal characteristics of gestures.
From the various gesture typologies, in particular McNeill (1992), Tenjes (2001) and
Müller (1998; 2004), the following typology has been distilled to serve as a basis for the
semantic categorisation of gestures (an illustrative coding of the typology follows the list):
• Representational
   – Concrete: Iconic
   – Abstract: Metaphorical
• Referential
   – Concrete (referent present in the discursive space)
   – Concrete (referent absent from the discursive space)
   – Abstract (metaphorical)
• Emblems
• Beats
• Discourse gestures
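For readers who wish to apply this typology when coding a corpus, it can be expressed as a simple annotation scheme. The following Python sketch is purely illustrative; the member names are assumptions made for this example and are not labels taken from the cited authors.

```python
from enum import Enum

# Illustrative only: the typology above expressed as a coding scheme that could be
# used when annotating a gesture corpus. Member names are assumptions for this sketch.
class GestureType(Enum):
    REPRESENTATIONAL_ICONIC = "representational: concrete (iconic)"
    REPRESENTATIONAL_METAPHORICAL = "representational: abstract (metaphorical)"
    REFERENTIAL_CONCRETE_PRESENT = "referential: concrete, referent present in the discursive space"
    REFERENTIAL_CONCRETE_ABSENT = "referential: concrete, referent absent from the discursive space"
    REFERENTIAL_ABSTRACT = "referential: abstract (metaphorical)"
    EMBLEM = "emblem"
    BEAT = "beat"
    DISCOURSE = "discourse gesture"
```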
The article also discusses aspects of gestures that resemble syntax in language, viz. handedness
(left, right or both), semi-conventionalised hand shapes and orientation of the palm and fingers,
as well as the position of gestures in the gesture space.
Subsequently, a description is given of a research project that analyses co-speech gestures
in a corpus of 17 video-recorded group presentations by first-year students of theology
registered for the module “Academic Literacy for Theology” on the topic “The evaluation of
ten church websites according to criteria from a published theology article” (Waters & Tindall,
2010). Data were captured during three cycles (a schematic code sketch of the text-splitting step follows the list):
• Extracting the speech from each video with the software program Subtitle Edit, saving it in a Microsoft Excel file, and editing it.
• Dividing the text according to presentations and the (speech) turns of group members, and saving the parts in separate Word documents.
• Watching each video again while dividing all the relevant gestures into the five main types and their subtypes, and inserting them as still pictures into the relevant files, for example “metaphor” or “abstract deixis”.
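A minimal sketch of the text-splitting step is given below, assuming the edited transcript is exported as a CSV file with presentation, speaker and text columns; this layout, the function name and the file names are assumptions made for the illustration, not the project’s actual set-up.

```python
import csv
from pathlib import Path

def split_transcript(csv_path: str, out_dir: str) -> None:
    """Write each speech turn to its own text file, named by presentation, turn number and speaker."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with open(csv_path, newline="", encoding="utf-8") as f:
        # Assumed column names: "presentation", "speaker", "text".
        for turn, row in enumerate(csv.DictReader(f), start=1):
            name = f"{row['presentation']}_{turn:03d}_{row['speaker']}.txt"
            (out / name).write_text(row["text"], encoding="utf-8")

# Example usage (paths are placeholders):
# split_transcript("presentations.csv", "turns/")
```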
During the analysis, all captured images of the gestures demonstrating “manifest deliberate
expressiveness” (Kendon, 2004:15) were further categorised according to their formal
characteristics (handedness, position in the gesture space and hand shape). A number of
representative examples from each of the main typological categories (deictic, iconic and
metaphoric) were described in detail. This was done by replaying the relevant video clip a
number of times, listening to the speech, and interpreting the information with reference to
the literature review.
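As an illustration of how such categorised data can be tallied, the sketch below models a single annotation record and computes the relative frequency of hand shapes within one gesture type; the field names and example values are assumptions made for this sketch, not the project’s actual coding scheme.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class GestureAnnotation:
    gesture_type: str  # e.g. "deictic", "iconic", "metaphoric"
    handedness: str    # "left", "right" or "both"
    space: str         # position in the gesture space, e.g. "centre-centre"
    hand_shape: str    # e.g. "open hand", "C-shape", "shallow cup"

def shape_frequencies(annotations: list[GestureAnnotation], gesture_type: str) -> dict[str, float]:
    """Relative frequency (%) of each hand shape within one gesture type."""
    shapes = [a.hand_shape for a in annotations if a.gesture_type == gesture_type]
    counts = Counter(shapes)
    total = len(shapes) or 1
    return {shape: round(100 * n / total, 1) for shape, n in counts.items()}
```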
The following table gives a brief overview of the findings regarding handedness, position
in the gesture space, and the hand shape of the 232 analysed gestures.
The table shows that both hands are used for approximately two thirds of the iconic and
metaphorical gestures, while deictic gestures are predominantly produced with only one hand.
This is not surprising, given the referential (demonstrative) purpose of deictic gestures (for
which only one hand is necessary), as opposed to iconic and metaphorical gestures, which
have a representative function, best achieved by using both hands. Findings regarding the
gesture space are well aligned with findings in the literature. Regarding hand shape, the C-shape and
flat C-shape occur in 17% of the iconic gestures, while the shallow cup occurs at similar
frequencies (between 10.4% and 13.1%) across all three main typological categories. An
unexpected, but positive, finding was that the open hand is the preferred shape for deictic
gestures (63%), and not the pistol shape, which is strongly discouraged in the advice literature
on the use of gestures in presentations.
The article concludes with advice on the use of gestures in students’ audio-visual
presentations. The advice is based on the gesture theories and typologies discussed earlier,
as well as on the analysis of the corpus.
Communication in the 21st century has largely become multimodal. A variety of primary
modes, for example written text, oral narration, movement, colour, sound and music, are
also electronically “translated” and delivered secondarily by computers, cell phones and tablets. One
of the genres characterised by multimodality is the audio-visual presentation. In
students’ presentations, speech is usually the primary mode, supplemented by text,
animation (movement), photographs and colour. The oral presentation functions as part of a
multimodal ensemble, in which the unique properties of each mode are exploited to convey meaning.
First, I give a definition of the concept “hand gesture”, followed by an exposition
of generally recognised gesture theories and typologies that classify gestures according to semiotic
gesture types, handedness (left, right or both), conventionalised hand shapes and palm orientations, and movement and position in the gesture space. This is followed by a description of a
research project based on the analysis of co-speech gestures in a corpus of 17 video-recorded oral presentations by theology students. The article concludes with
point-by-point advice on the use of gestures in students’ audio-visual presentations. The
advice is based both on theories and typologies of gestures and on the analysis of the corpus.
Where relevant, reference is made to manuals on communication and public speaking.