

1.4.2. Comics and Webcomics as Multimodal Works

Based on Kress’ and the NLG’s understanding of multimodality, it comes as no surprise that, in general, comics are seen as multimodal works by many scholars, researchers, educators and writers.

23 Excellent contemporary approaches to multimodality and multimodal literacy can further be found in the following academic titles: Introducing Multimodality (2016) by Carey Jewitt, Jeff Bezemer and Kay O’Halloran; Multimodality (2019) by Jeff Bezemer and Sahra Abdullahi; Rachel Heydon and Susan O’Neill’s Why Multimodal Literacy Matters (2016); and the early but fundamental Routledge Handbook of Multimodal Analysis (2009), edited by Carey Jewitt.

For instance, Frank Serafini (2011), referring to Kress, discusses in his article how multimodal texts, including graphic novels, require the reader to simultaneously process different modes, e.g. written text, visual images, and graphic design (p. 342). He also mentions that the younger generation is usually the recipient of such multimodal texts,24 and that knowledge about their composition and structure is key to understanding them (p. 346). In a similar vein, Dale Jacobs (2013) states that “reading comics involves a complex, multimodal literacy”, and that “by thinking about the complex ways comics are used to sponsor multimodal literacy, we can engage more deeply with the ways people encounter, process, and use these and other multimodal texts” (p. 3).

Therefore, Jacobs, much like Serafini, sees comics as a truly multimodal form with pedagogical and educational purposes. Here, Jacobs (2013) openly acknowledges his inspiration from the New London Group and emphasizes the group’s role in developing an innovative approach to texts and multimodality (p. 7). Above all, Jacobs (2013) notes that the comic medium has been miscategorized by many scholars and literary analysts as a “debased written text”, and comments on the need to consider comics as multimodal texts which are read thanks to multimodal literacy (p. 7).25 Jacobs (2013) gives a further, detailed explanation as to how exactly and in what way comics are multimodal:

Texts can and should affect our thinking about them. As texts, comics provide a complex environment for the negotiation of meaning, beginning with the layout of the page itself. The comics page is separated into multiple panels, divided from each other by gutters, physical or conceptual spaces through which connections are made and meanings are negotiated; readers must fill in the blanks within these gutters and make connections between panels. Images of people, objects, animals, and settings, word balloons, lettering, sound effects, and gutters all come together to form page layouts that work to create meaning in distinctive ways and in multiple realms of meaning making. In these multiple realms of meaning making, comics are inherently multimodal, a way of thinking that moves beyond a focus on strictly word-based literacy. (p. 9)

As such, Jacobs anchors the multimodality of comics in the negotiation of meaning and the making of connections, demonstrating just how much the modes found in comics interweave, and how much depends on their interpretation.

Moreover, it can be seen that Jacobs, much like Kress, brings to light the need to analyze multimodal works such as comics in a different way. In fact, Jacobs (2013) cites Kress directly in his approach to comics, saying that “Kress clearly believes that other semiotic modes do not operate in exactly the same way that language does” (p. 8). Such attention to the comic medium is important to its proper understanding, especially in the ways it operates and functions as a multimodal form. Furthermore, Jacobs’ and Kress’ understanding of multimodality allows for the proper application of the comic medium in education and learning. Naturally, for the needs of this thesis, this approach will be expanded on within the context of webcomics and translation.

24 A general term used by Serafini for video games, graphic novels, magazines, textbooks, picture books, etc.

25 One can note here the similarity between the terms multimodal literacy and multiliteracy.

It is important to note that while Jacobs’ musings on multimodality concern printed comics, they can also be applied to webcomics: when it comes to multimodality, what holds for printed comics holds in many ways for webcomics as well. In fact, Jacobs (2014) himself confirms this notion in an article describing webcomics as multimodal works, which he sees as “embedded in other multimodal sources of information” (par. 3). Furthermore, Jacobs (2014) sees webcomics as comics that have been re-adapted from their printed forms (par. 8), a notion which is in many ways true, especially for non-enhanced webcomics. Still, webcomics function in an online environment, which is a defining feature for them as multimodal works, thus impacting any processes they may undergo, including translation. The roles of the interface and the website are vital to the understanding of webcomics; indeed, both belong to the category of multimodal sources that Jacobs mentions.

Thus, multimodal theory can be applied just as easily to webcomics as to printed comics; what remains of import is, as always, to consider the context, along with the fact that additional modes can be rapidly added to a webcomic, causing a dynamic change in the reception of the work. As Jakob F. Dittmar (2012) points out with regard to digital comics26: “It has to be kept in mind that digitally transmitted comics that are shown on-screen but are not supposed to be printed out can use additional layers of narration apart from sequential juxtaposed images and texts, for example, audio material or animated sequences” (p. 88). Throughout his article, Dittmar (2012) emphasizes the multimodality of digital comics and discusses the impact of new technologies on their reception and composition, which in turn contribute to the creation of a new kind of storytelling.

Dudek (2020) offers a similar multimodal analysis and applies Conceptual Integration Theory (CIT) to explain humor in online webcomic strips; the theory postulates that “humans subconsciously produce meaning through conceptual links (mappings) of meaningful elements residing within mental spaces” (pp. 112-123). Dudek (2020) does mention that there are differences between webcomics and comics, e.g. the publishing and production process, as well as humor expectations; he points out that the webcomic strip genre usually aims at heightened humor and is by design meant to amuse through association and reference (p. 116).27 Yet Dudek (2020) rightfully notes that when it comes to conceptualization, there are no particular differences; a reader has to make multimodal connections and possess multimodal literacy in order to enjoy an online webcomic strip as much as a printed one (p. 116). Dudek (2020) notes that “an alternative mixture of modes of expression” (p. 115) is used to convey a certain goal in comics, in this case humor, which occurs in webcomics as well: “each webcomic producer makes use of various meaning-making mechanisms that may impact the final interpretation” (p. 124).

26 Dittmar (2012) uses both the terms ‘digital comics’ and ‘webcomics’, and further differentiates comics as ‘download only’ or ‘online readable’. However, he sometimes uses ‘digital comics’ as a blanket term.

27 Interestingly, Dudek (2020) does not mention syndicated print newspaper strips and cartoons, which also feature heightened expectations for humor due to their form and history.

Naturally, webcomics reside in a different environment and can touch upon different topics; in Dudek’s (2020) article, the analysis focuses on webcomics ingrained in typical online/meme humor, while Wilde (2015) points out that the online environment and its potential surrounding attachments/paratexts, e.g. message boards and social media, influence webcomics’ features and functions (p. 9) and, by extension, reader interpretation. However, at its core, the comic medium’s structure creates meaning in a way that is not dissimilar to that of its printed counterparts. Jerzy Skwarzyński (2019) offers a comprehensive study on printed comics and multimodality in which he emphasizes the importance of not simply the imagetext, but also the surrounding paratext that may not be as apparent: “The essential components of visual grammar and pictorial vocabulary, including composition, perspective, foregrounding or symbolia, must be appreciated and analyzed with care, just like the text of all multimodal content” (p. 116). His conclusion is just as applicable to non-print comics as it is to printed ones.

Therefore, one can conclude that webcomics’ multimodality is defined by the way each and every mode within a webcomic interacts not only with the others, e.g. animation with music, image with text, but also by the way the sum of these modes interacts with its surrounding multimodal online environment. The previously mentioned taxonomy in this thesis, relying on enhanced and non-enhanced webcomics, becomes inherently useful here; as can be deduced, non-enhanced webcomics are not as closely intertwined with their multimodal online environment as their enhanced counterparts. The coordination and interaction of modes is something that Dittmar (2012) also mentions, noting that while modes can be considered separately, they also need to be considered in combination, as a whole, in order to appropriately assess and interpret the entire comic itself (p. 84). Dittmar (2012) also refers to an interesting term, the so-called panel (or meta-image), which “consists of all its individual images and combination of their designs” (p. 84).

Although the over-focus on the image itself could be criticized, it can be seen that scholars like Dittmar and Jacobs attempt to treat digital comics and webcomics alike as an interactive whole; while they have elements that can be deconstructed and analyzed for certain needs, such as pedagogy in Jacobs’ case, overall they must be considered as a complex work in order to be adequately understood. Jacobs (2014) analyzes two webcomics with this mindset: Randall Munroe’s famous xkcd (2006-current) and Josh Neufeld’s noteworthy A.D.: New Orleans After The Deluge (2007-2008), which addresses the tragedy of Hurricane Katrina. Jacobs (2014) concludes that, as webcomics functioning in an online environment, both titles “[involve] us as readers in not only the linguistic realm, but also in the visual, the audio, the gestural, and the spatial. As readers, we make meaning from each of these semiotic modes, but we also engage in multimodal design where the meanings from these multiple modes come together to create overall meaning” (par. 15).

Multimodality is a crucial component in comics theory, extending towards comics functioning in the digital sphere. Depending on the modes and media used, certain collaborative meanings are made, meanings which are further dependent on how the reader interprets them. As McCloud (1993) wisely points out in Understanding Comics, “an equal partner in crime is the reader” (p. 68).

As such, it would naturally be important to delve deeper into multimodal comics theory; often, comics are seen as a kind of language, a vocabulary, a grammar, one wholly dependent on the reader’s background knowledge and presuppositions. It is here that Gerard Genette’s, Neil Cohn’s, Thierry Groensteen’s and Scott McCloud’s detailed multimodal theories pertaining to comics become important in not only understanding the comic medium, but also being able to further apply it to the webcomic realm.28 The following approaches were chosen as they were seen as most relevant to the question of comics and multimodality in this subchapter.

1.4.3. Multimodality and the Comic Theories of Scott McCloud, Neil Cohn, and Thierry Groensteen