Monday, 25 April 2011

A Feature Film Is Not the Equivalent of a Novel

The other day, I finished watching the second season of Sons of Anarchy, and I was yet again reminded of something I have often thought over the last decade (and, in all honesty, other minds have presented the same or similar ideas): the cinematic equivalent of a novel is not a feature-length film, but a TV-series. The aforementioned Sons of Anarchy is a clear example of this, as is the absolutely superb series The Wire. But how? And what do I mean when I write this? Perhaps even more importantly, what does this imply for the visual medium of moving pictures?

Relax. I will explain myself.

Let us first remember the distinction between series and serial (which I have discussed earlier) and the fact (as also mentioned) that the TV-series format has slowly moved from series-structured thinking towards more serial-structured thinking. As in most cases, there is no sharp dividing line; there is room for series with some serial elements, to be sure. However, that is not to say that clearly distinguished specimens of both categories cannot be found without any problems. For all the shades of grey, there are still black and white on the spectrum too, and the distinction is therefore useful both in black-and-white terms and on a shaded scale (where the shades can be judged by the relative presence of either category in any given case).

Given that the format has moved in this direction, it is perhaps not strange that full-blown TV-"serials" (outside of the already existing mini-series format) have proven successful. But how does this relate to my claim? A novel tends to use a different pacing than a feature film. It is (usually) divided into an unset number of chapters, a feature which shapes the narrative, both in practical terms of how readers interact with the text (chapter breaks being preferred pausing points for many readers) and in terms of dramaturgy. By the latter, I mean that chapters are, as a general rule, structural devices on the writer's part, creating narrative units of drama that in some sense almost hologrammatically* mirror the whole. At least on a general level. Chapters often have an internal build-up, not infrequently ending with a cliffhanger of sorts, which drives the reading ever onwards.

Now, feature films clearly do not lack dramaturgy, and they are often, dramaturgically speaking, divided into acts (which at least on some level can be compared to chapters; this being something of a simplification of a much more complex relation, of course). Nevertheless, feature films are relatively short and definitely meant for single-sitting consumption. Novels can be consumed in a single sitting too, but certainly need not be (and in more cases than not, their length puts such consumption beyond most readers).

In a TV-series (or -"serial" if you will), each episode serves as a dramaturgically structured unit within a larger dramaturgical structure. That is to say, each episode functions like a chapter in a novel. Now, a feature film cannot go on for hours and hours and hours on end. And yes, I am aware that some films do, but they really raise the question of how viewable they actually are, and whether they in fact resort to a somewhat episodic structure anyway, and in some sense then mimic the novel/TV-"serial" structure. Still, as a general rule, films over the three-hour mark are fairly rare on the whole, and many if not most viewers find them rather bothersome to watch, for obvious reasons. TV-"serials", on the other hand, can easily go on for hours and hours and hours and... simply because they, unlike feature films, do not run on without pause. They come with ready-made pauses, where we can catch our breath and stop for a bit to digest what has been going on. As with a novel, we can read/view one chapter/episode at a time, or as many as we happen to have time for at any given moment.

Even more importantly, this less temporally compact structure allows for slower pacing, additional subplots and greater complexity on the whole. I have discussed adaptation in here before, and this certainly has bearing on it. The medium of film has always enjoyed adapting literature, but it is worth noting that slimmer novels, novellas or even short fiction often make for better films. Or at least better adaptations. Simply because the ratio between story content/plot and narrative length is more even and requires less chopping, cutting or slimming down. Naturally, the line is neither singular nor sharp, at least not in terms of page count vs. running time. After all, a thick tome spending most of its pages on visual descriptions which a camera can capture in a single image stands a much better chance of remaining intact and on time than an equally thick tome where the bulk of the pages is spent delineating and developing a heavily complex intrigue.

Differently put, TV-"serials" not only have more time in which to tell the story and more space in which to include more story content, but also the opportunity to pace the storytelling differently, to allow more characters and character voices to be heard, to be (in some sense at least) in focus. There is time and room for a narrative to breathe, to develop over time, and consequently (at least potentially) to hit you even harder with its moments of emotional impact.

There is no doubt in my mind that Alan Moore and Dave Gibbons' Watchmen would have made a much better TV-"serial" than a feature film. I have my reservations about certain things in the film adaptation (some of which I may well discuss in here at some point), but my main complaint is still that even the good bits did not get enough time to breathe, were not allowed to develop and expand, but rather ended up feeling rushed (and consequently less than satisfactory). Consider the time Rorschach spends with the shrink, the slow revelation that one issue (or chapter, if you will) builds up towards, and its heavy impact, and compare that to the swift rush job of the film. In a TV-"serial", that would easily have been an episode in its own right. And a mighty fine one at that, had Jackie Earle Haley still been in the role.

As I write this, I have not yet had a chance to watch the TV-"serial" adaptation of George R. R. Martin's A Game of Thrones (though I have watched and enjoyed the 15-minute preview available on the HBO website), but it certainly looks promising. And it bodes well if we are seeing the beginning of a trend that will bring more adaptations of novels into moving pictures that are not feature films but TV-"serials".


* Holograms are, among other things, known for the fact that each piece contains the whole image.

Monday, 11 April 2011

Who Will Actually Know How to Change the Batteries When They Run Out?

A friend of mine recently said that her children will be computer whizzkids more or less by default, and I questioned this notion. Do not get me wrong. It is not as if I question the ability of my friend's children to actually become computer whizzkids. They are still very young and have plenty of time to develop such skills. What I questioned was the belief that these skills will be more or less inherent in this generation of children. And this belief is more common than one would honestly wish.

The argument relies on the fact that we live in a society where computers are all around us and that everyone, from an early age (and for better or worse), is forced to interact with this technology on a daily basis. This is undeniably true in this age of the internet, social media, smartphones, etc., and I certainly will not make any attempt to deny it. What I strongly question is the flawed assumption that simple exposure will computer whizzkids make. And I would point out that if that is what we believe, maybe we need to redefine our understanding of what a computer whizzkid is.

It is true that the current generation of children will not have (some of their) parents' (not to mention grandparents') fear of using some of these devices and media. Having grown up around them, the children will no doubt see these devices as a natural part of their lives. But having no fear of using a device or medium does not by default mean mastery of said device or medium. In fact, in this specific context, there are gaps, as I hope to show.

The first gap is this: computers and other devices today (like smartphones) are designed to be user friendly. This is of course a plus, as it makes these media accessible to more people, and as such makes the many fora opening up all the more democratic. Or at the very least, the possibility of their being democratic is heightened by the fact that almost anyone can use them, and certainly without an advanced degree in technology or physics. On the downside, there is quite naturally a greater risk that fewer people understand how the technology behind the devices actually works. And this, of course, applies to both hardware and software.

To illustrate this point further, I would like to point to the situation of my own generation. We were the first generation exposed to home computers. I had several friends who had a Commodore 64 or 128. For my own part, I opted to get a Nintendo, simply because it was more user friendly when it came to gaming (no real loading times, no winding cassette tapes back and forth – and yes, in the early stages games came not only on now-obsolete floppy disks but on actual cassette tapes). My decision was one of convenience. Yet, while there is no doubt in my mind that my computer-owning friends had their computers for gaming (just as I had my Nintendo), I note with interest that quite a few of them have gone on to careers in software programming or computer support.

The key to this is not strange, of course. While I saved a bit of time on loading my games whenever I wanted to play, they gained early insights into the structure and language of computer code. It is said that necessity is the mother of invention. I would go even further and say that necessity can also be a great motivator for learning. Or differently put: human beings are quite often lazy by nature. If we do not have to know something, chances are we will not bother to acquire the knowledge.

This sort of brings us to the second gap I wanted to discuss: we are already a few generations into the supposedly naturally computer-savvy homo sapiens computer (if you pardon the expression), yet I have had students who do not know how to use simple functions and tools in word processing programs. And before you object that everyone cannot be expected to know everything (which is certainly true), remember that one of the core arguments for claiming that young people increasingly belong to a new breed of computer whizzkids is that these people (because of their fearless nature) will be able to find their way on their own, even if they do not have the specifics in their heads at the outset. If this were true, how come they stumble on very simple problems like changing font sizes and types, line spacing, or adding page numbers, headers and footers? Or using the spellchecker? Quite simply put, the less we need to scratch our way below the surface of things, the less we will bother to find out what is underneath that surface, let alone know and understand how whatever is there actually works.

As user friendliness evolves, the average user (albeit quite obviously belonging to a much greater community of users) will know much less about writing code than my childhood friends who did their gaming on computers; indeed, much less than I do (and I would by no means claim to be a computer whizzkid), who acquired no knowledge of writing code, but at least some basic understanding of information structures in some operating systems, after years of using computers as tools for word processing and the like.

This problem (and make no mistake, it is a problem) can be compared to an obvious one in the field of car mechanics. When I was a kid, it was not uncommon for people to fix their cars themselves whenever problems arose. These days it is more or less impossible for an amateur to dive under the hood of a car and fix anything, because of computerised systems, and nuts and bolts that require specialised tools. In short, the further along we have come, the less car users know about how their vehicles work and, more importantly, how to fix them should something break down.

Similarly with computers: the more user friendly and surface oriented they become, the less informed most users will be about whatever happens beneath the surface. And the worse that negative ratio grows, the more hopeless the scenario starts to look when something goes awry.

A while back, my internet pal Zaki Hasan wrote a blog post entitled "The Death of Saturday Morning". The post deals with the technological advances in television broadcasting and recording, which have forever changed the playing field and made Zaki's (and my own) memories of waiting for a show to start (rather than having it instantly appear at the press of a button, whenever required) somewhat obsolete and incomprehensible to the current generation of children. There is something of an all-consuming id being woven into the fabric of our society in the underlying question of necessity: why wait?

Why wait indeed? I will not pretend that I am not a fan of watching TV-series on DVD, in order to avoid being bound by a TV guide schedule dictating when I can see what, and how much of it. But that confession does not deny that there is an inherent problem in a culture that seems to be developing a greater and greater need for instant gratification, while simultaneously developing technology that ensures that we need not acquire "useless" knowledge about how our tools actually work. If we continue down this route, where will we end up?

Or differently put: eventually, who will actually know how to change the batteries when they run out?