A friend of mine recently said that her children will be computer whizzkids more or less by default, and I questioned this notion. Do not get me wrong: it is not as if I question the ability of my friend's children to actually become computer whizzkids. They are still very young and have plenty of time to develop such skills. What I questioned was the belief that these skills will be more or less inherent in this generation of children. And that belief is more common than one would honestly wish.
The argument relies on the fact that we live in a society where computers are all around us and that everyone, from an early age (and for better or worse), is forced to interact with this technology on a daily basis. This is undeniably true in this age of the internet, social media, smartphones and the like, and I will make no attempt to deny it. What I strongly question is the flawed assumption that simple exposure will computer whizzkids make. And I would point out that if that is what we believe, maybe we need to redefine our understanding of what a computer whizzkid is.
It is true that the current generation of children will not have (some of) their parents' (not to mention grandparents') fear of using some of these devices and media. Having grown up around them, the children will no doubt see these devices as a natural part of their lives. But having no fear of using a device or medium does not by default mean mastery of said device or medium. In fact, in this specific context, there are gaps, as I hope to show.
The first gap is this: computers and other devices today (like smartphones) are designed to be user friendly. This is of course a plus, as it makes these media accessible to more people, and as such makes the many fora opening up all the more democratic. Or at the very least, the possibility of their being democratic is heightened by the fact that almost anyone can use them, and certainly without an advanced degree in technology or physics. On the downside, there is quite naturally a greater risk that fewer people understand how the technology behind the devices actually works. And this, of course, applies to both hardware and software.
To illustrate this point further, I would like to point to the situation of my own generation. We were the first generation exposed to home computers. I had several friends who had a Commodore 64 or 128. For my own part, I opted for a Nintendo, simply because it was more user friendly when it came to gaming (no real loading times, no winding cassette tapes back and forth – and yes, in the early stages not only now-obsolete floppy disks were used but also actual cassette tapes). My decision was one of convenience. Yet, while there is no doubt in my mind that my computer-owning friends had their computers for gaming (just as I had my Nintendo), I note with interest that quite a few of them have gone on to careers in software programming or computer support.
The reason for this is no mystery, of course. While I saved a bit of time on loading my games whenever I wanted to play, they gained early insights into the structure and language of computer code. It is said that necessity is the mother of invention. I would go further and say that necessity can also be a great motivator for learning. Or, put differently: human beings are quite often lazy by nature. If we do not have to know something, chances are we will not bother to acquire the knowledge.
This brings us, in a roundabout way, to the second gap I wanted to discuss: we are already a few generations into the supposedly naturally computer-savvy homo sapiens computer (if you will pardon the expression), yet I have had students who do not know how to use simple functions and tools in word processing programs. And before you make the argument that everyone cannot be expected to know everything (which is certainly true), one of the core arguments for claiming that young people increasingly belong to a new breed of computer whizzkids is that these people (because of their fearless nature) will be able to find their way on their own, even if they do not have the specifics in their heads at the outset. If this were true, how come they stumble on very simple problems like changing font sizes and types, adjusting line spacing, or adding page numbers, headers and footers? Or using the spellchecker? Quite simply put, the less we need to scratch our way below the surface of things, the less we will bother finding out what is underneath that surface, let alone know and understand how whatever is there actually works.
As user-friendliness evolves, the average user (albeit a member of a much greater community of users) will know much less about writing code than my childhood friends who did their gaming on computers; will know much less, even, than I do myself – and I would by no means claim to be a computer whizzkid, having acquired no knowledge of code writing, merely some basic understanding of information structures in some operating systems after years of using computers as tools for word processing and the like.
This problem (and make no mistake, it is a problem) can be compared to an obvious one in the field of car mechanics. When I was a kid, it was not uncommon for people to fix their cars themselves whenever there was a problem. These days it is more or less impossible for an amateur to dive under the hood of a car and fix anything, because of computerised systems and nuts and bolts that require specialised tools. In short, the further along we have come, the less car users know about how their vehicles work, and more importantly, how to fix them should something break down.
Similarly with computers: the more user friendly and surface oriented they become, the less informed most users will be about whatever happens underneath the surface. And the worse that negative ratio grows, the more hopeless the scenario looks when something goes awry.
A while back, my internet pal Zaki Hasan wrote a blog post entitled "The Death of Saturday Morning". The post deals with the technological advances in television broadcasting and recording, which have forever changed the playing field and made Zaki's (and my own) memories of waiting for a show to start (rather than having it instantly appear at the press of a button, whenever required) somewhat obsolete and incomprehensible to the current generation of children. In the underlying question of necessity, there is something of an all-consuming id being woven into our societal fabric: why wait?
Why wait indeed? I will not pretend that I am not a fan of watching TV series on DVD, in order to avoid being bound by a TV guide schedule dictating when I can see what and how much of it. But that confession does not deny that there is an inherent problem in a culture that seems to be developing a greater and greater need for instant gratification, while simultaneously developing technology that ensures that we need not acquire "useless" information about how our tools actually work. If we continue on this route, where will we end up?
Or differently put: eventually, who will actually know how to change the batteries when they run out?
Just a comparison that struck me: if we could take a person from the 17th century and put him in a car (as a passenger), he would probably be terrified. A modern child, on the other hand (under normal circumstances, of course), would just jump into the car without a second thought. Yet the child is most likely no better a mechanic than the guy from the 17th century.
The second thought that struck me is from the Foundation trilogy by Asimov. At the end of the great empire, they still used all of the technology, but no one alive knew HOW it worked. Just THAT it worked and almost never broke.
In the same way, as computers get more and more user friendly and more common in use, people will get less interested in what's going on behind the curtain of the windows on their desktop. As long as it works, why turn the stone and check what's underneath it?
@Pixy: I would obviously agree with those arguments, since they basically echo my own. The obvious point, and one which I think Asimov's vision somewhat misses, is that sooner or later there will be a need to pull the curtain or turn the stone (depending on which metaphor we prefer), yet by that time we may well find ourselves in the position of the 17th-century person faced with the car, and as such both terrified and clueless.
Heck, we might not even know how to pull the curtain or turn the stone effectively. At least not without destroying whatever device we handle in the process.