From Information Literacy to Procedural Literacy

—Or, How to Be Literate in Algorithmic Culture



Lisa Gye




Media literacy, as a concept and a goal, has been established as an integral part of Australian schools’ curricula. There is now widespread acceptance that students should be able to read, respond to, interrogate and create a variety of media texts in a range of media formats. Of course, it wasn’t always this way. The inclusion of so-called non-traditional texts like newspapers, television programs and films had to be fought for, and resistance to it still resurfaces from time to time. In contrast, the almost universal expectation, from employers, parents, governments and the media, that students be equipped with information and communication technology (ICT) skills is met with far less controversy. Presumably this is because ICT-literate graduates, furnished with the requisite proficiencies and competencies required by cognitive capitalism, are indispensable and desirable labourers in the ‘digital economy’. But there are good reasons to ask what is meant by information literacy at a time when we are being transformed, both collectively and as individuals, no less profoundly than when we learned to read and write.


So what makes one ICT literate? Under the set of ‘proficient standards’ drawn up by the Australian Curriculum Assessment and Reporting Authority (ACARA), a student at Year 6 level should be able to:


generate simple general search questions and select the best information source to meet a specific purpose, retrieve information from given electronic sources to answer specific, concrete questions, assemble information in a provided simple linear order to create information products, use conventionally recognised software commands to edit and reformat information products. (ACARA, National, 30)


The proficient standard set for Year 10 indicates that a student at this level should be able to:


generate well targeted searches for electronic information sources and select relevant information from within sources to meet a specific purpose, create information products with simple linear structures and use software commands to edit and reformat information products in ways that demonstrate some consideration of audience and communicative purpose. (29)


This is the media literacy equivalent of being able to read the television guide and make a list of the shows you want to watch by Year 6 and write a new version of the television guide targeted at your mates and their interests by Year 10. There is no mention in any of the ICT proficiency standards of being able to read software or understand programming. Questions relating to the corporate control of the development, ownership and dissemination of software and the material it transmits and encodes are completely elided. The evolution of media studies in school curricula was driven by the belief, rightly or wrongly, that understanding how media was made, who made it and for what purposes, coupled with critical strategies for decoding media texts, would furnish students with the requisite skills to navigate their media-saturated world in a way that enhanced their ability to participate more effectively in that world as informed, democratically minded citizens. The current expectations of students in relation to information literacy fall well short of these lofty ideals.


So should this matter? Frankly, yes it should. It should matter even more now that the regulatory frameworks that constrain our engagement with information technologies struggle to come to terms with increasingly globalised, deregulated flows of information. As the cultural techniques of information technologies become less visible to those of us not inculcated into the kabbalistic world of computer programming and network infrastructure, the need to question these processes becomes even more urgent. At the very least, understanding the workings of algorithms might provide us with a starting point.



Procedural Literacy and Algorithmic Culture

As things stand we simply do not understand how the material infrastructures of Web 2.0 play out in the lives of individual users, how the software constrains and enables, how it formulates hierarchies, shapes the things people encounter, and so on. (Beer, ‘Power’, 985)


A very basic definition of an algorithm is a series of ‘encoded procedures for transforming input data into a desired output, based on specified calculations’ (Gillespie, ‘Relevance’). They are the engines that drive our interaction with data in a range of contexts, from recommendations on Amazon and Google to what we see in our Facebook feeds. They have the power ‘to shape auditory and cultural experiences’ (Beer, ‘Power’, 996). The key word here is power. Algorithms are decision-making systems. ‘Their job is to consider, or weigh, the significance of all of the arguments or information floating around online (and even offline) and then to determine which among those arguments is the most important or worthy’ (Striphas, ‘Algorithms’). But they are opaque decision-making systems because, as David Beer points out, most of us simply don’t understand them or how they wield their incantatory magic.1 Just as we were dazzled by the magic of television in the 1950s (not to mention those nifty devices that let us change channels without leaving the comfort of our chair), so too are we now captive to a technology that works as if by magic simply because we cannot immediately observe the sleights of hand that manipulate the outcomes that algorithmic processes are calculated to produce.
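To make Gillespie’s definition concrete, consider a minimal sketch of a ‘feed ranking’ algorithm in Python. Everything here is hypothetical and invented for illustration — no actual platform’s ranking system is implied — but it shows the essential shape: input data, specified calculations, desired output, and, crucially, designer-chosen weightings that decide what counts as ‘worthy’.

```python
# A toy 'feed ranking' algorithm: encoded procedures that transform
# input data (posts) into a desired output (a ranked feed).
# All field names and weights are hypothetical, chosen for illustration.

def score(post, weights):
    """Weigh the 'significance' of a post using designer-chosen weights."""
    return (weights["likes"] * post["likes"]
            + weights["recency"] * post["recency"]
            + weights["friend"] * post["friend"])

def rank_feed(posts, weights):
    """Sort posts by score, highest first: the algorithm decides what is 'worthy'."""
    return sorted(posts, key=lambda p: score(p, weights), reverse=True)

posts = [
    {"id": "a", "likes": 120, "recency": 0.2, "friend": 0},
    {"id": "b", "likes": 3,   "recency": 0.9, "friend": 1},
]

# The outcome depends entirely on the values the programmer encodes:
popular_first = rank_feed(posts, {"likes": 1.0, "recency": 0.0, "friend": 0.0})
friends_first = rank_feed(posts, {"likes": 0.01, "recency": 1.0, "friend": 5.0})

print([p["id"] for p in popular_first])  # ['a', 'b']
print([p["id"] for p in friends_first])  # ['b', 'a']
```

The same input data produces two different feeds depending on which weights the programmer chooses; the ‘decision’ the algorithm makes is a human decision encoded in advance and hidden from the user.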


As Florian Cramer reminds us, ‘magic … is, at its core, a technology, serving the rational end of achieving an effect, and being judged by its efficacy’ (Words, 15). And in order for it to do its work, magic must be performed on or by someone. The same is certainly true for algorithms. Humans design them with specific outcomes in mind. Despite the invisibility of their operations to the end user and their origins in the object-oriented world of computer science, algorithms are not merely abstract, objective mathematical formulas or operations that parse data. As desperately as their makers might cling to the illusion of objectivity, programmers, like magicians, leave their marks on their creations. Or as Cramer expresses it, 


Algorithms as subjective expression were not conceived of in classical computational poetics. Both occult and scientific computations, be it magic, Kabbalah, encyclopaedism or algorithmic calculus, rely on the idea that computation expresses a higher objective order; be it divine law, or the laws of logic and mathematics. It’s hardly surprising that computation often oscillated between the occult and scientific poles, blurring its boundaries: from Pythagorean mathematics to eventually metaphysical computations such as those of Karlheinz Stockhausen and John Cage. Nevertheless, as the ‘semantics’ of formal languages, i.e. the choice of their cultural denominators like Llull’s alphabetum indicates, computations were never able to do without superimposed meaning, inscribed subjectivity, embedded metaphors. (86-7; emphasis added)


Algorithms are the building blocks of software and thus software shares many of these same characteristics with them. Just as algorithms are culturally encoded, subjectively derived procedures, software too is not a transparent interface between the user and what the user wants to do. It dictates what the user can do and it does so in rhetorically loaded ways. As Cramer argues, 


Software involves interface paradigms with encoded cultural preconceptions of what, for example, a ‘document’, ‘writing’, ‘designing’ is. It has embedded concepts of the order of things, of communication and workflows. To this extent, software controls its users. Yet it sells the illusion that the user is fully in charge. (85)


Software, and its underlying algorithmic structures, is what Nigel Thrift would describe as a ‘performative infrastructure’ (Thrift, Knowing, 224). Algorithms and software form part of our ‘technological unconscious’, the operation of the powerful and unknowable information technologies that come to ‘produce’ everyday life (212-3). But are they really so unknowable?


Clearly the answer to that question is ‘no’. However, the effort required to develop an understanding of how algorithms and software work puts it beyond the reach of many people who rely on these technologies in their daily lives. Extending the definition of what constitutes information literacy beyond the inadequate parameters expressed in the guidelines outlined above should be seen as a first and necessary step in the process of rectifying this situation. This means going beyond the limited push to make ICT courses more interesting to those who might study them at secondary and tertiary educational institutions. While this is valuable and important work, particularly as it relates to the necessity of engaging larger numbers of young women in the study of ICT, it doesn’t emphasise enough the import of learning how and what computers do and how this impacts on every other aspect of our lives.


For information literacy to become truly meaningful, it must take account of and pay attention to the procedural literacy that is acquired by programmers in their making of software and writing of algorithms. Procedural literacy means, in the words of Michael Mateas, ‘the ability to read and write processes, to engage procedural representation and aesthetics, to understand the interplay between the culturally embedded practices of human meaning-making and technically mediated processes’ (Mateas, ‘Procedural’). For Ian Bogost, ‘procedural literacy entails the ability to reconfigure basic concepts and rules to understand and solve problems, not just on the computer, but in general’ (Bogost, ‘Procedural’, 32). In other words, procedural literacy entails learning, and thus being able to recognise, the procedures that enable algorithms and hence software to weave their magic, but it is also a more fundamental literacy which takes account of a range of human interactions. It allows us to model knowledge and to see the world as a system of interconnected parts.
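What ‘procedural representation’ of an everyday human interaction might look like can be sketched briefly. The example below is hypothetical, written purely to illustrate Mateas’s point: an ordinary social process (a queue being served) is made into an explicit procedure whose rules can be read, questioned and rewritten.

```python
# A hypothetical illustration of 'procedural representation': modelling an
# everyday process (a queue being served) as an explicit procedure, so that
# the rules governing it become legible and contestable.
from collections import deque

def serve_queue(customers, serve_time):
    """Serve customers first-come-first-served; return (name, minutes waited) pairs."""
    queue = deque(customers)
    clock = 0
    served = []
    while queue:
        name = queue.popleft()       # the 'first come, first served' rule
        served.append((name, clock)) # how long this customer waited
        clock += serve_time
    return served

print(serve_queue(["Ada", "Grace", "Alan"], serve_time=5))
```

The ‘first come, first served’ rule is a value encoded in the procedure; replacing `popleft()` with a priority rule would encode a different politics. Reading the procedure is what lets us see that a choice was made at all.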


Procedural literacy is a by-product of learning to write algorithms and program computers. But that doesn’t mean that it can’t and shouldn’t be acquired by non-programmers. Calls for non-programmers to become procedurally literate are not new. As Mateas points out, 


[the] ideal of procedural literacy as necessary … as a requirement and right of an educated populace, has been with us for over 40 years. Yet the two culture divide persists, with artists and humanists often believing that programming is a narrow technical specialty removed from culture, and computer scientists and engineers often happy to pursue just such an unexamined and narrow focus. (‘Procedural’)


There are readily available ways to teach procedural literacy. One way is through the introduction of basic programming skills to the curriculum in general rather than as a specialist enclave. As Katie Salen notes, ‘products like Mindstorms® and open-source tools and programming languages like Logo©, Squeak©, Scratch©, and Alice© [are] designed to teach procedural thinking, problem solving, and logic, by learning to program’ (Salen, ‘Gaming’, 303). Many of these programs have been around for a long time and have been purposefully used in educational settings. Another way is through the making and playing of games. As Christopher Walsh and Tom Apperley note (citing Bogost, ‘Procedural’, 258), ‘through gaming, gamers embody a procedural literacy where they “read and write procedural rhetorics—to craft and understand arguments mounted through unit operations represented in code”’ (Walsh and Apperley, ‘Gaming’, 3). The advantage of games is that they are already popular with most students.
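The kind of procedural thinking these tools teach can be suggested with a short sketch. The code below is not Logo itself but a hypothetical Logo-style exercise rewritten in Python: a drawing is decomposed into a named, reusable procedure, and the ‘turtle’s’ path is tracked as coordinates rather than drawn on screen, so the program remains inspectable (the heading and angle conventions here are simplified for illustration).

```python
# A hypothetical Logo-style sketch: procedural thinking as decomposing a
# drawing into a named, reusable step. We record the points the pen visits
# instead of drawing on screen, so the 'program' stays inspectable.
import math

def walk_square(side):
    """Return the points a Logo-style turtle would visit drawing a square."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(4):  # roughly: repeat 4 [ forward :side right 90 ]
        x += side * math.cos(math.radians(heading))
        y += side * math.sin(math.radians(heading))
        heading += 90
        path.append((round(x, 6), round(y, 6)))
    return path

print(walk_square(10))
```

The point of such exercises is not the square but the habit of mind: seeing a shape as the output of a repeatable procedure, and seeing that changing one rule (the angle, the repeat count) changes the whole outcome.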


Regardless of how it might be achieved, the acquisition of procedural literacy is becoming, if it is not already, vital to understanding how digital, networked media work on and through us. We may not want everyone to become programmers, but we are certainly going to need to learn how to think like programmers if we are going to be able to decode the ways in which future media sort, manipulate, calculate and disseminate information. The values and politics embedded in algorithms may not be immediately visible but they are powerful. As Ted Striphas rightly argues,


Culture has long been about argument and reconciliation: argument in the sense that groups of people have ongoing debates, whether explicit or implicit, about their norms of thought, conduct, and expression; and reconciliation in the sense that virtually all societies have some type of mechanism in place—always political—by which to decide whose arguments ultimately will hold sway. You might think of culture as an ongoing conversation that a society has about how its members ought to comport themselves.


Increasingly today, computational technologies are tasked with the work of reconciliation, and algorithms are a principal means to that end. Algorithms are essentially decision systems—sets of procedures that specify how someone or something ought to proceed given a particular set of circumstances. Their job is to consider, or weigh, the significance of all of the arguments or information floating around online (and even offline) and then to determine which among those arguments is the most important or worthy. Another way of putting this would be to say that algorithms aggregate a conversation about culture that, thanks to technologies like the internet, has become ever more diffuse and disaggregated. (‘Algorithms’).


We need to be a part of this conversation about culture and not merely its subject. And the sooner we learn how to speak, or at least understand, the language the faster that can happen.




1. Here I am constraining my argument to the technical dimensions of the algorithm. The opacity of such algorithms is increased by the patent laws, trade secret laws, and other legal and technical instruments protecting from scrutiny those ‘decision-making systems’ which have the maximum impact on us socially, culturally and politically—those used by search engines, social media and so on. But even if we could access them, most of us could not ‘read’ them, and for me this ‘illiteracy’ is the most pressing issue.




Australian Curriculum Assessment and Reporting Authority. National Assessment Program ICT Literacy Years 6 & 10 Report. Sydney: ACARA, 2011.


Beer, David. ‘Power through the Algorithm? Participatory Web Cultures and the Technological Unconscious’. New Media Society 11 (2009): 985-1002.


Bogost, Ian. ‘Procedural Literacy: Problem Solving with Programming, Systems, and Play’. The Journal of Media Literacy 52, 1-2 (2005): n.p.


Cramer, Florian. Words Made Flesh. Rotterdam: Piet Zwart Institute, 2005.


Gee, James. What Video Games Can Teach Us About Learning and Literacy. New York: Palgrave, 2003.


Gillespie, Tarleton. ‘The Relevance of Algorithms’, in Media Technologies: Essays on Communication, Materiality, and Society, eds Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press, 2014. [In press]


Mateas, Michael. ‘Procedural Literacy: Educating the New Media Practitioner’, in Beyond Fun: Serious Games and Media, ed. Drew Davidson et al. ETC Press, 2008.


Peppler, Kylie and Yasmin Kafai. ‘What Videogame Making Can Teach Us about Literacy and Learning: Alternative Pathways into Participatory Culture’, in Situated Play: Proceedings of the Third International Conference of the Digital Games Research Association (DiGRA), ed. Akira Baba. Tokyo: The University of Tokyo, 2007, pp. 369-376.


Salen, Katie. ‘Gaming Literacies: A Game Design Study in Action’. Journal of Educational Multimedia and Hypermedia 16, 3 (2007): 301-322.


Striphas, Ted. ‘Algorithms Are Decision Systems’ [Interview]. 40Key Magazine, 26 November 2012: n.p.


Thrift, Nigel. Knowing Capitalism. London: Sage Publications, 2005.


Walsh, Christopher and Thomas Apperley. ‘Gaming Capital: Rethinking Literacy’ in Changing Climates: Education for Sustainable Futures: Proceedings of the AARE 2008 International Education Research Conference, 30 Nov-4 Dec 2008. Queensland University of Technology, 2009.




Ctrl-Z: New Media Philosophy

ISSN 2200-8616

