
A Review Of Research Around Programming Education From The 1960s To Today

The problem of teaching programming skills to new generations of software engineers is as old as computers themselves. Each generation has tried to do it in a slightly different way, with varying degrees of success. There is a lot of literature available online on the subject, and in this article we will point out the papers and books that we found most noteworthy. By no means is this an exhaustive list, but it features some interesting entries that might serve as a starting point for your own research.

The Beginnings

Teaching people about computers has never been as complicated as it was in the early 1960s. To begin with, computers were, by today’s standards, relatively hard to come by (and this is an understatement). However, many scientists (rightly) understood that the computer was a breakthrough machine in the history of mankind, and that knowledge about this invention had to be spread as widely as possible.

Douglas Engelbart, of “Mother of All Demos” fame, came up with an interesting gamified approach to teach the inner workings of binary computers to “laymen”.

The novel feature of the teaching method is that it makes use of human participants to simulate the function of logical elements that are typical of those used in digital computers. A group of such participants can be “wired” into a network that will function in a manner very similar to that of an actual digital network.

(Douglas C. Engelbart, “Games That Teach the Fundamentals of Computer Operation”, IRE Transactions on Electronic Computers EC-10, no. 1 (March 1961): 31–41. https://doi.org/10.1109/TEC.1961.5219149.)

Of course, this was not teaching programming per se, but merely teaching what computers are and how they work internally. You gotta start somewhere.
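For readers curious about what those human “logic elements” were simulating, here is a minimal sketch, in Python (a language obviously not available in 1961), of a half adder wired together from the same kinds of gates Engelbart had his participants play:

    # Each function stands in for one human "logic element" in Engelbart's game.
    def and_gate(a: bool, b: bool) -> bool:
        return a and b

    def xor_gate(a: bool, b: bool) -> bool:
        return a != b

    def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
        """Wire two gates together: returns (sum, carry) for one binary digit."""
        return xor_gate(a, b), and_gate(a, b)

    # Adding the bits 1 and 1 gives sum 0, carry 1 -- that is, binary 10.
    print(half_adder(True, True))   # (False, True)

A group of people, each computing one of these functions by hand and passing the result along, is exactly the kind of “network” Engelbart describes.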

We have already remarked in this magazine that 1968 was an annus mirabilis for computer science, much as 1905 was for physics. It was the year of Engelbart’s “Mother of All Demos”, the year of Edsger Dijkstra’s “Go To Statement Considered Harmful” article, the year of the NATO Software Engineering Conference and its definition of the “software crisis”, and the year ALGOL W started becoming Pascal inside Wirth’s brilliant mind (a language that, just 15 years later, would become a staple of programming curricula all over the world).

That same year (seriously!) the report “Curriculum 68: Recommendations for Academic Programs in Computer Science” appeared in the pages of Volume 11 of the Communications of the ACM. This was the first attempt at a formal definition of a study program for future generations of computer scientists and programmers.

The impact of this document on academia is hard to ignore. Richard Hamming mentioned this landmark event during his 1968 (again!) Turing Award lecture:

The topic of my Turing lecture, “One Man’s View of Computer Science,” was picked because “What is computer science?” is argued endlessly among people in the field. Furthermore, as the excellent Curriculum 68 report remarks in its introduction, “The Committee believes strongly that a continuing dialogue on the process and goals of education in computer science will be vital in the years to come.”
(…)
For example, let me make an arbitrary distinction between science and engineering by saying that science is concerned with what is possible while engineering is concerned with choosing, from among the many possible ways, one that meets a number of often poorly stated economic and practical objectives. We call the field “computer science” but I believe that it would be more accurately labeled “computer engineering” were not this too likely to be misunderstood.

(Richard W. Hamming, “One Man’s View of Computer Science”, 1968 ACM Turing Award Lecture.)

In the decades that followed, the ACM Curriculum has been (understandably) updated many times, and this author has found online copies of the 1978, 1991, 2001, 2005, 2016, and 2023 editions. These are massive documents, describing in detail the subjects, topics, and activities to be conducted in order to guide new students through the maze of computer technology.

The Pioneer

In the late 1960s, Seymour Papert applied the constructivist theory of the Swiss child psychologist Jean Piaget to come up with Logo, a language geared towards teaching programming to young children.

Logo was (and still is, actually) quite a controversial topic in teaching circles. For some, the mere idea of moving a turtle around a screen (which was exactly what Logo allowed you to do) was too much of a simplification; for others, it was an opinionated approach that had no place in a classroom. In retrospect, the language did not survive its own hype.

The proceedings of the Didapro 7 – DidaSTIC 2018 conference in Lausanne, titled “De 0 à 1 ou l’heure de l’informatique à l’école” (“From 0 to 1, or the time of computer science at school”), were edited by my friend Gabriel Parriaux and several of his colleagues. In the abstract of the opening keynote by Professor Pierre Dillenbourg of the École Polytechnique Fédérale de Lausanne, we can read:

In retrospect, Logo’s pedagogical potential fell victim to the level of expectation created by Papert’s speech, which was certainly brilliant and charismatic, but promised effects that no pedagogical approach could achieve. If you’re promised 1 million, you’ll be disappointed to receive half that.

(Gabriel Parriaux, Jean-Philippe Pellet, Georges-Louis Baron, Éric Bruillard, and Vassilis Komis (Eds.), “De 0 à 1 ou l’heure de l’informatique à l’école”, proceedings of the Didapro 7 – DidaSTIC colloquium, 2018. Bern, Switzerland: Peter Lang. http://hdl.handle.net/20.500.12162/1438. Translated from French to English by Adrian Kosmaczewski.)

Logo started a long tradition of education research at MIT, which eventually yielded the programming languages Scheme and Scratch, the latter a staple in programming labs during the first two decades of the 21st century, with a whole field of investigation to go with it. (We will come back to Scheme in a bit.)

The breadth and impact of Seymour Papert’s research in the field of programming education are too great to summarize in a single section of a single article like this. Let me just point the interested reader to his 1980 book “Mindstorms: Children, Computers, and Powerful Ideas”, widely considered a landmark of the field, and whose name was co-opted by Lego (hopefully with Papert’s blessing) for its famous line of construction sets.
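For readers who never met the turtle mentioned above, its spirit lives on in Python’s standard turtle module, a direct descendant of Logo’s turtle graphics. A minimal sketch (in Python rather than in Logo) of the canonical first exercise, drawing a square:

    # Drawing a square, turtle-style. The Logo original would read:
    #   REPEAT 4 [FORWARD 100 RIGHT 90]
    import turtle

    t = turtle.Turtle()
    for _ in range(4):
        t.forward(100)  # move 100 steps in the current direction
        t.right(90)     # turn 90 degrees clockwise

    turtle.done()       # keep the window open until it is closed

The whole point, as Papert argued, was that children could “play turtle” with their own bodies first, and only then translate the moves into code.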

Let us close this section with a quote from 2001 that describes the towering impact of Piaget’s and Papert’s research, still felt more than half a century later:

Psychologists and pedagogues like Piaget, Papert but also Dewey, Freynet, Freire and others from the open school movement can give us insights into: 1. how to rethink education, 2. imagine new environments, and 3. put new tools, media, and technologies at the service of the growing child. They remind us that learning, especially today, is much less about acquiring information or submitting to other people’s ideas or values, than it is about putting one’s own words to the world, or finding one’s own voice, and exchanging our ideas with others.

(Edith Ackermann, “Piaget’s Constructivism, Papert’s Constructionism: What’s the Difference?,” January 2001.)

The Personal Computer Era

The spread of the personal computer in the 1980s fundamentally changed the scenario for programming teachers. All of a sudden, schools of all levels and types could get access, if they had the required monetary means, to a level of computing power that merely 10 years prior would have been unthinkable.

Even primary and high schools started experimenting with offering programming classes to their students, a fact that triggered a lot of controversy and research. Sadly, the conversation drifted toward the most banal and useless of debates: which language is best for teaching programming? (Insert eye-roll emoji here.)

This perspective suggests that rather than arguing, as many currently are, over global questions such as which computer language is “best” for children, we would do better in asking: how can we organize learning experiences so that in the course of learning to program students are confronted with new ideas and have opportunities to build them into their own understanding of the computer system and computational concepts?

(Roy D. Pea and D. Midian Kurland, “On the Cognitive Effects of Learning Computer Programming”, New Ideas in Psychology 2, no. 2, January 1984, 137–68. https://doi.org/10.1016/0732-118X(84)90018-7.)

Donald Norman, of “Design of Everyday Things” fame, tried to steer the conversation back to the most important aspects: first, the usability of computers at the time (or rather, the lack thereof); and second, a much-needed distinction between levels of computer literacy.

Let me quickly come to my main point: the difficulties that we mortals have with computers are unnecessary. If computers are not understandable, the few will dominate the masses, because the secret language of computation leaves out the uninitiated; those who understand make it hard for those who do not.
(…)
It is important to distinguish among four levels of computer literacy:

  1. Understanding general principles of computation.
  2. Understanding how to use computers.
  3. Understanding how to program computers.
  4. Understanding the science of computation.

(Donald A. Norman, “Worsening the Knowledge Gap: The Mystique of Computation Builds Unnecessary Barriers”, Annals of the New York Academy of Sciences 426, no. 1, November 1984, 220–33. https://doi.org/10.1111/j.1749-6632.1984.tb16522.x.)

This paper by Donald Norman also points out the societal risk of increasing inequality between the countries that can afford computer equipment for their classrooms and those that cannot. (As an anecdote, the author of these lines witnessed this first-hand, experiencing a no-computing-whatsoever learning environment in Buenos Aires in 1990, and then a fully-equipped computer room filled with Apple Macintoshes in Geneva in 1991. The contrast could not have been starker.)

The Internet Era

By the mid-1990s, the rise of networking and the World Wide Web once again accelerated programming education. The future was online, and schools had to seize the day as quickly as possible. And to face the new technological reality of the world, universities had to become more professional; or at least, that is what Gal-Ezer and Harel thought:

In fact, there is no clear agreement even on the name of the field. In European universities, the titles of many of the relevant departments revolve around the word “informatics,” whereas in the U.S. most departments are “computer science.” To avoid using the name of the machine in the title (a problem that prompted Dijkstra to quip that doing so is like referring to surgery as knife science), some use the word “computing” instead.
(…)
One of the main lessons we learned from teaching the material was that students must have an appropriate CS background. We cannot stress this statement enough. For example, one student in class was from electrical engineering, another’s sole connection to computing was via her use of computers in general education, and a third’s CS knowledge was 25 years old. These students simply did not fit in.

(Judith Gal-Ezer and David Harel, “What (Else) Should CS Educators Know?” Communications of the ACM 41, no. 9, 1998.)

(Note: that last paragraph is infuriating and ageist. There, I said it.)

The publication of “Structure and Interpretation of Computer Programs” (SICP) by Harold Abelson, Gerald Jay Sussman, and Julie Sussman in 1985 made Scheme the go-to programming language for education for the following decade.
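To give a flavour of the book’s approach, here is a rough Python transcription (the book itself uses Scheme, of course) of the square-roots-by-Newton’s-method example from SICP’s first chapter; a sketch of the idea, not the authors’ code:

    # Square roots by Newton's method, in the spirit of SICP's first chapter,
    # transcribed from Scheme into Python.
    def sqrt(x: float) -> float:
        def good_enough(guess: float) -> bool:
            return abs(guess * guess - x) < 0.001

        def improve(guess: float) -> float:
            return (guess + x / guess) / 2  # average the guess with x/guess

        def sqrt_iter(guess: float) -> float:
            return guess if good_enough(guess) else sqrt_iter(improve(guess))

        return sqrt_iter(1.0)

    print(sqrt(2))  # roughly 1.4142

The book builds whole programs out of tiny, named sub-procedures exactly like these, which is a large part of why it became such a beloved teaching text.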

Schools and colleges all over the world adopted Scheme in the 1990s, but by the end of the decade the industry wanted more Java developers, not Scheme developers, so Java schools started producing Blub programmers, functional programming be damned.

After all, even Philip Wadler (of all people!) criticized Abelson and the Sussmans for their choice of programming language:

This paper contrasts teaching in Scheme to teaching in KRC and Miranda, particularly with reference to Abelson and Sussman’s text.
(…)
Some people may wish to dismiss many of the issues raised in this paper as being “just syntax”. It is true that much debate over syntax is of little value. But it is also true that a good choice of notation can greatly aid learning and thought, and a poor choice can hinder it.

(Philip Wadler, “A Critique of Abelson and Sussman or Why Calculating Is Better than Scheming”, ACM SIGPLAN Notices 22, no. 3, March 1, 1987, 83–94.)

Let us speak a bit more about programming paradigms. Object-Oriented Programming reached the peak of its hype curve right in the middle of the 1990s, and it had a strong impact on programming curricula. The ensuing debate burned through quite a bit of research funding:

It is a prevailing opinion that learning a programming language equals learning to program. In the call for papers for this workshop it is stated that “Switching to object-orientation is not just a matter of programming language”. We suggest rephrasing and strengthening this statement: Learning to program is not just a matter of learning a programming language.

(Jens Bennedsen and Michael E. Caspersen, “Teaching Object-Oriented Programming”, 2004.)

The research around teaching object-oriented programming to younger generations produced, among other highlights, the BlueJ System:

An environment for an object-oriented language does not make an object-oriented environment. The environment itself should reflect the paradigm of the language. In particular, the abstractions students work with should be classes and objects.
(…)
BlueJ is an integrated Java development environment specifically designed for introductory teaching. BlueJ is a full Java 2 environment: it is built on top of a standard Java SDK and thus uses a standard compiler and virtual machine. It presents, however, a unique front-end that offers a different interaction style than other environments.

(Michael Kölling, Bruce Quig, Andrew Patterson, and John Rosenberg, “The BlueJ System and Its Pedagogy”, Computer Science Education 13, no. 4, December 2003, 249–68. https://doi.org/10.1076/csed.13.4.249.17496.)
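BlueJ’s pedagogy is hard to convey in prose alone: students instantiate a class from a menu and poke at the live object before ever writing a main() method. As a very loose approximation of that object-first interaction, here is a sketch in Python (not BlueJ’s Java), using a hypothetical Ticket class invented for this illustration:

    # Object-first, BlueJ-style: define a small class, then interrogate
    # a live instance interactively, no main() required.
    class Ticket:
        def __init__(self, price: int):
            self.price = price      # price in cents
            self.balance = 0        # money inserted so far

        def insert_money(self, amount: int) -> None:
            self.balance += amount

        def print_ticket(self) -> bool:
            if self.balance >= self.price:
                self.balance -= self.price
                return True
            return False

    # At an interactive prompt (or on BlueJ's object bench, in Java):
    t = Ticket(500)
    t.insert_money(200)
    print(t.print_ticket())  # False: not enough money yet
    t.insert_money(300)
    print(t.print_ticket())  # True: ticket issued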

What about the other side of the fence? After teachers dropped Scheme because the industry wanted Java developers, should functional languages come back to the classroom? Surprisingly enough, for some the answer was a resounding no. (To be fair, Chakravarty and Keller proposed setting the whole paradigm question aside, and focusing instead on, you know, teaching thinking skills rather than just teaching programming.)

Let us start with a controversial thesis: We should not teach purely functional programming in freshman courses! In fact, we should not teach procedural, object-oriented or logic programming either. Instead, we should concentrate on teaching the elementary techniques of programming and the essential concepts of computing as a scientific discipline as well as foster analytic thinking and problem solving skills.
(…)
The central thesis of this article is that purely functional languages are ideally suited for introductory computing classes, but only if the focus is on general concepts rather than the specifics of functional programming.

(Manuel M. T. Chakravarty and Gabriele Keller, “The Risks and Benefits of Teaching Purely Functional Programming in First Year”. Journal of Functional Programming 14, no. 1 (January 2004): 113–23. https://doi.org/10.1017/S0956796803004805.)

Sign of the times, the 2022 edition of SICP switched from Scheme to… JavaScript. I guess Douglas Crockford must be proud.

Following in the footsteps of Chakravarty and Keller, other researchers wanted out of the whole “what is the best programming language for education?” debate and proposed a novel idea: how about teaching “computational thinking” to our younger generations? Particularly since, as we now know, the age of LLMs was just beyond the horizon; maybe we should focus on teaching kids how to think, in the good old ways of Papert and Piaget:

Computational thinking confronts the riddle of machine intelligence: What can humans do better than computers? and What can computers do better than humans?

(Jeannette M. Wing, “Computational Thinking”, Communications of the ACM 49, no. 3, March 2006, 33–35.)

Computational Thinking has had a major impact on programming education research during the past two decades. Noteworthy are the 2018 book “Hello World: How to be Human in the Age of the Machine” by Hannah Fry, which explores computational thinking and its possible impact on society, and the 2019 book “Computational Thinking Education” edited by Siu-Cheung Kong and Harold Abelson, freely available as an Open Access title.

Learn To Code!

The 2000s and 2010s saw the rise of Jupyter notebooks and of languages like Julia and R, popular incarnations of Donald Knuth’s concept of Literate Programming.

These were not the first implementations of Literate Programming; commercial software packages like Mathematica and Maple had already explored the same idea starting in the 1980s. But the arrival of Open Source and Free Software implementations changed the game completely, and greatly contributed to their spread.
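For readers who have never opened a notebook, the core idea is simply prose and executable code interleaved in a single document. A minimal sketch of what a notebook-style, literate cell might look like, in plain Python with comments standing in for the rich-text cells (the figures are made up for the example):

    # --- Narrative cell ----------------------------------------------------
    # We want to see how quickly compound interest grows a modest deposit.
    # The code cell below computes the balance year by year and prints a table.

    # --- Code cell ---------------------------------------------------------
    principal = 1_000.0   # initial deposit
    rate = 0.05           # 5% annual interest

    balance = principal
    for year in range(1, 11):
        balance *= 1 + rate
        print(f"Year {year:2d}: {balance:8.2f}")

    # In a real Jupyter notebook the narrative lives in Markdown cells, the
    # code in executable cells, and the output appears inline right below.

That tight loop between explanation, code, and output is precisely what made notebooks so attractive for teaching.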

Needless to say, this distribution model had a non-negligible impact on the current developments around Machine Learning and Large Language Models. It also fueled a whole industry of “Learn to Code!” workshops, training courses, and YouTube videos, still popular today despite the hiring freezes the software industry has experienced since 2022.

Conclusion

This review is purposely short, and it leaves out plenty of important books, milestones, and papers that have marked programming education research; we could not possibly name them all.

We will end this short and opinionated summary with a quote by Turing Award winner John Hopcroft, one which we can only agree with:

The potential of computer science, if fully explored and developed, will take us to a higher plane of knowledge about the world.

(John E. Hopcroft, “Computer Science: The Emergence of a Discipline”, Turing Award Lecture, 1986.)

Cover photo by the author.
