On Research Software Engineering
Let’s be plain upfront: academia dropped the ball on software engineering. Go back to the genesis of the field, and you see that computing was being advanced mostly by needs in the public sector, with the private sector playing a supporting role. The first publicly demonstrated computer was produced by Konrad Zuse in 1941, and marketed by his company Zuse Apparatebau. But it was actually funded by the Nazi government in Germany, specifically the aerodynamic research institute, for use in designing and flying cruise missiles.
The Z3 was the first Turing-complete machine, and Zuse went on to write the first computer chess program in the first high-level programming language, Plankalkül, a language of his own devising and an unacknowledged forerunner to ALGOL. This was all part of his PhD thesis, but having failed to pay the submission fee to the University of Augsburg, he did not obtain a degree.
World War 2 was, unsurprisingly, the big limitation on Zuse’s success: he had, after all, worked on the Nazi war effort. Despite computing being a breakout field, its impact was not as obvious as that of something like rocketry, and so Zuse was recruited into neither the U.S. Operation Paperclip nor its Soviet counterpart, Osoaviakhim. A lot of his equipment was destroyed in bombing, and the terms of American occupation rule in West Germany meant that the country was neither permitted to develop electronic computing machinery nor financially capable of doing so. Zuse founded two post-war computer companies, both funded by ETH Zurich, and eventually delivered a computer to the technology institute in 1950. This was only the second computer ever to be sold, and the first commercial computer to work correctly: computer engineers were already moving fast and breaking things back then.
It was not only in continental Europe that early computing was a mix of military, academic, and commercial interests. Digital computers arose in both the UK and USA out of military applications: cryptanalysis in Britain and ballistics in the States. With the war over, people and material from the British project found their way to Manchester and Cambridge Universities. ENIAC, the first American general-purpose computer, had been built at the University of Pennsylvania. Meanwhile, IBM had independently invented a computer, which became the Harvard Mark I (and was used by the U.S. Navy, as well as a Manhattan Project mathematician by the name of John von Neumann). All of these anglophone projects came together at the Moore School Lectures. John von Neumann wrote up his notes from the lectures, and published an early draft in which he hadn’t added the citations. Thus was born the “von Neumann” architecture, definitely invented at least once by Turing (whom von Neumann had talked to about it) and once by Zuse (whom he hadn’t).
You wouldn’t know now that these three strands of development in computing are related. Judging from today’s computing scene, there’s an academic pursuit called “Software Engineering” where people apply maths to things like design patterns and formal methods that most professional programmers discarded as Not Relevant or Too Hard decades ago. Professional programmers call themselves “engineers”, but don’t really do engineering so much as take a guess at what they think their customer wants and then check back in a couple of weeks to see whether they were right. When the military needs computers, they buy them from Dell and get software from Microsoft, IBM, or Google. Unless they’re in a sanctioned country like Iran, in which case they buy them from a Dell reseller.
The three fields come together in one nexus: Carnegie-Mellon University’s Software Engineering Institute. The SEI is where the military pays academia to appraise commercial software vendors.
Question one: what happened?
We could point to a number of big events that introduced divisions between military, commercial, and academic computing. Arguably, the academic field sowed the seeds of its own irrelevance with Curriculum ’68, the Association for Computing Machinery’s “recommendations for academic programs in computer science”. While the ACM had been a cross-disciplinary organisation of computing enthusiasts, this National Science Foundation grant-supported effort brought together a committee of academic men to talk about how academic programs in computer science could be more, well, academic, damn it.
It’s common for educational programs to have some sort of sop to industry, something about “training the workforce of the future” or “getting students ready for life after university”. Curriculum ’68, by contrast, is explicitly about defending and consolidating the legitimacy of “computer science” as a pursuit worthy of the name science, which means all those pesky programmers, coders, and operators are definitively out of scope, being mere blue-collar button pushers.
Although programs based on the recommendations of the Curriculum Committee can contribute substantially to satisfying this demand [for “substantially increased numbers of persons to work in all areas of computing”], such programs will not cover the full breadth of the need for personnel. For example, these recommendations are not directed to the training of computer operators, coders, and other service personnel. Training for such positions, as well as for many programming positions, can probably be supplied best by applied technology programs, vocational institutes, or junior colleges. It is also likely that the majority of applications programmers in such areas as business data processing, scientific research, and engineering analysis will continue to be specialists educated in the related subject matter areas, although such students can undoubtedly profit by taking a number of computer science courses.
The ACM succeeded at the task of making computer science its own pursuit, by decoupling it from the application of computing technology. “Software Engineering” tended toward the problem of assembling prefabricated units and hoping they approached the customer’s need, while computer science retreated into the mathematics of algorithms and information theory. Programmers needed to know how many screen updates would be missed in sorting a list of ten mailboxes, and computer scientists told them how many instructions it would take on a theoretical processor writing out to a paper tape.
The AI winter was not kind to academic computing either. Thinking machines had been promised as just around the corner ever since Turing thought about it a bit in the 1940s, and it became abundantly clear that they weren’t. In fact, it became evident that the sorts of problems you couldn’t simply pay a programmer to think hard about and write a reasonably efficient program for would require novel techniques like neural networks and random forests, running on computers that hadn’t been invented yet. By the time those computers had been invented, they were no longer the preserve of the university.
While all of this was happening, commercially-owned pure research institutions were, at least in the US, taking on the job of actually inventing the future. Xerox PARC, the home of Ethernet networking, laser printing, graphical user interfaces, the mouse, video conferencing, the tablet computer, object-oriented programming, and more, is well known. As is Bell Labs, the home of UNIX, statistical process control, transistor electronics, speech synthesis, the C and C++ programming languages, and more.
Reality is a bit muddier than this picture: UCLA and DARPA worked together to invent the internet (which was quickly appropriated by commercial service providers), and across his academic career Niklaus Wirth contributed many programming languages that people actually use (quickly appropriated by commercial developer tools vendors), along with many operating systems and hardware designs that they actually don’t. Mach, the kernel and operating system design used by NeXTSTEP, mkLinux, macOS/iOS/watchOS/tvOS, GNU Hurd, MachTen, OSF/1, and others, was a mash-up invented at Carnegie-Mellon University using bits from the University of California, Berkeley. But by the twenty-first century academic computing was on the back foot, trying to find logical positivist things to say about Agile practices. Academia is now subservient to market forces after five decades of Friedman ideology, so computer science departments beg for handouts from commercial behemoths to become outsourced risk-takers (research salaries are much lower than Silicon Valley salaries), or push their academics to “spin out” their inventions into start-up companies, getting into the orbit of those behemoths and picking up some of the sweet acqui-hire money.
Question two: what’s happening now?
Meanwhile, the need for computing in academia itself only got greater. Meeting invitations moved from internal pigeon post to email. Papers moved from, well, paper to Word and LaTeX. High-Performance Computing—the application of supercomputers to various research problems, mostly but not exclusively in the sciences—both grew in application and shrank in innovation as the vendors learned to build machines from commoditised parts. Both the Apple Power Mac and the Sony PlayStation 2 have appeared in the TOP500 list of supercomputers. Mathematical proofs became so complex they could only be constructed by computer, and only tested by computer. Statistical analysis became the preserve of R, Python, and similar tools. Biology begat bioinformatics and computational biology. And so on.
The result: more and more software in research, and a growing realisation that the way software is treated by the research community not only isn’t state of the art, but is actually holding research back. Pick an HPC simulation at random and you’ll be lucky if you can get it to build at all, and even then only on the same cluster with exactly the same Fortran 77 compiler, MPI library, CPUs, and GPUs as the researcher who last touched it. If it does build, you won’t find any tests to tell you whether it will reproduce the same results that researcher obtained, or whether those results represent real science rather than buggy outcomes or compounded floating-point inaccuracies.
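To make that concrete, here’s a minimal sketch of the kind of regression test such a code base usually lacks: it re-runs the simulation and compares the output against a stored reference result within explicit floating-point tolerances. It’s written in Python with NumPy and a pytest-style assertion; the `run_simulation` function and the `reference_run.csv` file are hypothetical placeholders, not parts of any real project.

```python
# Minimal sketch of a tolerance-based regression test for a simulation.
# `run_simulation` and `reference_run.csv` are hypothetical placeholders:
# substitute the real entry point and a stored output from a known-good run.
import numpy as np


def run_simulation(steps: int) -> np.ndarray:
    """Stand-in for the real simulation entry point."""
    x = np.linspace(0.0, 1.0, steps)
    return np.exp(-x)  # placeholder result


def test_matches_reference_run():
    reference = np.loadtxt("reference_run.csv", delimiter=",")
    result = run_simulation(steps=reference.shape[0])
    # Compare within tolerances rather than bit-for-bit, so that benign
    # differences in compiler, MPI library, or hardware don't mask real bugs.
    assert np.allclose(result, reference, rtol=1e-6, atol=1e-9)
```

Even a test this small answers the question “does today’s build still produce yesterday’s results?”, which is precisely the question most research codes currently cannot answer.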
That’s where I, my team, and people like us come in. You see, I may be an erudite essayist once per month over here, but most of the time I’m a Research Software Engineer. Our task is to make sure good practices from commercial software engineering are being used in research, helping researchers achieve their goals through software. Often that means we write the software ourselves. Sometimes it means promoting practices like Continuous Integration or automated testing. And often it means training academics in programming languages and tools.
Our goal is to build a community of practice across academia, improving the quality of research software everywhere.