The hard truth about AI? It might produce some better software | John Naughton


As you have doubtless noticed, we are in the middle of a feeding frenzy about something called generative AI. Legions of hitherto normal people – and economists – are surfing a wave of irrational exuberance about its transformative potential. It’s the newest new thing.

For anyone suffering from the fever, two antidotes are recommended. The first is the hype cycle monitor produced by consultants Gartner, which shows the technology currently perched on the “peak of inflated expectations”, before a steep decline into the “trough of disillusionment”. The other is Hofstadter’s law, about the difficulty of estimating how long difficult tasks will take, which says that “It always takes longer than you expect, even when you take into account Hofstadter’s law”. Just because a powerful industry and its media boosters are losing their marbles about something doesn’t mean that it will sweep like a tsunami through society at large. Reality moves at a more leisurely pace.

In its Christmas issue, the Economist carried an instructive article entitled “A short history of tractors in English” (itself an understated tribute to Marina Lewycka’s hilarious 2005 novel, A Short History of Tractors in Ukrainian). The article set out to explain “what the tractor and the horse tell you about generative AI”. The lesson was that while tractors go back a long way, it took aeons before they transformed agriculture. Three reasons for that: early versions were less useful than their backers believed; adoption of them required changes in labour markets; and farms needed to reform themselves to use them.

History suggests, therefore, that whatever transformations the AI hype merchants are predicting, they’ll be slower coming than they expect.

There is, however, one possible exception to this rule: computer programming, or the business of writing software. Ever since digital computers were invented, humans have needed to be able to tell them what they wanted the machines to do. Since the machines didn’t speak English, generations of programming languages evolved – machine code, Fortran, Algol, Pascal, C, C++, Haskell, Python and so on. So if you wanted to communicate with the machine, you had to learn to speak Fortran, C++ or whatever, a tedious process for many humans. And programming became a kind of arcane craft, as implied by the title the great Donald Knuth gave to the first book in his seminal five-volume guide to it, The Art of Computer Programming. As the world became digitalised, this craft became industrialised, and was rebadged as “software engineering” to downplay its artisanal origins. But mastery of it remained an arcane and valued skill.

And then along came ChatGPT and the astonishing discovery that as well as composing apparently lucid sentences, it could also write software. Even more remarkable: you could outline a task to it in plain English prompts, and the machine would write the Python code needed to accomplish it. Often the code wasn’t perfect, but it could be debugged by further interaction with the machine. And suddenly a whole new prospect opened – of non-programmers being able to instruct computers to do things for them without having to learn computer-speak.
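To give a flavour of what this looks like in practice: a plain-English prompt along the lines of “write a Python function that counts how often each word appears in a piece of text” might produce something like the following. (This is a hypothetical illustration of the kind of code such a tool generates, not an actual ChatGPT transcript.)

```python
from collections import Counter

def word_frequencies(text):
    """Return a dictionary mapping each lowercased word to its count."""
    words = text.lower().split()
    return dict(Counter(words))

# Example: count the words in a short sentence
print(word_frequencies("the cat sat on the mat"))
# → {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```

The point is not that the code is sophisticated – it isn’t – but that someone who has never learned Python can obtain, run and iteratively refine it through conversation alone.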

In the New Yorker recently, programmer James Somers wrote an elegiac essay about the implications of this development. “Bodies of knowledge and skills that have traditionally taken lifetimes to master are being swallowed at a gulp,” he said. “Coding has always felt to me like an endlessly deep and rich domain. Now I find myself wanting to write a eulogy for it. I keep thinking of Lee Sedol. Sedol was one of the world’s best Go players, and a national hero in South Korea, but is now best known for losing, in 2016, to a computer program called AlphaGo.” To Somers, Sedol seemed “weighed down by a question that has started to feel familiar, and urgent: What will become of this thing I’ve given so much of my life to?”

That sounds a bit OTT to me. Such evidence as we have suggests that programmers are taking to AI assistance like ducks to water. A recent survey of software developers, for example, finds that 70% are using, or are planning to use, AI tools in their work this year and 77% of them have “favourable or very favourable” views of these tools. They see them as ways of increasing their productivity as programmers, speeding up learning and even “improving accuracy” in writing computer code.

This doesn’t look like defeatism to me, but the attitude of professionals who see this technology as “power steering for the mind”, as the saying goes. At any rate, they don’t sound like the horses of the Economist’s story. But just as the tractor eventually transformed agriculture, this technology will eventually transform the way software is developed. In which case software engineers will have to be more like engineers and less like artisans. About time too (says this engineer-cum-columnist).

What I’ve been reading

Smart move?
A terrific blast from Gary Marcus on his Substack blog on the AI companies’ lobbying to be exempted from responsibility for copyright infringement.

Control mechanism
A really thoughtful piece by Diana Enríquez on the Tech Policy Press website on what it’s like to be “managed” by an algorithm.

Off with their heads
A lovely post on Margaret Atwood’s Substack on films about the French Revolution, beginning with Ridley Scott’s Napoleon.
