The Last Scribes

Dec 8, 2025 · Markus Leipe · 6 min read

Imagine you are a scribe in Cologne, Holy Roman Empire, in 1460. You have heard about some invention in Mainz a few years earlier, something about automating your entire work with a new kind of machine. You even get to see one of these new “printed books” and are not impressed. “Look at this kerning; every single one of our novices could do a better job,” you explain to your abbot. “Sure, you can make 200 copies. But who needs 200 copies of the same book? The monastery needs one Bible. Maybe two. This is a solution looking for a problem,” you post on news.scribecombinator.com. “At least you do not have to live to see the slop these people put out now,” you console the heretic as he is marched to his execution.

You would have been wrong, of course. And since this is obviously a post about AI and our jobs, so are we. At the end of 2025, making this analogy feels a lot less prescient than it did at the beginning of the year. Software development seems to have gone through a phase transition: writing your own code will make up less and less of a developer’s job description, and everyone either becomes a manager/architect/central-node-of-the-agent-swarm or gets laid off.

But in science, we are acting like none of this is happening. Of course, every conference you travel to is full of “AI meets <your niche subfield>” sessions, every institute offers a few middling AI workshops, and every social media feed is full of unbearable AI influencers breathlessly blasting their bullshit into every corner. At the beginning of the year I was struggling to explain to colleagues that o1-preview could actually solve some undergraduate math and physics questions reliably; at the end of this year, every second monitor in the office has the institute-mandated ChatGPT clone open permanently. 2025 was the year we accepted that AI is here to stay. And yet, almost nobody around me seems to actually be thinking through the implications.

In SF-speak, you might say we lack “Situational Awareness”, a term popularised by Leopold Aschenbrenner (previously OpenAI) for the understanding that scaling up compute and improving training methods in ways we already know how to do will lead to pretty wild AI capabilities, and, most likely, to Artificial General Intelligence (AGI). If you follow these arguments, this should radically change how you prioritise your career, your life decisions, your finances, and your attention. I read it and instead proceeded to moderately change my financial decisions (buying a few NVIDIA stocks) and my attention (scrolling on AI Twitter for too long), and not much else.

But even if you had Situational Awareness, what would follow from that? For Aschenbrenner, the conclusion was that you should collect giant bags of money in the stock market for as long as other people underestimate how important AI will be. I like the spirit, but lack the friends who would give me the starting capital for a hedge fund, and also the math talent. Everyone and their mother seems to be leading an “AI for Science” initiative right now, but the pace of new model releases means that the next release will do whatever you were aiming for natively, or can re-do your work within two prompts and a coffee break, while you have just wasted half a year trying to post-train <whatever> into the model. Continual Learning and training on real-world scientific experiments are big, open problems on the path towards AI-driven science breakthroughs, but the few coherent proposals for how this could work seem to have found no takers in Germany (or anywhere else, for that matter). It is just extremely unlikely that the correct reaction to a wild year of technological progress is to continue everything exactly as before, only with half of every funding proposal and project report being written by ChatGPT (and then, presumably, also read by ChatGPT on the other side). And yet, we all just carry on, and I am no exception. Just more cynical about it.

But this has got to be me misunderstanding the situation. I am standing at the edge of a new world, with many unpredictable changes and opportunities in front of me, ushering in a new age of technological progress, and my reaction is to be sad that everything I do right now will be meaningless because an AI will do a better job just months later? Really? Is this the best I can do?

I don’t know. Maybe someone else does, but I can’t find them, since they all hang out in a few square kilometers of the Bay Area. It is very frustrating to read the German commentary on AI, stuck between “Oh no, students will maybe continue to cheat on their homework”, “Why don’t we have an AI industry ourselves?” and “Meet The Evil Techbros Behind The AI Industry”. The few AGI-pilled people here are sitting in WhatsApp groups run by Effective Altruists, arguing ineffectively about gender differences or AI Doom. In a way, my closest ideological allies seem to be the (blegh!) LinkedIn AI influencers, who also think AI will be transformational, just in a much stupider version.

I recently went to a multi-day event where ~200 of the most AI-interested people from all walks of life in Jena and the surrounding area met to discuss everything we found most important about AI. The session topics ranged from “SEO, but for Chatbots”, via “Unlock your creativity by typing your shitty prompt into the Canva integration in ChatGPT”, to “How AI is used by Fascists”. The keynote speaker used his talk on “Future AI Scenarios” to waffle about whether it would be like “Terminator”, “2001: A Space Odyssey” or “Her”, without any reference to how the actual systems work and behave these days. I don’t believe half the room had ever heard the term AGI. I had some decent conversations with a few people, but overall came away even more depressed; if this is the highest level of preparedness in German civil society, we are extremely screwed.

In a way, I am still pretty lucky, since robotics still seems limited enough that I can leverage my experimental skills for a few more years of turning mirrors and plugging in cables in some laboratory. But I have lost hope that by my thesis defense I will understand the actual physics of my thesis any better than whatever chatbot I use. I only know what kind of problems the overloaded air conditioning system adds in the summer, how the source misaligns when a colleague opens the wrong door, and so on. It is not nothing, but is it a skill I want to give 110% to improving, every day? Some of the scribes spent the rest of their careers adding the illustrations to otherwise mechanically printed books. Did they feel good about it? Did they feel like they made the most out of the situation?

Markus Leipe
PhD student, Quantum Communication