Linux devs are quietly plugging AI into the kernel’s plumbing

According to ZDNet, the Linux developer community has rapidly shifted from debate to implementation, deeply embedding AI into kernel engineering workflows, with Linus Torvalds now describing himself as “a huge believer” in AI as a maintenance tool. At the Open Source Summit Japan, Linux Kernel Maintainer Summit, and Linux Plumbers conference in Tokyo, developers formalized how large language models will fit into long-term processes like stable backporting and CVE triage. NVIDIA’s Sasha Levin, a stable-kernel maintainer, has already wired LLMs into the thankless jobs of identifying backports and security fixes, using a system that lets multiple models “vote” on candidates. A patch merged for Linux 6.15, credited to Levin but entirely AI-generated, demonstrated the technology’s potential but also contained a subtle mistake, sparking debate about disclosure. The community agrees human accountability is non-negotiable and that some form of AI-use disclosure is needed, but major questions about legal ramifications and error standards remain unresolved.

Augmentation, not replacement

Here’s the thing: Linus Torvalds isn’t interested in AI writing complex kernel code. Not yet, anyway. His vision, and the current practical use, is all about augmentation. Think of AI as a super-powered filter for his famously overflowing inbox. Tools that pre-screen patches, surface issues, and handle the “drudge work” he compared to the shift from assembly to higher-level languages. It’s a survival mechanism. Kernel maintainer burnout is a real, serious problem, and the patch volume just keeps growing. When someone like Shuah Khan says AI can reduce a day’s patch triage to minutes, that’s not a nice-to-have. That’s the difference between a sustainable project and one that grinds its maintainers into dust.

Levin’s projects, AUTOSEL and the in-house CVE workflow, are perfect examples. They don’t make the final call. They act like an indefatigable junior maintainer who never forgets, sifting through mountains of data to present a shortlist to a human. That’s the model. AI handles the scalability problem of an ever-growing codebase, while humans retain the judgment. It’s a pragmatic division of labor that’s already shipping, as seen with the AI-generated hash-table patch in 6.15 and the git-resolve script in 6.16.
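
To make that “voting” idea concrete, here’s a deliberately tiny C sketch of the pattern. It is not Levin’s actual AUTOSEL tooling; the commit IDs, model count, and majority threshold are all invented for illustration. The shape is what matters: several models each nominate candidates, and only those that clear a threshold get surfaced to a human.

```c
/* Toy illustration of multi-model "voting" on backport candidates.
 * NOT the real AUTOSEL code; commit IDs, model count, and the
 * majority threshold are made up for the example. */
#include <stdio.h>

#define NUM_MODELS     3
#define NUM_CANDIDATES 4

struct candidate {
	const char *commit;      /* abbreviated commit id (hypothetical) */
	int votes[NUM_MODELS];   /* 1 = this model flags it for backport */
};

int main(void)
{
	struct candidate c[NUM_CANDIDATES] = {
		{ "a1b2c3d", { 1, 1, 1 } },
		{ "d4e5f6a", { 1, 0, 1 } },
		{ "0badc0d", { 0, 1, 0 } },
		{ "feedbee", { 0, 0, 0 } },
	};

	printf("Shortlist for human review:\n");
	for (int i = 0; i < NUM_CANDIDATES; i++) {
		int tally = 0;

		for (int m = 0; m < NUM_MODELS; m++)
			tally += c[i].votes[m];

		/* Majority vote: the models only nominate; a maintainer
		 * still makes the final call on everything printed here. */
		if (tally * 2 > NUM_MODELS)
			printf("  %s (%d/%d models)\n",
			       c[i].commit, tally, NUM_MODELS);
	}
	return 0;
}
```

In the real workflow the votes would come from LLMs reading commit messages and diffs rather than a hard-coded table, but the human gate at the end is the point.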

The devil’s in the disclosure

But that hash-table patch is also the cautionary tale. It dropped a performance-critical `__read_mostly` attribute: a small, subtle error that a human reviewer might catch, but one the AI simply didn’t reason about. The bigger firestorm, though, was about honesty. The fact that it was AI-generated wasn’t disclosed upfront, leading to a wave of criticism on LWN that it violated the spirit of the Developer’s Certificate of Origin. Torvalds himself said he would have scrutinized it more carefully had he known.
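
For anyone who hasn’t run into it, `__read_mostly` is a kernel annotation (defined in include/linux/cache.h) that groups rarely-written globals into their own section so they don’t share cache lines with write-heavy data. The sketch below mimics it in a standalone C file; the variable names are hypothetical, not from the actual 6.15 patch, and the kernel’s real section name is `.data..read_mostly`.

```c
/* Standalone mimic of the kernel's __read_mostly annotation.
 * Hypothetical example -- not the code from the 6.15 hash-table patch. */
#include <stdio.h>

/* The kernel's version (include/linux/cache.h) uses the section
 * ".data..read_mostly"; a plain userspace name works for illustration. */
#define __read_mostly __attribute__((__section__(".data.read_mostly")))

/* Annotated: the linker groups this with other rarely-written globals,
 * keeping it off cache lines that hot, frequently-written data bounce. */
static unsigned int ht_shift __read_mostly = 8;

/* Unannotated: still perfectly correct code, it just lands in ordinary
 * .data. Dropping the tag changes nothing functionally, which is exactly
 * why the regression is easy to miss in review. */
static unsigned int ht_shift_plain = 8;

int main(void)
{
	printf("%u %u\n", ht_shift, ht_shift_plain);
	return 0;
}
```

That’s why the error was subtle: nothing broke, the kernel just lost a small performance guarantee.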

So that’s settled. There will be a tag or disclosure for AI-derived code. But that’s just the first of many thorny issues. Does AI make *different* kinds of mistakes? Should we hold its output to a *higher* standard? And then there’s the legal elephant in the room. Copyright law around AI is utterly unsettled, with major lawsuits pending. Given that these coding tools were trained on open-source code of all licenses, injecting their output into the kernel is a genuine legal gamble. The community is building new plumbing with potentially proprietary, legally ambiguous materials.

The BitKeeper ghost and a missing ladder

This dependency risk is what worries people like Jonathan Corbet. He explicitly invoked the BitKeeper disaster, where Linux’s reliance on a proprietary SCM system blew up in its face, leading to the creation of Git. If the review process becomes dependent on a proprietary AI service that changes its terms or disappears, the project could be in crisis. It’s a stark reminder that open-source foundations need open tools.

There’s another, more human concern. Dan Williams talks about telling high schoolers to “show your work.” AI is the ultimate machine for skipping that step: “the AI told me it’s correct.” As research scientist Stefania Druga pointed out, if AI automates all the junior-level tasks (the bug triage, the simple patches, the boilerplate), how do new developers learn? How do they gain the experience to become senior maintainers? The kernel community isn’t just maintaining code; it’s maintaining a pipeline of human expertise. Automating the entry point could starve that pipeline. That’s a long-term problem, but a vital one.

Plumbing, not bolt-ons

So where does this leave us? The trend is clear. AI isn’t a flashy gimmick for Linux anymore. It’s becoming part of the plumbing. The focus is on mission-critical, behind-the-scenes work: managing the crushing patch load, automating stable and security workflows, and using these models as pattern-matching machines to capture institutional knowledge. The discussions at OSS Japan and the Maintainer Summit were about formalizing this reality, not debating its existence.

Will AI eventually write big chunks of the kernel? Maybe. But that’s a future question. Right now, the immediate questions hinge less on technical prowess and more on copyright law and community policy. The kernel developers are famously pragmatic. They’re using the tools that work today to solve today’s problems, even as they nervously eye the legal and educational pitfalls ahead. The integration is already deeper than most people realize, and it’s only going to get more fundamental. The age of AI-as-assistant is here, and it’s compiling.
