Demo of a fully automated patch-set testing workflow using opencode, the virtme-ng MCP server and a local LLM (gpt-oss:120b) served via Ollama. The AI agent (opencode) runs on my laptop, while Ollama is hosted on a DGX Spark in the same subnet.
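As a quick sanity check of this setup, the remote Ollama instance can be queried over its REST API before pointing opencode at it. The sketch below is only illustrative: the `dgx-spark` hostname and the default Ollama port (11434) are assumptions, not details taken from the demo.

```python
# Minimal sketch: verify that the Ollama instance on the LLM host is
# reachable and already serves gpt-oss:120b before wiring up opencode.
# The "dgx-spark" hostname and default port 11434 are assumptions.
import json
import urllib.request

OLLAMA_URL = "http://dgx-spark:11434"  # assumed hostname of the DGX Spark

# /api/tags lists the models available on the Ollama host.
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = [m["name"] for m in json.load(resp)["models"]]

print("available models:", models)
assert any(name.startswith("gpt-oss:120b") for name in models), \
    "gpt-oss:120b is not available on the Ollama host"
```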
In this workflow, the AI agent applies a patch set from lore.kernel.org to a local Linux kernel Git repository, builds the kernel on a remote host (nv-builder) using virtme-ng, boots the rebuilt kernel on my laptop inside a virtme-ng session, and runs the sched_ext kselftests to validate the changes.
Using the virtme-ng MCP server, the AI agent executes the entire process autonomously and produces a final summary of the results, effectively automating patch-set testing and confirming, based on the kselftest results, that no regressions were introduced.
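For reference, the manual equivalent of the steps the agent automates looks roughly like the sketch below. It is a hedged approximation, not the exact commands used in the demo: the kernel tree path and the lore message-id are placeholders, and it assumes b4 is used to apply the series, virtme-ng's --build-host option targets nv-builder, and the sched_ext kselftests are invoked through their kselftest Makefile.

```python
# Rough sketch of the manual steps the agent automates, for comparison.
# Paths, the message-id, and the exact selftest invocation are assumptions.
import os
import subprocess

KERNEL_TREE = os.path.expanduser("~/src/linux")  # assumed local kernel git tree
MSG_ID = "message-id-of-the-series"              # placeholder lore.kernel.org message-id

def run(cmd: str) -> None:
    """Run a shell command inside the kernel tree, aborting on failure."""
    subprocess.run(cmd, shell=True, cwd=KERNEL_TREE, check=True)

# 1. Fetch the patch set from lore.kernel.org and apply it to the tree.
run(f"b4 shazam {MSG_ID}")

# 2. Build the patched kernel on the remote build host via virtme-ng.
run("vng --build --build-host nv-builder")

# 3. Boot the rebuilt kernel locally with virtme-ng and run the
#    sched_ext kselftests inside the guest.
run("vng -- make -C tools/testing/selftests/sched_ext run_tests")
```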
All components operate entirely within my local subnet: no data is sent to the internet during the workflow.