For decades, the computing landscape has been defined by a fundamental architectural split: the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). The CPU, a versatile "jack-of-all-trades," excels at sequential tasks, complex decision-making, and running operating systems. The GPU, a "master of one," is a parallel processing powerhouse built to handle thousands of simple calculations simultaneously, originally for rendering graphics and now for AI and scientific computing. What if you could erase that line? What if you could run a CPU—its entire instruction set, its operating system, its applications—directly on GPU hardware?
This is not a hypothetical question from a distant future. It is the core premise of nCPU, a provocative open-source research project by developer Robert C. Price that is quietly challenging one of computing's most entrenched paradigms. Hosted on GitHub, nCPU is an experiment in building a CPU that is "entirely implemented to run on a GPU." It's a concept so counterintuitive that it forces a re-examination of what we think we know about processors, efficiency, and the very trajectory of Moore's Law.
Key Takeaways
- Architectural Heresy: nCPU is a software project that implements a CPU microarchitecture designed to be executed on GPU shader cores, not traditional silicon CPU circuits.
- Proof of Concept, Not Product: The project is a functional experiment, capable of running a Linux kernel and basic programs, demonstrating the technical feasibility of the idea.
- Challenges the Status Quo: It questions the necessity of separate, specialized CPU silicon in an era where GPUs are becoming increasingly general-purpose and dominant in compute-heavy fields.
- Parallelism Paradox: A CPU is an inherently sequential machine, so running one on hardware built for massive parallelism inverts each architecture's strengths — making the experiment as much a probe of the GPU's general-purpose limits as a practical design.
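To make the core idea concrete, the heart of any software-implemented CPU is a fetch-decode-execute loop over an instruction stream. The sketch below is purely illustrative — a toy register machine in Python, not nCPU's actual code — but nCPU's premise is that a loop of this shape can live in GPU shader code instead of on a host CPU. The instruction names, opcodes, and register layout here are invented for the example.

```python
# Illustrative sketch only: a minimal software CPU (fetch-decode-execute loop).
# nCPU implements this kind of loop in GPU shader code; the toy instruction
# set below (LOADI/ADD/HALT) is hypothetical and not taken from the project.

def run(program, regs=None):
    """Interpret a toy 3-register machine until HALT; return the registers."""
    regs = regs if regs is not None else [0, 0, 0]  # hypothetical register file
    pc = 0                                          # program counter
    while True:
        op, *args = program[pc]      # fetch and decode the next instruction
        pc += 1
        if op == "LOADI":            # LOADI rd, imm  -> regs[rd] = imm
            regs[args[0]] = args[1]
        elif op == "ADD":            # ADD rd, rs1, rs2 -> regs[rd] = rs1 + rs2
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "HALT":           # stop and return machine state
            return regs

# Example: load 2 and 3, add them into register 2.
prog = [("LOADI", 0, 2), ("LOADI", 1, 3), ("ADD", 2, 0, 1), ("HALT",)]
print(run(prog))  # [2, 3, 5]
```

The loop is strictly sequential — each instruction depends on the state left by the previous one — which is exactly the "Parallelism Paradox" above: mapping this serial dependency chain onto thousands of GPU cores is the unnatural fit that makes the project interesting.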