Next.js rebuilt in a week, and why I think it matters
In late February, an engineer used an AI coding assistant to recreate the core behavior of Next.js in roughly one week. The result, called vinext, mirrors the Next.js API but runs on top of Vite rather than the original toolchain. Early benchmarks claimed up to 4x faster builds and about 57 percent smaller bundles.
The raw numbers were not what shook developers. The speed and the method did. With about $1,100 in AI tokens and a focused human in the loop, the core of a framework that powers a huge slice of React apps was ported to a new build system in days, not months. That is a moment worth pausing on.
I see this as more than a flashy demo. It is a clear signal about what modern AI can do with open documentation, tests, and strong human guidance. Whether you are excited or uneasy, it changes how we think about building, maintaining, and defending software.
What exactly was rebuilt
Next.js is a widely used React framework created and maintained by Vercel. It provides routing, server and client rendering, middleware, file conventions, and a developer experience that teams rely on from startups to large enterprises and even public sector sites.
Vinext set out to implement a compatible Next.js API on top of Vite. The goal was not to copy every edge case or ecosystem detail. It was to deliver core routing, rendering flows, middleware behavior, and the everyday conventions that developers expect when they start a Next.js app.
It was an experiment, not a production replacement. Even its authors emphasized that it had not carried heavy production traffic. But for many in the community, the scope and speed were the story.
How AI did most of the heavy lifting
The process looked different from traditional framework development. The engineer fed documentation, tests, and example code into an AI-assisted workflow. The model generated modules, proposed implementations, and iterated quickly as the human reviewed, corrected, and re-scoped the work.
Each loop produced working pieces: routing logic here, a middleware hook there, SSR and hydration pathways stitched together faster than a small team could typically manage. The human acted as planner, critic, and integrator, while the AI wrote the bulk of the boilerplate and glue.
It felt less like hand-coding a framework and more like orchestrating a machine that writes and refines one.
This is the pattern I expect to see more often. AI handles the repetitive and the obvious, while humans direct architecture, enforce standards, and make the tradeoffs that shape a product.
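To make that division of labor concrete, here is a minimal sketch of the generate/test/iterate loop, with the model call mocked out. `generateCandidate` and `runTests` are hypothetical stand-ins, not the actual workflow used for vinext; in a real setup the generator would prompt a model with docs, failing tests, and the previous attempt.

```typescript
// Sketch of an AI-in-the-loop generation cycle. Everything here is a
// hypothetical stand-in, not code from the vinext project.

type Candidate = { source: string; passes: boolean };

// Hypothetical model call: in practice this would send docs, specs, and
// the last round of test failures to an AI coding assistant.
function generateCandidate(attempt: number): Candidate {
  // Mock: pretend the third attempt finally satisfies the suite.
  return { source: `// module v${attempt}`, passes: attempt >= 3 };
}

// Stand-in for running a real conformance suite against the candidate.
function runTests(c: Candidate): boolean {
  return c.passes;
}

// Human-supervised loop: generate, test, feed failures back, repeat.
let accepted: Candidate | null = null;
for (let attempt = 1; attempt <= 5 && !accepted; attempt++) {
  const candidate = generateCandidate(attempt);
  if (runTests(candidate)) {
    accepted = candidate; // human review would still precede any merge
  }
}

console.log(accepted ? "candidate accepted for review" : "gave up after 5 attempts");
```

The point of the loop is not the mock logic; it is that the promotion decision is mechanical (tests) while the judgment calls (scope, architecture, merge) stay with the human.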
Why this matters for open source
For years, frameworks were moats. Codebases, docs, and ecosystems formed defensible positions. Today, AI trained on abundant public code and paired with thorough specs can recreate large portions of functionality with surprising speed.
That does not eliminate the value of open source frameworks. It shifts where the defensibility lives. If the code itself can be reimplemented quickly, the durable advantages move to:
- Community and governance, which sustain trust, velocity, and quality
- Documentation and learning paths, which reduce time to productivity
- Integrations and plugins, which unlock real-world use cases
- Operational polish, from reliability to security to upgrade tooling
- Backwards compatibility and long-term stewardship
The headline is not that frameworks are obsolete. It is that the center of gravity for value is already drifting from code to ecosystem.
The limits, and what still needs humans
It is important to keep the experiment in perspective. Shipping a compatible version 1 is not the same as carrying millions of users through years of growth, security threats, and breaking changes. Maintenance is the long game, and it is where the hidden costs live.
AI can draft code rapidly, but subtle behaviors emerge under stress. Edge cases, memory leaks, race conditions, and abuse paths often surface only in production. Teams still need the judgment to triage issues, design safe migrations, and keep contracts stable.
Benchmarks are another caution. A smaller bundle or faster build in a narrow test may not hold when you add real-world plugins, polyfills, and infrastructure layers. Sustained performance work is as much profiling and strategy as it is code generation.
Practical ways to use AI to rebuild or port open source
Despite those limits, I think this approach opens useful paths for teams. Not to replace mature frameworks outright, but to experiment, reduce risk, and create options.
- Port core APIs to alternative toolchains. Create compatibility layers that let you run familiar patterns on different bundlers, runtimes, or clouds.
- Extract and isolate critical features. Rebuild only the routing, middleware, or SSR you need, rather than hauling in an entire framework.
- Prototype performance-focused forks. Test whether a different build pipeline or rendering strategy yields real gains in your domain.
- Reduce vendor lock-in. Build adapters that make moving between platforms less costly.
- Modernize legacy systems. Use AI to map old conventions to current APIs, then refine with targeted human review.
- Strengthen testing. Have AI generate exhaustive test matrices around behaviors you must preserve before and after a port.
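As a flavor of what "extract and isolate" can look like, here is a minimal sketch (my own illustration, not code from vinext or Next.js) that converts a Next.js-style file path into a matchable route pattern. The pages-directory conventions it handles (`index`, `[slug]`, `[...path]`) are real; the helper itself is hypothetical.

```typescript
// Minimal sketch: map Next.js-style file routes to URL patterns.
// Illustrative only; not taken from vinext or Next.js internals.

/** Convert a pages-directory file path into a URL pattern string. */
function fileToRoutePattern(filePath: string): string {
  return (
    "/" +
    filePath
      .replace(/^pages\//, "")            // drop the pages/ prefix
      .replace(/\.(tsx?|jsx?)$/, "")      // drop the file extension
      .replace(/\/?index$/, "")           // index files map to their directory
      .replace(/\[\.\.\.(\w+)\]/g, "*$1") // catch-all segments: [...path] -> *path
      .replace(/\[(\w+)\]/g, ":$1")       // dynamic segments: [slug] -> :slug
  );
}

console.log(fileToRoutePattern("pages/index.tsx"));          // "/"
console.log(fileToRoutePattern("pages/blog/[slug].tsx"));    // "/blog/:slug"
console.log(fileToRoutePattern("pages/docs/[...path].tsx")); // "/docs/*path"
```

A helper this small is exactly the kind of surface you can carve out and verify independently, rather than hauling in the whole framework.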
Here is a simple playbook I would follow:
- Define the scope. Choose the smallest useful surface area to replicate first.
- Assemble artifacts. Collect official docs, RFCs, type definitions, conformance tests, and representative apps.
- Pick constraints. Lock in the target runtime, bundler, language level, and performance budgets.
- Generate in tight loops. Use AI to propose modules, wire them into a test harness, and iterate on failures fast.
- Gate with tests and fixtures. Treat passing conformance and real-world app fixtures as your promotion criteria.
- Review for security and licensing. Run static analysis, dependency audits, and license checks on every AI output.
- Document decisions. Capture deltas from the upstream behavior so users know what to expect.
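The gating step in that playbook can be sketched as a tiny conformance harness: golden fixtures captured from the upstream framework, run against the candidate implementation, with promotion only on full parity. The fixture data and `matchRoute` below are hypothetical stand-ins for a real port.

```typescript
// Minimal conformance-gate sketch. Fixtures and matchRoute are
// hypothetical stand-ins, not real vinext or Next.js behavior tables.

type Fixture = { path: string; expected: string | null };

// Golden fixtures: behavior you must preserve, captured from upstream.
const fixtures: Fixture[] = [
  { path: "/blog/hello", expected: "blog/[slug]" },
  { path: "/about", expected: "about" },
  { path: "/missing", expected: null },
];

// Candidate implementation under test (stand-in for the ported router).
function matchRoute(path: string): string | null {
  if (path === "/about") return "about";
  if (/^\/blog\/[^/]+$/.test(path)) return "blog/[slug]";
  return null;
}

// Gate: promotion requires every fixture to pass, with failures logged.
function runGate(): boolean {
  const failures = fixtures.filter(f => matchRoute(f.path) !== f.expected);
  for (const f of failures) {
    console.error(`FAIL ${f.path}: got ${matchRoute(f.path)}, want ${f.expected}`);
  }
  return failures.length === 0;
}

console.log(runGate() ? "conformance gate passed" : "conformance gate failed");
```

Treating the fixture file, not a reviewer's intuition, as the promotion criterion is what makes tight AI generation loops safe to run quickly.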
What this means for frameworks and moats
If AI can regenerate much of a framework in days, then defensibility shifts to what is hardest to copy. I expect more emphasis on:
- Developer experience end to end, from create flows to debugging and migration tools
- Hosted operations, including observability, edge distribution, and automated scaling
- Formal specs, so implementations can compete on quality without forking the ecosystem
- Compatibility guarantees, which reduce fear of upgrades
- Paid support and SLAs, which enterprises rely on more than raw code
Open source wins when multiple implementations improve the commons. The risk is fragmentation. This is where standards, working groups, and conformance suites can keep momentum aligned while still inviting experimentation.
Risks, ethics, and licensing
The fastest way to sour this trend is to ignore licenses or blur attribution. Before you unleash an AI to replicate an open project, verify the licenses of input materials and the compatibility of your planned output. Respect trademarks and naming to avoid confusion.
On the technical side, AI can invent plausible but wrong behaviors. Protect yourself with:
- Strict conformance tests and golden fixtures
- Security review for injection paths, SSRF, deserialization, and supply chain concerns
- Reproducible builds and SBOMs so you know exactly what shipped
- Clear deprecation policies for any deltas from upstream behavior
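One concrete shape those protections can take: a golden assertion that server-rendered output escapes untrusted input, so an injection path cannot slip in during regeneration. The `escapeHtml` and `renderGreeting` functions below are illustrative stand-ins, not part of any real framework.

```typescript
// Minimal sketch of one injection-path check for a conformance suite:
// server-rendered output must escape untrusted input. Illustrative only.

function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Stand-in for an SSR render function in a ported framework.
function renderGreeting(userName: string): string {
  return `<h1>Hello, ${escapeHtml(userName)}</h1>`;
}

const hostile = "<script>alert(1)</script>";
const html = renderGreeting(hostile);

// Golden assertion: raw tags from user input must never reach the output.
console.log(html.includes("<script>") ? "UNSAFE" : "escaped ok");
```

Checks like this are cheap to write and catch exactly the "plausible but wrong" behavior an AI can introduce when it regenerates a rendering path.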
Ethically, give credit where due, contribute improvements upstream when possible, and disclose material differences so users do not assume drop-in compatibility where it does not exist.
The new cadence of software
I do not think this moment means engineers are obsolete. I think it means the cadence changes. The draft phase compresses. More options appear sooner. The hard parts move to choosing the right option, integrating it safely, and stewarding it over time.
That has downstream effects. Product management must evaluate more competing approaches faster. QA and security must scale automation to keep up. Finance must plan for AI token spend as a real line item, not a one-off experiment.
The competitive game also shifts. Feature parity is easier to achieve. Differentiation comes from stability, performance under load, migration experience, and how well the ecosystem serves real teams.
So, can we rebuild any open source project like Next.js with AI?
We can rebuild meaningful slices of many projects, especially when APIs are well specified and tests are available. I would not claim universal parity across all edge cases or ecosystems. But as the Next.js experiment showed, recreating core surfaces is now practical at surprising speed.
That does not trivialize the originals. It highlights how valuable their communities, docs, and operational wisdom are. It also empowers teams to explore alternatives, reduce risk, and push performance without waiting for upstream roadmaps.
Used well, this is a lever for openness. Used carelessly, it is a path to fragmentation and regressions. The difference is discipline.
Key takeaways
- An AI-assisted effort recreated core Next.js APIs on Vite in about a week, spending roughly $1,100 in tokens and claiming faster builds and smaller bundles.
- The surprise is the speed and method. A human guided, reviewed, and integrated, while the AI generated most of the code.
- Defensibility shifts from code to ecosystem. Community, docs, integrations, and operational maturity matter most.
- Production readiness still takes time. Edge cases, security, and maintenance are where costs accumulate.
- Teams can use AI to port or extract open source features to reduce lock-in, test performance ideas, and modernize systems, with strong testing and licensing discipline.
- Standards and conformance will be key to avoid fragmentation while enabling healthy competition among implementations.
I am optimistic, with caution. The ability to rebuild complex open source projects quickly is real. The responsibility to do it thoughtfully, ethically, and safely is just as real. If we meet both sides, we get more choice, faster learning, and a stronger ecosystem.

Written by
Tharun P Karun
Full-Stack Engineer & AI Enthusiast. Writing tutorials, reviews, and lessons learned.