We moved Railway's frontend off Next.js. Builds went from 10+ mins to under 2
TL;DR Highlight
A practical account of Railway migrating its production frontend from Next.js to Vite + TanStack Start, cutting build times from over 10 minutes to under 2 minutes. For teams that deploy multiple times a day, build time translates directly into development speed.
Who Should Read
Frontend developers running client-centric dashboards or SPAs on Next.js who are watching build times grow, or whose apps don't fit a server-first paradigm. Especially relevant for teams that deploy multiple times a day and want to remove build bottlenecks.
Core Mechanics
- Railway's production frontend (the dashboard, the canvas, all of railway.com) was fully migrated from Next.js to Vite + TanStack Router, and the migration was completed in just two PRs with no downtime.
- The old Next.js builds took over 10 minutes, about 6 of which were Next.js's own processing time, with nearly half of that spent stuck in the 'finalizing page optimization' stage. For a team deploying multiple times a day, this was not a minor inconvenience but a tax on every iteration.
- The Railway app is a rich, state-heavy client interface: the dashboard is driven by client state and the canvas is real-time and WebSocket-heavy, which is fundamentally at odds with Next.js's server-first design.
- Trying to share layouts while staying on the Pages Router meant stacking custom abstractions on top of the framework; every layout pattern was a workaround rather than a framework feature. Switching to the App Router would have solved some of this, but it enforces a server-first paradigm, which also didn't fit.
- The reasons for choosing TanStack Start + Vite were fourfold: type-safe routing with automatic inference of route and search parameters; pathless layout routes that treat layouts as a first-class concept; a fast development loop with near-instant feedback thanks to HMR; and the flexibility to apply SSR selectively where it is needed, such as on marketing pages (see the routing sketch after this list).
- The migration was split into two stages. PR 1 replaced Next.js dependencies such as next/image, next/head, and next/router with native browser APIs or framework-agnostic alternatives, without touching the framework itself. PR 2 then migrated over 200 routes.
- Nitro was added as the server layer, and next.config.js was replaced with Nitro configuration, consolidating over 500 redirects, security headers, and caching rules in one place (see the config sketch after this list). Node.js APIs that Next.js had polyfilled (Buffer, url.parse, etc.) were also swapped for browser-native alternatives, leaving cleaner code.
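The list above mentions pathless layout routes and type-safe routing; here is a minimal sketch of what that looks like in TanStack Router's file-based routing. The file names, route paths, and search-parameter shape are illustrative, not Railway's actual code.

```tsx
// routes/_dashboard.tsx — a pathless layout route: the leading underscore
// means the layout wraps its children without adding a URL segment.
import { createFileRoute, Outlet } from '@tanstack/react-router'

export const Route = createFileRoute('/_dashboard')({
  component: DashboardLayout,
})

function DashboardLayout() {
  return (
    <div className="dashboard-shell">
      {/* shared nav/sidebar lives here once, instead of per-page wrappers */}
      <Outlet />
    </div>
  )
}
```

```tsx
// routes/_dashboard/project.$projectId.tsx — path params and search params
// are inferred, so <Link> and useNavigate are type-checked at every call site.
import { createFileRoute } from '@tanstack/react-router'

export const Route = createFileRoute('/_dashboard/project/$projectId')({
  validateSearch: (search: Record<string, unknown>): { tab: 'metrics' | 'logs' } => ({
    tab: search.tab === 'logs' ? 'logs' : 'metrics',
  }),
  component: ProjectPage,
})

function ProjectPage() {
  const { projectId } = Route.useParams() // typed as string
  const { tab } = Route.useSearch()       // typed as 'metrics' | 'logs'
  return <div>{projectId} · {tab}</div>
}
```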
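And a minimal sketch of what consolidating redirects, security headers, and caching into Nitro route rules could look like; the paths and header values are placeholders, not Railway's actual configuration.

```ts
// nitro.config.ts — route rules replace what was spread across next.config.js
import { defineNitroConfig } from 'nitropack/config'

export default defineNitroConfig({
  routeRules: {
    // one of the ~500 redirects, now declared alongside the rest
    '/blog/old-post': { redirect: { to: '/blog/new-post', statusCode: 301 } },

    // security headers applied to every route
    '/**': {
      headers: {
        'X-Frame-Options': 'DENY',
        'Strict-Transport-Security': 'max-age=63072000; includeSubDomains',
      },
    },

    // long-lived caching for hashed static assets
    '/assets/**': {
      headers: { 'Cache-Control': 'public, max-age=31536000, immutable' },
    },
  },
})
```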
Evidence
- A team that went through a similar migration shared its experience: consolidating a Next.js landing page and a TanStack Router SPA into a single Vite + TanStack Start app cut build time from 12 minutes to under 2, and it no longer made sense to fight Next.js's server-first assumptions in a client-centric, real-time state app.
- Commenters reported that Next.js builds are particularly slow on Railway's own platform: because Railway is container-based, a build that takes 2 minutes on Vercel reportedly took 12 minutes on Railway, while deploying directly to a VPS took about 20 seconds. This adds context to Railway's decision to migrate.
- Some worried that developers using AI coding tools will be pulled even more strongly toward Next.js/Vercel, since LLMs are deeply familiar with Next.js and the Vercel ecosystem and Vercel is investing aggressively in LLM tool integration.
- There was also criticism that 2 minutes is still too long, with comments like 'a 2-minute build is still ridiculous' and the more cynical 'amazing that 10 minutes was normal. How far we've fallen'.
- It was pointed out that the original article does not say which Next.js version was used or whether Turbopack was enabled; since Turbopack could change the build times, the baseline for the comparison is unclear.
How to Apply
- If your Next.js app is client-centric (an SPA, a dashboard, or heavy on real-time features) and builds take 5 minutes or more, the two-PR approach is worth copying: replace Next.js-specific APIs such as next/image, next/head, and next/router with framework-agnostic alternatives in the first PR, then swap out the framework itself in the second. This keeps each step reviewable and reduces risk (a minimal head-management stand-in is sketched after this list).
- If you are managing over 500 redirects, security headers, and caching rules in next.config.js, you can move them into a Nitro server layer and consolidate them in one place, getting both configuration unification and code cleanup, as in Railway's case.
- If you are building layouts on top of the Pages Router with custom abstractions, TanStack Router's pathless layout routes can replace those workarounds, and you also get file-system-based route generation and type-safe routing.
- If you need image optimization without next/image, you can use edge image optimization from a CDN such as Fastly, as Railway did, or an external service like Cloudflare Image Resizing to get the same effect without a framework dependency (see the URL helper sketch after this list).
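As a concrete example of the "replace Next.js APIs first" step, here is a minimal client-side stand-in for next/head, assuming the pages involved do not need server-rendered meta tags; the function name is hypothetical, and on TanStack Start you would more likely use the router's built-in head/meta options.

```ts
// head.ts — a framework-agnostic replacement for the common next/head uses:
// setting the document title and the description meta tag from client code.
export function setPageMeta(title: string, description?: string): void {
  document.title = title
  if (description) {
    let tag = document.querySelector<HTMLMetaElement>('meta[name="description"]')
    if (!tag) {
      tag = document.createElement('meta')
      tag.name = 'description'
      document.head.appendChild(tag)
    }
    tag.content = description
  }
}

// Usage: call it when a route's component mounts, e.g.
// setPageMeta('Project settings', 'Manage your project configuration')
```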
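And a sketch of serving optimized images without next/image by building CDN resize URLs; the query parameters follow Fastly Image Optimizer conventions, and cdn.example.com plus the helper name are placeholders.

```ts
// image.ts — build a CDN image-optimization URL instead of using <Image>.
type ImageOpts = { width: number; quality?: number; format?: 'webp' | 'avif' }

export function cdnImageUrl(path: string, opts: ImageOpts): string {
  const params = new URLSearchParams({
    width: String(opts.width),
    quality: String(opts.quality ?? 75),
    format: opts.format ?? 'webp',
  })
  return `https://cdn.example.com${path}?${params}`
}

// Usage: a plain <img> tag replaces next/image
// <img src={cdnImageUrl('/hero.png', { width: 1200 })} alt="Hero" loading="lazy" />
```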