We rewrote JSONata with AI in a day, saved $500k/year
TL;DR Highlight
Reco saved $500K annually by rewriting their Node.js-based JSONata evaluation pipeline in Go using Claude AI — but the HN community fired back with criticism: 'Why did you let this linger so long?' and 'Why didn't you use an existing Go library?'
Who Should Read
Backend engineers whose data pipelines waste money on cross-language RPC calls, or engineers considering using AI code generation tools to replace legacy components.
Core Mechanics
- Reco's security policy engine evaluates JSONata expressions against billions of events, but since JSONata's reference implementation is in JavaScript, it couldn't be used directly in Go services. They ended up running a dedicated Node.js jsonata-js pod on Kubernetes and having Go services call it via RPC — an architecture they maintained for years.
- This architecture required a round-trip of serialization → network transfer → evaluation → result serialization → return for every event, costing roughly $300K per year in compute alone, with costs growing as customers and detection rules increased.
- The solution was to port JSONata directly to Go. The approach mirrored Cloudflare's rapid rewrite strategy: migrate the official jsonata-js test suite to Go, then implement the evaluator until all tests pass.
- Using Claude as the AI tool, the rewrite was completed in a single day at a cost of roughly $400 in Claude tokens. Switching to Go inline execution eliminated RPC overhead, bringing the jsonata-js pod cost from $25K/month down to zero.
- Total savings were reported as $500K annually, which appears to include not only compute costs ($300K) but also the deployment complexity and operational overhead of running two language stacks simultaneously.
- JSONata's reference JavaScript implementation is approximately 5,500 lines, while the AI-generated Go implementation is approximately 13,000 lines — the increase likely reflects the effort of expressing JavaScript's dynamic characteristics and JSONata's JS-based grammar nuances in Go.
- The rewritten Go implementation was validated against the official JSONata test suite, and after production deployment the dramatic cost reduction provided immediate internal proof of ROI.
Evidence
- The core criticism wasn't the "AI rewrite" itself, but rather "why was this left alone for so long?" The fact that the rewrite cost only $400 in Claude tokens implies the codebase wasn't that large, meaning an engineer could have ported it manually at comparable effort. Many commenters expressed disbelief at years of wasted spending at roughly $300K/year, the equivalent of one full-time employee.
- Critics also pointed out that two Go JSONata implementations already existed, yet Reco never explained why they didn't use them. One of those, `github.com/jsonata-go/jsonata`, was itself built with Claude Code and Codex in early 2025, passes the official test suite, and runs in production, as confirmed by a contributor who commented on the post. Many felt it was a shortcoming for an engineering blog not to explain why existing libraries were unsuitable.
- The benchmarking approach also came under fire: measuring performance at the app level captures the effect of eliminating RPC overhead, not the actual performance of the Go JSONata implementation itself. Without an isolated, apples-to-apples comparison of the JS and Go implementations, commenters worried that the AI-generated implementation might actually be slower.
- Long-term maintainability of AI-generated code was another concern. The JSONata open-source project currently has 15 open PRs and 150 open issues, and Reco now owns a self-maintained fork of 13,000 lines of Go code. Commenters questioned how upstream changes would be tracked and whether that maintenance cost was factored into the savings calculation.
- There were also cynical predictions that posts like this will become more common: basic architectural fixes reframed as AI wins, followed by a vicious cycle of shipping AI-generated code without fully understanding it and accumulating new bugs over time. One commenter warned of a future where "code output increases 100x and token costs double developer salaries, but actual impact remains minimal."
How to Apply
- If your Go services call out to a Node.js, Python, or other language runtime via RPC or a sidecar, start by calculating the monthly compute cost of that component. If the annual cost exceeds tens of thousands of dollars, it's worth exploring an inline port using an AI coding tool like Claude Code or Cursor. The key prerequisite: secure the official test suite before porting so you have a clear standard for verifying behavioral equivalence.
- If you need to use JSONata in a Go pipeline, evaluate `github.com/jsonata-go/jsonata` (ported with AI in early 2025, passes the official test suite, production-verified) before rewriting from scratch like Reco did. If an existing library meets your requirements, you can use it without taking on maintenance burden.
- When using AI to rewrite a legacy component, follow this sequence: port the official test suite to the target language, have the AI implement until all tests pass, then run isolated benchmarks to verify performance before production deployment. The reason the benchmarking in this blog post drew criticism is precisely that it only looked at system-level metrics without isolated verification.
- When forking an open-source library with AI-generated code, define your maintenance strategy upfront: either designate someone responsible for periodically syncing upstream updates, or explicitly decide to freeze features at the fork version and manually apply only security patches. Failing to do this will create technical debt down the line.
Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to push performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from scratch, without frameworks, and feel out the performance limits of Apple Silicon.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The core is a structure combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit to avoid fsync's metadata overhead.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and to re-download it even after deletion. Possible GDPR violations and the environmental cost of applying this across billions of devices have been raised.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.