Show HN: 1-Bit Bonsai, the First Commercially Viable 1-Bit LLMs
TL;DR Highlight
PrismML has released the Bonsai LLM series (8B/4B/1.7B) based on 1-bit weights, claiming 14x memory reduction, 8x speed improvement, and 5x energy savings compared to conventional 16-bit models, while achieving comparable benchmark performance.
Who Should Read
Developers who need to deploy LLMs on edge devices such as smartphones, robots, and IoT hardware, or AI infrastructure engineers looking to reduce server costs and energy consumption.
Core Mechanics
- PrismML has released three models in the Bonsai series using 1-bit weights (with a shared 16-bit scale factor per group of 128 weights, i.e. roughly 1.125 bits per weight), available in 8B, 4B, and 1.7B parameter sizes.
- Bonsai 8B requires only 1.15GB of memory. Compared to a typical 16-bit FP 8B model that uses around 16GB, it claims to be 14x smaller, 8x faster, and 5x more energy-efficient. It demonstrated performance comparable to existing 8B models on benchmarks including IFEval, GSM8K, HumanEval+, BFCL, MuSR, and MMLU-Redux.
- Bonsai 4B is 0.57GB and Bonsai 1.7B is 0.24GB, making them even more compact. The 1.7B model reportedly achieves 130 tokens/s on an iPhone 17 Pro Max, while the 4B model records 132 tokens/s on an M4 Pro.
- The models were developed based on research from Caltech and introduce a new metric called 'intelligence density' — defined as the negative log of the model's error rate divided by model size — an attempt to measure 'intelligence per bit' rather than raw parameter count.
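The "intelligence density" metric as described above can be sketched as a small calculation. The error rates below are hypothetical placeholders, and the log base and size units are assumptions, since the white paper's exact conventions are not given here:

```python
import math

def intelligence_density(error_rate: float, size_gb: float) -> float:
    """Intelligence density as described: -log(error rate) / model size.
    Log base (natural log here) and size unit (GB) are assumptions."""
    return -math.log(error_rate) / size_gb

# Hypothetical comparison: two 8B models with the same error rate but
# very different memory footprints (1-bit vs. FP16).
bonsai_8b = intelligence_density(error_rate=0.20, size_gb=1.15)
fp16_8b = intelligence_density(error_rate=0.20, size_gb=16.0)
print(f"Bonsai 8B: {bonsai_8b:.2f} per GB")
print(f"FP16 8B:   {fp16_8b:.2f} per GB")
```

With equal accuracy, the smaller model scores about 14x higher, which is exactly the trade-off the metric is designed to reward.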
- Model files are distributed in llama.cpp-compatible GGUF format. Community-reported benchmarks on an RTX 3090 show the 8B model using only 4GiB of VRAM, achieving approximately 190 tokens/s with 700-token input and approximately 135 tokens/s with 6400-token input.
- The community has raised suspicions that the model may be based on Qwen3 with custom quantization kernels applied. Comments pointed out that the white paper only compares against full-precision models, without comparison to other quantized models (e.g., INT4).
- The models are described as designed for robotics and real-time agent use cases. On-device execution is available via an iPhone app (Locally AI), and tool use integration with code editors like Cursor has been confirmed to work.
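Because the weights ship as llama.cpp-compatible GGUF, a local llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint. A minimal client sketch follows; the port assumes llama-server's default (8080), and the model name is a placeholder since a single-model server ignores it:

```python
import json
import urllib.request

def build_chat_payload(prompt: str, max_tokens: int = 256) -> dict:
    # llama-server speaks the OpenAI chat-completions schema; the "model"
    # field is effectively ignored when only one model is loaded.
    return {
        "model": "bonsai-8b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(base_url: str, prompt: str) -> str:
    """POST a chat request and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Requires a running llama-server, e.g.:
# print(chat("http://localhost:8080", "Estimate pi via Monte Carlo in Python."))
```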
Evidence
- One user tested tool use by connecting Bonsai 8B to Cursor: the Monte Carlo pi simulation logic was correct, but UI generation failed and stray symbols were left behind, requiring manual fixes. The general consensus was "impressive for its size, but not perfect."
- A user who ran a SQL debugging agent benchmark reported 8 correct answers out of 25 (17 errors). It slightly outperformed Qwen3.5-4B (7/25) and fell slightly short of Nanbeige4.1-3B (9/25), but completed the full test in 200 seconds, far faster than Qwen3.5 (976s) or Nanbeige (2000s+). Granite 7B 4bit matched the speed at 199 seconds but had much lower accuracy (4/25).
- A key debate was whether the model was trained from scratch for 1-bit weights (like Microsoft's BitNet) or post-training quantized from an existing float model; the latter would make it merely aggressive quantization. The white paper's lack of comparisons against other quantized models with the same memory footprint (e.g., quantized Qwen3) was flagged as suspicious.
- On a CPU-only 2018 laptop using a basic llama.cpp fork, throughput was only 0.6 tokens/s, but a user confirmed this was due to missing AVX2 optimization; after adding it manually, speed jumped to 12 tokens/s. The absence of AVX2 support in the official CPU kernel suggests incomplete optimization.
- A Harry Potter knowledge test produced confidently wrong answers, claiming "Sirius Black is James Potter's father" and "James Potter is Harry's uncle and Luna Lovegood's brother." This is a warning about hallucinations in tasks requiring factual accuracy.
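The SQL-agent numbers reported in the thread imply a stark speed/accuracy trade-off. A quick tabulation of those figures (taken straight from the comment, so treat them as anecdotal) makes the throughput gap explicit:

```python
# Community-reported SQL agent benchmark: (correct out of 25, wall time in seconds).
results = {
    "Bonsai 8B": (8, 200),
    "Qwen3.5-4B": (7, 976),
    "Nanbeige4.1-3B": (9, 2000),
    "Granite 7B 4bit": (4, 199),
}

def correct_per_minute(correct: int, seconds: float) -> float:
    """Correct answers delivered per minute of wall time."""
    return correct / (seconds / 60)

for name, (correct, secs) in results.items():
    rate = correct_per_minute(correct, secs)
    print(f"{name:16s} {correct}/25 in {secs:>4}s -> {rate:.2f} correct/min")
```

By this cut, Bonsai 8B delivers roughly 9x the correct answers per minute of Nanbeige4.1-3B despite slightly lower raw accuracy.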
How to Apply
- To add on-device AI features to a smartphone app, download Bonsai 1.7B (0.24GB) from the settings of the Locally AI app (iOS). There are real-world reports of it running at about half of reading speed even on an older iPhone SE2, so try it out and verify that the accuracy is within an acceptable range.
- If you run a llama.cpp-based local server, download Bonsai-8B.gguf in GGUF format and load it directly onto your existing server. On an RTX 3090 it uses only 4GiB of VRAM and can handle 5 concurrent requests with the --parallel 5 and --cont-batching options, letting you measure GPU cost savings immediately.
- To use a lightweight model for preprocessing steps (typo correction, input sanitization) in a RAG pipeline or agent, consider Bonsai 1.7B or 4B as a lightweight filter. One user reported good results correcting NYT article paragraphs with it.
- Before adopting this model as your primary inference model, run your own benchmarks on your specific tasks, especially multi-step reasoning and fact-verification workloads. The official benchmarks compare only against full-precision models and lack comparisons with INT4-quantized models of the same memory footprint, so you need to verify the actual advantage yourself.
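To check the --parallel 5 concurrency claim on your own hardware, a rough harness sketch: fire N requests concurrently against llama-server's OpenAI-compatible endpoint and compute aggregate tokens/s from the usage field in each response. The port assumes llama-server's default (8080); adjust for your setup:

```python
import concurrent.futures
import json
import time
import urllib.request

def completion_tokens(base_url: str, prompt: str) -> int:
    """Send one chat request; return the completion token count from its usage field."""
    payload = {"messages": [{"role": "user", "content": prompt}], "max_tokens": 128}
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["usage"]["completion_tokens"]

def aggregate_tps(token_counts, elapsed_s: float) -> float:
    """Aggregate generation throughput across concurrent requests."""
    return sum(token_counts) / elapsed_s

def run_harness(base_url: str, n: int = 5) -> float:
    prompts = [f"Summarize request {i} in one line." for i in range(n)]
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        counts = list(pool.map(lambda p: completion_tokens(base_url, p), prompts))
    return aggregate_tps(counts, time.monotonic() - start)

# Requires a running server, e.g.:
# print(f"~{run_harness('http://localhost:8080'):.0f} tokens/s aggregate")
```

Note that aggregate throughput under continuous batching should come in well above single-stream tokens/s, which is what the RTX 3090 numbers in the thread suggest.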
Code Example
# Example of running Bonsai 8B with llama.cpp server (shared by community)
./build/bin/llama-server \
-m ../Bonsai-8B.gguf \
-ngl 999 \
--flash-attn on \
--host 0.0.0.0 \
--port 80 \
--ctx-size 65500 \
--batch-size 512 \
--ubatch-size 512 \
--parallel 5 \
--cont-batching \
--threads 8 \
--threads-batch 8 \
--cache-type-k q4_0 \
--cache-type-v q4_0 \
--log-colors on
# Results: ~4GiB VRAM usage on RTX 3090
# 700-token input → ~190 tokens/s
# 6400-token input → ~135 tokens/s
Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to take performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from the ground up without frameworks and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The core idea is combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit to avoid fsync's metadata overhead.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4GB Gemini Nano model file without user consent, and it re-downloads even after deletion. Concerns have been raised about possible GDPR violations and the environmental cost of rolling this out across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.