CERN uses ultra-compact AI models on FPGAs for real-time LHC data filtering
TL;DR Highlight
CERN takes a 'hardware-first' approach to inference at the LHC, compiling PyTorch/TensorFlow models directly into FPGA logic to filter hundreds of terabytes of collision data per second at nanosecond latency, a radical departure from conventional GPU/TPU-based AI.
Who Should Read
Embedded/hardware developers deploying (or considering deploying) AI models to edge devices or FPGAs, and ML engineers designing systems that require extremely low-latency (nanosecond-to-microsecond) inference.
Core Mechanics
- The LHC generates approximately 40,000 exabytes of raw data per year (roughly 1/4 of the entire current internet), peaking at hundreds of terabytes per second. Storing or processing all of this is physically impossible, so only about 0.02% of all collision events are preserved.
- The Level-1 Trigger, the first filtering stage, consists of approximately 1,000 FPGAs and must evaluate incoming data within 50 nanoseconds. A specialized algorithm called AXOL1TL operates in real time here, instantly determining whether an event is worth preserving.
- CERN uses an open-source tool called HLS4ML to convert ML models written in PyTorch or TensorFlow into synthesizable C++ code, which is then deployed directly onto FPGAs, SoCs, and ASICs. This achieves extreme speed with far less power and chip area than GPU/TPU-based approaches (a conversion sketch appears under How to Apply below).
- AXOL1TL was trained as a VAE (variational autoencoder, an architecture that detects anomalies by compressing and reconstructing its input). At inference time the decoder is removed and only the μ² term of the KL divergence serves as the anomaly score, completing within 2 clock cycles at a 40 MHz clock (see the first sketch after this list).
- A significant portion of chip resources is allocated not to neural network layers but to precomputed lookup tables. By storing results for common input patterns in advance, output is produced almost instantaneously without floating-point operations; this is the core design choice that enables nanosecond-level latency (see the lookup-table sketch after this list).
- Starting with v5, the model was improved by adding a VICReg block and a reconstruction loss, and was deployed to FPGA via the hls4ml-da4ml flow. Another model, CICADA, was trained as a VAE and then refined via knowledge distillation with a supervised loss on the anomaly score.
- The second filtering stage, the High-Level Trigger, runs on a large computing farm of 25,600 CPUs and 400 GPUs. This stage only activates after the Level-1 Trigger has dramatically reduced the data volume.
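A minimal PyTorch sketch of the decoder-free scoring described above. This is not the actual AXOL1TL code: the layer widths, latent dimension, input size, and all names are illustrative assumptions. The point is that once training is done, the score is just the μ² term of the KL divergence, D_KL = ½ Σ(μ² + σ² - log σ² - 1), so inference needs no exp, no log, and no decoder.

```python
import torch
import torch.nn as nn

class EncoderOnlyScorer(nn.Module):
    """VAE encoder kept for inference; the decoder and log-variance head
    used during training are dropped. All sizes here are hypothetical."""

    def __init__(self, n_inputs: int = 57, latent_dim: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        self.mu = nn.Linear(16, latent_dim)  # mean head of the VAE

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu = self.mu(self.backbone(x))
        # Anomaly score = the mu^2 term of the KL divergence; large values
        # mean the event sits far from the learned background distribution.
        return (mu ** 2).sum(dim=-1)

scorer = EncoderOnlyScorer()
scores = scorer(torch.randn(4, 57))  # one score per event; threshold to keep/drop
```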
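A lookup-table sketch in the same spirit (illustrative, not CERN's implementation): when an input is quantized, every possible code can be enumerated offline, so a nonlinearity collapses to a single table read with no floating-point math at inference time. The 8-bit width and the dequantization scale below are arbitrary choices for the example.

```python
import numpy as np

BITS = 8
codes = np.arange(2 ** BITS, dtype=np.uint8)     # every possible 8-bit input code
x = (codes.astype(np.float32) - 128.0) / 32.0    # dequantize (scale chosen arbitrarily)

# Precompute the activation once, offline, for all 256 codes.
SIGMOID_LUT = np.round(255.0 / (1.0 + np.exp(-x))).astype(np.uint8)

def sigmoid_q8(q: np.ndarray) -> np.ndarray:
    """Integer-in, integer-out sigmoid: one table read per element."""
    return SIGMOID_LUT[q]

print(sigmoid_q8(np.array([0, 128, 255], dtype=np.uint8)))
```

On an FPGA the same idea maps onto block RAM or distributed LUTs, which is why it is cheap in both latency and power.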
Evidence
- "One of the authors of the AXOL1TL model commented directly to correct errors in the article, clarifying that the phrase 'burned into silicon' is inaccurate — the model actually runs on FPGAs. While the weights can be considered 'burned in' in the sense that they are hardwired (implemented via shift-add) on the FPGA, the FPGA itself is reprogrammable. Separate projects targeting actual silicon (ASICs), such as SmartPixel and HG-Cal readout, also exist. The same author shared detailed technical specifics: versions prior to v4 were pure VAE-based small MLPs, while v5 introduced VICREG blocks and reconstruction loss. FPGA deployment was implemented using QAT (Quantization-Aware Training) and distributed arithmetic, operating within 2 clock cycles (= 50 nanoseconds) at a 40MHz clock. The community also criticized the article for misusing the term 'LLM' — the system is actually a small VAE-based neural network, not an LLM like ChatGPT. Some argued that even the word 'AI' was an overstatement, suggesting 'a chip with hardcoded logic derived from machine learning' would be more accurate. An interesting discussion emerged about using analog circuits for neural networks, questioning whether matrix-multiplication-based neural networks could yield instantaneous output if implemented in analog. Separately, it was noted that modern CPU branch predictors already use perceptrons, meaning most modern computers already have neural networks embedded in hardware. As a real-world application example, a coffee machine equipped with a tiny CNN model was shared — handling three tasks locally via an onboard camera: cup type classification, cup position image segmentation, and volume regression for coffee quantity adjustment. While not as extreme as CERN, it was cited as another practical example of ultra-small AI models running at the edge."
How to Apply
- "If you need to deploy a model trained in PyTorch or TensorFlow onto an FPGA, HLS4ML (https://github.com/fastmachinelearning/hls4ml) can convert your model into synthesizable C++ HLS code. It is the open-source toolchain actually used by CERN and is also applicable to industrial edge AI requiring sub-microsecond latency. When deploying an anomaly detection system in an extremely low-latency environment, you can follow the AXOL1TL approach: train a VAE, then at inference time remove the decoder and use only the mu² term of the KL divergence as the anomaly score. This reduces model size and simplifies computation, making hardware implementation more feasible. If you need to reduce bit-width while maintaining model accuracy for FPGA deployment, apply QAT (Quantization-Aware Training) during the training phase. CERN combined high-granularity quantization with distributed arithmetic to achieve sub-1-microsecond latency; the specific methodology is detailed in the related paper (https://arxiv.org/abs/2405.00645). By replacing a significant portion of neural network operations with lookup tables, you can achieve near-instantaneous responses to recurring input patterns without floating-point arithmetic. This approach is useful not only for FPGAs but also for extremely resource-constrained environments such as embedded MCUs."
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to raise performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from scratch, without frameworks, and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shared the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The key is a structure that avoids fsync's metadata overhead by combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and the file is re-downloaded even after deletion. Concerns have been raised about a potential GDPR violation and the environmental cost of applying this across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.
Related Resources
- Original Article: CERN Uses Tiny AI Models Burned into Silicon for Real-Time LHC Data Filtering
- AXOL1TL Paper (arXiv)
- hls4ml-da4ml Flow Paper 1 (arXiv:2512.01463)
- hls4ml-da4ml Flow Paper 2 (arXiv:2507.04535)
- High-Granularity Quantization and FPGA Deployment Paper (arXiv:2405.00645)
- CERN LHC Big Data and AI Presentation (YouTube, Thea Aarrestad)
- Nanosecond AI at the LHC Presentation (YouTube, Thea Aarrestad)
- AXOL1TL Detailed Slides (Indico CERN)