Show HN: Gemma Gem – AI model embedded in a browser – no API keys, no cloud
TL;DR Highlight
A Chrome extension that runs Google's Gemma 3n model entirely locally in the browser using WebGPU, letting it read web pages and perform DOM actions such as clicks and text input without requiring an API key or server.
Who Should Read
Frontend/full-stack developers who want to incorporate LLMs into projects where privacy is important, or developers who want to experiment with browser automation without external APIs.
Core Mechanics
- Google's Gemma 3n model runs inside a browser extension using WebGPU (a web standard for direct GPU access). Data never leaves the device, and no API key is required.
- Model size is approximately 500MB for E2B (about 2B parameters) and ~1.5GB for E4B (about 4B parameters); the model is downloaded and cached once on first run. The only runtime requirement is a Chrome build with WebGPU support.
- The architecture is separated into three layers: Offscreen Document, Service Worker, and Content Script. The Offscreen Document loads the model and runs the agent loop using the @huggingface/transformers library, the Service Worker routes messages, and the Content Script manipulates the actual DOM.
- Built-in tools available to the agent include read_page_content (read page text/HTML), take_screenshot (capture page screenshot as PNG), click_element (click by CSS selector), type_text (enter text into input field), and scroll_page (scroll).
- The model supports a 'thinking mode' that displays the inference process (chain-of-thought, i.e., 'how it thought') directly in the UI, making it transparent how the model interprets the page.
- Installation involves pnpm install → pnpm build, followed by loading the .output/chrome-mv3-dev/ folder in developer mode at chrome://extensions. There is no separate server or backend setup.
- JavaScript execution permission (run_javascript) is also handled at the Service Worker level, allowing the agent to directly execute scripts on the page.
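The layer split above can be sketched as a small dispatch table in the Service Worker: DOM tools are forwarded to the Content Script, while screenshot and script execution stay in the Service Worker. The tool names come from this section's list, but the function name and routing logic here are assumptions, not the project's actual identifiers:

```typescript
// Hypothetical routing sketch: which layer executes each built-in tool.
// Tool names follow the list above; the dispatch itself is an assumption.
type Layer = "service_worker" | "content_script";

const TOOL_LAYER: Record<string, Layer> = {
  read_page_content: "content_script",
  click_element: "content_script",
  type_text: "content_script",
  scroll_page: "content_script",
  take_screenshot: "service_worker", // tab capture APIs live in the worker
  run_javascript: "service_worker",  // script injection APIs live in the worker
};

// Called when the Offscreen Document's agent loop emits a tool call;
// returns the layer that should execute it.
function routeToolCall(tool: string): Layer {
  const layer = TOOL_LAYER[tool];
  if (!layer) throw new Error(`Unknown tool: ${tool}`);
  return layer;
}
```

The point of centralizing this in the Service Worker is that it is the only layer with a message channel to both the Offscreen Document and every tab's Content Script.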
Evidence
- Chrome's Prompt API (developer.chrome.com/docs/ai/prompt-api), currently offered as an Origin Trial, was noted to use a similar approach. One commenter measured the Prompt API model folder at 4,072MB (v3Nano model, GPU backend) and felt that while this could eventually become a native browser feature, the model is currently too large for the browser itself.
- There was concern that granting full JS execution rights to a 2B model on a live page is risky from a security perspective. It was also suggested that running a local background daemon as a server and making the extension a thin client would be more stable, since the agent's state is lost if Chrome crashes or the tab is discarded.
- One opinion held that 'thinking mode' (exposing the inference process) is the extension's killer feature: not just a curious demonstration, but genuinely useful for understanding how the model interprets the page.
- There was a suggestion to evolve the project into a local LLM plugin SDK for apps handling sensitive data. Requiring users to set up a local LLM environment themselves has been a high barrier to entry, and this browser-integrated approach could remove it.
- Asked whether browsers could natively embed local models and let developers query them via an API, commenters pointed out that Chrome's Prompt API Origin Trial is heading in that direction.
How to Apply
- If you need to analyze web pages containing personal information (internal intranet, medical records, etc.) with AI, but it is difficult to send data to an external API, you can use this extension as a base to add LLM functionality without the data leaving the device.
- If you want to prototype a browser-based automation agent (form auto-filling, page summarization, button-click workflows) without relying on external services, you can run pnpm install → pnpm build, load the result into Chrome developer mode, and experiment immediately.
- If you want to offer local LLM functionality as an option in a SaaS app that handles sensitive data, you can refer to the Offscreen Document + @huggingface/transformers + WebGPU combination of this project to design a local inference module in the form of a Chrome extension.
- If you need a UX that shows the agent's reasoning process to the user, you can refer to the implementation of 'thinking mode' (chain-of-thought exposure) in the code of this project and apply it.
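For the last point, a minimal sketch of a 'thinking mode' parser, assuming the model wraps its reasoning in `<think> ... </think>` delimiters before the final answer (the actual extension may use different markers; this shape is an assumption):

```typescript
// Hypothetical chain-of-thought splitter: separates the reasoning trace
// (shown in a collapsible "thinking" panel) from the final reply.
interface ModelTurn {
  thinking: string | null; // reasoning trace to surface in the UI
  answer: string;          // text or tool call shown as the reply
}

function splitThinking(raw: string): ModelTurn {
  const m = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!m) return { thinking: null, answer: raw.trim() };
  return {
    thinking: m[1].trim(),
    answer: raw.replace(m[0], "").trim(),
  };
}
```

Rendering the `thinking` field separately from `answer` is what makes the model's page interpretation inspectable without cluttering the chat transcript.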
Code Example
# Installation and build
pnpm install
pnpm build
# Load in Chrome
# Go to chrome://extensions → Turn on Developer Mode
# 'Load unpacked' → Select .output/chrome-mv3-dev/
# Architecture summary
# Offscreen Document: @huggingface/transformers + WebGPU to run Gemma 3n model + agent loop
# Service Worker: Message routing, take_screenshot, run_javascript processing
# Content Script: Chat UI (Shadow DOM) injection, DOM tool execution
# List of available tools
# read_page_content - Read page text/HTML (Content Script)
# take_screenshot - Capture page PNG (Service Worker)
# click_element - Click by CSS selector (Content Script)
# type_text - Enter text into input field (Content Script)
# scroll_page - Page scroll (Content Script)
# run_javascript - Execute JS code (Service Worker)
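As an illustration of how a Content Script tool such as click_element might be handled, here is a sketch using minimal structural types so it can run outside a browser. The message shape and return strings are assumptions for illustration, not the project's actual protocol:

```typescript
// Minimal structural types standing in for the real DOM, so the handler
// can be exercised without a browser.
interface ElementLike {
  click(): void;
}
interface DocumentLike {
  querySelector(selector: string): ElementLike | null;
}

// Hypothetical click_element handler: resolve the CSS selector sent by
// the agent and click the first match, reporting the outcome as text
// the model can read on its next turn.
function clickElement(doc: DocumentLike, selector: string): string {
  const el = doc.querySelector(selector);
  if (!el) return `error: no element matches "${selector}"`;
  el.click();
  return "ok: clicked";
}
```

Returning an error string (rather than throwing) matters for agent tools: the model can observe the failure and try a different selector on its next step.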
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and can even apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings; an entertaining case that pushes the idea of 'Markdown is the code and the LLM is the processor' all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (Claude Code, etc.) and supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents drive CLIs ever more frequently, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in separate roles, letting you assemble a multi-agent pipeline quickly and without configuration, Vite-style.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox in which AI agents can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.