Using LLMs at Oxide
TL;DR Highlight
Oxide, a systems software company, published its internal LLM usage principles — focused on accountability, rigor, empathy, and teamwork rather than 'use it fast and often.'
Who Should Read
Engineering managers and team leads thinking about how to set healthy norms around LLM use in their org, and individual devs wanting a framework for responsible AI use.
Core Mechanics
- Oxide's guidelines aren't about maximizing LLM usage — they center on values like accountability (own the output), rigor (verify before shipping), empathy (consider the reader), and teamwork (don't let AI erode collaboration).
- The core principle: LLM output is your responsibility once you use it. 'The AI wrote it' is not an acceptable excuse for incorrect, sloppy, or misleading content.
- They explicitly warn against using LLMs to pad or obscure — generated content should be as tight and precise as anything you'd write manually.
- On teamwork: LLMs can subtly reduce knowledge-sharing and mentorship if people stop asking colleagues questions and start asking models instead. The guidelines encourage preserving human interaction.
- Rigor includes not just fact-checking but also stylistic review — LLM output tends toward certain patterns (verbose intros, hedge words, passive voice) that need to be actively edited out.
- The document positions LLMs as tools that amplify your existing quality standards, not tools that establish new (lower) ones.
Evidence
- HN discussion was unusually positive — many commenters praised this as a rare example of an org thinking carefully about LLM use rather than just adopting it uncritically.
- Several engineers shared similar internal guidelines they'd written, suggesting this is a widespread but rarely published concern.
- A few commenters noted the irony that the guidelines were apparently drafted collaboratively by humans, not generated by LLMs — which itself demonstrates the values they're promoting.
- Some pushback: critics argued that overly restrictive guidelines risk making a company less competitive as AI-augmented peers become more productive.
How to Apply
- Use this as a template to draft your own team's LLM usage guidelines — adapt the values section to match your org's existing engineering culture.
- Add a step in your code review process where authors flag AI-generated sections, enabling reviewers to apply extra scrutiny.
- For technical writing: run a dedicated pass to strip LLM-isms (hedging phrases, unnecessary preamble, passive constructions) before publishing.
- Explicitly discuss LLM use in onboarding — set expectations early rather than letting norms drift organically.
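The LLM-ism-stripping pass above can be partially automated. Below is a minimal sketch of a phrase-flagging helper; the `LLM_ISMS` phrase list is a hypothetical starting point (not from Oxide's guidelines) and should be tuned to your own style guide:

```python
import re

# Hypothetical starter list of LLM-isms; extend to match your style guide.
LLM_ISMS = [
    r"\bit'?s worth noting that\b",
    r"\bin today'?s fast-paced world\b",
    r"\bdelve into\b",
    r"\bat the end of the day\b",
]

def flag_llm_isms(text: str) -> list[tuple[int, str]]:
    """Return (line_number, matched_phrase) pairs for each flagged phrase."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in LLM_ISMS:
            match = re.search(pattern, line, flags=re.IGNORECASE)
            if match:
                hits.append((lineno, match.group(0)))
    return hits
```

A check like this only catches surface patterns — it can gate a pre-publish script or CI step, but the stylistic review itself (verbose intros, passive voice, generic structure) still needs a human editor.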
Terminology
LLM-isms: Characteristic patterns in LLM-generated text: verbose introductions, excessive hedging ('it's worth noting that...'), passive voice overuse, and generic structure.
Accountability in AI use: The principle that a person who uses AI-generated content owns full responsibility for its accuracy, quality, and appropriateness — regardless of the AI's role in producing it.