Show HN: I built a social media management tool in 3 weeks with Claude and Codex
TL;DR Highlight
**A solo developer built an open-source social media management platform, an alternative to Buffer/Sendible, in 3 weeks by leaning on AI coding tools (Claude Opus and OpenAI Codex).**
Who Should Read
Developers looking to apply AI coding tools to real-world production projects, or those seeking a self-hosted social media management tool instead of SaaS options like Buffer or Sendible.
Core Mechanics
- The result is a full-stack, SaaS-grade platform built solo in 3 weeks: integrations with 12 social platforms (Facebook, Instagram, LinkedIn, TikTok, YouTube, Pinterest, Threads, Bluesky, Google Business Profile, Mastodon, and others), multi-tenant authentication, encrypted credential storage, background job processing, approval workflows, and a unified inbox.
- Detailed spec documents, architecture documents, and style guides were created *before* writing any code and were all made public. The developer emphasized that 'this pre-planning phase was everything; without it, the AI agents would be a mess.'
- Specs were split into tasks that could run in parallel and tasks that required sequential processing, and these formed the multi-agent workflows. This categorization was key to making the AI coding workflow function.
- Claude Opus 4.6 (via Claude Code) handled large contexts and architectural decisions spanning multiple files, and particularly excelled at cascading changes across models, views, and templates when the permission system was restructured.
- OpenAI Codex 5.3 was used to review the implemented code, identify security issues, and fix bugs. Token consumption was split roughly evenly between the two models.
- AI excelled in areas like standard CRUD operations (Django models/views/serializers), provider modules for well-documented APIs (Facebook, LinkedIn), Tailwind layouts and HTMX interactions, test generation, and large-scale refactoring across files (a representative CRUD slice is sketched after this list).
- AI faltered in areas with poor documentation or unusual upload flows, such as TikTok's Content Posting API, where it repeatedly generated incorrect code with confidence. More dangerous was a multi-tenant permission logic bug that exposed data across workspaces while still passing the tests.
- OAuth edge cases (token refresh, permission revocation, platform-specific error codes) and background task orchestration (retry logic, rate-limit backoff, error handling) all had to be implemented by hand. AI handles the happy path well but is weak at defensive coding (see the retry sketch after this list).
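
To make the "standard CRUD" sweet spot concrete, below is a minimal Django REST Framework sketch of the kind of model/serializer/viewset slice described above. All names (`ScheduledPost`, the `workspace` relation) are illustrative assumptions, not brightbean-studio's actual code:

```python
# A minimal sketch of the model/serializer/viewset boilerplate the post says
# AI generated reliably. Names are hypothetical, not brightbean-studio's code.
from django.db import models
from rest_framework import serializers, viewsets


class ScheduledPost(models.Model):
    # Assumed tenancy model: every post belongs to exactly one workspace.
    workspace = models.ForeignKey("workspaces.Workspace", on_delete=models.CASCADE)
    body = models.TextField()
    scheduled_at = models.DateTimeField()
    published = models.BooleanField(default=False)


class ScheduledPostSerializer(serializers.ModelSerializer):
    class Meta:
        model = ScheduledPost
        fields = ["id", "body", "scheduled_at", "published"]


class ScheduledPostViewSet(viewsets.ModelViewSet):
    serializer_class = ScheduledPostSerializer

    def get_queryset(self):
        # The tenancy filter lives in one easily-missed override; forgetting
        # it is exactly how the cross-workspace exposure bug above can arise.
        return ScheduledPost.objects.filter(workspace=self.request.user.workspace)
```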
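
The hand-written defensive code looks roughly like the sketch below: a retry wrapper with exponential backoff that honors rate-limit responses. The 429/`Retry-After` handling and thresholds are generic assumptions about platform APIs, not code taken from the repository:

```python
# Generic retry/backoff sketch of the kind the post says was hand-written;
# assumes the platform signals rate limits via 429 and optional Retry-After.
import random
import time

import requests


def call_with_backoff(make_request, max_attempts=5, base_delay=1.0):
    """Call make_request() -> Response, retrying transient failures."""
    for attempt in range(max_attempts):
        try:
            response = make_request()
        except requests.ConnectionError:
            response = None  # transient network failure: fall through, retry
        else:
            if response.status_code == 429:
                retry_after = response.headers.get("Retry-After")
                if retry_after is not None:
                    # Honor the platform's explicit backoff hint.
                    time.sleep(float(retry_after))
                    continue
            elif response.status_code < 500:
                # Success, or a client error that retrying will not fix.
                return response
        # Exponential backoff with jitter for 5xx, bare 429s, network errors.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```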
Evidence
- "Some responses questioned the credibility of the '3 weeks of vibe coding' claim, with one commenter noting that similar open-source tools had poor quality and self-hosting experiences, leading them to prefer paid premium products. The claim signaled a lack of battle testing and potential maintenance issues."
How to Apply
- When building production-level projects with AI agents, create detailed spec documents and architecture documents *before* writing code, and separate tasks into those executable in parallel and those requiring sequential processing. Benchmark your spec writing against the specs published in the brightbean-studio repository’s development_specs folder.
- When configuring multi-agent workflows, use Claude Opus (or a high-performance model) for initial architecture design and large-scale refactoring, and a separate model (Codex, etc.) for reviewing implemented code and identifying security issues. This can catch more bugs and security vulnerabilities than using a single model.
- When implementing multi-tenant environments, do not test AI-generated permission logic only in a single-tenant setting. AI can produce cross-tenant data exposure bugs that still pass unit tests, so write separate integration tests that explicitly cover multi-tenant scenarios (see the test sketch after this list).
- Agencies or small teams currently paying $100-$300/month for SaaS products like Buffer, Sendible, or SocialPilot can deploy brightbean-studio to their own VPS with Docker Compose, or use the Render/Railway one-click deployments, and get the same functionality without the subscription fee. However, be sure to review the AGPL-3.0 license conditions first (modifications must be published as open source).
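
As a concrete starting point for the multi-tenant testing advice above, here is a minimal cross-tenant isolation test using Django's test client. The apps, models, and URLs are hypothetical stand-ins, not brightbean-studio's actual schema:

```python
# Hypothetical cross-tenant isolation test; app/model/URL names are
# illustrative stand-ins, not brightbean-studio's actual schema.
from django.contrib.auth import get_user_model
from django.test import TestCase

from posts.models import ScheduledPost
from workspaces.models import Workspace


class CrossTenantIsolationTest(TestCase):
    def setUp(self):
        User = get_user_model()
        self.ws_a = Workspace.objects.create(name="tenant-a")
        self.ws_b = Workspace.objects.create(name="tenant-b")
        # Assumes a custom user model carrying a workspace field.
        self.user_a = User.objects.create_user(
            "alice", password="x", workspace=self.ws_a
        )
        self.post_b = ScheduledPost.objects.create(
            workspace=self.ws_b,
            body="tenant B draft",
            scheduled_at="2030-01-01T00:00:00+00:00",
        )

    def test_list_excludes_other_tenants_posts(self):
        # A list endpoint must only return the caller's own workspace data.
        self.client.force_login(self.user_a)
        response = self.client.get("/api/posts/")
        ids = [item["id"] for item in response.json()]
        self.assertNotIn(self.post_b.id, ids)

    def test_direct_object_access_is_denied(self):
        # Fetching another tenant's object by ID must 404, not leak it.
        self.client.force_login(self.user_a)
        response = self.client.get(f"/api/posts/{self.post_b.id}/")
        self.assertEqual(response.status_code, 404)
```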
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective, and which can even apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually answered pings; an amusing case of pushing the "Markdown is the code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version control tool that automatically tracks every tool call made by AI coding agents (Claude Code and the like) and can even blame which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents rely on CLIs more and more, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents so they can split roles and collaborate; like Vite, it lets you assemble a multi-agent pipeline quickly with no configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes to real production data can be rolled back, unifying GitHub/S3/Google Drive into a single versioned filesystem.