I wrote a cron job that saves me ~2 hours of dead time on Claude Code every day
TL;DR Highlight
This method leverages the Claude Code Max plan's 5-hour usage window, which starts with the day's first message, by automatically sending a 'hi' message every morning to anchor the window to your work hours.
Who Should Read
Developers who use the Claude Code Max plan heavily every day, especially those who have hit the usage limit mid-morning and had to wait for the window to reset.
Core Mechanics
- The usage window for the Claude Code Max plan lasts 5 hours and starts at the time of the day's first message, rounded down to the nearest hour. For example, if the first message is sent at 8:30 AM, the window runs from 8 AM to 1 PM.
- Exploiting this, sending a short message ('hi') at 6 AM, before work starts, anchors the window to 6 AM-11 AM, so a fresh window opens at 11 AM.
- A GitHub Actions cron job can send the warm-up message automatically every morning. The related repository is available at https://github.com/vdsmos/claude-warmup.
- A simpler method is to use the native scheduling feature on the Claude Code web interface (https://claude.ai/code/scheduled), which achieves the same effect without separate infrastructure. The author has confirmed that it works correctly.
- This concept can be applied not only to the morning window but also to other times of the day to optimize the timing of multiple windows to match your desired workflow.
- This method does not increase usage or bypass any limits; it only shifts the start time of a fixed-length window. No extra usage is consumed.
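The window math above can be sketched as a short illustration (a minimal sketch based only on the behavior described in this post: the window start is the hour of the first message, rounded down, and it lasts 5 hours):

```python
from datetime import datetime, timedelta

WINDOW_HOURS = 5  # length of a Claude Code Max usage window

def usage_window(first_message: datetime) -> tuple[datetime, datetime]:
    """Return the (start, end) of the usage window anchored by the
    day's first message, with the start rounded down to the hour."""
    start = first_message.replace(minute=0, second=0, microsecond=0)
    return start, start + timedelta(hours=WINDOW_HOURS)

# First message at 8:30 AM -> window runs 8 AM to 1 PM
start, end = usage_window(datetime(2024, 5, 1, 8, 30))
print(start.hour, end.hour)  # 8 13

# A 6 AM warm-up anchors the window to 6 AM-11 AM instead
start, end = usage_window(datetime(2024, 5, 1, 6, 0))
print(start.hour, end.hour)  # 6 11
```

The point of the warm-up is simply to make `first_message` a time of your choosing rather than whenever you happen to sit down.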
Evidence
- On concerns that this might violate the ToS, most commenters argued it is fine because it only shifts the window start time without increasing usage. Some went further and criticized the design itself, saying the limit should have been a rolling reset in the first place.
- The tip to use the native scheduling feature on the Claude Code web interface (https://claude.ai/code/scheduled) instead of GitHub Actions was first suggested in the comments; the author tested it, confirmed it 'works perfectly,' and reflected this in an EDIT to the post.
- Users shared that they had been manually typing 'hi' every morning before showering, for months; many heavy users were already following the same pattern.
- Real-world experience was shared that launchd is more stable than cron on macOS; some users switched to launchd after running into cron-related issues.
- Additional Claude Code workflow tips were shared in the comments: keeping project conventions in CLAUDE.md, focusing on one task per session, and asking for a file summary before making code changes were all reported to be effective.
- The agent stopping at permission prompts while a session is monitored remotely was mentioned as an unresolved pain point that this method does not address; the agent sitting blocked while you are away remains inconvenient.
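The launchd alternative mentioned above can be sketched as a LaunchAgent plist (a hypothetical example: the label, the `/usr/local/bin/claude` path, and the `-p` print-mode invocation are assumptions; adjust them to your installation):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Hypothetical label; save as ~/Library/LaunchAgents/com.example.claude-warmup.plist -->
  <key>Label</key>
  <string>com.example.claude-warmup</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/claude</string>
    <string>-p</string>
    <string>hi</string>
  </array>
  <!-- Fire every day at 6:00 AM local time -->
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>6</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>
</dict>
</plist>
```

Load it with `launchctl load ~/Library/LaunchAgents/com.example.claude-warmup.plist`. Unlike cron, launchd runs a missed `StartCalendarInterval` job the next time the Mac wakes from sleep, which is likely why commenters found it more reliable for a laptop that is asleep at 6 AM.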
How to Apply
- On the Claude Code web interface (https://claude.ai/code/scheduled), schedule a 'hi' message to be sent every morning 2 hours before work starts (e.g., 6 AM) to anchor the window to your desired time without separate infrastructure.
- If you prefer GitHub Actions or launchd on macOS, refer to the https://github.com/vdsmos/claude-warmup repository to set up a cron job. Based on community experience, keep in mind that launchd is more reliable than cron on macOS.
- If your focused work hours fall in the afternoon, shift the start of the second window the same way, so the whole day's Claude Code windows line up with your work schedule.
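For a plain cron setup (e.g. an always-on Linux box), the warm-up can be sketched as a one-line crontab entry (a minimal sketch: the `claude -p "hi"` print-mode invocation and the binary path are assumptions; adjust them to your installation):

```shell
# crontab -e
# Send a warm-up prompt at 6:00 AM on weekdays to anchor the 6 AM-11 AM window
0 6 * * 1-5  /usr/local/bin/claude -p "hi" >> "$HOME/claude-warmup.log" 2>&1
```

Logging stdout and stderr to a file makes it easy to verify the warm-up actually fired on mornings when the window seems off.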
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step through CPU, SIMD, AMX, and GPU (Metal) to raise performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from the ground up, without frameworks, and feel out the performance limits of Apple Silicon.
Removing fsync from our local storage engine
FractalBits shared the design of an SSD-only KV storage engine implemented without fsync, achieving roughly 65% higher write performance under identical conditions. The key is a structure that combines preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit to avoid fsync's metadata overhead.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and to re-download it even after deletion. Concerns have been raised about possible GDPR violations and the environmental cost of applying this across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.