# Agent Development Guide

## Development Environment
This project uses:
- mise (mise.jdx.dev) - tool version manager and task runner
- hk (hk.jdx.dev) - git hook manager
- uv - fast Python package installer
- ruff - linter and formatter
- pytest - test runner
## Setup

```sh
# Install dependencies
mise run install

# Or equivalently:
uv sync
```
## Available Commands

```sh
mise run test           # Run tests
mise run test-v         # Run tests verbose
mise run test-cov       # Run tests with coverage report
mise run test-browser   # Run e2e browser tests (requires playwright)
mise run lint           # Run ruff linter
mise run lint-fix       # Run ruff with auto-fix
mise run format         # Run ruff formatter
mise run ci             # Full CI pipeline (sync + test + coverage)
```
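Tasks like these are typically declared in the repository's `mise` configuration. The sketch below is a hypothetical reconstruction (the actual file in this repo may name tasks and tools differently); it shows the general shape of task declarations in a `mise.toml`:

```toml
# Hypothetical sketch, not this repo's actual config.
[tools]
uv = "latest"

[tasks.install]
run = "uv sync"

[tasks.test]
run = "uv run pytest"

[tasks.lint]
run = "uv run ruff check ."

[tasks.ci]
# depends runs the named tasks before this one
depends = ["install", "test-cov"]
```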
## Runtime Commands

```sh
mise run run            # Run mainline (terminal)
mise run run-websocket  # Run with WebSocket display
mise run run-both       # Run with both terminal and WebSocket
mise run run-client     # Run both + open browser
```
## Git Hooks

At the start of every agent session, verify hooks are installed:

```sh
ls -la .git/hooks/pre-commit
```

If hooks are not installed, install them with:

```sh
hk init --mise
mise run pre-commit
```

The project uses hk, configured in `hk.pkl`:

- pre-commit: runs `ruff-format` and `ruff` (with auto-fix)
- pre-push: runs `ruff check`
## Workflow Rules

### Before Committing

- Always run the test suite - never commit code that fails tests: `mise run test`
- Always run the linter: `mise run lint`
- Fix any lint errors before committing (or let the pre-commit hook handle it).
- Review your changes using `git diff` to understand what will be committed.
### On Failing Tests

When tests fail, determine whether it's an out-of-date test or a correctly failing test:

- **Out-of-date test**: the test was written for old behavior that has legitimately changed. Update the test to match the new expected behavior.
- **Correctly failing test**: the test correctly identifies a broken contract. Fix the implementation, not the test.

Never modify a test to make it pass without understanding why it failed.
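The distinction can be made concrete with a small hypothetical example (the function and test names below are invented for illustration, not from this codebase). Suppose `format_title()` legitimately changed from upper-case to title-case output:

```python
# Hypothetical example: names are invented, not from this codebase.

def format_title(s: str) -> str:
    # New, intended behavior: title-case instead of the old upper-case.
    return s.title()

# Out-of-date test: it encoded the old upper-case behavior. The right fix
# is to update the expectation to the new contract.
def test_format_title():
    assert format_title("hello world") == "Hello World"  # was "HELLO WORLD"

# Correctly failing test: if this assertion ever failed, it would expose a
# real bug (e.g. the function dropping words), so the fix would belong in
# the implementation, not the test.
def test_format_title_keeps_word_count():
    assert len(format_title("a b").split()) == 2
```

In both cases the deciding question is the same: what contract is the test supposed to protect, and did that contract actually change?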
### Code Review

Before committing significant changes:

- Run `git diff` to review all changes
- Ensure new code follows existing patterns in the codebase
- Check that type hints are added for new functions
- Verify that tests exist for new functionality
## Testing

Tests live in `tests/` and follow the pattern `test_*.py`.

Run all tests:

```sh
mise run test
```

Run with coverage:

```sh
mise run test-cov
```

The project uses pytest with strict marker enforcement. Test configuration is in `pyproject.toml` under `[tool.pytest.ini_options]`.
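For reference, strict marker enforcement in pytest comes from the `--strict-markers` flag, which makes any unregistered marker an error. A configuration along these lines would produce the behavior described above (this is a hypothetical sketch; check `pyproject.toml` for the project's actual values):

```toml
# Hypothetical sketch; the real [tool.pytest.ini_options] may differ.
[tool.pytest.ini_options]
addopts = "--strict-markers"
testpaths = ["tests"]
markers = [
    "browser: e2e tests that require playwright",
]
```

With `--strict-markers`, a typo like `@pytest.mark.browsre` fails collection instead of silently creating a new marker.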
## Architecture Notes

- `ntfy.py` and `mic.py` are standalone modules with zero internal dependencies
- `eventbus.py` provides thread-safe event publishing for decoupled communication
- `controller.py` coordinates ntfy/mic monitoring
- The render pipeline: fetch → render → effects → scroll → terminal output
- Display abstraction (`engine/display.py`): swap display backends via the `Display` protocol
  - `TerminalDisplay` - ANSI terminal output
  - `WebSocketDisplay` - broadcasts to web clients via WebSocket
  - `MultiDisplay` - forwards to multiple displays simultaneously
- WebSocket display (`engine/websocket_display.py`): real-time frame broadcasting to web browsers
  - WebSocket server on port 8765
  - HTTP server on port 8766 (serves HTML client)
  - Client at `client/index.html` with ANSI color parsing and fullscreen support
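The display-abstraction pattern described above can be sketched as follows. This is a minimal illustration of the `Display` protocol idea, not the actual code in `engine/display.py`; the real classes and method signatures may differ.

```python
# Minimal sketch of a Display protocol with swappable backends.
# Names mirror the guide's description; signatures are assumptions.
from typing import Protocol


class Display(Protocol):
    def show(self, frame: str) -> None:
        """Present one rendered frame."""


class TerminalDisplay:
    """Writes frames to the terminal (the real one emits ANSI codes)."""

    def show(self, frame: str) -> None:
        print(frame, end="")


class MultiDisplay:
    """Forwards each frame to several backends simultaneously."""

    def __init__(self, *displays: Display) -> None:
        self.displays = list(displays)

    def show(self, frame: str) -> None:
        for display in self.displays:
            display.show(frame)
```

Because each backend only needs a structurally matching `show()` method, the render pipeline can stay unaware of whether it is driving a terminal, a WebSocket broadcaster, or both at once.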