forked from genewildish/Mainline

Compare commits: `drift...feature/ve` (65 commits)
8 .gitignore (vendored)

```
@@ -1,4 +1,12 @@
 __pycache__/
 *.pyc
 .mainline_venv/
+.venv/
+uv.lock
 .mainline_cache_*.json
+.DS_Store
+htmlcov/
+.coverage
+.pytest_cache/
+*.egg-info/
+coverage.xml
```
1 .python-version (new file)

```
@@ -0,0 +1 @@
+3.12
```
239 AGENTS.md (new file)

@@ -0,0 +1,239 @@

# Agent Development Guide

## Development Environment

This project uses:

- **mise** (mise.jdx.dev) - tool version manager and task runner
- **hk** (hk.jdx.dev) - git hook manager
- **uv** - fast Python package installer
- **ruff** - linter and formatter
- **pytest** - test runner
### Setup

```bash
# Install dependencies
mise run install

# Or equivalently:
uv sync --all-extras  # includes mic, websocket, sixel support
```
### Available Commands

```bash
mise run test           # Run tests
mise run test-v         # Run tests verbose
mise run test-cov       # Run tests with coverage report
mise run test-browser   # Run e2e browser tests (requires playwright)
mise run lint           # Run ruff linter
mise run lint-fix       # Run ruff with auto-fix
mise run format         # Run ruff formatter
mise run ci             # Full CI pipeline (topics-init + lint + test-cov)
```
### Runtime Commands

```bash
mise run run             # Run mainline (terminal)
mise run run-poetry      # Run with poetry feed
mise run run-firehose    # Run in firehose mode
mise run run-websocket   # Run with WebSocket display only
mise run run-sixel       # Run with Sixel graphics display
mise run run-both        # Run with both terminal and WebSocket
mise run run-client      # Run both + open browser
mise run cmd             # Run C&C command interface
```
## Git Hooks

**At the start of every agent session**, verify hooks are installed:

```bash
ls -la .git/hooks/pre-commit
```

If hooks are not installed, install them with:

```bash
hk init --mise
mise run pre-commit
```

**IMPORTANT**: Always review the hk documentation before modifying `hk.pkl`:

- [hk Configuration Guide](https://hk.jdx.dev/configuration.html)
- [hk Hooks Reference](https://hk.jdx.dev/hooks.html)
- [hk Builtins](https://hk.jdx.dev/builtins.html)

The project uses hk configured in `hk.pkl`:

- **pre-commit**: runs ruff-format and ruff (with auto-fix)
- **pre-push**: runs ruff check + benchmark hook
## Benchmark Runner

Run performance benchmarks:

```bash
mise run benchmark          # Run all benchmarks (text output)
mise run benchmark-json     # Run benchmarks (JSON output)
mise run benchmark-report   # Run benchmarks (Markdown report)
```

### Benchmark Commands

```bash
# Run benchmarks
uv run python -m engine.benchmark

# Run with specific displays/effects
uv run python -m engine.benchmark --displays null,terminal --effects fade,glitch

# Save baseline for hook comparisons
uv run python -m engine.benchmark --baseline

# Run in hook mode (compares against baseline)
uv run python -m engine.benchmark --hook

# Hook mode with custom threshold (default: 20% degradation)
uv run python -m engine.benchmark --hook --threshold 0.3

# Custom baseline location
uv run python -m engine.benchmark --hook --cache /path/to/cache.json
```

### Hook Mode

The `--hook` mode compares current benchmarks against a saved baseline. If performance degrades beyond the threshold (default 20%), it exits with code 1. This is useful for preventing performance regressions in feature branches.

The pre-push hook runs the benchmark in hook mode to catch performance regressions before pushing.
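The comparison behind hook mode can be sketched as follows. This is a minimal illustration only: the helper name `check_regressions` and the baseline file layout are assumptions for the sketch, not the actual `engine.benchmark` internals.

```python
import json

THRESHOLD = 0.20  # fail if any benchmark is >20% slower than its baseline


def check_regressions(baseline_path: str, current: dict[str, float]) -> int:
    """Compare current timings (seconds) against a saved baseline.

    Returns 0 if every benchmark is within the threshold, 1 otherwise,
    mirroring the exit-code contract described above.
    """
    with open(baseline_path) as f:
        baseline: dict[str, float] = json.load(f)

    failed = []
    for name, base_time in baseline.items():
        cur_time = current.get(name)
        if cur_time is None:
            continue  # benchmark no longer exists; nothing to compare
        degradation = (cur_time - base_time) / base_time
        if degradation > THRESHOLD:
            failed.append((name, degradation))

    for name, deg in failed:
        print(f"REGRESSION {name}: {deg:.0%} slower than baseline")
    return 1 if failed else 0
```

A pre-push hook would call this and abort the push on a non-zero return.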
## Workflow Rules

### Before Committing

1. **Always run the test suite** - never commit code that fails tests:

   ```bash
   mise run test
   ```

2. **Always run the linter**:

   ```bash
   mise run lint
   ```

3. **Fix any lint errors** before committing (or let the pre-commit hook handle it).

4. **Review your changes** using `git diff` to understand what will be committed.
### On Failing Tests

When tests fail, **determine whether it's an out-of-date test or a correctly failing test**:

- **Out-of-date test**: The test was written for old behavior that has legitimately changed. Update the test to match the new expected behavior.
- **Correctly failing test**: The test correctly identifies a broken contract. Fix the implementation, not the test.

**Never** modify a test to make it pass without understanding why it failed.
### Code Review

Before committing significant changes:

- Run `git diff` to review all changes
- Ensure new code follows existing patterns in the codebase
- Check that type hints are added for new functions
- Verify that tests exist for new functionality
## Testing

Tests live in `tests/` and follow the pattern `test_*.py`.

Run all tests:

```bash
mise run test
```

Run with coverage:

```bash
mise run test-cov
```

The project uses pytest with strict marker enforcement. Test configuration is in `pyproject.toml` under `[tool.pytest.ini_options]`.
### Test Coverage Strategy

Current coverage: 56% (336 tests).

Key areas with lower coverage (acceptable for now):

- **app.py** (8%): Main entry point - integration-heavy, requires a terminal
- **scroll.py** (10%): Terminal-dependent rendering logic
- **benchmark.py** (0%): Standalone benchmark tool, runs separately

Key areas with good coverage:

- **display/backends/null.py** (95%): Easy to test headlessly
- **display/backends/terminal.py** (96%): Uses mocking
- **display/backends/multi.py** (100%): Simple forwarding logic
- **effects/performance.py** (99%): Pure Python logic
- **eventbus.py** (96%): Simple event system
- **effects/controller.py** (95%): Effects command handling

Areas needing more tests:

- **websocket.py** (48%): Network I/O, hard to test in CI
- **ntfy.py** (50%): Network I/O, hard to test in CI
- **mic.py** (61%): Audio I/O, hard to test in CI

Note: Terminal-dependent modules (scroll, layer rendering) are harder to test in CI.

Performance regression tests are in `tests/test_benchmark.py` with `@pytest.mark.benchmark`.
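With strict marker enforcement, any `@pytest.mark.benchmark` test fails collection unless the marker is registered in `pyproject.toml`. A sketch of such a marked test; the body is hypothetical and not taken from `tests/test_benchmark.py`:

```python
import time

import pytest


@pytest.mark.benchmark  # must be registered under [tool.pytest.ini_options] markers
def test_scroll_step_budget():
    # Hypothetical time budget; the real suite compares against a saved baseline.
    start = time.perf_counter()
    sum(i * i for i in range(100_000))  # stand-in for one render/scroll step
    assert time.perf_counter() - start < 1.0
```

Run only these with `pytest -m benchmark`, or exclude them with `pytest -m "not benchmark"`.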
## Architecture Notes

- **ntfy.py** and **mic.py** are standalone modules with zero internal dependencies
- **eventbus.py** provides thread-safe event publishing for decoupled communication
- **controller.py** coordinates ntfy/mic monitoring and event publishing
- **effects/** - plugin architecture with performance monitoring
- The render pipeline: fetch → render → effects → scroll → terminal output
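The pattern behind a thread-safe event bus like this can be sketched as follows. This is illustrative only; the actual `eventbus.py` API is an assumption here:

```python
import threading
from collections import defaultdict
from typing import Any, Callable


class EventBus:
    """Minimal thread-safe publish/subscribe bus."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        with self._lock:
            self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: Any = None) -> None:
        with self._lock:
            handlers = list(self._subscribers[topic])  # snapshot under the lock
        for handler in handlers:  # invoke outside the lock to avoid deadlocks
            handler(payload)
```

Decoupling comes from publishers (ntfy/mic monitors) never importing their consumers; both sides only know the bus and a topic name.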
### Display System

- **Display abstraction** (`engine/display/`): swap display backends via the Display protocol
  - `display/backends/terminal.py` - ANSI terminal output
  - `display/backends/websocket.py` - broadcasts to web clients via WebSocket
  - `display/backends/sixel.py` - renders to Sixel graphics (pure Python, no C dependency)
  - `display/backends/null.py` - headless display for testing
  - `display/backends/multi.py` - forwards to multiple displays simultaneously
  - `display/__init__.py` - DisplayRegistry for backend discovery

- **WebSocket display** (`engine/display/backends/websocket.py`): real-time frame broadcasting to web browsers
  - WebSocket server on port 8765
  - HTTP server on port 8766 (serves the HTML client)
  - Client at `client/index.html` with ANSI color parsing and fullscreen support

- **Display modes** (`--display` flag):
  - `terminal` - Default ANSI terminal output
  - `websocket` - Web browser display (requires websockets package)
  - `sixel` - Sixel graphics in supported terminals (iTerm2, mintty, etc.)
  - `both` - Terminal + WebSocket simultaneously
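Swapping backends through a protocol can be illustrated like this. It is a hedged sketch: the method names on the real `Display` protocol in `engine/display/` may differ.

```python
from typing import Protocol


class Display(Protocol):
    """Structural interface a display backend is assumed to satisfy."""

    def show(self, frame: str) -> None: ...
    def close(self) -> None: ...


class NullDisplay:
    """Headless backend: records frames instead of rendering them."""

    def __init__(self) -> None:
        self.frames: list[str] = []

    def show(self, frame: str) -> None:
        self.frames.append(frame)

    def close(self) -> None:
        self.frames.clear()


def render_loop(display: Display, frames: list[str]) -> None:
    # Any object with show()/close() works: terminal, websocket, sixel, multi.
    for frame in frames:
        display.show(frame)
```

Because `Protocol` uses structural typing, backends never need to inherit from a common base class to be interchangeable.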
### Effect Plugin System

- **EffectPlugin ABC** (`engine/effects/types.py`): abstract base class for effects
  - All effects must inherit from EffectPlugin and implement `process()` and `configure()`
  - Runtime discovery via `effects_plugins/__init__.py` using `issubclass()` checks

- **EffectRegistry** (`engine/effects/registry.py`): manages registered effects
- **EffectChain** (`engine/effects/chain.py`): chains effects in pipeline order
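The plugin mechanics can be sketched roughly as follows. The signatures are assumptions for illustration and may not match `engine/effects/types.py`; the `Reverse` effect is a toy, not a real plugin:

```python
from abc import ABC, abstractmethod


class EffectPlugin(ABC):
    """Assumed plugin shape: configure once, then process each frame."""

    @abstractmethod
    def configure(self, **options) -> None: ...

    @abstractmethod
    def process(self, frame: str) -> str: ...


class Reverse(EffectPlugin):
    """Toy effect used only for illustration."""

    def configure(self, **options) -> None:
        self.enabled = options.get("enabled", True)

    def process(self, frame: str) -> str:
        return frame[::-1] if self.enabled else frame


def discover(namespace: dict) -> list[type[EffectPlugin]]:
    # Runtime discovery via issubclass() checks, as effects_plugins/__init__.py does.
    return [
        obj for obj in namespace.values()
        if isinstance(obj, type)
        and issubclass(obj, EffectPlugin)
        and obj is not EffectPlugin
    ]
```

An `EffectChain` then just folds a frame through each discovered plugin's `process()` in order.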
### Command & Control

- C&C uses separate ntfy topics for commands and responses
  - `NTFY_CC_CMD_TOPIC` - commands from cmdline.py
  - `NTFY_CC_RESP_TOPIC` - responses back to cmdline.py
- Effects controller handles `/effects` commands (list, on/off, intensity, reorder, stats)
### Pipeline Documentation

The rendering pipeline is documented in `docs/PIPELINE.md` using Mermaid diagrams.

**IMPORTANT**: When making significant architectural changes to the rendering pipeline (new layers, effects, display backends), update `docs/PIPELINE.md` to reflect the changes:

1. Edit `docs/PIPELINE.md` with the new architecture
2. If adding new SVG diagrams, render them manually using an external tool (e.g., Mermaid Live Editor)
3. Commit both the markdown and any new diagram files
@@ -3,29 +3,29 @@

mainline.py does heavy work unsuitable for ESP32: 25+ HTTPS/TLS RSS feeds, OTF font rasterization via Pillow, Google Translate API calls, and complex text layout. Simultaneously, messages arriving on `ntfy.sh/klubhaus_terminal_mainline` need to interrupt the news ticker on the same device.

## Architecture: Server + Thin Client

Split the system into two halves that are designed together.

**Server (mainline.py `--serve` mode, runs on any always-on machine)**

* Reuses the existing feed fetching, caching, content filtering, translation, and Pillow font rendering pipeline — no duplication.
* Pre-renders each headline into a 1-bit bitmap strip (the OTF→half-block pipeline already produces this as an intermediate step in `_render_line()`).
* Exposes a lightweight HTTP API the ESP32 polls.

**ESP32 thin client (Arduino sketch)**

* Polls the mainline server for pre-rendered headline bitmaps over plain HTTP (no TLS needed if on the same LAN).
* Polls `ntfy.sh/klubhaus_terminal_mainline` directly for messages, reusing the proven `NetManager::httpGet()` + JSON parsing pattern from DoorbellLogic (`DoorbellLogic.cpp:155-192`).
* Manages scrolling, gradient coloring, and glitch effects locally (cheap per-frame GPU work).
* When an ntfy message arrives, the scroll is paused and the message takes over the display — same interrupt pattern as the doorbell's ALERT→DASHBOARD flow.

## Server API (mainline repo)

New file: `serve.py` (or a `--serve` mode in mainline.py).

Endpoints:

* `GET /api/headlines` — returns a JSON array of headline metadata: `[{"id": 0, "src": "Nature", "ts": "14:30", "width": 280, "height": 16, "bitmap": "<base64 1-bit packed>"}]`. Bitmaps are 1-bit-per-pixel, row-major, packed 8px/byte. The ESP32 applies gradient color locally.
* `GET /api/config` — returns `{"count": 120, "version": "...", "mode": "news"}` so the ESP32 knows what it's getting.
* `GET /api/health` — `{"ok": true, "last_fetch": "...", "headline_count": 120}`
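The 1-bit packing format can be sketched in Python for the server side. The helper name `pack_bitmap` is hypothetical, and MSB-first bit order is an assumption for the sketch; server and ESP32 client just have to agree on it:

```python
import base64


def pack_bitmap(pixels: list[list[int]]) -> str:
    """Pack a row-major 1-bit image (0/1 per pixel) at 8 pixels per byte,
    MSB first, and base64-encode it for the /api/headlines JSON payload."""
    out = bytearray()
    for row in pixels:
        byte = 0
        for i, px in enumerate(row):
            byte = (byte << 1) | (px & 1)
            if i % 8 == 7:
                out.append(byte)
                byte = 0
        tail = len(row) % 8
        if tail:  # pad the last partial byte of each row with zeros
            out.append(byte << (8 - tail))
    return base64.b64encode(bytes(out)).decode("ascii")
```

For the example payload above, a 280×16 strip packs to 280/8 × 16 = 560 bytes before base64, so even a full headline set stays tiny on the wire.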
The server renders at a configurable target width (e.g. 800px for Board 3, 320px for Boards 1/2) via a `--width` flag or query parameter. Height is fixed per headline by the font size.

The server refreshes feeds on a timer (reusing the `_SCROLL_DUR` cadence or a longer interval), re-renders, and serves the latest set. The ESP32 polls `/api/headlines` periodically (e.g. every 60s) and swaps in the new set.

## Render pipeline (server side)

The existing `_render_line()` in mainline.py already does:

1. `ImageFont.truetype()` → `ImageDraw.text()` → grayscale `Image`
2. Resize to target height
3. Threshold to 1-bit (the `thr = 80` step)

For the server, we stop at step 3 and pack the 1-bit data into bytes instead of converting to half-block Unicode. This is the exact same pipeline, just with a different output format. The `_big_wrap()` and `_lr_gradient()` logic stays on the server for layout; gradient *coloring* moves to the ESP32 (it's just an index lookup per pixel column).
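That per-column index lookup can be sketched as follows. This is illustrative Python (on the device it would be C++), and the gradient values are placeholders rather than mainline.py's actual ramp:

```python
def rgb888_to_rgb565(r: int, g: int, b: int) -> int:
    """Convert 8-bit RGB to the RGB565 format ESP32 display drivers expect."""
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)


# Placeholder 12-step gradient; the real values would mirror mainline.py's ANSI ramp.
GRADIENT = [rgb888_to_rgb565(255, 20 * i, 255 - 20 * i) for i in range(12)]


def column_color(x: int, width: int) -> int:
    """Map pixel column x in [0, width) onto one of the 12 gradient steps."""
    return GRADIENT[min(11, x * 12 // width)]
```

Each lit pixel in the unpacked bitmap is then drawn with `column_color(x, width)`, which is why coloring costs almost nothing per frame on the ESP32.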
## ESP32 client

### State machine

@@ -35,19 +35,19 @@

```warp-runnable-command
BOOT → SCROLL ⇄ MESSAGE
```

* **BOOT** — WiFi connect, initial headline fetch from the server.
* **SCROLL** — Vertical scroll through pre-rendered headlines with local gradient + glitch. Polls the server for new headlines periodically. Polls ntfy every 15s.
* **MESSAGE** — ntfy message arrived. Scroll paused, message displayed. Auto-dismiss after timeout or touch-dismiss. Returns to SCROLL.
* **OFF** — Backlight off after inactivity (polling continues in the background).

### ntfy integration

The ESP32 polls `https://ntfy.sh/klubhaus_terminal_mainline/json?since=20s&poll=1` on the same 15s interval as the doorbell polls its topics. When a message event arrives:

1. Parse the JSON: `{"event": "message", "title": "...", "message": "..."}`
2. Save the current scroll position.
3. Transition to the MESSAGE state.
4. Render the message text using the display library's built-in fonts (messages are short, no custom font needed).
5. After `MESSAGE_TIMEOUT_MS` (e.g. 30s) or a touch, restore the scroll position and resume.

This is architecturally identical to `DoorbellLogic::onAlert()` → `dismissAlert()`, just with different content. The ntfy polling runs independently of the server connection, so messages work even if the mainline server is offline (the device just shows the last cached headlines).
### Headline storage

* Board 3 (8 MB PSRAM): store all ~120 headline bitmaps in PSRAM. At 800px × 16px × 1 bit = 1.6 KB each → ~192 KB total. Trivial.
* Boards 1/2 (PSRAM TBD): at 320px × 16px = 640 bytes each → ~77 KB for 120 headlines. Fits if PSRAM is present. Without PSRAM, keep ~20 headlines in a ring buffer (~13 KB).

### Gradient coloring (local)

The 12-step ANSI gradient in mainline.py maps to 12 RGB565 values:

```warp-runnable-command
const uint16_t GRADIENT[] = {
```

@@ -68,8 +68,8 @@

```warp-runnable-command
mainline.py (existing, unchanged)
serve.py (new — HTTP server, imports mainline rendering functions)
klubhaus-doorbell-hardware.md (existing)
```

`serve.py` imports the rendering functions from mainline.py (after refactoring them into importable form — they're currently top-level but not wrapped in `if __name__`).
### klubhaus-doorbell repo (or mainline repo under firmware/)

```warp-runnable-command
boards/esp32-mainline/
├── esp32-mainline.ino     Main sketch
```

@@ -79,31 +79,31 @@

```warp-runnable-command
├── HeadlineStore.h/.cpp   Bitmap ring buffer in PSRAM
└── NtfyPoller.h/.cpp      ntfy.sh polling (extracted from DoorbellLogic pattern)
```

The display driver is reused from the target board (e.g. `DisplayDriverGFX` for Board 3). `MainlineLogic` replaces `DoorbellLogic` as the state machine but follows the same patterns.
## Branch strategy recommendation

The work spans two repos and has a clear dependency ordering.

### Phase 1 — Finish current branch (mainline repo)

**Branch:** `feat/arduino` (current)

**Content:** Hardware spec doc. Already done.

**Action:** Merge to main when ready.

### Phase 2 — Server renderer (mainline repo)

**Branch:** `feat/renderer` (branch from main after Phase 1 merges)

**Content:**

* Refactor mainline.py rendering functions to be importable (extract from the `__main__` guard)
* `serve.py` — HTTP server with `/api/headlines`, `/api/config`, `/api/health`
* Bitmap packing utility (1-bit row-major)

**Why a separate branch:** This changes mainline.py's structure (refactoring for imports) and adds a new entry point. It's a self-contained, testable unit — you can verify the API with `curl` before touching any Arduino code.

### Phase 3 — ESP32 client (klubhaus-doorbell repo, or mainline repo)

**Branch:** `feat/mainline-client` in whichever repo hosts it

**Content:**

* `MainlineLogic` state machine
* `HeadlineStore` bitmap buffer
* `NtfyPoller` for `klubhaus_terminal_mainline`
* Board-specific sketch for the target board

**Depends on:** Phase 2 (needs a running server to test against)

**Repo decision:** If you have push access to klubhaus-doorbell, it fits naturally as a new board target alongside the existing doorbell sketches — it reuses `NetManager`, `IDisplayDriver`, and the vendored display libraries. If not, put it under `mainline/firmware/` and vendor the shared KlubhausCore library.

### Merge order

1. `feat/arduino` → main (hardware spec)
2. `feat/renderer` → main (server)
3. `feat/mainline-client` → main in whichever repo (ESP32 client)

Each phase is independently testable and doesn't block the others until Phase 3 needs a running server.
280
README.md
280
README.md
@@ -6,21 +6,45 @@ A full-screen terminal news ticker that renders live global headlines in large O
|
|||||||
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Run
|
## Using
|
||||||
|
|
||||||
|
### Run
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
python3 mainline.py # news stream
|
python3 mainline.py # news stream
|
||||||
python3 mainline.py --poetry # literary consciousness mode
|
python3 mainline.py --poetry # literary consciousness mode
|
||||||
python3 mainline.py -p # same
|
python3 mainline.py -p # same
|
||||||
python3 mainline.py --firehose # dense rapid-fire headline mode
|
python3 mainline.py --firehose # dense rapid-fire headline mode
|
||||||
python3 mainline.py --refresh # force re-fetch (bypass cache)
|
python3 mainline.py --display websocket # web browser display only
|
||||||
|
python3 mainline.py --display both # terminal + web browser
|
||||||
|
python3 mainline.py --no-font-picker # skip interactive font picker
|
||||||
|
python3 mainline.py --font-file path.otf # use a specific font file
|
||||||
|
python3 mainline.py --font-dir ~/fonts # scan a different font folder
|
||||||
|
python3 mainline.py --font-index 1 # select face index within a collection
|
||||||
```
|
```
|
||||||
|
|
||||||
First run bootstraps a local `.mainline_venv/` and installs deps (`feedparser`, `Pillow`, `sounddevice`, `numpy`). Subsequent runs start immediately, loading from cache.
|
Or with uv:
|
||||||
|
|
||||||
---
|
```bash
|
||||||
|
uv run mainline.py
|
||||||
|
```
|
||||||
|
|
||||||
## Config
|
First run bootstraps dependencies. Use `uv sync --all-extras` for mic support.
|
||||||
|
|
||||||
|
### Command & Control (C&C)
|
||||||
|
|
||||||
|
Control mainline remotely using `cmdline.py`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
uv run cmdline.py # Interactive TUI
|
||||||
|
uv run cmdline.py /effects list # List all effects
|
||||||
|
uv run cmdline.py /effects stats # Show performance stats
|
||||||
|
uv run cmdline.py -w /effects stats # Watch mode (auto-refresh)
|
||||||
|
```
|
||||||
|
|
||||||
|
Commands are sent via ntfy.sh topics - useful for controlling a daemonized mainline instance.
### Config

All constants live in `engine/config.py`:

@@ -29,69 +53,50 @@
| `HEADLINE_LIMIT` | `1000` | Total headlines per session |
| `FEED_TIMEOUT` | `10` | Per-feed HTTP timeout (seconds) |
| `MIC_THRESHOLD_DB` | `50` | dB floor above which glitches spike |
| `NTFY_TOPIC` | klubhaus URL | ntfy.sh JSON stream for messages |
| `NTFY_CC_CMD_TOPIC` | klubhaus URL | ntfy.sh topic for C&C commands |
| `NTFY_CC_RESP_TOPIC` | klubhaus URL | ntfy.sh topic for C&C responses |
| `NTFY_RECONNECT_DELAY` | `5` | Seconds before reconnecting after dropped SSE |
| `MESSAGE_DISPLAY_SECS` | `30` | How long an ntfy message holds the screen |
| `FONT_DIR` | `fonts/` | Folder scanned for `.otf`, `.ttf`, `.ttc` files |
| `FONT_PATH` | first file in `FONT_DIR` | Active display font |
| `FONT_PICKER` | `True` | Show interactive font picker at boot |
| `FONT_SZ` | `60` | Font render size (affects block density) |
| `RENDER_H` | `8` | Terminal rows per headline line |
| `SSAA` | `4` | Super-sampling factor |
| `SCROLL_DUR` | `5.625` | Seconds per headline |
| `FRAME_DT` | `0.05` | Frame interval in seconds (20 FPS) |
| `FIREHOSE_H` | `12` | Firehose zone height (terminal rows) |
| `GRAD_SPEED` | `0.08` | Gradient sweep speed |
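The timing constants compose directly. With the defaults above, each headline is on screen for 112.5 frames, and one full gradient sweep takes 12.5 seconds (pure arithmetic, shown as a sketch):

```python
# Defaults from engine/config.py, copied here for illustration.
SCROLL_DUR = 5.625  # seconds per headline
FRAME_DT = 0.05     # seconds per frame (20 FPS)
GRAD_SPEED = 0.08   # gradient cycles per second

frames_per_headline = SCROLL_DUR / FRAME_DT  # frames drawn per headline
sweep_period = 1 / GRAD_SPEED                # seconds per full gradient sweep
print(frames_per_headline, sweep_period)     # 112.5 and 12.5
```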
### Display Modes

Mainline supports multiple display backends:

- **Terminal** (`--display terminal`): ANSI terminal output (default)
- **WebSocket** (`--display websocket`): Stream to web browser clients
- **Sixel** (`--display sixel`): Sixel graphics in supported terminals (iTerm2, mintty)
- **Both** (`--display both`): Terminal + WebSocket simultaneously

WebSocket mode serves a web client at http://localhost:8766 with ANSI color support and fullscreen mode.
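The WebSocket backend and the bundled web client exchange small JSON messages; the `frame` and `resize` shapes below are read off the client source included in this change (a sketch, not the server's actual helper functions):

```python
import json

def frame_message(lines, width, height):
    """Serialize one terminal frame in the shape the web client parses."""
    return json.dumps({"type": "frame", "width": width, "height": height, "lines": lines})

def resize_message(width, height):
    """What the client sends back when its viewport changes."""
    return json.dumps({"type": "resize", "width": width, "height": height})

decoded = json.loads(frame_message(["hello", "world"], 80, 24))
```

Each `lines` entry may carry raw ANSI escape codes; the client parses them on a canvas rather than in a real terminal.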
### Feeds

~25 sources across four categories: **Science & Technology**, **Economics & Business**, **World & Politics**, **Culture & Ideas**. Add or swap feeds in `engine/sources.py` → `FEEDS`.

**Poetry mode** pulls from Project Gutenberg: Whitman, Dickinson, Thoreau, Emerson. Sources are in `engine/sources.py` → `POETRY_SOURCES`.
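The exact shape of `FEEDS` is not shown in this excerpt; assuming a plain category-to-URL-list mapping (hypothetical, check `engine/sources.py` for the real structure), adding a feed is a one-line edit:

```python
# Hypothetical shape for illustration only.
FEEDS = {
    "Science & Technology": ["https://example.org/sci.rss"],
    "World & Politics": ["https://example.org/world.rss"],
}

# Adding a source to a category:
FEEDS["Science & Technology"].append("https://example.org/more-sci.rss")
```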
### Fonts

A `fonts/` directory is bundled with demo faces. On startup, an interactive picker lists all discovered faces with a live half-block preview.

Navigation: `↑`/`↓` or `j`/`k` to move, `Enter` or `q` to select.

To add your own fonts, drop `.otf`, `.ttf`, or `.ttc` files into `fonts/`.
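The discovery step itself is simple; a sketch of scanning `FONT_DIR` for the extensions the picker accepts (function name hypothetical, and the real picker additionally renders a live preview):

```python
from pathlib import Path

def discover_fonts(font_dir="fonts"):
    """Collect candidate font files, sorted, the way a FONT_DIR scan would."""
    exts = {".otf", ".ttf", ".ttc"}
    return sorted(p for p in Path(font_dir).glob("*") if p.suffix.lower() in exts)
```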
### ntfy.sh

Mainline polls a configurable ntfy.sh topic in the background. When a message arrives, the scroll pauses and the message renders full-screen for `MESSAGE_DISPLAY_SECS` seconds, then the stream resumes.

@@ -99,43 +104,160 @@
To push a message:

```bash
curl -d "Body text" -H "Title: Alert title" https://ntfy.sh/your_topic
```

Update `NTFY_TOPIC` in `engine/config.py` to point at your own topic. The `NtfyPoller` class is fully standalone and can be reused by other visualizers:

```python
from engine.ntfy import NtfyPoller

poller = NtfyPoller("https://ntfy.sh/my_topic/json?since=20s&poll=1")
poller.start()

# in render loop:
msg = poller.get_active_message()  # returns (title, body, timestamp) or None
```

---

## Internals

### How it works

- On launch, the font picker scans `fonts/` and presents a live-rendered TUI for face selection
- Feeds are fetched and filtered on startup; results are cached for fast restarts
- Headlines are rasterized via Pillow with 4× SSAA into half-block characters
- The ticker uses a sweeping white-hot → deep green gradient
- Subject-region detection triggers Google Translate and font swap for non-Latin scripts
- The mic stream runs in a background thread, feeding RMS dB into glitch probability
- The viewport scrolls through pre-rendered blocks with fade zones
- An ntfy.sh SSE stream runs in a background thread for messages and C&C commands
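The half-block trick packs two vertically stacked pixels into one character cell. A minimal sketch of just that final step, assuming a boolean bitmap (the real pipeline rasterizes with Pillow and downsamples 4× first):

```python
# Each cell encodes (top pixel on?, bottom pixel on?).
BLOCKS = {(False, False): " ", (True, False): "▀", (False, True): "▄", (True, True): "█"}

def to_half_blocks(bitmap):
    """Fold pairs of bitmap rows into one row of half-block characters."""
    rows = []
    for y in range(0, len(bitmap), 2):
        top = bitmap[y]
        bot = bitmap[y + 1] if y + 1 < len(bitmap) else [False] * len(top)
        rows.append("".join(BLOCKS[(t, b)] for t, b in zip(top, bot)))
    return rows

art = to_half_blocks([[True, True, False], [False, True, True]])
```

Two bitmap rows become one terminal row, which is how `RENDER_H = 8` terminal rows carry a 16-pixel-tall glyph strip.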
### Architecture

```
engine/
  __init__.py        package marker
  app.py             main(), font picker TUI, boot sequence, C&C poller
  config.py          constants, CLI flags, glyph tables
  sources.py         FEEDS, POETRY_SOURCES, language/script maps
  terminal.py        ANSI codes, tw/th, type_out, boot_ln
  filter.py          HTML stripping, content filter
  translate.py       Google Translate wrapper + region detection
  render.py          OTF → half-block pipeline (SSAA, gradient)
  effects/           plugin architecture for visual effects
    types.py         EffectPlugin ABC, EffectConfig, EffectContext
    registry.py      effect registration and lookup
    chain.py         effect pipeline chaining
    controller.py    handles /effects commands
    performance.py   performance monitoring
    legacy.py        legacy functional effects
  effects_plugins/   effect plugin implementations
    noise.py         noise effect
    fade.py          fade effect
    glitch.py        glitch effect
    firehose.py      firehose effect
  fetch.py           RSS/Gutenberg fetching + cache
  ntfy.py            NtfyPoller — standalone, zero internal deps
  mic.py             MicMonitor — standalone, graceful fallback
  scroll.py          stream() frame loop + message rendering
  viewport.py        terminal dimension tracking
  frame.py           scroll step calculation, timing
  layers.py          ticker zone, firehose, message overlay
  eventbus.py        thread-safe event publishing
  events.py          event types and definitions
  controller.py      coordinates ntfy/mic monitoring
  emitters.py        background emitters
  types.py           type definitions
  display/           Display backend system
    __init__.py      DisplayRegistry, get_monitor
    backends/
      terminal.py    ANSI terminal display
      websocket.py   WebSocket server for browser clients
      sixel.py       Sixel graphics (pure Python)
      null.py        headless display for testing
      multi.py       forwards to multiple displays
  benchmark.py       performance benchmarking tool
```
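`eventbus.py` is described above as thread-safe event publishing; a minimal sketch of that pattern (names here are illustrative, not the module's actual API), with background emitters publishing and the frame loop draining once per frame:

```python
import queue
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    payload: object = None

class EventBus:
    """Emitters call publish() from their threads; the frame loop drains."""

    def __init__(self):
        self._q = queue.Queue()  # queue.Queue is already thread-safe

    def publish(self, event):
        self._q.put(event)

    def drain(self):
        """Return all pending events without blocking."""
        events = []
        while True:
            try:
                events.append(self._q.get_nowait())
            except queue.Empty:
                return events

bus = EventBus()
bus.publish(Event("mic_level", 42.0))
```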
---

## Development

### Setup

Requires Python 3.10+ and [uv](https://docs.astral.sh/uv/).

```bash
uv sync                           # minimal (no mic)
uv sync --all-extras              # with mic support
uv sync --all-extras --group dev  # full dev environment
```

### Tasks

With [mise](https://mise.jdx.dev/):
```bash
mise run test            # run test suite
mise run test-cov        # run with coverage report

mise run lint            # ruff check
mise run lint-fix        # ruff check --fix
mise run format          # ruff format

mise run run             # terminal display
mise run run-websocket   # web display only
mise run run-sixel       # sixel graphics
mise run run-both        # terminal + web
mise run run-client      # both + open browser

mise run cmd             # C&C command interface
mise run cmd-stats       # watch effects stats

mise run benchmark       # run performance benchmarks
mise run benchmark-json  # save as JSON

mise run topics-init     # initialize ntfy topics
```
### Testing

```bash
uv run pytest
uv run pytest --cov=engine --cov-report=term-missing

# Run with mise
mise run test
mise run test-cov

# Run performance benchmarks
mise run benchmark
mise run benchmark-json

# Run benchmark hook mode (for CI)
uv run python -m engine.benchmark --hook
```

Performance regression tests are in `tests/test_benchmark.py` marked with `@pytest.mark.benchmark`.
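A benchmark-marked test typically bounds an average frame time against the frame budget; a generic sketch of that pattern (the project's actual helpers and fixtures may differ):

```python
import time

def avg_frame_time(render, frames=50):
    """Average seconds per render call: the quantity a regression test bounds."""
    start = time.perf_counter()
    for _ in range(frames):
        render()
    return (time.perf_counter() - start) / frames

# A regression test would then assert the budget, e.g. stay under FRAME_DT:
#   assert avg_frame_time(noise_pass) < 0.05
avg = avg_frame_time(lambda: sum(range(1000)))
```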
### Linting

```bash
uv run ruff check engine/ mainline.py
uv run ruff format engine/ mainline.py
```

Pre-commit hooks run lint automatically via `hk`.

---
## Roadmap

### Performance

- Concurrent feed fetching with ThreadPoolExecutor
- Background feed refresh daemon
- Translation pre-fetch during boot
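The concurrent-fetch item is straightforward with the standard library: total startup wall time collapses from the sum of all feeds to roughly the slowest single feed. A sketch, where `fetch` stands in for the real per-feed HTTP request:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch, max_workers=8):
    """Fetch all feeds concurrently instead of blocking on them one by one."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))  # map() preserves input order

results = fetch_all(["feed-a", "feed-b", "feed-c"], fetch=str.upper)
```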
### Graphics

- Matrix rain katakana underlay
- CRT scanline simulation
- Sixel/iTerm2 inline images
- Parallax secondary column

### Cyberpunk Vibes

- Keyword watch list with strobe effects
- Breaking interrupt with synthesized audio
- Live data overlay (BTC, ISS position)
- Theme switcher (amber, ice, red)
- Persona modes (surveillance, oracle, underground)

### Extensibility

- **serve.py** — HTTP server that imports `engine.render` and `engine.fetch` directly to stream 1-bit bitmaps to an ESP32 display
- **Rust port** — `ntfy.py` and `render.py` are the natural first targets; clear module boundaries make incremental porting viable

---

*Python 3.10+. Primary display font is user-selectable via bundled `fonts/` picker.*
366
client/index.html
Normal file
@@ -0,0 +1,366 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Mainline Terminal</title>
  <style>
    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }
    body {
      background: #0a0a0a;
      color: #ccc;
      font-family: 'Fira Code', 'Cascadia Code', 'Consolas', monospace;
      display: flex;
      flex-direction: column;
      align-items: center;
      justify-content: center;
      min-height: 100vh;
      padding: 20px;
    }
    body.fullscreen {
      padding: 0;
    }
    body.fullscreen #controls {
      display: none;
    }
    #container {
      position: relative;
    }
    canvas {
      background: #000;
      border: 1px solid #333;
      image-rendering: pixelated;
      image-rendering: crisp-edges;
    }
    body.fullscreen canvas {
      border: none;
      width: 100vw;
      height: 100vh;
      max-width: 100vw;
      max-height: 100vh;
    }
    #controls {
      display: flex;
      gap: 10px;
      margin-top: 10px;
      align-items: center;
    }
    #controls button {
      background: #333;
      color: #ccc;
      border: 1px solid #555;
      padding: 5px 12px;
      cursor: pointer;
      font-family: inherit;
      font-size: 12px;
    }
    #controls button:hover {
      background: #444;
    }
    #controls input {
      width: 60px;
      background: #222;
      color: #ccc;
      border: 1px solid #444;
      padding: 4px 8px;
      font-family: inherit;
      text-align: center;
    }
    #status {
      margin-top: 10px;
      font-size: 12px;
      color: #666;
    }
    #status.connected {
      color: #4f4;
    }
    #status.disconnected {
      color: #f44;
    }
  </style>
</head>
<body>
  <div id="container">
    <canvas id="terminal"></canvas>
  </div>
  <div id="controls">
    <label>Cols: <input type="number" id="cols" value="80" min="20" max="200"></label>
    <label>Rows: <input type="number" id="rows" value="24" min="10" max="60"></label>
    <button id="apply">Apply</button>
    <button id="fullscreen">Fullscreen</button>
  </div>
  <div id="status" class="disconnected">Connecting...</div>

  <script>
    const canvas = document.getElementById('terminal');
    const ctx = canvas.getContext('2d');
    const status = document.getElementById('status');
    const colsInput = document.getElementById('cols');
    const rowsInput = document.getElementById('rows');
    const applyBtn = document.getElementById('apply');
    const fullscreenBtn = document.getElementById('fullscreen');

    const CHAR_WIDTH = 9;
    const CHAR_HEIGHT = 16;

    const ANSI_COLORS = {
      0: '#000000', 1: '#cd3131', 2: '#0dbc79', 3: '#e5e510',
      4: '#2472c8', 5: '#bc3fbc', 6: '#11a8cd', 7: '#e5e5e5',
      8: '#666666', 9: '#f14c4c', 10: '#23d18b', 11: '#f5f543',
      12: '#3b8eea', 13: '#d670d6', 14: '#29b8db', 15: '#ffffff',
    };

    let cols = 80;
    let rows = 24;
    let ws = null;

    function resizeCanvas() {
      canvas.width = cols * CHAR_WIDTH;
      canvas.height = rows * CHAR_HEIGHT;
    }

    function parseAnsi(text) {
      if (!text) return [];

      const tokens = [];
      let currentText = '';
      let fg = '#cccccc';
      let bg = '#000000';
      let bold = false;
      let i = 0;
      let inEscape = false;
      let escapeCode = '';

      while (i < text.length) {
        const char = text[i];

        if (inEscape) {
          if (char >= '0' && char <= '9' || char === ';' || char === '[') {
            escapeCode += char;
          }

          if (char === 'm') {
            const codes = escapeCode.replace('\x1b[', '').split(';');

            for (const code of codes) {
              const num = parseInt(code) || 0;

              if (num === 0) {
                fg = '#cccccc';
                bg = '#000000';
                bold = false;
              } else if (num === 1) {
                bold = true;
              } else if (num === 22) {
                bold = false;
              } else if (num === 39) {
                fg = '#cccccc';
              } else if (num === 49) {
                bg = '#000000';
              } else if (num >= 30 && num <= 37) {
                fg = ANSI_COLORS[num - 30 + (bold ? 8 : 0)] || '#cccccc';
              } else if (num >= 40 && num <= 47) {
                bg = ANSI_COLORS[num - 40] || '#000000';
              } else if (num >= 90 && num <= 97) {
                fg = ANSI_COLORS[num - 90 + 8] || '#cccccc';
              } else if (num >= 100 && num <= 107) {
                bg = ANSI_COLORS[num - 100 + 8] || '#000000';
              } else if (num >= 1 && num <= 256) {
                // 256 colors
                if (num < 16) {
                  fg = ANSI_COLORS[num] || '#cccccc';
                } else if (num < 232) {
                  const c = num - 16;
                  const r = Math.floor(c / 36) * 51;
                  const g = Math.floor((c % 36) / 6) * 51;
                  const b = (c % 6) * 51;
                  fg = `#${r.toString(16).padStart(2,'0')}${g.toString(16).padStart(2,'0')}${b.toString(16).padStart(2,'0')}`;
                } else {
                  // grayscale ramp: repeat the 2-digit hex for all three channels
                  const gray = (num - 232) * 10 + 8;
                  fg = `#${gray.toString(16).padStart(2, '0').repeat(3)}`;
                }
              }
            }

            if (currentText) {
              tokens.push({ text: currentText, fg, bg, bold });
              currentText = '';
            }
            inEscape = false;
            escapeCode = '';
          }
        } else if (char === '\x1b' && text[i + 1] === '[') {
          if (currentText) {
            tokens.push({ text: currentText, fg, bg, bold });
            currentText = '';
          }
          inEscape = true;
          escapeCode = '';
          i++;
        } else {
          currentText += char;
        }
        i++;
      }

      if (currentText) {
        tokens.push({ text: currentText, fg, bg, bold });
      }

      return tokens;
    }

    function renderLine(text, x, y, lineHeight) {
      const tokens = parseAnsi(text);
      let xOffset = x;

      for (const token of tokens) {
        if (token.text) {
          if (token.bold) {
            ctx.font = 'bold 16px monospace';
          } else {
            ctx.font = '16px monospace';
          }

          const metrics = ctx.measureText(token.text);

          if (token.bg !== '#000000') {
            ctx.fillStyle = token.bg;
            ctx.fillRect(xOffset, y - 2, metrics.width + 1, lineHeight);
          }

          ctx.fillStyle = token.fg;
          ctx.fillText(token.text, xOffset, y);
          xOffset += metrics.width;
        }
      }
    }

    function connect() {
      const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
      const wsUrl = `${protocol}//${window.location.hostname}:8765`;

      ws = new WebSocket(wsUrl);

      ws.onopen = () => {
        status.textContent = 'Connected';
        status.className = 'connected';
        sendSize();
      };

      ws.onclose = () => {
        status.textContent = 'Disconnected - Reconnecting...';
        status.className = 'disconnected';
        setTimeout(connect, 1000);
      };

      ws.onerror = () => {
        status.textContent = 'Connection error';
        status.className = 'disconnected';
      };

      ws.onmessage = (event) => {
        try {
          const data = JSON.parse(event.data);

          if (data.type === 'frame') {
            cols = data.width || 80;
            rows = data.height || 24;
            colsInput.value = cols;
            rowsInput.value = rows;
            resizeCanvas();
            render(data.lines || []);
          } else if (data.type === 'clear') {
            ctx.fillStyle = '#000';
            ctx.fillRect(0, 0, canvas.width, canvas.height);
          }
        } catch (e) {
          console.error('Failed to parse message:', e);
        }
      };
    }

    function sendSize() {
      if (ws && ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({
          type: 'resize',
          width: parseInt(colsInput.value),
          height: parseInt(rowsInput.value)
        }));
      }
    }

    function render(lines) {
      ctx.fillStyle = '#000';
      ctx.fillRect(0, 0, canvas.width, canvas.height);

      ctx.font = '16px monospace';
      ctx.textBaseline = 'top';

      const lineHeight = CHAR_HEIGHT;
      const maxLines = Math.min(lines.length, rows);

      for (let i = 0; i < maxLines; i++) {
        const line = lines[i] || '';
        renderLine(line, 0, i * lineHeight, lineHeight);
      }
    }

    function calculateViewportSize() {
      const isFullscreen = document.fullscreenElement !== null;
      const padding = isFullscreen ? 0 : 40;
      const controlsHeight = isFullscreen ? 0 : 60;
      const availableWidth = window.innerWidth - padding;
      const availableHeight = window.innerHeight - controlsHeight;
      cols = Math.max(20, Math.floor(availableWidth / CHAR_WIDTH));
      rows = Math.max(10, Math.floor(availableHeight / CHAR_HEIGHT));
      colsInput.value = cols;
      rowsInput.value = rows;
      resizeCanvas();
      console.log('Fullscreen:', isFullscreen, 'Size:', cols, 'x', rows);
      sendSize();
    }

    applyBtn.addEventListener('click', () => {
      cols = parseInt(colsInput.value);
      rows = parseInt(rowsInput.value);
      resizeCanvas();
      sendSize();
    });

    fullscreenBtn.addEventListener('click', () => {
      if (!document.fullscreenElement) {
        document.body.classList.add('fullscreen');
        document.documentElement.requestFullscreen().then(() => {
          calculateViewportSize();
        });
      } else {
        document.exitFullscreen().then(() => {
          calculateViewportSize();
        });
      }
    });

    document.addEventListener('fullscreenchange', () => {
      if (!document.fullscreenElement) {
        document.body.classList.remove('fullscreen');
        calculateViewportSize();
      }
    });

    window.addEventListener('resize', () => {
      if (document.fullscreenElement) {
        calculateViewportSize();
      }
    });

    // Initial setup
    resizeCanvas();
    connect();
  </script>
</body>
</html>
256
cmdline.py
Normal file
@@ -0,0 +1,256 @@
#!/usr/bin/env python3
|
||||||
|
# -*- coding: utf-8 -*-
|
||||||
|
"""
|
||||||
|
Command-line utility for interacting with mainline via ntfy.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
python cmdline.py # Interactive TUI mode
|
||||||
|
python cmdline.py --help # Show help
|
||||||
|
python cmdline.py /effects list # Send single command via ntfy
|
||||||
|
python cmdline.py /effects stats # Get performance stats via ntfy
|
||||||
|
python cmdline.py -w /effects stats # Watch mode (polls for stats)
|
||||||
|
|
||||||
|
The TUI mode provides:
|
||||||
|
- Arrow keys to navigate command history
|
||||||
|
- Tab completion for commands
|
||||||
|
- Auto-refresh for performance stats
|
||||||
|
|
||||||
|
C&C works like a serial port:
|
||||||
|
1. Send command to ntfy_cc_topic
|
||||||
|
2. Mainline receives, processes, responds to same topic
|
||||||
|
3. Cmdline polls for response
|
||||||
|
"""
|
||||||
|
|
||||||
|
import os
|
||||||
|
|
||||||
|
os.environ["FORCE_COLOR"] = "1"
|
||||||
|
os.environ["TERM"] = "xterm-256color"
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import json
|
||||||
|
import sys
|
||||||
|
import time
|
||||||
|
import threading
|
||||||
|
import urllib.request
|
||||||
|
from pathlib import Path
|
||||||
|
|
||||||
|
from engine import config
|
||||||
|
from engine.terminal import CLR, CURSOR_OFF, CURSOR_ON, G_DIM, G_HI, RST, W_GHOST
|
||||||
|
|
||||||
|
try:
|
||||||
|
CC_CMD_TOPIC = config.NTFY_CC_CMD_TOPIC
|
||||||
|
CC_RESP_TOPIC = config.NTFY_CC_RESP_TOPIC
|
||||||
|
except AttributeError:
|
||||||
|
CC_CMD_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json"
|
||||||
|
CC_RESP_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json"
|
||||||
|
|
||||||
|
|
||||||
|
class NtfyResponsePoller:
|
||||||
|
"""Polls ntfy for command responses."""
|
||||||
|
|
||||||
|
def __init__(self, cmd_topic: str, resp_topic: str, timeout: float = 10.0):
|
||||||
|
self.cmd_topic = cmd_topic
|
||||||
|
self.resp_topic = resp_topic
|
||||||
|
self.timeout = timeout
|
||||||
|
self._last_id = None
|
||||||
|
self._lock = threading.Lock()
|
||||||
|
|
||||||
|
def _build_url(self) -> str:
|
||||||
|
from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
|
||||||
|
|
||||||
|
parsed = urlparse(self.resp_topic)
|
||||||
|
params = parse_qs(parsed.query, keep_blank_values=True)
|
||||||
|
params["since"] = [self._last_id if self._last_id else "20s"]
|
||||||
|
new_query = urlencode({k: v[0] for k, v in params.items()})
|
||||||
|
return urlunparse(parsed._replace(query=new_query))
|
||||||
|
|
||||||
|
def send_and_wait(self, cmd: str) -> str:
|
||||||
|
"""Send command and wait for response."""
|
||||||
|
url = self.cmd_topic.replace("/json", "")
|
||||||
|
data = cmd.encode("utf-8")
|
||||||
|
|
||||||
|
req = urllib.request.Request(
|
||||||
|
url,
|
||||||
|
data=data,
|
||||||
|
headers={
|
||||||
|
"User-Agent": "mainline-cmdline/0.1",
|
||||||
|
"Content-Type": "text/plain",
|
||||||
|
},
|
||||||
|
method="POST",
|
||||||
|
)
|
||||||
|
|
||||||
|
try:
|
||||||
|
urllib.request.urlopen(req, timeout=5)
|
||||||
|
except Exception as e:
|
||||||
|
return f"Error sending command: {e}"
|
||||||
|
|
||||||
|
return self._wait_for_response(cmd)
|
||||||
|
|
||||||
|
def _wait_for_response(self, expected_cmd: str = "") -> str:
|
||||||
|
"""Poll for response message."""
|
||||||
|
start = time.time()
|
||||||
|
while time.time() - start < self.timeout:
|
||||||
|
try:
|
||||||
|
url = self._build_url()
|
||||||
|
req = urllib.request.Request(
|
||||||
|
url, headers={"User-Agent": "mainline-cmdline/0.1"}
|
||||||
|
)
|
||||||
|
with urllib.request.urlopen(req, timeout=10) as resp:
|
||||||
|
for line in resp:
|
||||||
|
try:
|
||||||
|
data = json.loads(line.decode("utf-8", errors="replace"))
|
||||||
|
except json.JSONDecodeError:
|
||||||
|
continue
|
||||||
|
if data.get("event") == "message":
|
||||||
|
self._last_id = data.get("id")
|
||||||
|
msg = data.get("message", "")
|
||||||
|
if msg:
|
||||||
|
return msg
|
||||||
|
except Exception:
|
||||||
|
pass
|
||||||
|
time.sleep(0.5)
|
||||||
|
return "Timeout waiting for response"
|
||||||
|
|
||||||
|
|
||||||
|
AVAILABLE_COMMANDS = """Available commands:
|
||||||
|
/effects list - List all effects and status
|
||||||
|
/effects <name> on - Enable an effect
|
||||||
|
/effects <name> off - Disable an effect
|
||||||
|
/effects <name> intensity <0.0-1.0> - Set effect intensity
|
||||||
|
/effects reorder <name1>,<name2>,... - Reorder pipeline
|
||||||
|
/effects stats - Show performance statistics
|
||||||
|
/help - Show this help
|
||||||
|
/quit - Exit
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
def print_header():
|
||||||
|
w = 60
|
||||||
|
print(CLR, end="")
|
||||||
|
print(CURSOR_OFF, end="")
|
||||||
|
print(f"\033[1;1H", end="")
|
||||||
|
print(f" \033[1;38;5;231m╔{'═' * (w - 6)}╗\033[0m")
|
||||||
|
print(
|
||||||
|
f" \033[1;38;5;231m║\033[0m \033[1;38;5;82mMAINLINE\033[0m \033[3;38;5;245mCommand Center\033[0m \033[1;38;5;231m ║\033[0m"
|
||||||
|
)
|
||||||
|
print(f" \033[1;38;5;231m╚{'═' * (w - 6)}╝\033[0m")
|
||||||
|
print(f" \033[2;38;5;37mCMD: {CC_CMD_TOPIC.split('/')[-2]}\033[0m")
|
||||||
|
print(f" \033[2;38;5;37mRESP: {CC_RESP_TOPIC.split('/')[-2]}\033[0m")
|
||||||
|
print()
|
||||||
|
|
||||||
|
|
||||||
|
def print_response(response: str, is_error: bool = False) -> None:
    """Print response with nice formatting."""
    print()
    if is_error:
        print(f" \033[1;38;5;196m✗ Error\033[0m")
        print(f" \033[38;5;196m{'─' * 40}\033[0m")
    else:
        print(f" \033[1;38;5;82m✓ Response\033[0m")
        print(f" \033[38;5;37m{'─' * 40}\033[0m")

    for line in response.split("\n"):
        print(f" {line}")
    print()


def interactive_mode():
    """Interactive TUI for sending commands."""
    import readline  # imported for its side effect: line editing/history for input()

    print_header()
    poller = NtfyResponsePoller(CC_CMD_TOPIC, CC_RESP_TOPIC)

    print(f" \033[38;5;245mType /help for commands, /quit to exit\033[0m")
    print()

    while True:
        try:
            cmd = input(f" \033[1;38;5;82m❯\033[0m {G_HI}").strip()
        except (EOFError, KeyboardInterrupt):
            print()
            break

        if not cmd:
            continue

        if cmd.startswith("/"):
            if cmd == "/quit" or cmd == "/exit":
                print(f"\n \033[1;38;5;245mGoodbye!{RST}\n")
                break

            if cmd == "/help":
                print(f"\n{AVAILABLE_COMMANDS}\n")
                continue

            print(f" \033[38;5;245m⟳ Sending to mainline...{RST}")
            result = poller.send_and_wait(cmd)
            print_response(result, is_error=result.startswith("Error"))
        else:
            print(f"\n \033[1;38;5;196m⚠ Commands must start with /{RST}\n")

    print(CURSOR_ON, end="")
    return 0


def main():
    parser = argparse.ArgumentParser(
        description="Mainline command-line interface",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=AVAILABLE_COMMANDS,
    )
    parser.add_argument(
        "command",
        nargs="?",
        default=None,
        help="Command to send (e.g., /effects list)",
    )
    parser.add_argument(
        "--watch",
        "-w",
        action="store_true",
        help="Watch mode: continuously poll for stats (Ctrl+C to exit)",
    )

    args = parser.parse_args()

    if args.command is None:
        return interactive_mode()

    poller = NtfyResponsePoller(CC_CMD_TOPIC, CC_RESP_TOPIC)

    if args.watch and "/effects stats" in args.command:
        import signal

        def handle_sigterm(*_):
            print(f"\n \033[1;38;5;245mStopped watching{RST}")
            print(CURSOR_ON, end="")
            sys.exit(0)

        signal.signal(signal.SIGTERM, handle_sigterm)

        print_header()
        print(f" \033[38;5;245mWatching /effects stats (Ctrl+C to exit)...{RST}\n")
        try:
            while True:
                result = poller.send_and_wait(args.command)
                print(f"\033[2J\033[1;1H", end="")
                print(
                    f" \033[1;38;5;82m❯\033[0m Performance Stats - \033[1;38;5;245m{time.strftime('%H:%M:%S')}{RST}"
                )
                print(f" \033[38;5;37m{'─' * 44}{RST}")
                for line in result.split("\n"):
                    print(f" {line}")
                time.sleep(2)
        except KeyboardInterrupt:
            print(f"\n \033[1;38;5;245mStopped watching{RST}")
            return 0
        return 0

    result = poller.send_and_wait(args.command)
    print(result)
    return 0


if __name__ == "__main__":
    main()
199
docs/PIPELINE.md
Normal file
@@ -0,0 +1,199 @@
# Mainline Pipeline

## Architecture Overview

```
Sources (static/dynamic) → Fetch → Prepare → Scroll → Effects → Render → Display
                                      ↓
                          NtfyPoller ← MicMonitor (async)
```

### Data Source Abstraction (sources_v2.py)

- **Static sources**: Data fetched once and cached (HeadlinesDataSource, PoetryDataSource)
- **Dynamic sources**: Idempotent fetch for runtime updates (PipelineDataSource)
- **SourceRegistry**: Discovery and management of data sources
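
The static/dynamic split comes down to whether a call may re-fetch. A self-contained sketch of the two caching behaviors (class and method names here are illustrative, not the actual `sources_v2.py` interfaces):

```python
class StaticSource:
    """Sketch of static behavior: fetch once, cache forever."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = None

    def items(self):
        if self._cache is None:  # only the first call hits the fetcher
            self._cache = self._fetch()
        return self._cache


class DynamicSource:
    """Sketch of dynamic behavior: idempotent fetch on every call."""

    def __init__(self, fetch):
        self._fetch = fetch

    def items(self):
        return self._fetch()


calls = {"n": 0}

def counting_fetch():
    calls["n"] += 1
    return ["headline one"]

static = StaticSource(counting_fetch)
static.items()
static.items()  # cached: the fetcher has run once so far
dynamic = DynamicSource(counting_fetch)
dynamic.items()
dynamic.items()  # fetcher runs every time
# calls["n"] is now 3
```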

### Camera Modes

- **Vertical**: Scroll up (default)
- **Horizontal**: Scroll left
- **Omni**: Diagonal scroll
- **Floating**: Sinusoidal bobbing
- **Trace**: Follow network path node-by-node (for pipeline viz)
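
The Floating mode's bobbing can be sketched as a pure function of time; the amplitudes and the 0.7 frequency ratio below are illustrative, not the engine's actual constants:

```python
import math

def floating_offset(t: float, amp_x: float = 4.0, amp_y: float = 2.0) -> tuple[int, int]:
    """Camera offset for a 'floating' bob: both axes driven by sine waves."""
    x = int(amp_x * math.sin(t))
    y = int(amp_y * math.sin(t * 0.7))  # a different period keeps the loop from repeating quickly
    return x, y

floating_offset(0.0)  # → (0, 0): the bob starts centered
```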

## Content to Display Rendering Pipeline

```mermaid
flowchart TD
    subgraph Sources["Data Sources (v2)"]
        Headlines[HeadlinesDataSource]
        Poetry[PoetryDataSource]
        Pipeline[PipelineDataSource]
        Registry[SourceRegistry]
    end

    subgraph SourcesLegacy["Data Sources (legacy)"]
        RSS[("RSS Feeds")]
        PoetryFeed[("Poetry Feed")]
        Ntfy[("Ntfy Messages")]
        Mic[("Microphone")]
    end

    subgraph Fetch["Fetch Layer"]
        FC[fetch_all]
        FP[fetch_poetry]
        Cache[(Cache)]
    end

    subgraph Prepare["Prepare Layer"]
        MB[make_block]
        Strip[strip_tags]
        Trans[translate]
    end

    subgraph Scroll["Scroll Engine"]
        SC[StreamController]
        CAM[Camera]
        RTZ[render_ticker_zone]
        Msg[render_message_overlay]
        Grad[lr_gradient]
        VT[vis_trunc / vis_offset]
    end

    subgraph Effects["Effect Pipeline"]
        subgraph EffectsPlugins["Effect Plugins"]
            Noise[NoiseEffect]
            Fade[FadeEffect]
            Glitch[GlitchEffect]
            Firehose[FirehoseEffect]
            Hud[HudEffect]
        end
        EC[EffectChain]
        ER[EffectRegistry]
    end

    subgraph Render["Render Layer"]
        BW[big_wrap]
        RL[render_line]
    end

    subgraph Display["Display Backends"]
        TD[TerminalDisplay]
        PD[PygameDisplay]
        SD[SixelDisplay]
        KD[KittyDisplay]
        WSD[WebSocketDisplay]
        ND[NullDisplay]
    end

    subgraph Async["Async Sources"]
        NTFY[NtfyPoller]
        MIC[MicMonitor]
    end

    subgraph Animation["Animation System"]
        AC[AnimationController]
        PR[Preset]
    end

    Sources --> Fetch
    RSS --> FC
    PoetryFeed --> FP
    FC --> Cache
    FP --> Cache
    Cache --> MB
    Strip --> MB
    Trans --> MB
    MB --> SC
    NTFY --> SC
    SC --> RTZ
    CAM --> RTZ
    Grad --> RTZ
    VT --> RTZ
    RTZ --> EC
    EC --> ER
    ER --> EffectsPlugins
    EffectsPlugins --> BW
    BW --> RL
    RL --> Display
    Ntfy --> RL
    Mic --> RL
    MIC --> RL

    style Sources fill:#f9f,stroke:#333
    style Fetch fill:#bbf,stroke:#333
    style Prepare fill:#bff,stroke:#333
    style Scroll fill:#bfb,stroke:#333
    style Effects fill:#fbf,stroke:#333
    style Render fill:#ffb,stroke:#333
    style Display fill:#bbf,stroke:#333
    style Async fill:#fbb,stroke:#333
    style Animation fill:#bfb,stroke:#333
```
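
The EffectChain step above is ordered function composition over the frame buffer: each plugin's `process(buf, ctx)` receives the previous plugin's output. A minimal stand-in with toy effects (the real chain lives in `engine.effects`):

```python
class UpperEffect:
    """Toy plugin: uppercases every buffer line."""
    name = "upper"

    def process(self, buf, ctx):
        return [line.upper() for line in buf]


class BangEffect:
    """Toy plugin: appends a marker to every buffer line."""
    name = "bang"

    def process(self, buf, ctx):
        return [line + "!" for line in buf]


def run_chain(effects, buf, ctx=None):
    # ordered composition: the output of one effect feeds the next
    for effect in effects:
        buf = effect.process(buf, ctx)
    return buf

frame = run_chain([UpperEffect(), BangEffect()], ["hello", "world"])
# → ["HELLO!", "WORLD!"]
```

Reordering the list is exactly what `/effects reorder` changes: the same plugins, applied in a different sequence.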

## Animation & Presets

```mermaid
flowchart LR
    subgraph Preset["Preset"]
        PP[PipelineParams]
        AC[AnimationController]
    end

    subgraph AnimationController["AnimationController"]
        Clock[Clock]
        Events[Events]
        Triggers[Triggers]
    end

    subgraph Triggers["Trigger Types"]
        TIME[TIME]
        FRAME[FRAME]
        CYCLE[CYCLE]
        COND[CONDITION]
        MANUAL[MANUAL]
    end

    PP --> AC
    Clock --> AC
    Events --> AC
    Triggers --> Events
```
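
Events map raw progress (0 to 1) through an easing function before invoking their action; `ease_in_out` in `engine/animation.py` is the standard smoothstep curve:

```python
def ease_in_out(t: float) -> float:
    # smoothstep: zero slope at both ends, so motion starts and stops gently
    return t * t * (3 - 2 * t)

samples = [ease_in_out(t / 4) for t in range(5)]
# → [0.0, 0.15625, 0.5, 0.84375, 1.0]
```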

## Camera Modes

```mermaid
stateDiagram-v2
    [*] --> Vertical
    Vertical --> Horizontal: mode change
    Horizontal --> Omni: mode change
    Omni --> Floating: mode change
    Floating --> Trace: mode change
    Trace --> Vertical: mode change

    state Vertical {
        [*] --> ScrollUp
        ScrollUp --> ScrollUp: +y each frame
    }

    state Horizontal {
        [*] --> ScrollLeft
        ScrollLeft --> ScrollLeft: +x each frame
    }

    state Omni {
        [*] --> Diagonal
        Diagonal --> Diagonal: +x, +y each frame
    }

    state Floating {
        [*] --> Bobbing
        Bobbing --> Bobbing: sin(time) for x,y
    }

    state Trace {
        [*] --> FollowPath
        FollowPath --> FollowPath: node by node
    }
```
145
docs/superpowers/specs/2026-03-15-readme-update-design.md
Normal file
@@ -0,0 +1,145 @@
# README Update Design — 2026-03-15

## Goal

Restructure and expand `README.md` to:
1. Align with the current codebase (Python 3.10+, uv/mise/pytest/ruff toolchain, 6 new fonts)
2. Add extensibility-focused content (`Extending` section)
3. Add developer workflow coverage (`Development` section)
4. Improve navigability via top-level grouping (Approach C)

---

## Proposed Structure

```
# MAINLINE
> tagline + description

## Using
### Run
### Config
### Feeds
### Fonts
### ntfy.sh

## Internals
### How it works
### Architecture

## Extending
### NtfyPoller
### MicMonitor
### Render pipeline

## Development
### Setup
### Tasks
### Testing
### Linting

## Roadmap

---
*footer*
```

---

## Section-by-section design

### Using

All existing content preserved verbatim. Two changes:
- **Run**: add `uv run mainline.py` as an alternative invocation; expand bootstrap note to mention `uv sync` / `uv sync --all-extras`
- **ntfy.sh**: remove `NtfyPoller` reuse code example (moves to Extending); keep push instructions and topic config

Subsections moved into Using (currently standalone):
- `Feeds` — it's configuration, not a concept
- `ntfy.sh` (usage half)

### Internals

All existing content preserved verbatim. One change:
- **Architecture**: append `tests/` directory listing to the module tree

### Extending

Entirely new section. Three subsections:

**NtfyPoller**
- Minimal working import + usage example
- Note: stdlib-only dependencies

```python
from engine.ntfy import NtfyPoller

poller = NtfyPoller("https://ntfy.sh/my_topic/json?since=20s&poll=1")
poller.start()

# in your render loop:
msg = poller.get_active_message()  # → (title, body, timestamp) or None
if msg:
    title, body, ts = msg
    render_my_message(title, body)  # visualizer-specific
```

**MicMonitor**
- Minimal working import + usage example
- Note: sounddevice/numpy optional, degrades gracefully

```python
from engine.mic import MicMonitor

mic = MicMonitor(threshold_db=50)
if mic.start():  # returns False if sounddevice unavailable
    excess = mic.excess  # dB above threshold, clamped to 0
    db = mic.db  # raw RMS dB level
```

**Render pipeline**
- Brief prose about `engine.render` as importable pipeline
- Minimal sketch of serve.py / ESP32 usage pattern
- Reference to `Mainline Renderer + ntfy Message Queue for ESP32.md`
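
The serve.py / ESP32 pattern is a headless frame loop: render a frame, push it over the transport, repeat. A self-contained stand-in (`render_frame` and `push` are placeholders, not the `engine.render` API):

```python
def render_frame(tick: int, width: int = 8) -> str:
    """Placeholder renderer: a bar that grows with the tick counter."""
    return ("#" * (tick % (width + 1))).ljust(width)

def push(frame: str, sink: list) -> None:
    """Placeholder transport: a real driver would write to the ESP32 instead."""
    sink.append(frame)

sent: list[str] = []
for tick in range(3):
    push(render_frame(tick), sent)
# sent holds three 8-char frames, the bar one column longer each tick
```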

### Development

Entirely new section. Four subsections:

**Setup**
- Hard requirements: Python 3.10+, uv
- `uv sync` / `uv sync --all-extras` / `uv sync --group dev`

**Tasks** (via mise)
- `mise run test`, `test-cov`, `lint`, `lint-fix`, `format`, `run`, `run-poetry`, `run-firehose`

**Testing**
- Tests in `tests/` covering config, filter, mic, ntfy, sources, terminal
- `uv run pytest` and `uv run pytest --cov=engine --cov-report=term-missing`

**Linting**
- `uv run ruff check` and `uv run ruff format`
- Note: pre-commit hooks run lint via `hk`

### Roadmap

Existing `## Ideas / Future` content preserved verbatim. Only change: rename heading to `## Roadmap`.

### Footer

Update `Python 3.9+` → `Python 3.10+`.

---

## Files changed

- `README.md` — restructured and expanded as above
- No other files

---

## What is not changing

- All existing prose, examples, and config table values — preserved verbatim where retained
- The Ideas/Future content — kept intact under the new Roadmap heading
- The cyberpunk voice and terse style of the existing README
38
effects_plugins/__init__.py
Normal file
@@ -0,0 +1,38 @@
from pathlib import Path

PLUGIN_DIR = Path(__file__).parent


def discover_plugins():
    from engine.effects.registry import get_registry
    from engine.effects.types import EffectPlugin

    registry = get_registry()
    imported = {}

    for file_path in PLUGIN_DIR.glob("*.py"):
        if file_path.name.startswith("_"):
            continue
        module_name = file_path.stem
        if module_name in ("base", "types"):
            continue

        try:
            module = __import__(f"effects_plugins.{module_name}", fromlist=[""])
            for attr_name in dir(module):
                attr = getattr(module, attr_name)
                if (
                    isinstance(attr, type)
                    and issubclass(attr, EffectPlugin)
                    and attr is not EffectPlugin
                    and attr_name.endswith("Effect")
                ):
                    plugin = attr()
                    if not isinstance(plugin, EffectPlugin):
                        continue
                    registry.register(plugin)
                    imported[plugin.name] = plugin
        except Exception:
            pass

    return imported
58
effects_plugins/fade.py
Normal file
@@ -0,0 +1,58 @@
import random

from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class FadeEffect(EffectPlugin):
    name = "fade"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not ctx.ticker_height:
            return buf
        result = list(buf)
        intensity = self.config.intensity

        top_zone = max(1, int(ctx.ticker_height * 0.25))
        bot_zone = max(1, int(ctx.ticker_height * 0.10))

        for r in range(len(result)):
            if r >= ctx.ticker_height:
                continue
            top_f = min(1.0, r / top_zone) if top_zone > 0 else 1.0
            bot_f = (
                min(1.0, (ctx.ticker_height - 1 - r) / bot_zone)
                if bot_zone > 0
                else 1.0
            )
            row_fade = min(top_f, bot_f) * intensity

            if row_fade < 1.0 and result[r].strip():
                result[r] = self._fade_line(result[r], row_fade)

        return result

    def _fade_line(self, s: str, fade: float) -> str:
        if fade >= 1.0:
            return s
        if fade <= 0.0:
            return ""
        result = []
        i = 0
        while i < len(s):
            if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
                j = i + 2
                while j < len(s) and not s[j].isalpha():
                    j += 1
                result.append(s[i : j + 1])
                i = j + 1
            elif s[i] == " ":
                result.append(" ")
                i += 1
            else:
                result.append(s[i] if random.random() < fade else " ")
                i += 1
        return "".join(result)

    def configure(self, config: EffectConfig) -> None:
        self.config = config
72
effects_plugins/firehose.py
Normal file
@@ -0,0 +1,72 @@
import random
from datetime import datetime

from engine import config
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
from engine.sources import FEEDS, POETRY_SOURCES
from engine.terminal import C_DIM, G_DIM, G_LO, RST, W_GHOST


class FirehoseEffect(EffectPlugin):
    name = "firehose"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        firehose_h = config.FIREHOSE_H if config.FIREHOSE else 0
        if firehose_h <= 0 or not ctx.items:
            return buf

        result = list(buf)
        intensity = self.config.intensity
        h = ctx.terminal_height

        for fr in range(firehose_h):
            scr_row = h - firehose_h + fr + 1
            fline = self._firehose_line(ctx.items, ctx.terminal_width, intensity)
            result.append(f"\033[{scr_row};1H{fline}\033[K")
        return result

    def _firehose_line(self, items: list, w: int, intensity: float) -> str:
        r = random.random()
        if r < 0.35 * intensity:
            title, src, ts = random.choice(items)
            text = title[: w - 1]
            color = random.choice([G_LO, G_DIM, W_GHOST, C_DIM])
            return f"{color}{text}{RST}"
        elif r < 0.55 * intensity:
            d = random.choice([0.45, 0.55, 0.65, 0.75])
            return "".join(
                f"{random.choice([G_LO, G_DIM, C_DIM, W_GHOST])}"
                f"{random.choice(config.GLITCH + config.KATA)}{RST}"
                if random.random() < d
                else " "
                for _ in range(w)
            )
        elif r < 0.78 * intensity:
            sources = FEEDS if config.MODE == "news" else POETRY_SOURCES
            src = random.choice(list(sources.keys()))
            msgs = [
                f" SIGNAL :: {src} :: {datetime.now().strftime('%H:%M:%S.%f')[:-3]}",
                f" ░░ FEED ACTIVE :: {src}",
                f" >> DECODE 0x{random.randint(0x1000, 0xFFFF):04X} :: {src[:24]}",
                f" ▒▒ ACQUIRE :: {random.choice(['TCP', 'UDP', 'RSS', 'ATOM', 'XML'])} :: {src}",
                f" {''.join(random.choice(config.KATA) for _ in range(3))} STRM "
                f"{random.randint(0, 255):02X}:{random.randint(0, 255):02X}",
            ]
            text = random.choice(msgs)[: w - 1]
            color = random.choice([G_LO, G_DIM, W_GHOST])
            return f"{color}{text}{RST}"
        else:
            title, _, _ = random.choice(items)
            start = random.randint(0, max(0, len(title) - 20))
            frag = title[start : start + random.randint(10, 35)]
            pad = random.randint(0, max(0, w - len(frag) - 8))
            gp = "".join(
                random.choice(config.GLITCH) for _ in range(random.randint(1, 3))
            )
            text = (" " * pad + gp + " " + frag)[: w - 1]
            color = random.choice([G_LO, C_DIM, W_GHOST])
            return f"{color}{text}{RST}"

    def configure(self, config: EffectConfig) -> None:
        self.config = config
37
effects_plugins/glitch.py
Normal file
@@ -0,0 +1,37 @@
import random

from engine import config
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
from engine.terminal import C_DIM, DIM, G_DIM, G_LO, RST


class GlitchEffect(EffectPlugin):
    name = "glitch"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not buf:
            return buf
        result = list(buf)
        intensity = self.config.intensity

        glitch_prob = 0.32 + min(0.9, ctx.mic_excess * 0.16)
        glitch_prob = glitch_prob * intensity
        n_hits = 4 + int(ctx.mic_excess / 2)
        n_hits = int(n_hits * intensity)

        if random.random() < glitch_prob:
            for _ in range(min(n_hits, len(result))):
                gi = random.randint(0, len(result) - 1)
                scr_row = gi + 1
                result[gi] = f"\033[{scr_row};1H{self._glitch_bar(ctx.terminal_width)}"
        return result

    def _glitch_bar(self, w: int) -> str:
        c = random.choice(["░", "▒", "─", "\xc2"])
        n = random.randint(3, w // 2)
        o = random.randint(0, w - n)
        return " " * o + f"{G_LO}{DIM}" + c * n + RST

    def configure(self, config: EffectConfig) -> None:
        self.config = config
63
effects_plugins/hud.py
Normal file
@@ -0,0 +1,63 @@
from engine.effects.performance import get_monitor
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class HudEffect(EffectPlugin):
    name = "hud"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        result = list(buf)
        monitor = get_monitor()

        fps = 0.0
        frame_time = 0.0
        if monitor:
            stats = monitor.get_stats()
            if stats and "pipeline" in stats:
                frame_time = stats["pipeline"].get("avg_ms", 0.0)
                frame_count = stats.get("frame_count", 0)
                if frame_count > 0 and frame_time > 0:
                    fps = 1000.0 / frame_time

        w = ctx.terminal_width
        h = ctx.terminal_height

        effect_name = self.config.params.get("display_effect", "none")
        effect_intensity = self.config.params.get("display_intensity", 0.0)

        hud_lines = []
        hud_lines.append(
            f"\033[1;1H\033[38;5;46mMAINLINE DEMO\033[0m \033[38;5;245m|\033[0m \033[38;5;39mFPS: {fps:.1f}\033[0m \033[38;5;245m|\033[0m \033[38;5;208m{frame_time:.1f}ms\033[0m"
        )

        bar_width = 20
        filled = int(bar_width * effect_intensity)
        bar = (
            "\033[38;5;82m"
            + "█" * filled
            + "\033[38;5;240m"
            + "░" * (bar_width - filled)
            + "\033[0m"
        )
        hud_lines.append(
            f"\033[2;1H\033[38;5;45mEFFECT:\033[0m \033[1;38;5;227m{effect_name:12s}\033[0m \033[38;5;245m|\033[0m {bar} \033[38;5;245m|\033[0m \033[38;5;219m{effect_intensity * 100:.0f}%\033[0m"
        )

        from engine.effects import get_effect_chain

        chain = get_effect_chain()
        order = chain.get_order()
        pipeline_str = ",".join(order) if order else "(none)"
        hud_lines.append(f"\033[3;1H\033[38;5;44mPIPELINE:\033[0m {pipeline_str}")

        for i, line in enumerate(hud_lines):
            if i < len(result):
                result[i] = line + result[i][len(line) :]
            else:
                result.append(line)

        return result

    def configure(self, config: EffectConfig) -> None:
        self.config = config
36
effects_plugins/noise.py
Normal file
@@ -0,0 +1,36 @@
import random

from engine import config
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
from engine.terminal import C_DIM, G_DIM, G_LO, RST, W_GHOST


class NoiseEffect(EffectPlugin):
    name = "noise"
    config = EffectConfig(enabled=True, intensity=0.15)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not ctx.ticker_height:
            return buf
        result = list(buf)
        intensity = self.config.intensity
        probability = intensity * 0.15

        for r in range(len(result)):
            cy = ctx.scroll_cam + r
            if random.random() < probability:
                result[r] = self._generate_noise(ctx.terminal_width, cy)
        return result

    def _generate_noise(self, w: int, cy: int) -> str:
        d = random.choice([0.15, 0.25, 0.35, 0.12])
        return "".join(
            f"{random.choice([G_LO, G_DIM, C_DIM, W_GHOST])}"
            f"{random.choice(config.GLITCH + config.KATA)}{RST}"
            if random.random() < d
            else " "
            for _ in range(w)
        )

    def configure(self, config: EffectConfig) -> None:
        self.config = config
340
engine/animation.py
Normal file
@@ -0,0 +1,340 @@
"""
|
||||||
|
Animation system - Clock, events, triggers, durations, and animation controller.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import time
|
||||||
|
from collections.abc import Callable
|
||||||
|
from dataclasses import dataclass, field
|
||||||
|
from enum import Enum, auto
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
|
||||||
|
class Clock:
|
||||||
|
"""High-resolution clock for animation timing."""
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
self._start_time = time.perf_counter()
|
||||||
|
self._paused = False
|
||||||
|
self._pause_offset = 0.0
|
||||||
|
self._pause_start = 0.0
|
||||||
|
|
||||||
|
def reset(self) -> None:
|
||||||
|
self._start_time = time.perf_counter()
|
||||||
|
self._paused = False
|
||||||
|
self._pause_offset = 0.0
|
||||||
|
self._pause_start = 0.0
|
||||||
|
|
||||||
|
def elapsed(self) -> float:
|
||||||
|
if self._paused:
|
||||||
|
return self._pause_start - self._start_time - self._pause_offset
|
||||||
|
return time.perf_counter() - self._start_time - self._pause_offset
|
||||||
|
|
||||||
|
def elapsed_ms(self) -> float:
|
||||||
|
return self.elapsed() * 1000
|
||||||
|
|
||||||
|
def elapsed_frames(self, fps: float = 60.0) -> int:
|
||||||
|
return int(self.elapsed() * fps)
|
||||||
|
|
||||||
|
def pause(self) -> None:
|
||||||
|
if not self._paused:
|
||||||
|
self._paused = True
|
||||||
|
self._pause_start = time.perf_counter()
|
||||||
|
|
||||||
|
def resume(self) -> None:
|
||||||
|
if self._paused:
|
||||||
|
self._pause_offset += time.perf_counter() - self._pause_start
|
||||||
|
self._paused = False
|
||||||
|
|
||||||
|
|
||||||
|
class TriggerType(Enum):
|
||||||
|
TIME = auto() # Trigger after elapsed time
|
||||||
|
FRAME = auto() # Trigger after N frames
|
||||||
|
CYCLE = auto() # Trigger on cycle repeat
|
||||||
|
CONDITION = auto() # Trigger when condition is met
|
||||||
|
MANUAL = auto() # Trigger manually
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class Trigger:
|
||||||
|
"""Event trigger configuration."""
|
||||||
|
|
||||||
|
type: TriggerType
|
||||||
|
value: float | int = 0
|
||||||
|
condition: Callable[["AnimationController"], bool] | None = None
|
||||||
|
repeat: bool = False
|
||||||
|
repeat_interval: float = 0.0
|
||||||
|
|
||||||
|
|
||||||
|
@dataclass
|
||||||
|
class Event:
|
||||||
|
"""An event with trigger, duration, and action."""
|
||||||
|
|
||||||
|
name: str
|
||||||
|
trigger: Trigger
|
||||||
|
action: Callable[["AnimationController", float], None]
|
||||||
|
duration: float = 0.0
|
||||||
|
ease: Callable[[float], float] | None = None
|
||||||
|
|
||||||
|
def __post_init__(self):
|
||||||
|
if self.ease is None:
|
||||||
|
self.ease = linear_ease
|
||||||
|
|
||||||
|
|
||||||
|
def linear_ease(t: float) -> float:
|
||||||
|
return t
|
||||||
|
|
||||||
|
|
||||||
|
def ease_in_out(t: float) -> float:
|
||||||
|
return t * t * (3 - 2 * t)
|
||||||
|
|
def ease_out_bounce(t: float) -> float:
    if t < 1 / 2.75:
        return 7.5625 * t * t
    elif t < 2 / 2.75:
        t -= 1.5 / 2.75
        return 7.5625 * t * t + 0.75
    elif t < 2.5 / 2.75:
        t -= 2.25 / 2.75
        return 7.5625 * t * t + 0.9375
    else:
        t -= 2.625 / 2.75
        return 7.5625 * t * t + 0.984375


class AnimationController:
    """Controls animation parameters with clock and events."""

    def __init__(self, fps: float = 60.0):
        self.clock = Clock()
        self.fps = fps
        self.frame = 0
        self._events: list[Event] = []
        self._active_events: dict[str, float] = {}
        self._params: dict[str, Any] = {}
        self._cycled = 0

    def add_event(self, event: Event) -> "AnimationController":
        self._events.append(event)
        return self

    def set_param(self, key: str, value: Any) -> None:
        self._params[key] = value

    def get_param(self, key: str, default: Any = None) -> Any:
        return self._params.get(key, default)

    def update(self) -> dict[str, Any]:
        """Update animation state, return current params."""
        elapsed = self.clock.elapsed()

        for event in self._events:
            triggered = False

            if event.trigger.type == TriggerType.TIME:
                if self.clock.elapsed() >= event.trigger.value:
                    triggered = True
            elif event.trigger.type == TriggerType.FRAME:
                if self.frame >= event.trigger.value:
                    triggered = True
            elif event.trigger.type == TriggerType.CYCLE:
                cycle_duration = event.trigger.value
                if cycle_duration > 0:
                    current_cycle = int(elapsed / cycle_duration)
                    if current_cycle > self._cycled:
                        self._cycled = current_cycle
                        triggered = True
            elif event.trigger.type == TriggerType.CONDITION:
                if event.trigger.condition and event.trigger.condition(self):
                    triggered = True
            elif event.trigger.type == TriggerType.MANUAL:
                pass

            if triggered:
                if event.name not in self._active_events:
                    self._active_events[event.name] = 0.0

                progress = 0.0
                if event.duration > 0:
                    self._active_events[event.name] += 1 / self.fps
                    progress = min(
                        1.0, self._active_events[event.name] / event.duration
                    )
                    eased_progress = event.ease(progress)
                    event.action(self, eased_progress)

                    if progress >= 1.0:
                        if event.trigger.repeat:
                            self._active_events[event.name] = 0.0
                        else:
                            del self._active_events[event.name]
                else:
                    event.action(self, 1.0)
                    if not event.trigger.repeat:
                        del self._active_events[event.name]
                    else:
                        self._active_events[event.name] = 0.0

        self.frame += 1
        return dict(self._params)

@dataclass
class PipelineParams:
    """Snapshot of pipeline parameters for animation."""

    effect_enabled: dict[str, bool] = field(default_factory=dict)
    effect_intensity: dict[str, float] = field(default_factory=dict)
    camera_mode: str = "vertical"
    camera_speed: float = 1.0
    camera_x: int = 0
    camera_y: int = 0
    display_backend: str = "terminal"
    scroll_speed: float = 1.0


class Preset:
    """Packages a starting pipeline config + Animation controller."""

    def __init__(
        self,
        name: str,
        description: str = "",
        initial_params: PipelineParams | None = None,
        animation: AnimationController | None = None,
    ):
        self.name = name
        self.description = description
        self.initial_params = initial_params or PipelineParams()
        self.animation = animation or AnimationController()

    def create_controller(self) -> AnimationController:
        controller = AnimationController()
        for key, value in self.initial_params.__dict__.items():
            controller.set_param(key, value)
        for event in self.animation._events:
            controller.add_event(event)
        return controller

def create_demo_preset() -> Preset:
    """Create the demo preset with effect cycling and camera modes."""
    animation = AnimationController(fps=60)

    effects = ["noise", "fade", "glitch", "firehose"]
    camera_modes = ["vertical", "horizontal", "omni", "floating", "trace"]

    def make_effect_action(eff):
        def action(ctrl, t):
            ctrl.set_param("current_effect", eff)
            ctrl.set_param("effect_intensity", t)

        return action

    def make_camera_action(cam_mode):
        def action(ctrl, t):
            ctrl.set_param("camera_mode", cam_mode)

        return action

    for i, effect in enumerate(effects):
        effect_duration = 5.0

        animation.add_event(
            Event(
                name=f"effect_{effect}",
                trigger=Trigger(
                    type=TriggerType.TIME,
                    value=i * effect_duration,
                    repeat=True,
                    repeat_interval=len(effects) * effect_duration,
                ),
                duration=effect_duration,
                action=make_effect_action(effect),
                ease=ease_in_out,
            )
        )

    for i, mode in enumerate(camera_modes):
        camera_duration = 10.0
        animation.add_event(
            Event(
                name=f"camera_{mode}",
                trigger=Trigger(
                    type=TriggerType.TIME,
                    value=i * camera_duration,
                    repeat=True,
                    repeat_interval=len(camera_modes) * camera_duration,
                ),
                duration=0.5,
                action=make_camera_action(mode),
            )
        )

    animation.add_event(
        Event(
            name="pulse",
            trigger=Trigger(type=TriggerType.CYCLE, value=2.0, repeat=True),
            duration=1.0,
            action=lambda ctrl, t: ctrl.set_param("pulse", t),
            ease=ease_out_bounce,
        )
    )

    return Preset(
        name="demo",
        description="Demo mode with effect cycling and camera modes",
        initial_params=PipelineParams(
            effect_enabled={
                "noise": False,
                "fade": False,
                "glitch": False,
                "firehose": False,
                "hud": True,
            },
            effect_intensity={
                "noise": 0.0,
                "fade": 0.0,
                "glitch": 0.0,
                "firehose": 0.0,
            },
            camera_mode="vertical",
            camera_speed=1.0,
            display_backend="pygame",
        ),
        animation=animation,
    )


def create_pipeline_preset() -> Preset:
    """Create preset for pipeline visualization."""
    animation = AnimationController(fps=60)

    animation.add_event(
        Event(
            name="camera_trace",
            trigger=Trigger(type=TriggerType.CYCLE, value=8.0, repeat=True),
            duration=8.0,
            action=lambda ctrl, t: ctrl.set_param("camera_mode", "trace"),
        )
    )

    animation.add_event(
        Event(
            name="highlight_path",
            trigger=Trigger(type=TriggerType.CYCLE, value=4.0, repeat=True),
            duration=4.0,
            action=lambda ctrl, t: ctrl.set_param("path_progress", t),
        )
    )

    return Preset(
        name="pipeline",
        description="Pipeline visualization with trace camera",
        initial_params=PipelineParams(
            camera_mode="trace",
            camera_speed=1.0,
            display_backend="pygame",
        ),
        animation=animation,
    )
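The `ease_out_bounce` curve used by the `pulse` event above is a standard piecewise-parabolic bounce easing; a minimal standalone sketch (restating the function from this file so it runs on its own) shows it maps 0 → 0 and settles at roughly 1 at t = 1:

```python
# Standalone copy of the bounce easing from the presets file above,
# so the curve can be inspected without the rest of the engine.
def ease_out_bounce(t: float) -> float:
    if t < 1 / 2.75:
        return 7.5625 * t * t
    elif t < 2 / 2.75:
        t -= 1.5 / 2.75
        return 7.5625 * t * t + 0.75
    elif t < 2.5 / 2.75:
        t -= 2.25 / 2.75
        return 7.5625 * t * t + 0.9375
    else:
        t -= 2.625 / 2.75
        return 7.5625 * t * t + 0.984375


# Sample the curve: it starts at 0, "bounces" toward 1, and each
# segment peak touches 1.0 before the curve dips again.
samples = [round(ease_out_bounce(t / 10), 4) for t in range(11)]
print(samples)
```

Note the last branch evaluates to 1.0 only up to floating-point rounding, so exact comparison against 1.0 at t = 1 should be done with a tolerance.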
1025  engine/app.py — file diff suppressed because it is too large
4107  engine/beautiful_mermaid.py (new file) — file diff suppressed because it is too large
730   engine/benchmark.py (new file)
@@ -0,0 +1,730 @@
#!/usr/bin/env python3
"""
Benchmark runner for mainline - tests performance across effects and displays.

Usage:
    python -m engine.benchmark
    python -m engine.benchmark --output report.md
    python -m engine.benchmark --displays terminal,websocket --effects glitch,fade
    python -m engine.benchmark --format json --output benchmark.json

Headless mode (default): suppress all terminal output during benchmarks.
"""

import argparse
import json
import sys
import time
from dataclasses import dataclass, field
from datetime import datetime
from io import StringIO
from pathlib import Path
from typing import Any

import numpy as np


@dataclass
class BenchmarkResult:
    """Result of a single benchmark run."""

    name: str
    display: str
    effect: str | None
    iterations: int
    total_time_ms: float
    avg_time_ms: float
    std_dev_ms: float
    min_ms: float
    max_ms: float
    fps: float
    chars_processed: int
    chars_per_sec: float


@dataclass
class BenchmarkReport:
    """Complete benchmark report."""

    timestamp: str
    python_version: str
    results: list[BenchmarkResult] = field(default_factory=list)
    summary: dict[str, Any] = field(default_factory=dict)


def get_sample_buffer(width: int = 80, height: int = 24) -> list[str]:
    """Generate a sample buffer for benchmarking."""
    lines = []
    for i in range(height):
        line = f"\x1b[32mLine {i}\x1b[0m " + "A" * (width - 10)
        lines.append(line)
    return lines


def benchmark_display(
    display_class,
    buffer: list[str],
    iterations: int = 100,
    display=None,
    reuse: bool = False,
) -> BenchmarkResult | None:
    """Benchmark a single display.

    Args:
        display_class: Display class to instantiate
        buffer: Buffer to display
        iterations: Number of iterations
        display: Optional existing display instance to reuse
        reuse: If True and display provided, use reuse mode
    """
    old_stdout = sys.stdout
    old_stderr = sys.stderr

    try:
        sys.stdout = StringIO()
        sys.stderr = StringIO()

        if display is None:
            display = display_class()
            display.init(80, 24, reuse=False)
            should_cleanup = True
        else:
            should_cleanup = False

        times = []
        chars = sum(len(line) for line in buffer)

        for _ in range(iterations):
            t0 = time.perf_counter()
            display.show(buffer)
            elapsed = (time.perf_counter() - t0) * 1000
            times.append(elapsed)

        if should_cleanup and hasattr(display, "cleanup"):
            display.cleanup(quit_pygame=False)

    except Exception:
        return None
    finally:
        sys.stdout = old_stdout
        sys.stderr = old_stderr

    times_arr = np.array(times)

    return BenchmarkResult(
        name=f"display_{display_class.__name__}",
        display=display_class.__name__,
        effect=None,
        iterations=iterations,
        total_time_ms=sum(times),
        avg_time_ms=float(np.mean(times_arr)),
        std_dev_ms=float(np.std(times_arr)),
        min_ms=float(np.min(times_arr)),
        max_ms=float(np.max(times_arr)),
        fps=float(1000.0 / np.mean(times_arr)) if np.mean(times_arr) > 0 else 0.0,
        chars_processed=chars * iterations,
        chars_per_sec=float((chars * iterations) / (sum(times) / 1000))
        if sum(times) > 0
        else 0.0,
    )


def benchmark_effect_with_display(
    effect_class, display, buffer: list[str], iterations: int = 100, reuse: bool = False
) -> BenchmarkResult | None:
    """Benchmark an effect with a display.

    Args:
        effect_class: Effect class to instantiate
        display: Display instance to use
        buffer: Buffer to process and display
        iterations: Number of iterations
        reuse: If True, use reuse mode for display
    """
    old_stdout = sys.stdout
    old_stderr = sys.stderr

    try:
        from engine.effects.types import EffectConfig, EffectContext

        sys.stdout = StringIO()
        sys.stderr = StringIO()

        effect = effect_class()
        effect.configure(EffectConfig(enabled=True, intensity=1.0))

        ctx = EffectContext(
            terminal_width=80,
            terminal_height=24,
            scroll_cam=0,
            ticker_height=0,
            mic_excess=0.0,
            grad_offset=0.0,
            frame_number=0,
            has_message=False,
        )

        times = []
        chars = sum(len(line) for line in buffer)

        for _ in range(iterations):
            processed = effect.process(buffer, ctx)
            t0 = time.perf_counter()
            display.show(processed)
            elapsed = (time.perf_counter() - t0) * 1000
            times.append(elapsed)

        if not reuse and hasattr(display, "cleanup"):
            display.cleanup(quit_pygame=False)

    except Exception:
        return None
    finally:
        sys.stdout = old_stdout
        sys.stderr = old_stderr

    times_arr = np.array(times)

    return BenchmarkResult(
        name=f"effect_{effect_class.__name__}_with_{display.__class__.__name__}",
        display=display.__class__.__name__,
        effect=effect_class.__name__,
        iterations=iterations,
        total_time_ms=sum(times),
        avg_time_ms=float(np.mean(times_arr)),
        std_dev_ms=float(np.std(times_arr)),
        min_ms=float(np.min(times_arr)),
        max_ms=float(np.max(times_arr)),
        fps=float(1000.0 / np.mean(times_arr)) if np.mean(times_arr) > 0 else 0.0,
        chars_processed=chars * iterations,
        chars_per_sec=float((chars * iterations) / (sum(times) / 1000))
        if sum(times) > 0
        else 0.0,
    )


def get_available_displays():
    """Get available display classes."""
    from engine.display import (
        DisplayRegistry,
        NullDisplay,
        TerminalDisplay,
    )

    DisplayRegistry.initialize()

    displays = [
        ("null", NullDisplay),
        ("terminal", TerminalDisplay),
    ]

    try:
        from engine.display.backends.websocket import WebSocketDisplay

        displays.append(("websocket", WebSocketDisplay))
    except Exception:
        pass

    try:
        from engine.display.backends.sixel import SixelDisplay

        displays.append(("sixel", SixelDisplay))
    except Exception:
        pass

    try:
        from engine.display.backends.pygame import PygameDisplay

        displays.append(("pygame", PygameDisplay))
    except Exception:
        pass

    return displays


def get_available_effects():
    """Get available effect classes."""
    try:
        from engine.effects import get_registry

        try:
            from effects_plugins import discover_plugins

            discover_plugins()
        except Exception:
            pass
    except Exception:
        return []

    effects = []
    registry = get_registry()

    for name, effect in registry.list_all().items():
        if effect:
            effect_cls = type(effect)
            effects.append((name, effect_cls))

    return effects


def run_benchmarks(
    displays: list[tuple[str, Any]] | None = None,
    effects: list[tuple[str, Any]] | None = None,
    iterations: int = 100,
    verbose: bool = False,
) -> BenchmarkReport:
    """Run all benchmarks and return report."""
    from datetime import datetime

    if displays is None:
        displays = get_available_displays()

    if effects is None:
        effects = get_available_effects()

    buffer = get_sample_buffer(80, 24)
    results = []

    if verbose:
        print(f"Running benchmarks ({iterations} iterations each)...")

    pygame_display = None
    for name, display_class in displays:
        if verbose:
            print(f"Benchmarking display: {name}")

        result = benchmark_display(display_class, buffer, iterations)
        if result:
            results.append(result)
            if verbose:
                print(f"  {result.fps:.1f} FPS, {result.avg_time_ms:.2f}ms avg")

        if name == "pygame":
            pygame_display = result

    if verbose:
        print()

    pygame_instance = None
    if pygame_display:
        try:
            from engine.display.backends.pygame import PygameDisplay

            PygameDisplay.reset_state()
            pygame_instance = PygameDisplay()
            pygame_instance.init(80, 24, reuse=False)
        except Exception:
            pygame_instance = None

    for effect_name, effect_class in effects:
        for display_name, display_class in displays:
            if display_name == "websocket":
                continue

            if display_name == "pygame":
                if verbose:
                    print(f"Benchmarking effect: {effect_name} with {display_name}")

                if pygame_instance:
                    result = benchmark_effect_with_display(
                        effect_class, pygame_instance, buffer, iterations, reuse=True
                    )
                    if result:
                        results.append(result)
                        if verbose:
                            print(
                                f"  {result.fps:.1f} FPS, {result.avg_time_ms:.2f}ms avg"
                            )
                continue

            if verbose:
                print(f"Benchmarking effect: {effect_name} with {display_name}")

            display = display_class()
            display.init(80, 24)
            result = benchmark_effect_with_display(
                effect_class, display, buffer, iterations
            )
            if result:
                results.append(result)
                if verbose:
                    print(f"  {result.fps:.1f} FPS, {result.avg_time_ms:.2f}ms avg")

    if pygame_instance:
        try:
            pygame_instance.cleanup(quit_pygame=True)
        except Exception:
            pass

    summary = generate_summary(results)

    return BenchmarkReport(
        timestamp=datetime.now().isoformat(),
        python_version=sys.version,
        results=results,
        summary=summary,
    )


def generate_summary(results: list[BenchmarkResult]) -> dict[str, Any]:
    """Generate summary statistics from results."""
    by_display: dict[str, list[BenchmarkResult]] = {}
    by_effect: dict[str, list[BenchmarkResult]] = {}

    for r in results:
        if r.display not in by_display:
            by_display[r.display] = []
        by_display[r.display].append(r)

        if r.effect:
            if r.effect not in by_effect:
                by_effect[r.effect] = []
            by_effect[r.effect].append(r)

    summary = {
        "by_display": {},
        "by_effect": {},
        "overall": {
            "total_tests": len(results),
            "displays_tested": len(by_display),
            "effects_tested": len(by_effect),
        },
    }

    for display, res in by_display.items():
        fps_values = [r.fps for r in res]
        summary["by_display"][display] = {
            "avg_fps": float(np.mean(fps_values)),
            "min_fps": float(np.min(fps_values)),
            "max_fps": float(np.max(fps_values)),
            "tests": len(res),
        }

    for effect, res in by_effect.items():
        fps_values = [r.fps for r in res]
        summary["by_effect"][effect] = {
            "avg_fps": float(np.mean(fps_values)),
            "min_fps": float(np.min(fps_values)),
            "max_fps": float(np.max(fps_values)),
            "tests": len(res),
        }

    return summary


DEFAULT_CACHE_PATH = Path.home() / ".mainline_benchmark_cache.json"


def load_baseline(cache_path: Path | None = None) -> dict[str, Any] | None:
    """Load baseline benchmark results from cache."""
    path = cache_path or DEFAULT_CACHE_PATH
    if not path.exists():
        return None
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return None


def save_baseline(
    results: list[BenchmarkResult],
    cache_path: Path | None = None,
) -> None:
    """Save benchmark results as baseline to cache."""
    path = cache_path or DEFAULT_CACHE_PATH
    baseline = {
        "timestamp": datetime.now().isoformat(),
        "results": {
            r.name: {
                "fps": r.fps,
                "avg_time_ms": r.avg_time_ms,
                "chars_per_sec": r.chars_per_sec,
            }
            for r in results
        },
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w") as f:
        json.dump(baseline, f, indent=2)


def compare_with_baseline(
    results: list[BenchmarkResult],
    baseline: dict[str, Any],
    threshold: float = 0.2,
    verbose: bool = True,
) -> tuple[bool, list[str]]:
    """Compare current results with baseline. Returns (pass, messages)."""
    baseline_results = baseline.get("results", {})
    failures = []
    warnings = []

    for r in results:
        if r.name not in baseline_results:
            warnings.append(f"New test: {r.name} (no baseline)")
            continue

        b = baseline_results[r.name]
        if b["fps"] == 0:
            continue

        degradation = (b["fps"] - r.fps) / b["fps"]
        if degradation > threshold:
            failures.append(
                f"{r.name}: FPS degraded {degradation * 100:.1f}% "
                f"(baseline: {b['fps']:.1f}, current: {r.fps:.1f})"
            )
        elif verbose:
            print(f"  {r.name}: {r.fps:.1f} FPS (baseline: {b['fps']:.1f})")

    passed = len(failures) == 0
    messages = []
    if failures:
        messages.extend(failures)
    if warnings:
        messages.extend(warnings)

    return passed, messages


def run_hook_mode(
    displays: list[tuple[str, Any]] | None = None,
    effects: list[tuple[str, Any]] | None = None,
    iterations: int = 20,
    threshold: float = 0.2,
    cache_path: Path | None = None,
    verbose: bool = False,
) -> int:
    """Run in hook mode: compare against baseline, exit 0 on pass, 1 on fail."""
    baseline = load_baseline(cache_path)

    if baseline is None:
        print("No baseline found. Run with --baseline to create one.")
        return 1

    report = run_benchmarks(displays, effects, iterations, verbose)

    passed, messages = compare_with_baseline(
        report.results, baseline, threshold, verbose
    )

    print("\n=== Benchmark Hook Results ===")
    if passed:
        print("PASSED - No significant performance degradation")
        return 0
    else:
        print("FAILED - Performance degradation detected:")
        for msg in messages:
            print(f"  - {msg}")
        return 1


def format_report_text(report: BenchmarkReport) -> str:
    """Format report as human-readable text."""
    lines = [
        "# Mainline Performance Benchmark Report",
        "",
        f"Generated: {report.timestamp}",
        f"Python: {report.python_version}",
        "",
        "## Summary",
        "",
        f"Total tests: {report.summary['overall']['total_tests']}",
        f"Displays tested: {report.summary['overall']['displays_tested']}",
        f"Effects tested: {report.summary['overall']['effects_tested']}",
        "",
        "## By Display",
        "",
    ]

    for display, stats in report.summary["by_display"].items():
        lines.append(f"### {display}")
        lines.append(f"- Avg FPS: {stats['avg_fps']:.1f}")
        lines.append(f"- Min FPS: {stats['min_fps']:.1f}")
        lines.append(f"- Max FPS: {stats['max_fps']:.1f}")
        lines.append(f"- Tests: {stats['tests']}")
        lines.append("")

    if report.summary["by_effect"]:
        lines.append("## By Effect")
        lines.append("")

        for effect, stats in report.summary["by_effect"].items():
            lines.append(f"### {effect}")
            lines.append(f"- Avg FPS: {stats['avg_fps']:.1f}")
            lines.append(f"- Min FPS: {stats['min_fps']:.1f}")
            lines.append(f"- Max FPS: {stats['max_fps']:.1f}")
            lines.append(f"- Tests: {stats['tests']}")
            lines.append("")

    lines.append("## Detailed Results")
    lines.append("")
    lines.append("| Display | Effect | FPS | Avg ms | StdDev ms | Min ms | Max ms |")
    lines.append("|---------|--------|-----|--------|-----------|--------|--------|")

    for r in report.results:
        effect_col = r.effect if r.effect else "-"
        lines.append(
            f"| {r.display} | {effect_col} | {r.fps:.1f} | {r.avg_time_ms:.2f} | "
            f"{r.std_dev_ms:.2f} | {r.min_ms:.2f} | {r.max_ms:.2f} |"
        )

    return "\n".join(lines)


def format_report_json(report: BenchmarkReport) -> str:
    """Format report as JSON."""
    data = {
        "timestamp": report.timestamp,
        "python_version": report.python_version,
        "summary": report.summary,
        "results": [
            {
                "name": r.name,
                "display": r.display,
                "effect": r.effect,
                "iterations": r.iterations,
                "total_time_ms": r.total_time_ms,
                "avg_time_ms": r.avg_time_ms,
                "std_dev_ms": r.std_dev_ms,
                "min_ms": r.min_ms,
                "max_ms": r.max_ms,
                "fps": r.fps,
                "chars_processed": r.chars_processed,
                "chars_per_sec": r.chars_per_sec,
            }
            for r in report.results
        ],
    }
    return json.dumps(data, indent=2)


def main():
    parser = argparse.ArgumentParser(description="Run mainline benchmarks")
    parser.add_argument(
        "--displays",
        help="Comma-separated list of displays to test (default: all)",
    )
    parser.add_argument(
        "--effects",
        help="Comma-separated list of effects to test (default: all)",
    )
    parser.add_argument(
        "--iterations",
        type=int,
        default=100,
        help="Number of iterations per test (default: 100)",
    )
    parser.add_argument(
        "--output",
        help="Output file path (default: stdout)",
    )
    parser.add_argument(
        "--format",
        choices=["text", "json"],
        default="text",
        help="Output format (default: text)",
    )
    parser.add_argument(
        "--verbose",
        "-v",
        action="store_true",
        help="Show progress during benchmarking",
    )
    parser.add_argument(
        "--hook",
        action="store_true",
        help="Run in hook mode: compare against baseline, exit 0 pass, 1 fail",
    )
    parser.add_argument(
        "--baseline",
        action="store_true",
        help="Save current results as baseline for future hook comparisons",
    )
    parser.add_argument(
        "--threshold",
        type=float,
        default=0.2,
        help="Performance degradation threshold for hook mode (default: 0.2 = 20%%)",
    )
    parser.add_argument(
        "--cache",
        type=str,
        default=None,
        help="Path to baseline cache file (default: ~/.mainline_benchmark_cache.json)",
    )

    args = parser.parse_args()

    cache_path = Path(args.cache) if args.cache else DEFAULT_CACHE_PATH

    if args.hook:
        displays = None
        if args.displays:
            display_map = dict(get_available_displays())
            displays = [
                (name, display_map[name])
                for name in args.displays.split(",")
                if name in display_map
            ]

        effects = None
        if args.effects:
            effect_map = dict(get_available_effects())
            effects = [
                (name, effect_map[name])
                for name in args.effects.split(",")
                if name in effect_map
            ]

        return run_hook_mode(
            displays,
            effects,
            iterations=args.iterations,
            threshold=args.threshold,
            cache_path=cache_path,
            verbose=args.verbose,
        )

    displays = None
    if args.displays:
        display_map = dict(get_available_displays())
        displays = [
            (name, display_map[name])
            for name in args.displays.split(",")
            if name in display_map
        ]

    effects = None
    if args.effects:
        effect_map = dict(get_available_effects())
        effects = [
            (name, effect_map[name])
            for name in args.effects.split(",")
            if name in effect_map
        ]

    report = run_benchmarks(displays, effects, args.iterations, args.verbose)

    if args.baseline:
        save_baseline(report.results, cache_path)
        print(f"Baseline saved to {cache_path}")
        return 0

    if args.format == "json":
        output = format_report_json(report)
    else:
        output = format_report_text(report)

    if args.output:
        with open(args.output, "w") as f:
            f.write(output)
    else:
        print(output)

    return 0


if __name__ == "__main__":
    sys.exit(main())
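The pass/fail rule in `compare_with_baseline` reduces to a relative-FPS drop test; a minimal standalone sketch of that check (hypothetical numbers, default 20% threshold):

```python
# Relative-degradation check mirroring compare_with_baseline above:
# fail when FPS drops by more than `threshold` relative to baseline.
def fps_regressed(baseline_fps: float, current_fps: float, threshold: float = 0.2) -> bool:
    if baseline_fps == 0:
        # No usable baseline for this test; skip it, as the module above does.
        return False
    degradation = (baseline_fps - current_fps) / baseline_fps
    return degradation > threshold


print(fps_regressed(100.0, 85.0))  # 15% drop: within threshold -> False
print(fps_regressed(100.0, 70.0))  # 30% drop: regression -> True
```

A drop exactly at the threshold passes, since the comparison is strict (`>`), matching the module's behavior.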
109  engine/camera.py (new file)
@@ -0,0 +1,109 @@
"""
Camera system for viewport scrolling.

Provides abstraction for camera motion in different modes:
- Vertical: traditional upward scroll
- Horizontal: left/right movement
- Omni: combination of both
- Floating: sinusoidal/bobbing motion
"""

import math
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum, auto


class CameraMode(Enum):
    VERTICAL = auto()
    HORIZONTAL = auto()
    OMNI = auto()
    FLOATING = auto()


@dataclass
class Camera:
    """Camera for viewport scrolling.

    Attributes:
        x: Current horizontal offset (positive = scroll left)
        y: Current vertical offset (positive = scroll up)
        mode: Current camera mode
        speed: Base scroll speed
        custom_update: Optional custom update function
    """

    x: int = 0
    y: int = 0
    mode: CameraMode = CameraMode.VERTICAL
    speed: float = 1.0
    custom_update: Callable[["Camera", float], None] | None = None
    _time: float = field(default=0.0, repr=False)

    def update(self, dt: float) -> None:
        """Update camera position based on mode.

        Args:
            dt: Delta time in seconds
        """
        self._time += dt

        if self.custom_update:
            self.custom_update(self, dt)
            return

        if self.mode == CameraMode.VERTICAL:
            self._update_vertical(dt)
        elif self.mode == CameraMode.HORIZONTAL:
            self._update_horizontal(dt)
        elif self.mode == CameraMode.OMNI:
            self._update_omni(dt)
        elif self.mode == CameraMode.FLOATING:
            self._update_floating(dt)

    def _update_vertical(self, dt: float) -> None:
        self.y += int(self.speed * dt * 60)

    def _update_horizontal(self, dt: float) -> None:
        self.x += int(self.speed * dt * 60)

    def _update_omni(self, dt: float) -> None:
        speed = self.speed * dt * 60
        self.y += int(speed)
        self.x += int(speed * 0.5)

    def _update_floating(self, dt: float) -> None:
        base = self.speed * 30
        self.y = int(math.sin(self._time * 2) * base)
        self.x = int(math.cos(self._time * 1.5) * base * 0.5)

    def reset(self) -> None:
        """Reset camera position."""
        self.x = 0
        self.y = 0
        self._time = 0.0

    @classmethod
    def vertical(cls, speed: float = 1.0) -> "Camera":
        """Create a vertical scrolling camera."""
        return cls(mode=CameraMode.VERTICAL, speed=speed)

    @classmethod
    def horizontal(cls, speed: float = 1.0) -> "Camera":
        """Create a horizontal scrolling camera."""
        return cls(mode=CameraMode.HORIZONTAL, speed=speed)

    @classmethod
    def omni(cls, speed: float = 1.0) -> "Camera":
        """Create an omnidirectional scrolling camera."""
        return cls(mode=CameraMode.OMNI, speed=speed)

    @classmethod
    def floating(cls, speed: float = 1.0) -> "Camera":
        """Create a floating/bobbing camera."""
        return cls(mode=CameraMode.FLOATING, speed=speed)

    @classmethod
    def custom(cls, update_fn: Callable[["Camera", float], None]) -> "Camera":
        """Create a camera with custom update function."""
        return cls(custom_update=update_fn)
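The frame-rate-normalized arithmetic in `Camera.update()` can be checked in isolation. This is a standalone sketch of the same formulas (hypothetical helper names, not an import of `engine.camera`): offsets advance by `speed` cells per frame at a 60 FPS reference rate, and the floating mode derives absolute offsets from elapsed time.

```python
import math

# Hypothetical standalone versions of the Camera.update() arithmetic above;
# the real class lives in engine/camera.py.
def vertical_step(speed: float, dt: float) -> int:
    # One camera step: speed cells per frame, normalized to a 60 FPS frame.
    return int(speed * dt * 60)

def floating_offsets(speed: float, t: float) -> tuple[int, int]:
    # Floating mode sets absolute offsets from elapsed time, not increments.
    base = speed * 30
    y = int(math.sin(t * 2) * base)
    x = int(math.cos(t * 1.5) * base * 0.5)
    return x, y

print(vertical_step(1.0, 0.05))      # 3 cells at the config's 0.05 s frame_dt
print(floating_offsets(1.0, 0.0))    # (15, 0): cos(0) = 1, sin(0) = 0
```

At speed 1.0 and the engine's default `frame_dt` of 0.05 s the camera moves three cells per frame, which is why the scroll rate is stable regardless of how fast frames are produced.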
241  engine/config.py
@@ -1,23 +1,225 @@
"""
Configuration constants, CLI flags, and glyph tables.

Supports both global constants (backward compatible) and injected config for testing.
"""

import sys
from dataclasses import dataclass, field
from pathlib import Path

_REPO_ROOT = Path(__file__).resolve().parent.parent
_FONT_EXTENSIONS = {".otf", ".ttf", ".ttc"}


def _arg_value(flag, argv: list[str] | None = None):
    """Get value following a CLI flag, if present."""
    argv = argv or sys.argv
    if flag not in argv:
        return None
    i = argv.index(flag)
    return argv[i + 1] if i + 1 < len(argv) else None


def _arg_int(flag, default, argv: list[str] | None = None):
    """Get int CLI argument with safe fallback."""
    raw = _arg_value(flag, argv)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        return default


def _resolve_font_path(raw_path):
    """Resolve font path; relative paths are anchored to repo root."""
    p = Path(raw_path).expanduser()
    if p.is_absolute():
        return str(p)
    return str((_REPO_ROOT / p).resolve())


def _list_font_files(font_dir):
    """List supported font files within a font directory."""
    font_root = Path(font_dir)
    if not font_root.exists() or not font_root.is_dir():
        return []
    return [
        str(path.resolve())
        for path in sorted(font_root.iterdir())
        if path.is_file() and path.suffix.lower() in _FONT_EXTENSIONS
    ]


def list_repo_font_files():
    """Public helper for discovering repository font files."""
    return _list_font_files(FONT_DIR)
def _get_platform_font_paths() -> dict[str, str]:
    """Get platform-appropriate font paths for non-Latin scripts."""
    import platform

    system = platform.system()

    if system == "Darwin":
        return {
            "zh-cn": "/System/Library/Fonts/STHeiti Medium.ttc",
            "ja": "/System/Library/Fonts/ヒラギノ角ゴシック W9.ttc",
            "ko": "/System/Library/Fonts/AppleSDGothicNeo.ttc",
            "ru": "/System/Library/Fonts/Supplemental/Arial.ttf",
            "uk": "/System/Library/Fonts/Supplemental/Arial.ttf",
            "el": "/System/Library/Fonts/Supplemental/Arial.ttf",
            "he": "/System/Library/Fonts/Supplemental/Arial.ttf",
            "ar": "/System/Library/Fonts/GeezaPro.ttc",
            "fa": "/System/Library/Fonts/GeezaPro.ttc",
            "hi": "/System/Library/Fonts/Kohinoor.ttc",
            "th": "/System/Library/Fonts/ThonburiUI.ttc",
        }
    elif system == "Linux":
        return {
            "zh-cn": "/usr/share/fonts/truetype/noto/NotoSansCJK-Regular.ttc",
            "ja": "/usr/share/fonts/truetype/noto/NotoSansCJK-Regular.ttc",
            "ko": "/usr/share/fonts/truetype/noto/NotoSansCJK-Regular.ttc",
            "ru": "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
            "uk": "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
            "el": "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
            "he": "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
            "ar": "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
            "fa": "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf",
            "hi": "/usr/share/fonts/truetype/noto/NotoSansDevanagari-Regular.ttf",
            "th": "/usr/share/fonts/truetype/noto/NotoSansThai-Regular.ttf",
        }
    else:
        return {}
@dataclass(frozen=True)
class Config:
    """Immutable configuration container for injected config."""

    headline_limit: int = 1000
    feed_timeout: int = 10
    mic_threshold_db: int = 50
    mode: str = "news"
    firehose: bool = False

    ntfy_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline/json"
    ntfy_cc_cmd_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json"
    ntfy_cc_resp_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json"
    ntfy_reconnect_delay: int = 5
    message_display_secs: int = 30

    font_dir: str = "fonts"
    font_path: str = ""
    font_index: int = 0
    font_picker: bool = True
    font_sz: int = 60
    render_h: int = 8

    ssaa: int = 4

    scroll_dur: float = 5.625
    frame_dt: float = 0.05
    firehose_h: int = 12
    grad_speed: float = 0.08

    glitch_glyphs: str = "░▒▓█▌▐╌╍╎╏┃┆┇┊┋"
    kata_glyphs: str = "ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ"

    script_fonts: dict[str, str] = field(default_factory=_get_platform_font_paths)

    display: str = "pygame"
    websocket: bool = False
    websocket_port: int = 8765

    @classmethod
    def from_args(cls, argv: list[str] | None = None) -> "Config":
        """Create Config from CLI arguments (or custom argv for testing)."""
        argv = argv or sys.argv

        font_dir = _resolve_font_path(_arg_value("--font-dir", argv) or "fonts")
        font_file_arg = _arg_value("--font-file", argv)
        font_files = _list_font_files(font_dir)
        font_path = (
            _resolve_font_path(font_file_arg)
            if font_file_arg
            else (font_files[0] if font_files else "")
        )

        return cls(
            headline_limit=1000,
            feed_timeout=10,
            mic_threshold_db=50,
            mode="poetry" if "--poetry" in argv or "-p" in argv else "news",
            firehose="--firehose" in argv,
            ntfy_topic="https://ntfy.sh/klubhaus_terminal_mainline/json",
            ntfy_cc_cmd_topic="https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json",
            ntfy_cc_resp_topic="https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json",
            ntfy_reconnect_delay=5,
            message_display_secs=30,
            font_dir=font_dir,
            font_path=font_path,
            font_index=max(0, _arg_int("--font-index", 0, argv)),
            font_picker="--no-font-picker" not in argv,
            font_sz=60,
            render_h=8,
            ssaa=4,
            scroll_dur=5.625,
            frame_dt=0.05,
            firehose_h=12,
            grad_speed=0.08,
            glitch_glyphs="░▒▓█▌▐╌╍╎╏┃┆┇┊┋",
            kata_glyphs="ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ",
            script_fonts=_get_platform_font_paths(),
            display=_arg_value("--display", argv) or "terminal",
            websocket="--websocket" in argv,
            websocket_port=_arg_int("--websocket-port", 8765, argv),
        )
_config: Config | None = None


def get_config() -> Config:
    """Get the global config instance (lazy-loaded)."""
    global _config
    if _config is None:
        _config = Config.from_args()
    return _config


def set_config(config: Config) -> None:
    """Set the global config instance (for testing)."""
    global _config
    _config = config
# ─── RUNTIME ──────────────────────────────────────────────
HEADLINE_LIMIT = 1000
FEED_TIMEOUT = 10
MIC_THRESHOLD_DB = 50  # dB above which glitches intensify
MODE = "poetry" if "--poetry" in sys.argv or "-p" in sys.argv else "news"
FIREHOSE = "--firehose" in sys.argv

# ─── NTFY MESSAGE QUEUE ──────────────────────────────────
NTFY_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline/json"
NTFY_CC_CMD_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json"
NTFY_CC_RESP_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json"
NTFY_RECONNECT_DELAY = 5  # seconds before reconnecting after a dropped stream
MESSAGE_DISPLAY_SECS = 30  # how long a message holds the screen

# ─── FONT RENDERING ──────────────────────────────────────
FONT_DIR = _resolve_font_path(_arg_value("--font-dir") or "fonts")
_FONT_FILE_ARG = _arg_value("--font-file")
_FONT_FILES = _list_font_files(FONT_DIR)
FONT_PATH = (
    _resolve_font_path(_FONT_FILE_ARG)
    if _FONT_FILE_ARG
    else (_FONT_FILES[0] if _FONT_FILES else "")
)
FONT_INDEX = max(0, _arg_int("--font-index", 0))
FONT_PICKER = "--no-font-picker" not in sys.argv
FONT_SZ = 60
RENDER_H = 8  # terminal rows per rendered text line
@@ -33,3 +235,32 @@ GRAD_SPEED = 0.08 # gradient traversal speed (cycles/sec, ~12s full swee
# ─── GLYPHS ───────────────────────────────────────────────
GLITCH = "░▒▓█▌▐╌╍╎╏┃┆┇┊┋"
KATA = "ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ"

# ─── WEBSOCKET ─────────────────────────────────────────────
DISPLAY = _arg_value("--display", sys.argv) or "pygame"
WEBSOCKET = "--websocket" in sys.argv
WEBSOCKET_PORT = _arg_int("--websocket-port", 8765)

# ─── DEMO MODE ────────────────────────────────────────────
DEMO = "--demo" in sys.argv
DEMO_EFFECT_DURATION = 5.0  # seconds per effect
PIPELINE_DEMO = "--pipeline-demo" in sys.argv

# ─── PIPELINE MODE (new unified architecture) ─────────────
PIPELINE_MODE = "--pipeline" in sys.argv
PIPELINE_PRESET = _arg_value("--pipeline-preset", sys.argv) or "demo"

# ─── PRESET MODE ────────────────────────────────────────────
PRESET = _arg_value("--preset", sys.argv)

# ─── PIPELINE DIAGRAM ────────────────────────────────────
PIPELINE_DIAGRAM = "--pipeline-diagram" in sys.argv


def set_font_selection(font_path=None, font_index=None):
    """Set runtime primary font selection."""
    global FONT_PATH, FONT_INDEX
    if font_path is not None:
        FONT_PATH = _resolve_font_path(font_path)
    if font_index is not None:
        FONT_INDEX = max(0, int(font_index))
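The `_arg_value`/`_arg_int` helpers in `engine/config.py` are the foundation of the injectable-argv testing hook that `Config.from_args` exposes. A standalone sketch (hypothetical top-level names mirroring the private helpers above) shows the edge cases they guard against:

```python
# Standalone mirrors of config's private flag parsers, exercised with a
# custom argv - the same injection pattern Config.from_args uses for tests.
def arg_value(flag, argv):
    """Return the token following a flag, or None if absent/dangling."""
    if flag not in argv:
        return None
    i = argv.index(flag)
    return argv[i + 1] if i + 1 < len(argv) else None

def arg_int(flag, default, argv):
    """Integer variant with a safe fallback on missing or malformed input."""
    raw = arg_value(flag, argv)
    if raw is None:
        return default
    try:
        return int(raw)
    except ValueError:
        return default

argv = ["prog", "--display", "websocket", "--websocket-port", "9000", "--font-index"]
print(arg_value("--display", argv))             # websocket
print(arg_int("--websocket-port", 8765, argv))  # 9000
print(arg_value("--font-index", argv))          # None: flag is last, no value follows
print(arg_int("--missing", 8765, argv))         # 8765: fallback, flag absent
```

Because every failure path returns the default rather than raising, a typo on the command line degrades to stock settings instead of crashing the stream at startup.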
181  engine/controller.py  (new file)
@@ -0,0 +1,181 @@
"""
Stream controller - manages input sources and orchestrates the render stream.
"""

from engine.config import Config, get_config
from engine.display import (
    DisplayRegistry,
    KittyDisplay,
    MultiDisplay,
    NullDisplay,
    PygameDisplay,
    SixelDisplay,
    TerminalDisplay,
    WebSocketDisplay,
)
from engine.effects.controller import handle_effects_command
from engine.eventbus import EventBus
from engine.events import EventType, StreamEvent
from engine.mic import MicMonitor
from engine.ntfy import NtfyPoller
from engine.scroll import stream
def _get_display(config: Config):
    """Get the appropriate display based on config."""
    DisplayRegistry.initialize()
    display_mode = config.display.lower()

    displays = []

    if display_mode in ("terminal", "both"):
        displays.append(TerminalDisplay())

    if display_mode in ("websocket", "both"):
        ws = WebSocketDisplay(host="0.0.0.0", port=config.websocket_port)
        ws.start_server()
        ws.start_http_server()
        displays.append(ws)

    if display_mode == "sixel":
        displays.append(SixelDisplay())

    if display_mode == "kitty":
        displays.append(KittyDisplay())

    if display_mode == "pygame":
        displays.append(PygameDisplay())

    if not displays:
        return NullDisplay()

    if len(displays) == 1:
        return displays[0]

    return MultiDisplay(displays)
class StreamController:
    """Controls the stream lifecycle - initializes sources and runs the stream."""

    _topics_warmed = False

    def __init__(self, config: Config | None = None, event_bus: EventBus | None = None):
        self.config = config or get_config()
        self.event_bus = event_bus
        self.mic: MicMonitor | None = None
        self.ntfy: NtfyPoller | None = None
        self.ntfy_cc: NtfyPoller | None = None

    @classmethod
    def warmup_topics(cls) -> None:
        """Warm up ntfy topics lazily (creates them if they don't exist)."""
        if cls._topics_warmed:
            return

        import urllib.request

        topics = [
            "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd",
            "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp",
            "https://ntfy.sh/klubhaus_terminal_mainline",
        ]

        for topic in topics:
            try:
                req = urllib.request.Request(
                    topic,
                    data=b"init",
                    headers={
                        "User-Agent": "mainline/0.1",
                        "Content-Type": "text/plain",
                    },
                    method="POST",
                )
                urllib.request.urlopen(req, timeout=5)
            except Exception:
                pass

        cls._topics_warmed = True

    def initialize_sources(self) -> tuple[bool, bool]:
        """Initialize microphone and ntfy sources.

        Returns:
            (mic_ok, ntfy_ok) - success status for each source
        """
        self.mic = MicMonitor(threshold_db=self.config.mic_threshold_db)
        mic_ok = self.mic.start() if self.mic.available else False

        self.ntfy = NtfyPoller(
            self.config.ntfy_topic,
            reconnect_delay=self.config.ntfy_reconnect_delay,
            display_secs=self.config.message_display_secs,
        )
        ntfy_ok = self.ntfy.start()

        self.ntfy_cc = NtfyPoller(
            self.config.ntfy_cc_cmd_topic,
            reconnect_delay=self.config.ntfy_reconnect_delay,
            display_secs=5,
        )
        self.ntfy_cc.subscribe(self._handle_cc_message)
        ntfy_cc_ok = self.ntfy_cc.start()

        return bool(mic_ok), ntfy_ok and ntfy_cc_ok

    def _handle_cc_message(self, event) -> None:
        """Handle incoming C&C message - like a serial port control interface."""
        import urllib.request

        cmd = event.body.strip() if hasattr(event, "body") else str(event).strip()
        if not cmd.startswith("/"):
            return

        response = handle_effects_command(cmd)

        topic_url = self.config.ntfy_cc_resp_topic.replace("/json", "")
        data = response.encode("utf-8")
        req = urllib.request.Request(
            topic_url,
            data=data,
            headers={"User-Agent": "mainline/0.1", "Content-Type": "text/plain"},
            method="POST",
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except Exception:
            pass

    def run(self, items: list) -> None:
        """Run the stream with initialized sources."""
        if self.mic is None or self.ntfy is None:
            self.initialize_sources()

        if self.event_bus:
            self.event_bus.publish(
                EventType.STREAM_START,
                StreamEvent(
                    event_type=EventType.STREAM_START,
                    headline_count=len(items),
                ),
            )

        display = _get_display(self.config)
        stream(items, self.ntfy, self.mic, display)
        if display:
            display.cleanup()

        if self.event_bus:
            self.event_bus.publish(
                EventType.STREAM_END,
                StreamEvent(
                    event_type=EventType.STREAM_END,
                    headline_count=len(items),
                ),
            )

    def cleanup(self) -> None:
        """Clean up resources."""
        if self.mic:
            self.mic.stop()
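The selection rule in `_get_display` - zero matches fall back to a null backend, one match is returned bare, two or more are wrapped in a fan-out - can be sketched with hypothetical stub classes (the names below are illustrative, not the real engine classes):

```python
# Hypothetical stubs standing in for the real display backends.
class StubDisplay:
    def __init__(self, name):
        self.name = name
        self.frames = []

    def show(self, buffer):
        self.frames.append(buffer)

class Multi:
    """Fan-out wrapper: forwards every frame to all children."""
    def __init__(self, displays):
        self.displays = displays

    def show(self, buffer):
        for d in self.displays:
            d.show(buffer)

def pick(mode, available):
    # Mirrors _get_display's shape: 0 matches -> None (NullDisplay in the
    # real code), 1 match -> the backend itself, 2+ -> a fan-out wrapper.
    chosen = [d for d in available if d.name == mode or mode == "both"]
    if not chosen:
        return None
    if len(chosen) == 1:
        return chosen[0]
    return Multi(chosen)

backends = [StubDisplay("terminal"), StubDisplay("websocket")]
print(type(pick("terminal", backends)).__name__)  # StubDisplay
print(type(pick("both", backends)).__name__)      # Multi
print(pick("sixel", backends))                    # None
```

Returning the single backend directly (rather than always wrapping) keeps the common one-display path free of an extra indirection per frame.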
124  engine/display/__init__.py  (new file)
@@ -0,0 +1,124 @@
"""
Display backend system with registry pattern.

Allows swapping output backends via the Display protocol.
Supports auto-discovery of display backends.
"""

from typing import Protocol

from engine.display.backends.kitty import KittyDisplay
from engine.display.backends.multi import MultiDisplay
from engine.display.backends.null import NullDisplay
from engine.display.backends.pygame import PygameDisplay
from engine.display.backends.sixel import SixelDisplay
from engine.display.backends.terminal import TerminalDisplay
from engine.display.backends.websocket import WebSocketDisplay


class Display(Protocol):
    """Protocol for display backends.

    All display backends must implement:
    - width, height: Terminal dimensions
    - init(width, height, reuse=False): Initialize the display
    - show(buffer): Render buffer to display
    - clear(): Clear the display
    - cleanup(): Shutdown the display

    The reuse flag allows attaching to an existing display instance
    rather than creating a new window/connection.
    """

    width: int
    height: int

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, attach to existing display instead of creating new
        """
        ...

    def show(self, buffer: list[str]) -> None:
        """Show buffer on display."""
        ...

    def clear(self) -> None:
        """Clear display."""
        ...

    def cleanup(self) -> None:
        """Shutdown display."""
        ...
class DisplayRegistry:
    """Registry for display backends with auto-discovery."""

    _backends: dict[str, type[Display]] = {}
    _initialized = False

    @classmethod
    def register(cls, name: str, backend_class: type[Display]) -> None:
        """Register a display backend."""
        cls._backends[name.lower()] = backend_class

    @classmethod
    def get(cls, name: str) -> type[Display] | None:
        """Get a display backend class by name."""
        return cls._backends.get(name.lower())

    @classmethod
    def list_backends(cls) -> list[str]:
        """List all available display backend names."""
        return list(cls._backends.keys())

    @classmethod
    def create(cls, name: str, **kwargs) -> Display | None:
        """Create a display instance by name."""
        cls.initialize()
        backend_class = cls.get(name)
        if backend_class:
            return backend_class(**kwargs)
        return None

    @classmethod
    def initialize(cls) -> None:
        """Initialize and register all built-in backends."""
        if cls._initialized:
            return

        cls.register("terminal", TerminalDisplay)
        cls.register("null", NullDisplay)
        cls.register("websocket", WebSocketDisplay)
        cls.register("sixel", SixelDisplay)
        cls.register("kitty", KittyDisplay)
        cls.register("pygame", PygameDisplay)

        cls._initialized = True


def get_monitor():
    """Get the performance monitor."""
    try:
        from engine.effects.performance import get_monitor as _get_monitor

        return _get_monitor()
    except Exception:
        return None


__all__ = [
    "Display",
    "DisplayRegistry",
    "get_monitor",
    "TerminalDisplay",
    "NullDisplay",
    "WebSocketDisplay",
    "SixelDisplay",
    "KittyDisplay",
    "PygameDisplay",
    "MultiDisplay",
]
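The registry pattern above decouples backend selection from backend construction: classes register under a lowercase key, and `create` looks the key up case-insensitively and instantiates on demand. A self-contained sketch of the same shape, with a hypothetical in-memory backend instead of the real `engine.display` classes:

```python
# Minimal sketch of the DisplayRegistry pattern with a made-up backend.
class Registry:
    _backends: dict[str, type] = {}

    @classmethod
    def register(cls, name, backend_class):
        # Keys are normalized to lowercase on the way in...
        cls._backends[name.lower()] = backend_class

    @classmethod
    def create(cls, name, **kwargs):
        # ...and on the way out, so lookup is case-insensitive.
        backend = cls._backends.get(name.lower())
        return backend(**kwargs) if backend else None

class MemoryDisplay:
    """Hypothetical backend that just retains the last buffer shown."""
    def __init__(self, width=80, height=24):
        self.width, self.height = width, height
        self.last = None

    def show(self, buffer):
        self.last = buffer

Registry.register("memory", MemoryDisplay)
d = Registry.create("MEMORY", width=40)  # case-insensitive lookup
d.show(["hello"])
print(d.width, d.last)  # 40 ['hello']
```

Keyword arguments pass straight through `create` to the backend constructor, which is how the real registry can build backends with per-backend options (ports, cell sizes) behind one uniform call.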
152  engine/display/backends/kitty.py  (new file)
@@ -0,0 +1,152 @@
"""
Kitty graphics display backend - renders using kitty's native graphics protocol.
"""

import time

from engine.display.renderer import get_default_font_path, parse_ansi


def _encode_kitty_graphic(image_data: bytes, width: int, height: int) -> bytes:
    """Encode image data using kitty's graphics protocol."""
    import base64

    encoded = base64.b64encode(image_data).decode("ascii")

    chunks = []
    for i in range(0, len(encoded), 4096):
        chunk = encoded[i : i + 4096]
        # m=1 signals that more chunks follow; m=0 terminates the transmission.
        more = 1 if i + 4096 < len(encoded) else 0
        if i == 0:
            chunks.append(
                f"\x1b_Gf=100,t=d,s={width},v={height},c=1,r=1,m={more};{chunk}\x1b\\"
            )
        else:
            chunks.append(f"\x1b_Gm={more};{chunk}\x1b\\")

    return "".join(chunks).encode("utf-8")
class KittyDisplay:
    """Kitty graphics display backend using kitty's native protocol."""

    width: int = 80
    height: int = 24

    def __init__(self, cell_width: int = 9, cell_height: int = 16):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self._initialized = False
        self._font_path = None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for KittyDisplay (protocol doesn't support reuse)
        """
        self.width = width
        self.height = height
        self._initialized = True

    def _get_font_path(self) -> str | None:
        """Get font path from env or detect common locations."""
        import os

        if self._font_path:
            return self._font_path

        env_font = os.environ.get("MAINLINE_KITTY_FONT")
        if env_font and os.path.exists(env_font):
            self._font_path = env_font
            return env_font

        font_path = get_default_font_path()
        if font_path:
            self._font_path = font_path

        return self._font_path

    def show(self, buffer: list[str]) -> None:
        import sys

        t0 = time.perf_counter()

        img_width = self.width * self.cell_width
        img_height = self.height * self.cell_height

        try:
            from PIL import Image, ImageDraw, ImageFont
        except ImportError:
            return

        img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
        draw = ImageDraw.Draw(img)

        font_path = self._get_font_path()
        font = None
        if font_path:
            try:
                font = ImageFont.truetype(font_path, self.cell_height - 2)
            except Exception:
                font = None

        if font is None:
            try:
                font = ImageFont.load_default()
            except Exception:
                font = None

        for row_idx, line in enumerate(buffer[: self.height]):
            tokens = parse_ansi(line)
            x_pos = 0
            y_pos = row_idx * self.cell_height

            for text, fg, bg, bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                    draw.rectangle(bbox, fill=(*bg, 255))

                if bold and font:
                    draw.text((x_pos - 1, y_pos - 1), text, fill=(*fg, 255), font=font)

                draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

                if font:
                    x_pos += draw.textlength(text, font=font)

        from io import BytesIO

        output = BytesIO()
        img.save(output, format="PNG")
        png_data = output.getvalue()

        graphic = _encode_kitty_graphic(png_data, img_width, img_height)

        sys.stdout.buffer.write(graphic)
        sys.stdout.flush()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("kitty_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        import sys

        sys.stdout.buffer.write(b"\x1b_Ga=d\x1b\\")
        sys.stdout.flush()

    def cleanup(self) -> None:
        self.clear()
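The chunked transfer in `_encode_kitty_graphic` splits the base64 payload into escape sequences of at most 4096 characters, with kitty's `m=` key marking whether more chunks follow. A standalone sketch of just the chunking loop (hypothetical helper name, stub payload, continuation escapes only):

```python
import base64

# Hypothetical chunker mirroring _encode_kitty_graphic's loop: base64 payload
# split into <=4096-char escapes, m=1 on all but the last chunk.
def chunk_payload(data: bytes, size: int = 4096) -> list[str]:
    encoded = base64.b64encode(data).decode("ascii")
    out = []
    for i in range(0, len(encoded), size):
        more = 1 if i + size < len(encoded) else 0
        out.append(f"\x1b_Gm={more};{encoded[i:i + size]}\x1b\\")
    return out

chunks = chunk_payload(b"\x00" * 9000)   # 9000 bytes -> 12000 base64 chars
print(len(chunks))                       # 3 chunks of <=4096 chars
print([c[5] for c in chunks])            # continuation flags: ['1', '1', '0']
```

The terminal buffers chunks until it sees `m=0`, so a missing terminator (or a garbage value in `m`) leaves the image transfer dangling; that is why the flag has to be computed from position rather than carried from an unrelated variable.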
43  engine/display/backends/multi.py  (new file)
@@ -0,0 +1,43 @@
"""
Multi display backend - forwards to multiple displays.
"""


class MultiDisplay:
    """Display that forwards to multiple displays.

    Supports reuse - passes reuse flag to all child displays.
    """

    width: int = 80
    height: int = 24

    def __init__(self, displays: list):
        self.displays = displays
        self.width = 80
        self.height = 24

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize all child displays with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, use reuse mode for child displays
        """
        self.width = width
        self.height = height
        for d in self.displays:
            d.init(width, height, reuse=reuse)

    def show(self, buffer: list[str]) -> None:
        for d in self.displays:
            d.show(buffer)

    def clear(self) -> None:
        for d in self.displays:
            d.clear()

    def cleanup(self) -> None:
        for d in self.displays:
            d.cleanup()
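The fan-out contract is pure duck typing, so it can be exercised without any real backend. A minimal sketch, restating the forwarding logic from `engine/display/backends/multi.py` so it is self-contained, with a hypothetical recording stub in place of a real display:

```python
class MultiDisplay:
    """Copy of the forwarding logic from engine/display/backends/multi.py."""

    def __init__(self, displays: list):
        self.displays = displays

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        for d in self.displays:
            d.init(width, height, reuse=reuse)

    def show(self, buffer: list[str]) -> None:
        for d in self.displays:
            d.show(buffer)


class RecordingDisplay:
    """Hypothetical stub backend: records calls instead of drawing."""

    def __init__(self):
        self.calls = []

    def init(self, width, height, reuse=False):
        self.calls.append(("init", width, height, reuse))

    def show(self, buffer):
        self.calls.append(("show", list(buffer)))


a, b = RecordingDisplay(), RecordingDisplay()
multi = MultiDisplay([a, b])
multi.init(120, 40, reuse=True)
multi.show(["hello"])
# Both children receive identical init and show calls, in order.
```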
43
engine/display/backends/null.py
Normal file
@@ -0,0 +1,43 @@
"""
Null/headless display backend.
"""

import time


class NullDisplay:
    """Headless/null display - discards all output.

    This display does nothing - useful for headless benchmarking
    or when no display output is needed.
    """

    width: int = 80
    height: int = 24

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for NullDisplay (no resources to reuse)
        """
        self.width = width
        self.height = height

    def show(self, buffer: list[str]) -> None:
        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            t0 = time.perf_counter()
            chars_in = sum(len(line) for line in buffer)
            elapsed_ms = (time.perf_counter() - t0) * 1000
            monitor.record_effect("null_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        pass

    def cleanup(self) -> None:
        pass
212
engine/display/backends/pygame.py
Normal file
@@ -0,0 +1,212 @@
"""
Pygame display backend - renders to a native application window.
"""

import time

from engine.display.renderer import parse_ansi


class PygameDisplay:
    """Pygame display backend - renders to native window.

    Supports reuse mode - when reuse=True, skips SDL initialization
    and reuses the existing pygame window from a previous instance.
    """

    width: int = 80
    window_width: int = 800
    window_height: int = 600
    _pygame_initialized: bool = False

    def __init__(
        self,
        cell_width: int = 10,
        cell_height: int = 18,
        window_width: int = 800,
        window_height: int = 600,
    ):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self.window_width = window_width
        self.window_height = window_height
        self._initialized = False
        self._pygame = None
        self._screen = None
        self._font = None
        self._resized = False

    def _get_font_path(self) -> str | None:
        """Get font path for rendering."""
        import os
        import sys
        from pathlib import Path

        env_font = os.environ.get("MAINLINE_PYGAME_FONT")
        if env_font and os.path.exists(env_font):
            return env_font

        def search_dir(base_path: str) -> str | None:
            if not os.path.exists(base_path):
                return None
            if os.path.isfile(base_path):
                return base_path
            for font_file in Path(base_path).rglob("*"):
                if font_file.suffix.lower() in (".ttf", ".otf", ".ttc"):
                    name = font_file.stem.lower()
                    if "geist" in name and ("nerd" in name or "mono" in name):
                        return str(font_file)
            return None

        search_dirs = []
        if sys.platform == "darwin":
            search_dirs.append(os.path.expanduser("~/Library/Fonts/"))
        elif sys.platform == "win32":
            search_dirs.append(
                os.path.expanduser("~\\AppData\\Local\\Microsoft\\Windows\\Fonts\\")
            )
        else:
            search_dirs.extend(
                [
                    os.path.expanduser("~/.local/share/fonts/"),
                    os.path.expanduser("~/.fonts/"),
                    "/usr/share/fonts/",
                ]
            )

        for search_dir_path in search_dirs:
            found = search_dir(search_dir_path)
            if found:
                return found

        return None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, attach to existing pygame window instead of creating new
        """
        self.width = width
        self.height = height

        import os
        import sys

        # Forcing the X11 driver only makes sense on Linux; elsewhere let SDL choose.
        if sys.platform.startswith("linux"):
            os.environ["SDL_VIDEODRIVER"] = "x11"

        try:
            import pygame
        except ImportError:
            return

        if reuse and PygameDisplay._pygame_initialized:
            self._pygame = pygame
            self._initialized = True
            return

        pygame.init()
        pygame.display.set_caption("Mainline")

        self._screen = pygame.display.set_mode(
            (self.window_width, self.window_height),
            pygame.RESIZABLE,
        )
        self._pygame = pygame
        PygameDisplay._pygame_initialized = True

        font_path = self._get_font_path()
        if font_path:
            try:
                self._font = pygame.font.Font(font_path, self.cell_height - 2)
            except Exception:
                self._font = pygame.font.SysFont("monospace", self.cell_height - 2)
        else:
            self._font = pygame.font.SysFont("monospace", self.cell_height - 2)

        self._initialized = True

    def show(self, buffer: list[str]) -> None:
        import sys

        if not self._initialized or not self._pygame:
            return

        t0 = time.perf_counter()

        for event in self._pygame.event.get():
            if event.type == self._pygame.QUIT:
                sys.exit(0)
            elif event.type == self._pygame.VIDEORESIZE:
                self.window_width = event.w
                self.window_height = event.h
                self.width = max(1, self.window_width // self.cell_width)
                self.height = max(1, self.window_height // self.cell_height)
                self._resized = True

        self._screen.fill((0, 0, 0))

        for row_idx, line in enumerate(buffer[: self.height]):
            tokens = parse_ansi(line)
            x_pos = 0

            for text, fg, bg, _bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bg_surface = self._font.render(text, True, fg, bg)
                    self._screen.blit(bg_surface, (x_pos, row_idx * self.cell_height))
                else:
                    text_surface = self._font.render(text, True, fg)
                    self._screen.blit(text_surface, (x_pos, row_idx * self.cell_height))

                x_pos += self._font.size(text)[0]

        self._pygame.display.flip()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("pygame_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        if self._screen and self._pygame:
            self._screen.fill((0, 0, 0))
            self._pygame.display.flip()

    def get_dimensions(self) -> tuple[int, int]:
        """Get current terminal dimensions based on window size.

        Returns:
            (width, height) in character cells
        """
        if self._resized:
            self._resized = False
        return self.width, self.height

    def cleanup(self, quit_pygame: bool = True) -> None:
        """Cleanup display resources.

        Args:
            quit_pygame: If True, quit pygame entirely. Set to False when
                reusing the display to avoid closing shared window.
        """
        if quit_pygame and self._pygame:
            self._pygame.quit()
            PygameDisplay._pygame_initialized = False

    @classmethod
    def reset_state(cls) -> None:
        """Reset pygame state - useful for testing."""
        cls._pygame_initialized = False
200
engine/display/backends/sixel.py
Normal file
@@ -0,0 +1,200 @@
"""
Sixel graphics display backend - renders to sixel graphics in terminal.
"""

import time

from engine.display.renderer import get_default_font_path, parse_ansi


def _encode_sixel(image) -> str:
    """Encode a PIL Image to sixel format (pure Python)."""
    img = image.convert("RGBA")
    width, height = img.size
    pixels = img.load()

    palette = []
    pixel_palette_idx = {}

    def get_color_idx(r, g, b, a):
        if a < 128:
            return -1
        key = (r // 32, g // 32, b // 32)
        if key not in pixel_palette_idx:
            idx = len(palette)
            if idx < 256:
                palette.append((r, g, b))
                pixel_palette_idx[key] = idx
        return pixel_palette_idx.get(key, 0)

    body = []

    # Walk the image in bands of six rows: each sixel data character encodes a
    # six-pixel vertical strip (bit 0 = top row of the band), offset by 63.
    # "$" rewinds to the start of the band so another color can overprint it;
    # "-" advances to the next band.
    for band_start in range(0, height, 6):
        # bitmasks[color_idx][x] collects the per-column strip for each color.
        bitmasks: dict[int, list[int]] = {}
        for dy in range(6):
            y = band_start + dy
            if y >= height:
                break
            for x in range(width):
                r, g, b, a = pixels[x, y]
                idx = get_color_idx(r, g, b, a)
                if idx >= 0:
                    bitmasks.setdefault(idx, [0] * width)[x] |= 1 << dy
        for color_idx, bits in sorted(bitmasks.items()):
            body.append(f"#{color_idx}" + "".join(chr(63 + b) for b in bits) + "$")
        body.append("-")

    if not palette:
        return ""

    # Color registers are defined as RGB percentages (0-100), not 0-255.
    defs = "".join(
        f"#{i};2;{r * 100 // 255};{g * 100 // 255};{b * 100 // 255}"
        for i, (r, g, b) in enumerate(palette)
    )

    return "\x1bPq" + defs + "".join(body) + "\x1b\\"


class SixelDisplay:
    """Sixel graphics display backend - renders to sixel graphics in terminal."""

    width: int = 80
    height: int = 24

    def __init__(self, cell_width: int = 9, cell_height: int = 16):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self._initialized = False
        self._font_path = None

    def _get_font_path(self) -> str | None:
        """Get font path from env or detect common locations."""
        import os

        if self._font_path:
            return self._font_path

        env_font = os.environ.get("MAINLINE_SIXEL_FONT")
        if env_font and os.path.exists(env_font):
            self._font_path = env_font
            return env_font

        font_path = get_default_font_path()
        if font_path:
            self._font_path = font_path

        return self._font_path

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for SixelDisplay
        """
        self.width = width
        self.height = height
        self._initialized = True

    def show(self, buffer: list[str]) -> None:
        import sys

        t0 = time.perf_counter()

        img_width = self.width * self.cell_width
        img_height = self.height * self.cell_height

        try:
            from PIL import Image, ImageDraw, ImageFont
        except ImportError:
            return

        img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
        draw = ImageDraw.Draw(img)

        font_path = self._get_font_path()
        font = None
        if font_path:
            try:
                font = ImageFont.truetype(font_path, self.cell_height - 2)
            except Exception:
                font = None

        if font is None:
            try:
                font = ImageFont.load_default()
            except Exception:
                font = None

        for row_idx, line in enumerate(buffer[: self.height]):
            tokens = parse_ansi(line)
            x_pos = 0
            y_pos = row_idx * self.cell_height

            for text, fg, bg, bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                    draw.rectangle(bbox, fill=(*bg, 255))

                if bold and font:
                    draw.text((x_pos - 1, y_pos - 1), text, fill=(*fg, 255), font=font)

                draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

                if font:
                    x_pos += draw.textlength(text, font=font)

        sixel = _encode_sixel(img)

        sys.stdout.buffer.write(sixel.encode("utf-8"))
        sys.stdout.flush()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("sixel_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        import sys

        sys.stdout.buffer.write(b"\x1b[2J\x1b[H")
        sys.stdout.flush()

    def cleanup(self) -> None:
        pass
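The core of the encoding above is the six-pixels-per-character mapping. A minimal self-contained sketch of just that mapping:

```python
def sixel_char(column_bits: int) -> str:
    # A sixel data character encodes six vertically stacked pixels:
    # bit 0 is the top row of the band, bit 5 the bottom, offset by 63
    # so the result lands in the printable range "?" through "~".
    return chr(63 + (column_bits & 0b111111))


# "?" (63) is an empty column; "~" (63 + 0b111111) is a fully set one;
# a single top pixel encodes as "@".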
59
engine/display/backends/terminal.py
Normal file
@@ -0,0 +1,59 @@
"""
ANSI terminal display backend.
"""

import time


class TerminalDisplay:
    """ANSI terminal display backend.

    Renders buffer to stdout using ANSI escape codes.
    Supports reuse - when reuse=True, skips re-initializing terminal state.
    """

    width: int = 80
    height: int = 24
    _initialized: bool = False

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, skip terminal re-initialization
        """
        from engine.terminal import CURSOR_OFF

        self.width = width
        self.height = height

        if not reuse or not self._initialized:
            print(CURSOR_OFF, end="", flush=True)
            self._initialized = True

    def show(self, buffer: list[str]) -> None:
        import sys

        t0 = time.perf_counter()
        sys.stdout.buffer.write("".join(buffer).encode())
        sys.stdout.flush()
        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("terminal_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        from engine.terminal import CLR

        print(CLR, end="", flush=True)

    def cleanup(self) -> None:
        from engine.terminal import CURSOR_ON

        print(CURSOR_ON, end="", flush=True)
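`engine.terminal` itself is not part of this diff, so the constants above are opaque here. A sketch of what they would plausibly be if they are the standard VT100 sequences (an assumption — note that `SixelDisplay.clear()` in this same diff hard-codes the erase/home pair):

```python
# Assumed standard ANSI/VT100 values; the real definitions live in
# engine.terminal, which is outside this diff.
CURSOR_OFF = "\x1b[?25l"  # DECTCEM: hide the cursor
CURSOR_ON = "\x1b[?25h"   # DECTCEM: show the cursor
CLR = "\x1b[2J\x1b[H"     # erase the screen, then home the cursor
```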
274
engine/display/backends/websocket.py
Normal file
@@ -0,0 +1,274 @@
"""
WebSocket display backend - broadcasts frame buffer to connected web clients.
"""

import asyncio
import json
import threading
import time
from typing import Protocol

try:
    import websockets
except ImportError:
    websockets = None


class Display(Protocol):
    """Protocol for display backends."""

    width: int
    height: int

    def init(self, width: int, height: int) -> None:
        """Initialize display with dimensions."""
        ...

    def show(self, buffer: list[str]) -> None:
        """Show buffer on display."""
        ...

    def clear(self) -> None:
        """Clear display."""
        ...

    def cleanup(self) -> None:
        """Shutdown display."""
        ...


def get_monitor():
    """Get the performance monitor."""
    try:
        from engine.effects.performance import get_monitor as _get_monitor

        return _get_monitor()
    except Exception:
        return None


class WebSocketDisplay:
    """WebSocket display backend - broadcasts to HTML Canvas clients."""

    width: int = 80
    height: int = 24

    def __init__(
        self,
        host: str = "0.0.0.0",
        port: int = 8765,
        http_port: int = 8766,
    ):
        self.host = host
        self.port = port
        self.http_port = http_port
        self.width = 80
        self.height = 24
        self._clients: set = set()
        self._server_running = False
        self._http_running = False
        self._server_thread: threading.Thread | None = None
        self._http_thread: threading.Thread | None = None
        self._loop: asyncio.AbstractEventLoop | None = None
        self._available = True
        self._max_clients = 10
        self._client_connected_callback = None
        self._client_disconnected_callback = None
        self._frame_delay = 0.0

        try:
            import websockets as _ws

            self._available = _ws is not None
        except ImportError:
            self._available = False

    def is_available(self) -> bool:
        """Check if WebSocket support is available."""
        return self._available

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions and start server.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, skip starting servers (assume already running)
        """
        self.width = width
        self.height = height

        if not reuse or not self._server_running:
            self.start_server()
            self.start_http_server()

    def _send_threadsafe(self, client, message: str) -> None:
        """Send on the server's event loop; show()/clear() run on another thread,
        so the coroutine must be scheduled with run_coroutine_threadsafe rather
        than asyncio.run (the connection is bound to the server loop)."""
        if self._loop is None:
            raise RuntimeError("WebSocket server loop is not running")
        future = asyncio.run_coroutine_threadsafe(client.send(message), self._loop)
        future.result(timeout=1.0)

    def show(self, buffer: list[str]) -> None:
        """Broadcast buffer to all connected clients."""
        t0 = time.perf_counter()

        if self._clients:
            frame_data = {
                "type": "frame",
                "width": self.width,
                "height": self.height,
                "lines": buffer,
            }
            message = json.dumps(frame_data)

            disconnected = set()
            for client in list(self._clients):
                try:
                    self._send_threadsafe(client, message)
                except Exception:
                    disconnected.add(client)

            for client in disconnected:
                self._clients.discard(client)
                if self._client_disconnected_callback:
                    self._client_disconnected_callback(client)

        elapsed_ms = (time.perf_counter() - t0) * 1000
        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("websocket_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        """Broadcast clear command to all clients."""
        if self._clients:
            clear_data = {"type": "clear"}
            message = json.dumps(clear_data)
            for client in list(self._clients):
                try:
                    self._send_threadsafe(client, message)
                except Exception:
                    pass

    def cleanup(self) -> None:
        """Stop the servers."""
        self.stop_server()
        self.stop_http_server()

    async def _websocket_handler(self, websocket):
        """Handle WebSocket connections."""
        if len(self._clients) >= self._max_clients:
            await websocket.close()
            return

        self._clients.add(websocket)
        if self._client_connected_callback:
            self._client_connected_callback(websocket)

        try:
            async for message in websocket:
                try:
                    data = json.loads(message)
                    if data.get("type") == "resize":
                        self.width = data.get("width", 80)
                        self.height = data.get("height", 24)
                except json.JSONDecodeError:
                    pass
        except Exception:
            pass
        finally:
            self._clients.discard(websocket)
            if self._client_disconnected_callback:
                self._client_disconnected_callback(websocket)

    async def _run_websocket_server(self):
        """Run the WebSocket server."""
        self._loop = asyncio.get_running_loop()
        async with websockets.serve(self._websocket_handler, self.host, self.port):
            while self._server_running:
                await asyncio.sleep(0.1)

    async def _run_http_server(self):
        """Run simple HTTP server for the client."""
        import os
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        client_dir = os.path.join(
            os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "client"
        )

        class Handler(SimpleHTTPRequestHandler):
            def __init__(self, *args, **kwargs):
                super().__init__(*args, directory=client_dir, **kwargs)

            def log_message(self, format, *args):
                pass

        httpd = HTTPServer((self.host, self.http_port), Handler)
        while self._http_running:
            httpd.handle_request()

    def _run_async(self, coro):
        """Run coroutine in background."""
        try:
            asyncio.run(coro)
        except Exception as e:
            print(f"WebSocket async error: {e}")

    def start_server(self):
        """Start the WebSocket server in a background thread."""
        if not self._available:
            return
        if self._server_thread is not None:
            return

        self._server_running = True
        self._server_thread = threading.Thread(
            target=self._run_async, args=(self._run_websocket_server(),), daemon=True
        )
        self._server_thread.start()

    def stop_server(self):
        """Stop the WebSocket server."""
        self._server_running = False
        self._server_thread = None

    def start_http_server(self):
        """Start the HTTP server in a background thread."""
        if not self._available:
            return
        if self._http_thread is not None:
            return

        self._http_running = True
        self._http_thread = threading.Thread(
            target=self._run_async, args=(self._run_http_server(),), daemon=True
        )
        self._http_thread.start()

    def stop_http_server(self):
        """Stop the HTTP server."""
        self._http_running = False
        self._http_thread = None

    def client_count(self) -> int:
        """Return number of connected clients."""
        return len(self._clients)

    def get_ws_port(self) -> int:
        """Return WebSocket port."""
        return self.port

    def get_http_port(self) -> int:
        """Return HTTP port."""
        return self.http_port

    def set_frame_delay(self, delay: float) -> None:
        """Set delay between frames in seconds."""
        self._frame_delay = delay

    def get_frame_delay(self) -> float:
        """Get delay between frames."""
        return self._frame_delay

    def set_client_connected_callback(self, callback) -> None:
        """Set callback for client connections."""
        self._client_connected_callback = callback

    def set_client_disconnected_callback(self, callback) -> None:
        """Set callback for client disconnections."""
        self._client_disconnected_callback = callback
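The wire format the backend broadcasts is plain JSON, so a browser client (or a test) can recover a frame losslessly. A minimal sketch of the round trip using the `"frame"` message shape from `show()`:

```python
import json

# The message shape WebSocketDisplay.show() broadcasts to clients.
frame = {
    "type": "frame",
    "width": 80,
    "height": 24,
    "lines": ["\x1b[31mhello\x1b[0m"],  # ANSI codes travel through untouched
}
message = json.dumps(frame)

# The client decodes it and renders `lines` to its canvas.
decoded = json.loads(message)
```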
280
engine/display/renderer.py
Normal file
@@ -0,0 +1,280 @@
|
|||||||
|
"""
|
||||||
|
Shared display rendering utilities.
|
||||||
|
|
||||||
|
Provides common functionality for displays that render text to images
|
||||||
|
(Pygame, Sixel, Kitty displays).
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
ANSI_COLORS = {
|
||||||
|
0: (0, 0, 0),
|
||||||
|
1: (205, 49, 49),
|
||||||
|
2: (13, 188, 121),
|
||||||
|
3: (229, 229, 16),
|
||||||
|
4: (36, 114, 200),
|
||||||
|
5: (188, 63, 188),
|
||||||
|
6: (17, 168, 205),
|
||||||
|
7: (229, 229, 229),
|
||||||
|
8: (102, 102, 102),
|
||||||
|
9: (241, 76, 76),
|
||||||
|
10: (35, 209, 139),
|
||||||
|
11: (245, 245, 67),
|
||||||
|
12: (59, 142, 234),
|
||||||
|
13: (214, 112, 214),
|
||||||
|
14: (41, 184, 219),
|
||||||
|
15: (255, 255, 255),
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
def parse_ansi(
|
||||||
|
text: str,
|
||||||
|
) -> list[tuple[str, tuple[int, int, int], tuple[int, int, int], bool]]:
|
||||||
|
"""Parse ANSI escape sequences into text tokens with colors.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
text: Text containing ANSI escape sequences
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
List of (text, fg_rgb, bg_rgb, bold) tuples
|
||||||
|
"""
|
||||||
|
tokens = []
|
||||||
|
current_text = ""
|
||||||
|
fg = (204, 204, 204)
|
||||||
|
bg = (0, 0, 0)
|
||||||
|
bold = False
|
||||||
|
i = 0
|
||||||
|
|
||||||
|
ANSI_COLORS_4BIT = {
|
||||||
|
0: (0, 0, 0),
|
||||||
|
1: (205, 49, 49),
|
||||||
|
2: (13, 188, 121),
|
||||||
|
3: (229, 229, 16),
|
||||||
|
4: (36, 114, 200),
|
||||||
|
5: (188, 63, 188),
|
||||||
|
6: (17, 168, 205),
|
||||||
|
7: (229, 229, 229),
|
||||||
|
8: (102, 102, 102),
|
||||||
|
9: (241, 76, 76),
|
||||||
|
10: (35, 209, 139),
|
||||||
|
11: (245, 245, 67),
|
||||||
|
12: (59, 142, 234),
|
||||||
|
13: (214, 112, 214),
|
||||||
|
14: (41, 184, 219),
|
||||||
|
15: (255, 255, 255),
|
||||||
|
}
|
||||||
|
|
||||||
|
while i < len(text):
|
||||||
|
char = text[i]
|
||||||
|
|
||||||
|
if char == "\x1b" and i + 1 < len(text) and text[i + 1] == "[":
|
||||||
|
if current_text:
|
||||||
|
tokens.append((current_text, fg, bg, bold))
|
||||||
|
current_text = ""
|
||||||
|
|
||||||
|
i += 2
|
||||||
|
code = ""
|
||||||
|
while i < len(text):
|
||||||
|
c = text[i]
|
||||||
|
if c.isalpha():
|
||||||
|
break
|
||||||
|
code += c
|
||||||
|
i += 1
|
||||||
|
|
||||||
|
if code:
|
||||||
|
codes = code.split(";")
|
||||||
|
for c in codes:
|
||||||
|
if c == "0":
|
||||||
|
fg = (204, 204, 204)
|
||||||
|
bg = (0, 0, 0)
|
||||||
|
bold = False
|
||||||
|
elif c == "1":
|
||||||
|
bold = True
|
||||||
|
elif c == "22":
|
||||||
|
bold = False
|
||||||
|
elif c == "39":
|
||||||
|
fg = (204, 204, 204)
|
||||||
|
elif c == "49":
|
||||||
|
bg = (0, 0, 0)
|
||||||
|
elif c.isdigit():
|
||||||
|
color_idx = int(c)
|
||||||
|
if color_idx in ANSI_COLORS_4BIT:
|
||||||
|
fg = ANSI_COLORS_4BIT[color_idx]
|
||||||
|
elif 30 <= color_idx <= 37:
|
||||||
|
fg = ANSI_COLORS_4BIT.get(color_idx - 30, fg)
|
||||||
|
elif 40 <= color_idx <= 47:
|
||||||
|
bg = ANSI_COLORS_4BIT.get(color_idx - 40, bg)
|
||||||
|
elif 90 <= color_idx <= 97:
|
||||||
|
fg = ANSI_COLORS_4BIT.get(color_idx - 90 + 8, fg)
|
||||||
|
elif 100 <= color_idx <= 107:
|
||||||
|
bg = ANSI_COLORS_4BIT.get(color_idx - 100 + 8, bg)
|
||||||
|
elif c.startswith("38;5;"):
|
||||||
|
idx = int(c.split(";")[-1])
|
||||||
|
if idx < 256:
|
||||||
|
if idx < 16:
|
||||||
|
fg = ANSI_COLORS_4BIT.get(idx, fg)
|
||||||
|
elif idx < 232:
|
||||||
|
c_idx = idx - 16
|
||||||
|
fg = (
|
||||||
|
(c_idx >> 4) * 51,
|
||||||
|
((c_idx >> 2) & 7) * 51,
|
||||||
|
(c_idx & 3) * 85,
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
gray = (idx - 232) * 10 + 8
|
||||||
|
fg = (gray, gray, gray)
|
||||||
|
elif c.startswith("48;5;"):
|
||||||
|
idx = int(c.split(";")[-1])
|
||||||
|
if idx < 256:
|
||||||
|
if idx < 16:
|
||||||
|
bg = ANSI_COLORS_4BIT.get(idx, bg)
|
||||||
|
elif idx < 232:
|
||||||
|
c_idx = idx - 16
|
||||||
|
bg = (
|
||||||
|
(c_idx >> 4) * 51,
|
||||||
|
((c_idx >> 2) & 7) * 51,
|
||||||
|
(c_idx & 3) * 85,
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
gray = (idx - 232) * 10 + 8
|
||||||
|
bg = (gray, gray, gray)
|
||||||
|
i += 1
|
||||||
|
else:
|
||||||
|
current_text += char
|
||||||
|
i += 1
|
||||||
|
|
||||||
|
if current_text:
|
||||||
|
tokens.append((current_text, fg, bg, bold))
|
||||||
|
|
||||||
|
return tokens if tokens else [("", fg, bg, bold)]
|
||||||
|
|
||||||
|
|
||||||
```python
def get_default_font_path() -> str | None:
    """Get the path to a default monospace font."""
    import os
    import sys
    from pathlib import Path

    def search_dir(base_path: str) -> str | None:
        if not os.path.exists(base_path):
            return None
        if os.path.isfile(base_path):
            return base_path
        for font_file in Path(base_path).rglob("*"):
            if font_file.suffix.lower() in (".ttf", ".otf", ".ttc"):
                name = font_file.stem.lower()
                if "geist" in name and ("nerd" in name or "mono" in name):
                    return str(font_file)
                if "mono" in name or "courier" in name or "terminal" in name:
                    return str(font_file)
        return None

    search_dirs = []
    if sys.platform == "darwin":
        search_dirs.extend(
            [
                os.path.expanduser("~/Library/Fonts/"),
                "/System/Library/Fonts/",
            ]
        )
    elif sys.platform == "win32":
        search_dirs.extend(
            [
                os.path.expanduser("~\\AppData\\Local\\Microsoft\\Windows\\Fonts\\"),
                "C:\\Windows\\Fonts\\",
            ]
        )
    else:
        search_dirs.extend(
            [
                os.path.expanduser("~/.local/share/fonts/"),
                os.path.expanduser("~/.fonts/"),
                "/usr/share/fonts/",
            ]
        )

    for search_dir_path in search_dirs:
        found = search_dir(search_dir_path)
        if found:
            return found

    if sys.platform != "win32":
        try:
            import subprocess

            for pattern in ["monospace", "DejaVuSansMono", "LiberationMono"]:
                result = subprocess.run(
                    ["fc-match", "-f", "%{file}", pattern],
                    capture_output=True,
                    text=True,
                    timeout=5,
                )
                if result.returncode == 0 and result.stdout.strip():
                    font_file = result.stdout.strip()
                    if os.path.exists(font_file):
                        return font_file
        except Exception:
            pass

    return None


def render_to_pil(
    buffer: list[str],
    width: int,
    height: int,
    cell_width: int = 10,
    cell_height: int = 18,
    font_path: str | None = None,
) -> Any:
    """Render buffer to a PIL Image.

    Args:
        buffer: List of text lines to render
        width: Terminal width in characters
        height: Terminal height in rows
        cell_width: Width of each character cell in pixels
        cell_height: Height of each character cell in pixels
        font_path: Path to TTF/OTF font file (optional)

    Returns:
        PIL Image object
    """
    from PIL import Image, ImageDraw, ImageFont

    img_width = width * cell_width
    img_height = height * cell_height

    img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
    draw = ImageDraw.Draw(img)

    if font_path:
        try:
            font = ImageFont.truetype(font_path, cell_height - 2)
        except Exception:
            font = ImageFont.load_default()
    else:
        font = ImageFont.load_default()

    for row_idx, line in enumerate(buffer[:height]):
        tokens = parse_ansi(line)
        x_pos = 0
        y_pos = row_idx * cell_height

        for text, fg, bg, _bold in tokens:
            if not text:
                continue

            if bg != (0, 0, 0):
                bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                draw.rectangle(bbox, fill=(*bg, 255))

            draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

            if font:
                x_pos += draw.textlength(text, font=font)

    return img
```
**engine/effects/__init__.py** (new file, 50 lines)
```python
from engine.effects.chain import EffectChain
from engine.effects.controller import handle_effects_command, show_effects_menu
from engine.effects.legacy import (
    fade_line,
    firehose_line,
    glitch_bar,
    next_headline,
    noise,
    vis_offset,
    vis_trunc,
)
from engine.effects.performance import PerformanceMonitor, get_monitor, set_monitor
from engine.effects.registry import EffectRegistry, get_registry, set_registry
from engine.effects.types import (
    EffectConfig,
    EffectContext,
    PipelineConfig,
    create_effect_context,
)


def get_effect_chain():
    from engine.layers import get_effect_chain as _chain

    return _chain()


__all__ = [
    "EffectChain",
    "EffectRegistry",
    "EffectConfig",
    "EffectContext",
    "PipelineConfig",
    "create_effect_context",
    "get_registry",
    "set_registry",
    "get_effect_chain",
    "get_monitor",
    "set_monitor",
    "PerformanceMonitor",
    "handle_effects_command",
    "show_effects_menu",
    "fade_line",
    "firehose_line",
    "glitch_bar",
    "noise",
    "next_headline",
    "vis_trunc",
    "vis_offset",
]
```
**engine/effects/chain.py** (new file, 71 lines)
```python
import time

from engine.effects.performance import PerformanceMonitor, get_monitor
from engine.effects.registry import EffectRegistry
from engine.effects.types import EffectContext


class EffectChain:
    def __init__(
        self, registry: EffectRegistry, monitor: PerformanceMonitor | None = None
    ):
        self._registry = registry
        self._order: list[str] = []
        self._monitor = monitor

    def _get_monitor(self) -> PerformanceMonitor:
        if self._monitor is not None:
            return self._monitor
        return get_monitor()

    def set_order(self, names: list[str]) -> None:
        self._order = list(names)

    def get_order(self) -> list[str]:
        return self._order.copy()

    def add_effect(self, name: str, position: int | None = None) -> bool:
        if name not in self._registry.list_all():
            return False
        if position is None:
            self._order.append(name)
        else:
            self._order.insert(position, name)
        return True

    def remove_effect(self, name: str) -> bool:
        if name in self._order:
            self._order.remove(name)
            return True
        return False

    def reorder(self, new_order: list[str]) -> bool:
        all_plugins = set(self._registry.list_all().keys())
        if not all(name in all_plugins for name in new_order):
            return False
        self._order = list(new_order)
        return True

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        monitor = self._get_monitor()
        frame_number = ctx.frame_number
        monitor.start_frame(frame_number)

        frame_start = time.perf_counter()
        result = list(buf)
        for name in self._order:
            plugin = self._registry.get(name)
            if plugin and plugin.config.enabled:
                chars_in = sum(len(line) for line in result)
                effect_start = time.perf_counter()
                try:
                    result = plugin.process(result, ctx)
                except Exception:
                    plugin.config.enabled = False
                elapsed = time.perf_counter() - effect_start
                chars_out = sum(len(line) for line in result)
                monitor.record_effect(name, elapsed * 1000, chars_in, chars_out)

        total_elapsed = time.perf_counter() - frame_start
        monitor.end_frame(frame_number, total_elapsed * 1000)
        return result
```
**engine/effects/controller.py** (new file, 144 lines)
```python
from engine.effects.performance import get_monitor
from engine.effects.registry import get_registry

_effect_chain_ref = None


def _get_effect_chain():
    global _effect_chain_ref
    if _effect_chain_ref is not None:
        return _effect_chain_ref
    try:
        from engine.layers import get_effect_chain as _chain

        return _chain()
    except Exception:
        return None


def set_effect_chain_ref(chain) -> None:
    global _effect_chain_ref
    _effect_chain_ref = chain


def handle_effects_command(cmd: str) -> str:
    """Handle /effects command from NTFY message.

    Commands:
        /effects list - list all effects and their status
        /effects <name> on - enable an effect
        /effects <name> off - disable an effect
        /effects <name> intensity <0.0-1.0> - set intensity
        /effects reorder <name1>,<name2>,... - reorder pipeline
        /effects stats - show performance statistics
    """
    parts = cmd.strip().split()
    if not parts or parts[0] != "/effects":
        return "Unknown command"

    registry = get_registry()
    chain = _get_effect_chain()

    if len(parts) == 1 or parts[1] == "list":
        result = ["Effects:"]
        for name, plugin in registry.list_all().items():
            status = "ON" if plugin.config.enabled else "OFF"
            intensity = plugin.config.intensity
            result.append(f"  {name}: {status} (intensity={intensity})")
        if chain:
            result.append(f"Order: {chain.get_order()}")
        return "\n".join(result)

    if parts[1] == "stats":
        return _format_stats()

    if parts[1] == "reorder" and len(parts) >= 3:
        new_order = parts[2].split(",")
        if chain and chain.reorder(new_order):
            return f"Reordered pipeline: {new_order}"
        return "Failed to reorder pipeline"

    if len(parts) < 3:
        return "Usage: /effects <name> on|off|intensity <value>"

    effect_name = parts[1]
    action = parts[2]

    if effect_name not in registry.list_all():
        return f"Unknown effect: {effect_name}"

    if action == "on":
        registry.enable(effect_name)
        return f"Enabled: {effect_name}"

    if action == "off":
        registry.disable(effect_name)
        return f"Disabled: {effect_name}"

    if action == "intensity" and len(parts) >= 4:
        try:
            value = float(parts[3])
            if not 0.0 <= value <= 1.0:
                return "Intensity must be between 0.0 and 1.0"
            plugin = registry.get(effect_name)
            if plugin:
                plugin.config.intensity = value
                return f"Set {effect_name} intensity to {value}"
        except ValueError:
            return "Invalid intensity value"

    return f"Unknown action: {action}"


def _format_stats() -> str:
    monitor = get_monitor()
    stats = monitor.get_stats()

    if "error" in stats:
        return stats["error"]

    lines = ["Performance Stats:"]

    pipeline = stats["pipeline"]
    lines.append(
        f"  Pipeline: avg={pipeline['avg_ms']:.2f}ms min={pipeline['min_ms']:.2f}ms max={pipeline['max_ms']:.2f}ms (over {stats['frame_count']} frames)"
    )

    if stats["effects"]:
        lines.append("  Per-effect (avg ms):")
        for name, effect_stats in stats["effects"].items():
            lines.append(
                f"    {name}: avg={effect_stats['avg_ms']:.2f}ms min={effect_stats['min_ms']:.2f}ms max={effect_stats['max_ms']:.2f}ms"
            )

    return "\n".join(lines)


def show_effects_menu() -> str:
    """Generate effects menu text for display."""
    registry = get_registry()
    chain = _get_effect_chain()

    lines = [
        "\033[1;38;5;231m=== EFFECTS MENU ===\033[0m",
        "",
        "Effects:",
    ]

    for name, plugin in registry.list_all().items():
        status = "ON" if plugin.config.enabled else "OFF"
        intensity = plugin.config.intensity
        lines.append(f"  [{status:3}] {name}: intensity={intensity:.2f}")

    if chain:
        lines.append("")
        lines.append(f"Pipeline order: {' -> '.join(chain.get_order())}")

    lines.append("")
    lines.append("Controls:")
    lines.append("  /effects <name> on|off")
    lines.append("  /effects <name> intensity <0.0-1.0>")
    lines.append("  /effects reorder name1,name2,...")
    lines.append("")

    return "\n".join(lines)
```
@@ -1,14 +1,22 @@

```python
"""
Visual effects: noise, glitch, fade, ANSI-aware truncation, firehose, headline pool.
Depends on: config, terminal, sources.

These are low-level functional implementations of visual effects. They are used
internally by the EffectPlugin system (effects_plugins/*.py) and also directly
by layers.py and scroll.py for rendering.

The plugin system provides a higher-level OOP interface with configuration
support, while these legacy functions provide direct functional access.
Both systems coexist - there are no current plans to deprecate the legacy functions.
"""

import random
from datetime import datetime

from engine import config
from engine.sources import FEEDS, POETRY_SOURCES
from engine.terminal import C_DIM, DIM, G_DIM, G_LO, RST, W_GHOST


def noise(w):
```

@@ -34,23 +42,23 @@ def fade_line(s, fade):

```python
    if fade >= 1.0:
        return s
    if fade <= 0.0:
        return ""
    result = []
    i = 0
    while i < len(s):
        if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
            j = i + 2
            while j < len(s) and not s[j].isalpha():
                j += 1
            result.append(s[i : j + 1])
            i = j + 1
        elif s[i] == " ":
            result.append(" ")
            i += 1
        else:
            result.append(s[i] if random.random() < fade else " ")
            i += 1
    return "".join(result)


def vis_trunc(s, w):
```

@@ -61,7 +69,7 @@ def vis_trunc(s, w):

```python
    while i < len(s):
        if vw >= w:
            break
        if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
            j = i + 2
            while j < len(s) and not s[j].isalpha():
                j += 1
```

@@ -71,7 +79,38 @@ def vis_trunc(s, w):

```python
            result.append(s[i])
            vw += 1
            i += 1
    return "".join(result)


def vis_offset(s, offset):
    """Offset string by skipping first offset visual characters, skipping ANSI escape codes."""
    if offset <= 0:
        return s
    result = []
    vw = 0
    i = 0
    skipping = True
    while i < len(s):
        if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
            j = i + 2
            while j < len(s) and not s[j].isalpha():
                j += 1
            if skipping:
                i = j + 1
                continue
            result.append(s[i : j + 1])
            i = j + 1
        else:
            if skipping:
                if vw >= offset:
                    skipping = False
                    result.append(s[i])
                vw += 1
                i += 1
            else:
                result.append(s[i])
                i += 1
    return "".join(result)


def next_headline(pool, items, seen):
```

@@ -103,12 +142,13 @@ def firehose_line(items, w):

```python
        return "".join(
            f"{random.choice([G_LO, G_DIM, C_DIM, W_GHOST])}"
            f"{random.choice(config.GLITCH + config.KATA)}{RST}"
            if random.random() < d
            else " "
            for _ in range(w)
        )
    elif r < 0.78:
        # Status / program output
        sources = FEEDS if config.MODE == "news" else POETRY_SOURCES
        src = random.choice(list(sources.keys()))
        msgs = [
            f" SIGNAL :: {src} :: {datetime.now().strftime('%H:%M:%S.%f')[:-3]}",
```

@@ -127,7 +167,7 @@ def firehose_line(items, w):

```python
        start = random.randint(0, max(0, len(title) - 20))
        frag = title[start : start + random.randint(10, 35)]
        pad = random.randint(0, max(0, w - len(frag) - 8))
        gp = "".join(random.choice(config.GLITCH) for _ in range(random.randint(1, 3)))
        text = (" " * pad + gp + " " + frag)[: w - 1]
        color = random.choice([G_LO, C_DIM, W_GHOST])
        return f"{color}{text}{RST}"
```
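The ANSI-skipping scan shared by `fade_line`, `vis_trunc`, and `vis_offset` can be exercised on its own. This is a minimal self-contained sketch of the truncation variant (the name `visible_truncate` is ours, not the repo's function):

```python
def visible_truncate(s: str, w: int) -> str:
    """Truncate s to w visible columns, copying ANSI CSI sequences through."""
    out, vw, i = [], 0, 0
    while i < len(s) and vw < w:
        if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
            # Copy the whole escape sequence; it occupies no columns.
            j = i + 2
            while j < len(s) and not s[j].isalpha():
                j += 1
            out.append(s[i : j + 1])
            i = j + 1
        else:
            out.append(s[i])
            vw += 1
            i += 1
    return "".join(out)


line = "\033[31mhello\033[0m world"
print(repr(visible_truncate(line, 5)))
```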
**engine/effects/performance.py** (new file, 103 lines)
```python
from collections import deque
from dataclasses import dataclass


@dataclass
class EffectTiming:
    name: str
    duration_ms: float
    buffer_chars_in: int
    buffer_chars_out: int


@dataclass
class FrameTiming:
    frame_number: int
    total_ms: float
    effects: list[EffectTiming]


class PerformanceMonitor:
    """Collects and stores performance metrics for effect pipeline."""

    def __init__(self, max_frames: int = 60):
        self._max_frames = max_frames
        self._frames: deque[FrameTiming] = deque(maxlen=max_frames)
        self._current_frame: list[EffectTiming] = []

    def start_frame(self, frame_number: int) -> None:
        self._current_frame = []

    def record_effect(
        self, name: str, duration_ms: float, chars_in: int, chars_out: int
    ) -> None:
        self._current_frame.append(
            EffectTiming(
                name=name,
                duration_ms=duration_ms,
                buffer_chars_in=chars_in,
                buffer_chars_out=chars_out,
            )
        )

    def end_frame(self, frame_number: int, total_ms: float) -> None:
        self._frames.append(
            FrameTiming(
                frame_number=frame_number,
                total_ms=total_ms,
                effects=self._current_frame,
            )
        )

    def get_stats(self) -> dict:
        if not self._frames:
            return {"error": "No timing data available"}

        total_times = [f.total_ms for f in self._frames]
        avg_total = sum(total_times) / len(total_times)
        min_total = min(total_times)
        max_total = max(total_times)

        effect_stats: dict[str, dict] = {}
        for frame in self._frames:
            for effect in frame.effects:
                if effect.name not in effect_stats:
                    effect_stats[effect.name] = {"times": [], "total_chars": 0}
                effect_stats[effect.name]["times"].append(effect.duration_ms)
                effect_stats[effect.name]["total_chars"] += effect.buffer_chars_out

        for name, stats in effect_stats.items():
            times = stats["times"]
            stats["avg_ms"] = sum(times) / len(times)
            stats["min_ms"] = min(times)
            stats["max_ms"] = max(times)
            del stats["times"]

        return {
            "frame_count": len(self._frames),
            "pipeline": {
                "avg_ms": avg_total,
                "min_ms": min_total,
                "max_ms": max_total,
            },
            "effects": effect_stats,
        }

    def reset(self) -> None:
        self._frames.clear()
        self._current_frame = []


_monitor: PerformanceMonitor | None = None


def get_monitor() -> PerformanceMonitor:
    global _monitor
    if _monitor is None:
        _monitor = PerformanceMonitor()
    return _monitor


def set_monitor(monitor: PerformanceMonitor) -> None:
    global _monitor
    _monitor = monitor
```
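The bounded-history pattern above (a `deque` with `maxlen` so statistics only cover the most recent frames) can be sketched in isolation; variable names here are illustrative:

```python
from collections import deque

frame_times = deque(maxlen=3)  # keep only the 3 most recent frame timings
for ms in [12.0, 8.0, 10.0, 30.0]:
    frame_times.append(ms)

# The oldest sample (12.0) has been evicted automatically by maxlen.
avg = sum(frame_times) / len(frame_times)
print(list(frame_times), min(frame_times), max(frame_times), avg)
```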
**engine/effects/registry.py** (new file, 59 lines)
```python
from engine.effects.types import EffectConfig, EffectPlugin


class EffectRegistry:
    def __init__(self):
        self._plugins: dict[str, EffectPlugin] = {}
        self._discovered: bool = False

    def register(self, plugin: EffectPlugin) -> None:
        self._plugins[plugin.name] = plugin

    def get(self, name: str) -> EffectPlugin | None:
        return self._plugins.get(name)

    def list_all(self) -> dict[str, EffectPlugin]:
        return self._plugins.copy()

    def list_enabled(self) -> list[EffectPlugin]:
        return [p for p in self._plugins.values() if p.config.enabled]

    def enable(self, name: str) -> bool:
        plugin = self._plugins.get(name)
        if plugin:
            plugin.config.enabled = True
            return True
        return False

    def disable(self, name: str) -> bool:
        plugin = self._plugins.get(name)
        if plugin:
            plugin.config.enabled = False
            return True
        return False

    def configure(self, name: str, config: EffectConfig) -> bool:
        plugin = self._plugins.get(name)
        if plugin:
            plugin.configure(config)
            return True
        return False

    def is_enabled(self, name: str) -> bool:
        plugin = self._plugins.get(name)
        return plugin.config.enabled if plugin else False


_registry: EffectRegistry | None = None


def get_registry() -> EffectRegistry:
    global _registry
    if _registry is None:
        _registry = EffectRegistry()
    return _registry


def set_registry(registry: EffectRegistry) -> None:
    global _registry
    _registry = registry
```
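A registry like this is driven by concrete plugins at runtime. Here is a self-contained sketch of the register/enable/process pattern with a stand-in plugin and registry (`UpperEffect` and `Registry` are ours, not repo classes):

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Config:
    enabled: bool = True


class UpperEffect:
    name = "upper"

    def __init__(self) -> None:
        self.config = Config()

    def process(self, buf: list[str]) -> list[str]:
        return [line.upper() for line in buf]


class Registry:
    def __init__(self) -> None:
        self._plugins: dict[str, Any] = {}

    def register(self, plugin: Any) -> None:
        self._plugins[plugin.name] = plugin

    def list_enabled(self) -> list[Any]:
        return [p for p in self._plugins.values() if p.config.enabled]


reg = Registry()
reg.register(UpperEffect())
buf = ["hello"]
for plugin in reg.list_enabled():  # run every enabled plugin over the buffer
    buf = plugin.process(buf)
print(buf)
```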
**engine/effects/types.py** (new file, 122 lines)
```python
"""
Visual effects type definitions and base classes.

EffectPlugin Architecture:
- Uses ABC (Abstract Base Class) for interface enforcement
- Runtime discovery via directory scanning (effects_plugins/)
- Configuration via EffectConfig dataclass
- Context passed through EffectContext dataclass

Plugin System Research (see AGENTS.md for references):
- VST: Standardized audio interfaces, chaining, presets (FXP/FXB)
- Python Entry Points: Namespace packages, importlib.metadata discovery
- Shadertoy: Shader-based with uniforms as context

Current gaps vs industry patterns:
- No preset save/load system
- No external plugin distribution via entry points
- No plugin metadata (version, author, description)
"""

from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any


@dataclass
class EffectContext:
    terminal_width: int
    terminal_height: int
    scroll_cam: int
    ticker_height: int
    camera_x: int = 0
    mic_excess: float = 0.0
    grad_offset: float = 0.0
    frame_number: int = 0
    has_message: bool = False
    items: list = field(default_factory=list)


@dataclass
class EffectConfig:
    enabled: bool = True
    intensity: float = 1.0
    params: dict[str, Any] = field(default_factory=dict)


class EffectPlugin(ABC):
    """Abstract base class for effect plugins.

    Subclasses must define:
    - name: str - unique identifier for the effect
    - config: EffectConfig - current configuration

    And implement:
    - process(buf, ctx) -> list[str]
    - configure(config) -> None

    Effect Behavior with ticker_height=0:
    - NoiseEffect: Returns buffer unchanged (no ticker to apply noise to)
    - FadeEffect: Returns buffer unchanged (no ticker to fade)
    - GlitchEffect: Processes normally (doesn't depend on ticker_height)
    - FirehoseEffect: Returns buffer unchanged if no items in context

    Effects should handle missing or zero context values gracefully by
    returning the input buffer unchanged rather than raising errors.
    """

    name: str
    config: EffectConfig

    @abstractmethod
    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        """Process the buffer with this effect applied.

        Args:
            buf: List of lines to process
            ctx: Effect context with terminal state

        Returns:
            Processed buffer (may be same object or new list)
        """
        ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None:
        """Configure the effect with new settings.

        Args:
            config: New configuration to apply
        """
        ...


def create_effect_context(
    terminal_width: int = 80,
    terminal_height: int = 24,
    scroll_cam: int = 0,
    ticker_height: int = 0,
    mic_excess: float = 0.0,
    grad_offset: float = 0.0,
    frame_number: int = 0,
    has_message: bool = False,
    items: list | None = None,
) -> EffectContext:
    """Factory function to create EffectContext with sensible defaults."""
    return EffectContext(
        terminal_width=terminal_width,
        terminal_height=terminal_height,
        scroll_cam=scroll_cam,
        ticker_height=ticker_height,
        mic_excess=mic_excess,
        grad_offset=grad_offset,
        frame_number=frame_number,
        has_message=has_message,
        items=items or [],
    )


@dataclass
class PipelineConfig:
    order: list[str] = field(default_factory=list)
    effects: dict[str, EffectConfig] = field(default_factory=dict)
```
**engine/emitters.py** (new file, 25 lines)
```python
"""
Event emitter protocols - abstract interfaces for event-producing components.
"""

from collections.abc import Callable
from typing import Any, Protocol


class EventEmitter(Protocol):
    """Protocol for components that emit events."""

    def subscribe(self, callback: Callable[[Any], None]) -> None: ...
    def unsubscribe(self, callback: Callable[[Any], None]) -> None: ...


class Startable(Protocol):
    """Protocol for components that can be started."""

    def start(self) -> Any: ...


class Stoppable(Protocol):
    """Protocol for components that can be stopped."""

    def stop(self) -> None: ...
```
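These `Protocol` interfaces are structural: any object with matching methods satisfies them, no inheritance required. A minimal sketch (`FeedWorker` is an illustrative name, not a repo class):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Stoppable(Protocol):
    def stop(self) -> None: ...


class FeedWorker:  # note: does not inherit from Stoppable
    def __init__(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False


worker = FeedWorker()
# runtime_checkable allows structural isinstance() checks on method names.
print(isinstance(worker, Stoppable))
worker.stop()
```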
**engine/eventbus.py** (new file, 72 lines)
"""
Event bus - pub/sub messaging for decoupled component communication.
"""

import threading
from collections import defaultdict
from collections.abc import Callable
from typing import Any

from engine.events import EventType


class EventBus:
    """Thread-safe event bus for publish-subscribe messaging."""

    def __init__(self):
        self._subscribers: dict[EventType, list[Callable[[Any], None]]] = defaultdict(
            list
        )
        self._lock = threading.Lock()

    def subscribe(self, event_type: EventType, callback: Callable[[Any], None]) -> None:
        """Register a callback for a specific event type."""
        with self._lock:
            self._subscribers[event_type].append(callback)

    def unsubscribe(
        self, event_type: EventType, callback: Callable[[Any], None]
    ) -> None:
        """Remove a callback for a specific event type."""
        with self._lock:
            if callback in self._subscribers[event_type]:
                self._subscribers[event_type].remove(callback)

    def publish(self, event_type: EventType, event: Any = None) -> None:
        """Publish an event to all subscribers."""
        with self._lock:
            callbacks = list(self._subscribers.get(event_type, []))
        for callback in callbacks:
            try:
                callback(event)
            except Exception:
                pass

    def clear(self) -> None:
        """Remove all subscribers."""
        with self._lock:
            self._subscribers.clear()

    def subscriber_count(self, event_type: EventType | None = None) -> int:
        """Get subscriber count for an event type, or total if None."""
        with self._lock:
            if event_type is None:
                return sum(len(cb) for cb in self._subscribers.values())
            return len(self._subscribers.get(event_type, []))


_event_bus: EventBus | None = None


def get_event_bus() -> EventBus:
    """Get the global event bus instance."""
    global _event_bus
    if _event_bus is None:
        _event_bus = EventBus()
    return _event_bus


def set_event_bus(bus: EventBus) -> None:
    """Set the global event bus instance (for testing)."""
    global _event_bus
    _event_bus = bus
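The key detail in `EventBus.publish` is that it snapshots the subscriber list with `list(...)` while holding the lock, then dispatches to the copy, so a callback that subscribes or unsubscribes cannot mutate the list mid-iteration. A stripped-down standalone replica of that snapshot-then-dispatch pattern (module-level names here are illustrative, not the real API):

```python
import threading
from collections import defaultdict

_subscribers = defaultdict(list)
_lock = threading.Lock()


def publish(event_type, event):
    # Snapshot under the lock; dispatch to the copy so callbacks may
    # safely subscribe/unsubscribe while publish is running.
    with _lock:
        callbacks = list(_subscribers.get(event_type, []))
    for cb in callbacks:
        try:
            cb(event)
        except Exception:
            pass  # a failing subscriber must not break the publisher


received = []
with _lock:
    _subscribers["NEW_HEADLINE"].append(received.append)
publish("NEW_HEADLINE", "hello")
print(received)  # → ['hello']
```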
67  engine/events.py  (new file)
@@ -0,0 +1,67 @@
"""
Event types for the mainline application.
Defines the core events that flow through the system.
These types support a future migration to an event-driven architecture.
"""

from dataclasses import dataclass
from datetime import datetime
from enum import Enum, auto


class EventType(Enum):
    """Core event types in the mainline application."""

    NEW_HEADLINE = auto()
    FRAME_TICK = auto()
    MIC_LEVEL = auto()
    NTFY_MESSAGE = auto()
    STREAM_START = auto()
    STREAM_END = auto()


@dataclass
class HeadlineEvent:
    """Event emitted when a new headline is ready for display."""

    title: str
    source: str
    timestamp: str
    language: str | None = None


@dataclass
class FrameTickEvent:
    """Event emitted on each render frame."""

    frame_number: int
    timestamp: datetime
    delta_seconds: float


@dataclass
class MicLevelEvent:
    """Event emitted when microphone level changes significantly."""

    db_level: float
    excess_above_threshold: float
    timestamp: datetime


@dataclass
class NtfyMessageEvent:
    """Event emitted when an ntfy message is received."""

    title: str
    body: str
    message_id: str | None = None
    timestamp: datetime | None = None


@dataclass
class StreamEvent:
    """Event emitted when stream starts or ends."""

    event_type: EventType
    headline_count: int = 0
    timestamp: datetime | None = None
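Because the events are plain dataclasses, producers can construct them cheaply and subscribers read them by attribute. A standalone copy of `MicLevelEvent` shows the shape (the field values are made up for illustration):

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class MicLevelEvent:
    """Event emitted when microphone level changes significantly."""

    db_level: float
    excess_above_threshold: float
    timestamp: datetime


# Hypothetical sample values: 5 dB above a -25 dB threshold.
evt = MicLevelEvent(db_level=-20.0, excess_above_threshold=5.0, timestamp=datetime.now())
print(evt.excess_above_threshold)  # → 5.0
```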
@@ -3,21 +3,27 @@ RSS feed fetching, Project Gutenberg parsing, and headline caching.
 Depends on: config, sources, filter, terminal.
 """
 
-import re
 import json
 import pathlib
+import re
 import urllib.request
 from datetime import datetime
+from typing import Any
 
 import feedparser
 
 from engine import config
+from engine.filter import skip, strip_tags
 from engine.sources import FEEDS, POETRY_SOURCES
-from engine.filter import strip_tags, skip
 from engine.terminal import boot_ln
 
+# Type alias for headline items
+HeadlineTuple = tuple[str, str, str]
+
 
 # ─── SINGLE FEED ──────────────────────────────────────────
-def fetch_feed(url):
+def fetch_feed(url: str) -> Any | None:
+    """Fetch and parse a single RSS feed URL."""
     try:
         req = urllib.request.Request(url, headers={"User-Agent": "mainline/0.1"})
         resp = urllib.request.urlopen(req, timeout=config.FEED_TIMEOUT)
@@ -27,8 +33,9 @@ def fetch_feed(url)
 
 
 # ─── ALL RSS FEEDS ────────────────────────────────────────
-def fetch_all():
-    items = []
+def fetch_all() -> tuple[list[HeadlineTuple], int, int]:
+    """Fetch all RSS feeds and return items, linked count, failed count."""
+    items: list[HeadlineTuple] = []
     linked = failed = 0
     for src, url in FEEDS.items():
         feed = fetch_feed(url)
@@ -58,31 +65,36 @@ def fetch_all()
 
 
 # ─── PROJECT GUTENBERG ────────────────────────────────────
-def _fetch_gutenberg(url, label):
+def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
     """Download and parse stanzas/passages from a Project Gutenberg text."""
     try:
         req = urllib.request.Request(url, headers={"User-Agent": "mainline/0.1"})
         resp = urllib.request.urlopen(req, timeout=15)
-        text = resp.read().decode('utf-8', errors='replace').replace('\r\n', '\n').replace('\r', '\n')
+        text = (
+            resp.read()
+            .decode("utf-8", errors="replace")
+            .replace("\r\n", "\n")
+            .replace("\r", "\n")
+        )
         # Strip PG boilerplate
-        m = re.search(r'\*\*\*\s*START OF[^\n]*\n', text)
+        m = re.search(r"\*\*\*\s*START OF[^\n]*\n", text)
         if m:
             text = text[m.end() :]
-        m = re.search(r'\*\*\*\s*END OF', text)
+        m = re.search(r"\*\*\*\s*END OF", text)
         if m:
             text = text[: m.start()]
         # Split on blank lines into stanzas/passages
-        blocks = re.split(r'\n{2,}', text.strip())
+        blocks = re.split(r"\n{2,}", text.strip())
         items = []
         for blk in blocks:
-            blk = ' '.join(blk.split())  # flatten to one line
+            blk = " ".join(blk.split())  # flatten to one line
             if len(blk) < 20 or len(blk) > 280:
                 continue
            if blk.isupper():  # skip all-caps headers
                 continue
-            if re.match(r'^[IVXLCDM]+\.?\s*$', blk):  # roman numerals
+            if re.match(r"^[IVXLCDM]+\.?\s*$", blk):  # roman numerals
                 continue
-            items.append((blk, label, ''))
+            items.append((blk, label, ""))
         return items
     except Exception:
         return []
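The per-block filters in `_fetch_gutenberg` (length bounds, all-caps headers, roman-numeral section markers) can be exercised in isolation. This sketch reproduces just that filtering loop on a tiny made-up text (the `"Frost"` label is hypothetical):

```python
import re

text = (
    "CHAPTER I\n\n"
    "II.\n\n"
    "Two roads diverged in a yellow wood, and sorry I could not travel both "
    "and be one traveler, long I stood.\n\n"
    "Ok\n"
)

items = []
for blk in re.split(r"\n{2,}", text.strip()):
    blk = " ".join(blk.split())  # flatten to one line
    if len(blk) < 20 or len(blk) > 280:
        continue  # too short or too long
    if blk.isupper():
        continue  # all-caps header
    if re.match(r"^[IVXLCDM]+\.?\s*$", blk):
        continue  # roman-numeral section marker
    items.append((blk, "Frost", ""))

print(len(items))  # → 1: only the stanza survives
```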
@@ -29,29 +29,29 @@ def strip_tags(html):
 
 # ─── CONTENT FILTER ───────────────────────────────────────
 _SKIP_RE = re.compile(
-    r'\b(?:'
+    r"\b(?:"
     # ── sports ──
-    r'football|soccer|basketball|baseball|softball|tennis|golf|cricket|rugby|'
-    r'hockey|lacrosse|volleyball|badminton|'
-    r'nba|nfl|nhl|mlb|mls|fifa|uefa|'
-    r'premier league|champions league|la liga|serie a|bundesliga|'
-    r'world cup|super bowl|world series|stanley cup|'
-    r'playoff|playoffs|touchdown|goalkeeper|striker|quarterback|'
-    r'slam dunk|home run|grand slam|offside|halftime|'
-    r'batting|wicket|innings|'
-    r'formula 1|nascar|motogp|'
-    r'boxing|ufc|mma|'
-    r'marathon|tour de france|'
-    r'transfer window|draft pick|relegation|'
+    r"football|soccer|basketball|baseball|softball|tennis|golf|cricket|rugby|"
+    r"hockey|lacrosse|volleyball|badminton|"
+    r"nba|nfl|nhl|mlb|mls|fifa|uefa|"
+    r"premier league|champions league|la liga|serie a|bundesliga|"
+    r"world cup|super bowl|world series|stanley cup|"
+    r"playoff|playoffs|touchdown|goalkeeper|striker|quarterback|"
+    r"slam dunk|home run|grand slam|offside|halftime|"
+    r"batting|wicket|innings|"
+    r"formula 1|nascar|motogp|"
+    r"boxing|ufc|mma|"
+    r"marathon|tour de france|"
+    r"transfer window|draft pick|relegation|"
     # ── vapid / insipid ──
-    r'kardashian|jenner|reality tv|reality show|'
-    r'influencer|viral video|tiktok|instagram|'
-    r'best dressed|worst dressed|red carpet|'
-    r'horoscope|zodiac|gossip|bikini|selfie|'
-    r'you won.t believe|what happened next|'
-    r'celebrity couple|celebrity feud|baby bump'
-    r')\b',
-    re.IGNORECASE
+    r"kardashian|jenner|reality tv|reality show|"
+    r"influencer|viral video|tiktok|instagram|"
+    r"best dressed|worst dressed|red carpet|"
+    r"horoscope|zodiac|gossip|bikini|selfie|"
+    r"you won.t believe|what happened next|"
+    r"celebrity couple|celebrity feud|baby bump"
+    r")\b",
+    re.IGNORECASE,
 )
57  engine/frame.py  (new file)
@@ -0,0 +1,57 @@
"""
Frame timing utilities — FPS control and precise timing.
"""

import time


class FrameTimer:
    """Frame timer for consistent render loop timing."""

    def __init__(self, target_frame_dt: float = 0.05):
        self.target_frame_dt = target_frame_dt
        self._frame_count = 0
        self._start_time = time.monotonic()
        self._last_frame_time = self._start_time

    @property
    def fps(self) -> float:
        """Current FPS based on elapsed frames."""
        elapsed = time.monotonic() - self._start_time
        if elapsed > 0:
            return self._frame_count / elapsed
        return 0.0

    def sleep_until_next_frame(self) -> float:
        """Sleep to maintain target frame rate. Returns actual elapsed time."""
        now = time.monotonic()
        elapsed = now - self._last_frame_time
        self._last_frame_time = now
        self._frame_count += 1

        sleep_time = max(0, self.target_frame_dt - elapsed)
        if sleep_time > 0:
            time.sleep(sleep_time)
        return elapsed

    def reset(self) -> None:
        """Reset frame counter and start time."""
        self._frame_count = 0
        self._start_time = time.monotonic()
        self._last_frame_time = self._start_time


def calculate_scroll_step(
    scroll_dur: float, view_height: int, padding: int = 15
) -> float:
    """Calculate scroll step interval for smooth scrolling.

    Args:
        scroll_dur: Duration in seconds for one headline to scroll through view
        view_height: Terminal height in rows
        padding: Extra rows for off-screen content

    Returns:
        Time in seconds between scroll steps
    """
    return scroll_dur / (view_height + padding) * 2
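For a feel of the numbers: the step interval divides the per-headline scroll duration by the canvas height (visible rows plus off-screen padding) and doubles it. With a 40-row terminal, the default 15-row padding, and a hypothetical 12-second scroll, that is about 0.44 s per step:

```python
def calculate_scroll_step(scroll_dur: float, view_height: int, padding: int = 15) -> float:
    """Standalone copy of engine/frame.py's step formula."""
    return scroll_dur / (view_height + padding) * 2


step = calculate_scroll_step(12.0, 40)  # 12 s scroll, 40 visible rows
print(round(step, 3))  # → 0.436
```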
267  engine/layers.py  (new file)
@@ -0,0 +1,267 @@
"""
Layer compositing — message overlay, ticker zone, firehose, noise.
Depends on: config, render, effects.
"""

import random
import re
import time
from datetime import datetime

from engine import config
from engine.effects import (
    EffectChain,
    EffectContext,
    fade_line,
    firehose_line,
    glitch_bar,
    noise,
    vis_offset,
    vis_trunc,
)
from engine.render import big_wrap, lr_gradient, lr_gradient_opposite
from engine.terminal import RST, W_COOL

MSG_META = "\033[38;5;245m"
MSG_BORDER = "\033[2;38;5;37m"


def render_message_overlay(
    msg: tuple[str, str, float] | None,
    w: int,
    h: int,
    msg_cache: tuple,
) -> tuple[list[str], tuple]:
    """Render ntfy message overlay.

    Args:
        msg: (title, body, timestamp) or None
        w: terminal width
        h: terminal height
        msg_cache: (cache_key, rendered_rows) for caching

    Returns:
        (list of ANSI strings, updated cache)
    """
    overlay = []
    if msg is None:
        return overlay, msg_cache

    m_title, m_body, m_ts = msg
    display_text = m_body or m_title or "(empty)"
    display_text = re.sub(r"\s+", " ", display_text.upper())

    cache_key = (display_text, w)
    if msg_cache[0] != cache_key:
        msg_rows = big_wrap(display_text, w - 4)
        msg_cache = (cache_key, msg_rows)
    else:
        msg_rows = msg_cache[1]

    msg_rows = lr_gradient_opposite(
        msg_rows, (time.monotonic() * config.GRAD_SPEED) % 1.0
    )

    elapsed_s = int(time.monotonic() - m_ts)
    remaining = max(0, config.MESSAGE_DISPLAY_SECS - elapsed_s)
    ts_str = datetime.now().strftime("%H:%M:%S")
    panel_h = len(msg_rows) + 2
    panel_top = max(0, (h - panel_h) // 2)

    row_idx = 0
    for mr in msg_rows:
        ln = vis_trunc(mr, w)
        overlay.append(f"\033[{panel_top + row_idx + 1};1H {ln}\033[0m\033[K")
        row_idx += 1

    meta_parts = []
    if m_title and m_title != m_body:
        meta_parts.append(m_title)
    meta_parts.append(f"ntfy \u00b7 {ts_str} \u00b7 {remaining}s")
    meta = (
        " " + " \u00b7 ".join(meta_parts)
        if len(meta_parts) > 1
        else " " + meta_parts[0]
    )
    overlay.append(f"\033[{panel_top + row_idx + 1};1H{MSG_META}{meta}\033[0m\033[K")
    row_idx += 1

    bar = "\u2500" * (w - 4)
    overlay.append(f"\033[{panel_top + row_idx + 1};1H {MSG_BORDER}{bar}\033[0m\033[K")

    return overlay, msg_cache


def render_ticker_zone(
    active: list,
    scroll_cam: int,
    camera_x: int = 0,
    ticker_h: int = 0,
    w: int = 80,
    noise_cache: dict | None = None,
    grad_offset: float = 0.0,
) -> tuple[list[str], dict]:
    """Render the ticker scroll zone.

    Args:
        active: list of (content_rows, color, canvas_y, meta_idx)
        scroll_cam: camera position (viewport top)
        camera_x: horizontal camera offset
        ticker_h: height of ticker zone
        w: terminal width
        noise_cache: dict of cy -> noise string
        grad_offset: gradient animation offset

    Returns:
        (list of ANSI strings, updated noise_cache)
    """
    if noise_cache is None:
        noise_cache = {}
    buf = []
    top_zone = max(1, int(ticker_h * 0.25))
    bot_zone = max(1, int(ticker_h * 0.10))

    def noise_at(cy):
        if cy not in noise_cache:
            noise_cache[cy] = noise(w) if random.random() < 0.15 else None
        return noise_cache[cy]

    for r in range(ticker_h):
        scr_row = r + 1
        cy = scroll_cam + r
        top_f = min(1.0, r / top_zone) if top_zone > 0 else 1.0
        bot_f = min(1.0, (ticker_h - 1 - r) / bot_zone) if bot_zone > 0 else 1.0
        row_fade = min(top_f, bot_f)
        drawn = False

        for content, hc, by, midx in active:
            cr = cy - by
            if 0 <= cr < len(content):
                raw = content[cr]
                if cr != midx:
                    colored = lr_gradient([raw], grad_offset)[0]
                else:
                    colored = raw
                ln = vis_trunc(vis_offset(colored, camera_x), w)
                if row_fade < 1.0:
                    ln = fade_line(ln, row_fade)

                if cr == midx:
                    buf.append(f"\033[{scr_row};1H{W_COOL}{ln}{RST}\033[K")
                elif ln.strip():
                    buf.append(f"\033[{scr_row};1H{ln}{RST}\033[K")
                else:
                    buf.append(f"\033[{scr_row};1H\033[K")
                drawn = True
                break

        if not drawn:
            n = noise_at(cy)
            if row_fade < 1.0 and n:
                n = fade_line(n, row_fade)
            if n:
                buf.append(f"\033[{scr_row};1H{n}")
            else:
                buf.append(f"\033[{scr_row};1H\033[K")

    return buf, noise_cache


def apply_glitch(
    buf: list[str],
    ticker_buf_start: int,
    mic_excess: float,
    w: int,
) -> list[str]:
    """Apply glitch effect to ticker buffer.

    Args:
        buf: current buffer
        ticker_buf_start: index where ticker starts in buffer
        mic_excess: mic level above threshold
        w: terminal width

    Returns:
        Updated buffer with glitches applied
    """
    glitch_prob = 0.32 + min(0.9, mic_excess * 0.16)
    n_hits = 4 + int(mic_excess / 2)
    ticker_buf_len = len(buf) - ticker_buf_start

    if random.random() < glitch_prob and ticker_buf_len > 0:
        for _ in range(min(n_hits, ticker_buf_len)):
            gi = random.randint(0, ticker_buf_len - 1)
            scr_row = gi + 1
            buf[ticker_buf_start + gi] = f"\033[{scr_row};1H{glitch_bar(w)}"

    return buf


def render_firehose(items: list, w: int, fh: int, h: int) -> list[str]:
    """Render firehose strip at bottom of screen."""
    buf = []
    if fh > 0:
        for fr in range(fh):
            scr_row = h - fh + fr + 1
            fline = firehose_line(items, w)
            buf.append(f"\033[{scr_row};1H{fline}\033[K")
    return buf


_effect_chain = None


def init_effects() -> None:
    """Initialize effect plugins and chain."""
    global _effect_chain
    from engine.effects import EffectChain, get_registry

    registry = get_registry()

    import effects_plugins

    effects_plugins.discover_plugins()

    chain = EffectChain(registry)
    chain.set_order(["noise", "fade", "glitch", "firehose"])
    _effect_chain = chain


def process_effects(
    buf: list[str],
    w: int,
    h: int,
    scroll_cam: int,
    ticker_h: int,
    camera_x: int = 0,
    mic_excess: float = 0.0,
    grad_offset: float = 0.0,
    frame_number: int = 0,
    has_message: bool = False,
    items: list | None = None,
) -> list[str]:
    """Process buffer through effect chain."""
    if _effect_chain is None:
        init_effects()

    ctx = EffectContext(
        terminal_width=w,
        terminal_height=h,
        scroll_cam=scroll_cam,
        camera_x=camera_x,
        ticker_height=ticker_h,
        mic_excess=mic_excess,
        grad_offset=grad_offset,
        frame_number=frame_number,
        has_message=has_message,
        items=items or [],
    )
    return _effect_chain.process(buf, ctx)


def get_effect_chain() -> EffectChain | None:
    """Get the effect chain instance."""
    global _effect_chain
    if _effect_chain is None:
        init_effects()
    return _effect_chain
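The vertical fade in `render_ticker_zone` comes from two linear ramps: one growing from the top edge over the first 25% of rows, one from the bottom edge over the last 10%, with each row taking the smaller factor. The same arithmetic in isolation (function name here is illustrative):

```python
def row_fade(r: int, ticker_h: int) -> float:
    # Mirrors the per-row fade computation in render_ticker_zone.
    top_zone = max(1, int(ticker_h * 0.25))
    bot_zone = max(1, int(ticker_h * 0.10))
    top_f = min(1.0, r / top_zone)
    bot_f = min(1.0, (ticker_h - 1 - r) / bot_zone)
    return min(top_f, bot_f)


fades = [round(row_fade(r, 20), 2) for r in range(20)]
print(fades[0], fades[10], fades[19])  # → 0.0 1.0 0.0
```

Rows fade in across the top zone, stay at full brightness through the middle, and fade out across the bottom zone.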
@@ -4,15 +4,21 @@ Gracefully degrades if sounddevice/numpy are unavailable.
 """
 
 import atexit
+from collections.abc import Callable
+from datetime import datetime
 
 try:
-    import sounddevice as _sd
     import numpy as _np
+    import sounddevice as _sd
+
     _HAS_MIC = True
 except Exception:
     _HAS_MIC = False
 
+
+from engine.events import MicLevelEvent
+
 
 class MicMonitor:
     """Background mic stream that exposes current RMS dB level."""
 
@@ -20,6 +26,7 @@ class MicMonitor:
         self.threshold_db = threshold_db
         self._db = -99.0
         self._stream = None
+        self._subscribers: list[Callable[[MicLevelEvent], None]] = []
 
     @property
     def available(self):
@@ -36,16 +43,43 @@ class MicMonitor:
         """dB above threshold (clamped to 0)."""
         return max(0.0, self._db - self.threshold_db)
 
+    def subscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
+        """Register a callback to be called when mic level changes."""
+        self._subscribers.append(callback)
+
+    def unsubscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
+        """Remove a registered callback."""
+        if callback in self._subscribers:
+            self._subscribers.remove(callback)
+
+    def _emit(self, event: MicLevelEvent) -> None:
+        """Emit an event to all subscribers."""
+        for cb in self._subscribers:
+            try:
+                cb(event)
+            except Exception:
+                pass
+
     def start(self):
         """Start background mic stream. Returns True on success, False/None otherwise."""
         if not _HAS_MIC:
             return None
 
         def _cb(indata, frames, t, status):
             rms = float(_np.sqrt(_np.mean(indata**2)))
             self._db = 20 * _np.log10(rms) if rms > 0 else -99.0
+            if self._subscribers:
+                event = MicLevelEvent(
+                    db_level=self._db,
+                    excess_above_threshold=max(0.0, self._db - self.threshold_db),
+                    timestamp=datetime.now(),
+                )
+                self._emit(event)
 
         try:
             self._stream = _sd.InputStream(
-                callback=_cb, channels=1, samplerate=44100, blocksize=2048)
+                callback=_cb, channels=1, samplerate=44100, blocksize=2048
+            )
             self._stream.start()
             atexit.register(self.stop)
             return True
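The mic callback converts the block's RMS amplitude to decibels with `20 * log10(rms)`, flooring silence at -99 dB. A full-scale sine wave (amplitude 1.0) has RMS 1/√2 ≈ 0.707, i.e. about -3 dBFS. A numpy-free sketch of the same math:

```python
import math


def rms_to_db(rms: float) -> float:
    # Same conversion as the mic callback: 20*log10(rms), floor at -99 dB.
    return 20 * math.log10(rms) if rms > 0 else -99.0


print(round(rms_to_db(1 / math.sqrt(2)), 2))  # → -3.01
print(rms_to_db(0.0))  # → -99.0
```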
@@ -1,9 +1,9 @@
 """
-ntfy.sh message poller — standalone, zero internal dependencies.
+ntfy.sh SSE stream listener — standalone, zero internal dependencies.
 Reusable by any visualizer:
 
     from engine.ntfy import NtfyPoller
-    poller = NtfyPoller("https://ntfy.sh/my_topic/json?since=20s&poll=1")
+    poller = NtfyPoller("https://ntfy.sh/my_topic/json")
     poller.start()
     # in render loop:
     msg = poller.get_active_message()
@@ -13,24 +13,47 @@ Reusable by any visualizer:
 """
 
 import json
-import time
 import threading
+import time
 import urllib.request
+from collections.abc import Callable
+from datetime import datetime
+from urllib.parse import parse_qs, urlencode, urlparse, urlunparse
+
+from engine.events import NtfyMessageEvent
 
 
 class NtfyPoller:
-    """Background poller for ntfy.sh topics."""
+    """SSE stream listener for ntfy.sh topics. Messages arrive in ~1s (network RTT)."""
 
-    def __init__(self, topic_url, poll_interval=15, display_secs=30):
+    def __init__(self, topic_url, reconnect_delay=5, display_secs=30):
         self.topic_url = topic_url
-        self.poll_interval = poll_interval
+        self.reconnect_delay = reconnect_delay
         self.display_secs = display_secs
         self._message = None  # (title, body, monotonic_timestamp) or None
         self._lock = threading.Lock()
+        self._subscribers: list[Callable[[NtfyMessageEvent], None]] = []
+
+    def subscribe(self, callback: Callable[[NtfyMessageEvent], None]) -> None:
+        """Register a callback to be called when a message is received."""
+        self._subscribers.append(callback)
+
+    def unsubscribe(self, callback: Callable[[NtfyMessageEvent], None]) -> None:
+        """Remove a registered callback."""
+        if callback in self._subscribers:
+            self._subscribers.remove(callback)
+
+    def _emit(self, event: NtfyMessageEvent) -> None:
+        """Emit an event to all subscribers."""
+        for cb in self._subscribers:
+            try:
+                cb(event)
+            except Exception:
+                pass
 
     def start(self):
-        """Start background polling thread. Returns True."""
-        t = threading.Thread(target=self._poll_loop, daemon=True)
+        """Start background stream thread. Returns True."""
+        t = threading.Thread(target=self._stream_loop, daemon=True)
         t.start()
         return True
 
@@ -50,19 +73,36 @@ class NtfyPoller:
         with self._lock:
             self._message = None
 
-    def _poll_loop(self):
+    def _build_url(self, last_id=None):
+        """Build the stream URL, substituting since= to avoid message replays on reconnect."""
+        parsed = urlparse(self.topic_url)
+        params = parse_qs(parsed.query, keep_blank_values=True)
+        params["since"] = [last_id if last_id else "20s"]
+        new_query = urlencode({k: v[0] for k, v in params.items()})
+        return urlunparse(parsed._replace(query=new_query))
+
+    def _stream_loop(self):
+        last_id = None
         while True:
             try:
+                url = self._build_url(last_id)
                 req = urllib.request.Request(
-                    self.topic_url, headers={"User-Agent": "mainline/0.1"})
-                resp = urllib.request.urlopen(req, timeout=10)
-                for line in resp.read().decode('utf-8', errors='replace').strip().split('\n'):
-                    if not line.strip():
-                        continue
+                    url, headers={"User-Agent": "mainline/0.1"}
+                )
+                # timeout=90 keeps the socket alive through ntfy.sh keepalive heartbeats
+                resp = urllib.request.urlopen(req, timeout=90)
+                while True:
+                    line = resp.readline()
+                    if not line:
+                        break  # server closed connection — reconnect
                     try:
-                        data = json.loads(line)
+                        data = json.loads(line.decode("utf-8", errors="replace"))
                     except json.JSONDecodeError:
                         continue
+                    # Advance cursor on every event (message + keepalive) to
+                    # avoid replaying already-seen events after a reconnect.
+                    if "id" in data:
+                        last_id = data["id"]
                     if data.get("event") == "message":
                         with self._lock:
                             self._message = (
@@ -70,6 +110,13 @@ class NtfyPoller:
                                 data.get("message", ""),
                                 time.monotonic(),
                             )
+                        event = NtfyMessageEvent(
+                            title=data.get("title", ""),
+                            body=data.get("message", ""),
+                            message_id=data.get("id"),
+                            timestamp=datetime.now(),
+                        )
+                        self._emit(event)
             except Exception:
                 pass
-            time.sleep(self.poll_interval)
+            time.sleep(self.reconnect_delay)
611  engine/pipeline.py  (new file)
@@ -0,0 +1,611 @@
"""
Pipeline introspection - generates self-documenting diagrams of the render pipeline.

Pipeline Architecture:
- Sources: Data providers (RSS, Poetry, Ntfy, Mic) - static or dynamic
- Fetch: Retrieve data from sources
- Prepare: Transform raw data (make_block, strip_tags, translate)
- Scroll: Camera-based viewport rendering (ticker zone, message overlay)
- Effects: Post-processing chain (noise, fade, glitch, firehose, hud)
- Render: Final line rendering and layout
- Display: Output backends (terminal, pygame, websocket, sixel, kitty)

Key abstractions:
- DataSource: Sources can be static (cached) or dynamic (idempotent fetch)
- Camera: Viewport controller (vertical, horizontal, omni, floating, trace)
- EffectChain: Ordered effect processing pipeline
- Display: Pluggable output backends
- SourceRegistry: Source discovery and management
- AnimationController: Time-based parameter animation
- Preset: Package of initial params + animation for demo modes
"""

from __future__ import annotations

from dataclasses import dataclass


@dataclass
class PipelineNode:
    """Represents a node in the pipeline."""

    name: str
    module: str
    class_name: str | None = None
    func_name: str | None = None
    description: str = ""
    inputs: list[str] | None = None
    outputs: list[str] | None = None
    metrics: dict | None = None  # Performance metrics (avg_ms, min_ms, max_ms)


class PipelineIntrospector:
    """Introspects the render pipeline and generates documentation."""

    def __init__(self):
        self.nodes: list[PipelineNode] = []

    def add_node(self, node: PipelineNode) -> None:
        self.nodes.append(node)

    def generate_mermaid_flowchart(self) -> str:
        """Generate a Mermaid flowchart of the pipeline."""
        lines = ["```mermaid", "flowchart TD"]

        subgraph_groups = {
            "Sources": [],
            "Fetch": [],
            "Prepare": [],
            "Scroll": [],
            "Effects": [],
            "Display": [],
            "Async": [],
            "Animation": [],
            "Viz": [],
        }

        other_nodes = []

        for node in self.nodes:
            node_id = node.name.replace("-", "_").replace(" ", "_").replace(":", "_")
            label = node.name
            if node.class_name:
                label = f"{node.name}\\n({node.class_name})"
            elif node.func_name:
                label = f"{node.name}\\n({node.func_name})"

            if node.description:
                label += f"\\n{node.description}"

            if node.metrics:
                avg = node.metrics.get("avg_ms", 0)
                if avg > 0:
                    label += f"\\n⏱ {avg:.1f}ms"
                impact = node.metrics.get("impact_pct", 0)
                if impact > 0:
                    label += f" ({impact:.0f}%)"

            node_entry = f'        {node_id}["{label}"]'

            if "DataSource" in node.name or "SourceRegistry" in node.name:
                subgraph_groups["Sources"].append(node_entry)
            elif "fetch" in node.name.lower():
                subgraph_groups["Fetch"].append(node_entry)
            elif (
                "make_block" in node.name
                or "strip_tags" in node.name
                or "translate" in node.name
            ):
                subgraph_groups["Prepare"].append(node_entry)
            elif (
                "StreamController" in node.name
                or "render_ticker" in node.name
                or "render_message" in node.name
                or "Camera" in node.name
            ):
                subgraph_groups["Scroll"].append(node_entry)
            elif "Effect" in node.name or "effect" in node.module:
                subgraph_groups["Effects"].append(node_entry)
            elif "Display:" in node.name:
                subgraph_groups["Display"].append(node_entry)
            elif "Ntfy" in node.name or "Mic" in node.name:
                subgraph_groups["Async"].append(node_entry)
            elif "Animation" in node.name or "Preset" in node.name:
                subgraph_groups["Animation"].append(node_entry)
            elif "pipeline_viz" in node.module or "CameraLarge" in node.name:
                subgraph_groups["Viz"].append(node_entry)
            else:
                other_nodes.append(node_entry)

        for group_name, nodes in subgraph_groups.items():
            if nodes:
                lines.append(f"    subgraph {group_name}")
                for node in nodes:
                    lines.append(node)
                lines.append("    end")

        for node in other_nodes:
            lines.append(node)

        lines.append("")

        for node in self.nodes:
            node_id = node.name.replace("-", "_").replace(" ", "_").replace(":", "_")
            if node.inputs:
                for inp in node.inputs:
                    inp_id = inp.replace("-", "_").replace(" ", "_").replace(":", "_")
                    lines.append(f"    {inp_id} --> {node_id}")

        lines.append("```")
        return "\n".join(lines)

    def generate_mermaid_sequence(self) -> str:
        """Generate a Mermaid sequence diagram of message flow."""
        lines = ["```mermaid", "sequenceDiagram"]

        lines.append("    participant Sources")
        lines.append("    participant Fetch")
        lines.append("    participant Scroll")
        lines.append("    participant Effects")
        lines.append("    participant Display")

        lines.append("    Sources->>Fetch: headlines")
        lines.append("    Fetch->>Scroll: content blocks")
        lines.append("    Scroll->>Effects: buffer")
        lines.append("    Effects->>Effects: process chain")
        lines.append("    Effects->>Display: rendered buffer")

        lines.append("```")
        return "\n".join(lines)

    def generate_mermaid_state(self) -> str:
        """Generate a Mermaid state diagram of camera modes."""
        lines = ["```mermaid", "stateDiagram-v2"]

        lines.append("    [*] --> Vertical")
        lines.append("    Vertical --> Horizontal: set_mode()")
        lines.append("    Horizontal --> Omni: set_mode()")
        lines.append("    Omni --> Floating: set_mode()")
        lines.append("    Floating --> Trace: set_mode()")
        lines.append("    Trace --> Vertical: set_mode()")

        lines.append("    state Vertical {")
        lines.append("        [*] --> ScrollUp")
        lines.append("        ScrollUp --> ScrollUp: +y each frame")
        lines.append("    }")

        lines.append("    state Horizontal {")
        lines.append("        [*] --> ScrollLeft")
        lines.append("        ScrollLeft --> ScrollLeft: +x each frame")
        lines.append("    }")

        lines.append("    state Omni {")
        lines.append("        [*] --> Diagonal")
        lines.append("        Diagonal --> Diagonal: +x, +y")
        lines.append("    }")

        lines.append("    state Floating {")
        lines.append("        [*] --> Bobbing")
        lines.append("        Bobbing --> Bobbing: sin(time)")
        lines.append("    }")

        lines.append("    state Trace {")
        lines.append("        [*] --> FollowPath")
        lines.append("        FollowPath --> FollowPath: node by node")
        lines.append("    }")

        lines.append("```")
        return "\n".join(lines)

    def generate_full_diagram(self) -> str:
        """Generate full pipeline documentation."""
        lines = [
            "# Render Pipeline",
            "",
            "## Data Flow",
            "",
            self.generate_mermaid_flowchart(),
            "",
            "## Message Sequence",
            "",
            self.generate_mermaid_sequence(),
            "",
            "## Camera States",
            "",
            self.generate_mermaid_state(),
        ]
        return "\n".join(lines)

    def introspect_sources(self) -> None:
        """Introspect data sources."""
        from engine import sources

        for name in dir(sources):
            obj = getattr(sources, name)
            if isinstance(obj, dict):
                self.add_node(
                    PipelineNode(
                        name=f"Data Source: {name}",
                        module="engine.sources",
                        description=f"{len(obj)} feeds configured",
                    )
                )

    def introspect_sources_v2(self) -> None:
        """Introspect data sources v2 (new abstraction)."""
        from engine.sources_v2 import SourceRegistry, init_default_sources

        init_default_sources()
        SourceRegistry()

        self.add_node(
            PipelineNode(
                name="SourceRegistry",
                module="engine.sources_v2",
                class_name="SourceRegistry",
                description="Source discovery and management",
            )
        )

        for name, desc in [
            ("HeadlinesDataSource", "RSS feed headlines"),
            ("PoetryDataSource", "Poetry DB"),
            ("PipelineDataSource", "Pipeline viz (dynamic)"),
        ]:
            self.add_node(
                PipelineNode(
                    name=f"DataSource: {name}",
                    module="engine.sources_v2",
                    class_name=name,
                    description=f"{desc}",
                )
            )

    def introspect_prepare(self) -> None:
        """Introspect prepare layer (transformation)."""
        self.add_node(
            PipelineNode(
                name="make_block",
                module="engine.render",
                func_name="make_block",
                description="Transform headline into display block",
                inputs=["title", "source", "timestamp", "width"],
                outputs=["block"],
            )
        )

        self.add_node(
            PipelineNode(
                name="strip_tags",
                module="engine.filter",
                func_name="strip_tags",
                description="Remove HTML tags from content",
                inputs=["html"],
                outputs=["plain_text"],
            )
        )

        self.add_node(
            PipelineNode(
                name="translate_headline",
                module="engine.translate",
                func_name="translate_headline",
                description="Translate headline to target language",
                inputs=["title", "target_lang"],
                outputs=["translated_title"],
            )
        )

    def introspect_fetch(self) -> None:
        """Introspect fetch layer."""
        self.add_node(
            PipelineNode(
                name="fetch_all",
                module="engine.fetch",
                func_name="fetch_all",
                description="Fetch RSS feeds",
                outputs=["items"],
            )
        )

        self.add_node(
            PipelineNode(
                name="fetch_poetry",
                module="engine.fetch",
                func_name="fetch_poetry",
                description="Fetch Poetry DB",
                outputs=["items"],
            )
        )

    def introspect_scroll(self) -> None:
        """Introspect scroll engine."""
        self.add_node(
            PipelineNode(
                name="StreamController",
                module="engine.controller",
                class_name="StreamController",
                description="Main render loop orchestrator",
                inputs=["items", "ntfy_poller", "mic_monitor", "display"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="render_ticker_zone",
                module="engine.layers",
                func_name="render_ticker_zone",
                description="Render scrolling ticker content",
                inputs=["active", "camera"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="render_message_overlay",
                module="engine.layers",
                func_name="render_message_overlay",
                description="Render ntfy message overlay",
                inputs=["msg", "width", "height"],
                outputs=["overlay", "cache"],
            )
        )

    def introspect_render(self) -> None:
        """Introspect render layer."""
        self.add_node(
            PipelineNode(
                name="big_wrap",
                module="engine.render",
                func_name="big_wrap",
                description="Word-wrap text to width",
                inputs=["text", "width"],
                outputs=["lines"],
            )
        )

        self.add_node(
            PipelineNode(
                name="lr_gradient",
                module="engine.render",
                func_name="lr_gradient",
                description="Apply left-right gradient to lines",
                inputs=["lines", "position"],
                outputs=["styled_lines"],
            )
        )

    def introspect_async_sources(self) -> None:
        """Introspect async data sources (ntfy, mic)."""
        self.add_node(
            PipelineNode(
                name="NtfyPoller",
                module="engine.ntfy",
                class_name="NtfyPoller",
                description="Poll ntfy for messages (async)",
                inputs=["topic"],
                outputs=["message"],
            )
        )

        self.add_node(
            PipelineNode(
                name="MicMonitor",
                module="engine.mic",
                class_name="MicMonitor",
                description="Monitor microphone input (async)",
                outputs=["audio_level"],
            )
        )

    def introspect_eventbus(self) -> None:
        """Introspect event bus for decoupled communication."""
        self.add_node(
            PipelineNode(
                name="EventBus",
                module="engine.eventbus",
                class_name="EventBus",
                description="Thread-safe event publishing",
                inputs=["event"],
                outputs=["subscribers"],
            )
        )

    def introspect_animation(self) -> None:
        """Introspect animation system."""
        self.add_node(
            PipelineNode(
                name="AnimationController",
                module="engine.animation",
                class_name="AnimationController",
                description="Time-based parameter animation",
                inputs=["dt"],
                outputs=["params"],
            )
        )

        self.add_node(
            PipelineNode(
                name="Preset",
                module="engine.animation",
                class_name="Preset",
                description="Package of initial params + animation",
            )
        )

    def introspect_pipeline_viz(self) -> None:
        """Introspect pipeline visualization."""
        self.add_node(
            PipelineNode(
                name="generate_large_network_viewport",
                module="engine.pipeline_viz",
                func_name="generate_large_network_viewport",
                description="Large animated network visualization",
                inputs=["viewport_w", "viewport_h", "frame"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="CameraLarge",
                module="engine.pipeline_viz",
                class_name="CameraLarge",
                description="Large grid camera (trace mode)",
            )
        )

    def introspect_camera(self) -> None:
        """Introspect camera system."""
        self.add_node(
            PipelineNode(
                name="Camera",
                module="engine.camera",
                class_name="Camera",
                description="Viewport position controller",
                inputs=["dt"],
                outputs=["x", "y"],
            )
        )

    def introspect_effects(self) -> None:
        """Introspect effect system."""
        self.add_node(
            PipelineNode(
                name="EffectChain",
                module="engine.effects",
                class_name="EffectChain",
                description="Process effects in sequence",
                inputs=["buffer", "context"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="EffectRegistry",
                module="engine.effects",
                class_name="EffectRegistry",
                description="Manage effect plugins",
            )
        )

    def introspect_display(self) -> None:
        """Introspect display backends."""
        from engine.display import DisplayRegistry

        DisplayRegistry.initialize()
        backends = DisplayRegistry.list_backends()

        for backend in backends:
            self.add_node(
                PipelineNode(
                    name=f"Display: {backend}",
                    module="engine.display.backends",
                    class_name=f"{backend.title()}Display",
                    description=f"Render to {backend}",
                    inputs=["buffer"],
                )
            )

    def introspect_new_pipeline(self, pipeline=None) -> None:
        """Introspect new unified pipeline stages with metrics.

        Args:
            pipeline: Optional Pipeline instance to collect metrics from
        """

        stages_info = [
            (
                "ItemsSource",
                "engine.pipeline.adapters",
                "ItemsStage",
                "Provides pre-fetched items",
            ),
            (
                "Render",
                "engine.pipeline.adapters",
                "RenderStage",
                "Renders items to buffer",
            ),
            (
                "Effect",
                "engine.pipeline.adapters",
                "EffectPluginStage",
                "Applies effect",
            ),
            (
                "Display",
                "engine.pipeline.adapters",
                "DisplayStage",
                "Outputs to display",
            ),
        ]

        metrics = None
        if pipeline and hasattr(pipeline, "get_metrics_summary"):
            metrics = pipeline.get_metrics_summary()
            if "error" in metrics:
                metrics = None

        total_avg = metrics.get("pipeline", {}).get("avg_ms", 0) if metrics else 0

        for stage_name, module, class_name, desc in stages_info:
            node_metrics = None
            if metrics and "stages" in metrics:
                for name, stats in metrics["stages"].items():
                    if stage_name.lower() in name.lower():
                        impact_pct = (
                            (stats.get("avg_ms", 0) / total_avg * 100)
                            if total_avg > 0
                            else 0
                        )
                        node_metrics = {
                            "avg_ms": stats.get("avg_ms", 0),
                            "min_ms": stats.get("min_ms", 0),
                            "max_ms": stats.get("max_ms", 0),
                            "impact_pct": impact_pct,
                        }
                        break

            self.add_node(
                PipelineNode(
                    name=f"Pipeline: {stage_name}",
                    module=module,
                    class_name=class_name,
                    description=desc,
                    inputs=["data"],
                    outputs=["data"],
                    metrics=node_metrics,
                )
            )

    def run(self) -> str:
        """Run full introspection."""
        self.introspect_sources()
        self.introspect_sources_v2()
        self.introspect_fetch()
        self.introspect_prepare()
        self.introspect_scroll()
        self.introspect_render()
        self.introspect_camera()
        self.introspect_effects()
        self.introspect_display()
        self.introspect_async_sources()
        self.introspect_eventbus()
        self.introspect_animation()
        self.introspect_pipeline_viz()

        return self.generate_full_diagram()


def generate_pipeline_diagram() -> str:
    """Generate a self-documenting pipeline diagram."""
    introspector = PipelineIntrospector()
    return introspector.run()


if __name__ == "__main__":
    print(generate_pipeline_diagram())
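`generate_mermaid_flowchart` above buckets nodes into named subgraphs by matching on their names, then emits Mermaid `flowchart TD` text. A trimmed, self-contained sketch of that grouping idea (the `mermaid_flowchart` helper and its predicate-based `groups` argument are illustrative, not the real API):

```python
def mermaid_flowchart(nodes, groups):
    """nodes: list of (name, description); groups: {group_name: predicate}."""
    lines = ["flowchart TD"]
    buckets = {g: [] for g in groups}
    other = []
    for name, desc in nodes:
        # Mermaid node ids must avoid spaces/punctuation, so sanitize the name.
        node_id = name.replace("-", "_").replace(" ", "_").replace(":", "_")
        entry = f'        {node_id}["{name}\\n{desc}"]'
        for group, pred in groups.items():
            if pred(name):  # first matching group wins
                buckets[group].append(entry)
                break
        else:
            other.append(entry)
    for group, entries in buckets.items():
        if entries:
            lines.append(f"    subgraph {group}")
            lines.extend(entries)
            lines.append("    end")
    lines.extend(other)
    return "\n".join(lines)


chart = mermaid_flowchart(
    [("fetch_all", "Fetch RSS feeds"), ("EffectChain", "Process effects")],
    {"Fetch": lambda n: "fetch" in n.lower(),
     "Effects": lambda n: "Effect" in n},
)
print(chart.splitlines()[1])  # → "    subgraph Fetch"
```

The `\\n` inside the f-string emits a literal `\n`, which Mermaid renders as a line break inside the node label.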
107 engine/pipeline/__init__.py Normal file
@@ -0,0 +1,107 @@
"""
Unified Pipeline Architecture.

This module provides a clean, dependency-managed pipeline system:
- Stage: Base class for all pipeline components
- Pipeline: DAG-based execution orchestrator
- PipelineParams: Runtime configuration for animation
- PipelinePreset: Pre-configured pipeline configurations
- StageRegistry: Unified registration for all stage types

The pipeline architecture supports:
- Sources: Data providers (headlines, poetry, pipeline viz)
- Effects: Post-processors (noise, fade, glitch, hud)
- Displays: Output backends (terminal, pygame, websocket)
- Cameras: Viewport controllers (vertical, horizontal, omni)

Example:
    from engine.pipeline import Pipeline, PipelineConfig, StageRegistry

    pipeline = Pipeline(PipelineConfig(source="headlines", display="terminal"))
    pipeline.add_stage("source", StageRegistry.create("source", "headlines"))
    pipeline.add_stage("display", StageRegistry.create("display", "terminal"))
    pipeline.build().initialize()

    result = pipeline.execute(initial_data)
"""

from engine.pipeline.controller import (
    Pipeline,
    PipelineConfig,
    PipelineRunner,
    create_default_pipeline,
    create_pipeline_from_params,
)
from engine.pipeline.core import (
    PipelineContext,
    Stage,
    StageConfig,
    StageError,
    StageResult,
)
from engine.pipeline.params import (
    DEFAULT_HEADLINE_PARAMS,
    DEFAULT_PIPELINE_PARAMS,
    DEFAULT_PYGAME_PARAMS,
    PipelineParams,
)
from engine.pipeline.presets import (
    DEMO_PRESET,
    FIREHOSE_PRESET,
    PIPELINE_VIZ_PRESET,
    POETRY_PRESET,
    PRESETS,
    SIXEL_PRESET,
    WEBSOCKET_PRESET,
    PipelinePreset,
    create_preset_from_params,
    get_preset,
    list_presets,
)
from engine.pipeline.registry import (
    StageRegistry,
    discover_stages,
    register_camera,
    register_display,
    register_effect,
    register_source,
)

__all__ = [
    # Core
    "Stage",
    "StageConfig",
    "StageError",
    "StageResult",
    "PipelineContext",
    # Controller
    "Pipeline",
    "PipelineConfig",
    "PipelineRunner",
    "create_default_pipeline",
    "create_pipeline_from_params",
    # Params
    "PipelineParams",
    "DEFAULT_HEADLINE_PARAMS",
    "DEFAULT_PIPELINE_PARAMS",
    "DEFAULT_PYGAME_PARAMS",
    # Presets
    "PipelinePreset",
    "PRESETS",
    "DEMO_PRESET",
    "POETRY_PRESET",
    "PIPELINE_VIZ_PRESET",
    "WEBSOCKET_PRESET",
    "SIXEL_PRESET",
    "FIREHOSE_PRESET",
    "get_preset",
    "list_presets",
    "create_preset_from_params",
    # Registry
    "StageRegistry",
    "discover_stages",
    "register_source",
    "register_effect",
    "register_display",
    "register_camera",
]
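The package re-exports a `StageRegistry` plus `register_source`/`register_effect`/`register_display`/`register_camera` helpers, i.e. one registry keyed by (category, name). A minimal self-contained sketch of that unified registration pattern, with assumed method shapes (the real API lives in `engine.pipeline.registry`):

```python
class StageRegistry:
    """Single registry for every stage category, keyed by (category, name)."""

    _factories: dict = {}

    @classmethod
    def register(cls, category: str, name: str, factory) -> None:
        cls._factories[(category, name)] = factory

    @classmethod
    def create(cls, category: str, name: str, **kwargs):
        try:
            return cls._factories[(category, name)](**kwargs)
        except KeyError:
            raise ValueError(f"no {category} stage named {name!r}") from None


def register_effect(name):
    """Decorator form, e.g. @register_effect("noise") on a stage class."""
    def wrap(factory):
        StageRegistry.register("effect", name, factory)
        return factory
    return wrap


@register_effect("noise")
class NoiseEffect:
    def __init__(self, level: float = 0.1):
        self.level = level


fx = StageRegistry.create("effect", "noise", level=0.5)
print(type(fx).__name__, fx.level)  # → NoiseEffect 0.5
```

Keeping one registry rather than four lets `discover_stages` enumerate every installed stage type in one pass, and unknown names fail loudly at creation time.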
299 engine/pipeline/adapters.py Normal file
@@ -0,0 +1,299 @@
|
|||||||
|
"""
|
||||||
|
Stage adapters - Bridge existing components to the Stage interface.
|
||||||
|
|
||||||
|
This module provides adapters that wrap existing components
|
||||||
|
(EffectPlugin, Display, DataSource, Camera) as Stage implementations.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import random
|
||||||
|
from typing import Any
|
||||||
|
|
||||||
|
from engine.pipeline.core import PipelineContext, Stage
|
||||||
|
|
||||||
|
|
||||||
|
class RenderStage(Stage):
|
||||||
|
"""Stage that renders items to a text buffer for display.
|
||||||
|
|
||||||
|
This mimics the old demo's render pipeline:
|
||||||
|
- Selects headlines and renders them to blocks
|
||||||
|
- Applies camera scroll position
|
||||||
|
- Adds firehose layer if enabled
|
||||||
|
"""
|
||||||
|
|
||||||
|
def __init__(
|
||||||
|
self,
|
||||||
|
items: list,
|
||||||
|
width: int = 80,
|
||||||
|
height: int = 24,
|
||||||
|
camera_speed: float = 1.0,
|
||||||
|
camera_mode: str = "vertical",
|
||||||
|
firehose_enabled: bool = False,
|
||||||
|
name: str = "render",
|
||||||
|
):
|
||||||
|
self.name = name
|
||||||
|
self.category = "render"
|
||||||
|
self.optional = False
|
||||||
|
self._items = items
|
||||||
|
self._width = width
|
||||||
|
self._height = height
|
||||||
|
self._camera_speed = camera_speed
|
||||||
|
self._camera_mode = camera_mode
|
||||||
|
self._firehose_enabled = firehose_enabled
|
||||||
|
|
||||||
|
self._camera_y = 0.0
|
||||||
|
self._camera_x = 0
|
||||||
|
self._scroll_accum = 0.0
|
||||||
|
self._ticker_next_y = 0
|
||||||
|
self._active: list = []
|
||||||
|
self._seen: set = set()
|
||||||
|
self._pool: list = list(items)
|
||||||
|
self._noise_cache: dict = {}
|
||||||
|
self._frame_count = 0
|
||||||
|
|
||||||
|
@property
|
||||||
|
def capabilities(self) -> set[str]:
|
||||||
|
return {"render.output"}
|
||||||
|
|
||||||
|
@property
|
||||||
|
def dependencies(self) -> set[str]:
|
||||||
|
return {"source.items"}
|
||||||
|
|
||||||
|
def init(self, ctx: PipelineContext) -> bool:
|
||||||
|
random.shuffle(self._pool)
|
||||||
|
return True
|
||||||
|
|
||||||
|
def process(self, data: Any, ctx: PipelineContext) -> Any:
|
||||||
|
"""Render items to a text buffer."""
|
||||||
|
from engine.effects import next_headline
|
||||||
|
from engine.layers import render_firehose, render_ticker_zone
|
||||||
|
from engine.render import make_block
|
||||||
|
|
||||||
|
items = data or self._items
|
||||||
|
w = ctx.params.viewport_width if ctx.params else self._width
|
||||||
|
h = ctx.params.viewport_height if ctx.params else self._height
|
||||||
|
camera_speed = ctx.params.camera_speed if ctx.params else self._camera_speed
|
||||||
|
firehose = ctx.params.firehose_enabled if ctx.params else self._firehose_enabled
|
||||||
|
|
||||||
|
scroll_step = 0.5 / (camera_speed * 10)
|
||||||
|
self._scroll_accum += scroll_step
|
||||||
|
|
||||||
|
GAP = 3
|
||||||
|
|
||||||
|
while self._scroll_accum >= scroll_step:
|
||||||
|
self._scroll_accum -= scroll_step
|
||||||
|
self._camera_y += 1.0
|
||||||
|
|
||||||
|
while (
|
||||||
|
self._ticker_next_y < int(self._camera_y) + h + 10
|
||||||
|
and len(self._active) < 50
|
||||||
|
):
|
||||||
|
t, src, ts = next_headline(self._pool, items, self._seen)
|
||||||
|
ticker_content, hc, midx = make_block(t, src, ts, w)
|
||||||
|
self._active.append((ticker_content, hc, self._ticker_next_y, midx))
|
||||||
|
self._ticker_next_y += len(ticker_content) + GAP
|
||||||
|
|
||||||
|
self._active = [
|
||||||
|
(c, hc, by, mi)
|
||||||
|
for c, hc, by, mi in self._active
|
||||||
|
if by + len(c) > int(self._camera_y)
|
||||||
|
]
|
||||||
|
for k in list(self._noise_cache):
|
||||||
|
if k < int(self._camera_y):
|
||||||
|
del self._noise_cache[k]
|
||||||
|
|
||||||
|
grad_offset = (self._frame_count * 0.01) % 1.0
|
||||||
|
|
||||||
|
buf, self._noise_cache = render_ticker_zone(
|
||||||
|
self._active,
|
||||||
|
scroll_cam=int(self._camera_y),
|
||||||
|
camera_x=self._camera_x,
|
||||||
|
ticker_h=h,
|
||||||
|
w=w,
|
||||||
|
noise_cache=self._noise_cache,
|
||||||
|
grad_offset=grad_offset,
|
||||||
|
)
|
||||||
|
|
||||||
|
if firehose:
|
||||||
|
firehose_buf = render_firehose(items, w, 0, h)
|
||||||
|
buf.extend(firehose_buf)
|
||||||
|
|
||||||
|
self._frame_count += 1
|
||||||
|
return buf
|
||||||
|
|
||||||
|
|
||||||
|
class EffectPluginStage(Stage):
|
||||||
|
"""Adapter wrapping EffectPlugin as a Stage."""
|
||||||
|
|
||||||
|
def __init__(self, effect_plugin, name: str = "effect"):
|
||||||
|
self._effect = effect_plugin
|
||||||
|
self.name = name
|
||||||
|
self.category = "effect"
|
||||||
|
self.optional = False
|
||||||
|
|
||||||
|
@property
|
||||||
|
def capabilities(self) -> set[str]:
|
||||||
|
return {f"effect.{self.name}"}
|
||||||
|
|
||||||
|
@property
|
||||||
|
def dependencies(self) -> set[str]:
|
||||||
|
return set()
|
||||||
|
|
||||||
|
def process(self, data: Any, ctx: PipelineContext) -> Any:
|
||||||
|
"""Process data through the effect."""
|
||||||
|
if data is None:
|
||||||
|
return None
|
||||||
|
from engine.effects import EffectContext
|
||||||
|
|
||||||
|
w = ctx.params.viewport_width if ctx.params else 80
|
||||||
|
h = ctx.params.viewport_height if ctx.params else 24
|
||||||
|
frame = ctx.params.frame_number if ctx.params else 0
|
||||||
|
|
||||||
|
effect_ctx = EffectContext(
|
||||||
|
terminal_width=w,
|
||||||
|
terminal_height=h,
|
||||||
|
scroll_cam=0,
|
||||||
|
ticker_height=h,
|
||||||
|
camera_x=0,
|
||||||
|
mic_excess=0.0,
|
||||||
|
grad_offset=(frame * 0.01) % 1.0,
|
||||||
|
frame_number=frame,
|
||||||
|
has_message=False,
|
||||||
|
items=ctx.get("items", []),
|
||||||
|
)
|
||||||
|
return self._effect.process(data, effect_ctx)
|
||||||
|
|
||||||
|
|
||||||
|
class DisplayStage(Stage):
|
||||||
|
"""Adapter wrapping Display as a Stage."""
|
||||||
|
|
||||||
|
def __init__(self, display, name: str = "terminal"):
|
||||||
|
self._display = display
|
||||||
|
self.name = name
|
||||||
|
self.category = "display"
|
||||||
|
self.optional = False
|
||||||
|
|
||||||
|
@property
|
||||||
|
def capabilities(self) -> set[str]:
|
||||||
|
return {"display.output"}
|
||||||
|
|
||||||
|
@property
|
||||||
|
def dependencies(self) -> set[str]:
|
||||||
|
return set()
|
||||||
|
|
||||||
|
def init(self, ctx: PipelineContext) -> bool:
|
||||||
|
w = ctx.params.viewport_width if ctx.params else 80
|
||||||
|
h = ctx.params.viewport_height if ctx.params else 24
|
||||||
|
result = self._display.init(w, h, reuse=False)
|
||||||
|
return result is not False
|
||||||
|
|
||||||
|
def process(self, data: Any, ctx: PipelineContext) -> Any:
|
||||||
|
"""Output data to display."""
|
||||||
|
if data is not None:
|
||||||
|
self._display.show(data)
|
||||||
|
return data
|
||||||
|
|
||||||
|
def cleanup(self) -> None:
|
||||||
|
self._display.cleanup()
|
||||||
|
|
||||||
|
|
class DataSourceStage(Stage):
    """Adapter wrapping DataSource as a Stage."""

    def __init__(self, data_source, name: str = "headlines"):
        self._source = data_source
        self.name = name
        self.category = "source"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {f"source.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Fetch data from source."""
        if hasattr(self._source, "get_items"):
            return self._source.get_items()
        return data


class ItemsStage(Stage):
    """Stage that holds pre-fetched items and provides them to the pipeline."""

    def __init__(self, items, name: str = "headlines"):
        self._items = items
        self.name = name
        self.category = "source"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {f"source.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Return the pre-fetched items."""
        return self._items


class CameraStage(Stage):
    """Adapter wrapping Camera as a Stage."""

    def __init__(self, camera, name: str = "vertical"):
        self._camera = camera
        self.name = name
        self.category = "camera"
        self.optional = True

    @property
    def capabilities(self) -> set[str]:
        return {"camera"}

    @property
    def dependencies(self) -> set[str]:
        return {"source.items"}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Apply camera transformation to data."""
        if data is None:
            return None
        if hasattr(self._camera, "apply"):
            return self._camera.apply(
                data, ctx.params.viewport_width if ctx.params else 80
            )
        return data

    def cleanup(self) -> None:
        if hasattr(self._camera, "reset"):
            self._camera.reset()


def create_stage_from_display(display, name: str = "terminal") -> DisplayStage:
    """Create a Stage from a Display instance."""
    return DisplayStage(display, name)


def create_stage_from_effect(effect_plugin, name: str) -> EffectPluginStage:
    """Create a Stage from an EffectPlugin."""
    return EffectPluginStage(effect_plugin, name)


def create_stage_from_source(data_source, name: str = "headlines") -> DataSourceStage:
    """Create a Stage from a DataSource."""
    return DataSourceStage(data_source, name)


def create_stage_from_camera(camera, name: str = "vertical") -> CameraStage:
    """Create a Stage from a Camera."""
    return CameraStage(camera, name)


def create_items_stage(items, name: str = "headlines") -> ItemsStage:
    """Create a Stage that holds pre-fetched items."""
    return ItemsStage(items, name)
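The adapters above all follow the same duck-typed wrapping pattern: the wrapped object is probed with `hasattr` rather than an isinstance check, so any object exposing `get_items` can act as a source. A minimal standalone sketch of that pattern (the `TickerSource` class and its item strings are invented for illustration, not part of this repo):

```python
from typing import Any


class TickerSource:
    """Hypothetical source: any object with get_items() qualifies."""

    def get_items(self) -> list[str]:
        return ["headline one", "headline two"]


class SourceAdapter:
    """Mirrors DataSourceStage's duck-typed fetch logic."""

    def __init__(self, source: Any):
        self._source = source

    def process(self, data: Any) -> Any:
        # Prefer the wrapped source's own items; otherwise pass data through.
        if hasattr(self._source, "get_items"):
            return self._source.get_items()
        return data


adapter = SourceAdapter(TickerSource())
print(adapter.process(None))                 # items from the wrapped source
print(SourceAdapter(object()).process("x"))  # passthrough: object() has no get_items
```

The same passthrough-on-missing-method shape is what lets `CameraStage.process` tolerate cameras without an `apply` method.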
engine/pipeline/controller.py  (320 lines, Normal file)
@@ -0,0 +1,320 @@
"""
Pipeline controller - DAG-based pipeline execution.

The Pipeline class orchestrates stages in dependency order, handling
the complete render cycle from source to display.
"""

import time
from dataclasses import dataclass, field
from typing import Any

from engine.pipeline.core import PipelineContext, Stage, StageError, StageResult
from engine.pipeline.params import PipelineParams
from engine.pipeline.registry import StageRegistry


@dataclass
class PipelineConfig:
    """Configuration for a pipeline instance."""

    source: str = "headlines"
    display: str = "terminal"
    camera: str = "vertical"
    effects: list[str] = field(default_factory=list)
    enable_metrics: bool = True


@dataclass
class StageMetrics:
    """Metrics for a single stage execution."""

    name: str
    duration_ms: float
    chars_in: int = 0
    chars_out: int = 0


@dataclass
class FrameMetrics:
    """Metrics for a single frame through the pipeline."""

    frame_number: int
    total_ms: float
    stages: list[StageMetrics] = field(default_factory=list)


class Pipeline:
    """Main pipeline orchestrator.

    Manages the execution of all stages in dependency order,
    handling initialization, processing, and cleanup.
    """

    def __init__(
        self,
        config: PipelineConfig | None = None,
        context: PipelineContext | None = None,
    ):
        self.config = config or PipelineConfig()
        self.context = context or PipelineContext()
        self._stages: dict[str, Stage] = {}
        self._execution_order: list[str] = []
        self._initialized = False

        self._metrics_enabled = self.config.enable_metrics
        self._frame_metrics: list[FrameMetrics] = []
        self._max_metrics_frames = 60
        self._current_frame_number = 0

    def add_stage(self, name: str, stage: Stage) -> "Pipeline":
        """Add a stage to the pipeline."""
        self._stages[name] = stage
        return self

    def remove_stage(self, name: str) -> None:
        """Remove a stage from the pipeline."""
        if name in self._stages:
            del self._stages[name]

    def get_stage(self, name: str) -> Stage | None:
        """Get a stage by name."""
        return self._stages.get(name)

    def build(self) -> "Pipeline":
        """Build execution order based on dependencies."""
        self._execution_order = self._resolve_dependencies()
        self._initialized = True
        return self

    def _resolve_dependencies(self) -> list[str]:
        """Resolve stage execution order using topological sort."""
        ordered = []
        visited = set()
        temp_mark = set()

        def visit(name: str) -> None:
            if name in temp_mark:
                raise StageError(name, "Circular dependency detected")
            if name in visited:
                return

            temp_mark.add(name)
            stage = self._stages.get(name)
            if stage:
                for dep in stage.dependencies:
                    dep_stage = self._stages.get(dep)
                    if dep_stage:
                        visit(dep)

            temp_mark.remove(name)
            visited.add(name)
            ordered.append(name)

        for name in self._stages:
            if name not in visited:
                visit(name)

        return ordered

    def initialize(self) -> bool:
        """Initialize all stages in execution order."""
        for name in self._execution_order:
            stage = self._stages.get(name)
            if stage and not stage.init(self.context) and not stage.optional:
                return False
        return True

    def execute(self, data: Any | None = None) -> StageResult:
        """Execute the pipeline with the given input data."""
        if not self._initialized:
            self.build()

        if not self._initialized:
            return StageResult(
                success=False,
                data=None,
                error="Pipeline not initialized",
            )

        current_data = data
        frame_start = time.perf_counter() if self._metrics_enabled else 0
        stage_timings: list[StageMetrics] = []

        for name in self._execution_order:
            stage = self._stages.get(name)
            if not stage or not stage.is_enabled():
                continue

            stage_start = time.perf_counter() if self._metrics_enabled else 0

            try:
                current_data = stage.process(current_data, self.context)
            except Exception as e:
                if not stage.optional:
                    return StageResult(
                        success=False,
                        data=current_data,
                        error=str(e),
                        stage_name=name,
                    )
                continue

            if self._metrics_enabled:
                stage_duration = (time.perf_counter() - stage_start) * 1000
                chars_in = len(str(data)) if data else 0
                chars_out = len(str(current_data)) if current_data else 0
                stage_timings.append(
                    StageMetrics(
                        name=name,
                        duration_ms=stage_duration,
                        chars_in=chars_in,
                        chars_out=chars_out,
                    )
                )

        if self._metrics_enabled:
            total_duration = (time.perf_counter() - frame_start) * 1000
            self._frame_metrics.append(
                FrameMetrics(
                    frame_number=self._current_frame_number,
                    total_ms=total_duration,
                    stages=stage_timings,
                )
            )
            if len(self._frame_metrics) > self._max_metrics_frames:
                self._frame_metrics.pop(0)
            self._current_frame_number += 1

        return StageResult(success=True, data=current_data)

    def cleanup(self) -> None:
        """Clean up all stages in reverse order."""
        for name in reversed(self._execution_order):
            stage = self._stages.get(name)
            if stage:
                try:
                    stage.cleanup()
                except Exception:
                    pass
        self._stages.clear()
        self._initialized = False

    @property
    def stages(self) -> dict[str, Stage]:
        """Get all stages."""
        return self._stages.copy()

    @property
    def execution_order(self) -> list[str]:
        """Get execution order."""
        return self._execution_order.copy()

    def get_stage_names(self) -> list[str]:
        """Get list of stage names."""
        return list(self._stages.keys())

    def get_metrics_summary(self) -> dict:
        """Get summary of collected metrics."""
        if not self._frame_metrics:
            return {"error": "No metrics collected"}

        total_times = [f.total_ms for f in self._frame_metrics]
        avg_total = sum(total_times) / len(total_times)
        min_total = min(total_times)
        max_total = max(total_times)

        stage_stats: dict[str, dict] = {}
        for frame in self._frame_metrics:
            for stage in frame.stages:
                if stage.name not in stage_stats:
                    stage_stats[stage.name] = {"times": [], "total_chars": 0}
                stage_stats[stage.name]["times"].append(stage.duration_ms)
                stage_stats[stage.name]["total_chars"] += stage.chars_out

        for name, stats in stage_stats.items():
            times = stats["times"]
            stats["avg_ms"] = sum(times) / len(times)
            stats["min_ms"] = min(times)
            stats["max_ms"] = max(times)
            del stats["times"]

        return {
            "frame_count": len(self._frame_metrics),
            "pipeline": {
                "avg_ms": avg_total,
                "min_ms": min_total,
                "max_ms": max_total,
            },
            "stages": stage_stats,
        }

    def reset_metrics(self) -> None:
        """Reset collected metrics."""
        self._frame_metrics.clear()
        self._current_frame_number = 0


class PipelineRunner:
    """High-level pipeline runner with animation support."""

    def __init__(
        self,
        pipeline: Pipeline,
        params: PipelineParams | None = None,
    ):
        self.pipeline = pipeline
        self.params = params or PipelineParams()
        self._running = False

    def start(self) -> bool:
        """Start the pipeline."""
        self._running = True
        return self.pipeline.initialize()

    def step(self, input_data: Any | None = None) -> Any:
        """Execute one pipeline step."""
        self.params.frame_number += 1
        self.pipeline.context.params = self.params
        result = self.pipeline.execute(input_data)
        return result.data if result.success else None

    def stop(self) -> None:
        """Stop and clean up the pipeline."""
        self._running = False
        self.pipeline.cleanup()

    @property
    def is_running(self) -> bool:
        """Check if runner is active."""
        return self._running


def create_pipeline_from_params(params: PipelineParams) -> Pipeline:
    """Create a pipeline from PipelineParams."""
    config = PipelineConfig(
        source=params.source,
        display=params.display,
        camera=params.camera_mode,
        effects=params.effect_order,
    )
    return Pipeline(config=config)


def create_default_pipeline() -> Pipeline:
    """Create a default pipeline with all standard components."""
    from engine.pipeline.adapters import DataSourceStage
    from engine.sources_v2 import HeadlinesDataSource

    pipeline = Pipeline()

    # Add source stage (wrapped as Stage)
    source = HeadlinesDataSource()
    pipeline.add_stage("source", DataSourceStage(source, name="headlines"))

    # Add display stage
    display = StageRegistry.create("display", "terminal")
    if display:
        pipeline.add_stage("display", display)

    return pipeline.build()
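`Pipeline._resolve_dependencies` is a depth-first topological sort: a `temp_mark` set catches cycles mid-descent, `visited` prevents re-processing, and nodes are appended after their dependencies. A self-contained sketch of the same algorithm over a plain name-to-dependencies dict (the stage names here are illustrative only):

```python
def topo_order(deps: dict[str, set[str]]) -> list[str]:
    """Order names so each appears after everything it depends on."""
    ordered: list[str] = []
    visited: set[str] = set()
    temp_mark: set[str] = set()

    def visit(name: str) -> None:
        if name in temp_mark:
            # We re-entered a node still on the current DFS path: a cycle.
            raise ValueError(f"Circular dependency at {name!r}")
        if name in visited:
            return
        temp_mark.add(name)
        for dep in deps.get(name, set()):
            if dep in deps:  # like the controller, skip deps not in the graph
                visit(dep)
        temp_mark.remove(name)
        visited.add(name)
        ordered.append(name)  # post-order: dependencies land first

    for name in deps:
        if name not in visited:
            visit(name)
    return ordered


order = topo_order({"display": {"camera"}, "camera": {"source"}, "source": set()})
# -> ['source', 'camera', 'display']
```

Note that the controller resolves deps against `self._stages` keys and silently skips unknown ones, so a dependency that no registered stage satisfies does not fail the build; it simply never constrains the order.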
engine/pipeline/core.py  (221 lines, Normal file)
@@ -0,0 +1,221 @@
"""
Pipeline core - Unified Stage abstraction and PipelineContext.

This module provides the foundation for a clean, dependency-managed pipeline:
- Stage: Base class for all pipeline components (sources, effects, displays, cameras)
- PipelineContext: Dependency injection context for runtime data exchange
- Capability system: Explicit capability declarations with duck-typing support
"""

from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    from engine.pipeline.params import PipelineParams


@dataclass
class StageConfig:
    """Configuration for a single stage."""

    name: str
    category: str
    enabled: bool = True
    optional: bool = False
    params: dict[str, Any] = field(default_factory=dict)


class Stage(ABC):
    """Abstract base class for all pipeline stages.

    A Stage is a single component in the rendering pipeline. Stages can be:
    - Sources: Data providers (headlines, poetry, pipeline viz)
    - Effects: Post-processors (noise, fade, glitch, hud)
    - Displays: Output backends (terminal, pygame, websocket)
    - Cameras: Viewport controllers (vertical, horizontal, omni)

    Stages declare:
    - capabilities: What they provide to other stages
    - dependencies: What they need from other stages

    Duck-typing is supported: any class with the required methods can act as a Stage.
    """

    name: str
    category: str  # "source", "effect", "display", "camera"
    optional: bool = False  # If True, pipeline continues even if stage fails

    @property
    def capabilities(self) -> set[str]:
        """Return set of capabilities this stage provides.

        Examples:
            - "source.headlines"
            - "effect.noise"
            - "display.output"
            - "camera"
        """
        return {f"{self.category}.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        """Return set of capability names this stage needs.

        Examples:
            - {"display.output"}
            - {"source.headlines"}
            - {"camera"}
        """
        return set()

    def init(self, ctx: "PipelineContext") -> bool:
        """Initialize stage with pipeline context.

        Args:
            ctx: PipelineContext for accessing services

        Returns:
            True if initialization succeeded, False otherwise
        """
        return True

    @abstractmethod
    def process(self, data: Any, ctx: "PipelineContext") -> Any:
        """Process input data and return output.

        Args:
            data: Input data from previous stage (or initial data for first stage)
            ctx: PipelineContext for accessing services and state

        Returns:
            Processed data for next stage
        """
        ...

    def cleanup(self) -> None:  # noqa: B027
        """Clean up resources when pipeline shuts down."""
        pass

    def get_config(self) -> StageConfig:
        """Return current configuration of this stage."""
        return StageConfig(
            name=self.name,
            category=self.category,
            optional=self.optional,
        )

    def set_enabled(self, enabled: bool) -> None:
        """Enable or disable this stage."""
        self._enabled = enabled  # type: ignore[attr-defined]

    def is_enabled(self) -> bool:
        """Check if stage is enabled."""
        return getattr(self, "_enabled", True)


@dataclass
class StageResult:
    """Result of stage processing, including success/failure info."""

    success: bool
    data: Any
    error: str | None = None
    stage_name: str = ""


class PipelineContext:
    """Dependency injection context passed through the pipeline.

    Provides:
    - services: Named services (display, config, event_bus, etc.)
    - state: Runtime state shared between stages
    - params: PipelineParams for animation-driven config

    Services can be injected at construction time or lazily resolved.
    """

    def __init__(
        self,
        services: dict[str, Any] | None = None,
        initial_state: dict[str, Any] | None = None,
    ):
        self.services: dict[str, Any] = services or {}
        self.state: dict[str, Any] = initial_state or {}
        self._params: PipelineParams | None = None

        # Lazy resolvers for common services
        self._lazy_resolvers: dict[str, Callable[[], Any]] = {
            "config": self._resolve_config,
            "event_bus": self._resolve_event_bus,
        }

    def _resolve_config(self) -> Any:
        from engine.config import get_config

        return get_config()

    def _resolve_event_bus(self) -> Any:
        from engine.eventbus import get_event_bus

        return get_event_bus()

    def get(self, key: str, default: Any = None) -> Any:
        """Get a service or state value by key.

        First checks services, then state, then lazy resolution.
        """
        if key in self.services:
            return self.services[key]
        if key in self.state:
            return self.state[key]
        if key in self._lazy_resolvers:
            try:
                return self._lazy_resolvers[key]()
            except Exception:
                return default
        return default

    def set(self, key: str, value: Any) -> None:
        """Set a service or state value."""
        self.services[key] = value

    def set_state(self, key: str, value: Any) -> None:
        """Set a runtime state value."""
        self.state[key] = value

    def get_state(self, key: str, default: Any = None) -> Any:
        """Get a runtime state value."""
        return self.state.get(key, default)

    @property
    def params(self) -> "PipelineParams | None":
        """Get current pipeline params (for animation)."""
        return self._params

    @params.setter
    def params(self, value: "PipelineParams") -> None:
        """Set pipeline params (from animation controller)."""
        self._params = value

    def has_capability(self, capability: str) -> bool:
        """Check if a capability is available."""
        return capability in self.services or capability in self._lazy_resolvers


class StageError(Exception):
    """Raised when a stage fails to process."""

    def __init__(self, stage_name: str, message: str, is_optional: bool = False):
        self.stage_name = stage_name
        self.message = message
        self.is_optional = is_optional
        super().__init__(f"Stage '{stage_name}' failed: {message}")


def create_stage_error(
    stage_name: str, error: Exception, is_optional: bool = False
) -> StageError:
    """Helper to create a StageError from an exception."""
    return StageError(stage_name, str(error), is_optional)
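`PipelineContext.get` resolves a key in a fixed order — explicit services first, then runtime state, then lazy resolvers, then the default — and a resolver that throws is swallowed in favor of the default. A standalone sketch of that lookup chain (the resolver keys and values below are stand-ins, not the real `engine.config`/`engine.eventbus` services):

```python
from collections.abc import Callable
from typing import Any


class MiniContext:
    """Minimal mirror of PipelineContext's get() precedence."""

    def __init__(self) -> None:
        self.services: dict[str, Any] = {}
        self.state: dict[str, Any] = {}
        self._lazy: dict[str, Callable[[], Any]] = {
            "config": lambda: {"fps": 60},  # stand-in lazy resolver
            "broken": lambda: 1 / 0,        # resolver that always fails
        }

    def get(self, key: str, default: Any = None) -> Any:
        if key in self.services:            # 1. explicit services win
            return self.services[key]
        if key in self.state:               # 2. then runtime state
            return self.state[key]
        if key in self._lazy:               # 3. then lazy resolution
            try:
                return self._lazy[key]()
            except Exception:
                return default              # resolver errors are swallowed
        return default                      # 4. finally the default


ctx = MiniContext()
print(ctx.get("config"))                    # lazy resolver fires
ctx.services["config"] = "explicit"
print(ctx.get("config"))                    # service now shadows the resolver
print(ctx.get("broken", default="safe"))    # failing resolver falls back
```

One consequence worth knowing: because `set()` writes into `services`, a value set at runtime permanently shadows any lazy resolver registered under the same key.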
engine/pipeline/params.py  (144 lines, Normal file)
@@ -0,0 +1,144 @@
"""
Pipeline parameters - Runtime configuration layer for animation control.

PipelineParams is the target for AnimationController - animation events
modify these params, which the pipeline then applies to its stages.
"""

from dataclasses import dataclass, field
from typing import Any


@dataclass
class PipelineParams:
    """Runtime configuration for the pipeline.

    This is the canonical config object that AnimationController modifies.
    Stages read from these params to adjust their behavior.
    """

    # Source config
    source: str = "headlines"
    source_refresh_interval: float = 60.0

    # Display config
    display: str = "terminal"

    # Camera config
    camera_mode: str = "vertical"
    camera_speed: float = 1.0
    camera_x: int = 0  # For horizontal scrolling

    # Effect config
    effect_order: list[str] = field(
        default_factory=lambda: ["noise", "fade", "glitch", "firehose", "hud"]
    )
    effect_enabled: dict[str, bool] = field(default_factory=dict)
    effect_intensity: dict[str, float] = field(default_factory=dict)

    # Animation-driven state (set by AnimationController)
    pulse: float = 0.0
    current_effect: str | None = None
    path_progress: float = 0.0

    # Viewport
    viewport_width: int = 80
    viewport_height: int = 24

    # Firehose
    firehose_enabled: bool = False

    # Runtime state
    frame_number: int = 0
    fps: float = 60.0

    def get_effect_config(self, name: str) -> tuple[bool, float]:
        """Get (enabled, intensity) for an effect."""
        enabled = self.effect_enabled.get(name, True)
        intensity = self.effect_intensity.get(name, 1.0)
        return enabled, intensity

    def set_effect_config(self, name: str, enabled: bool, intensity: float) -> None:
        """Set effect configuration."""
        self.effect_enabled[name] = enabled
        self.effect_intensity[name] = intensity

    def is_effect_enabled(self, name: str) -> bool:
        """Check if an effect is enabled."""
        if name not in self.effect_enabled:
            return True  # Default to enabled
        return self.effect_enabled.get(name, True)

    def get_effect_intensity(self, name: str) -> float:
        """Get effect intensity (0.0 to 1.0)."""
        return self.effect_intensity.get(name, 1.0)

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary for serialization."""
        return {
            "source": self.source,
            "display": self.display,
            "camera_mode": self.camera_mode,
            "camera_speed": self.camera_speed,
            "effect_order": self.effect_order,
            "effect_enabled": self.effect_enabled.copy(),
            "effect_intensity": self.effect_intensity.copy(),
            "pulse": self.pulse,
            "current_effect": self.current_effect,
            "viewport_width": self.viewport_width,
            "viewport_height": self.viewport_height,
            "firehose_enabled": self.firehose_enabled,
        }

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "PipelineParams":
        """Create from dictionary."""
        params = cls()
        for key, value in data.items():
            if hasattr(params, key):
                setattr(params, key, value)
        return params

    def copy(self) -> "PipelineParams":
        """Create a copy of this params object."""
        params = PipelineParams()
        params.source = self.source
        params.display = self.display
        params.camera_mode = self.camera_mode
        params.camera_speed = self.camera_speed
        params.camera_x = self.camera_x
        params.effect_order = self.effect_order.copy()
        params.effect_enabled = self.effect_enabled.copy()
        params.effect_intensity = self.effect_intensity.copy()
        params.pulse = self.pulse
        params.current_effect = self.current_effect
        params.path_progress = self.path_progress
        params.viewport_width = self.viewport_width
        params.viewport_height = self.viewport_height
        params.firehose_enabled = self.firehose_enabled
        params.frame_number = self.frame_number
        params.fps = self.fps
        return params


# Default params for different modes
DEFAULT_HEADLINE_PARAMS = PipelineParams(
    source="headlines",
    display="terminal",
    camera_mode="vertical",
    effect_order=["noise", "fade", "glitch", "firehose", "hud"],
)

DEFAULT_PYGAME_PARAMS = PipelineParams(
    source="headlines",
    display="pygame",
    camera_mode="vertical",
    effect_order=["noise", "fade", "glitch", "firehose", "hud"],
)

DEFAULT_PIPELINE_PARAMS = PipelineParams(
    source="pipeline",
    display="pygame",
    camera_mode="trace",
    effect_order=["hud"],  # Just HUD for pipeline viz
)
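Effect toggles in `PipelineParams` are default-on: an effect never touched by `set_effect_config` reports enabled at full intensity, so new effects participate without any explicit opt-in. A minimal sketch of that default-on lookup (field names abbreviated from the real class):

```python
from dataclasses import dataclass, field


@dataclass
class EffectParams:
    """Mirrors PipelineParams' per-effect config dicts."""

    enabled: dict[str, bool] = field(default_factory=dict)
    intensity: dict[str, float] = field(default_factory=dict)

    def get_effect_config(self, name: str) -> tuple[bool, float]:
        # Unconfigured effects are enabled at full intensity.
        return self.enabled.get(name, True), self.intensity.get(name, 1.0)

    def set_effect_config(self, name: str, on: bool, level: float) -> None:
        self.enabled[name] = on
        self.intensity[name] = level


p = EffectParams()
print(p.get_effect_config("glitch"))  # -> (True, 1.0): never configured
p.set_effect_config("glitch", False, 0.3)
print(p.get_effect_config("glitch"))  # -> (False, 0.3): explicit override
```

The trade-off of default-on is that disabling an effect always requires an explicit `False` entry; an empty `effect_enabled` dict means everything in `effect_order` runs.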
engine/pipeline/presets.py  (155 lines, Normal file)
@@ -0,0 +1,155 @@
"""
Pipeline presets - Pre-configured pipeline configurations.

Provides PipelinePreset as a unified preset system that wraps
the existing Preset class from animation.py for backwards compatibility.
"""

from dataclasses import dataclass, field

from engine.animation import Preset as AnimationPreset
from engine.pipeline.params import PipelineParams


@dataclass
class PipelinePreset:
    """Pre-configured pipeline with stages and animation.

    A PipelinePreset packages:
    - Initial params: Starting configuration
    - Stages: List of stage configurations to create
    - Animation: Optional animation controller

    This is the new unified preset that works with the Pipeline class.
    """

    name: str
    description: str = ""
    source: str = "headlines"
    display: str = "terminal"
    camera: str = "vertical"
    effects: list[str] = field(default_factory=list)
    initial_params: PipelineParams | None = None
    animation_preset: AnimationPreset | None = None

    def to_params(self) -> PipelineParams:
        """Convert to PipelineParams."""
        if self.initial_params:
            return self.initial_params.copy()
        params = PipelineParams()
        params.source = self.source
        params.display = self.display
        params.camera_mode = self.camera
        params.effect_order = self.effects.copy()
        return params

    @classmethod
    def from_animation_preset(cls, preset: AnimationPreset) -> "PipelinePreset":
        """Create a PipelinePreset from an existing animation Preset."""
        params = preset.initial_params
        return cls(
            name=preset.name,
            description=preset.description,
            source=params.source,
            display=params.display,
            camera=params.camera_mode,
            effects=params.effect_order.copy(),
            initial_params=params,
            animation_preset=preset,
        )

    def create_animation_controller(self):
        """Create an AnimationController from this preset."""
        if self.animation_preset:
            return self.animation_preset.create_controller()
        return None


# Built-in presets
DEMO_PRESET = PipelinePreset(
    name="demo",
    description="Demo mode with effect cycling and camera modes",
    source="headlines",
    display="terminal",
    camera="vertical",
    effects=["noise", "fade", "glitch", "firehose", "hud"],
)

POETRY_PRESET = PipelinePreset(
    name="poetry",
    description="Poetry feed with subtle effects",
    source="poetry",
    display="terminal",
    camera="vertical",
    effects=["fade", "hud"],
)

PIPELINE_VIZ_PRESET = PipelinePreset(
    name="pipeline",
    description="Pipeline visualization mode",
    source="pipeline",
    display="terminal",
    camera="trace",
    effects=["hud"],
)

WEBSOCKET_PRESET = PipelinePreset(
    name="websocket",
    description="WebSocket display mode",
    source="headlines",
    display="websocket",
    camera="vertical",
    effects=["noise", "fade", "glitch", "hud"],
)

SIXEL_PRESET = PipelinePreset(
    name="sixel",
    description="Sixel graphics display mode",
    source="headlines",
    display="sixel",
    camera="vertical",
    effects=["noise", "fade", "glitch", "hud"],
)

FIREHOSE_PRESET = PipelinePreset(
    name="firehose",
    description="High-speed firehose mode",
    source="headlines",
    display="terminal",
    camera="vertical",
    effects=["noise", "fade", "glitch", "firehose", "hud"],
)


PRESETS: dict[str, PipelinePreset] = {
    "demo": DEMO_PRESET,
    "poetry": POETRY_PRESET,
    "pipeline": PIPELINE_VIZ_PRESET,
    "websocket": WEBSOCKET_PRESET,
    "sixel": SIXEL_PRESET,
    "firehose": FIREHOSE_PRESET,
}


def get_preset(name: str) -> PipelinePreset | None:
    """Get a preset by name."""
    return PRESETS.get(name)


def list_presets() -> list[str]:
    """List all available preset names."""
    return list(PRESETS.keys())


def create_preset_from_params(
    params: PipelineParams, name: str = "custom"
) -> PipelinePreset:
    """Create a preset from PipelineParams."""
    return PipelinePreset(
        name=name,
        source=params.source,
        display=params.display,
        camera=params.camera_mode,
        effects=params.effect_order.copy(),
        initial_params=params,
    )
165
engine/pipeline/registry.py
Normal file
@@ -0,0 +1,165 @@
"""
Stage registry - Unified registration for all pipeline stages.

Provides a single registry for sources, effects, displays, and cameras.
"""

from __future__ import annotations

from engine.pipeline.core import Stage


class StageRegistry:
    """Unified registry for all pipeline stage types."""

    _categories: dict[str, dict[str, type[Stage]]] = {}
    _discovered: bool = False
    _instances: dict[str, Stage] = {}

    @classmethod
    def register(cls, category: str, stage_class: type[Stage]) -> None:
        """Register a stage class in a category.

        Args:
            category: Category name (source, effect, display, camera)
            stage_class: Stage subclass to register
        """
        if category not in cls._categories:
            cls._categories[category] = {}

        # Use class name as key
        key = getattr(stage_class, "__name__", stage_class.__class__.__name__)
        cls._categories[category][key] = stage_class

    @classmethod
    def get(cls, category: str, name: str) -> type[Stage] | None:
        """Get a stage class by category and name."""
        return cls._categories.get(category, {}).get(name)

    @classmethod
    def list(cls, category: str) -> list[str]:
        """List all stage names in a category."""
        return list(cls._categories.get(category, {}).keys())

    @classmethod
    def list_categories(cls) -> list[str]:
        """List all registered categories."""
        return list(cls._categories.keys())

    @classmethod
    def create(cls, category: str, name: str, **kwargs) -> Stage | None:
        """Create a stage instance by category and name."""
        stage_class = cls.get(category, name)
        if stage_class:
            return stage_class(**kwargs)
        return None

    @classmethod
    def create_instance(cls, stage: Stage | type[Stage], **kwargs) -> Stage:
        """Create an instance from a stage class or return as-is."""
        if isinstance(stage, Stage):
            return stage
        if isinstance(stage, type) and issubclass(stage, Stage):
            return stage(**kwargs)
        raise TypeError(f"Expected Stage class or instance, got {type(stage)}")

    @classmethod
    def register_instance(cls, name: str, stage: Stage) -> None:
        """Register a stage instance by name."""
        cls._instances[name] = stage

    @classmethod
    def get_instance(cls, name: str) -> Stage | None:
        """Get a registered stage instance by name."""
        return cls._instances.get(name)


def discover_stages() -> None:
    """Auto-discover and register all stage implementations."""
    if StageRegistry._discovered:
        return

    # Import and register all stage implementations
    try:
        from engine.sources_v2 import (
            HeadlinesDataSource,
            PipelineDataSource,
            PoetryDataSource,
        )

        StageRegistry.register("source", HeadlinesDataSource)
        StageRegistry.register("source", PoetryDataSource)
        StageRegistry.register("source", PipelineDataSource)

        StageRegistry._categories["source"]["headlines"] = HeadlinesDataSource
        StageRegistry._categories["source"]["poetry"] = PoetryDataSource
        StageRegistry._categories["source"]["pipeline"] = PipelineDataSource
    except ImportError:
        pass

    try:
        from engine.effects.types import EffectPlugin  # noqa: F401
    except ImportError:
        pass

    # Register display stages
    _register_display_stages()

    StageRegistry._discovered = True


def _register_display_stages() -> None:
    """Register display backends as stages."""
    try:
        from engine.display import DisplayRegistry
    except ImportError:
        return

    DisplayRegistry.initialize()

    for backend_name in DisplayRegistry.list_backends():
        factory = _DisplayStageFactory(backend_name)
        StageRegistry._categories.setdefault("display", {})[backend_name] = factory


class _DisplayStageFactory:
    """Factory that creates DisplayStage instances for a specific backend."""

    def __init__(self, backend_name: str):
        self._backend_name = backend_name

    def __call__(self):
        from engine.display import DisplayRegistry
        from engine.pipeline.adapters import DisplayStage

        display = DisplayRegistry.create(self._backend_name)
        if display is None:
            raise RuntimeError(
                f"Failed to create display backend: {self._backend_name}"
            )
        return DisplayStage(display, name=self._backend_name)

    @property
    def __name__(self) -> str:
        return self._backend_name.capitalize() + "Stage"


# Convenience functions
def register_source(stage_class: type[Stage]) -> None:
    """Register a source stage."""
    StageRegistry.register("source", stage_class)


def register_effect(stage_class: type[Stage]) -> None:
    """Register an effect stage."""
    StageRegistry.register("effect", stage_class)


def register_display(stage_class: type[Stage]) -> None:
    """Register a display stage."""
    StageRegistry.register("display", stage_class)


def register_camera(stage_class: type[Stage]) -> None:
    """Register a camera stage."""
    StageRegistry.register("camera", stage_class)
364
engine/pipeline_viz.py
Normal file
@@ -0,0 +1,364 @@
"""
Pipeline visualization - Large animated network visualization with camera modes.
"""

import math

NODE_NETWORK = {
    "sources": [
        {"id": "RSS", "label": "RSS FEEDS", "x": 20, "y": 20},
        {"id": "POETRY", "label": "POETRY DB", "x": 100, "y": 20},
        {"id": "NTFY", "label": "NTFY MSG", "x": 180, "y": 20},
        {"id": "MIC", "label": "MICROPHONE", "x": 260, "y": 20},
    ],
    "fetch": [
        {"id": "FETCH", "label": "FETCH LAYER", "x": 140, "y": 100},
        {"id": "CACHE", "label": "CACHE", "x": 220, "y": 100},
    ],
    "scroll": [
        {"id": "STREAM", "label": "STREAM CTRL", "x": 60, "y": 180},
        {"id": "CAMERA", "label": "CAMERA", "x": 140, "y": 180},
        {"id": "RENDER", "label": "RENDER", "x": 220, "y": 180},
    ],
    "effects": [
        {"id": "NOISE", "label": "NOISE", "x": 20, "y": 260},
        {"id": "FADE", "label": "FADE", "x": 80, "y": 260},
        {"id": "GLITCH", "label": "GLITCH", "x": 140, "y": 260},
        {"id": "FIRE", "label": "FIREHOSE", "x": 200, "y": 260},
        {"id": "HUD", "label": "HUD", "x": 260, "y": 260},
    ],
    "display": [
        {"id": "TERM", "label": "TERMINAL", "x": 20, "y": 340},
        {"id": "WEB", "label": "WEBSOCKET", "x": 80, "y": 340},
        {"id": "PYGAME", "label": "PYGAME", "x": 140, "y": 340},
        {"id": "SIXEL", "label": "SIXEL", "x": 200, "y": 340},
        {"id": "KITTY", "label": "KITTY", "x": 260, "y": 340},
    ],
}

ALL_NODES = []
for group_nodes in NODE_NETWORK.values():
    ALL_NODES.extend(group_nodes)

NETWORK_PATHS = [
    ["RSS", "FETCH", "CACHE", "STREAM", "CAMERA", "RENDER", "NOISE", "TERM"],
    ["POETRY", "FETCH", "CACHE", "STREAM", "CAMERA", "RENDER", "FADE", "WEB"],
    ["NTFY", "FETCH", "CACHE", "STREAM", "CAMERA", "RENDER", "GLITCH", "PYGAME"],
    ["MIC", "FETCH", "CACHE", "STREAM", "CAMERA", "RENDER", "FIRE", "SIXEL"],
    ["RSS", "FETCH", "CACHE", "STREAM", "CAMERA", "RENDER", "HUD", "KITTY"],
]

GRID_WIDTH = 300
GRID_HEIGHT = 400


def get_node_by_id(node_id: str):
    for node in ALL_NODES:
        if node["id"] == node_id:
            return node
    return None


def draw_network_to_grid(frame: int = 0) -> list[list[str]]:
    grid = [[" " for _ in range(GRID_WIDTH)] for _ in range(GRID_HEIGHT)]

    active_path_idx = (frame // 60) % len(NETWORK_PATHS)
    active_path = NETWORK_PATHS[active_path_idx]

    for node in ALL_NODES:
        x, y = node["x"], node["y"]
        label = node["label"]
        is_active = node["id"] in active_path
        is_highlight = node["id"] == active_path[(frame // 15) % len(active_path)]

        node_w, node_h = 20, 7

        for dy in range(node_h):
            for dx in range(node_w):
                gx, gy = x + dx, y + dy
                if 0 <= gx < GRID_WIDTH and 0 <= gy < GRID_HEIGHT:
                    if dy == 0:
                        char = "┌" if dx == 0 else ("┐" if dx == node_w - 1 else "─")
                    elif dy == node_h - 1:
                        char = "└" if dx == 0 else ("┘" if dx == node_w - 1 else "─")
                    elif dy == node_h // 2:
                        if dx == 0 or dx == node_w - 1:
                            char = "│"
                        else:
                            # Paint the label centered on the middle row.
                            pad = (node_w - 2 - len(label)) // 2
                            if dx - 1 >= pad and len(label) <= node_w - 2:
                                char = (
                                    label[dx - 1 - pad]
                                    if dx - 1 - pad < len(label)
                                    else " "
                                )
                            else:
                                char = " "
                    else:
                        char = "│" if dx == 0 or dx == node_w - 1 else " "

                    if char.strip():
                        if is_highlight:
                            grid[gy][gx] = "\033[1;38;5;46m" + char + "\033[0m"
                        elif is_active:
                            grid[gy][gx] = "\033[1;38;5;220m" + char + "\033[0m"
                        else:
                            grid[gy][gx] = "\033[38;5;240m" + char + "\033[0m"

    for i, node_id in enumerate(active_path[:-1]):
        curr = get_node_by_id(node_id)
        next_id = active_path[i + 1]
        next_node = get_node_by_id(next_id)
        if curr and next_node:
            x1, y1 = curr["x"] + 7, curr["y"] + 2
            x2, y2 = next_node["x"] + 7, next_node["y"] + 2

            step = 1 if x2 >= x1 else -1
            for x in range(x1, x2 + step, step):
                if 0 <= x < GRID_WIDTH and 0 <= y1 < GRID_HEIGHT:
                    grid[y1][x] = "\033[38;5;45m─\033[0m"

            step = 1 if y2 >= y1 else -1
            for y in range(y1, y2 + step, step):
                if 0 <= x2 < GRID_WIDTH and 0 <= y < GRID_HEIGHT:
                    grid[y][x2] = "\033[38;5;45m│\033[0m"

    return grid


class TraceCamera:
    def __init__(self):
        self.x = 0
        self.y = 0
        self.target_x = 0
        self.target_y = 0
        self.current_node_idx = 0
        self.path = []
        self.frame = 0

    def update(self, dt: float, frame: int = 0) -> None:
        self.frame = frame
        active_path = NETWORK_PATHS[(frame // 60) % len(NETWORK_PATHS)]

        if self.path != active_path:
            self.path = active_path
            self.current_node_idx = 0

        if self.current_node_idx < len(self.path):
            node_id = self.path[self.current_node_idx]
            node = get_node_by_id(node_id)
            if node:
                self.target_x = max(0, node["x"] - 40)
                self.target_y = max(0, node["y"] - 10)

            self.current_node_idx += 1

        self.x += int((self.target_x - self.x) * 0.1)
        self.y += int((self.target_y - self.y) * 0.1)


class CameraLarge:
    def __init__(self, viewport_w: int, viewport_h: int, frame: int):
        self.viewport_w = viewport_w
        self.viewport_h = viewport_h
        self.frame = frame
        self.x = 0
        self.y = 0
        self.mode = "trace"
        self.trace_camera = TraceCamera()

    def set_vertical_mode(self):
        self.mode = "vertical"

    def set_horizontal_mode(self):
        self.mode = "horizontal"

    def set_omni_mode(self):
        self.mode = "omni"

    def set_floating_mode(self):
        self.mode = "floating"

    def set_trace_mode(self):
        self.mode = "trace"

    def update(self, dt: float):
        self.frame += 1

        if self.mode == "vertical":
            self.y = int((self.frame * 0.5) % (GRID_HEIGHT - self.viewport_h))
        elif self.mode == "horizontal":
            self.x = int((self.frame * 0.5) % (GRID_WIDTH - self.viewport_w))
        elif self.mode == "omni":
            self.x = int((self.frame * 0.3) % (GRID_WIDTH - self.viewport_w))
            self.y = int((self.frame * 0.5) % (GRID_HEIGHT - self.viewport_h))
        elif self.mode == "floating":
            self.x = int(50 + math.sin(self.frame * 0.02) * 30)
            self.y = int(50 + math.cos(self.frame * 0.015) * 30)
        elif self.mode == "trace":
            self.trace_camera.update(dt, self.frame)
            self.x = self.trace_camera.x
            self.y = self.trace_camera.y


def generate_mermaid_graph(frame: int = 0) -> str:
    effects = ["NOISE", "FADE", "GLITCH", "FIREHOSE"]
    active_effect = effects[(frame // 30) % 4]

    cam_modes = ["VERTICAL", "HORIZONTAL", "OMNI", "FLOATING", "TRACE"]
    active_cam = cam_modes[(frame // 100) % 5]

    return f"""graph LR
    subgraph SOURCES
        RSS[RSS Feeds]
        Poetry[Poetry DB]
        Ntfy[Ntfy Msg]
        Mic[Microphone]
    end

    subgraph FETCH
        Fetch(fetch_all)
        Cache[(Cache)]
    end

    subgraph SCROLL
        Scroll(StreamController)
        Camera({active_cam})
    end

    subgraph EFFECTS
        Noise[NOISE]
        Fade[FADE]
        Glitch[GLITCH]
        Fire[FIREHOSE]
        Hud[HUD]
    end

    subgraph DISPLAY
        Term[Terminal]
        Web[WebSocket]
        Pygame[PyGame]
        Sixel[Sixel]
    end

    RSS --> Fetch
    Poetry --> Fetch
    Ntfy --> Fetch
    Fetch --> Cache
    Cache --> Scroll
    Scroll --> Noise
    Scroll --> Fade
    Scroll --> Glitch
    Scroll --> Fire
    Scroll --> Hud

    Noise --> Term
    Fade --> Web
    Glitch --> Pygame
    Fire --> Sixel

    style {active_effect} fill:#90EE90
    style Camera fill:#87CEEB
"""


def generate_network_pipeline(
    width: int = 80, height: int = 24, frame: int = 0
) -> list[str]:
    try:
        from engine.beautiful_mermaid import render_mermaid_ascii

        mermaid_graph = generate_mermaid_graph(frame)
        ascii_output = render_mermaid_ascii(mermaid_graph, padding_x=2, padding_y=1)

        lines = ascii_output.split("\n")

        result = []
        for y in range(height):
            if y < len(lines):
                line = lines[y]
                if len(line) < width:
                    line = line + " " * (width - len(line))
                elif len(line) > width:
                    line = line[:width]
                result.append(line)
            else:
                result.append(" " * width)

        status_y = height - 2
        if status_y < height:
            fps = 60 - (frame % 15)

            cam_modes = ["VERTICAL", "HORIZONTAL", "OMNI", "FLOATING", "TRACE"]
            cam = cam_modes[(frame // 100) % 5]
            effects = ["NOISE", "FADE", "GLITCH", "FIREHOSE"]
            eff = effects[(frame // 30) % 4]

            anim = "▓▒░ "[frame % 4]
            status = f" FPS:{fps:3.0f} │ {anim} {eff} │ Cam:{cam}"
            status = status[: width - 4].ljust(width - 4)
            result[status_y] = "║ " + status + " ║"

        if height > 0:
            result[0] = "═" * width
            result[height - 1] = "═" * width

        return result

    except Exception as e:
        return [
            f"Error: {e}" + " " * (width - len(f"Error: {e}")) for _ in range(height)
        ]


def generate_large_network_viewport(
    viewport_w: int = 80, viewport_h: int = 24, frame: int = 0
) -> list[str]:
    cam_modes = ["VERTICAL", "HORIZONTAL", "OMNI", "FLOATING", "TRACE"]
    camera_mode = cam_modes[(frame // 100) % 5]

    camera = CameraLarge(viewport_w, viewport_h, frame)

    if camera_mode == "TRACE":
        camera.set_trace_mode()
    elif camera_mode == "VERTICAL":
        camera.set_vertical_mode()
    elif camera_mode == "HORIZONTAL":
        camera.set_horizontal_mode()
    elif camera_mode == "OMNI":
        camera.set_omni_mode()
    elif camera_mode == "FLOATING":
        camera.set_floating_mode()

    camera.update(1 / 60)

    grid = draw_network_to_grid(frame)

    result = []
    for vy in range(viewport_h):
        line = ""
        for vx in range(viewport_w):
            gx = camera.x + vx
            gy = camera.y + vy
            if 0 <= gx < GRID_WIDTH and 0 <= gy < GRID_HEIGHT:
                line += grid[gy][gx]
            else:
                line += " "
        result.append(line)

    fps = 60 - (frame % 15)

    active_path = NETWORK_PATHS[(frame // 60) % len(NETWORK_PATHS)]
    active_node = active_path[(frame // 15) % len(active_path)]

    anim = "▓▒░ "[frame % 4]
    status = f" FPS:{fps:3.0f} │ {anim} {camera_mode:9s} │ Node:{active_node}"
    status = status[: viewport_w - 4].ljust(viewport_w - 4)
    if viewport_h > 2:
        result[viewport_h - 2] = "║ " + status + " ║"

    if viewport_h > 0:
        result[0] = "═" * viewport_w
        result[viewport_h - 1] = "═" * viewport_w

    return result
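The five camera modes above reduce to small offset formulas. A self-contained sketch of that arithmetic, with the grid and viewport constants copied from the module and the math pulled out of `CameraLarge.update` as a pure function for clarity (the trace and omni branches are omitted):

```python
import math

GRID_WIDTH, GRID_HEIGHT = 300, 400
viewport_w, viewport_h = 80, 24


def camera_offset(mode: str, frame: int) -> tuple[int, int]:
    # Same arithmetic as CameraLarge.update, without the mutable state.
    x = y = 0
    if mode == "vertical":
        # Modulus wraps the scroll so the viewport never leaves the grid.
        y = int((frame * 0.5) % (GRID_HEIGHT - viewport_h))
    elif mode == "horizontal":
        x = int((frame * 0.5) % (GRID_WIDTH - viewport_w))
    elif mode == "floating":
        # Lissajous-style drift around the grid center region.
        x = int(50 + math.sin(frame * 0.02) * 30)
        y = int(50 + math.cos(frame * 0.015) * 30)
    return x, y


print(camera_offset("vertical", 0))    # (0, 0)
print(camera_offset("vertical", 800))  # (0, 24) — 400 wraps mod 376
```

The `GRID_HEIGHT - viewport_h` modulus is what guarantees `grid[gy][gx]` in the viewport loop stays in bounds for the scrolling modes.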
109
engine/render.py
@@ -4,14 +4,15 @@ Font loading, text rasterization, word-wrap, gradient coloring, headline block a
Depends on: config, terminal, sources, translate.
"""

import random
import re
from pathlib import Path

from PIL import Image, ImageDraw, ImageFont

from engine import config
from engine.sources import NO_UPPER, SCRIPT_FONTS, SOURCE_LANGS
from engine.terminal import RST
from engine.translate import detect_location_language, translate_headline

# ─── GRADIENT ─────────────────────────────────────────────
@@ -31,19 +32,75 @@ GRAD_COLS = [
    "\033[2;38;5;235m",  # near black
]

# Complementary sweep for queue messages (opposite hue family from ticker greens)
MSG_GRAD_COLS = [
    "\033[1;38;5;231m",  # white
    "\033[1;38;5;225m",  # pale pink-white
    "\033[38;5;219m",  # bright pink
    "\033[38;5;213m",  # hot pink
    "\033[38;5;207m",  # magenta
    "\033[38;5;201m",  # bright magenta
    "\033[38;5;165m",  # orchid-red
    "\033[38;5;161m",  # ruby-magenta
    "\033[38;5;125m",  # dark magenta
    "\033[38;5;89m",  # deep maroon-magenta
    "\033[2;38;5;89m",  # dim deep maroon-magenta
    "\033[2;38;5;235m",  # near black
]

# ─── FONT LOADING ─────────────────────────────────────────
_FONT_OBJ = None
_FONT_OBJ_KEY = None
_FONT_CACHE = {}


def font():
    """Lazy-load the primary OTF font (path + face index aware)."""
    global _FONT_OBJ, _FONT_OBJ_KEY
    if not config.FONT_PATH:
        raise FileNotFoundError(
            f"No primary font selected. Add .otf/.ttf/.ttc files to {config.FONT_DIR}."
        )
    key = (config.FONT_PATH, config.FONT_INDEX, config.FONT_SZ)
    if _FONT_OBJ is None or key != _FONT_OBJ_KEY:
        _FONT_OBJ = ImageFont.truetype(
            config.FONT_PATH, config.FONT_SZ, index=config.FONT_INDEX
        )
        _FONT_OBJ_KEY = key
    return _FONT_OBJ


def clear_font_cache():
    """Reset cached font objects after changing primary font selection."""
    global _FONT_OBJ, _FONT_OBJ_KEY
    _FONT_OBJ = None
    _FONT_OBJ_KEY = None


def load_font_face(font_path, font_index=0, size=None):
    """Load a specific face from a font file or collection."""
    font_size = size or config.FONT_SZ
    return ImageFont.truetype(font_path, font_size, index=font_index)


def list_font_faces(font_path, max_faces=64):
    """Return discoverable face indexes + display names from a font file."""
    faces = []
    for idx in range(max_faces):
        try:
            fnt = load_font_face(font_path, idx)
        except Exception:
            if idx == 0:
                raise
            break
        family, style = fnt.getname()
        display = f"{family} {style}".strip()
        if not display:
            display = f"{Path(font_path).stem} [{idx}]"
        faces.append({"index": idx, "name": display})
    return faces


def font_for_lang(lang=None):
    """Get appropriate font for a language."""
    if lang is None or lang not in SCRIPT_FONTS:
@@ -67,7 +124,7 @@ def render_line(text, fnt=None):
    pad = 4
    img_w = bbox[2] - bbox[0] + pad * 2
    img_h = bbox[3] - bbox[1] + pad * 2
    img = Image.new("L", (img_w, img_h), 0)
    draw = ImageDraw.Draw(img)
    draw.text((-bbox[0] + pad, -bbox[1] + pad), text, fill=255, font=fnt)
    pix_h = config.RENDER_H * 2
@@ -132,9 +189,10 @@ def big_wrap(text, max_w, fnt=None):
    return out


def lr_gradient(rows, offset=0.0, grad_cols=None):
    """Color each non-space block character with a shifting left-to-right gradient."""
    cols = grad_cols or GRAD_COLS
    n = len(cols)
    max_x = max((len(r.rstrip()) for r in rows if r.strip()), default=1)
    out = []
    for row in rows:
@@ -143,20 +201,29 @@ def lr_gradient(rows, offset=0.0):
            continue
        buf = []
        for x, ch in enumerate(row):
            if ch == " ":
                buf.append(" ")
            else:
                shifted = (x / max(max_x - 1, 1) + offset) % 1.0
                idx = min(round(shifted * (n - 1)), n - 1)
                buf.append(f"{cols[idx]}{ch}{RST}")
        out.append("".join(buf))
    return out


def lr_gradient_opposite(rows, offset=0.0):
    """Complementary (opposite wheel) gradient used for queue message panels."""
    return lr_gradient(rows, offset, MSG_GRAD_COLS)


# ─── HEADLINE BLOCK ASSEMBLY ─────────────────────────────
def make_block(title, src, ts, w):
    """Render a headline into a content block with color."""
    target_lang = (
        (SOURCE_LANGS.get(src) or detect_location_language(title))
        if config.MODE == "news"
        else None
    )
    lang_font = font_for_lang(target_lang)
    if target_lang:
        title = translate_headline(title, target_lang)
@@ -165,11 +232,18 @@ def make_block(title, src, ts, w):
        title_up = re.sub(r"\s+", " ", title)
    else:
        title_up = re.sub(r"\s+", " ", title.upper())
    for old, new in [
        ("\u2019", "'"),
        ("\u2018", "'"),
        ("\u201c", '"'),
        ("\u201d", '"'),
        ("\u2013", "-"),
        ("\u2014", "-"),
    ]:
        title_up = title_up.replace(old, new)
    big_rows = big_wrap(title_up, w - 4, lang_font)
    hc = random.choice(
        [
            "\033[38;5;46m",  # matrix green
            "\033[38;5;34m",  # dark green
            "\033[38;5;82m",  # lime
@@ -186,7 +260,8 @@ def make_block(title, src, ts, w):
            "\033[38;5;115m",  # sage
            "\033[1;38;5;46m",  # bold green
            "\033[1;38;5;250m",  # bold white
        ]
    )
    content = [" " + r for r in big_rows]
    content.append("")
    meta = f"\u2591 {src} \u00b7 {ts}"
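The hunk above threads an optional `grad_cols` palette through `lr_gradient`, which is all `lr_gradient_opposite` needs. A self-contained sketch of the column-to-palette index math, using a hypothetical two-color palette in place of `GRAD_COLS`:

```python
RST = "\033[0m"
# Hypothetical two-slot palette standing in for GRAD_COLS / MSG_GRAD_COLS.
COLS = ["\033[38;5;46m", "\033[38;5;235m"]


def lr_gradient(rows, offset=0.0, grad_cols=None):
    """Same index math as render.lr_gradient: map column x to a palette slot."""
    cols = grad_cols or COLS
    n = len(cols)
    # Widest non-blank row sets the gradient span.
    max_x = max((len(r.rstrip()) for r in rows if r.strip()), default=1)
    out = []
    for row in rows:
        buf = []
        for x, ch in enumerate(row):
            if ch == " ":
                buf.append(" ")
            else:
                # Normalize column to [0, 1), shift, then pick a slot.
                shifted = (x / max(max_x - 1, 1) + offset) % 1.0
                idx = min(round(shifted * (n - 1)), n - 1)
                buf.append(f"{cols[idx]}{ch}{RST}")
        out.append("".join(buf))
    return out


colored = lr_gradient(["ABCD"])
print(colored[0].startswith(COLS[0]))  # True: column 0 maps to slot 0
```

Animating `offset` per frame is what makes the gradient appear to sweep across the text; passing `MSG_GRAD_COLS` swaps the whole hue family without touching the sweep logic.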
|||||||
232
engine/scroll.py
@@ -1,195 +1,151 @@
"""
Render engine — ticker content, scroll motion, message panel, and firehose overlay.
Orchestrates viewport, frame timing, and layers.
"""

import random
import time

from engine import config
from engine.camera import Camera
from engine.display import (
    Display,
    TerminalDisplay,
)
from engine.display import (
    get_monitor as _get_display_monitor,
)
from engine.frame import calculate_scroll_step
from engine.layers import (
    apply_glitch,
    process_effects,
    render_firehose,
    render_message_overlay,
    render_ticker_zone,
)
from engine.viewport import th, tw

USE_EFFECT_CHAIN = True


def stream(
    items,
    ntfy_poller,
    mic_monitor,
    display: Display | None = None,
    camera: Camera | None = None,
):
    """Main render loop with four layers: message, ticker, scroll motion, firehose."""
    if display is None:
        display = TerminalDisplay()
    if camera is None:
        camera = Camera.vertical()

    random.shuffle(items)
    pool = list(items)
    seen = set()
    queued = 0

    time.sleep(0.5)

    w, h = tw(), th()
    display.init(w, h)
    display.clear()
    fh = config.FIREHOSE_H if config.FIREHOSE else 0
    ticker_view_h = h - fh
    GAP = 3
    scroll_step_interval = calculate_scroll_step(config.SCROLL_DUR, ticker_view_h)

    active = []
    ticker_next_y = ticker_view_h
    noise_cache = {}
    scroll_motion_accum = 0.0
    msg_cache = (None, None)
    frame_number = 0

    while True:
        if queued >= config.HEADLINE_LIMIT and not active:
            break

        t0 = time.monotonic()
        w, h = tw(), th()
        fh = config.FIREHOSE_H if config.FIREHOSE else 0
        ticker_view_h = h - fh
        scroll_step_interval = calculate_scroll_step(config.SCROLL_DUR, ticker_view_h)

        msg = ntfy_poller.get_active_message()
        msg_overlay, msg_cache = render_message_overlay(msg, w, h, msg_cache)

        buf = []
        ticker_h = ticker_view_h
|
|
||||||
display_text = re.sub(r"\s+", " ", display_text.upper())
|
|
||||||
cache_key = (display_text, w)
|
|
||||||
if _msg_cache[0] != cache_key:
|
|
||||||
msg_rows = big_wrap(display_text, w - 4)
|
|
||||||
_msg_cache = (cache_key, msg_rows)
|
|
||||||
else:
|
|
||||||
msg_rows = _msg_cache[1]
|
|
||||||
msg_rows = lr_gradient(msg_rows, (time.monotonic() * config.GRAD_SPEED) % 1.0)
|
|
||||||
# Layout: rendered text + meta + border
|
|
||||||
elapsed_s = int(time.monotonic() - m_ts)
|
|
||||||
remaining = max(0, config.MESSAGE_DISPLAY_SECS - elapsed_s)
|
|
||||||
ts_str = datetime.now().strftime("%H:%M:%S")
|
|
||||||
panel_h = len(msg_rows) + 2 # meta + border
|
|
||||||
panel_top = max(0, (h - panel_h) // 2)
|
|
||||||
row_idx = 0
|
|
||||||
for mr in msg_rows:
|
|
||||||
ln = vis_trunc(mr, w)
|
|
||||||
msg_overlay.append(f"\033[{panel_top + row_idx + 1};1H {ln}{RST}\033[K")
|
|
||||||
row_idx += 1
|
|
||||||
# Meta line: title (if distinct) + source + countdown
|
|
||||||
meta_parts = []
|
|
||||||
if m_title and m_title != m_body:
|
|
||||||
meta_parts.append(m_title)
|
|
||||||
meta_parts.append(f"ntfy \u00b7 {ts_str} \u00b7 {remaining}s")
|
|
||||||
meta = " " + " \u00b7 ".join(meta_parts) if len(meta_parts) > 1 else " " + meta_parts[0]
|
|
||||||
msg_overlay.append(f"\033[{panel_top + row_idx + 1};1H{MSG_META}{meta}{RST}\033[K")
|
|
||||||
row_idx += 1
|
|
||||||
# Border — constant boundary under message panel
|
|
||||||
bar = "\u2500" * (w - 4)
|
|
||||||
msg_overlay.append(f"\033[{panel_top + row_idx + 1};1H {MSG_BORDER}{bar}{RST}\033[K")
|
|
||||||
|
|
||||||
# Ticker draws above the fixed firehose strip; message is a centered overlay.
|
|
||||||
ticker_h = ticker_view_h - msg_h
|
|
||||||
|
|
||||||
# ── Ticker content + scroll motion (always runs) ──
|
|
||||||
scroll_motion_accum += config.FRAME_DT
|
scroll_motion_accum += config.FRAME_DT
|
||||||
while scroll_motion_accum >= scroll_step_interval:
|
while scroll_motion_accum >= scroll_step_interval:
|
||||||
scroll_motion_accum -= scroll_step_interval
|
scroll_motion_accum -= scroll_step_interval
|
||||||
scroll_cam += 1
|
camera.update(config.FRAME_DT)
|
||||||
|
|
||||||
|
while (
|
||||||
|
ticker_next_y < camera.y + ticker_view_h + 10
|
||||||
|
and queued < config.HEADLINE_LIMIT
|
||||||
|
):
|
||||||
|
from engine.effects import next_headline
|
||||||
|
from engine.render import make_block
|
||||||
|
|
||||||
# Enqueue new headlines when room at the bottom
|
|
||||||
while ticker_next_y < scroll_cam + ticker_view_h + 10 and queued < config.HEADLINE_LIMIT:
|
|
||||||
t, src, ts = next_headline(pool, items, seen)
|
t, src, ts = next_headline(pool, items, seen)
|
||||||
ticker_content, hc, midx = make_block(t, src, ts, w)
|
ticker_content, hc, midx = make_block(t, src, ts, w)
|
||||||
active.append((ticker_content, hc, ticker_next_y, midx))
|
active.append((ticker_content, hc, ticker_next_y, midx))
|
||||||
ticker_next_y += len(ticker_content) + GAP
|
ticker_next_y += len(ticker_content) + GAP
|
||||||
queued += 1
|
queued += 1
|
||||||
|
|
||||||
# Prune off-screen blocks and stale noise
|
active = [
|
||||||
active = [(c, hc, by, mi) for c, hc, by, mi in active
|
(c, hc, by, mi) for c, hc, by, mi in active if by + len(c) > camera.y
|
||||||
if by + len(c) > scroll_cam]
|
]
|
||||||
for k in list(noise_cache):
|
for k in list(noise_cache):
|
||||||
if k < scroll_cam:
|
if k < camera.y:
|
||||||
del noise_cache[k]
|
del noise_cache[k]
|
||||||
|
|
||||||
# Draw ticker zone (above fixed firehose strip)
|
|
||||||
top_zone = max(1, int(ticker_h * 0.25))
|
|
||||||
bot_zone = max(1, int(ticker_h * 0.10))
|
|
||||||
grad_offset = (time.monotonic() * config.GRAD_SPEED) % 1.0
|
grad_offset = (time.monotonic() * config.GRAD_SPEED) % 1.0
|
||||||
ticker_buf_start = len(buf) # track where ticker rows start in buf
|
ticker_buf_start = len(buf)
|
||||||
for r in range(ticker_h):
|
|
||||||
scr_row = r + 1 # 1-indexed ANSI screen row
|
ticker_buf, noise_cache = render_ticker_zone(
|
||||||
cy = scroll_cam + r
|
active, camera.y, camera.x, ticker_h, w, noise_cache, grad_offset
|
||||||
top_f = min(1.0, r / top_zone) if top_zone > 0 else 1.0
|
)
|
||||||
bot_f = min(1.0, (ticker_h - 1 - r) / bot_zone) if bot_zone > 0 else 1.0
|
buf.extend(ticker_buf)
|
||||||
row_fade = min(top_f, bot_f)
|
|
||||||
drawn = False
|
|
||||||
for content, hc, by, midx in active:
|
|
||||||
cr = cy - by
|
|
||||||
if 0 <= cr < len(content):
|
|
||||||
raw = content[cr]
|
|
||||||
if cr != midx:
|
|
||||||
colored = lr_gradient([raw], grad_offset)[0]
|
|
||||||
else:
|
|
||||||
colored = raw
|
|
||||||
ln = vis_trunc(colored, w)
|
|
||||||
if row_fade < 1.0:
|
|
||||||
ln = fade_line(ln, row_fade)
|
|
||||||
if cr == midx:
|
|
||||||
buf.append(f"\033[{scr_row};1H{W_COOL}{ln}{RST}\033[K")
|
|
||||||
elif ln.strip():
|
|
||||||
buf.append(f"\033[{scr_row};1H{ln}{RST}\033[K")
|
|
||||||
else:
|
|
||||||
buf.append(f"\033[{scr_row};1H\033[K")
|
|
||||||
drawn = True
|
|
||||||
break
|
|
||||||
if not drawn:
|
|
||||||
n = _noise_at(cy)
|
|
||||||
if row_fade < 1.0 and n:
|
|
||||||
n = fade_line(n, row_fade)
|
|
||||||
if n:
|
|
||||||
buf.append(f"\033[{scr_row};1H{n}")
|
|
||||||
else:
|
|
||||||
buf.append(f"\033[{scr_row};1H\033[K")
|
|
||||||
|
|
||||||
# Glitch — base rate + mic-reactive spikes (ticker zone only)
|
|
||||||
mic_excess = mic_monitor.excess
|
mic_excess = mic_monitor.excess
|
||||||
glitch_prob = 0.32 + min(0.9, mic_excess * 0.16)
|
render_start = time.perf_counter()
|
||||||
n_hits = 4 + int(mic_excess / 2)
|
|
||||||
ticker_buf_len = len(buf) - ticker_buf_start
|
if USE_EFFECT_CHAIN:
|
||||||
if random.random() < glitch_prob and ticker_buf_len > 0:
|
buf = process_effects(
|
||||||
for _ in range(min(n_hits, ticker_buf_len)):
|
buf,
|
||||||
gi = random.randint(0, ticker_buf_len - 1)
|
w,
|
||||||
scr_row = gi + 1
|
h,
|
||||||
buf[ticker_buf_start + gi] = f"\033[{scr_row};1H{glitch_bar(w)}"
|
camera.y,
|
||||||
|
ticker_h,
|
||||||
|
camera.x,
|
||||||
|
mic_excess,
|
||||||
|
grad_offset,
|
||||||
|
frame_number,
|
||||||
|
msg is not None,
|
||||||
|
items,
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
buf = apply_glitch(buf, ticker_buf_start, mic_excess, w)
|
||||||
|
firehose_buf = render_firehose(items, w, fh, h)
|
||||||
|
buf.extend(firehose_buf)
|
||||||
|
|
||||||
if config.FIREHOSE and fh > 0:
|
|
||||||
for fr in range(fh):
|
|
||||||
scr_row = h - fh + fr + 1
|
|
||||||
fline = firehose_line(items, w)
|
|
||||||
buf.append(f"\033[{scr_row};1H{fline}\033[K")
|
|
||||||
if msg_overlay:
|
if msg_overlay:
|
||||||
buf.extend(msg_overlay)
|
buf.extend(msg_overlay)
|
||||||
|
|
||||||
sys.stdout.buffer.write("".join(buf).encode())
|
render_elapsed = (time.perf_counter() - render_start) * 1000
|
||||||
sys.stdout.flush()
|
monitor = _get_display_monitor()
|
||||||
|
if monitor:
|
||||||
|
chars = sum(len(line) for line in buf)
|
||||||
|
monitor.record_effect("render", render_elapsed, chars, chars)
|
||||||
|
|
||||||
|
display.show(buf)
|
||||||
|
|
||||||
# Precise frame timing
|
|
||||||
elapsed = time.monotonic() - t0
|
elapsed = time.monotonic() - t0
|
||||||
time.sleep(max(0, config.FRAME_DT - elapsed))
|
time.sleep(max(0, config.FRAME_DT - elapsed))
|
||||||
|
frame_number += 1
|
||||||
|
|
||||||
sys.stdout.write(CLR)
|
display.cleanup()
|
||||||
sys.stdout.flush()
|
|
||||||
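The scroll stepping in the loop banks each frame's elapsed time in an accumulator and advances the camera one step per full interval, so scroll speed stays independent of frame rate. A minimal stand-alone sketch of that pattern (the 50 ms frame time and 120 ms step interval below are illustrative figures, not the repo's config values):

```python
def advance(accum: float, frame_dt: float, step_interval: float) -> tuple[float, int]:
    """Bank one frame of time; return (new_accum, steps_taken)."""
    accum += frame_dt
    steps = 0
    while accum >= step_interval:
        accum -= step_interval
        steps += 1
    return accum, steps


accum = 0.0
total_steps = 0
for _ in range(10):                      # ten frames at 50 ms each
    accum, steps = advance(accum, 0.05, 0.12)
    total_steps += steps
# 10 frames x 50 ms = 500 ms; one step per 120 ms -> 4 steps, ~20 ms left banked
```

The same idea is why the loop recomputes `scroll_step_interval` every frame: a terminal resize changes `ticker_view_h`, and the accumulator keeps motion smooth across the change.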
@@ -75,41 +75,41 @@ SOURCE_LANGS = {

# ─── LOCATION → LANGUAGE ─────────────────────────────────
LOCATION_LANGS = {
    r"\b(?:china|chinese|beijing|shanghai|hong kong|xi jinping)\b": "zh-cn",
    r"\b(?:japan|japanese|tokyo|osaka|kishida)\b": "ja",
    r"\b(?:korea|korean|seoul|pyongyang)\b": "ko",
    r"\b(?:russia|russian|moscow|kremlin|putin)\b": "ru",
    r"\b(?:saudi|dubai|qatar|egypt|cairo|arabic)\b": "ar",
    r"\b(?:india|indian|delhi|mumbai|modi)\b": "hi",
    r"\b(?:germany|german|berlin|munich|scholz)\b": "de",
    r"\b(?:france|french|paris|lyon|macron)\b": "fr",
    r"\b(?:spain|spanish|madrid)\b": "es",
    r"\b(?:italy|italian|rome|milan|meloni)\b": "it",
    r"\b(?:portugal|portuguese|lisbon)\b": "pt",
    r"\b(?:brazil|brazilian|são paulo|lula)\b": "pt",
    r"\b(?:greece|greek|athens)\b": "el",
    r"\b(?:turkey|turkish|istanbul|ankara|erdogan)\b": "tr",
    r"\b(?:iran|iranian|tehran)\b": "fa",
    r"\b(?:thailand|thai|bangkok)\b": "th",
    r"\b(?:vietnam|vietnamese|hanoi)\b": "vi",
    r"\b(?:ukraine|ukrainian|kyiv|kiev|zelensky)\b": "uk",
    r"\b(?:israel|israeli|jerusalem|tel aviv|netanyahu)\b": "he",
}

# ─── NON-LATIN SCRIPT FONTS (macOS) ──────────────────────
SCRIPT_FONTS = {
    "zh-cn": "/System/Library/Fonts/STHeiti Medium.ttc",
    "ja": "/System/Library/Fonts/ヒラギノ角ゴシック W9.ttc",
    "ko": "/System/Library/Fonts/AppleSDGothicNeo.ttc",
    "ru": "/System/Library/Fonts/Supplemental/Arial.ttf",
    "uk": "/System/Library/Fonts/Supplemental/Arial.ttf",
    "el": "/System/Library/Fonts/Supplemental/Arial.ttf",
    "he": "/System/Library/Fonts/Supplemental/Arial.ttf",
    "ar": "/System/Library/Fonts/GeezaPro.ttc",
    "fa": "/System/Library/Fonts/GeezaPro.ttc",
    "hi": "/System/Library/Fonts/Kohinoor.ttc",
    "th": "/System/Library/Fonts/ThonburiUI.ttc",
}

# Scripts that have no uppercase
NO_UPPER = {"zh-cn", "ja", "ko", "ar", "fa", "hi", "th", "he"}
362 engine/sources_v2.py (new file)
@@ -0,0 +1,362 @@
"""
Data source abstraction - Treat data sources as first-class citizens in the pipeline.

Each data source implements a common interface:
- name: Display name for the source
- fetch(): Fetch fresh data
- stream(): Stream data continuously (optional)
- get_items(): Get current items
"""

from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass
from typing import Any


@dataclass
class SourceItem:
    """A single item from a data source."""

    content: str
    source: str
    timestamp: str
    metadata: dict[str, Any] | None = None


class DataSource(ABC):
    """Abstract base class for data sources.

    Static sources: Data fetched once and cached. Safe to call fetch() multiple times.
    Dynamic sources: Data changes over time. fetch() should be idempotent.
    """

    @property
    @abstractmethod
    def name(self) -> str:
        """Display name for this source."""
        ...

    @property
    def is_dynamic(self) -> bool:
        """Whether this source updates dynamically while the app runs. Default False."""
        return False

    @abstractmethod
    def fetch(self) -> list[SourceItem]:
        """Fetch fresh data from the source. Must be idempotent."""
        ...

    def get_items(self) -> list[SourceItem]:
        """Get current items. Default implementation returns cached fetch results."""
        if not hasattr(self, "_items") or self._items is None:
            self._items = self.fetch()
        return self._items

    def refresh(self) -> list[SourceItem]:
        """Force refresh - clear cache and fetch fresh data."""
        self._items = self.fetch()
        return self._items

    def stream(self):
        """Optional: Yield items continuously. Override for streaming sources."""
        raise NotImplementedError

    def __post_init__(self):
        self._items: list[SourceItem] | None = None
class HeadlinesDataSource(DataSource):
    """Data source for RSS feed headlines."""

    @property
    def name(self) -> str:
        return "headlines"

    def fetch(self) -> list[SourceItem]:
        from engine.fetch import fetch_all

        items, _, _ = fetch_all()
        return [SourceItem(content=t, source=s, timestamp=ts) for t, s, ts in items]


class PoetryDataSource(DataSource):
    """Data source for Poetry DB."""

    @property
    def name(self) -> str:
        return "poetry"

    def fetch(self) -> list[SourceItem]:
        from engine.fetch import fetch_poetry

        items, _, _ = fetch_poetry()
        return [SourceItem(content=t, source=s, timestamp=ts) for t, s, ts in items]
class PipelineDataSource(DataSource):
    """Data source for pipeline visualization (demo mode). Dynamic - updates every frame."""

    def __init__(self, viewport_width: int = 80, viewport_height: int = 24):
        self.viewport_width = viewport_width
        self.viewport_height = viewport_height
        self.frame = 0

    @property
    def name(self) -> str:
        return "pipeline"

    @property
    def is_dynamic(self) -> bool:
        return True

    def fetch(self) -> list[SourceItem]:
        from engine.pipeline_viz import generate_large_network_viewport

        buffer = generate_large_network_viewport(
            self.viewport_width, self.viewport_height, self.frame
        )
        self.frame += 1
        content = "\n".join(buffer)
        return [
            SourceItem(content=content, source="pipeline", timestamp=f"f{self.frame}")
        ]

    def get_items(self) -> list[SourceItem]:
        return self.fetch()
class MetricsDataSource(DataSource):
    """Data source that renders live pipeline metrics as ASCII art.

    Wraps a Pipeline and displays active stages with their average execution
    time and approximate FPS impact. Updates lazily when camera is about to
    focus on a new node (frame % 15 == 12).
    """

    def __init__(
        self,
        pipeline: Any,
        viewport_width: int = 80,
        viewport_height: int = 24,
    ):
        self.pipeline = pipeline
        self.viewport_width = viewport_width
        self.viewport_height = viewport_height
        self.frame = 0
        self._cached_metrics: dict | None = None

    @property
    def name(self) -> str:
        return "metrics"

    @property
    def is_dynamic(self) -> bool:
        return True

    def fetch(self) -> list[SourceItem]:
        if self.frame % 15 == 12:
            self._cached_metrics = None

        if self._cached_metrics is None:
            self._cached_metrics = self._fetch_metrics()

        buffer = self._render_metrics(self._cached_metrics)
        self.frame += 1
        content = "\n".join(buffer)
        return [
            SourceItem(content=content, source="metrics", timestamp=f"f{self.frame}")
        ]

    def _fetch_metrics(self) -> dict:
        if hasattr(self.pipeline, "get_metrics_summary"):
            metrics = self.pipeline.get_metrics_summary()
            if "error" not in metrics:
                return metrics
        return {"stages": {}, "pipeline": {"avg_ms": 0}}

    def _render_metrics(self, metrics: dict) -> list[str]:
        stages = metrics.get("stages", {})

        if not stages:
            return self._render_empty()

        active_stages = {
            name: stats for name, stats in stages.items() if stats.get("avg_ms", 0) > 0
        }

        if not active_stages:
            return self._render_empty()

        total_avg = sum(s["avg_ms"] for s in active_stages.values())
        if total_avg == 0:
            total_avg = 1

        lines: list[str] = []
        lines.append("═" * self.viewport_width)
        lines.append(" PIPELINE METRICS ".center(self.viewport_width, "─"))
        lines.append("─" * self.viewport_width)

        header = f"{'STAGE':<20} {'AVG_MS':>8} {'FPS %':>8}"
        lines.append(header)
        lines.append("─" * self.viewport_width)

        for name, stats in sorted(active_stages.items()):
            avg_ms = stats.get("avg_ms", 0)
            fps_impact = (avg_ms / 16.67) * 100 if avg_ms > 0 else 0

            row = f"{name:<20} {avg_ms:>7.2f} {fps_impact:>7.1f}%"
            lines.append(row[: self.viewport_width])

        lines.append("─" * self.viewport_width)
        total_row = (
            f"{'TOTAL':<20} {total_avg:>7.2f} {(total_avg / 16.67) * 100:>7.1f}%"
        )
        lines.append(total_row[: self.viewport_width])
        lines.append("─" * self.viewport_width)
        lines.append(
            f" Frame:{self.frame:04d} Cache:{'HIT' if self._cached_metrics else 'MISS'}"
        )

        while len(lines) < self.viewport_height:
            lines.append(" " * self.viewport_width)

        return lines[: self.viewport_height]

    def _render_empty(self) -> list[str]:
        lines = [" " * self.viewport_width for _ in range(self.viewport_height)]
        msg = "No metrics available"
        y = self.viewport_height // 2
        x = (self.viewport_width - len(msg)) // 2
        lines[y] = " " * x + msg + " " * (self.viewport_width - x - len(msg))
        return lines

    def get_items(self) -> list[SourceItem]:
        return self.fetch()
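The FPS % column in the metrics table divides a stage's average cost by a 60 Hz frame budget of 16.67 ms. Worked out in isolation, a 5 ms stage consumes roughly 30% of one frame:

```python
FRAME_BUDGET_MS = 16.67  # one frame at 60 Hz


def fps_impact(avg_ms: float) -> float:
    """Share of the 60 Hz frame budget consumed, as a percentage."""
    return (avg_ms / FRAME_BUDGET_MS) * 100 if avg_ms > 0 else 0.0


impact = fps_impact(5.0)   # about 30% of the frame budget
```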
class CachedDataSource(DataSource):
    """Data source that wraps another source with caching."""

    def __init__(self, source: DataSource, max_items: int = 100):
        self.source = source
        self.max_items = max_items

    @property
    def name(self) -> str:
        return f"cached:{self.source.name}"

    def fetch(self) -> list[SourceItem]:
        items = self.source.fetch()
        return items[: self.max_items]

    def get_items(self) -> list[SourceItem]:
        if not hasattr(self, "_items") or self._items is None:
            self._items = self.fetch()
        return self._items
class TransformDataSource(DataSource):
    """Data source that transforms items from another source.

    Applies optional filter and map functions to each item.
    This enables chaining: source → transform → transformed output.

    Args:
        source: The source to fetch items from
        filter_fn: Optional function(item: SourceItem) -> bool
        map_fn: Optional function(item: SourceItem) -> SourceItem
    """

    def __init__(
        self,
        source: DataSource,
        filter_fn: Callable[[SourceItem], bool] | None = None,
        map_fn: Callable[[SourceItem], SourceItem] | None = None,
    ):
        self.source = source
        self.filter_fn = filter_fn
        self.map_fn = map_fn

    @property
    def name(self) -> str:
        return f"transform:{self.source.name}"

    def fetch(self) -> list[SourceItem]:
        items = self.source.fetch()

        if self.filter_fn:
            items = [item for item in items if self.filter_fn(item)]

        if self.map_fn:
            items = [self.map_fn(item) for item in items]

        return items
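TransformDataSource.fetch() filters first, then maps, so the map function never sees dropped items. The same composition sketched over plain dicts, outside the class machinery (field names here are illustrative):

```python
items = [
    {"content": "BREAKING: markets rally", "source": "rss"},
    {"content": "weather", "source": "rss"},
]

filter_fn = lambda it: len(it["content"]) > 10             # drop short items first
map_fn = lambda it: {**it, "content": it["content"].upper()}  # then rewrite survivors

# Same order as TransformDataSource.fetch(): filter, then map.
out = [map_fn(it) for it in items if filter_fn(it)]
```

Reversing the order would change behavior whenever the map affects what the filter tests (here it would not, since upper-casing preserves length, but the contract is worth keeping explicit).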
class CompositeDataSource(DataSource):
    """Data source that combines multiple sources."""

    def __init__(self, sources: list[DataSource]):
        self.sources = sources

    @property
    def name(self) -> str:
        return "composite"

    def fetch(self) -> list[SourceItem]:
        items = []
        for source in self.sources:
            items.extend(source.fetch())
        return items
class SourceRegistry:
    """Registry for data sources."""

    def __init__(self):
        self._sources: dict[str, DataSource] = {}
        self._default: str | None = None

    def register(self, source: DataSource, default: bool = False) -> None:
        self._sources[source.name] = source
        if default or self._default is None:
            self._default = source.name

    def get(self, name: str) -> DataSource | None:
        return self._sources.get(name)

    def list_all(self) -> dict[str, DataSource]:
        return dict(self._sources)

    def default(self) -> DataSource | None:
        if self._default:
            return self._sources.get(self._default)
        return None

    def create_headlines(self) -> HeadlinesDataSource:
        return HeadlinesDataSource()

    def create_poetry(self) -> PoetryDataSource:
        return PoetryDataSource()

    def create_pipeline(self, width: int = 80, height: int = 24) -> PipelineDataSource:
        return PipelineDataSource(width, height)


_global_registry: SourceRegistry | None = None


def get_source_registry() -> SourceRegistry:
    global _global_registry
    if _global_registry is None:
        _global_registry = SourceRegistry()
    return _global_registry


def init_default_sources() -> SourceRegistry:
    """Initialize the default source registry with standard sources."""
    registry = get_source_registry()
    registry.register(HeadlinesDataSource(), default=True)
    registry.register(PoetryDataSource())
    return registry
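The SourceRegistry.register() logic makes the first registered source the default unless a later call passes default=True. A trimmed sketch of that behavior; this stand-in registers by an explicit name string rather than reading source.name, purely to stay dependency-free:

```python
class SourceRegistry:
    """Trimmed stand-in: same default-selection rule as the registry above."""

    def __init__(self):
        self._sources = {}
        self._default = None

    def register(self, name: str, source, default: bool = False) -> None:
        self._sources[name] = source
        # First registration wins unless a later one is explicitly marked default.
        if default or self._default is None:
            self._default = name

    def default(self):
        return self._sources.get(self._default) if self._default else None


reg = SourceRegistry()
reg.register("headlines", "headlines-source")
reg.register("poetry", "poetry-source")
default = reg.default()            # still the first registration
reg.register("pipeline", "pipeline-source", default=True)
overridden = reg.default()         # explicit default=True takes over
```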
@@ -4,8 +4,8 @@ No internal dependencies.
"""

import os
import random
import sys
import time

# ─── ANSI ─────────────────────────────────────────────────
@@ -3,14 +3,33 @@ Google Translate wrapper and location→language detection.
Depends on: sources (for LOCATION_LANGS).
"""

import json
import re
import urllib.parse
import urllib.request
from functools import lru_cache

from engine.sources import LOCATION_LANGS

TRANSLATE_CACHE_SIZE = 500


@lru_cache(maxsize=TRANSLATE_CACHE_SIZE)
def _translate_cached(title: str, target_lang: str) -> str:
    """Cached translation implementation."""
    try:
        q = urllib.parse.quote(title)
        url = (
            "https://translate.googleapis.com/translate_a/single"
            f"?client=gtx&sl=en&tl={target_lang}&dt=t&q={q}"
        )
        req = urllib.request.Request(url, headers={"User-Agent": "mainline/0.1"})
        resp = urllib.request.urlopen(req, timeout=5)
        data = json.loads(resp.read())
        result = "".join(p[0] for p in data[0] if p[0]) or title
    except Exception:
        result = title
    return result


def detect_location_language(title):
@@ -22,20 +41,6 @@ def detect_location_language(title):
    return None


def translate_headline(title: str, target_lang: str) -> str:
    """Translate headline via Google Translate API (zero dependencies)."""
    return _translate_cached(title, target_lang)
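The switch from a hand-rolled `_TRANSLATE_CACHE` dict to `functools.lru_cache` bounds memory at 500 entries (the old dict grew without limit) while keeping the cache keyed on the (title, target_lang) pair. A sketch of the resulting behavior, with a call-counting stub in place of the network request:

```python
from functools import lru_cache

calls = {"n": 0}  # observe how many times the body actually runs


@lru_cache(maxsize=500)
def fake_translate(title: str, target_lang: str) -> str:
    """Stub standing in for _translate_cached; no network involved."""
    calls["n"] += 1
    return f"{target_lang}:{title}"


fake_translate("hello", "fr")
fake_translate("hello", "fr")    # cache hit: the body does not run again
fake_translate("hello", "de")    # different key: one more real call
```

Once 500 distinct (title, lang) pairs are cached, the least recently used entry is evicted automatically, which is exactly the bound the old dict lacked.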
60 engine/types.py (new file)
@@ -0,0 +1,60 @@
"""
Shared dataclasses for the mainline application.
Provides named types for tuple returns across modules.
"""

from dataclasses import dataclass


@dataclass
class HeadlineItem:
    """A single headline item: title, source, and timestamp."""

    title: str
    source: str
    timestamp: str

    def to_tuple(self) -> tuple[str, str, str]:
        """Convert to tuple for backward compatibility."""
        return (self.title, self.source, self.timestamp)

    @classmethod
    def from_tuple(cls, t: tuple[str, str, str]) -> "HeadlineItem":
        """Create from tuple for backward compatibility."""
        return cls(title=t[0], source=t[1], timestamp=t[2])


def items_to_tuples(items: list[HeadlineItem]) -> list[tuple[str, str, str]]:
    """Convert list of HeadlineItem to list of tuples."""
    return [item.to_tuple() for item in items]


def tuples_to_items(tuples: list[tuple[str, str, str]]) -> list[HeadlineItem]:
    """Convert list of tuples to list of HeadlineItem."""
    return [HeadlineItem.from_tuple(t) for t in tuples]


@dataclass
class FetchResult:
    """Result from fetch_all() or fetch_poetry()."""

    items: list[HeadlineItem]
    linked: int
    failed: int

    def to_legacy_tuple(self) -> tuple[list[tuple], int, int]:
        """Convert to legacy tuple format for backward compatibility."""
        return ([item.to_tuple() for item in self.items], self.linked, self.failed)


@dataclass
class Block:
    """Rendered headline block from make_block()."""

    content: list[str]
    color: str
    meta_row_index: int

    def to_legacy_tuple(self) -> tuple[list[str], str, int]:
        """Convert to legacy tuple format for backward compatibility."""
        return (self.content, self.color, self.meta_row_index)
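The point of the compatibility helpers in engine/types.py is that to_tuple() and from_tuple() are exact inverses, so call sites can migrate to HeadlineItem one at a time while tuple-based code keeps working. A trimmed round-trip check:

```python
from dataclasses import dataclass


# Trimmed copy of the HeadlineItem defined above, enough for the round trip.
@dataclass
class HeadlineItem:
    title: str
    source: str
    timestamp: str

    def to_tuple(self) -> tuple[str, str, str]:
        return (self.title, self.source, self.timestamp)

    @classmethod
    def from_tuple(cls, t: tuple[str, str, str]) -> "HeadlineItem":
        return cls(title=t[0], source=t[1], timestamp=t[2])


item = HeadlineItem("Markets rally", "rss", "09:00")
same = HeadlineItem.from_tuple(item.to_tuple())  # dataclass equality holds
```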
37  engine/viewport.py  Normal file
@@ -0,0 +1,37 @@
"""
Viewport utilities — terminal dimensions and ANSI positioning helpers.
No internal dependencies.
"""

import os


def tw() -> int:
    """Get terminal width (columns)."""
    try:
        return os.get_terminal_size().columns
    except Exception:
        return 80


def th() -> int:
    """Get terminal height (lines)."""
    try:
        return os.get_terminal_size().lines
    except Exception:
        return 24


def move_to(row: int, col: int = 1) -> str:
    """Generate ANSI escape to move cursor to row, col (1-indexed)."""
    return f"\033[{row};{col}H"


def clear_screen() -> str:
    """Clear screen and move cursor to home."""
    return "\033[2J\033[H"


def clear_line() -> str:
    """Clear current line."""
    return "\033[K"
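These helpers return strings rather than printing, so a frame can be assembled from several escapes and written in one flush. A standalone sketch of that composition (the two helpers are re-declared here so the snippet runs without importing `engine.viewport`):

```python
def move_to(row: int, col: int = 1) -> str:
    # Same ANSI cursor-position sequence as engine/viewport.py above.
    return f"\033[{row};{col}H"


def clear_line() -> str:
    return "\033[K"


def status_line(row: int, text: str) -> str:
    """Compose a single write that repaints one terminal row."""
    return move_to(row, 1) + clear_line() + text


frame = status_line(3, "LINKED: 12  FAILED: 1")
assert frame.startswith("\033[3;1H\033[K")
```

Building one string per frame and writing it in a single call avoids flicker from interleaved partial writes.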
BIN  fonts/AgorTechnoDemo-Regular.otf  Normal file  (binary file not shown)
BIN  fonts/AlphatronDemo-Display.otf  Normal file  (binary file not shown)
BIN  fonts/CSBishopDrawn-Italic.otf  Normal file  (binary file not shown)
BIN  fonts/CSBishopDrawn-Italic.ttf  Normal file  (binary file not shown)
BIN  fonts/CSBishopDrawn-Regular.otf  Normal file  (binary file not shown)
BIN  fonts/CSBishopDrawn-Regular.ttf  Normal file  (binary file not shown)
BIN  fonts/Corptic DEMO.otf  Normal file  (binary file not shown)
BIN  fonts/CubaTechnologyDemo-Regular.otf  Normal file  (binary file not shown)
BIN  fonts/CyberformDemo-Oblique.otf  Normal file  (binary file not shown)
BIN  fonts/CyberformDemo.otf  Normal file  (binary file not shown)
BIN  fonts/Eyekons.otf  Normal file  (binary file not shown)
BIN  fonts/KATA Mac.otf  Normal file  (binary file not shown)
BIN  fonts/KATA Mac.ttf  Normal file  (binary file not shown)
BIN  fonts/KATA.otf  Normal file  (binary file not shown)
BIN  fonts/KATA.ttf  Normal file  (binary file not shown)
BIN  fonts/Microbots Demo.otf  Normal file  (binary file not shown)
BIN  fonts/ModernSpaceDemo-Regular.otf  Normal file  (binary file not shown)
BIN  fonts/Neoform-Demo.otf  Normal file  (binary file not shown)
BIN  fonts/Pixel Sparta.otf  Normal file  (binary file not shown)
BIN  fonts/RaceHugoDemo-Regular.otf  Normal file  (binary file not shown)
BIN  fonts/Resond-Regular.otf  Normal file  (binary file not shown)
BIN  fonts/Robocops-Demo.otf  Normal file  (binary file not shown)
BIN  fonts/Synthetix.otf  Normal file  (binary file not shown)
BIN  fonts/Xeonic.ttf  Normal file  (binary file not shown)
30  hk.pkl  Normal file
@@ -0,0 +1,30 @@
amends "package://github.com/jdx/hk/releases/download/v1.38.0/hk@1.38.0#/Config.pkl"
import "package://github.com/jdx/hk/releases/download/v1.38.0/hk@1.38.0#/Builtins.pkl"

hooks {
    ["pre-commit"] {
        fix = true
        stash = "git"
        steps {
            ["ruff-format"] = (Builtins.ruff_format) {
                prefix = "uv run"
            }
            ["ruff"] = (Builtins.ruff) {
                prefix = "uv run"
                check = "ruff check engine/ tests/"
                fix = "ruff check --fix --unsafe-fixes engine/ tests/"
            }
        }
    }
    ["pre-push"] {
        steps {
            ["ruff"] = (Builtins.ruff) {
                prefix = "uv run"
                check = "ruff check engine/ tests/"
            }
            ["benchmark"] {
                check = "uv run python -m engine.benchmark --hook --displays null --iterations 20"
            }
        }
    }
}
31  kitty_test.py  Normal file
@@ -0,0 +1,31 @@
#!/usr/bin/env python3
"""Test script for Kitty graphics display."""

import sys


def test_kitty_simple():
    """Test simple Kitty graphics output with embedded PNG."""
    import base64

    # Minimal 1x1 red pixel PNG (pre-encoded)
    # This is a tiny valid PNG with a red pixel
    png_red_1x1 = (
        b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00"
        b"\x01\x00\x00\x00\x01\x08\x02\x00\x00\x00\x90wS\xde"
        b"\x00\x00\x00\x0cIDATx\x9cc\xf8\xcf\xc0\x00\x00\x00"
        b"\x03\x00\x01\x00\x05\xfe\xd4\x00\x00\x00\x00IEND\xaeB`\x82"
    )

    encoded = base64.b64encode(png_red_1x1).decode("ascii")

    graphic = f"\x1b_Gf=100,t=d,s=1,v=1,c=1,r=1;{encoded}\x1b\\"
    sys.stdout.buffer.write(graphic.encode("utf-8"))
    sys.stdout.flush()

    print("\n[If you see a red dot above, Kitty graphics is working!]")
    print("[If you see nothing or garbage, it's not working]")


if __name__ == "__main__":
    test_kitty_simple()
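A 1×1 pixel fits in a single escape, but larger images will not: the Kitty graphics protocol expects the base64 payload to be split into chunks (conventionally at most 4096 bytes each), with `m=1` on every chunk except the last. A hedged sketch of that chunking — the key meanings (`f=100` for PNG, `t=d` for inline data, `m` for more-chunks) come from the Kitty protocol spec, not from this repo, and `kitty_chunks` is a hypothetical helper:

```python
import base64


def kitty_chunks(png: bytes, chunk: int = 4096) -> list[str]:
    """Split a PNG payload into Kitty graphics escapes.

    The first escape carries the format keys; m=1 marks continuation
    chunks, m=0 marks the final one.
    """
    payload = base64.b64encode(png).decode("ascii")
    pieces = [payload[i : i + chunk] for i in range(0, len(payload), chunk)] or [""]
    escapes = []
    for i, piece in enumerate(pieces):
        keys = "f=100,t=d," if i == 0 else ""  # f=100: PNG; t=d: direct/inline data
        m = 0 if i == len(pieces) - 1 else 1
        escapes.append(f"\x1b_G{keys}m={m};{piece}\x1b\\")
    return escapes
```

Each escape would then be written to `sys.stdout.buffer` in order, just as the single-escape test above does.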
35  mainline.py
@@ -5,40 +5,7 @@ Digital news consciousness stream.
 Matrix aesthetic · THX-1138 hue.
 """

-import subprocess, sys, pathlib
-
-# ─── BOOTSTRAP VENV ───────────────────────────────────────
-_VENV = pathlib.Path(__file__).resolve().parent / ".mainline_venv"
-_MARKER = _VENV / ".installed_v3"
-
-def _ensure_venv():
-    """Create a local venv and install deps if needed."""
-    if _MARKER.exists():
-        return
-    import venv
-    print("\033[2;38;5;34m > first run — creating environment...\033[0m")
-    venv.create(str(_VENV), with_pip=True, clear=True)
-    pip = str(_VENV / "bin" / "pip")
-    subprocess.check_call(
-        [pip, "install", "feedparser", "Pillow", "-q"],
-        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
-    )
-    _MARKER.touch()
-
-_ensure_venv()
-
-# Install sounddevice on first run after v3
-_MARKER_SD = _VENV / ".installed_sd"
-if not _MARKER_SD.exists():
-    _pip = str(_VENV / "bin" / "pip")
-    subprocess.check_call([_pip, "install", "sounddevice", "numpy", "-q"],
-                          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
-    _MARKER_SD.touch()
-
-sys.path.insert(0, str(next((_VENV / "lib").glob("python*/site-packages"))))
-
-# ─── DELEGATE TO ENGINE ───────────────────────────────────
-from engine.app import main  # noqa: E402
+from engine.app import main

 if __name__ == "__main__":
     main()
108  mise.toml  Normal file
@@ -0,0 +1,108 @@
[tools]
python = "3.12"
hk = "latest"
pkl = "latest"

[tasks]
# =====================
# Testing
# =====================

test = "uv run pytest"
test-v = { run = "uv run pytest -v", depends = ["sync-all"] }
test-cov = { run = "uv run pytest --cov=engine --cov-report=term-missing --cov-report=html", depends = ["sync-all"] }
test-cov-open = { run = "mise run test-cov && open htmlcov/index.html", depends = ["sync-all"] }

test-browser-install = { run = "uv run playwright install chromium", depends = ["sync-all"] }
test-browser = { run = "uv run pytest tests/e2e/", depends = ["test-browser-install"] }

# =====================
# Linting & Formatting
# =====================

lint = "uv run ruff check engine/ mainline.py"
lint-fix = "uv run ruff check --fix engine/ mainline.py"
format = "uv run ruff format engine/ mainline.py"

# =====================
# Runtime Modes
# =====================

run = "uv run mainline.py"
run-poetry = "uv run mainline.py --poetry"
run-firehose = "uv run mainline.py --firehose"

run-websocket = { run = "uv run mainline.py --display websocket", depends = ["sync-all"] }
run-sixel = { run = "uv run mainline.py --display sixel", depends = ["sync-all"] }
run-kitty = { run = "uv run mainline.py --display kitty", depends = ["sync-all"] }
run-pygame = { run = "uv run mainline.py --display pygame", depends = ["sync-all"] }
run-both = { run = "uv run mainline.py --display both", depends = ["sync-all"] }
run-client = { run = "mise run run-both & sleep 2 && $(open http://localhost:8766 2>/dev/null || xdg-open http://localhost:8766 2>/dev/null || echo 'Open http://localhost:8766 manually'); wait", depends = ["sync-all"] }

# =====================
# Pipeline Architecture (unified Stage-based)
# =====================

run-pipeline = { run = "uv run mainline.py --pipeline --display pygame", depends = ["sync-all"] }
run-pipeline-demo = { run = "uv run mainline.py --pipeline --pipeline-preset demo --display pygame", depends = ["sync-all"] }
run-pipeline-poetry = { run = "uv run mainline.py --pipeline --pipeline-preset poetry --display pygame", depends = ["sync-all"] }
run-pipeline-websocket = { run = "uv run mainline.py --pipeline --pipeline-preset websocket", depends = ["sync-all"] }
run-pipeline-firehose = { run = "uv run mainline.py --pipeline --pipeline-preset firehose --display pygame", depends = ["sync-all"] }

# =====================
# Presets (Animation-controlled modes)
# =====================

run-preset-demo = { run = "uv run mainline.py --preset demo --display pygame", depends = ["sync-all"] }
run-preset-pipeline = { run = "uv run mainline.py --preset pipeline --display pygame", depends = ["sync-all"] }

# =====================
# Command & Control
# =====================

cmd = "uv run cmdline.py"
cmd-stats = { run = "uv run cmdline.py -w \"/effects stats\"", depends = ["sync-all"] }

# =====================
# Benchmark
# =====================

benchmark = { run = "uv run python -m engine.benchmark", depends = ["sync-all"] }
benchmark-json = { run = "uv run python -m engine.benchmark --format json --output benchmark.json", depends = ["sync-all"] }
benchmark-report = { run = "uv run python -m engine.benchmark --output BENCHMARK.md", depends = ["sync-all"] }

# Initialize ntfy topics (warm up before first use)
topics-init = "curl -s -d 'init' https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd > /dev/null && curl -s -d 'init' https://ntfy.sh/klubhaus_terminal_mainline_cc_resp > /dev/null && curl -s -d 'init' https://ntfy.sh/klubhaus_terminal_mainline > /dev/null"

# =====================
# Daemon
# =====================

daemon = "nohup uv run mainline.py > nohup.out 2>&1 &"
daemon-stop = "pkill -f 'uv run mainline.py' 2>/dev/null || true"
daemon-restart = "mise run daemon-stop && sleep 2 && mise run daemon"

# =====================
# Environment
# =====================

sync = "uv sync"
sync-all = "uv sync --all-extras"
install = "mise run sync"
install-dev = { run = "mise run sync-all && uv sync --group dev", depends = ["sync-all"] }
bootstrap = { run = "mise run sync-all && uv run mainline.py --help", depends = ["sync-all"] }

clean = "rm -rf .venv htmlcov .coverage tests/.pytest_cache .mainline_cache_*.json nohup.out"
clobber = "git clean -fdx && rm -rf .venv htmlcov .coverage tests/.pytest_cache .mainline_cache_*.json nohup.out"

# =====================
# CI/CD
# =====================

ci = { run = "mise run topics-init && mise run lint && mise run test-cov", depends = ["topics-init", "lint", "test-cov"] }

# =====================
# Git Hooks (via hk)
# =====================

pre-commit = "hk run pre-commit"
107  pyproject.toml  Normal file
@@ -0,0 +1,107 @@
[project]
name = "mainline"
version = "0.1.0"
description = "Terminal news ticker with Matrix aesthetic"
readme = "README.md"
requires-python = ">=3.10"
authors = [
    { name = "Mainline", email = "mainline@example.com" }
]
license = { text = "MIT" }
classifiers = [
    "Development Status :: 4 - Beta",
    "Environment :: Console",
    "License :: OSI Approved :: MIT License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Topic :: Terminals",
]

dependencies = [
    "feedparser>=6.0.0",
    "Pillow>=10.0.0",
    "pyright>=1.1.408",
]

[project.optional-dependencies]
mic = [
    "sounddevice>=0.4.0",
    "numpy>=1.24.0",
]
websocket = [
    "websockets>=12.0",
]
sixel = [
    "Pillow>=10.0.0",
]
pygame = [
    "pygame>=2.0.0",
]
browser = [
    "playwright>=1.40.0",
]
dev = [
    "pytest>=8.0.0",
    "pytest-cov>=4.1.0",
    "pytest-mock>=3.12.0",
    "ruff>=0.1.0",
]

[project.scripts]
mainline = "engine.app:main"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[dependency-groups]
dev = [
    "pytest>=8.0.0",
    "pytest-cov>=4.1.0",
    "pytest-mock>=3.12.0",
    "ruff>=0.1.0",
]

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = [
    "--strict-markers",
    "--tb=short",
    "-v",
]
markers = [
    "benchmark: marks tests as performance benchmarks (may be slow)",
    "e2e: marks tests as end-to-end tests (require network/display)",
    "integration: marks tests as integration tests (require external services)",
    "ntfy: marks tests that require ntfy service",
]
filterwarnings = [
    "ignore::DeprecationWarning",
]

[tool.coverage.run]
source = ["engine"]
branch = true

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "def __repr__",
    "raise AssertionError",
    "raise NotImplementedError",
    "if __name__ == .__main__.:",
    "if TYPE_CHECKING:",
    "@abstractmethod",
]

[tool.ruff]
line-length = 88
target-version = "py310"

[tool.ruff.lint]
select = ["E", "F", "W", "I", "N", "UP", "B", "C4", "SIM"]
ignore = ["E501", "SIM105", "N806", "B007", "SIM108"]
4  requirements-dev.txt  Normal file
@@ -0,0 +1,4 @@
pytest>=8.0.0
pytest-cov>=4.1.0
pytest-mock>=3.12.0
ruff>=0.1.0
4  requirements.txt  Normal file
@@ -0,0 +1,4 @@
feedparser>=6.0.0
Pillow>=10.0.0
sounddevice>=0.4.0
numpy>=1.24.0
0  tests/__init__.py  Normal file  (empty)
36  tests/conftest.py  Normal file
@@ -0,0 +1,36 @@
"""
Pytest configuration for mainline.
"""

import pytest


def pytest_configure(config):
    """Configure pytest to skip integration tests by default."""
    config.addinivalue_line(
        "markers",
        "integration: marks tests as integration tests (require external services)",
    )
    config.addinivalue_line("markers", "ntfy: marks tests that require ntfy service")


def pytest_collection_modifyitems(config, items):
    """Skip integration/e2e tests unless explicitly requested with -m."""
    # Get the current marker expression
    marker_expr = config.getoption("-m", default="")

    # If explicitly running integration or e2e, don't skip them
    if marker_expr in ("integration", "e2e", "integration or e2e"):
        return

    # Skip integration tests
    skip_integration = pytest.mark.skip(reason="need -m integration to run")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip_integration)

    # Skip e2e tests by default (they require browser/display)
    skip_e2e = pytest.mark.skip(reason="need -m e2e to run")
    for item in items:
        if "e2e" in item.keywords and "integration" not in item.keywords:
            item.add_marker(skip_e2e)
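The selection logic in this conftest boils down to a small predicate: an item marked `integration` or `e2e` runs only when the `-m` expression names it. A standalone restatement of that rule (`should_skip` is a hypothetical helper written for illustration, not part of the repo):

```python
def should_skip(keywords: set[str], marker_expr: str) -> bool:
    """Mirror conftest.py: integration/e2e items run only when asked for via -m."""
    # Explicitly requested marker expressions disable the default skipping.
    if marker_expr in ("integration", "e2e", "integration or e2e"):
        return False
    # Otherwise anything carrying either marker is skipped by default.
    return bool(keywords & {"integration", "e2e"})


assert should_skip({"integration"}, "") is True      # skipped on a plain `pytest`
assert should_skip({"e2e"}, "e2e") is False          # runs under `pytest -m e2e`
assert should_skip({"unit"}, "") is False            # unmarked tests always run
```

One caveat the real hook inherits: only those three exact `-m` strings opt in, so a compound expression like `-m "e2e and not slow"` would still be skipped.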
133  tests/e2e/test_web_client.py  Normal file
@@ -0,0 +1,133 @@
"""
End-to-end tests for web client with headless browser.
"""

import os
import socketserver
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

import pytest

CLIENT_DIR = Path(__file__).parent.parent.parent / "client"


class ThreadedHTTPServer(socketserver.ThreadingMixIn, HTTPServer):
    """Threaded HTTP server for handling concurrent requests."""

    daemon_threads = True


@pytest.fixture(scope="module")
def http_server():
    """Start a local HTTP server for the client."""
    os.chdir(CLIENT_DIR)

    handler = SimpleHTTPRequestHandler
    server = ThreadedHTTPServer(("127.0.0.1", 0), handler)
    port = server.server_address[1]

    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()

    yield f"http://127.0.0.1:{port}"

    server.shutdown()


class TestWebClient:
    """Tests for the web client using Playwright."""

    @pytest.fixture(autouse=True)
    def setup_browser(self):
        """Set up browser for tests."""
        pytest.importorskip("playwright")
        from playwright.sync_api import sync_playwright

        self.playwright = sync_playwright().start()
        self.browser = self.playwright.chromium.launch(headless=True)
        self.context = self.browser.new_context()
        self.page = self.context.new_page()

        yield

        self.page.close()
        self.context.close()
        self.browser.close()
        self.playwright.stop()

    def test_client_loads(self, http_server):
        """Web client loads without errors."""
        response = self.page.goto(http_server)
        assert response.status == 200, f"Page load failed with status {response.status}"

        self.page.wait_for_load_state("domcontentloaded")

        content = self.page.content()
        assert "<canvas" in content, "Canvas element not found in page"

        canvas = self.page.locator("#terminal")
        assert canvas.count() > 0, "Canvas not found"

    def test_status_shows_connecting(self, http_server):
        """Status shows connecting initially."""
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        status = self.page.locator("#status")
        assert status.count() > 0, "Status element not found"

    def test_canvas_has_dimensions(self, http_server):
        """Canvas has correct dimensions after load."""
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        canvas = self.page.locator("#terminal")
        assert canvas.count() > 0, "Canvas not found"

    def test_no_console_errors_on_load(self, http_server):
        """No JavaScript errors on page load (websocket errors are expected without server)."""
        js_errors = []

        def handle_console(msg):
            if msg.type == "error":
                text = msg.text
                if "WebSocket" not in text:
                    js_errors.append(text)

        self.page.on("console", handle_console)
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        assert len(js_errors) == 0, f"JavaScript errors: {js_errors}"


class TestWebClientProtocol:
    """Tests for WebSocket protocol handling in client."""

    @pytest.fixture(autouse=True)
    def setup_browser(self):
        """Set up browser for tests."""
        pytest.importorskip("playwright")
        from playwright.sync_api import sync_playwright

        self.playwright = sync_playwright().start()
        self.browser = self.playwright.chromium.launch(headless=True)
        self.context = self.browser.new_context()
        self.page = self.context.new_page()

        yield

        self.page.close()
        self.context.close()
        self.browser.close()
        self.playwright.stop()

    def test_websocket_reconnection(self, http_server):
        """Client attempts reconnection on disconnect."""
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        status = self.page.locator("#status")
        assert status.count() > 0, "Status element not found"
236  tests/fixtures/__init__.py  vendored  Normal file
@@ -0,0 +1,236 @@
"""
Pytest fixtures for mocking external dependencies (network, filesystem).
"""

import json
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def mock_feed_response():
    """Mock RSS feed response data."""
    return b"""<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
<title>Test Feed</title>
<link>https://example.com</link>
<item>
<title>Test Headline One</title>
<pubDate>Sat, 15 Mar 2025 12:00:00 GMT</pubDate>
</item>
<item>
<title>Test Headline Two</title>
<pubDate>Sat, 15 Mar 2025 11:00:00 GMT</pubDate>
</item>
<item>
<title>Sports: Team Wins Championship</title>
<pubDate>Sat, 15 Mar 2025 10:00:00 GMT</pubDate>
</item>
</channel>
</rss>"""


@pytest.fixture
def mock_gutenberg_response():
    """Mock Project Gutenberg text response."""
    return """Project Gutenberg's Collection, by Various

*** START OF SOME TEXT ***
This is a test poem with multiple lines
that should be parsed as stanzas.

Another stanza here with different content
and more lines to test the parsing logic.

Yet another stanza for variety
in the test data.

*** END OF SOME TEXT ***"""


@pytest.fixture
def mock_gutenberg_empty():
    """Mock Gutenberg response with no valid stanzas."""
    return """Project Gutenberg's Collection

*** START OF TEXT ***
THIS IS ALL CAPS AND SHOULD BE SKIPPED

I.

*** END OF TEXT ***"""


@pytest.fixture
def mock_ntfy_message():
    """Mock ntfy.sh SSE message."""
    return json.dumps(
        {
            "id": "test123",
            "event": "message",
            "title": "Test Title",
            "message": "Test message body",
            "time": 1234567890,
        }
    ).encode()


@pytest.fixture
def mock_ntfy_keepalive():
    """Mock ntfy.sh keepalive message."""
    return b'data: {"event":"keepalive"}\n\n'


@pytest.fixture
def mock_google_translate_response():
    """Mock Google Translate API response."""
    return json.dumps(
        [
            [["Translated text", "Original text", None, 0.8], None, "en"],
            None,
            None,
            [],
            [],
            [],
            [],
        ]
    )


@pytest.fixture
def mock_feedparser():
    """Create a mock feedparser.parse function."""

    def _mock(data):
        mock_result = MagicMock()
        mock_result.bozo = False
        mock_result.entries = [
            {
                "title": "Test Headline",
                "published_parsed": (2025, 3, 15, 12, 0, 0, 0, 0, 0),
            },
            {
                "title": "Another Headline",
                "updated_parsed": (2025, 3, 15, 11, 0, 0, 0, 0, 0),
            },
        ]
        return mock_result

    return _mock


@pytest.fixture
def mock_urllib_open(mock_feed_response):
    """Create a mock urllib.request.urlopen that returns feed data."""

    def _mock(url):
        mock_response = MagicMock()
        mock_response.read.return_value = mock_feed_response
        return mock_response

    return _mock


@pytest.fixture
def sample_items():
    """Sample items as returned by fetch module (title, source, timestamp)."""
    return [
        ("Headline One", "Test Source", "12:00"),
        ("Headline Two", "Another Source", "11:30"),
        ("Headline Three", "Third Source", "10:45"),
    ]


@pytest.fixture
def sample_config():
    """Sample config for testing."""
    from engine.config import Config

    return Config(
        headline_limit=100,
        feed_timeout=10,
        mic_threshold_db=50,
        mode="news",
        firehose=False,
        ntfy_topic="https://ntfy.sh/test/json",
        ntfy_reconnect_delay=5,
        message_display_secs=30,
        font_dir="fonts",
        font_path="",
        font_index=0,
        font_picker=False,
        font_sz=60,
        render_h=8,
        ssaa=4,
        scroll_dur=5.625,
        frame_dt=0.05,
        firehose_h=12,
        grad_speed=0.08,
        glitch_glyphs="░▒▓█▌▐",
        kata_glyphs="ハミヒーウ",
        script_fonts={},
    )


@pytest.fixture
def poetry_config():
    """Sample config for poetry mode."""
    from engine.config import Config

    return Config(
        headline_limit=100,
        feed_timeout=10,
        mic_threshold_db=50,
        mode="poetry",
        firehose=False,
        ntfy_topic="https://ntfy.sh/test/json",
        ntfy_reconnect_delay=5,
        message_display_secs=30,
        font_dir="fonts",
        font_path="",
        font_index=0,
        font_picker=False,
        font_sz=60,
        render_h=8,
        ssaa=4,
        scroll_dur=5.625,
        frame_dt=0.05,
        firehose_h=12,
        grad_speed=0.08,
        glitch_glyphs="░▒▓█▌▐",
        kata_glyphs="ハミヒーウ",
        script_fonts={},
    )


@pytest.fixture
def firehose_config():
    """Sample config with firehose enabled."""
    from engine.config import Config

    return Config(
        headline_limit=100,
        feed_timeout=10,
        mic_threshold_db=50,
        mode="news",
        firehose=True,
        ntfy_topic="https://ntfy.sh/test/json",
        ntfy_reconnect_delay=5,
        message_display_secs=30,
        font_dir="fonts",
        font_path="",
        font_index=0,
        font_picker=False,
        font_sz=60,
        render_h=8,
        ssaa=4,
        scroll_dur=5.625,
        frame_dt=0.05,
        firehose_h=12,
        grad_speed=0.08,
        glitch_glyphs="░▒▓█▌▐",
        kata_glyphs="ハミヒーウ",
        script_fonts={},
    )
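As a sanity check on the fixture data itself: the `mock_feed_response` bytes form well-formed RSS 2.0 with exactly three items, which stdlib `xml.etree` can verify without feedparser. The sketch below uses a trimmed copy of that fixture XML (pubDate elements omitted) so it runs standalone:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the mock_feed_response fixture above.
FEED = b"""<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
<title>Test Feed</title>
<item><title>Test Headline One</title></item>
<item><title>Test Headline Two</title></item>
<item><title>Sports: Team Wins Championship</title></item>
</channel>
</rss>"""

root = ET.fromstring(FEED)
titles = [item.findtext("title") for item in root.iter("item")]
assert titles == [
    "Test Headline One",
    "Test Headline Two",
    "Sports: Team Wins Championship",
]
```

Keeping the fixture valid XML matters: feedparser would otherwise set `bozo` on the parse result, and tests that assert `bozo is False` (as `mock_feedparser` does) would silently diverge from what the byte fixture produces.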
Some files were not shown because too many files have changed in this diff.