forked from genewildish/Mainline
Compare commits
28 Commits
60ae4f7dfb
...
docs/updat

Commits: 9415e18679, 0819f8d160, edd1416407, ac9b47f668, b149825bcb, 1b29e91f9d, 001158214c, 31f5d9f171, bc20a35ea9, d4d0344a12, 84cb16d463, d67423fe4c, ebe7b04ba5, abc4483859, d9422b1fec, 6daea90b0a, 9d9172ef0d, 667bef2685, f085042dee, 8b696c96ce, 72d21459ca, 58dbbbdba7, 7ff78c66ed, 2229ccdea4, f13e89f823, 4228400c43, 05cc475858, cfd7e8931e
.gitignore (vendored) — 3
```diff
@@ -9,6 +9,3 @@ htmlcov/
 .coverage
 .pytest_cache/
 *.egg-info/
-coverage.xml
-*.dot
-*.png
```
@@ -1,78 +0,0 @@

---
name: mainline-architecture
description: Pipeline stages, capability resolution, and core architecture patterns
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's pipeline architecture - the Stage-based system for dependency resolution, data flow, and component composition.

## Key Concepts

### Stage Class (engine/pipeline/core.py)

The `Stage` ABC is the foundation. All pipeline components inherit from it:

```python
class Stage(ABC):
    name: str
    category: str  # "source", "effect", "overlay", "display", "camera"
    optional: bool = False

    @property
    def capabilities(self) -> set[str]:
        """What this stage provides (e.g., 'source.headlines')"""
        return set()

    @property
    def dependencies(self) -> list[str]:
        """What this stage needs (e.g., ['source'])"""
        return []
```

### Capability-Based Dependencies

The Pipeline resolves dependencies using **prefix matching**:
- `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
- This allows flexible composition without hardcoding specific stage names
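The prefix rule can be sketched as a standalone function (an illustration of the matching semantics only; `matches` and `resolve` are hypothetical names, not the controller's API):

```python
def matches(dependency: str, capability: str) -> bool:
    # "source" satisfies "source" itself and any dotted refinement
    # such as "source.headlines" or "source.poetry".
    return capability == dependency or capability.startswith(dependency + ".")

def resolve(dependency: str, capabilities: set[str]) -> set[str]:
    # Collect every registered capability that satisfies the dependency.
    return {c for c in capabilities if matches(dependency, c)}

caps = {"source.headlines", "source.poetry", "display.terminal"}
providers = resolve("source", caps)  # both "source.*" capabilities match
```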
### DataType Enum

PureData-style data types for inlet/outlet validation:
- `SOURCE_ITEMS`: List[SourceItem] - raw items from sources
- `ITEM_TUPLES`: List[tuple] - (title, source, timestamp) tuples
- `TEXT_BUFFER`: List[str] - rendered ANSI buffer
- `RAW_TEXT`: str - raw text strings
- `PIL_IMAGE`: PIL Image object

### Pipeline Execution

The Pipeline (engine/pipeline/controller.py):
1. Collects all stages from StageRegistry
2. Resolves dependencies using prefix matching
3. Executes stages in dependency order
4. Handles errors for non-optional stages
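Steps 2-3 amount to a topological sort over resolved capability edges; a minimal sketch using the standard library (the stage names and edge map are illustrative, not the real controller structures):

```python
from graphlib import TopologicalSorter

# stage -> stages it depends on, after prefix matching has resolved
# abstract dependencies like "source" to concrete providers
edges = {
    "display.terminal": {"effect.noise"},
    "effect.noise": {"source.headlines"},
    "source.headlines": set(),
}

order = list(TopologicalSorter(edges).static_order())
# sources come first, displays last
```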
### Canvas & Camera

- **Canvas** (`engine/canvas.py`): 2D rendering surface with dirty region tracking
- **Camera** (`engine/camera.py`): Viewport controller for scrolling content

Canvas tracks dirty regions automatically when content is written via `put_region`, `put_text`, and `fill`, enabling partial buffer updates.

## Adding New Stages

1. Create a class inheriting from `Stage`
2. Define `capabilities` and `dependencies` properties
3. Implement required abstract methods
4. Register in StageRegistry or use as adapter

## Common Patterns

- Use adapters (engine/pipeline/adapters.py) to wrap existing components as stages
- Set `optional=True` for stages that can fail gracefully
- Use `stage_type` and `render_order` for execution ordering
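Putting the steps together, a new source stage might look like the sketch below. The `Stage` stub here is trimmed from the ABC shown earlier so the example is self-contained; the real class has additional abstract methods not listed in this skill.

```python
from abc import ABC

class Stage(ABC):  # trimmed stand-in for engine.pipeline.core.Stage
    name: str
    category: str
    optional: bool = False

    @property
    def capabilities(self) -> set[str]:
        return set()

    @property
    def dependencies(self) -> list[str]:
        return []

class HeadlineSource(Stage):
    name = "headlines"
    category = "source"

    @property
    def capabilities(self) -> set[str]:
        # Advertised under the "source." prefix so any stage that
        # depends on "source" can bind to it.
        return {"source.headlines"}
```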
@@ -1,86 +0,0 @@

---
name: mainline-display
description: Display backend implementation and the Display protocol
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's display backend system - how to implement new display backends and how the Display protocol works.

## Key Concepts

### Display Protocol

All backends implement a common Display protocol (in `engine/display/__init__.py`):

```python
class Display(Protocol):
    def show(self, buf: list[str]) -> None:
        """Display the buffer"""
        ...

    def clear(self) -> None:
        """Clear the display"""
        ...

    def size(self) -> tuple[int, int]:
        """Return (width, height)"""
        ...
```

### DisplayRegistry

Discovers and manages backends:

```python
from engine.display import get_monitor

display = get_monitor("terminal")  # or "websocket", "sixel", "null", "multi"
```
### Available Backends

| Backend | File | Description |
|---------|------|-------------|
| terminal | backends/terminal.py | ANSI terminal output |
| websocket | backends/websocket.py | Web browser via WebSocket |
| sixel | backends/sixel.py | Sixel graphics (pure Python) |
| null | backends/null.py | Headless for testing |
| multi | backends/multi.py | Forwards to multiple displays |

### WebSocket Backend

- WebSocket server: port 8765
- HTTP server: port 8766 (serves client/index.html)
- Client has ANSI color parsing and fullscreen support

### Multi Backend

Forwards to multiple displays simultaneously - useful for `terminal + websocket`.

## Adding a New Backend

1. Create `engine/display/backends/my_backend.py`
2. Implement the Display protocol methods
3. Register in `engine/display/__init__.py`'s `DisplayRegistry`

Required methods:
- `show(buf: list[str])` - Display buffer
- `clear()` - Clear screen
- `size() -> tuple[int, int]` - Terminal dimensions

Optional methods:
- `title(text: str)` - Set window title
- `cursor(show: bool)` - Control cursor
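As a shape reference, a null-style backend satisfying all three required methods might look like this (a sketch; the real backend in `backends/null.py` may differ):

```python
class MemoryDisplay:
    """Null-style backend: satisfies the Display protocol and keeps
    the last buffer in memory. Illustrative only."""

    def __init__(self, width: int = 80, height: int = 24) -> None:
        self._width, self._height = width, height
        self.last_buffer: list[str] = []

    def show(self, buf: list[str]) -> None:
        # Copy so later mutation of the caller's list can't leak in.
        self.last_buffer = list(buf)

    def clear(self) -> None:
        self.last_buffer = []

    def size(self) -> tuple[int, int]:
        return (self._width, self._height)
```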
## Usage

```bash
python mainline.py --display terminal   # default
python mainline.py --display websocket
python mainline.py --display sixel
python mainline.py --display both       # terminal + websocket
```
@@ -1,113 +0,0 @@

---
name: mainline-effects
description: How to add new effect plugins to Mainline's effect system
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's effect plugin system - how to create, configure, and integrate visual effects into the pipeline.

## Key Concepts

### EffectPlugin ABC (engine/effects/types.py)

All effects must inherit from `EffectPlugin` and implement:

```python
class EffectPlugin(ABC):
    name: str
    config: EffectConfig
    param_bindings: dict[str, dict[str, str | float]] = {}
    supports_partial_updates: bool = False

    @abstractmethod
    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        """Process buffer with effect applied"""
        ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None:
        """Configure the effect"""
        ...
```

### EffectContext

Passed to every effect's process method:

```python
@dataclass
class EffectContext:
    terminal_width: int
    terminal_height: int
    scroll_cam: int
    ticker_height: int
    camera_x: int = 0
    mic_excess: float = 0.0
    grad_offset: float = 0.0
    frame_number: int = 0
    has_message: bool = False
    items: list = field(default_factory=list)
    _state: dict[str, Any] = field(default_factory=dict)
```

Access sensor values via `ctx.get_sensor_value("sensor_name")`.

### EffectConfig

Configuration dataclass:

```python
@dataclass
class EffectConfig:
    enabled: bool = True
    intensity: float = 1.0
    params: dict[str, Any] = field(default_factory=dict)
```

### Partial Updates

For performance optimization, set `supports_partial_updates = True` and implement `process_partial`:

```python
class MyEffect(EffectPlugin):
    supports_partial_updates = True

    def process_partial(self, buf, ctx, partial: PartialUpdate) -> list[str]:
        # Only process changed regions
        ...
```

## Adding a New Effect

1. Create file in `effects_plugins/my_effect.py`
2. Inherit from `EffectPlugin`
3. Implement `process()` and `configure()`
4. Add to `effects_plugins/__init__.py` (runtime discovery via issubclass checks)
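Stripped of registration, the core of step 3 is a pure buffer transform. A toy `process()` body (standalone here; a real plugin receives an `EffectContext` and reads `self.config.intensity` instead of the `intensity` argument):

```python
def process(buf: list[str], intensity: float = 1.0) -> list[str]:
    # Toy "shout" effect: uppercase every line of the buffer.
    # intensity stands in for self.config.intensity.
    if intensity <= 0:
        return buf
    return [line.upper() for line in buf]

frame = process(["breaking news", "markets rally"])
```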
## Param Bindings

Declarative sensor-to-param mappings:

```python
param_bindings = {
    "intensity": {"sensor": "mic", "transform": "linear"},
    "rate": {"sensor": "oscillator", "transform": "exponential"},
}
```

Transforms: `linear`, `exponential`, `threshold`

## Effect Chain

Effects are chained via `engine/effects/chain.py`, which processes each effect in order, passing its output to the next.
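That chaining is effectively a left fold over the plugin list; a sketch of the idea (not the actual `chain.py` code):

```python
from collections.abc import Callable

Effect = Callable[[list[str]], list[str]]

def run_chain(buf: list[str], effects: list[Effect]) -> list[str]:
    for effect in effects:
        buf = effect(buf)  # each effect sees the previous effect's output
    return buf

out = run_chain(
    ["hello"],
    [lambda b: [s.upper() for s in b], lambda b: [s + "!" for s in b]],
)
```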
## Existing Effects

See `effects_plugins/`:
- noise.py, fade.py, glitch.py, firehose.py
- border.py, crop.py, tint.py, hud.py
@@ -1,103 +0,0 @@

---
name: mainline-presets
description: Creating pipeline presets in TOML format for Mainline
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers how to create pipeline presets in TOML format for Mainline's rendering pipeline.

## Key Concepts

### Preset Loading Order

Presets are loaded from multiple locations (later overrides earlier):
1. Built-in: `engine/presets.toml`
2. User config: `~/.config/mainline/presets.toml`
3. Local override: `./presets.toml`
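The override behavior can be sketched as a per-preset-name dict merge (an illustration of the precedence rule only; whether the real loader merges whole presets or individual keys is not stated in this skill):

```python
def merge_layers(*layers: dict) -> dict:
    # Later layers win: built-in, then user config, then local override.
    merged: dict = {}
    for layer in layers:
        merged.update(layer)
    return merged

builtin = {"classic": {"source": "headlines", "display": "terminal"}}
local_override = {"classic": {"source": "poetry", "display": "terminal"}}
presets = merge_layers(builtin, local_override)
```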
### PipelinePreset Dataclass

```python
@dataclass
class PipelinePreset:
    name: str
    description: str = ""
    source: str = "headlines"   # Data source
    display: str = "terminal"   # Display backend
    camera: str = "scroll"      # Camera mode
    effects: list[str] = field(default_factory=list)
    border: bool = False
```

### TOML Format

```toml
[presets.my-preset]
description = "My custom pipeline"
source = "headlines"
display = "terminal"
camera = "scroll"
effects = ["noise", "fade"]
border = true
```

## Creating a Preset

### Option 1: User Config

Create/edit `~/.config/mainline/presets.toml`:

```toml
[presets.my-cool-preset]
description = "Noise and glitch effects"
source = "headlines"
display = "terminal"
effects = ["noise", "glitch"]
```

### Option 2: Local Override

Create `./presets.toml` in the project root:

```toml
[presets.dev-inspect]
description = "Pipeline introspection for development"
source = "headlines"
display = "terminal"
effects = ["hud"]
```

### Option 3: Built-in

Edit `engine/presets.toml` (requires a PR to the repository).

## Available Sources

- `headlines` - RSS news feeds
- `poetry` - Literature mode
- `pipeline-inspect` - Live DAG visualization

## Available Displays

- `terminal` - ANSI terminal
- `websocket` - Web browser
- `sixel` - Sixel graphics
- `null` - Headless

## Available Effects

See `effects_plugins/`:
- noise, fade, glitch, firehose
- border, crop, tint, hud

## Validation Functions

Use these from `engine/pipeline/presets.py`:
- `validate_preset()` - Validate preset structure
- `validate_signal_path()` - Detect circular dependencies
- `generate_preset_toml()` - Generate a skeleton preset
@@ -1,136 +0,0 @@

---
name: mainline-sensors
description: Sensor framework for real-time input in Mainline
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's sensor framework - how to use, create, and integrate sensors for real-time input.

## Key Concepts

### Sensor Base Class (engine/sensors/__init__.py)

```python
class Sensor(ABC):
    name: str
    unit: str = ""

    @property
    def available(self) -> bool:
        """Whether sensor is currently available"""
        return True

    @abstractmethod
    def read(self) -> SensorValue | None:
        """Read current sensor value"""
        ...

    def start(self) -> None:
        """Initialize sensor (optional)"""
        pass

    def stop(self) -> None:
        """Clean up sensor (optional)"""
        pass
```

### SensorValue Dataclass

```python
@dataclass
class SensorValue:
    sensor_name: str
    value: float
    timestamp: float
    unit: str = ""
```

### SensorRegistry

Discovers and manages sensors globally:

```python
from engine.sensors import SensorRegistry

registry = SensorRegistry()
sensor = registry.get("mic")
```

### SensorStage

Pipeline adapter that provides sensor values to effects:

```python
from engine.pipeline.adapters import SensorStage

stage = SensorStage(sensor_name="mic")
```

## Built-in Sensors

| Sensor | File | Description |
|--------|------|-------------|
| MicSensor | sensors/mic.py | Microphone input (RMS dB) |
| OscillatorSensor | sensors/oscillator.py | Test sine wave generator |
| PipelineMetricsSensor | sensors/pipeline_metrics.py | FPS, frame time, etc. |

## Param Bindings

Effects declare sensor-to-param mappings:

```python
class GlitchEffect(EffectPlugin):
    param_bindings = {
        "intensity": {"sensor": "mic", "transform": "linear"},
    }
```

### Transform Functions

- `linear` - Direct mapping to param range
- `exponential` - Exponential scaling
- `threshold` - Binary on/off
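The three transforms reduce to simple scalar functions; a sketch (signatures here are illustrative, the real implementations in the codebase may take different parameters):

```python
def linear(value: float, lo: float = 0.0, hi: float = 1.0) -> float:
    # Map a normalized 0..1 reading directly onto [lo, hi].
    return lo + (hi - lo) * value

def exponential(value: float, exponent: float = 2.0) -> float:
    # De-emphasize quiet readings, emphasize loud ones.
    return value ** exponent

def threshold(value: float, cutoff: float = 0.5) -> float:
    # Binary on/off around a cutoff.
    return 1.0 if value >= cutoff else 0.0
```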
## Adding a New Sensor

1. Create `engine/sensors/my_sensor.py`
2. Inherit from the `Sensor` ABC
3. Implement required methods
4. Register in `SensorRegistry`

Example:

```python
import time

class MySensor(Sensor):
    name = "my-sensor"
    unit = "units"

    def read(self) -> SensorValue | None:
        return SensorValue(
            sensor_name=self.name,
            value=self._read_hardware(),
            timestamp=time.time(),
            unit=self.unit,
        )
```

## Using Sensors in Effects

Access sensor values via EffectContext:

```python
def process(self, buf, ctx):
    mic_level = ctx.get_sensor_value("mic")
    if mic_level and mic_level > 0.5:
        # Apply intense effect
        ...
```

Or via param_bindings (automatic):

```python
# If intensity is bound to "mic", it's automatically
# available in self.config.intensity
```
@@ -1,87 +0,0 @@

---
name: mainline-sources
description: Adding new RSS feeds and data sources to Mainline
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers how to add new data sources (RSS feeds, poetry) to Mainline.

## Key Concepts

### Feeds Dictionary (engine/sources.py)

All feeds are defined in a simple dictionary:

```python
FEEDS = {
    "Feed Name": "https://example.com/feed.xml",
    # Category comments help organize:
    # Science & Technology
    # Economics & Business
    # World & Politics
    # Culture & Ideas
}
```

### Poetry Sources

Project Gutenberg URLs for public domain literature:

```python
POETRY_SOURCES = {
    "Author Name": "https://www.gutenberg.org/cache/epub/1234/pg1234.txt",
}
```

### Language & Script Mapping

sources.py also contains language/script detection mappings used for auto-translation and font selection.

## Adding a New RSS Feed

1. Edit `engine/sources.py`
2. Add an entry to the `FEEDS` dict under the appropriate category:
   ```python
   "My Feed": "https://example.com/feed.xml",
   ```
3. The feed will be automatically discovered on the next run

### Feed Requirements

- Must be valid RSS or Atom XML
- Should have `<title>` elements for items
- Must be HTTP/HTTPS accessible
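A quick way to check a candidate feed against these requirements locally, using only the standard library (this is an illustration, not Mainline's own fetch code; namespaced Atom feeds would need extra handling):

```python
import xml.etree.ElementTree as ET

def extract_titles(feed_xml: str) -> list[str]:
    # Plain RSS: every <title> under <channel> and each <item>.
    root = ET.fromstring(feed_xml)
    return [t.text for t in root.iter("title") if t.text]

sample = (
    "<rss><channel><title>Demo Feed</title>"
    "<item><title>First headline</title></item>"
    "</channel></rss>"
)
titles = extract_titles(sample)  # feed title plus one item title
```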
## Adding Poetry Sources

1. Edit `engine/sources.py`
2. Add to the `POETRY_SOURCES` dict:
   ```python
   "Author": "https://www.gutenberg.org/cache/epub/XXXX/pgXXXX.txt",
   ```

### Poetry Requirements

- Plain text (UTF-8)
- Project Gutenberg format preferred
- No DRM-protected sources

## Data Flow

Feeds are fetched via `engine/fetch.py`:
- `fetch_feed(url)` - Fetches and parses RSS/Atom
- Results cached for fast restarts
- Filtered via `engine/filter.py` for content cleaning

## Categories

Organize new feeds by category using comments:
- Science & Technology
- Economics & Business
- World & Politics
- Culture & Ideas
AGENTS.md (407)
````diff
@@ -4,222 +4,88 @@
 
 This project uses:
 - **mise** (mise.jdx.dev) - tool version manager and task runner
+- **hk** (hk.jdx.dev) - git hook manager
 - **uv** - fast Python package installer
-- **ruff** - linter and formatter (line-length 88, target Python 3.10)
-- **pytest** - test runner with strict marker enforcement
+- **ruff** - linter and formatter
+- **pytest** - test runner
 
 ### Setup
 
 ```bash
-mise run install  # Install dependencies
-# Or: uv sync --all-extras  # includes mic, websocket, sixel support
+# Install dependencies
+mise run install
+
+# Or equivalently:
+uv sync
 ```
 
 ### Available Commands
 
 ```bash
-# Testing
-mise run test  # Run all tests
-mise run test-cov  # Run tests with coverage report
-pytest tests/test_foo.py::TestClass::test_method  # Run single test
-
-# Linting & Formatting
-mise run lint  # Run ruff linter
-mise run lint-fix  # Run ruff with auto-fix
-mise run format  # Run ruff formatter
-
-# CI
-mise run ci  # Full CI pipeline (topics-init + lint + test-cov)
+mise run test  # Run tests
+mise run test-v  # Run tests verbose
+mise run test-cov  # Run tests with coverage report
+mise run lint  # Run ruff linter
+mise run lint-fix  # Run ruff with auto-fix
+mise run format  # Run ruff formatter
+mise run ci  # Full CI pipeline (sync + test + coverage)
 ```
 
-### Running a Single Test
+## Git Hooks
+
+**At the start of every agent session**, verify hooks are installed:
 
 ```bash
-# Run a specific test function
-pytest tests/test_eventbus.py::TestEventBusInit::test_init_creates_empty_subscribers
-
-# Run all tests in a file
-pytest tests/test_eventbus.py
-
-# Run tests matching a pattern
-pytest -k "test_subscribe"
+ls -la .git/hooks/pre-commit
 ```
 
-### Git Hooks
-
-Install hooks at start of session:
+If hooks are not installed, install them with:
 
 ```bash
-ls -la .git/hooks/pre-commit  # Verify installed
-hk init --mise  # Install if missing
-mise run pre-commit  # Run manually
+hk init --mise
+mise run pre-commit
 ```
 
-## Code Style Guidelines
+The project uses hk configured in `hk.pkl`:
+- **pre-commit**: runs ruff-format and ruff (with auto-fix)
+- **pre-push**: runs ruff check
 
-### Imports (three sections, alphabetical within each)
-
-```python
-# 1. Standard library
-import os
-import threading
-from collections import defaultdict
-from collections.abc import Callable
-from dataclasses import dataclass, field
-from typing import Any
-
-# 2. Third-party
-from abc import ABC, abstractmethod
-
-# 3. Local project
-from engine.events import EventType
-```
-
-### Type Hints
-
-- Use type hints for all function signatures (parameters and return)
-- Use `|` for unions (Python 3.10+): `EventType | None`
-- Use `dict[K, V]`, `list[V]` (generic syntax): `dict[str, list[int]]`
-- Use `Callable[[ArgType], ReturnType]` for callbacks
-
-```python
-def subscribe(self, event_type: EventType, callback: Callable[[Any], None]) -> None:
-    ...
-
-def get_sensor_value(self, sensor_name: str) -> float | None:
-    return self._state.get(f"sensor.{sensor_name}")
-```
-
-### Naming Conventions
-
-- **Classes**: `PascalCase` (e.g., `EventBus`, `EffectPlugin`)
-- **Functions/methods**: `snake_case` (e.g., `get_event_bus`, `process_partial`)
-- **Constants**: `SCREAMING_SNAKE_CASE` (e.g., `CURSOR_OFF`)
-- **Private methods**: `_snake_case` prefix (e.g., `_initialize`)
-- **Type variables**: `PascalCase` (e.g., `T`, `EffectT`)
-
-### Dataclasses
-
-Use `@dataclass` for simple data containers:
-
-```python
-@dataclass
-class EffectContext:
-    terminal_width: int
-    terminal_height: int
-    scroll_cam: int
-    ticker_height: int = 0
-    _state: dict[str, Any] = field(default_factory=dict, repr=False)
-```
-
-### Abstract Base Classes
-
-Use ABC for interface enforcement:
-
-```python
-class EffectPlugin(ABC):
-    name: str
-    config: EffectConfig
-
-    @abstractmethod
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        ...
-
-    @abstractmethod
-    def configure(self, config: EffectConfig) -> None:
-        ...
-```
-
-### Error Handling
-
-- Catch specific exceptions, not bare `Exception`
-- Use `try/except` with fallbacks for optional features
-- Silent pass in event callbacks to prevent one handler from breaking others
-
-```python
-# Good: specific exception
-try:
-    term_size = os.get_terminal_size()
-except OSError:
-    term_width = 80
-
-# Good: silent pass in callbacks
-for callback in callbacks:
-    try:
-        callback(event)
-    except Exception:
-        pass
-```
-
-### Thread Safety
-
-Use locks for shared state:
-
-```python
-class EventBus:
-    def __init__(self):
-        self._lock = threading.Lock()
-
-    def publish(self, event_type: EventType, event: Any = None) -> None:
-        with self._lock:
-            callbacks = list(self._subscribers.get(event_type, []))
-```
-
-### Comments
-
-- **DO NOT ADD comments** unless explicitly required
-- Let code be self-documenting with good naming
-- Use docstrings only for public APIs or complex logic
-
-### Testing Patterns
-
-Follow pytest conventions:
-
-```python
-class TestEventBusSubscribe:
-    """Tests for EventBus.subscribe method."""
-
-    def test_subscribe_adds_callback(self):
-        """subscribe() adds a callback for an event type."""
-        bus = EventBus()
-        def callback(e):
-            return None
-        bus.subscribe(EventType.NTFY_MESSAGE, callback)
-        assert bus.subscriber_count(EventType.NTFY_MESSAGE) == 1
-```
-
-- Use classes to group related tests (`Test<ClassName>`, `Test<method_name>`)
-- Test docstrings follow `"<method>() <action>"` pattern
-- Use descriptive assertion messages via pytest behavior
-
 ## Workflow Rules
 
 ### Before Committing
 
-1. Run tests: `mise run test`
-2. Run linter: `mise run lint`
-3. Review changes: `git diff`
+1. **Always run the test suite** - never commit code that fails tests:
+   ```bash
+   mise run test
+   ```
+2. **Always run the linter**:
+   ```bash
+   mise run lint
+   ```
+3. **Fix any lint errors** before committing (or let the pre-commit hook handle it).
+4. **Review your changes** using `git diff` to understand what will be committed.
 
 ### On Failing Tests
 
-- **Out-of-date test**: Update test to match new expected behavior
-- **Correctly failing test**: Fix implementation, not the test
+When tests fail, **determine whether it's an out-of-date test or a correctly failing test**:
+
+- **Out-of-date test**: The test was written for old behavior that has legitimately changed. Update the test to match the new expected behavior.
+- **Correctly failing test**: The test correctly identifies a broken contract. Fix the implementation, not the test.
 
 **Never** modify a test to make it pass without understanding why it failed.
 
-## Architecture Overview
+### Code Review
 
-- **Pipeline**: source → render → effects → display
-- **EffectPlugin**: ABC with `process()` and `configure()` methods
-- **Display backends**: terminal, websocket, sixel, null (for testing)
-- **EventBus**: thread-safe pub/sub messaging
-- **Presets**: TOML format in `engine/presets.toml`
+Before committing significant changes:
+- Run `git diff` to review all changes
+- Ensure new code follows existing patterns in the codebase
+- Check that type hints are added for new functions
+- Verify that tests exist for new functionality
 
-Key files:
-- `engine/pipeline/core.py` - Stage base class
-- `engine/effects/types.py` - EffectPlugin ABC and dataclasses
-- `engine/display/backends/` - Display backend implementations
-- `engine/eventbus.py` - Thread-safe event system
-
-=======
 
 ## Testing
 
 Tests live in `tests/` and follow the pattern `test_*.py`.
````
@@ -236,178 +102,9 @@ mise run test-cov
|
|||||||
|
|
||||||
The project uses pytest with strict marker enforcement. Test configuration is in `pyproject.toml` under `[tool.pytest.ini_options]`.
|
The project uses pytest with strict marker enforcement. Test configuration is in `pyproject.toml` under `[tool.pytest.ini_options]`.
|
||||||
|
|
||||||
### Test Coverage Strategy

Current coverage: 56% (463 tests)

Key areas with lower coverage (acceptable for now):

- **app.py** (8%): Main entry point - integration-heavy, requires a terminal
- **scroll.py** (10%): Terminal-dependent rendering logic (unused)

Key areas with good coverage:

- **display/backends/null.py** (95%): Easy to test headlessly
- **display/backends/terminal.py** (96%): Uses mocking
- **display/backends/multi.py** (100%): Simple forwarding logic
- **effects/performance.py** (99%): Pure Python logic
- **eventbus.py** (96%): Simple event system
- **effects/controller.py** (95%): Effects command handling

Areas needing more tests:

- **websocket.py** (48%): Network I/O, hard to test in CI
- **ntfy.py** (50%): Network I/O, hard to test in CI
- **mic.py** (61%): Audio I/O, hard to test in CI

Note: Terminal-dependent modules (scroll, layers render) are harder to test in CI.

Performance regression tests are in `tests/test_benchmark.py` with `@pytest.mark.benchmark`.
## Architecture Notes

- **ntfy.py** - standalone notification poller with zero internal dependencies
- **sensors/** - sensor framework (MicSensor, OscillatorSensor) for real-time input
- **eventbus.py** - thread-safe event publishing for decoupled communication
- **effects/** - plugin architecture with performance monitoring
- The pipeline architecture: source → render → effects → display
#### Canvas & Camera

- **Canvas** (`engine/canvas.py`): 2D rendering surface with dirty region tracking
- **Camera** (`engine/camera.py`): Viewport controller for scrolling content

The Canvas tracks dirty regions automatically when content is written (via `put_region`, `put_text`, `fill`), enabling partial buffer updates for optimized effect processing.
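The dirty-region idea can be illustrated with a minimal sketch; the real Canvas API is richer than this, and the class shape below is an assumption, not the actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int


class Canvas:
    """Minimal sketch: write methods record the region they touched."""

    def __init__(self, w: int, h: int):
        self.w, self.h = w, h
        self.cells = [[" "] * w for _ in range(h)]
        self.dirty: list[Rect] = []

    def put_text(self, x: int, y: int, text: str) -> None:
        for i, ch in enumerate(text):
            if 0 <= x + i < self.w and 0 <= y < self.h:
                self.cells[y][x + i] = ch
        self.dirty.append(Rect(x, y, len(text), 1))

    def consume_dirty(self) -> list[Rect]:
        # Effects read (and clear) the changed regions instead of rescanning
        # the whole buffer each frame.
        regions, self.dirty = self.dirty, []
        return regions


canvas = Canvas(80, 24)
canvas.put_text(10, 5, "HEADLINE")
regions = canvas.consume_dirty()  # one Rect covering the written span
```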
### Pipeline Architecture

The new Stage-based pipeline architecture provides capability-based dependency resolution:

- **Stage** (`engine/pipeline/core.py`): Base class for pipeline stages
- **Pipeline** (`engine/pipeline/controller.py`): Executes stages with capability-based dependency resolution
- **StageRegistry** (`engine/pipeline/registry.py`): Discovers and registers stages
- **Stage Adapters** (`engine/pipeline/adapters.py`): Wraps existing components as stages
#### Capability-Based Dependencies

Stages declare capabilities (what they provide) and dependencies (what they need). The Pipeline resolves dependencies using prefix matching:

- `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
- This allows flexible composition without hardcoding specific stage names
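The prefix-matching rule can be sketched as follows; the function and stage names here are illustrative, not the Pipeline's actual internals:

```python
def find_providers(dependency: str, stages: dict[str, set[str]]) -> list[str]:
    """Return names of stages whose capabilities satisfy `dependency`.

    A capability satisfies a dependency if it equals the dependency or
    extends it with a dot-separated suffix, so "source" matches
    "source.headlines" but not "sourcery".
    """
    matched = []
    for name, caps in stages.items():
        if any(cap == dependency or cap.startswith(dependency + ".") for cap in caps):
            matched.append(name)
    return matched


stages = {
    "headlines": {"source.headlines"},
    "poetry": {"source.poetry"},
    "terminal": {"display.terminal"},
}
providers = find_providers("source", stages)  # both source stages match
```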
#### Sensor Framework

- **Sensor** (`engine/sensors/__init__.py`): Base class for real-time input sensors
- **SensorRegistry**: Discovers available sensors
- **SensorStage**: Pipeline adapter that provides sensor values to effects
- **MicSensor** (`engine/sensors/mic.py`): Self-contained microphone input
- **OscillatorSensor** (`engine/sensors/oscillator.py`): Test sensor for development
- **PipelineMetricsSensor** (`engine/sensors/pipeline_metrics.py`): Exposes pipeline metrics as sensor values

Sensors support param bindings to drive effect parameters in real-time.
#### Pipeline Introspection

- **PipelineIntrospectionSource** (`engine/data_sources/pipeline_introspection.py`): Renders live ASCII visualization of pipeline DAG with metrics
- **PipelineIntrospectionDemo** (`engine/pipeline/pipeline_introspection_demo.py`): 3-phase demo controller for effect animation

Preset: `pipeline-inspect` - Live pipeline introspection with DAG and performance metrics
#### Partial Update Support

Effect plugins can opt-in to partial buffer updates for performance optimization:

- Set `supports_partial_updates = True` on the effect class
- Implement `process_partial(buf, ctx, partial)` method
- The `PartialUpdate` dataclass indicates which regions changed
### Preset System

Presets use TOML format (no external dependencies):

- Built-in: `engine/presets.toml`
- User config: `~/.config/mainline/presets.toml`
- Local override: `./presets.toml`

- **Preset loader** (`engine/pipeline/preset_loader.py`): Loads and validates presets
- **PipelinePreset** (`engine/pipeline/presets.py`): Dataclass for preset configuration

Functions:

- `validate_preset()` - Validate preset structure
- `validate_signal_path()` - Detect circular dependencies
- `generate_preset_toml()` - Generate skeleton preset
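A hypothetical preset entry, showing only the general shape; the table and key names are illustrative, not the real schema (use `generate_preset_toml()` for an accurate skeleton):

```toml
# Illustrative only - not the actual preset schema.
[presets.ambient]
source = "source.headlines"
effects = ["noise", "glitch_bar"]
display = "terminal"
```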
### Display System

- **Display abstraction** (`engine/display/`): swap display backends via the Display protocol
  - `display/backends/terminal.py` - ANSI terminal output
  - `display/backends/websocket.py` - broadcasts to web clients via WebSocket
  - `display/backends/sixel.py` - renders to Sixel graphics (pure Python, no C dependency)
  - `display/backends/null.py` - headless display for testing
  - `display/backends/multi.py` - forwards to multiple displays simultaneously
  - `display/__init__.py` - DisplayRegistry for backend discovery

- **WebSocket display** (`engine/display/backends/websocket.py`): real-time frame broadcasting to web browsers
  - WebSocket server on port 8765
  - HTTP server on port 8766 (serves HTML client)
  - Client at `client/index.html` with ANSI color parsing and fullscreen support

- **Display modes** (`--display` flag):
  - `terminal` - Default ANSI terminal output
  - `websocket` - Web browser display (requires the `websockets` package)
  - `sixel` - Sixel graphics in supported terminals (iTerm2, mintty, etc.)
  - `both` - Terminal + WebSocket simultaneously
### Effect Plugin System

- **EffectPlugin ABC** (`engine/effects/types.py`): abstract base class for effects
- All effects must inherit from EffectPlugin and implement `process()` and `configure()`
- Runtime discovery via `effects_plugins/__init__.py` using `issubclass()` checks

- **EffectRegistry** (`engine/effects/registry.py`): manages registered effects
- **EffectChain** (`engine/effects/chain.py`): chains effects in pipeline order
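A minimal plugin following that contract might look like this. The ABC below is a stand-in so the sketch is self-contained; the real `process()`/`configure()` signatures live in `engine/effects/types.py` and may differ:

```python
from abc import ABC, abstractmethod


class EffectPlugin(ABC):
    """Stand-in for the real ABC in engine/effects/types.py."""

    @abstractmethod
    def process(self, buf, ctx): ...

    @abstractmethod
    def configure(self, **options): ...


class UppercaseEffect(EffectPlugin):
    def __init__(self):
        self.enabled = True

    def configure(self, **options):
        self.enabled = options.get("enabled", True)

    def process(self, buf, ctx):
        return [row.upper() for row in buf] if self.enabled else buf


# Runtime discovery can then pick out plugin classes with issubclass() checks:
candidates = [UppercaseEffect, str]
discovered = [cls for cls in candidates
              if issubclass(cls, EffectPlugin) and cls is not EffectPlugin]
```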
### Command & Control

- C&C uses separate ntfy topics for commands and responses
  - `NTFY_CC_CMD_TOPIC` - commands from cmdline.py
  - `NTFY_CC_RESP_TOPIC` - responses back to cmdline.py
- Effects controller handles `/effects` commands (list, on/off, intensity, reorder, stats)
### Pipeline Documentation

The rendering pipeline is documented in `docs/PIPELINE.md` using Mermaid diagrams.

**IMPORTANT**: When making significant architectural changes to the rendering pipeline (new layers, effects, display backends), update `docs/PIPELINE.md` to reflect the changes:

1. Edit `docs/PIPELINE.md` with the new architecture
2. If adding new SVG diagrams, render them manually using an external tool (e.g., Mermaid Live Editor)
3. Commit both the markdown and any new diagram files
## Skills Library

A skills library MCP server (`skills`) is available for capturing and tracking learned knowledge. Skills are stored in `~/.skills/`.

### Workflow

**Before starting work:**

1. Run `skills_list_skills` to see available skills
2. Use `skills_peek_skill({name: "skill-name"})` to preview relevant skills
3. Use `skills_skill_slice({name: "skill-name", query: "your question"})` to get relevant sections

**While working:**

- If a skill was wrong or incomplete: `skills_update_skill` → `skills_record_assessment` → `skills_report_outcome({quality: 1})`
- If a skill worked correctly: `skills_report_outcome({quality: 4})` (normal) or `quality: 5` (perfect)

**End of session:**

- Run `skills_reflect_on_session({context_summary: "what you did"})` to identify new skills to capture
- Use `skills_create_skill` to add new skills
- Use `skills_record_assessment` to score them

### Useful Tools

- `skills_review_stale_skills()` - Skills due for review (negative days_until_due)
- `skills_skills_report()` - Overview of entire collection
- `skills_validate_skill({name: "skill-name"})` - Load skill for review with sources
### Agent Skills

This project also has Agent Skills (SKILL.md files) in `.opencode/skills/`. Use the `skill` tool to load them:

- `skill({name: "mainline-architecture"})` - Pipeline stages, capability resolution
- `skill({name: "mainline-effects"})` - How to add new effect plugins
- `skill({name: "mainline-display"})` - Display backend implementation
- `skill({name: "mainline-sources"})` - Adding new RSS feeds
- `skill({name: "mainline-presets"})` - Creating pipeline presets
- `skill({name: "mainline-sensors"})` - Sensor framework usage
# Refactor mainline.py into modular package

## Problem

`mainline.py` is a single 1085-line file with ~10 interleaved concerns. This prevents:

* Reusing the ntfy doorbell interrupt in other visualizers
* Importing the render pipeline from `serve.py` (future ESP32 HTTP server)
* Testing any concern in isolation
* Porting individual layers to Rust independently
## Target structure

```
mainline.py            # thin entrypoint: venv bootstrap → engine.app.main()
engine/
  ...
  scroll.py            # stream() frame loop + message rendering
  app.py               # main(), TITLE art, boot sequence, signal handler
```

The package is named `engine/` to avoid a naming conflict with the `mainline.py` entrypoint.
## Module dependency graph

```
config  ← (nothing)
sources ← (nothing)
...
mic     ← (nothing — sounddevice only)
scroll  ← config, terminal, render, effects, ntfy, mic
app     ← everything above
```

Critical property: **ntfy.py and mic.py have zero internal dependencies**, making ntfy reusable by any visualizer.
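The zero-dependency property can be checked mechanically. A hypothetical guard (not an existing test file) that imports a module fresh and asserts no sibling package modules were pulled in; the stdlib demo call at the bottom stands in for `assert_standalone("engine.ntfy", "engine.")`:

```python
import importlib
import sys


def assert_standalone(module_name: str, package_prefix: str) -> None:
    """Import module_name fresh; fail if it drags in other package_prefix modules."""
    # Drop cached copies so the import below re-executes the module.
    for name in list(sys.modules):
        if name == module_name or name.startswith(package_prefix):
            del sys.modules[name]
    importlib.import_module(module_name)
    pulled = [n for n in sys.modules
              if n.startswith(package_prefix) and n != module_name]
    assert not pulled, f"{module_name} imported internal modules: {pulled}"


# In a real test this would be assert_standalone("engine.ntfy", "engine.");
# math has no submodules, so it passes the same check.
assert_standalone("math", "math.")
```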
## Module details

### mainline.py (entrypoint — slimmed down)

Keeps only the venv bootstrap (lines 10-38), which must run before any third-party imports. After bootstrap, delegates to `engine.app.main()`.
### engine/config.py

From current mainline.py:

* `HEADLINE_LIMIT`, `FEED_TIMEOUT`, `MIC_THRESHOLD_DB` (lines 55-57)
* `MODE`, `FIREHOSE` CLI flag parsing (lines 58-59)
* `NTFY_TOPIC`, `NTFY_POLL_INTERVAL`, `MESSAGE_DISPLAY_SECS` (lines 62-64)
* `_FONT_PATH`, `_FONT_SZ`, `_RENDER_H` (lines 147-150)
* `_SCROLL_DUR`, `_FRAME_DT`, `FIREHOSE_H` (lines 505-507)
* `GLITCH`, `KATA` glyph tables (lines 143-144)
### engine/sources.py

Pure data, no logic:

* `FEEDS` dict (lines 102-140)
* `POETRY_SOURCES` dict (lines 67-80)
* `SOURCE_LANGS` dict (lines 258-266)
* `_LOCATION_LANGS` dict (lines 269-289)
* `_SCRIPT_FONTS` dict (lines 153-165)
* `_NO_UPPER` set (line 167)
### engine/terminal.py

ANSI primitives and terminal I/O:

* All ANSI constants: `RST`, `BOLD`, `DIM`, `G_HI`, `G_MID`, `G_LO`, `G_DIM`, `W_COOL`, `W_DIM`, `W_GHOST`, `C_DIM`, `CLR`, `CURSOR_OFF`, `CURSOR_ON` (lines 83-99)
* `tw()`, `th()` (lines 223-234)
* `type_out()`, `slow_print()`, `boot_ln()` (lines 355-386)
### engine/filter.py

* `_Strip` HTML parser class (lines 205-214)
* `strip_tags()` (lines 217-220)
* `_SKIP_RE` compiled regex (lines 322-346)
* `_skip()` predicate (lines 349-351)
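The `_Strip`/`strip_tags()` pair is a standard stdlib pattern: subclass `HTMLParser`, keep only the text nodes. A sketch of that pattern, under the assumption that the real class works the same way:

```python
from html.parser import HTMLParser


class _Strip(HTMLParser):
    """Collects text content, discarding tags (sketch of the filter parser)."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts: list[str] = []

    def handle_data(self, data: str) -> None:
        self.parts.append(data)


def strip_tags(html: str) -> str:
    p = _Strip()
    p.feed(html)
    return "".join(p.parts)


clean = strip_tags("<p>Breaking: <b>news</b> &amp; more</p>")  # "Breaking: news & more"
```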
### engine/translate.py

* `_TRANSLATE_CACHE` (line 291)
* `_detect_location_language()` (lines 294-300) — imports `_LOCATION_LANGS` from sources
* `_translate_headline()` (lines 303-319)
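The cache-then-translate pattern can be sketched as below. The real function calls a translation backend; here the backend is injected as a parameter so the sketch stays self-contained, and the signature is an assumption:

```python
_TRANSLATE_CACHE: dict[tuple[str, str], str] = {}


def translate_headline(text: str, lang: str, translate) -> str:
    """Return a cached translation, calling `translate` at most once per (text, lang)."""
    key = (text, lang)
    if key not in _TRANSLATE_CACHE:
        _TRANSLATE_CACHE[key] = translate(text, lang)
    return _TRANSLATE_CACHE[key]
```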
### engine/render.py

The OTF→terminal pipeline. This is exactly what `serve.py` will import to produce 1-bit bitmaps for the ESP32.

* `_GRAD_COLS` gradient table (lines 169-182)
* `_font()`, `_font_for_lang()` with lazy-load + cache (lines 185-202)
* `_render_line()` — OTF text → half-block terminal rows (lines 567-605)
* `_big_wrap()` — word-wrap + render (lines 608-636)
* `_lr_gradient()` — apply left→right color gradient (lines 639-656)
* `_make_block()` — composite: translate → render → colorize a headline (lines 718-756). Imports from translate, sources.
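The half-block technique packs two bitmap rows into one terminal row by mapping each vertical pixel pair to one of four glyphs. A sketch of the technique only, not the real `_render_line()`:

```python
def bitmap_to_halfblocks(bitmap: list[list[int]]) -> list[str]:
    """Pack a 1-bit bitmap two pixels per cell using half-block glyphs.

    Each pair of rows maps to one text row: upper pixel set -> '▀',
    lower set -> '▄', both -> '█', neither -> ' '.
    """
    if len(bitmap) % 2:  # pad odd heights with an empty row
        bitmap = bitmap + [[0] * len(bitmap[0])]
    glyphs = {(0, 0): " ", (1, 0): "▀", (0, 1): "▄", (1, 1): "█"}
    rows = []
    for top, bottom in zip(bitmap[::2], bitmap[1::2]):
        rows.append("".join(glyphs[(a, b)] for a, b in zip(top, bottom)))
    return rows


rows = bitmap_to_halfblocks([[1, 1, 0], [1, 0, 0]])  # ['█▀ ']
```

This is why `_RENDER_H` pixels of font output occupy only half that many terminal rows.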
### engine/effects.py

Visual effects applied during the frame loop:

* `noise()` (lines 237-245)
* `glitch_bar()` (lines 248-252)
* `_fade_line()` — probabilistic character dissolve (lines 659-680)
* `_vis_trunc()` — ANSI-aware width truncation (lines 683-701)
* `_firehose_line()` (lines 759-801) — imports config.MODE, sources.FEEDS/POETRY_SOURCES
* `_next_headline()` — pool management (lines 704-715)
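ANSI-aware truncation means counting only printable characters toward the width while copying escape sequences through verbatim. A sketch of that idea (single-width characters assumed; the real `_vis_trunc()` likely also re-appends a reset code):

```python
import re

_ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")


def vis_trunc(s: str, width: int) -> str:
    """Truncate to `width` visible columns, keeping ANSI SGR sequences intact."""
    out, visible, i = [], 0, 0
    while i < len(s) and visible < width:
        m = _ANSI_RE.match(s, i)
        if m:  # escape sequence: copy through, costs no columns
            out.append(m.group())
            i = m.end()
        else:
            out.append(s[i])
            visible += 1
            i += 1
    return "".join(out)


short = vis_trunc("\x1b[31mred text\x1b[0m", 3)  # color code kept, 3 visible chars
```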
### engine/fetch.py

* `fetch_feed()` (lines 390-396)
* `fetch_all()` (lines 399-426) — imports filter._skip, filter.strip_tags, terminal.boot_ln
* `_fetch_gutenberg()` (lines 429-456)
* `fetch_poetry()` (lines 459-472)
* `_cache_path()`, `_load_cache()`, `_save_cache()` (lines 476-501)
### engine/ntfy.py — standalone, reusable

Refactored from the current globals + thread (lines 531-564) and the message rendering section of `stream()` (lines 845-909) into a class:

```python
class NtfyPoller:
    def __init__(self, topic_url, poll_interval=15, display_secs=30):
        ...

    def dismiss(self):
        """Manually dismiss current message."""
```

Dependencies: `urllib.request`, `json`, `threading`, `time` — all stdlib. No internal imports.

Other visualizers use it like:

```python
from engine.ntfy import NtfyPoller

poller = NtfyPoller("https://ntfy.sh/my_topic/json?since=20s&poll=1")
...
if msg:
    title, body, ts = msg
    render_my_message(title, body)  # visualizer-specific
```
### engine/mic.py — standalone

Refactored from the current globals (lines 508-528) into a class:

```python
class MicMonitor:
    def __init__(self, threshold_db=50):
        ...

    def excess(self) -> float:
        """dB above threshold (clamped to 0)."""
```

Dependencies: `sounddevice`, `numpy` (both optional — graceful fallback).
### engine/scroll.py

The `stream()` function (lines 804-990). Receives its dependencies via arguments or imports:

* `stream(items, ntfy_poller, mic_monitor, config)` or similar
* Message rendering (lines 855-909) stays here since it's terminal-display-specific — a different visualizer would render messages differently
### engine/app.py

The orchestrator:

* `TITLE` ASCII art (lines 994-1001)
* `main()` (lines 1004-1084): CLI handling, signal setup, boot animation, fetch, wire up ntfy/mic/scroll
## Execution order

### Step 1: Create engine/ package skeleton

Create `engine/__init__.py` and all empty module files.

### Step 2: Extract pure data modules (zero-dep)

Move constants and data dicts into `config.py`, `sources.py`. These have no logic dependencies.

### Step 3: Extract terminal.py

Move ANSI codes and terminal I/O helpers. No internal deps.

### Step 4: Extract filter.py and translate.py

Both are small, self-contained. translate imports from sources.

### Step 5: Extract render.py

Font loading + the OTF→half-block pipeline. Imports from config, terminal, sources. This is the module `serve.py` will later import.

### Step 6: Extract effects.py

Visual effects. Imports from config, terminal, sources.

### Step 7: Extract fetch.py

Feed/Gutenberg fetching + caching. Imports from config, sources, filter, terminal.

### Step 8: Extract ntfy.py and mic.py

Refactor globals + threads into classes. Zero internal deps.

### Step 9: Extract scroll.py

The frame loop. Last to extract because it depends on everything above.

### Step 10: Extract app.py

The `main()` function, boot sequence, signal handler. Wire up all modules.

### Step 11: Slim down mainline.py

Keep only venv bootstrap + `from engine.app import main; main()`.

### Step 12: Verify

Run `python3 mainline.py`, `python3 mainline.py --poetry`, and `python3 mainline.py --firehose` to confirm identical behavior. No behavioral changes in this refactor.
## What this enables

* **serve.py** (future): `from engine.render import _render_line, _big_wrap` + `from engine.fetch import fetch_all` — imports the pipeline directly
* **Other visualizers**: `from engine.ntfy import NtfyPoller` — doorbell feature with no coupling to mainline's scroll engine
* **Rust port**: Clear boundaries for what to port first (ntfy client, render pipeline) vs what stays in Python (fetching, caching — the server side)
@@ -1,366 +0,0 @@
|
|||||||
<!DOCTYPE html>
|
|
||||||
<html lang="en">
|
|
||||||
<head>
|
|
||||||
<meta charset="UTF-8">
|
|
||||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
|
||||||
<title>Mainline Terminal</title>
|
|
||||||
<style>
|
|
||||||
* {
|
|
||||||
margin: 0;
|
|
||||||
padding: 0;
|
|
||||||
box-sizing: border-box;
|
|
||||||
}
|
|
||||||
body {
|
|
||||||
background: #0a0a0a;
|
|
||||||
color: #ccc;
|
|
||||||
font-family: 'Fira Code', 'Cascadia Code', 'Consolas', monospace;
|
|
||||||
display: flex;
|
|
||||||
flex-direction: column;
|
|
||||||
align-items: center;
|
|
||||||
justify-content: center;
|
|
||||||
min-height: 100vh;
|
|
||||||
padding: 20px;
|
|
||||||
}
|
|
||||||
body.fullscreen {
|
|
||||||
padding: 0;
|
|
||||||
}
|
|
||||||
body.fullscreen #controls {
|
|
||||||
display: none;
|
|
||||||
}
|
|
||||||
#container {
|
|
||||||
position: relative;
|
|
||||||
}
|
|
||||||
canvas {
|
|
||||||
background: #000;
|
|
||||||
border: 1px solid #333;
|
|
||||||
image-rendering: pixelated;
|
|
||||||
image-rendering: crisp-edges;
|
|
||||||
}
|
|
||||||
body.fullscreen canvas {
|
|
||||||
border: none;
|
|
||||||
width: 100vw;
|
|
||||||
height: 100vh;
|
|
||||||
max-width: 100vw;
|
|
||||||
max-height: 100vh;
|
|
||||||
}
|
|
||||||
#controls {
|
|
||||||
display: flex;
|
|
||||||
gap: 10px;
|
|
||||||
margin-top: 10px;
|
|
||||||
align-items: center;
|
|
||||||
}
|
|
||||||
#controls button {
|
|
||||||
background: #333;
|
|
||||||
color: #ccc;
|
|
||||||
border: 1px solid #555;
|
|
||||||
padding: 5px 12px;
|
|
||||||
cursor: pointer;
|
|
||||||
font-family: inherit;
|
|
||||||
font-size: 12px;
|
|
||||||
}
|
|
||||||
#controls button:hover {
|
|
||||||
background: #444;
|
|
||||||
}
|
|
||||||
#controls input {
|
|
||||||
width: 60px;
|
|
||||||
background: #222;
|
|
||||||
color: #ccc;
|
|
||||||
border: 1px solid #444;
|
|
||||||
padding: 4px 8px;
|
|
||||||
font-family: inherit;
|
|
||||||
text-align: center;
|
|
||||||
}
|
|
||||||
#status {
|
|
||||||
margin-top: 10px;
|
|
||||||
font-size: 12px;
|
|
||||||
color: #666;
|
|
||||||
}
|
|
||||||
#status.connected {
|
|
||||||
color: #4f4;
|
|
||||||
}
|
|
||||||
#status.disconnected {
|
|
||||||
color: #f44;
|
|
||||||
}
|
|
||||||
</style>
|
|
||||||
</head>
|
|
||||||
<body>
|
|
||||||
<div id="container">
|
|
||||||
<canvas id="terminal"></canvas>
|
|
||||||
</div>
|
|
||||||
<div id="controls">
|
|
||||||
<label>Cols: <input type="number" id="cols" value="80" min="20" max="200"></label>
|
|
||||||
<label>Rows: <input type="number" id="rows" value="24" min="10" max="60"></label>
|
|
||||||
<button id="apply">Apply</button>
|
|
||||||
<button id="fullscreen">Fullscreen</button>
|
|
||||||
</div>
|
|
||||||
<div id="status" class="disconnected">Connecting...</div>
|
|
||||||
|
|
||||||
<script>
|
|
||||||
const canvas = document.getElementById('terminal');
|
|
||||||
const ctx = canvas.getContext('2d');
|
|
||||||
const status = document.getElementById('status');
|
|
||||||
const colsInput = document.getElementById('cols');
|
|
||||||
const rowsInput = document.getElementById('rows');
|
|
||||||
const applyBtn = document.getElementById('apply');
|
|
||||||
const fullscreenBtn = document.getElementById('fullscreen');
|
|
||||||
|
|
||||||
const CHAR_WIDTH = 9;
|
|
||||||
const CHAR_HEIGHT = 16;
|
|
||||||
|
|
||||||
const ANSI_COLORS = {
|
|
||||||
0: '#000000', 1: '#cd3131', 2: '#0dbc79', 3: '#e5e510',
|
|
||||||
4: '#2472c8', 5: '#bc3fbc', 6: '#11a8cd', 7: '#e5e5e5',
|
|
||||||
8: '#666666', 9: '#f14c4c', 10: '#23d18b', 11: '#f5f543',
|
|
||||||
12: '#3b8eea', 13: '#d670d6', 14: '#29b8db', 15: '#ffffff',
|
|
||||||
};
|
|
||||||
|
|
||||||
let cols = 80;
|
|
||||||
let rows = 24;
|
|
||||||
let ws = null;
|
|
||||||
|
|
||||||
function resizeCanvas() {
|
|
||||||
canvas.width = cols * CHAR_WIDTH;
|
|
||||||
canvas.height = rows * CHAR_HEIGHT;
|
|
||||||
}
|
|
||||||
|
|
||||||
    // Map a 256-color palette index to a hex color.
    function ansi256ToHex(n) {
        if (n < 16) return ANSI_COLORS[n] || '#cccccc';
        if (n < 232) {
            // 6x6x6 color cube (16-231)
            const c = n - 16;
            const r = Math.floor(c / 36) * 51;
            const g = Math.floor((c % 36) / 6) * 51;
            const b = (c % 6) * 51;
            return `#${r.toString(16).padStart(2, '0')}${g.toString(16).padStart(2, '0')}${b.toString(16).padStart(2, '0')}`;
        }
        // Grayscale ramp (232-255)
        const gray = (n - 232) * 10 + 8;
        const h = gray.toString(16).padStart(2, '0');
        return `#${h}${h}${h}`;
    }

    function parseAnsi(text) {
        if (!text) return [];

        const tokens = [];
        let currentText = '';
        let fg = '#cccccc';
        let bg = '#000000';
        let bold = false;
        let i = 0;
        let inEscape = false;
        let escapeCode = '';

        while (i < text.length) {
            const char = text[i];

            if (inEscape) {
                if ((char >= '0' && char <= '9') || char === ';') {
                    escapeCode += char;
                } else if (char === 'm') {
                    // SGR sequence complete: apply each parameter.
                    const codes = escapeCode.split(';');

                    for (let j = 0; j < codes.length; j++) {
                        const num = parseInt(codes[j], 10) || 0;

                        if (num === 0) {
                            fg = '#cccccc';
                            bg = '#000000';
                            bold = false;
                        } else if (num === 1) {
                            bold = true;
                        } else if (num === 22) {
                            bold = false;
                        } else if (num === 39) {
                            fg = '#cccccc';
                        } else if (num === 49) {
                            bg = '#000000';
                        } else if (num >= 30 && num <= 37) {
                            fg = ANSI_COLORS[num - 30 + (bold ? 8 : 0)] || '#cccccc';
                        } else if (num >= 40 && num <= 47) {
                            bg = ANSI_COLORS[num - 40] || '#000000';
                        } else if (num >= 90 && num <= 97) {
                            fg = ANSI_COLORS[num - 90 + 8] || '#cccccc';
                        } else if (num >= 100 && num <= 107) {
                            bg = ANSI_COLORS[num - 100 + 8] || '#000000';
                        } else if (num === 38 && codes[j + 1] === '5') {
                            // Indexed 256-color foreground: ESC[38;5;<n>m
                            fg = ansi256ToHex(parseInt(codes[j + 2], 10) || 0);
                            j += 2;
                        } else if (num === 48 && codes[j + 1] === '5') {
                            // Indexed 256-color background: ESC[48;5;<n>m
                            bg = ansi256ToHex(parseInt(codes[j + 2], 10) || 0);
                            j += 2;
                        }
                    }

                    inEscape = false;
                    escapeCode = '';
                } else {
                    // Non-SGR terminator (cursor movement, erase, ...): discard the sequence
                    // instead of staying in escape mode forever.
                    inEscape = false;
                    escapeCode = '';
                }
            } else if (char === '\x1b' && text[i + 1] === '[') {
                // Flush text accumulated so far with the attributes in effect.
                if (currentText) {
                    tokens.push({ text: currentText, fg, bg, bold });
                    currentText = '';
                }
                inEscape = true;
                escapeCode = '';
                i++;  // skip the '['
            } else {
                currentText += char;
            }
            i++;
        }

        if (currentText) {
            tokens.push({ text: currentText, fg, bg, bold });
        }

        return tokens;
    }
    function renderLine(text, x, y, lineHeight) {
        const tokens = parseAnsi(text);
        let xOffset = x;

        for (const token of tokens) {
            if (token.text) {
                ctx.font = token.bold ? 'bold 16px monospace' : '16px monospace';

                const metrics = ctx.measureText(token.text);

                if (token.bg !== '#000000') {
                    ctx.fillStyle = token.bg;
                    ctx.fillRect(xOffset, y - 2, metrics.width + 1, lineHeight);
                }

                ctx.fillStyle = token.fg;
                ctx.fillText(token.text, xOffset, y);
                xOffset += metrics.width;
            }
        }
    }
    function connect() {
        const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
        const wsUrl = `${protocol}//${window.location.hostname}:8765`;

        ws = new WebSocket(wsUrl);

        ws.onopen = () => {
            status.textContent = 'Connected';
            status.className = 'connected';
            sendSize();
        };

        ws.onclose = () => {
            status.textContent = 'Disconnected - Reconnecting...';
            status.className = 'disconnected';
            setTimeout(connect, 1000);
        };

        ws.onerror = () => {
            status.textContent = 'Connection error';
            status.className = 'disconnected';
        };

        ws.onmessage = (event) => {
            try {
                const data = JSON.parse(event.data);

                if (data.type === 'frame') {
                    cols = data.width || 80;
                    rows = data.height || 24;
                    colsInput.value = cols;
                    rowsInput.value = rows;
                    resizeCanvas();
                    render(data.lines || []);
                } else if (data.type === 'clear') {
                    ctx.fillStyle = '#000';
                    ctx.fillRect(0, 0, canvas.width, canvas.height);
                }
            } catch (e) {
                console.error('Failed to parse message:', e);
            }
        };
    }
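The `onmessage` handler above implies a small wire format: JSON objects with a `type` field (`frame` or `clear`), plus `width`, `height`, and `lines` for frames. A minimal Python sketch of the producing side, with hypothetical helper names (`make_frame`, `make_clear`) rather than the actual server code:

```python
import json


def make_frame(lines, width=80, height=24):
    """Build one 'frame' message in the shape the browser client consumes."""
    return json.dumps({
        "type": "frame",
        "width": width,
        "height": height,
        "lines": lines,  # strings, may contain ANSI SGR escapes
    })


def make_clear():
    """Build a 'clear' message that blanks the canvas."""
    return json.dumps({"type": "clear"})


msg = make_frame(["hello", "\x1b[32mworld\x1b[0m"], width=40, height=2)
decoded = json.loads(msg)
```

A real display backend would send these strings over the `:8765` WebSocket the client connects to.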
    function sendSize() {
        if (ws && ws.readyState === WebSocket.OPEN) {
            ws.send(JSON.stringify({
                type: 'resize',
                width: parseInt(colsInput.value, 10),
                height: parseInt(rowsInput.value, 10)
            }));
        }
    }
    function render(lines) {
        ctx.fillStyle = '#000';
        ctx.fillRect(0, 0, canvas.width, canvas.height);

        ctx.font = '16px monospace';
        ctx.textBaseline = 'top';

        const lineHeight = CHAR_HEIGHT;
        const maxLines = Math.min(lines.length, rows);

        for (let i = 0; i < maxLines; i++) {
            renderLine(lines[i] || '', 0, i * lineHeight, lineHeight);
        }
    }
    function calculateViewportSize() {
        const isFullscreen = document.fullscreenElement !== null;
        const padding = isFullscreen ? 0 : 40;
        const controlsHeight = isFullscreen ? 0 : 60;
        const availableWidth = window.innerWidth - padding;
        const availableHeight = window.innerHeight - controlsHeight;
        cols = Math.max(20, Math.floor(availableWidth / CHAR_WIDTH));
        rows = Math.max(10, Math.floor(availableHeight / CHAR_HEIGHT));
        colsInput.value = cols;
        rowsInput.value = rows;
        resizeCanvas();
        console.log('Fullscreen:', isFullscreen, 'Size:', cols, 'x', rows);
        sendSize();
    }
    applyBtn.addEventListener('click', () => {
        cols = parseInt(colsInput.value, 10);
        rows = parseInt(rowsInput.value, 10);
        resizeCanvas();
        sendSize();
    });

    fullscreenBtn.addEventListener('click', () => {
        if (!document.fullscreenElement) {
            document.body.classList.add('fullscreen');
            document.documentElement.requestFullscreen().then(() => {
                calculateViewportSize();
            });
        } else {
            document.exitFullscreen().then(() => {
                calculateViewportSize();
            });
        }
    });

    document.addEventListener('fullscreenchange', () => {
        if (!document.fullscreenElement) {
            document.body.classList.remove('fullscreen');
            calculateViewportSize();
        }
    });

    window.addEventListener('resize', () => {
        if (document.fullscreenElement) {
            calculateViewportSize();
        }
    });

    // Initial setup
    resizeCanvas();
    connect();
    </script>
</body>
</html>
@@ -1,5 +1,4 @@
 #!/usr/bin/env python3
-# -*- coding: utf-8 -*-
 """
 Command-line utility for interacting with mainline via ntfy.
@@ -21,11 +20,6 @@ C&C works like a serial port:
 3. Cmdline polls for response
 """
-
-import os
-
-os.environ["FORCE_COLOR"] = "1"
-os.environ["TERM"] = "xterm-256color"
 
 import argparse
 import json
 import sys
@@ -1,156 +0,0 @@
# Mainline Architecture Diagrams

> These diagrams use Mermaid. Render with: `npx @mermaid-js/mermaid-cli -i ARCHITECTURE.md` or view in GitHub/GitLab/Notion.

## Class Hierarchy (Mermaid)

```mermaid
classDiagram
    class Stage {
        <<abstract>>
        +str name
        +set[str] capabilities
        +set[str] dependencies
        +process(data, ctx) Any
    }

    Stage <|-- DataSourceStage
    Stage <|-- CameraStage
    Stage <|-- FontStage
    Stage <|-- ViewportFilterStage
    Stage <|-- EffectPluginStage
    Stage <|-- DisplayStage
    Stage <|-- SourceItemsToBufferStage
    Stage <|-- PassthroughStage
    Stage <|-- ImageToTextStage
    Stage <|-- CanvasStage

    class EffectPlugin {
        <<abstract>>
        +str name
        +EffectConfig config
        +process(buf, ctx) list[str]
        +configure(config) None
    }

    EffectPlugin <|-- NoiseEffect
    EffectPlugin <|-- FadeEffect
    EffectPlugin <|-- GlitchEffect
    EffectPlugin <|-- FirehoseEffect
    EffectPlugin <|-- CropEffect
    EffectPlugin <|-- TintEffect

    class Display {
        <<protocol>>
        +int width
        +int height
        +init(width, height, reuse)
        +show(buffer, border)
        +clear() None
        +cleanup() None
    }

    Display <|.. TerminalDisplay
    Display <|.. NullDisplay
    Display <|.. PygameDisplay
    Display <|.. WebSocketDisplay
    Display <|.. SixelDisplay

    class Camera {
        +int viewport_width
        +int viewport_height
        +CameraMode mode
        +apply(buffer, width, height) list[str]
    }

    class Pipeline {
        +dict[str, Stage] stages
        +PipelineContext context
        +execute(data) StageResult
    }

    Pipeline --> Stage
    Stage --> Display
```
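Read the hierarchy above as: every pipeline component subclasses `Stage` and advertises capability and dependency sets the pipeline can resolve. A toy sketch of a concrete stage (the `Stage` base here is a stand-in for illustration, not the real `engine/pipeline/core.py`):

```python
from abc import ABC, abstractmethod
from typing import Any


class Stage(ABC):
    """Stand-in for the real Stage ABC: a name plus capability/dependency sets."""
    name: str = "stage"

    @property
    def capabilities(self) -> set[str]:
        return set()

    @property
    def dependencies(self) -> set[str]:
        return set()

    @abstractmethod
    def process(self, data: Any, ctx: dict) -> Any: ...


class UppercaseStage(Stage):
    """Toy effect stage: uppercases every line of a text buffer."""
    name = "uppercase"

    @property
    def capabilities(self) -> set[str]:
        return {"render.effect"}

    @property
    def dependencies(self) -> set[str]:
        return {"source.headlines"}

    def process(self, data: list[str], ctx: dict) -> list[str]:
        return [line.upper() for line in data]


stage = UppercaseStage()
out = stage.process(["hello", "world"], {})
```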
## Data Flow (Mermaid)

```mermaid
flowchart LR
    DataSource[Data Source] --> DataSourceStage
    DataSourceStage --> FontStage
    FontStage --> CameraStage
    CameraStage --> EffectStages
    EffectStages --> DisplayStage
    DisplayStage --> TerminalDisplay
    DisplayStage --> BrowserWebSocket
    DisplayStage --> SixelDisplay
    DisplayStage --> NullDisplay
```

## Effect Chain (Mermaid)

```mermaid
flowchart LR
    InputBuffer --> NoiseEffect
    NoiseEffect --> FadeEffect
    FadeEffect --> GlitchEffect
    GlitchEffect --> FirehoseEffect
    FirehoseEffect --> Output
```

> **Note:** Each effect must preserve buffer dimensions (line count and visible width).
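The dimension-preserving contract in the note above can be checked mechanically. A sketch of a guard wrapper around a toy effect — hypothetical names, not the real `EffectChain` API:

```python
def apply_effect(effect, buf):
    """Run one effect over a buffer and verify it preserved the line count."""
    out = effect(list(buf))
    if len(out) != len(buf):
        raise ValueError(f"effect changed line count: {len(buf)} -> {len(out)}")
    return out


def dim_effect(buf):
    """Toy effect: replaces every other character with '.', width unchanged."""
    return [
        "".join(ch if i % 2 == 0 else "." for i, ch in enumerate(line))
        for line in buf
    ]


result = apply_effect(dim_effect, ["abcd", "efgh"])
```

An effect that drops or adds lines would fail the wrapper's check instead of silently corrupting the viewport.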
## Stage Capabilities

```mermaid
flowchart TB
    subgraph "Capability Resolution"
        D[DataSource<br/>provides: source.*]
        C[Camera<br/>provides: render.output]
        E[Effects<br/>provides: render.effect]
        DIS[Display<br/>provides: display.output]
    end
```

---

## Legacy ASCII Diagrams

### Stage Inheritance
```
Stage(ABC)
├── DataSourceStage
├── CameraStage
├── FontStage
├── ViewportFilterStage
├── EffectPluginStage
├── DisplayStage
├── SourceItemsToBufferStage
├── PassthroughStage
├── ImageToTextStage
└── CanvasStage
```

### Display Backends
```
Display(Protocol)
├── TerminalDisplay
├── NullDisplay
├── PygameDisplay
├── WebSocketDisplay
├── SixelDisplay
├── KittyDisplay
└── MultiDisplay
```

### Camera Modes
```
Camera
├── FEED        # Static view
├── SCROLL      # Horizontal scroll
├── VERTICAL    # Vertical scroll
├── HORIZONTAL  # Same as scroll
├── OMNI        # Omnidirectional
├── FLOATING    # Floating particles
└── BOUNCE      # Bouncing camera
```

docs/PIPELINE.md
@@ -1,199 +0,0 @@
# Mainline Pipeline

## Architecture Overview

```
Sources (static/dynamic) → Fetch → Prepare → Scroll → Effects → Render → Display
                                               ↓
                                   NtfyPoller ← MicMonitor (async)
```

### Data Source Abstraction (sources_v2.py)

- **Static sources**: Data fetched once and cached (HeadlinesDataSource, PoetryDataSource)
- **Dynamic sources**: Idempotent fetch for runtime updates (PipelineDataSource)
- **SourceRegistry**: Discovery and management of data sources
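The static/dynamic split above can be sketched as a small protocol plus a registry — illustrative names only, not the actual `sources_v2.py` API:

```python
from typing import Protocol


class DataSource(Protocol):
    name: str
    def fetch(self) -> list[str]: ...


class StaticSource:
    """Fetch once, then serve from cache (the headlines/poetry pattern)."""
    name = "static-demo"

    def __init__(self, loader):
        self._loader = loader
        self._cache = None

    def fetch(self) -> list[str]:
        if self._cache is None:
            self._cache = self._loader()
        return self._cache


class SourceRegistry:
    """Name -> source lookup, mirroring the registry described above."""
    def __init__(self):
        self._sources: dict[str, DataSource] = {}

    def register(self, source: DataSource) -> None:
        self._sources[source.name] = source

    def get(self, name: str) -> DataSource:
        return self._sources[name]


calls = []
src = StaticSource(lambda: calls.append(1) or ["line one"])
reg = SourceRegistry()
reg.register(src)
first = reg.get("static-demo").fetch()
second = reg.get("static-demo").fetch()  # served from cache; loader not re-run
```

A dynamic source would simply re-run its fetch on every call instead of caching.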
### Camera Modes

- **Vertical**: Scroll up (default)
- **Horizontal**: Scroll left
- **Omni**: Diagonal scroll
- **Floating**: Sinusoidal bobbing
- **Trace**: Follow network path node-by-node (for pipeline viz)
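The modes above differ mainly in how the camera offset advances each frame. A sketch with made-up step sizes and amplitudes:

```python
import math


def camera_offset(mode: str, frame: int) -> tuple[int, int]:
    """Return an (x, y) viewport offset for a mode at a given frame number."""
    if mode == "vertical":        # scroll up
        return (0, frame)
    if mode == "horizontal":      # scroll left
        return (frame, 0)
    if mode == "omni":            # diagonal
        return (frame, frame)
    if mode == "floating":        # sinusoidal bobbing around the origin
        return (round(3 * math.sin(frame / 10)), round(2 * math.sin(frame / 7)))
    raise ValueError(f"unknown mode: {mode}")
```

Trace mode would instead step the offset along a precomputed path of node coordinates.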
## Content to Display Rendering Pipeline

```mermaid
flowchart TD
    subgraph Sources["Data Sources (v2)"]
        Headlines[HeadlinesDataSource]
        Poetry[PoetryDataSource]
        Pipeline[PipelineDataSource]
        Registry[SourceRegistry]
    end

    subgraph SourcesLegacy["Data Sources (legacy)"]
        RSS[("RSS Feeds")]
        PoetryFeed[("Poetry Feed")]
        Ntfy[("Ntfy Messages")]
        Mic[("Microphone")]
    end

    subgraph Fetch["Fetch Layer"]
        FC[fetch_all]
        FP[fetch_poetry]
        Cache[(Cache)]
    end

    subgraph Prepare["Prepare Layer"]
        MB[make_block]
        Strip[strip_tags]
        Trans[translate]
    end

    subgraph Scroll["Scroll Engine"]
        SC[StreamController]
        CAM[Camera]
        RTZ[render_ticker_zone]
        Msg[render_message_overlay]
        Grad[lr_gradient]
        VT[vis_trunc / vis_offset]
    end

    subgraph Effects["Effect Pipeline"]
        subgraph EffectsPlugins["Effect Plugins"]
            Noise[NoiseEffect]
            Fade[FadeEffect]
            Glitch[GlitchEffect]
            Firehose[FirehoseEffect]
            Hud[HudEffect]
        end
        EC[EffectChain]
        ER[EffectRegistry]
    end

    subgraph Render["Render Layer"]
        BW[big_wrap]
        RL[render_line]
    end

    subgraph Display["Display Backends"]
        TD[TerminalDisplay]
        PD[PygameDisplay]
        SD[SixelDisplay]
        KD[KittyDisplay]
        WSD[WebSocketDisplay]
        ND[NullDisplay]
    end

    subgraph Async["Async Sources"]
        NTFY[NtfyPoller]
        MIC[MicMonitor]
    end

    subgraph Animation["Animation System"]
        AC[AnimationController]
        PR[Preset]
    end

    Sources --> Fetch
    RSS --> FC
    PoetryFeed --> FP
    FC --> Cache
    FP --> Cache
    Cache --> MB
    Strip --> MB
    Trans --> MB
    MB --> SC
    NTFY --> SC
    SC --> RTZ
    CAM --> RTZ
    Grad --> RTZ
    VT --> RTZ
    RTZ --> EC
    EC --> ER
    ER --> EffectsPlugins
    EffectsPlugins --> BW
    BW --> RL
    RL --> Display
    Ntfy --> RL
    Mic --> RL
    MIC --> RL

    style Sources fill:#f9f,stroke:#333
    style Fetch fill:#bbf,stroke:#333
    style Prepare fill:#bff,stroke:#333
    style Scroll fill:#bfb,stroke:#333
    style Effects fill:#fbf,stroke:#333
    style Render fill:#ffb,stroke:#333
    style Display fill:#bbf,stroke:#333
    style Async fill:#fbb,stroke:#333
    style Animation fill:#bfb,stroke:#333
```

## Animation & Presets

```mermaid
flowchart LR
    subgraph Preset["Preset"]
        PP[PipelineParams]
        AC[AnimationController]
    end

    subgraph AnimationController["AnimationController"]
        Clock[Clock]
        Events[Events]
        Triggers[Triggers]
    end

    subgraph Triggers["Trigger Types"]
        TIME[TIME]
        FRAME[FRAME]
        CYCLE[CYCLE]
        COND[CONDITION]
        MANUAL[MANUAL]
    end

    PP --> AC
    Clock --> AC
    Events --> AC
    Triggers --> Events
```

## Camera Modes

```mermaid
stateDiagram-v2
    [*] --> Vertical
    Vertical --> Horizontal: mode change
    Horizontal --> Omni: mode change
    Omni --> Floating: mode change
    Floating --> Trace: mode change
    Trace --> Vertical: mode change

    state Vertical {
        [*] --> ScrollUp
        ScrollUp --> ScrollUp: +y each frame
    }

    state Horizontal {
        [*] --> ScrollLeft
        ScrollLeft --> ScrollLeft: +x each frame
    }

    state Omni {
        [*] --> Diagonal
        Diagonal --> Diagonal: +x, +y each frame
    }

    state Floating {
        [*] --> Bobbing
        Bobbing --> Bobbing: sin(time) for x,y
    }

    state Trace {
        [*] --> FollowPath
        FollowPath --> FollowPath: node by node
    }
```
docs/superpowers/plans/2026-03-16-color-scheme-implementation.md
@@ -0,0 +1,894 @@
# Color Scheme Switcher Implementation Plan

> **For agentic workers:** REQUIRED: Use superpowers:subagent-driven-development (if subagents available) or superpowers:executing-plans to implement this plan. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Implement an interactive color theme picker at startup that lets users choose between green, orange, or purple gradients with complementary message queue colors.

**Architecture:** A new `themes.py` data module defines the Theme class and THEME_REGISTRY. Config adds an `ACTIVE_THEME` global set by the picker. Render functions read from the active theme instead of hardcoded constants. The app adds a picker UI that mirrors the font picker pattern.

**Tech Stack:** Python 3.10+, ANSI 256-color codes, existing terminal I/O utilities

---

## File Structure

| File | Purpose | Change Type |
|------|---------|------------|
| `engine/themes.py` | Theme class, THEME_REGISTRY, color codes | Create |
| `engine/config.py` | ACTIVE_THEME global, set_active_theme() | Modify |
| `engine/render.py` | Replace GRAD_COLS/MSG_GRAD_COLS with config lookup | Modify |
| `engine/scroll.py` | Update message gradient call | Modify |
| `engine/app.py` | pick_color_theme(), call in main() | Modify |
| `tests/test_themes.py` | Theme class and registry unit tests | Create |

---

## Chunk 1: Theme Data Module

### Task 1: Create themes.py with Theme class and registry

**Files:**
- Create: `engine/themes.py`
- Test: `tests/test_themes.py`
- [ ] **Step 1: Write failing test for Theme class**

Create `tests/test_themes.py`:

```python
"""Test color themes and registry."""
import pytest

from engine.themes import Theme, THEME_REGISTRY, get_theme


def test_theme_construction():
    """Theme stores name and gradient lists."""
    main = ["\033[1;38;5;231m"] * 12
    msg = ["\033[1;38;5;225m"] * 12
    theme = Theme(name="Test Green", main_gradient=main, message_gradient=msg)

    assert theme.name == "Test Green"
    assert theme.main_gradient == main
    assert theme.message_gradient == msg


def test_gradient_length():
    """Each gradient must have exactly 12 ANSI codes."""
    for theme_id, theme in THEME_REGISTRY.items():
        assert len(theme.main_gradient) == 12, f"{theme_id} main gradient wrong length"
        assert len(theme.message_gradient) == 12, f"{theme_id} message gradient wrong length"


def test_theme_registry_has_three_themes():
    """Registry contains green, orange, purple."""
    assert len(THEME_REGISTRY) == 3
    assert "green" in THEME_REGISTRY
    assert "orange" in THEME_REGISTRY
    assert "purple" in THEME_REGISTRY


def test_get_theme_valid():
    """get_theme returns Theme object for valid ID."""
    theme = get_theme("green")
    assert isinstance(theme, Theme)
    assert theme.name == "Verdant Green"


def test_get_theme_invalid():
    """get_theme raises KeyError for invalid ID."""
    with pytest.raises(KeyError):
        get_theme("invalid_theme")


def test_green_theme_unchanged():
    """Green theme uses original green → magenta colors."""
    green_theme = get_theme("green")
    # First color should be white (bold)
    assert green_theme.main_gradient[0] == "\033[1;38;5;231m"
    # Last deep green before the dim tail
    assert green_theme.main_gradient[9] == "\033[38;5;22m"
    # Message gradient is magenta
    assert green_theme.message_gradient[9] == "\033[38;5;89m"
```

Run: `pytest tests/test_themes.py -v`
Expected: FAIL (module doesn't exist)
- [ ] **Step 2: Create themes.py with Theme class and finalized gradients**

Create `engine/themes.py`:

```python
"""Color theme definitions and registry."""


class Theme:
    """Encapsulates a color scheme: name, main gradient, message gradient."""

    def __init__(self, name: str, main_gradient: list[str], message_gradient: list[str]):
        """Initialize theme with display name and gradient lists.

        Args:
            name: Display name (e.g., "Verdant Green")
            main_gradient: List of 12 ANSI 256-color codes (white → primary color)
            message_gradient: List of 12 ANSI codes (white → complementary color)
        """
        self.name = name
        self.main_gradient = main_gradient
        self.message_gradient = message_gradient


# ─── FINALIZED GRADIENTS ──────────────────────────────────────────────────
# Each gradient: white → primary/complementary, 12 steps total
# Format: "\033[<brightness>;38;5;<colorcode>m"

_GREEN_MAIN = [
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;195m",  # pale white-tint
    "\033[38;5;123m",    # bright cyan
    "\033[38;5;118m",    # bright lime
    "\033[38;5;82m",     # lime
    "\033[38;5;46m",     # bright green
    "\033[38;5;40m",     # green
    "\033[38;5;34m",     # medium green
    "\033[38;5;28m",     # dark green
    "\033[38;5;22m",     # deep green
    "\033[2;38;5;22m",   # dim deep green
    "\033[2;38;5;235m",  # near black
]

_GREEN_MESSAGE = [
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;225m",  # pale pink-white
    "\033[38;5;219m",    # bright pink
    "\033[38;5;213m",    # hot pink
    "\033[38;5;207m",    # magenta
    "\033[38;5;201m",    # bright magenta
    "\033[38;5;165m",    # orchid-red
    "\033[38;5;161m",    # ruby-magenta
    "\033[38;5;125m",    # dark magenta
    "\033[38;5;89m",     # deep maroon-magenta
    "\033[2;38;5;89m",   # dim deep maroon-magenta
    "\033[2;38;5;235m",  # near black
]

_ORANGE_MAIN = [
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;215m",  # pale orange-white
    "\033[38;5;209m",    # bright orange
    "\033[38;5;208m",    # vibrant orange
    "\033[38;5;202m",    # orange
    "\033[38;5;166m",    # dark orange
    "\033[38;5;130m",    # burnt orange
    "\033[38;5;94m",     # rust
    "\033[38;5;58m",     # dark rust
    "\033[38;5;94m",     # rust (hold)
    "\033[2;38;5;94m",   # dim rust
    "\033[2;38;5;235m",  # near black
]

_ORANGE_MESSAGE = [
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;195m",  # pale cyan-white
    "\033[38;5;33m",     # bright blue
    "\033[38;5;27m",     # blue
    "\033[38;5;21m",     # deep blue
    "\033[38;5;21m",     # deep blue (hold)
    "\033[38;5;21m",     # deep blue (hold)
    "\033[38;5;18m",     # navy
    "\033[38;5;18m",     # navy (hold)
    "\033[38;5;18m",     # navy (hold)
    "\033[2;38;5;18m",   # dim navy
    "\033[2;38;5;235m",  # near black
]

_PURPLE_MAIN = [
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;225m",  # pale purple-white
    "\033[38;5;177m",    # bright purple
    "\033[38;5;171m",    # vibrant purple
    "\033[38;5;165m",    # purple
    "\033[38;5;135m",    # medium purple
    "\033[38;5;129m",    # purple
    "\033[38;5;93m",     # dark purple
    "\033[38;5;57m",     # deep purple
    "\033[38;5;57m",     # deep purple (hold)
    "\033[2;38;5;57m",   # dim deep purple
    "\033[2;38;5;235m",  # near black
]

_PURPLE_MESSAGE = [
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;226m",  # pale yellow-white
    "\033[38;5;226m",    # bright yellow
    "\033[38;5;220m",    # yellow
    "\033[38;5;220m",    # yellow (hold)
    "\033[38;5;184m",    # dark yellow
    "\033[38;5;184m",    # dark yellow (hold)
    "\033[38;5;178m",    # olive-yellow
    "\033[38;5;178m",    # olive-yellow (hold)
    "\033[38;5;172m",    # golden
    "\033[2;38;5;172m",  # dim golden
    "\033[2;38;5;235m",  # near black
]

# ─── THEME REGISTRY ───────────────────────────────────────────────────────

THEME_REGISTRY = {
    "green": Theme(
        name="Verdant Green",
        main_gradient=_GREEN_MAIN,
        message_gradient=_GREEN_MESSAGE,
    ),
    "orange": Theme(
        name="Molten Orange",
        main_gradient=_ORANGE_MAIN,
        message_gradient=_ORANGE_MESSAGE,
    ),
    "purple": Theme(
        name="Violet Purple",
        main_gradient=_PURPLE_MAIN,
        message_gradient=_PURPLE_MESSAGE,
    ),
}


def get_theme(theme_id: str) -> Theme:
    """Retrieve a theme by ID.

    Args:
        theme_id: One of "green", "orange", "purple"

    Returns:
        Theme object

    Raises:
        KeyError: If theme_id not found in registry
    """
    if theme_id not in THEME_REGISTRY:
        raise KeyError(f"Unknown theme: {theme_id}. Available: {list(THEME_REGISTRY.keys())}")
    return THEME_REGISTRY[theme_id]
```
- [ ] **Step 3: Run tests to verify they pass**

Run: `pytest tests/test_themes.py -v`
Expected: PASS (all 6 tests)

- [ ] **Step 4: Commit**

```bash
git add engine/themes.py tests/test_themes.py
git commit -m "feat: create Theme class and registry with finalized color gradients

- Define Theme class to encapsulate name and main/message gradients
- Create THEME_REGISTRY with green, orange, purple themes
- Each gradient has 12 ANSI 256-color codes finalized
- Complementary color pairs: green/magenta, orange/blue, purple/yellow
- Add get_theme() lookup with error handling
- Add comprehensive unit tests"
```

---

## Chunk 2: Config Integration

### Task 2: Add ACTIVE_THEME global and set_active_theme() to config.py

**Files:**
- Modify: `engine/config.py:1-30`
- Test: `tests/test_config.py` (expand existing)
- [ ] **Step 1: Write failing tests for config changes**

Add to `tests/test_config.py`:

```python
import pytest

from engine import config


def test_active_theme_initially_none():
    """ACTIVE_THEME is None before initialization."""
    # This test may fail if config is already initialized,
    # so reset it to None first for testing.
    config.ACTIVE_THEME = None
    assert config.ACTIVE_THEME is None


def test_set_active_theme_green():
    """set_active_theme('green') sets ACTIVE_THEME to the green theme."""
    from engine.config import set_active_theme
    from engine.themes import get_theme

    set_active_theme("green")

    assert config.ACTIVE_THEME is not None
    assert config.ACTIVE_THEME.name == "Verdant Green"
    assert config.ACTIVE_THEME == get_theme("green")


def test_set_active_theme_default():
    """set_active_theme() with no args defaults to green."""
    from engine.config import set_active_theme

    set_active_theme()

    assert config.ACTIVE_THEME.name == "Verdant Green"


def test_set_active_theme_invalid():
    """set_active_theme() with an invalid ID raises KeyError."""
    from engine.config import set_active_theme

    with pytest.raises(KeyError):
        set_active_theme("invalid")
```

Run: `pytest tests/test_config.py -v`
Expected: FAIL (functions don't exist yet)
- [ ] **Step 2: Add ACTIVE_THEME global and set_active_theme() to config.py**

Edit `engine/config.py`, add after line 30 (after the `_resolve_font_path` function):

```python
# ─── COLOR THEME ──────────────────────────────────────────────────────────
ACTIVE_THEME = None  # set by set_active_theme() after picker


def set_active_theme(theme_id: str = "green"):
    """Set the active color theme. Defaults to 'green' if not specified.

    Args:
        theme_id: One of "green", "orange", "purple"

    Raises:
        KeyError: If theme_id is invalid
    """
    global ACTIVE_THEME
    from engine import themes
    ACTIVE_THEME = themes.get_theme(theme_id)
```
- [ ] **Step 3: Remove hardcoded GRAD_COLS and MSG_GRAD_COLS from render.py**

Edit `engine/render.py`, find and delete lines 20-49 (the hardcoded gradient arrays):

```python
# DELETED:
# GRAD_COLS = [...]
# MSG_GRAD_COLS = [...]
```

- [ ] **Step 4: Run tests to verify they pass**

Run: `pytest tests/test_config.py::test_active_theme_initially_none -v`
Run: `pytest tests/test_config.py::test_set_active_theme_green -v`
Run: `pytest tests/test_config.py::test_set_active_theme_default -v`
Run: `pytest tests/test_config.py::test_set_active_theme_invalid -v`

Expected: PASS (all 4 new tests)

- [ ] **Step 5: Verify existing config tests still pass**

Run: `pytest tests/test_config.py -v`

Expected: PASS (all existing + new tests)

- [ ] **Step 6: Commit**

```bash
git add engine/config.py tests/test_config.py
git commit -m "feat: add ACTIVE_THEME global and set_active_theme() to config

- Add ACTIVE_THEME global (initialized to None)
- Add set_active_theme(theme_id) function with green default
- Remove hardcoded GRAD_COLS and MSG_GRAD_COLS (move to themes.py)
- Add comprehensive tests for theme setting"
```

---
## Chunk 3: Render Pipeline Integration

### Task 3: Update render.py to use config.ACTIVE_THEME

**Files:**
- Modify: `engine/render.py:15-220`
- Test: `tests/test_render.py` (expand existing)

- [ ] **Step 1: Write failing test for lr_gradient with theme**

Add to `tests/test_render.py`:

```python
def test_lr_gradient_uses_active_theme():
    """lr_gradient uses config.ACTIVE_THEME when cols=None."""
    from engine import config, render

    # Set orange theme
    config.set_active_theme("orange")

    # Create simple rows
    rows = ["test row"]
    result = render.lr_gradient(rows, offset=0, cols=None)

    # Result should start with the first color from the orange main gradient
    assert result[0].startswith("\033[1;38;5;231m")  # white (same for all themes)


def test_lr_gradient_fallback_when_no_theme():
    """lr_gradient uses the fallback when ACTIVE_THEME is None."""
    from engine import config, render

    # Clear active theme
    config.ACTIVE_THEME = None

    rows = ["test row"]
    result = render.lr_gradient(rows, offset=0, cols=None)

    # Should not crash and should return something
    assert result is not None
    assert len(result) > 0


def test_default_green_gradient_length():
    """_default_green_gradient returns 12 colors."""
    from engine import render

    colors = render._default_green_gradient()
    assert len(colors) == 12
```

Run: `pytest tests/test_render.py::test_lr_gradient_uses_active_theme -v`
Expected: FAIL (function signature doesn't match)
- [ ] **Step 2: Update lr_gradient() to use config.ACTIVE_THEME**

Edit `engine/render.py`, find the `lr_gradient()` function (around line 194) and update it:

```python
def lr_gradient(rows, offset, cols=None):
    """
    Render rows through a left-to-right color sweep.

    Args:
        rows: List of text rows to colorize
        offset: Gradient position offset (for animation)
        cols: Optional list of color codes. If None, uses the active theme.

    Returns:
        List of colorized rows
    """
    if cols is None:
        from engine import config
        cols = (
            config.ACTIVE_THEME.main_gradient
            if config.ACTIVE_THEME
            else _default_green_gradient()
        )

    # ... rest of function unchanged ...
```
- [ ] **Step 3: Add _default_green_gradient() fallback function**

Add to `engine/render.py` before `lr_gradient()`:

```python
def _default_green_gradient():
    """Fallback green gradient (original colors) for initialization."""
    return [
        "\033[1;38;5;231m",  # white (bold)
        "\033[1;38;5;195m",  # pale white-tint
        "\033[38;5;123m",    # bright cyan
        "\033[38;5;118m",    # bright lime
        "\033[38;5;82m",     # lime
        "\033[38;5;46m",     # bright green
        "\033[38;5;40m",     # green
        "\033[38;5;34m",     # medium green
        "\033[38;5;28m",     # dark green
        "\033[38;5;22m",     # deep green
        "\033[2;38;5;22m",   # dim deep green
        "\033[2;38;5;235m",  # near black
    ]


def _default_magenta_gradient():
    """Fallback magenta gradient (original message colors) for initialization."""
    return [
        "\033[1;38;5;231m",  # white (bold)
        "\033[1;38;5;225m",  # pale pink-white
        "\033[38;5;219m",    # bright pink
        "\033[38;5;213m",    # hot pink
        "\033[38;5;207m",    # magenta
        "\033[38;5;201m",    # bright magenta
        "\033[38;5;165m",    # orchid-red
        "\033[38;5;161m",    # ruby-magenta
        "\033[38;5;125m",    # dark magenta
        "\033[38;5;89m",     # deep maroon-magenta
        "\033[2;38;5;89m",   # dim deep maroon-magenta
        "\033[2;38;5;235m",  # near black
    ]
```
- [ ] **Step 4: Run tests to verify they pass**

Run: `pytest tests/test_render.py::test_lr_gradient_uses_active_theme -v`
Run: `pytest tests/test_render.py::test_lr_gradient_fallback_when_no_theme -v`
Run: `pytest tests/test_render.py::test_default_green_gradient_length -v`

Expected: PASS (all 3 new tests)

- [ ] **Step 5: Run full render test suite**

Run: `pytest tests/test_render.py -v`

Expected: PASS (existing tests may need adjustment for mocking)

- [ ] **Step 6: Commit**

```bash
git add engine/render.py tests/test_render.py
git commit -m "feat: update lr_gradient to use config.ACTIVE_THEME

- Update lr_gradient(cols=None) to check config.ACTIVE_THEME
- Add _default_green_gradient() and _default_magenta_gradient() fallbacks
- Fallback used when ACTIVE_THEME is None (non-interactive init)
- Add tests for theme-aware and fallback gradient rendering"
```

---
## Chunk 4: Message Gradient Integration

### Task 4: Update scroll.py to use the message gradient from config

**Files:**
- Modify: `engine/scroll.py:85-95`
- Test: existing `tests/test_scroll.py`

- [ ] **Step 1: Locate message gradient calls in scroll.py**

Run: `grep -n "MSG_GRAD_COLS\|lr_gradient_opposite" engine/scroll.py`

Expected: Should find the line(s) where `MSG_GRAD_COLS` or similar is used

- [ ] **Step 2: Update scroll.py to use the theme message gradient**

Edit `engine/scroll.py`, find the line that uses message gradients (around line 89 based on the spec) and update:

Old code:
```python
# Some variation of:
rows = lr_gradient(rows, offset, MSG_GRAD_COLS)
```

New code:
```python
from engine import config
msg_cols = (
    config.ACTIVE_THEME.message_gradient
    if config.ACTIVE_THEME
    else render._default_magenta_gradient()
)
rows = lr_gradient(rows, offset, msg_cols)
```

Or use the helper approach (create `msg_gradient()` in render.py):

```python
def msg_gradient(rows, offset):
    """Apply the message (ntfy) gradient using the theme's complementary colors."""
    from engine import config
    cols = (
        config.ACTIVE_THEME.message_gradient
        if config.ACTIVE_THEME
        else _default_magenta_gradient()
    )
    return lr_gradient(rows, offset, cols)
```

Then in scroll.py:
```python
rows = render.msg_gradient(rows, offset)
```
- [ ] **Step 3: Run existing scroll tests**

Run: `pytest tests/test_scroll.py -v`

Expected: PASS (existing functionality unchanged)

- [ ] **Step 4: Commit**

```bash
git add engine/scroll.py engine/render.py
git commit -m "feat: update scroll.py to use theme message gradient

- Replace MSG_GRAD_COLS reference with config.ACTIVE_THEME.message_gradient
- Use fallback magenta gradient when theme not initialized
- Ensure ntfy messages render in complementary color from selected theme"
```

---
## Chunk 5: Color Picker UI

### Task 5: Create pick_color_theme() function in app.py

**Files:**
- Modify: `engine/app.py:1-300`
- Test: manual/integration (interactive)

- [ ] **Step 1: Write helper functions for the color picker UI**

Edit `engine/app.py`, add before the `pick_font_face()` function:

```python
def _draw_color_picker(themes_list, selected):
    """Draw the color theme picker menu."""
    from engine.terminal import CLR, W_GHOST, G_HI, G_DIM, tw

    print(CLR, end="")
    print()
    print(f" {G_HI}▼ COLOR THEME{W_GHOST} ─ ↑/↓ or j/k to move, Enter/q to select{G_DIM}")
    print(f" {W_GHOST}{'─' * (tw() - 4)}\n")

    for i, (theme_id, theme) in enumerate(themes_list):
        prefix = " ▶ " if i == selected else "   "
        color = G_HI if i == selected else ""
        reset = "" if i == selected else W_GHOST
        print(f"{prefix}{color}{theme.name}{reset}")

    print()
```
- [ ] **Step 2: Create pick_color_theme() function**

Edit `engine/app.py`, add after the helper function:

```python
def pick_color_theme():
    """Interactive color theme picker. Defaults to 'green' if not a TTY."""
    import sys
    import termios
    import tty
    from engine import config, themes

    # Non-interactive fallback: use green
    if not sys.stdin.isatty():
        config.set_active_theme("green")
        return

    themes_list = list(themes.THEME_REGISTRY.items())
    selected = 0

    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)
        while True:
            _draw_color_picker(themes_list, selected)
            key = _read_picker_key()
            if key == "up":
                selected = max(0, selected - 1)
            elif key == "down":
                selected = min(len(themes_list) - 1, selected + 1)
            elif key == "enter":
                break
            elif key == "interrupt":
                raise KeyboardInterrupt
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

    selected_theme_id = themes_list[selected][0]
    config.set_active_theme(selected_theme_id)

    theme_name = themes_list[selected][1].name
    print(f" {G_DIM}> using {theme_name}{RST}")
    time.sleep(0.8)
    print(CLR, end="")
    print(CURSOR_OFF, end="")
    print()
```
- [ ] **Step 3: Update main() to call pick_color_theme() before pick_font_face()**

Edit `engine/app.py`, find the `main()` function and locate where `pick_font_face()` is called (around line 265). Add before it:

```python
def main():
    # ... existing signal handler setup ...

    pick_color_theme()  # NEW LINE - before font picker
    pick_font_face()

    # ... rest of main unchanged ...
```

- [ ] **Step 4: Manual test - run in an interactive terminal**

Run: `python3 mainline.py`

Expected:
- See the color theme picker menu before the font picker
- Can navigate with ↑/↓ or j/k
- Can select with Enter or q
- Selected theme applies to scrolling headlines
- Can select different themes and see colors change

- [ ] **Step 5: Manual test - run in a non-interactive environment**

Run: `echo "" | python3 mainline.py`

Expected:
- No color picker menu shown
- Defaults to green theme
- App runs without error

- [ ] **Step 6: Commit**

```bash
git add engine/app.py
git commit -m "feat: add pick_color_theme() UI and integration

- Create _draw_color_picker() to render menu
- Create pick_color_theme() function mirroring font picker pattern
- Integrate into main() before font picker
- Fallback to green theme in non-interactive environments
- Support arrow keys and j/k navigation"
```

---
## Chunk 6: Integration & Validation

### Task 6: End-to-end testing and cleanup

**Files:**
- Test: All modified files
- Verify: App functionality

- [ ] **Step 1: Run full test suite**

Run: `pytest tests/ -v`

Expected: PASS (all tests, including new ones)

- [ ] **Step 2: Run linter**

Run: `ruff check engine/ mainline.py`

Expected: No errors (fix any style issues)

- [ ] **Step 3: Manual integration test - green theme**

Run: `python3 mainline.py`

Then select "Verdant Green" from the picker.

Expected:
- Headlines render in green → deep green
- ntfy messages render in the magenta gradient
- Both work correctly during streaming

- [ ] **Step 4: Manual integration test - orange theme**

Run: `python3 mainline.py`

Then select "Molten Orange" from the picker.

Expected:
- Headlines render in orange → deep orange
- ntfy messages render in the blue gradient
- Colors are visually distinct from green

- [ ] **Step 5: Manual integration test - purple theme**

Run: `python3 mainline.py`

Then select "Violet Purple" from the picker.

Expected:
- Headlines render in purple → deep purple
- ntfy messages render in the yellow gradient
- Colors are visually distinct from green and orange

- [ ] **Step 6: Test poetry mode with the color picker**

Run: `python3 mainline.py --poetry`

Then select "orange" from the picker.

Expected:
- Poetry mode works with the color picker
- Colors apply to poetry rendering

- [ ] **Step 7: Test code mode with the color picker**

Run: `python3 mainline.py --code`

Then select "purple" from the picker.

Expected:
- Code mode works with the color picker
- Colors apply to code rendering

- [ ] **Step 8: Verify acceptance criteria**

✓ Color picker displays 3 theme options at startup
✓ Selection applies to all headline and message gradients
✓ Boot UI (title, status) uses hardcoded green (not theme)
✓ Scrolling headlines and ntfy messages use theme gradients
✓ No persistence between runs (each run picks fresh)
✓ Non-TTY environments default to green without error
✓ Architecture supports future random/animation modes
✓ All gradient color codes finalized with no TBD values

- [ ] **Step 9: Final commit**

```bash
git add -A
git commit -m "feat: color scheme switcher implementation complete

Closes color-pick feature with:
- Three selectable color themes (green, orange, purple)
- Interactive menu at startup (mirrors font picker UI)
- Complementary colors for ntfy message queue
- Fallback to green in non-interactive environments
- All tests passing, manual validation complete"
```
- [ ] **Step 10: Create feature branch PR summary**

```
## Color Scheme Switcher

Implements interactive color theme selection for the Mainline news ticker.

### What's New
- 3 color themes: Verdant Green, Molten Orange, Violet Purple
- Interactive picker at startup (↑/↓ or j/k, Enter to select)
- Complementary gradients for ntfy messages (magenta, blue, yellow)
- Fresh theme selection each run (no persistence)

### Files Changed
- `engine/themes.py` (new)
- `engine/config.py` (ACTIVE_THEME, set_active_theme)
- `engine/render.py` (theme-aware gradients)
- `engine/scroll.py` (message gradient integration)
- `engine/app.py` (pick_color_theme UI)
- `tests/test_themes.py` (new theme tests)
- `README.md` (documentation)

### Acceptance Criteria
All met. App fully tested and ready for merge.
```

---

## Testing Checklist

- [ ] Unit tests: `pytest tests/test_themes.py -v`
- [ ] Unit tests: `pytest tests/test_config.py -v`
- [ ] Unit tests: `pytest tests/test_render.py -v`
- [ ] Full suite: `pytest tests/ -v`
- [ ] Linting: `ruff check engine/ mainline.py`
- [ ] Manual: Green theme selection
- [ ] Manual: Orange theme selection
- [ ] Manual: Purple theme selection
- [ ] Manual: Poetry mode with colors
- [ ] Manual: Code mode with colors
- [ ] Manual: Non-TTY fallback

---

## Notes

- `themes.py` is data-only; never import config or render, to prevent cycles
- `ACTIVE_THEME` initialized to None; guaranteed non-None before stream() via pick_color_theme()
- Font picker UI remains hardcoded green; title/subtitle use G_HI/G_DIM constants (not theme)
- Message gradients use complementary colors; lookup in scroll.py
- Each gradient has 12 colors; verify length in tests
- No persistence; fresh picker each run
---

**New file:** `docs/superpowers/specs/2026-03-16-code-scroll-design.md` (154 lines)
# Code Scroll Mode — Design Spec

**Date:** 2026-03-16
**Branch:** feat/code-scroll
**Status:** Approved

---

## Overview

Add a `--code` CLI flag that puts MAINLINE into "source consciousness" mode. Instead of RSS headlines or poetry stanzas, the program's own source code scrolls upward as large OTF half-block characters with the standard white-hot → deep green gradient. Each scroll item is one non-blank, non-comment line from `engine/*.py`, attributed to its enclosing function/class scope and dotted module path.

---

## Goals

- Mirror the existing `--poetry` mode pattern as closely as possible
- Zero new runtime dependencies (stdlib `ast` and `pathlib` only)
- No changes to `scroll.py` or the render pipeline
- The item tuple shape `(text, src, ts)` is unchanged

---

## New Files

### `engine/fetch_code.py`

Single public function `fetch_code()` that returns `(items, line_count, 0)`.

**Algorithm:**

1. Glob `engine/*.py` in sorted order
2. For each file:
   a. Read the source text
   b. `ast.parse(source)` → build a `{line_number: scope_label}` map by walking all `FunctionDef`, `AsyncFunctionDef`, and `ClassDef` nodes. Each node covers its full line range. Inner scopes override outer ones.
   c. Iterate source lines (1-indexed). Skip if:
      - The stripped line is empty
      - The stripped line starts with `#`
   d. For each kept line emit:
      - `text` = `line.rstrip()` (preserve indentation for readability in the big render)
      - `src` = scope label from the AST map, e.g. `stream()` for functions, `MicMonitor` for classes, `<module>` for top-level lines
      - `ts` = dotted module path derived from the filename, e.g. `engine/scroll.py` → `engine.scroll`
3. Return `(items, len(items), 0)`

**Scope label rules:**
- `FunctionDef` / `AsyncFunctionDef` → `name()`
- `ClassDef` → `name` (no parens)
- No enclosing node → `<module>`

**Dependencies:** `ast`, `pathlib` — stdlib only.
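The steps above can be sketched as a single function. This is a minimal illustration only: the `root` parameter is added here purely so the sketch is testable outside the repo (the real module would hardcode `engine/`), and it folds in the `ast.parse` fallback described later under Error Handling.

```python
import ast
from pathlib import Path


def fetch_code(root: str = "engine") -> tuple[list[tuple[str, str, str]], int, int]:
    """Return (items, line_count, 0); each item is (text, scope_label, module_path)."""
    items = []
    for path in sorted(Path(root).glob("*.py")):
        source = path.read_text()
        module = f"{Path(root).name}.{path.stem}"  # engine/scroll.py -> engine.scroll
        scope_map: dict[int, str] = {}
        try:
            tree = ast.parse(source)
        except SyntaxError:
            tree = None  # malformed file: every line falls back to <module>
        if tree is not None:
            for node in ast.walk(tree):
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    label = f"{node.name}()"
                elif isinstance(node, ast.ClassDef):
                    label = node.name
                else:
                    continue
                # Each node covers its full line range; ast.walk visits outer
                # nodes first, so inner scopes overwrite (override) outer ones.
                for ln in range(node.lineno, (node.end_lineno or node.lineno) + 1):
                    scope_map[ln] = label
        for lineno, line in enumerate(source.splitlines(), start=1):
            stripped = line.strip()
            if not stripped or stripped.startswith("#"):
                continue  # skip blank and comment lines
            items.append((line.rstrip(), scope_map.get(lineno, "<module>"), module))
    return items, len(items), 0
```

The override trick relies on `ast.walk` being breadth-first: a nested `def` is visited after its enclosing class, so its label lands last in the map.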
---

## Modified Files

### `engine/config.py`

Extend `MODE` detection to recognise `--code`:

```python
MODE = (
    "poetry" if "--poetry" in sys.argv or "-p" in sys.argv
    else "code" if "--code" in sys.argv
    else "news"
)
```

### `engine/app.py`

**Subtitle line** — extend the subtitle dict:

```python
_subtitle = {
    "poetry": "literary consciousness stream",
    "code": "source consciousness stream",
}.get(config.MODE, "digital consciousness stream")
```

**Boot sequence** — add an `elif config.MODE == "code":` branch after the poetry branch:

```python
elif config.MODE == "code":
    from engine.fetch_code import fetch_code
    slow_print(" > INITIALIZING SOURCE ARRAY...\n")
    time.sleep(0.2)
    print()
    items, line_count, _ = fetch_code()
    print()
    print(f" {G_DIM}>{RST} {G_MID}{line_count} LINES ACQUIRED{RST}")
```

No cache save/load — local source files are read instantly and change only on disk writes.
---

## Data Flow

```
engine/*.py (sorted)
        │
        ▼
  fetch_code()
        │  ast.parse → scope map
        │  filter blank + comment lines
        │  emit (line, scope(), engine.module)
        ▼
  items: List[Tuple[str, str, str]]
        │
        ▼
  stream(items, ntfy, mic)    ← unchanged
        │
        ▼
  next_headline() shuffles + recycles automatically
```

---

## Error Handling

- If a file fails to `ast.parse` (malformed source), fall back to `<module>` scope for all lines in that file — do not crash.
- If `engine/` contains no `.py` files (shouldn't happen in practice), `fetch_code()` returns an empty list; `app.py`'s existing `if not items:` guard handles this.

---

## Testing

New file: `tests/test_fetch_code.py`

| Test | Assertion |
|------|-----------|
| `test_items_are_tuples` | Every item from `fetch_code()` is a 3-tuple of strings |
| `test_blank_and_comment_lines_excluded` | No item text is empty; no item text (stripped) starts with `#` |
| `test_module_path_format` | Every `ts` field matches pattern `engine\.\w+` |

No mocking — tests read the real engine source files, keeping them honest against actual content.
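The three tests in the table could look like this. A sketch assuming `fetch_code()` is importable as specified; a tiny labeled stand-in is included only so the snippet runs outside the repo.

```python
import re

try:
    from engine.fetch_code import fetch_code
except ImportError:
    # Stand-in so this sketch runs outside the repo: three well-formed items.
    def fetch_code():
        return [
            ("X = 1", "<module>", "engine.config"),
            ("def f():", "f()", "engine.scroll"),
            ("    return X", "f()", "engine.scroll"),
        ], 3, 0


def test_items_are_tuples():
    items, _, _ = fetch_code()
    assert all(isinstance(item, tuple) and len(item) == 3 for item in items)
    assert all(isinstance(field, str) for item in items for field in item)


def test_blank_and_comment_lines_excluded():
    items, _, _ = fetch_code()
    # No empty text; no (stripped) text starting with '#'
    assert all(item[0].strip() and not item[0].strip().startswith("#") for item in items)


def test_module_path_format():
    items, _, _ = fetch_code()
    assert all(re.fullmatch(r"engine\.\w+", item[2]) for item in items)
```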
---

## CLI

```bash
python3 mainline.py --code   # source consciousness mode
uv run mainline.py --code
```

Compatible with all existing flags (`--no-font-picker`, `--font-file`, `--firehose`, etc.).

---

## Out of Scope

- Syntax highlighting / token-aware coloring (can be added later)
- `--code-dir` flag for pointing at arbitrary directories (YAGNI)
- Caching code items to disk
---

**New file:** `docs/superpowers/specs/2026-03-16-color-scheme-design.md` (299 lines)
# Color Scheme Switcher Design

**Date:** 2026-03-16
**Status:** Revised after review
**Scope:** Interactive color theme selection for the Mainline news ticker

---

## Overview

Mainline currently renders news headlines with a fixed white-hot → deep green gradient. This feature adds an interactive theme picker at startup that lets users choose between three precise color schemes (green, orange, purple), each with complementary message queue colors.

The implementation uses a dedicated `Theme` class to encapsulate gradients and metadata, enabling future extensions like random rotation, animation, or additional themes without architectural changes.

---

## Requirements

**Functional:**
1. User selects a color theme from an interactive menu at startup (green, orange, or purple)
2. Main headline gradient uses the selected primary color (white → color)
3. Message queue (ntfy) gradient uses the precise complementary color (white → opposite)
4. Selection is fresh each run (no persistence)
5. Design supports a future "random rotation" mode without refactoring

**Complementary colors (precise opposites):**
- Green (38;5;22) → Magenta (38;5;89) *(current, unchanged)*
- Orange (38;5;208) → Blue (38;5;21)
- Purple (38;5;129) → Yellow (38;5;226)
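For a quick way to eyeball the pairings, the list above maps directly onto ANSI 256-color escape codes. A tiny illustration; `COMPLEMENTS` and `swatch` are names invented here, not part of the plan.

```python
# Primary -> complementary ANSI-256 indices from the pairings above.
COMPLEMENTS = {22: 89, 208: 21, 129: 226}  # green->magenta, orange->blue, purple->yellow


def swatch(idx: int) -> str:
    """Two colored blocks in ANSI-256 foreground color `idx`, then reset."""
    return f"\033[38;5;{idx}m██\033[0m"


for primary, comp in COMPLEMENTS.items():
    print(f"{swatch(primary)} {primary:>3} → {swatch(comp)} {comp:>3}")
```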
**Non-functional:**
- Reuse the existing font picker pattern for UI consistency
- Zero runtime overhead during streaming (theme lookup happens once at startup)
- **Boot UI (title, subtitle, status lines) use hardcoded green color constants (G_HI, G_DIM, G_MID); only scrolling headlines and ntfy messages use theme gradients**
- Font picker UI remains hardcoded green for visual continuity

---

## Architecture

### New Module: `engine/themes.py`

**Data-only module:** Contains the Theme class, THEME_REGISTRY, and the get_theme() function. **Imports only typing; does NOT import config or render** to prevent circular dependencies.

```python
class Theme:
    """Encapsulates a color scheme: name, main gradient, message gradient."""

    def __init__(self, name: str, main_gradient: list[str], message_gradient: list[str]):
        self.name = name
        self.main_gradient = main_gradient        # white → primary color
        self.message_gradient = message_gradient  # white → complementary
```

**Theme Registry:**
Three instances registered by ID: `"green"`, `"orange"`, `"purple"` (IDs match menu labels for clarity).

Each gradient is a list of 12 ANSI 256-color codes matching the current green gradient:
```
[
    "\033[1;38;5;231m",  # white (bold)
    "\033[1;38;5;195m",  # pale white-tint
    "\033[38;5;123m",    # bright cyan
    "\033[38;5;118m",    # bright lime
    "\033[38;5;82m",     # lime
    "\033[38;5;46m",     # bright color
    "\033[38;5;40m",     # color
    "\033[38;5;34m",     # medium color
    "\033[38;5;28m",     # dark color
    "\033[38;5;22m",     # deep color
    "\033[2;38;5;22m",   # dim deep color
    "\033[2;38;5;235m",  # near black
]
```

**Finalized color codes:**

**Green (primary: 22, complementary: 89)** — unchanged from current
- Main: `[231, 195, 123, 118, 82, 46, 40, 34, 28, 22, 22(dim), 235]`
- Messages: `[231, 225, 219, 213, 207, 201, 165, 161, 125, 89, 89(dim), 235]`

**Orange (primary: 208, complementary: 21)**
- Main: `[231, 215, 209, 208, 202, 166, 130, 94, 58, 94, 94(dim), 235]`
- Messages: `[231, 195, 33, 27, 21, 21, 21, 18, 18, 18, 18(dim), 235]`

**Purple (primary: 129, complementary: 226)**
- Main: `[231, 225, 177, 171, 165, 135, 129, 93, 57, 57, 57(dim), 235]`
- Messages: `[231, 226, 226, 220, 220, 184, 184, 178, 178, 172, 172(dim), 235]`

**Public API:**
- `get_theme(theme_id: str) -> Theme` — lookup by ID, raises KeyError if not found
- `THEME_REGISTRY` — dict of all available themes (for picker)
|
||||||
|
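Put together, `engine/themes.py` could look roughly like this. This is a sketch, not the final file: the `_g()` helper is an assumption introduced here for brevity (the finalized gradients also carry the bold `1;` and dim `2;` modifiers on the first and last entries, which `_g()` omits), and only the green theme is filled in.

```python
# engine/themes.py (sketch) — pure data module, no imports of config or render.


class Theme:
    """Encapsulates a color scheme: name, main gradient, message gradient."""

    def __init__(self, name, main_gradient, message_gradient):
        self.name = name
        self.main_gradient = main_gradient
        self.message_gradient = message_gradient


def _g(codes):
    # Hypothetical helper: expand bare 256-color numbers into ANSI strings.
    # The finalized gradients also use bold/dim variants; omitted here.
    return [f"\033[38;5;{c}m" for c in codes]


THEME_REGISTRY = {
    "green": Theme(
        "Green",
        _g([231, 195, 123, 118, 82, 46, 40, 34, 28, 22, 22, 235]),
        _g([231, 225, 219, 213, 207, 201, 165, 161, 125, 89, 89, 235]),
    ),
    # "orange" and "purple" follow the same shape, using the tables above.
}


def get_theme(theme_id: str) -> Theme:
    return THEME_REGISTRY[theme_id]  # KeyError for unknown IDs, as specified
```

Because the module holds only data and two names, any consumer (`config`, tests, a future random/animation mode) can import it freely.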

---

### Modified: `engine/config.py`

**New globals:**

```python
ACTIVE_THEME = None  # set by set_active_theme() after the picker; guaranteed non-None during stream()
```

**New function:**

```python
def set_active_theme(theme_id: str = "green"):
    """Set the active theme. Defaults to 'green' if not specified."""
    global ACTIVE_THEME
    from engine import themes

    ACTIVE_THEME = themes.get_theme(theme_id)
```

**Behavior:**

- Called by `app.pick_color_theme()` with the user's selection
- Defaults to `"green"` for non-interactive environments (CI, testing, piped stdin)
- Guarantees `ACTIVE_THEME` is set before any render functions are called

**Removal:**

- Delete the hardcoded `GRAD_COLS` and `MSG_GRAD_COLS` constants

---

### Modified: `engine/render.py`

**Updated gradient access in existing functions:**

Current pattern (will be removed):

```python
GRAD_COLS = [...]      # hardcoded green
MSG_GRAD_COLS = [...]  # hardcoded magenta
```

New pattern — update the `lr_gradient()` function:

```python
def lr_gradient(rows, offset, cols=None):
    if cols is None:
        from engine import config

        cols = (config.ACTIVE_THEME.main_gradient
                if config.ACTIVE_THEME
                else _default_green_gradient())
    # ... rest of function unchanged
```

**Define fallback:**

```python
def _default_green_gradient():
    """Fallback green gradient (current colors)."""
    return [
        "\033[1;38;5;231m", "\033[1;38;5;195m", "\033[38;5;123m",
        "\033[38;5;118m", "\033[38;5;82m", "\033[38;5;46m",
        "\033[38;5;40m", "\033[38;5;34m", "\033[38;5;28m",
        "\033[38;5;22m", "\033[2;38;5;22m", "\033[2;38;5;235m",
    ]
```

**Message gradient handling:**

The existing code (scroll.py line 89) calls `lr_gradient()` with `MSG_GRAD_COLS`. Change this call to:

```python
# Instead of: lr_gradient(rows, offset, MSG_GRAD_COLS)
# Use:
from engine import config

cols = (config.ACTIVE_THEME.message_gradient
        if config.ACTIVE_THEME
        else _default_magenta_gradient())
lr_gradient(rows, offset, cols)
```

or define a helper:

```python
def msg_gradient(rows, offset):
    """Apply the message (ntfy) gradient using the theme's complementary colors."""
    from engine import config

    cols = (config.ACTIVE_THEME.message_gradient
            if config.ACTIVE_THEME
            else _default_magenta_gradient())
    return lr_gradient(rows, offset, cols)
```
---

### Modified: `engine/app.py`

**New function: `pick_color_theme()`**

Mirrors the `pick_font_face()` pattern:

```python
def pick_color_theme():
    """Interactive color theme picker. Defaults to 'green' if not TTY."""
    import sys

    from engine import config, themes

    # Non-interactive fallback: use default
    if not sys.stdin.isatty():
        config.set_active_theme("green")
        return

    # Interactive picker (similar to font picker)
    themes_list = list(themes.THEME_REGISTRY.items())
    selected = 0

    # ... render menu, handle arrow keys, j/k, ↑/↓ ...
    # ... on Enter, call config.set_active_theme(themes_list[selected][0]) ...
```

**Placement in `main()`:**

```python
def main():
    # ... signal handler setup ...
    pick_color_theme()  # NEW — before title/subtitle
    pick_font_face()
    # ... rest of boot sequence; title/subtitle use hardcoded G_HI/G_DIM ...
```

**Important:** The title and subtitle render with the hardcoded `G_HI`/`G_DIM` constants, not theme gradients. This is intentional, for visual consistency with the font picker menu.
---

## Data Flow

```
User starts: mainline.py
    ↓
main() called
    ↓
pick_color_theme()
    → If TTY: display menu, read input, call config.set_active_theme(user_choice)
    → If not TTY: silently call config.set_active_theme("green")
    ↓
pick_font_face() — renders in hardcoded green UI colors
    ↓
Boot messages (title, status) — all use hardcoded G_HI/G_DIM (not theme gradients)
    ↓
stream() — headlines + ntfy messages use config.ACTIVE_THEME gradients
    ↓
On exit: no persistence
```
---

## Implementation Notes

### Initialization Guarantee

`config.ACTIVE_THEME` is guaranteed to be non-None before `stream()` is called because:

1. `pick_color_theme()` always sets it (either interactively or via fallback)
2. It's called before any rendering happens
3. The default fallback ensures non-TTY environments don't crash
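If extra safety is wanted, the invariant can also be made explicit with a cheap assertion at the top of `stream()`. The following is a hypothetical sketch only — `_Config` and the string theme stand in for `engine.config` and a real `Theme` so the snippet is self-contained:

```python
# Hypothetical guard: fail fast if the boot order ever regresses.
class _Config:
    ACTIVE_THEME = None


config = _Config()


def set_active_theme(theme_id: str = "green") -> None:
    config.ACTIVE_THEME = theme_id  # the real code stores a Theme object


def stream() -> str:
    # stream() can assert the guarantee instead of silently falling back.
    assert config.ACTIVE_THEME is not None, "set_active_theme() must run before stream()"
    return config.ACTIVE_THEME


set_active_theme()   # pick_color_theme() always does this, TTY or not
print(stream())      # prints: green
```

The plan does not require this guard; the render-side `_default_green_gradient()` fallback already covers the None case defensively.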

### Module Independence

`themes.py` is a pure data module with no imports of `config` or `render`. This prevents circular dependencies and allows it to be imported by multiple consumers without side effects.

### Color Code Finalization

All three gradient sets (green, orange, purple; main + complementary) are now finalized with specific ANSI codes. No TBD placeholders remain.

### Theme ID Naming

IDs are `"green"`, `"orange"`, `"purple"` — matching the menu labels exactly for clarity.

### Terminal Resize Handling

`pick_color_theme()` mirrors `pick_font_face()`, which does not handle terminal resizing while the picker is displayed. If the terminal is resized while the picker menu is shown, the menu redraw may be incomplete; pressing any key (arrow, j/k, q) continues normally. This is acceptable because:

1. The picker completes quickly (a typical interaction takes under 5 seconds)
2. Once a theme is selected, the menu closes and rendering begins
3. The streaming phase (`stream()`) is resilient to terminal resizing and auto-reflows to new dimensions

No special resize handling is needed for the color picker beyond what exists for the font picker.
### Testing Strategy

1. **Unit tests** (`tests/test_themes.py`):
   - Verify `Theme` class construction
   - Test `THEME_REGISTRY` lookup (valid and invalid IDs)
   - Confirm gradient lists have the correct length (12)

2. **Integration tests** (`tests/test_render.py`):
   - Mock `config.ACTIVE_THEME` to each theme
   - Verify `lr_gradient()` uses the correct colors
   - Verify the fallback works when `ACTIVE_THEME` is None

3. **Existing tests:**
   - Render tests that check gradient output will need to mock `config.ACTIVE_THEME`
   - Use pytest fixtures to set the theme per test case
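The fixture approach in item 3 can be sketched as follows. This assumes pytest; a `SimpleNamespace` and plain strings stand in for `engine.config` and real `Theme` objects so the sketch is self-contained — in the real tests, `monkeypatch.setattr` would target the actual `config` module:

```python
# tests/test_render.py (sketch) — set the active theme per parametrized test case.
from types import SimpleNamespace

import pytest

config = SimpleNamespace(ACTIVE_THEME=None)            # stand-in for engine.config
THEMES = {"green": "G", "orange": "O", "purple": "P"}  # stand-in Theme objects


@pytest.fixture(params=sorted(THEMES))
def active_theme(request, monkeypatch):
    # monkeypatch restores ACTIVE_THEME after each parametrized case,
    # so tests never leak theme state into each other.
    monkeypatch.setattr(config, "ACTIVE_THEME", THEMES[request.param])
    return THEMES[request.param]


def test_gradient_follows_active_theme(active_theme):
    assert config.ACTIVE_THEME == active_theme
```

Each test using the fixture runs three times, once per theme, which covers the "mock `config.ACTIVE_THEME` to each theme" requirement without per-test boilerplate.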

---

## Files Changed

- `engine/themes.py` (new)
- `engine/config.py` (add `ACTIVE_THEME`, `set_active_theme()`)
- `engine/render.py` (replace `GRAD_COLS`/`MSG_GRAD_COLS` references with config lookups)
- `engine/app.py` (add `pick_color_theme()`, call it in `main()`)
- `tests/test_themes.py` (new unit tests)
- `tests/test_render.py` (update mocking strategy)

## Acceptance Criteria

1. ✓ Color picker displays 3 theme options at startup
2. ✓ Selection applies to all headline and message gradients
3. ✓ Boot UI (title, status) uses hardcoded green (not the theme)
4. ✓ Scrolling headlines and ntfy messages use theme gradients
5. ✓ No persistence between runs
6. ✓ Non-TTY environments default to green without error
7. ✓ Architecture supports future random/animation modes
8. ✓ All gradient color codes finalized with no TBD values
@@ -18,7 +18,7 @@ def discover_plugins():
             continue

         try:
-            module = __import__(f"engine.effects.plugins.{module_name}", fromlist=[""])
+            module = __import__(f"effects_plugins.{module_name}", fromlist=[""])
             for attr_name in dir(module):
                 attr = getattr(module, attr_name)
                 if (
@@ -28,8 +28,6 @@ def discover_plugins():
                     and attr_name.endswith("Effect")
                 ):
                     plugin = attr()
-                    if not isinstance(plugin, EffectPlugin):
-                        continue
                     registry.register(plugin)
                     imported[plugin.name] = plugin
         except Exception:
@@ -36,7 +36,7 @@ class FadeEffect(EffectPlugin):
         if fade >= 1.0:
             return s
         if fade <= 0.0:
-            return s  # Preserve original line length - don't return empty
+            return ""
         result = []
         i = 0
         while i < len(s):
@@ -54,5 +54,5 @@ class FadeEffect(EffectPlugin):
             i += 1
         return "".join(result)

-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
+    def configure(self, cfg: EffectConfig) -> None:
+        self.config = cfg
@@ -68,5 +68,5 @@ class FirehoseEffect(EffectPlugin):
         color = random.choice([G_LO, C_DIM, W_GHOST])
         return f"{color}{text}{RST}"

-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
+    def configure(self, cfg: EffectConfig) -> None:
+        self.config = cfg
effects_plugins/glitch.py (new file)
@@ -0,0 +1,37 @@
+import random
+
+from engine import config
+from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
+from engine.terminal import C_DIM, DIM, G_DIM, G_LO, RST
+
+
+class GlitchEffect(EffectPlugin):
+    name = "glitch"
+    config = EffectConfig(enabled=True, intensity=1.0)
+
+    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
+        if not buf:
+            return buf
+        result = list(buf)
+        intensity = self.config.intensity
+
+        glitch_prob = 0.32 + min(0.9, ctx.mic_excess * 0.16)
+        glitch_prob = glitch_prob * intensity
+        n_hits = 4 + int(ctx.mic_excess / 2)
+        n_hits = int(n_hits * intensity)
+
+        if random.random() < glitch_prob:
+            for _ in range(min(n_hits, len(result))):
+                gi = random.randint(0, len(result) - 1)
+                scr_row = gi + 1
+                result[gi] = f"\033[{scr_row};1H{self._glitch_bar(ctx.terminal_width)}"
+        return result
+
+    def _glitch_bar(self, w: int) -> str:
+        c = random.choice(["░", "▒", "─", "\xc2"])
+        n = random.randint(3, w // 2)
+        o = random.randint(0, w - n)
+        return " " * o + f"{G_LO}{DIM}" + c * n + RST
+
+    def configure(self, cfg: EffectConfig) -> None:
+        self.config = cfg
@@ -19,8 +19,7 @@ class NoiseEffect(EffectPlugin):
         for r in range(len(result)):
             cy = ctx.scroll_cam + r
             if random.random() < probability:
-                original_line = result[r]
-                result[r] = self._generate_noise(len(original_line), cy)
+                result[r] = self._generate_noise(ctx.terminal_width, cy)
         return result

     def _generate_noise(self, w: int, cy: int) -> str:
@@ -33,5 +32,5 @@ class NoiseEffect(EffectPlugin):
             for _ in range(w)
         )

-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
+    def configure(self, cfg: EffectConfig) -> None:
+        self.config = cfg
engine/app.py (667 lines changed)
@@ -1,282 +1,429 @@
 """
-Application orchestrator — pipeline mode entry point.
+Application orchestrator — boot sequence, signal handling, main loop wiring.
 """
 
+import atexit
+import os
+import signal
 import sys
+import termios
 import time
+import tty
 
-import engine.effects.plugins as effects_plugins
-from engine import config
-from engine.display import DisplayRegistry
-from engine.effects import PerformanceMonitor, get_registry, set_monitor
-from engine.fetch import fetch_all, fetch_poetry, load_cache
-from engine.pipeline import (
-    Pipeline,
-    PipelineConfig,
-    get_preset,
-    list_presets,
-)
-from engine.pipeline.adapters import (
-    SourceItemsToBufferStage,
-    create_stage_from_display,
-    create_stage_from_effect,
+from engine import config, render, themes
+from engine.fetch import fetch_all, fetch_poetry, load_cache, save_cache
+from engine.mic import MicMonitor
+from engine.ntfy import NtfyPoller
+from engine.scroll import stream
+from engine.terminal import (
+    CLR,
+    CURSOR_OFF,
+    CURSOR_ON,
+    G_DIM,
+    G_HI,
+    G_MID,
+    RST,
+    W_DIM,
+    W_GHOST,
+    boot_ln,
+    slow_print,
+    tw,
 )
+
+TITLE = [
+    " ███╗ ███╗ █████╗ ██╗███╗ ██╗██╗ ██╗███╗ ██╗███████╗",
+    " ████╗ ████║██╔══██╗██║████╗ ██║██║ ██║████╗ ██║██╔════╝",
+    " ██╔████╔██║███████║██║██╔██╗ ██║██║ ██║██╔██╗ ██║█████╗ ",
+    " ██║╚██╔╝██║██╔══██║██║██║╚██╗██║██║ ██║██║╚██╗██║██╔══╝ ",
+    " ██║ ╚═╝ ██║██║ ██║██║██║ ╚████║███████╗██║██║ ╚████║███████╗",
+    " ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝╚═╝ ╚═══╝╚══════╝╚═╝╚═╝ ╚═══╝╚══════╝",
+]
+
+
+def _read_picker_key():
+    ch = sys.stdin.read(1)
+    if ch == "\x03":
+        return "interrupt"
+    if ch in ("\r", "\n"):
+        return "enter"
+    if ch == "\x1b":
+        c1 = sys.stdin.read(1)
+        if c1 != "[":
+            return None
+        c2 = sys.stdin.read(1)
+        if c2 == "A":
+            return "up"
+        if c2 == "B":
+            return "down"
+        return None
+    if ch in ("k", "K"):
+        return "up"
+    if ch in ("j", "J"):
+        return "down"
+    if ch in ("q", "Q"):
+        return "enter"
+    return None
+
+
+def _draw_color_picker(themes_list, selected):
+    """Draw the color theme picker menu.
+
+    Args:
+        themes_list: List of (theme_id, Theme) tuples from THEME_REGISTRY.items()
+        selected: Index of currently selected theme (0-2)
+    """
+    print(CLR, end="")
+    print()
+
+    print(
+        f" {G_HI}▼ COLOR THEME{RST} {W_GHOST}─ ↑/↓ or j/k to move, Enter/q to select{RST}"
+    )
+    print(f" {W_GHOST}{'─' * (tw() - 4)}{RST}\n")
+
+    for i, (theme_id, theme) in enumerate(themes_list):
+        prefix = " ▶ " if i == selected else " "
+        color = G_HI if i == selected else ""
+        reset = "" if i == selected else W_GHOST
+        print(f"{prefix}{color}{theme.name}{reset}")
+
+    print()
+
+
+def _normalize_preview_rows(rows):
+    """Trim shared left padding and trailing spaces for stable on-screen previews."""
+    non_empty = [r for r in rows if r.strip()]
+    if not non_empty:
+        return [""]
+    left_pad = min(len(r) - len(r.lstrip(" ")) for r in non_empty)
+    out = []
+    for row in rows:
+        if left_pad < len(row):
+            out.append(row[left_pad:].rstrip())
+        else:
+            out.append(row.rstrip())
+    return out
+
+
+def _draw_font_picker(faces, selected):
+    w = tw()
+    h = 24
+    try:
+        h = os.get_terminal_size().lines
+    except Exception:
+        pass
+
+    max_preview_w = max(24, w - 8)
+    header_h = 6
+    footer_h = 3
+    preview_h = max(4, min(config.RENDER_H + 2, max(4, h // 2)))
+    visible = max(1, h - header_h - preview_h - footer_h)
+    top = max(0, selected - (visible // 2))
+    bottom = min(len(faces), top + visible)
+    top = max(0, bottom - visible)
+
+    print(CLR, end="")
+    print(CURSOR_OFF, end="")
+    print()
+    print(f" {G_HI}FONT PICKER{RST}")
+    print(f" {W_GHOST}{'─' * (w - 4)}{RST}")
+    print(f" {W_DIM}{config.FONT_DIR[:max_preview_w]}{RST}")
+    print(f" {W_GHOST}↑/↓ move · Enter select · q accept current{RST}")
+    print()
+
+    for pos in range(top, bottom):
+        face = faces[pos]
+        active = pos == selected
+        pointer = "▶" if active else " "
+        color = G_HI if active else W_DIM
+        print(
+            f" {color}{pointer} {face['name']}{RST}{W_GHOST} · {face['file_name']}{RST}"
+        )
+
+    if top > 0:
+        print(f" {W_GHOST}… {top} above{RST}")
+    if bottom < len(faces):
+        print(f" {W_GHOST}… {len(faces) - bottom} below{RST}")
+
+    print()
+    print(f" {W_GHOST}{'─' * (w - 4)}{RST}")
+    print(
+        f" {W_DIM}Preview: {faces[selected]['name']} · {faces[selected]['file_name']}{RST}"
+    )
+    preview_rows = faces[selected]["preview_rows"][:preview_h]
+    for row in preview_rows:
+        shown = row[:max_preview_w]
+        print(f" {shown}")
+
+
+def pick_color_theme():
+    """Interactive color theme picker. Defaults to 'green' if not TTY.
+
+    Displays a menu of available themes and lets user select with arrow keys.
+    Non-interactive environments (piped stdin, CI) silently default to green.
+    """
+    # Non-interactive fallback
+    if not sys.stdin.isatty():
+        config.set_active_theme("green")
+        return
+
+    # Interactive picker
+    themes_list = list(themes.THEME_REGISTRY.items())
+    selected = 0
+
+    fd = sys.stdin.fileno()
+    old_settings = termios.tcgetattr(fd)
+    try:
+        tty.setcbreak(fd)
+        while True:
+            _draw_color_picker(themes_list, selected)
+            key = _read_picker_key()
+            if key == "up":
+                selected = max(0, selected - 1)
+            elif key == "down":
+                selected = min(len(themes_list) - 1, selected + 1)
+            elif key == "enter":
+                break
+            elif key == "interrupt":
+                raise KeyboardInterrupt
+    finally:
+        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
+
+    selected_theme_id = themes_list[selected][0]
+    config.set_active_theme(selected_theme_id)
+
+    theme_name = themes_list[selected][1].name
+    print(f" {G_DIM}> using {theme_name}{RST}")
+    time.sleep(0.8)
+    print(CLR, end="")
+    print(CURSOR_OFF, end="")
+    print()
+
+
+def pick_font_face():
+    """Interactive startup picker for selecting a face from repo OTF files."""
+    if not config.FONT_PICKER:
+        return
+
+    font_files = config.list_repo_font_files()
+    if not font_files:
+        print(CLR, end="")
+        print(CURSOR_OFF, end="")
+        print()
+        print(f" {G_HI}FONT PICKER{RST}")
+        print(f" {W_GHOST}{'─' * (tw() - 4)}{RST}")
+        print(f" {G_DIM}> no .otf/.ttf/.ttc files found in: {config.FONT_DIR}{RST}")
+        print(f" {W_GHOST}> add font files to the fonts folder, then rerun{RST}")
+        time.sleep(1.8)
+        sys.exit(1)
+
+    prepared = []
+    for font_path in font_files:
+        try:
+            faces = render.list_font_faces(font_path, max_faces=64)
+        except Exception:
+            fallback = os.path.splitext(os.path.basename(font_path))[0]
+            faces = [{"index": 0, "name": fallback}]
+        for face in faces:
+            idx = face["index"]
+            name = face["name"]
+            file_name = os.path.basename(font_path)
+            try:
+                fnt = render.load_font_face(font_path, idx)
+                rows = _normalize_preview_rows(render.render_line(name, fnt))
+            except Exception:
+                rows = ["(preview unavailable)"]
+            prepared.append(
+                {
+                    "font_path": font_path,
+                    "font_index": idx,
+                    "name": name,
+                    "file_name": file_name,
+                    "preview_rows": rows,
+                }
+            )
+
+    if not prepared:
+        print(CLR, end="")
+        print(CURSOR_OFF, end="")
+        print()
+        print(f" {G_HI}FONT PICKER{RST}")
+        print(f" {W_GHOST}{'─' * (tw() - 4)}{RST}")
+        print(f" {G_DIM}> no readable font faces found in: {config.FONT_DIR}{RST}")
+        time.sleep(1.8)
+        sys.exit(1)
+
+    def _same_path(a, b):
+        try:
+            return os.path.samefile(a, b)
+        except Exception:
+            return os.path.abspath(a) == os.path.abspath(b)
+
+    selected = next(
+        (
+            i
+            for i, f in enumerate(prepared)
+            if _same_path(f["font_path"], config.FONT_PATH)
+            and f["font_index"] == config.FONT_INDEX
+        ),
+        0,
+    )
+
+    if not sys.stdin.isatty():
+        selected_font = prepared[selected]
+        config.set_font_selection(
+            font_path=selected_font["font_path"],
+            font_index=selected_font["font_index"],
+        )
+        render.clear_font_cache()
+        print(
+            f" {G_DIM}> using {selected_font['name']} ({selected_font['file_name']}){RST}"
+        )
+        time.sleep(0.8)
+        print(CLR, end="")
+        print(CURSOR_OFF, end="")
+        print()
+        return
+
+    fd = sys.stdin.fileno()
+    old_settings = termios.tcgetattr(fd)
+    try:
+        tty.setcbreak(fd)
+        while True:
+            _draw_font_picker(prepared, selected)
+            key = _read_picker_key()
+            if key == "up":
+                selected = max(0, selected - 1)
+            elif key == "down":
+                selected = min(len(prepared) - 1, selected + 1)
+            elif key == "enter":
+                break
+            elif key == "interrupt":
+                raise KeyboardInterrupt
+    finally:
+        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
+
+    selected_font = prepared[selected]
+    config.set_font_selection(
+        font_path=selected_font["font_path"],
+        font_index=selected_font["font_index"],
+    )
+    render.clear_font_cache()
+    print(
+        f" {G_DIM}> using {selected_font['name']} ({selected_font['file_name']}){RST}"
+    )
+    time.sleep(0.8)
+    print(CLR, end="")
+    print(CURSOR_OFF, end="")
+    print()
+
+
 def main():
-    """Main entry point - all modes now use presets."""
-    if config.PIPELINE_DIAGRAM:
-        try:
-            from engine.pipeline import generate_pipeline_diagram
-        except ImportError:
-            print("Error: pipeline diagram not available")
-            return
-        print(generate_pipeline_diagram())
-        return
-
-    preset_name = None
-
-    if config.PRESET:
-        preset_name = config.PRESET
-    elif config.PIPELINE_MODE:
-        preset_name = config.PIPELINE_PRESET
-    else:
-        preset_name = "demo"
-
-    available = list_presets()
-    if preset_name not in available:
-        print(f"Error: Unknown preset '{preset_name}'")
-        print(f"Available presets: {', '.join(available)}")
-        sys.exit(1)
-
-    run_pipeline_mode(preset_name)
+    atexit.register(lambda: print(CURSOR_ON, end="", flush=True))
+
+    def handle_sigint(*_):
+        print(f"\n\n {G_DIM}> SIGNAL LOST{RST}")
+        print(f" {W_GHOST}> connection terminated{RST}\n")
+        sys.exit(0)
+
+    signal.signal(signal.SIGINT, handle_sigint)
+
+    w = tw()
+    print(CLR, end="")
+    print(CURSOR_OFF, end="")
+    pick_color_theme()
+    pick_font_face()
+    w = tw()
+    print()
+    time.sleep(0.4)
+
+    for ln in TITLE:
+        print(f"{G_HI}{ln}{RST}")
+        time.sleep(0.07)
+
+    print()
+    _subtitle = {
+        "poetry": "literary consciousness stream",
+        "code": "source consciousness stream",
+    }.get(config.MODE, "digital consciousness stream")
+    print(f" {W_DIM}v0.1 · {_subtitle}{RST}")
+    print(f" {W_GHOST}{'─' * (w - 4)}{RST}")
+    print()
+    time.sleep(0.4)
 
-def run_pipeline_mode(preset_name: str = "demo"):
-    """Run using the new unified pipeline architecture."""
-    print(" \033[1;38;5;46mPIPELINE MODE\033[0m")
-    print(" \033[38;5;245mUsing unified pipeline architecture\033[0m")
-    effects_plugins.discover_plugins()
-    monitor = PerformanceMonitor()
-    set_monitor(monitor)
-    preset = get_preset(preset_name)
-    if not preset:
-        print(f" \033[38;5;196mUnknown preset: {preset_name}\033[0m")
-        sys.exit(1)
-
-    print(f" \033[38;5;245mPreset: {preset.name} - {preset.description}\033[0m")
-
-    params = preset.to_params()
-    params.viewport_width = 80
-    params.viewport_height = 24
-
-    pipeline = Pipeline(
-        config=PipelineConfig(
-            source=preset.source,
-            display=preset.display,
-            camera=preset.camera,
-            effects=preset.effects,
+    cached = load_cache() if "--refresh" not in sys.argv else None
+    if cached:
+        items = cached
+        boot_ln("Cache", f"LOADED [{len(items)} SIGNALS]", True)
+    elif config.MODE == "poetry":
+        slow_print(" > INITIALIZING LITERARY CORPUS...\n")
+        time.sleep(0.2)
+        print()
+        items, linked, failed = fetch_poetry()
+        print()
+        print(
+            f" {G_DIM}>{RST} {G_MID}{linked} TEXTS LOADED{RST} {W_GHOST}· {failed} DARK{RST}"
         )
+        print(f" {G_DIM}>{RST} {G_MID}{len(items)} STANZAS ACQUIRED{RST}")
+        save_cache(items)
+    elif config.MODE == "code":
+        from engine.fetch_code import fetch_code
+
+        slow_print(" > INITIALIZING SOURCE ARRAY...\n")
+        time.sleep(0.2)
+        print()
+        items, line_count, _ = fetch_code()
+        print()
+        print(f" {G_DIM}>{RST} {G_MID}{line_count} LINES ACQUIRED{RST}")
+    else:
+        slow_print(" > INITIALIZING FEED ARRAY...\n")
+        time.sleep(0.2)
+        print()
+        items, linked, failed = fetch_all()
+        print()
+        print(
+            f" {G_DIM}>{RST} {G_MID}{linked} SOURCES LINKED{RST} {W_GHOST}· {failed} DARK{RST}"
+        )
+        print(f" {G_DIM}>{RST} {G_MID}{len(items)} SIGNALS ACQUIRED{RST}")
+        save_cache(items)
+
+    if not items:
+        print(f"\n {W_DIM}> NO SIGNAL — check network{RST}")
+        sys.exit(1)
+
+    print()
+    mic = MicMonitor(threshold_db=config.MIC_THRESHOLD_DB)
+    mic_ok = mic.start()
+    if mic.available:
+        boot_ln(
+            "Microphone",
+            "ACTIVE"
+            if mic_ok
+            else "OFFLINE · check System Settings → Privacy → Microphone",
+            bool(mic_ok),
+        )
+
+    ntfy = NtfyPoller(
+        config.NTFY_TOPIC,
+        reconnect_delay=config.NTFY_RECONNECT_DELAY,
+        display_secs=config.MESSAGE_DISPLAY_SECS,
     )
+    ntfy_ok = ntfy.start()
+    boot_ln("ntfy", "LISTENING" if ntfy_ok else "OFFLINE", ntfy_ok)
+
-    print(" \033[38;5;245mFetching content...\033[0m")
-
-    # Handle special sources that don't need traditional fetching
-    introspection_source = None
-    if preset.source == "pipeline-inspect":
-        items = []
-        print(" \033[38;5;245mUsing pipeline introspection source\033[0m")
-    elif preset.source == "empty":
-        items = []
-        print(" \033[38;5;245mUsing empty source (no content)\033[0m")
-    else:
-        cached = load_cache()
-        if cached:
-            items = cached
-        elif preset.source == "poetry":
-            items, _, _ = fetch_poetry()
-        else:
-            items, _, _ = fetch_all()
-
-    if not items:
-        print(" \033[38;5;196mNo content available\033[0m")
-        sys.exit(1)
-
-    print(f" \033[38;5;82mLoaded {len(items)} items\033[0m")
-
-    # CLI --display flag takes priority over preset
-    # Check if --display was explicitly provided
-    display_name = preset.display
-    if "--display" in sys.argv:
-        idx = sys.argv.index("--display")
-        if idx + 1 < len(sys.argv):
-            display_name = sys.argv[idx + 1]
-
-    display = DisplayRegistry.create(display_name)
-    if not display and not display_name.startswith("multi"):
-        print(f" \033[38;5;196mFailed to create display: {display_name}\033[0m")
-        sys.exit(1)
-
-    # Handle multi display (format: "multi:terminal,pygame")
-    if not display and display_name.startswith("multi"):
-        parts = display_name[6:].split(
-            ","
-        )  # "multi:terminal,pygame" -> ["terminal", "pygame"]
-        display = DisplayRegistry.create_multi(parts)
-        if not display:
-            print(f" \033[38;5;196mFailed to create multi display: {parts}\033[0m")
-            sys.exit(1)
-
-    if not display:
-        print(f" \033[38;5;196mFailed to create display: {display_name}\033[0m")
-        sys.exit(1)
-
-    display.init(0, 0)
-
-    effect_registry = get_registry()
-
-    # Create source stage based on preset source type
-    if preset.source == "pipeline-inspect":
-        from engine.data_sources.pipeline_introspection import (
-            PipelineIntrospectionSource,
-        )
-        from engine.pipeline.adapters import DataSourceStage
-
-        introspection_source = PipelineIntrospectionSource(
-            pipeline=None,  # Will be set after pipeline.build()
-            viewport_width=80,
-            viewport_height=24,
-        )
-        pipeline.add_stage(
-            "source", DataSourceStage(introspection_source, name="pipeline-inspect")
-        )
-    elif preset.source == "empty":
-        from engine.data_sources.sources import EmptyDataSource
-        from engine.pipeline.adapters import DataSourceStage
-
-        empty_source = EmptyDataSource(width=80, height=24)
-        pipeline.add_stage("source", DataSourceStage(empty_source, name="empty"))
-    else:
-        from engine.data_sources.sources import ListDataSource
-        from engine.pipeline.adapters import DataSourceStage
-
-        list_source = ListDataSource(items, name=preset.source)
-        pipeline.add_stage("source", DataSourceStage(list_source, name=preset.source))
-
-    # Add FontStage for headlines/poetry (default for demo)
-    if preset.source in ["headlines", "poetry"]:
-        from engine.pipeline.adapters import FontStage, ViewportFilterStage
-
-        # Add viewport filter to prevent rendering all items
+    if config.FIREHOSE:
+        boot_ln("Firehose", "ENGAGED", True)
+
+    time.sleep(0.4)
+    slow_print(" > STREAMING...\n")
+    time.sleep(0.2)
+    print(f" {W_GHOST}{'─' * (w - 4)}{RST}")
+    print()
+    time.sleep(0.4)
+
+    stream(items, ntfy, mic)
+
+    print()
+    print(f" {W_GHOST}{'─' * (tw() - 4)}{RST}")
+    print(f" {G_DIM}> {config.HEADLINE_LIMIT} SIGNALS PROCESSED{RST}")
+    print(f" {W_GHOST}> end of stream{RST}")
+    print()
|
|
||||||
pipeline.add_stage(
|
|
||||||
"viewport_filter", ViewportFilterStage(name="viewport-filter")
|
|
||||||
)
|
|
||||||
pipeline.add_stage("font", FontStage(name="font"))
|
|
||||||
else:
|
|
||||||
# Fallback to simple conversion for other sources
|
|
||||||
pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))
|
|
||||||
|
|
||||||
# Add camera stage if specified in preset
|
|
||||||
if preset.camera:
|
|
||||||
from engine.camera import Camera
|
|
||||||
from engine.pipeline.adapters import CameraStage
|
|
||||||
|
|
||||||
camera = None
|
|
||||||
speed = getattr(preset, "camera_speed", 1.0)
|
|
||||||
if preset.camera == "feed":
|
|
||||||
camera = Camera.feed(speed=speed)
|
|
||||||
elif preset.camera == "scroll":
|
|
||||||
camera = Camera.scroll(speed=speed)
|
|
||||||
elif preset.camera == "vertical":
|
|
||||||
camera = Camera.scroll(speed=speed) # Backwards compat
|
|
||||||
elif preset.camera == "horizontal":
|
|
||||||
camera = Camera.horizontal(speed=speed)
|
|
||||||
elif preset.camera == "omni":
|
|
||||||
camera = Camera.omni(speed=speed)
|
|
||||||
elif preset.camera == "floating":
|
|
||||||
camera = Camera.floating(speed=speed)
|
|
||||||
elif preset.camera == "bounce":
|
|
||||||
camera = Camera.bounce(speed=speed)
|
|
||||||
|
|
||||||
if camera:
|
|
||||||
pipeline.add_stage("camera", CameraStage(camera, name=preset.camera))
|
|
||||||
|
|
||||||
for effect_name in preset.effects:
|
|
||||||
effect = effect_registry.get(effect_name)
|
|
||||||
if effect:
|
|
||||||
pipeline.add_stage(
|
|
||||||
f"effect_{effect_name}", create_stage_from_effect(effect, effect_name)
|
|
||||||
)
|
|
||||||
|
|
||||||
pipeline.add_stage("display", create_stage_from_display(display, display_name))
|
|
||||||
|
|
||||||
pipeline.build()
|
|
||||||
|
|
||||||
# For pipeline-inspect, set the pipeline after build to avoid circular dependency
|
|
||||||
if introspection_source is not None:
|
|
||||||
introspection_source.set_pipeline(pipeline)
|
|
||||||
|
|
||||||
if not pipeline.initialize():
|
|
||||||
print(" \033[38;5;196mFailed to initialize pipeline\033[0m")
|
|
||||||
sys.exit(1)
|
|
||||||
|
|
||||||
print(" \033[38;5;82mStarting pipeline...\033[0m")
|
|
||||||
print(" \033[38;5;245mPress Ctrl+C to exit\033[0m\n")
|
|
||||||
|
|
||||||
ctx = pipeline.context
|
|
||||||
ctx.params = params
|
|
||||||
ctx.set("display", display)
|
|
||||||
ctx.set("items", items)
|
|
||||||
ctx.set("pipeline", pipeline)
|
|
||||||
ctx.set("pipeline_order", pipeline.execution_order)
|
|
||||||
ctx.set("camera_y", 0)
|
|
||||||
|
|
||||||
current_width = 80
|
|
||||||
current_height = 24
|
|
||||||
|
|
||||||
if hasattr(display, "get_dimensions"):
|
|
||||||
current_width, current_height = display.get_dimensions()
|
|
||||||
params.viewport_width = current_width
|
|
||||||
params.viewport_height = current_height
|
|
||||||
|
|
||||||
try:
|
|
||||||
frame = 0
|
|
||||||
while True:
|
|
||||||
params.frame_number = frame
|
|
||||||
ctx.params = params
|
|
||||||
|
|
||||||
result = pipeline.execute(items)
|
|
||||||
if result.success:
|
|
||||||
display.show(result.data, border=params.border)
|
|
||||||
|
|
||||||
if hasattr(display, "is_quit_requested") and display.is_quit_requested():
|
|
||||||
if hasattr(display, "clear_quit_request"):
|
|
||||||
display.clear_quit_request()
|
|
||||||
raise KeyboardInterrupt()
|
|
||||||
|
|
||||||
if hasattr(display, "get_dimensions"):
|
|
||||||
new_w, new_h = display.get_dimensions()
|
|
||||||
if new_w != current_width or new_h != current_height:
|
|
||||||
current_width, current_height = new_w, new_h
|
|
||||||
params.viewport_width = current_width
|
|
||||||
params.viewport_height = current_height
|
|
||||||
|
|
||||||
time.sleep(1 / 60)
|
|
||||||
frame += 1
|
|
||||||
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
pipeline.cleanup()
|
|
||||||
display.cleanup()
|
|
||||||
print("\n \033[38;5;245mPipeline stopped\033[0m")
|
|
||||||
return
|
|
||||||
|
|
||||||
pipeline.cleanup()
|
|
||||||
display.cleanup()
|
|
||||||
print("\n \033[38;5;245mPipeline stopped\033[0m")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
||||||
|
|||||||
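The assembly sequence above (add_stage, then build, then execute once per frame) can be sketched with a toy pipeline. All names below are illustrative stand-ins, not the engine's actual classes, assuming only that stages transform data in registration order:

```python
# Toy pipeline mirroring the add_stage -> build -> execute flow above.
# Hypothetical names; the engine's real Pipeline resolves capabilities on build.
from typing import Callable


class ToyPipeline:
    def __init__(self) -> None:
        self._stages: list[tuple[str, Callable]] = []
        self.execution_order: list[str] = []

    def add_stage(self, name: str, fn: Callable) -> None:
        self._stages.append((name, fn))

    def build(self) -> None:
        # The real build() does dependency resolution; this sketch keeps
        # plain insertion order.
        self.execution_order = [name for name, _ in self._stages]

    def execute(self, data):
        # Each stage receives the previous stage's output.
        for _, fn in self._stages:
            data = fn(data)
        return data


p = ToyPipeline()
p.add_stage("source", lambda _: ["headline one", "headline two"])
p.add_stage("effect_upper", lambda lines: [s.upper() for s in lines])
p.build()
print(p.execute(None))  # ['HEADLINE ONE', 'HEADLINE TWO']
```

The per-frame loop in the demo is then just `execute` called repeatedly with updated frame parameters.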
@@ -1,73 +0,0 @@
"""
Benchmark module for performance testing.

Usage:
    python -m engine.benchmark                              # Run all benchmarks
    python -m engine.benchmark --hook                       # Run benchmarks in hook mode (for CI)
    python -m engine.benchmark --displays null --iterations 20
"""

import argparse
import sys


def main():
    parser = argparse.ArgumentParser(description="Run performance benchmarks")
    parser.add_argument(
        "--hook",
        action="store_true",
        help="Run in hook mode (fail on regression)",
    )
    parser.add_argument(
        "--displays",
        default="null",
        help="Comma-separated list of displays to benchmark",
    )
    parser.add_argument(
        "--iterations",
        type=int,
        default=100,
        help="Number of iterations per benchmark",
    )
    args = parser.parse_args()

    # Run pytest with benchmark markers
    pytest_args = [
        "-v",
        "-m",
        "benchmark",
    ]

    if args.hook:
        # Hook mode: stricter settings
        pytest_args.extend(
            [
                "--benchmark-only",
                "--benchmark-compare",
                "--benchmark-compare-fail=min:5%",  # Fail if >5% slower
            ]
        )

    # Add display filter if specified
    if args.displays:
        pytest_args.extend(["-k", args.displays])

    # Add iterations
    if args.iterations:
        # Set environment variable for benchmark tests
        import os

        os.environ["BENCHMARK_ITERATIONS"] = str(args.iterations)

    # Run pytest
    import subprocess

    result = subprocess.run(
        [sys.executable, "-m", "pytest", "tests/test_benchmark.py"] + pytest_args,
        cwd=None,  # Current directory
    )
    sys.exit(result.returncode)


if __name__ == "__main__":
    main()
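The argument assembly in `main()` above can be exercised without invoking pytest. A minimal standalone sketch with a hypothetical `build_args` helper (mirroring the `--hook` and `--displays` handling; the iterations/env-var step is omitted):

```python
# Standalone sketch of the pytest argument assembly above (hypothetical helper).
def build_args(hook: bool, displays: str) -> list[str]:
    args = ["-v", "-m", "benchmark"]
    if hook:
        # Hook mode: stricter settings, fail if >5% slower than baseline
        args += [
            "--benchmark-only",
            "--benchmark-compare",
            "--benchmark-compare-fail=min:5%",
        ]
    if displays:
        # Display filter is passed through pytest's -k expression
        args += ["-k", displays]
    return args


print(build_args(True, "null"))
print(build_args(False, ""))  # ['-v', '-m', 'benchmark']
```

Note that an empty `--displays` string is falsy, so no `-k` filter is appended in that case.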
354 engine/camera.py
@@ -1,354 +0,0 @@
"""
Camera system for viewport scrolling.

Provides abstraction for camera motion in different modes:
- Vertical: traditional upward scroll
- Horizontal: left/right movement
- Omni: combination of both
- Floating: sinusoidal/bobbing motion

The camera defines a visible viewport into a larger Canvas.
"""

import math
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum, auto


class CameraMode(Enum):
    FEED = auto()  # Single item view (static or rapid cycling)
    SCROLL = auto()  # Smooth vertical scrolling (movie credits style)
    HORIZONTAL = auto()
    OMNI = auto()
    FLOATING = auto()
    BOUNCE = auto()


@dataclass
class CameraViewport:
    """Represents the visible viewport."""

    x: int
    y: int
    width: int
    height: int


@dataclass
class Camera:
    """Camera for viewport scrolling.

    The camera defines a visible viewport into a Canvas.
    It can be smaller than the canvas to allow scrolling,
    and supports zoom to scale the view.

    Attributes:
        x: Current horizontal offset (positive = scroll left)
        y: Current vertical offset (positive = scroll up)
        mode: Current camera mode
        speed: Base scroll speed
        zoom: Zoom factor (1.0 = 100%, 2.0 = 200% zoom out)
        canvas_width: Width of the canvas being viewed
        canvas_height: Height of the canvas being viewed
        custom_update: Optional custom update function
    """

    x: int = 0
    y: int = 0
    mode: CameraMode = CameraMode.FEED
    speed: float = 1.0
    zoom: float = 1.0
    canvas_width: int = 200  # Larger than viewport for scrolling
    canvas_height: int = 200
    custom_update: Callable[["Camera", float], None] | None = None
    _x_float: float = field(default=0.0, repr=False)
    _y_float: float = field(default=0.0, repr=False)
    _time: float = field(default=0.0, repr=False)

    @property
    def w(self) -> int:
        """Shorthand for viewport_width."""
        return self.viewport_width

    @property
    def h(self) -> int:
        """Shorthand for viewport_height."""
        return self.viewport_height

    @property
    def viewport_width(self) -> int:
        """Get the visible viewport width.

        This is the canvas width divided by zoom.
        """
        return max(1, int(self.canvas_width / self.zoom))

    @property
    def viewport_height(self) -> int:
        """Get the visible viewport height.

        This is the canvas height divided by zoom.
        """
        return max(1, int(self.canvas_height / self.zoom))

    def get_viewport(self) -> CameraViewport:
        """Get the current viewport bounds.

        Returns:
            CameraViewport with position and size (clamped to canvas bounds)
        """
        vw = self.viewport_width
        vh = self.viewport_height

        clamped_x = max(0, min(self.x, self.canvas_width - vw))
        clamped_y = max(0, min(self.y, self.canvas_height - vh))

        return CameraViewport(
            x=clamped_x,
            y=clamped_y,
            width=vw,
            height=vh,
        )

    def set_zoom(self, zoom: float) -> None:
        """Set the zoom factor.

        Args:
            zoom: Zoom factor (1.0 = 100%, 2.0 = zoomed out 2x, 0.5 = zoomed in 2x)
        """
        self.zoom = max(0.1, min(10.0, zoom))

    def update(self, dt: float) -> None:
        """Update camera position based on mode.

        Args:
            dt: Delta time in seconds
        """
        self._time += dt

        if self.custom_update:
            self.custom_update(self, dt)
            return

        if self.mode == CameraMode.FEED:
            self._update_feed(dt)
        elif self.mode == CameraMode.SCROLL:
            self._update_scroll(dt)
        elif self.mode == CameraMode.HORIZONTAL:
            self._update_horizontal(dt)
        elif self.mode == CameraMode.OMNI:
            self._update_omni(dt)
        elif self.mode == CameraMode.FLOATING:
            self._update_floating(dt)
        elif self.mode == CameraMode.BOUNCE:
            self._update_bounce(dt)

        # Bounce mode handles its own bounds checking
        if self.mode != CameraMode.BOUNCE:
            self._clamp_to_bounds()

    def _clamp_to_bounds(self) -> None:
        """Clamp camera position to stay within canvas bounds.

        Only clamps if the viewport is smaller than the canvas.
        If the viewport equals the canvas (no scrolling needed), any position
        is allowed for backwards compatibility with the original behavior.
        """
        vw = self.viewport_width
        vh = self.viewport_height

        # Only clamp if there's room to scroll
        if vw < self.canvas_width:
            self.x = max(0, min(self.x, self.canvas_width - vw))
        if vh < self.canvas_height:
            self.y = max(0, min(self.y, self.canvas_height - vh))

    def _update_feed(self, dt: float) -> None:
        """Feed mode: rapid scrolling (1 row per frame at speed=1.0)."""
        self.y += int(self.speed * dt * 60)

    def _update_scroll(self, dt: float) -> None:
        """Scroll mode: smooth vertical scrolling with float accumulation."""
        self._y_float += self.speed * dt * 60
        self.y = int(self._y_float)

    def _update_horizontal(self, dt: float) -> None:
        self.x += int(self.speed * dt * 60)

    def _update_omni(self, dt: float) -> None:
        speed = self.speed * dt * 60
        self.y += int(speed)
        self.x += int(speed * 0.5)

    def _update_floating(self, dt: float) -> None:
        base = self.speed * 30
        self.y = int(math.sin(self._time * 2) * base)
        self.x = int(math.cos(self._time * 1.5) * base * 0.5)

    def _update_bounce(self, dt: float) -> None:
        """Bouncing DVD-style camera that bounces off canvas edges."""
        vw = self.viewport_width
        vh = self.viewport_height

        # Initialize direction if not set
        if not hasattr(self, "_bounce_dx"):
            self._bounce_dx = 1
            self._bounce_dy = 1

        # Calculate max positions
        max_x = max(0, self.canvas_width - vw)
        max_y = max(0, self.canvas_height - vh)

        # Move
        move_speed = self.speed * dt * 60
        self.x += int(move_speed * self._bounce_dx)
        self.y += int(move_speed * self._bounce_dy)

        # Bounce off edges - reverse direction when hitting bounds
        # Bounce horizontally
        if self.x <= 0:
            self.x = 0
            self._bounce_dx = 1
        elif self.x >= max_x:
            self.x = max_x
            self._bounce_dx = -1

        # Bounce vertically
        if self.y <= 0:
            self.y = 0
            self._bounce_dy = 1
        elif self.y >= max_y:
            self.y = max_y
            self._bounce_dy = -1

    def reset(self) -> None:
        """Reset camera position."""
        self.x = 0
        self.y = 0
        self._time = 0.0
        self.zoom = 1.0

    def set_canvas_size(self, width: int, height: int) -> None:
        """Set the canvas size and clamp position if needed.

        Args:
            width: New canvas width
            height: New canvas height
        """
        self.canvas_width = width
        self.canvas_height = height
        self._clamp_to_bounds()

    def apply(
        self, buffer: list[str], viewport_width: int, viewport_height: int | None = None
    ) -> list[str]:
        """Apply camera viewport to a text buffer.

        Slices the buffer based on camera position (x, y) and viewport dimensions.
        Handles ANSI escape codes correctly for colored/styled text.

        Args:
            buffer: List of strings representing lines of text
            viewport_width: Width of the visible viewport in characters
            viewport_height: Height of the visible viewport (overrides the camera's viewport_height if provided)

        Returns:
            Sliced buffer containing only the visible lines and columns
        """
        from engine.effects.legacy import vis_offset, vis_trunc

        if not buffer:
            return buffer

        # Get current viewport bounds (clamped to canvas size)
        viewport = self.get_viewport()

        # Use provided viewport_height if given, otherwise use camera's viewport
        vh = viewport_height if viewport_height is not None else viewport.height

        # Vertical slice: extract lines that fit in viewport height
        start_y = viewport.y
        end_y = min(viewport.y + vh, len(buffer))

        if start_y >= len(buffer):
            # Scrolled past end of buffer, return empty viewport
            return [""] * vh

        vertical_slice = buffer[start_y:end_y]

        # Horizontal slice: apply horizontal offset and truncate to width
        horizontal_slice = []
        for line in vertical_slice:
            # Apply horizontal offset (skip first x characters, handling ANSI)
            offset_line = vis_offset(line, viewport.x)
            # Truncate to viewport width (handling ANSI)
            truncated_line = vis_trunc(offset_line, viewport_width)

            # Pad line to full viewport width to prevent ghosting when panning
            import re

            visible_len = len(re.sub(r"\x1b\[[0-9;]*m", "", truncated_line))
            if visible_len < viewport_width:
                truncated_line += " " * (viewport_width - visible_len)

            horizontal_slice.append(truncated_line)

        # Pad with empty lines if needed to fill viewport height
        while len(horizontal_slice) < vh:
            horizontal_slice.append("")

        return horizontal_slice

    @classmethod
    def feed(cls, speed: float = 1.0) -> "Camera":
        """Create a feed camera (rapid single-item scrolling, 1 row/frame at speed=1.0)."""
        return cls(mode=CameraMode.FEED, speed=speed, canvas_height=200)

    @classmethod
    def scroll(cls, speed: float = 0.5) -> "Camera":
        """Create a smooth scrolling camera (movie credits style).

        Uses float accumulation for sub-integer speeds.
        Sets canvas_width=0 so it matches viewport_width for proper text wrapping.
        """
        return cls(
            mode=CameraMode.SCROLL, speed=speed, canvas_width=0, canvas_height=200
        )

    @classmethod
    def vertical(cls, speed: float = 1.0) -> "Camera":
        """Deprecated: Use feed() or scroll() instead."""
        return cls(mode=CameraMode.FEED, speed=speed, canvas_height=200)

    @classmethod
    def horizontal(cls, speed: float = 1.0) -> "Camera":
        """Create a horizontal scrolling camera."""
        return cls(mode=CameraMode.HORIZONTAL, speed=speed, canvas_width=200)

    @classmethod
    def omni(cls, speed: float = 1.0) -> "Camera":
        """Create an omnidirectional scrolling camera."""
        return cls(
            mode=CameraMode.OMNI, speed=speed, canvas_width=200, canvas_height=200
        )

    @classmethod
    def floating(cls, speed: float = 1.0) -> "Camera":
        """Create a floating/bobbing camera."""
        return cls(
            mode=CameraMode.FLOATING, speed=speed, canvas_width=200, canvas_height=200
        )

    @classmethod
    def bounce(cls, speed: float = 1.0) -> "Camera":
        """Create a bouncing DVD-style camera that bounces off canvas edges."""
        return cls(
            mode=CameraMode.BOUNCE, speed=speed, canvas_width=200, canvas_height=200
        )

    @classmethod
    def custom(cls, update_fn: Callable[["Camera", float], None]) -> "Camera":
        """Create a camera with custom update function."""
        return cls(custom_update=update_fn)
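The zoom and clamping arithmetic above can be checked in isolation. A minimal standalone sketch (mirroring the `viewport_width`/`viewport_height` properties and `_clamp_to_bounds`, not importing the engine):

```python
# Standalone sketch of Camera's zoom/clamp arithmetic (illustrative only).

def viewport_size(canvas_w: int, canvas_h: int, zoom: float) -> tuple[int, int]:
    # Viewport is the canvas divided by zoom, never smaller than 1x1.
    return max(1, int(canvas_w / zoom)), max(1, int(canvas_h / zoom))


def clamp(pos: int, canvas: int, viewport: int) -> int:
    # Only clamp when there is room to scroll; if the viewport covers
    # the whole canvas, any position is left alone (backwards compat).
    if viewport < canvas:
        return max(0, min(pos, canvas - viewport))
    return pos


vw, vh = viewport_size(200, 200, 2.0)  # zoom 2.0 = zoomed out 2x
print(vw, vh)            # 100 100
print(clamp(250, 200, vw))  # 100 (clamped to canvas_width - viewport_width)
print(clamp(-5, 200, vw))   # 0
```

This makes the zoom semantics concrete: `zoom=2.0` halves the visible region rather than doubling it, i.e. it zooms out.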
186 engine/canvas.py
@@ -1,186 +0,0 @@
"""
Canvas - 2D surface for rendering.

The Canvas represents a full rendered surface that can be larger than the display.
The Camera then defines the visible viewport into this canvas.
"""

from dataclasses import dataclass


@dataclass
class CanvasRegion:
    """A rectangular region on the canvas."""

    x: int
    y: int
    width: int
    height: int

    def is_valid(self) -> bool:
        """Check if region has positive dimensions."""
        return self.width > 0 and self.height > 0

    def rows(self) -> set[int]:
        """Return set of row indices in this region."""
        return set(range(self.y, self.y + self.height))


class Canvas:
    """2D canvas for rendering content.

    The canvas is a 2D grid of cells that can hold text content.
    It can be larger than the visible viewport (display).

    Attributes:
        width: Total width in characters
        height: Total height in characters
    """

    def __init__(self, width: int = 80, height: int = 24):
        self.width = width
        self.height = height
        self._grid: list[list[str]] = [
            [" " for _ in range(width)] for _ in range(height)
        ]
        self._dirty_regions: list[CanvasRegion] = []  # Track dirty regions

    def clear(self) -> None:
        """Clear the entire canvas."""
        self._grid = [[" " for _ in range(self.width)] for _ in range(self.height)]
        self._dirty_regions = [CanvasRegion(0, 0, self.width, self.height)]

    def mark_dirty(self, x: int, y: int, width: int, height: int) -> None:
        """Mark a region as dirty (caller declares what they changed)."""
        self._dirty_regions.append(CanvasRegion(x, y, width, height))

    def get_dirty_regions(self) -> list[CanvasRegion]:
        """Get all dirty regions and clear the set."""
        regions = self._dirty_regions
        self._dirty_regions = []
        return regions

    def get_dirty_rows(self) -> set[int]:
        """Get union of all dirty rows."""
        rows: set[int] = set()
        for region in self._dirty_regions:
            rows.update(region.rows())
        return rows

    def is_dirty(self) -> bool:
        """Check if any region is dirty."""
        return len(self._dirty_regions) > 0

    def get_region(self, x: int, y: int, width: int, height: int) -> list[list[str]]:
        """Get a rectangular region from the canvas.

        Args:
            x: Left position
            y: Top position
            width: Region width
            height: Region height

        Returns:
            2D list of characters (height rows, width columns)
        """
        region: list[list[str]] = []
        for py in range(y, y + height):
            row: list[str] = []
            for px in range(x, x + width):
                if 0 <= py < self.height and 0 <= px < self.width:
                    row.append(self._grid[py][px])
                else:
                    row.append(" ")
            region.append(row)
        return region

    def get_region_flat(self, x: int, y: int, width: int, height: int) -> list[str]:
        """Get a rectangular region as flat list of lines.

        Args:
            x: Left position
            y: Top position
            width: Region width
            height: Region height

        Returns:
            List of strings (one per row)
        """
        region = self.get_region(x, y, width, height)
        return ["".join(row) for row in region]

    def put_region(self, x: int, y: int, content: list[list[str]]) -> None:
        """Put content into a rectangular region on the canvas.

        Args:
            x: Left position
            y: Top position
            content: 2D list of characters to place
        """
        height = len(content) if content else 0
        width = len(content[0]) if height > 0 else 0

        for py, row in enumerate(content):
            for px, char in enumerate(row):
                canvas_x = x + px
                canvas_y = y + py
                if 0 <= canvas_y < self.height and 0 <= canvas_x < self.width:
                    self._grid[canvas_y][canvas_x] = char

        if width > 0 and height > 0:
            self.mark_dirty(x, y, width, height)

    def put_text(self, x: int, y: int, text: str) -> None:
        """Put a single line of text at position.

        Args:
            x: Left position
            y: Row position
            text: Text to place
        """
        text_len = len(text)
        for i, char in enumerate(text):
            canvas_x = x + i
            if 0 <= canvas_x < self.width and 0 <= y < self.height:
                self._grid[y][canvas_x] = char

        if text_len > 0:
            self.mark_dirty(x, y, text_len, 1)

    def fill(self, x: int, y: int, width: int, height: int, char: str = " ") -> None:
        """Fill a rectangular region with a character.

        Args:
            x: Left position
            y: Top position
            width: Region width
            height: Region height
            char: Character to fill with
        """
        for py in range(y, y + height):
            for px in range(x, x + width):
                if 0 <= py < self.height and 0 <= px < self.width:
                    self._grid[py][px] = char

        if width > 0 and height > 0:
            self.mark_dirty(x, y, width, height)

    def resize(self, width: int, height: int) -> None:
        """Resize the canvas.

        Args:
            width: New width
            height: New height
        """
        if width == self.width and height == self.height:
            return

        new_grid: list[list[str]] = [[" " for _ in range(width)] for _ in range(height)]

        for py in range(min(self.height, height)):
            for px in range(min(self.width, width)):
                new_grid[py][px] = self._grid[py][px]

        self.width = width
        self.height = height
        self._grid = new_grid
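The silent clipping behavior of `put_text` and the space-padded reads of `get_region` above can be sketched standalone. This is a hypothetical mini-grid mirroring those two methods, not the engine class itself:

```python
# Minimal standalone sketch of Canvas-style clipped writes and padded reads
# (mirrors put_text / get_region_flat above; illustrative only).

def make_grid(w: int, h: int) -> list[list[str]]:
    return [[" "] * w for _ in range(h)]


def put_text(grid: list[list[str]], x: int, y: int, text: str) -> None:
    h, w = len(grid), len(grid[0])
    for i, ch in enumerate(text):
        cx = x + i
        if 0 <= cx < w and 0 <= y < h:  # out-of-bounds cells are silently dropped
            grid[y][cx] = ch


def region_flat(grid: list[list[str]], x: int, y: int, w: int, h: int) -> list[str]:
    gh, gw = len(grid), len(grid[0])
    return [
        "".join(
            grid[py][px] if 0 <= py < gh and 0 <= px < gw else " "  # pad off-canvas
            for px in range(x, x + w)
        )
        for py in range(y, y + h)
    ]


g = make_grid(10, 3)
put_text(g, 7, 1, "hello")         # "lo" falls off the right edge and is clipped
print(region_flat(g, 6, 1, 4, 1))  # [' hel']
```

Reads past the canvas edge come back as spaces rather than raising, which is what lets the Camera pan freely without bounds errors.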
@@ -105,8 +105,6 @@ class Config:
|
|||||||
firehose: bool = False
|
firehose: bool = False
|
||||||
|
|
||||||
ntfy_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline/json"
|
ntfy_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline/json"
|
||||||
ntfy_cc_cmd_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json"
|
|
||||||
ntfy_cc_resp_topic: str = "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json"
|
|
||||||
ntfy_reconnect_delay: int = 5
|
ntfy_reconnect_delay: int = 5
|
||||||
message_display_secs: int = 30
|
message_display_secs: int = 30
|
||||||
|
|
||||||
@@ -129,10 +127,6 @@ class Config:
|
|||||||
|
|
||||||
script_fonts: dict[str, str] = field(default_factory=_get_platform_font_paths)
|
script_fonts: dict[str, str] = field(default_factory=_get_platform_font_paths)
|
||||||
|
|
||||||
display: str = "pygame"
|
|
||||||
websocket: bool = False
|
|
||||||
websocket_port: int = 8765
|
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def from_args(cls, argv: list[str] | None = None) -> "Config":
|
def from_args(cls, argv: list[str] | None = None) -> "Config":
|
||||||
"""Create Config from CLI arguments (or custom argv for testing)."""
|
"""Create Config from CLI arguments (or custom argv for testing)."""
|
||||||
@@ -154,8 +148,6 @@ class Config:
|
|||||||
mode="poetry" if "--poetry" in argv or "-p" in argv else "news",
|
mode="poetry" if "--poetry" in argv or "-p" in argv else "news",
|
||||||
firehose="--firehose" in argv,
|
firehose="--firehose" in argv,
|
||||||
ntfy_topic="https://ntfy.sh/klubhaus_terminal_mainline/json",
|
ntfy_topic="https://ntfy.sh/klubhaus_terminal_mainline/json",
|
||||||
ntfy_cc_cmd_topic="https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json",
|
|
||||||
ntfy_cc_resp_topic="https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json",
|
|
||||||
ntfy_reconnect_delay=5,
|
ntfy_reconnect_delay=5,
|
||||||
message_display_secs=30,
|
message_display_secs=30,
|
||||||
font_dir=font_dir,
|
font_dir=font_dir,
|
||||||
@@ -172,9 +164,6 @@ class Config:
|
|||||||
glitch_glyphs="░▒▓█▌▐╌╍╎╏┃┆┇┊┋",
|
glitch_glyphs="░▒▓█▌▐╌╍╎╏┃┆┇┊┋",
|
||||||
kata_glyphs="ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ",
|
kata_glyphs="ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ",
|
||||||
script_fonts=_get_platform_font_paths(),
|
script_fonts=_get_platform_font_paths(),
|
||||||
display=_arg_value("--display", argv) or "terminal",
|
|
||||||
websocket="--websocket" in argv,
|
|
||||||
websocket_port=_arg_int("--websocket-port", 8765, argv),
|
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
@@ -199,13 +188,17 @@ def set_config(config: Config) -> None:
 HEADLINE_LIMIT = 1000
 FEED_TIMEOUT = 10
 MIC_THRESHOLD_DB = 50  # dB above which glitches intensify
-MODE = "poetry" if "--poetry" in sys.argv or "-p" in sys.argv else "news"
+MODE = (
+    "poetry"
+    if "--poetry" in sys.argv or "-p" in sys.argv
+    else "code"
+    if "--code" in sys.argv
+    else "news"
+)
 FIREHOSE = "--firehose" in sys.argv

 # ─── NTFY MESSAGE QUEUE ──────────────────────────────────
 NTFY_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline/json"
-NTFY_CC_CMD_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd/json"
-NTFY_CC_RESP_TOPIC = "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp/json"
 NTFY_RECONNECT_DELAY = 5  # seconds before reconnecting after a dropped stream
 MESSAGE_DISPLAY_SECS = 30  # how long a message holds the screen

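The rewritten MODE expression chains two conditionals with a fixed precedence: `--poetry`/`-p` wins over `--code`, which wins over the `"news"` default. A minimal sketch of that precedence as a pure function (the argv lists below are illustrative):

```python
def resolve_mode(argv: list[str]) -> str:
    """Mirror the chained conditional: poetry > code > news."""
    if "--poetry" in argv or "-p" in argv:
        return "poetry"
    if "--code" in argv:
        return "code"
    return "news"


print(resolve_mode(["prog", "-p", "--code"]))  # poetry flag wins over --code
print(resolve_mode(["prog", "--code"]))
print(resolve_mode(["prog"]))
```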
@@ -236,26 +229,6 @@ GRAD_SPEED = 0.08  # gradient traversal speed (cycles/sec, ~12s full sweep)
 GLITCH = "░▒▓█▌▐╌╍╎╏┃┆┇┊┋"
 KATA = "ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ"

-# ─── WEBSOCKET ─────────────────────────────────────────────
-DISPLAY = _arg_value("--display", sys.argv) or "pygame"
-WEBSOCKET = "--websocket" in sys.argv
-WEBSOCKET_PORT = _arg_int("--websocket-port", 8765)
-
-# ─── DEMO MODE ────────────────────────────────────────────
-DEMO = "--demo" in sys.argv
-DEMO_EFFECT_DURATION = 5.0  # seconds per effect
-PIPELINE_DEMO = "--pipeline-demo" in sys.argv
-
-# ─── PIPELINE MODE (new unified architecture) ─────────────
-PIPELINE_MODE = "--pipeline" in sys.argv
-PIPELINE_PRESET = _arg_value("--pipeline-preset", sys.argv) or "demo"
-
-# ─── PRESET MODE ────────────────────────────────────────────
-PRESET = _arg_value("--preset", sys.argv)
-
-# ─── PIPELINE DIAGRAM ────────────────────────────────────
-PIPELINE_DIAGRAM = "--pipeline-diagram" in sys.argv
-
 def set_font_selection(font_path=None, font_index=None):
     """Set runtime primary font selection."""
@@ -264,3 +237,26 @@ def set_font_selection(font_path=None, font_index=None):
     FONT_PATH = _resolve_font_path(font_path)
     if font_index is not None:
         FONT_INDEX = max(0, int(font_index))
+
+
+# ─── THEME MANAGEMENT ─────────────────────────────────────────
+ACTIVE_THEME = None
+
+
+def set_active_theme(theme_id: str = "green"):
+    """Set the active theme by ID.
+
+    Args:
+        theme_id: Theme identifier ("green", "orange", or "purple")
+            Defaults to "green"
+
+    Raises:
+        KeyError: If theme_id is not in the theme registry
+
+    Side Effects:
+        Sets the ACTIVE_THEME global variable
+    """
+    global ACTIVE_THEME
+    from engine import themes
+
+    ACTIVE_THEME = themes.get_theme(theme_id)
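The new `set_active_theme` documents a dict-style lookup contract: a valid ID swaps the ACTIVE_THEME global, an unknown ID raises KeyError. A minimal self-contained sketch of that contract; the real lookup lives in `engine.themes`, and the THEMES table and color values below are illustrative assumptions:

```python
# Hypothetical stand-in for engine.themes' registry (values are made up).
THEMES = {
    "green": {"fg": "#00ff66"},
    "orange": {"fg": "#ff9900"},
    "purple": {"fg": "#9933ff"},
}

ACTIVE_THEME = None


def get_theme(theme_id: str) -> dict:
    # Plain dict indexing raises KeyError for unknown ids, as documented.
    return THEMES[theme_id]


def set_active_theme(theme_id: str = "green") -> None:
    """Set the active theme by ID; KeyError propagates for unknown ids."""
    global ACTIVE_THEME
    ACTIVE_THEME = get_theme(theme_id)


set_active_theme("orange")
print(ACTIVE_THEME)  # {'fg': '#ff9900'}
```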
68 engine/controller.py Normal file
@@ -0,0 +1,68 @@
"""
Stream controller - manages input sources and orchestrates the render stream.
"""

from engine.config import Config, get_config
from engine.eventbus import EventBus
from engine.events import EventType, StreamEvent
from engine.mic import MicMonitor
from engine.ntfy import NtfyPoller
from engine.scroll import stream


class StreamController:
    """Controls the stream lifecycle - initializes sources and runs the stream."""

    def __init__(self, config: Config | None = None, event_bus: EventBus | None = None):
        self.config = config or get_config()
        self.event_bus = event_bus
        self.mic: MicMonitor | None = None
        self.ntfy: NtfyPoller | None = None

    def initialize_sources(self) -> tuple[bool, bool]:
        """Initialize microphone and ntfy sources.

        Returns:
            (mic_ok, ntfy_ok) - success status for each source
        """
        self.mic = MicMonitor(threshold_db=self.config.mic_threshold_db)
        mic_ok = self.mic.start() if self.mic.available else False

        self.ntfy = NtfyPoller(
            self.config.ntfy_topic,
            reconnect_delay=self.config.ntfy_reconnect_delay,
            display_secs=self.config.message_display_secs,
        )
        ntfy_ok = self.ntfy.start()

        return bool(mic_ok), ntfy_ok

    def run(self, items: list) -> None:
        """Run the stream with initialized sources."""
        if self.mic is None or self.ntfy is None:
            self.initialize_sources()

        if self.event_bus:
            self.event_bus.publish(
                EventType.STREAM_START,
                StreamEvent(
                    event_type=EventType.STREAM_START,
                    headline_count=len(items),
                ),
            )

        stream(items, self.ntfy, self.mic)

        if self.event_bus:
            self.event_bus.publish(
                EventType.STREAM_END,
                StreamEvent(
                    event_type=EventType.STREAM_END,
                    headline_count=len(items),
                ),
            )

    def cleanup(self) -> None:
        """Clean up resources."""
        if self.mic:
            self.mic.stop()
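StreamController's `run` brackets the blocking `stream()` call with optional STREAM_START/STREAM_END publishes, and skips both when no bus was injected. A sketch of that publish-around-run shape with a stub bus; `StubBus` and the string event names stand in for the real `engine.eventbus`/`engine.events` classes:

```python
from dataclasses import dataclass, field


@dataclass
class StubBus:
    """Records published event types; stands in for engine.eventbus.EventBus."""

    published: list = field(default_factory=list)

    def publish(self, event_type, event) -> None:
        self.published.append(event_type)


def run_stream(items, bus=None):
    # Publish start, do the (blocking) work, publish end - only if a bus exists.
    if bus:
        bus.publish("STREAM_START", {"headline_count": len(items)})
    # ... render loop would go here ...
    if bus:
        bus.publish("STREAM_END", {"headline_count": len(items)})


bus = StubBus()
run_stream(["a", "b"], bus)
print(bus.published)  # ['STREAM_START', 'STREAM_END']
run_stream(["a"])  # no bus: no publishes, no error
```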
@@ -1,12 +0,0 @@
"""
Data source implementations for the pipeline architecture.

Import directly from submodules:
    from engine.data_sources.sources import DataSource, SourceItem, HeadlinesDataSource
    from engine.data_sources.pipeline_introspection import PipelineIntrospectionSource
"""

# Re-export for convenience
from engine.data_sources.sources import ImageItem, SourceItem

__all__ = ["ImageItem", "SourceItem"]
@@ -1,312 +0,0 @@
"""
Pipeline introspection source - Renders live visualization of pipeline DAG and metrics.

This DataSource introspects one or more Pipeline instances and renders
an ASCII visualization showing:
- Stage DAG with signal flow connections
- Per-stage execution times
- Sparkline of frame times
- Stage breakdown bars

Example:
    source = PipelineIntrospectionSource(pipelines=[my_pipeline])
    items = source.fetch()  # Returns ASCII visualization
"""

from typing import TYPE_CHECKING

from engine.data_sources.sources import DataSource, SourceItem

if TYPE_CHECKING:
    from engine.pipeline.controller import Pipeline


SPARKLINE_CHARS = " ▁▂▃▄▅▆▇█"
BAR_CHARS = " ▁▂▃▄▅▆▇█"


class PipelineIntrospectionSource(DataSource):
    """Data source that renders live pipeline introspection visualization.

    Renders:
    - DAG of stages with signal flow
    - Per-stage execution times
    - Sparkline of frame history
    - Stage breakdown bars
    """

    def __init__(
        self,
        pipeline: "Pipeline | None" = None,
        viewport_width: int = 100,
        viewport_height: int = 35,
    ):
        self._pipeline = pipeline  # May be None initially, set later via set_pipeline()
        self.viewport_width = viewport_width
        self.viewport_height = viewport_height
        self.frame = 0
        self._ready = False

    def set_pipeline(self, pipeline: "Pipeline") -> None:
        """Set the pipeline to introspect (call after pipeline is built)."""
        self._pipeline = [pipeline]  # Wrap in list for iteration
        self._ready = True

    @property
    def ready(self) -> bool:
        """Check if source is ready to fetch."""
        return self._ready

    @property
    def name(self) -> str:
        return "pipeline-inspect"

    @property
    def is_dynamic(self) -> bool:
        return True

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.NONE}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.SOURCE_ITEMS}

    def add_pipeline(self, pipeline: "Pipeline") -> None:
        """Add a pipeline to visualize."""
        if self._pipeline is None:
            self._pipeline = [pipeline]
        elif isinstance(self._pipeline, list):
            self._pipeline.append(pipeline)
        else:
            self._pipeline = [self._pipeline, pipeline]
        self._ready = True

    def remove_pipeline(self, pipeline: "Pipeline") -> None:
        """Remove a pipeline from visualization."""
        if self._pipeline is None:
            return
        elif isinstance(self._pipeline, list):
            self._pipeline = [p for p in self._pipeline if p is not pipeline]
            if not self._pipeline:
                self._pipeline = None
                self._ready = False
        elif self._pipeline is pipeline:
            self._pipeline = None
            self._ready = False

    def fetch(self) -> list[SourceItem]:
        """Fetch the introspection visualization."""
        if not self._ready:
            # Return a placeholder until ready
            return [
                SourceItem(
                    content="Initializing...",
                    source="pipeline-inspect",
                    timestamp="init",
                )
            ]

        lines = self._render()
        self.frame += 1
        content = "\n".join(lines)
        return [
            SourceItem(
                content=content, source="pipeline-inspect", timestamp=f"f{self.frame}"
            )
        ]

    def get_items(self) -> list[SourceItem]:
        return self.fetch()

    def _render(self) -> list[str]:
        """Render the full visualization."""
        lines: list[str] = []

        # Header
        lines.extend(self._render_header())

        # Render pipeline(s) if ready
        if self._ready and self._pipeline:
            pipelines = (
                self._pipeline if isinstance(self._pipeline, list) else [self._pipeline]
            )
            for pipeline in pipelines:
                lines.extend(self._render_pipeline(pipeline))

        # Footer with sparkline
        lines.extend(self._render_footer())

        return lines

    @property
    def _pipelines(self) -> list:
        """Return pipelines as a list for iteration."""
        if self._pipeline is None:
            return []
        elif isinstance(self._pipeline, list):
            return self._pipeline
        else:
            return [self._pipeline]

    def _render_header(self) -> list[str]:
        """Render the header with frame info and metrics summary."""
        lines: list[str] = []

        if not self._pipeline:
            return ["PIPELINE INTROSPECTION"]

        # Get aggregate metrics
        total_ms = 0.0
        fps = 0.0
        frame_count = 0

        for pipeline in self._pipelines:
            try:
                metrics = pipeline.get_metrics_summary()
                if metrics and "error" not in metrics:
                    # Get avg_ms from pipeline metrics
                    pipeline_avg = metrics.get("pipeline", {}).get("avg_ms", 0)
                    total_ms = max(total_ms, pipeline_avg)
                    # Calculate FPS from avg_ms
                    if pipeline_avg > 0:
                        fps = max(fps, 1000.0 / pipeline_avg)
                    frame_count = max(frame_count, metrics.get("frame_count", 0))
            except Exception:
                pass

        header = f"PIPELINE INTROSPECTION -- frame: {self.frame} -- avg: {total_ms:.1f}ms -- fps: {fps:.1f}"
        lines.append(header)

        return lines

    def _render_pipeline(self, pipeline: "Pipeline") -> list[str]:
        """Render a single pipeline's DAG."""
        lines: list[str] = []

        stages = pipeline.stages
        execution_order = pipeline.execution_order

        if not stages:
            lines.append(" (no stages)")
            return lines

        # Build stage info
        stage_infos: list[dict] = []
        for name in execution_order:
            stage = stages.get(name)
            if not stage:
                continue

            try:
                metrics = pipeline.get_metrics_summary()
                stage_ms = metrics.get("stages", {}).get(name, {}).get("avg_ms", 0.0)
            except Exception:
                stage_ms = 0.0

            stage_infos.append(
                {
                    "name": name,
                    "category": stage.category,
                    "ms": stage_ms,
                }
            )

        # Calculate total time for percentages
        total_time = sum(s["ms"] for s in stage_infos) or 1.0

        # Render DAG - group by category
        lines.append("")
        lines.append(" Signal Flow:")

        # Group stages by category for display
        categories: dict[str, list[dict]] = {}
        for info in stage_infos:
            cat = info["category"]
            if cat not in categories:
                categories[cat] = []
            categories[cat].append(info)

        # Render categories in order
        cat_order = ["source", "render", "effect", "overlay", "display", "system"]

        for cat in cat_order:
            if cat not in categories:
                continue

            cat_stages = categories[cat]
            cat_names = [s["name"] for s in cat_stages]
            lines.append(f" {cat}: {' → '.join(cat_names)}")

        # Render timing breakdown
        lines.append("")
        lines.append(" Stage Timings:")

        for info in stage_infos:
            name = info["name"]
            ms = info["ms"]
            pct = (ms / total_time) * 100
            bar = self._render_bar(pct, 20)
            lines.append(f" {name:12s} {ms:6.2f}ms {bar} {pct:5.1f}%")

        lines.append("")

        return lines

    def _render_footer(self) -> list[str]:
        """Render the footer with sparkline."""
        lines: list[str] = []

        # Get frame history from first pipeline
        pipelines = self._pipelines
        if pipelines:
            try:
                frame_times = pipelines[0].get_frame_times()
            except Exception:
                frame_times = []
        else:
            frame_times = []

        if frame_times:
            sparkline = self._render_sparkline(frame_times[-60:], 50)
            lines.append(f" Frame Time History (last {len(frame_times[-60:])} frames)")
            lines.append(f" {sparkline}")
        else:
            lines.append(" Frame Time History")
            lines.append(" (collecting data...)")

        lines.append("")

        return lines

    def _render_bar(self, percentage: float, width: int) -> str:
        """Render a horizontal bar for percentage."""
        filled = int((percentage / 100.0) * width)
        bar = "█" * filled + "░" * (width - filled)
        return bar

    def _render_sparkline(self, values: list[float], width: int) -> str:
        """Render a sparkline from values."""
        if not values:
            return " " * width

        min_val = min(values)
        max_val = max(values)
        range_val = max_val - min_val or 1.0

        result = []
        for v in values[-width:]:
            normalized = (v - min_val) / range_val
            idx = int(normalized * (len(SPARKLINE_CHARS) - 1))
            idx = max(0, min(idx, len(SPARKLINE_CHARS) - 1))
            result.append(SPARKLINE_CHARS[idx])

        # Pad to width
        while len(result) < width:
            result.insert(0, " ")
        return "".join(result[:width])
@@ -1,490 +0,0 @@
"""
Data sources for the pipeline architecture.

This module contains all DataSource implementations:
- DataSource: Abstract base class
- SourceItem, ImageItem: Data containers
- HeadlinesDataSource, PoetryDataSource, ImageDataSource: Concrete sources
- SourceRegistry: Registry for source discovery
"""

from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass
from typing import Any


@dataclass
class SourceItem:
    """A single item from a data source."""

    content: str
    source: str
    timestamp: str
    metadata: dict[str, Any] | None = None


@dataclass
class ImageItem:
    """An image item from a data source - wraps a PIL Image."""

    image: Any  # PIL Image
    source: str
    timestamp: str
    path: str | None = None  # File path or URL if applicable
    metadata: dict[str, Any] | None = None

class DataSource(ABC):
    """Abstract base class for data sources.

    Static sources: Data fetched once and cached. Safe to call fetch() multiple times.
    Dynamic sources: Data changes over time. fetch() should be idempotent.
    """

    @property
    @abstractmethod
    def name(self) -> str:
        """Display name for this source."""
        ...

    @property
    def is_dynamic(self) -> bool:
        """Whether this source updates dynamically while the app runs. Default False."""
        return False

    @abstractmethod
    def fetch(self) -> list[SourceItem]:
        """Fetch fresh data from the source. Must be idempotent."""
        ...

    def get_items(self) -> list[SourceItem]:
        """Get current items. Default implementation returns cached fetch results."""
        if not hasattr(self, "_items") or self._items is None:
            self._items = self.fetch()
        return self._items

    def refresh(self) -> list[SourceItem]:
        """Force refresh - clear cache and fetch fresh data."""
        self._items = self.fetch()
        return self._items

    def stream(self):
        """Optional: Yield items continuously. Override for streaming sources."""
        raise NotImplementedError

    def __post_init__(self):
        self._items: list[SourceItem] | None = None

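The base class's caching contract (`get_items` serves a cached `fetch`; `refresh` busts the cache) can be exercised with a counting stub. A cut-down copy of the caching methods, with a hypothetical `CountingSource` to make the call counts observable:

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Cut-down copy of the base class, keeping only the caching behavior."""

    @abstractmethod
    def fetch(self): ...

    def get_items(self):
        # Lazily populate the cache on first access.
        if not hasattr(self, "_items") or self._items is None:
            self._items = self.fetch()
        return self._items

    def refresh(self):
        # Always re-fetch, replacing the cache.
        self._items = self.fetch()
        return self._items


class CountingSource(DataSource):
    def __init__(self):
        self.calls = 0

    def fetch(self):
        self.calls += 1
        return [f"item-{self.calls}"]


src = CountingSource()
src.get_items()
src.get_items()  # served from cache: fetch still called once
print(src.calls)  # 1
src.refresh()  # forces a second fetch
print(src.calls)  # 2
```

Note the cache is keyed off `self._items`, which the real class initializes in `__post_init__` - a dataclass hook that ordinary subclasses never invoke, which is why `get_items` also guards with `hasattr`.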
class HeadlinesDataSource(DataSource):
    """Data source for RSS feed headlines."""

    @property
    def name(self) -> str:
        return "headlines"

    def fetch(self) -> list[SourceItem]:
        from engine.fetch import fetch_all

        items, _, _ = fetch_all()
        return [SourceItem(content=t, source=s, timestamp=ts) for t, s, ts in items]


class EmptyDataSource(DataSource):
    """Empty data source that produces blank lines for testing.

    Useful for testing display borders, effects, and other pipeline
    components without needing actual content.
    """

    def __init__(self, width: int = 80, height: int = 24):
        self.width = width
        self.height = height

    @property
    def name(self) -> str:
        return "empty"

    @property
    def is_dynamic(self) -> bool:
        return False

    def fetch(self) -> list[SourceItem]:
        # Return empty lines as content
        content = "\n".join([" " * self.width for _ in range(self.height)])
        return [SourceItem(content=content, source="empty", timestamp="0")]


class ListDataSource(DataSource):
    """Data source that wraps a pre-fetched list of items.

    Used for bootstrap loading when items are already available in memory.
    This is a simple wrapper for already-fetched data.
    """

    def __init__(self, items, name: str = "list"):
        self._raw_items = items  # Store raw items separately
        self._items = None  # Cache for converted SourceItem objects
        self._name = name

    @property
    def name(self) -> str:
        return self._name

    @property
    def is_dynamic(self) -> bool:
        return False

    def fetch(self) -> list[SourceItem]:
        # Convert tuple items to SourceItem if needed
        result = []
        for item in self._raw_items:
            if isinstance(item, SourceItem):
                result.append(item)
            elif isinstance(item, tuple) and len(item) >= 3:
                # Assume (content, source, timestamp) tuple format
                result.append(
                    SourceItem(content=item[0], source=item[1], timestamp=str(item[2]))
                )
            else:
                # Fallback: treat as string content
                result.append(
                    SourceItem(content=str(item), source="list", timestamp="0")
                )
        return result


class PoetryDataSource(DataSource):
    """Data source for Poetry DB."""

    @property
    def name(self) -> str:
        return "poetry"

    def fetch(self) -> list[SourceItem]:
        from engine.fetch import fetch_poetry

        items, _, _ = fetch_poetry()
        return [SourceItem(content=t, source=s, timestamp=ts) for t, s, ts in items]


class ImageDataSource(DataSource):
    """Data source that loads PNG images from file paths or URLs.

    Supports:
    - Local file paths (e.g., /path/to/image.png)
    - URLs (e.g., https://example.com/image.png)

    Yields ImageItem objects containing PIL Image objects that can be
    converted to text buffers by an ImageToTextTransform stage.
    """

    def __init__(
        self,
        path: str | list[str] | None = None,
        urls: str | list[str] | None = None,
    ):
        """
        Args:
            path: Single path or list of paths to PNG files
            urls: Single URL or list of URLs to PNG images
        """
        self._paths = [path] if isinstance(path, str) else (path or [])
        self._urls = [urls] if isinstance(urls, str) else (urls or [])
        self._images: list[ImageItem] = []
        self._load_images()

    def _load_images(self) -> None:
        """Load all images from paths and URLs."""
        from datetime import datetime
        from io import BytesIO
        from urllib.request import urlopen

        timestamp = datetime.now().isoformat()

        for path in self._paths:
            try:
                from PIL import Image

                img = Image.open(path)
                if img.mode != "RGBA":
                    img = img.convert("RGBA")
                self._images.append(
                    ImageItem(
                        image=img,
                        source=f"file:{path}",
                        timestamp=timestamp,
                        path=path,
                    )
                )
            except Exception:
                pass

        for url in self._urls:
            try:
                from PIL import Image

                with urlopen(url) as response:
                    img = Image.open(BytesIO(response.read()))
                if img.mode != "RGBA":
                    img = img.convert("RGBA")
                self._images.append(
                    ImageItem(
                        image=img,
                        source=f"url:{url}",
                        timestamp=timestamp,
                        path=url,
                    )
                )
            except Exception:
                pass

    @property
    def name(self) -> str:
        return "image"

    @property
    def is_dynamic(self) -> bool:
        return False  # Static images, not updating

    def fetch(self) -> list[ImageItem]:
        """Return loaded images as ImageItem list."""
        return self._images

    def get_items(self) -> list[ImageItem]:
        """Return current image items."""
        return self._images

class MetricsDataSource(DataSource):
    """Data source that renders live pipeline metrics as ASCII art.

    Wraps a Pipeline and displays active stages with their average execution
    time and approximate FPS impact. Updates lazily when camera is about to
    focus on a new node (frame % 15 == 12).
    """

    def __init__(
        self,
        pipeline: Any,
        viewport_width: int = 80,
        viewport_height: int = 24,
    ):
        self.pipeline = pipeline
        self.viewport_width = viewport_width
        self.viewport_height = viewport_height
        self.frame = 0
        self._cached_metrics: dict | None = None

    @property
    def name(self) -> str:
        return "metrics"

    @property
    def is_dynamic(self) -> bool:
        return True

    def fetch(self) -> list[SourceItem]:
        if self.frame % 15 == 12:
            self._cached_metrics = None

        if self._cached_metrics is None:
            self._cached_metrics = self._fetch_metrics()

        buffer = self._render_metrics(self._cached_metrics)
        self.frame += 1
        content = "\n".join(buffer)
        return [
            SourceItem(content=content, source="metrics", timestamp=f"f{self.frame}")
        ]

    def _fetch_metrics(self) -> dict:
        if hasattr(self.pipeline, "get_metrics_summary"):
            metrics = self.pipeline.get_metrics_summary()
            if "error" not in metrics:
                return metrics
        return {"stages": {}, "pipeline": {"avg_ms": 0}}

    def _render_metrics(self, metrics: dict) -> list[str]:
        stages = metrics.get("stages", {})

        if not stages:
            return self._render_empty()

        active_stages = {
            name: stats for name, stats in stages.items() if stats.get("avg_ms", 0) > 0
        }

        if not active_stages:
            return self._render_empty()

        total_avg = sum(s["avg_ms"] for s in active_stages.values())
        if total_avg == 0:
            total_avg = 1

        lines: list[str] = []
        lines.append("═" * self.viewport_width)
        lines.append(" PIPELINE METRICS ".center(self.viewport_width, "─"))
        lines.append("─" * self.viewport_width)

        header = f"{'STAGE':<20} {'AVG_MS':>8} {'FPS %':>8}"
        lines.append(header)
        lines.append("─" * self.viewport_width)

        for name, stats in sorted(active_stages.items()):
            avg_ms = stats.get("avg_ms", 0)
            fps_impact = (avg_ms / 16.67) * 100 if avg_ms > 0 else 0

            row = f"{name:<20} {avg_ms:>7.2f} {fps_impact:>7.1f}%"
            lines.append(row[: self.viewport_width])

        lines.append("─" * self.viewport_width)
        total_row = (
            f"{'TOTAL':<20} {total_avg:>7.2f} {(total_avg / 16.67) * 100:>7.1f}%"
        )
        lines.append(total_row[: self.viewport_width])
        lines.append("─" * self.viewport_width)
        lines.append(
            f" Frame:{self.frame:04d} Cache:{'HIT' if self._cached_metrics else 'MISS'}"
        )

        while len(lines) < self.viewport_height:
            lines.append(" " * self.viewport_width)

        return lines[: self.viewport_height]

    def _render_empty(self) -> list[str]:
        lines = [" " * self.viewport_width for _ in range(self.viewport_height)]
        msg = "No metrics available"
        y = self.viewport_height // 2
        x = (self.viewport_width - len(msg)) // 2
        lines[y] = " " * x + msg + " " * (self.viewport_width - x - len(msg))
        return lines

    def get_items(self) -> list[SourceItem]:
        return self.fetch()

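MetricsDataSource's "FPS %" column is each stage's share of a 60 Hz frame budget: `(avg_ms / 16.67) * 100`, since one frame at ~60 FPS is about 16.67 ms. The same arithmetic, factored out for illustration:

```python
FRAME_BUDGET_MS = 16.67  # one frame at ~60 FPS (1000 ms / 60)


def fps_impact(avg_ms: float) -> float:
    """Percentage of a 60 Hz frame budget a stage consumes, 0 for idle stages."""
    return (avg_ms / FRAME_BUDGET_MS) * 100 if avg_ms > 0 else 0.0


# A stage averaging half the budget (8.335 ms) reads as ~50%.
print(round(fps_impact(8.335), 1))  # 50.0
print(fps_impact(0))  # 0.0
```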
class CachedDataSource(DataSource):
    """Data source that wraps another source with caching."""

    def __init__(self, source: DataSource, max_items: int = 100):
        self.source = source
        self.max_items = max_items

    @property
    def name(self) -> str:
        return f"cached:{self.source.name}"

    def fetch(self) -> list[SourceItem]:
        items = self.source.fetch()
        return items[: self.max_items]

    def get_items(self) -> list[SourceItem]:
        if not hasattr(self, "_items") or self._items is None:
            self._items = self.fetch()
        return self._items


class TransformDataSource(DataSource):
    """Data source that transforms items from another source.

    Applies optional filter and map functions to each item.
    This enables chaining: source → transform → transformed output.

    Args:
        source: The source to fetch items from
        filter_fn: Optional function(item: SourceItem) -> bool
        map_fn: Optional function(item: SourceItem) -> SourceItem
    """

    def __init__(
        self,
        source: DataSource,
        filter_fn: Callable[[SourceItem], bool] | None = None,
        map_fn: Callable[[SourceItem], SourceItem] | None = None,
    ):
        self.source = source
        self.filter_fn = filter_fn
        self.map_fn = map_fn

    @property
    def name(self) -> str:
        return f"transform:{self.source.name}"

    def fetch(self) -> list[SourceItem]:
        items = self.source.fetch()

        if self.filter_fn:
            items = [item for item in items if self.filter_fn(item)]

        if self.map_fn:
            items = [self.map_fn(item) for item in items]

        return items


class CompositeDataSource(DataSource):
    """Data source that combines multiple sources."""

    def __init__(self, sources: list[DataSource]):
        self.sources = sources

    @property
    def name(self) -> str:
        return "composite"

    def fetch(self) -> list[SourceItem]:
        items = []
        for source in self.sources:
            items.extend(source.fetch())
        return items


class SourceRegistry:
    """Registry for data sources."""

    def __init__(self):
        self._sources: dict[str, DataSource] = {}
        self._default: str | None = None

    def register(self, source: DataSource, default: bool = False) -> None:
        self._sources[source.name] = source
        if default or self._default is None:
            self._default = source.name

    def get(self, name: str) -> DataSource | None:
        return self._sources.get(name)

    def list_all(self) -> dict[str, DataSource]:
        return dict(self._sources)

    def default(self) -> DataSource | None:
        if self._default:
            return self._sources.get(self._default)
        return None

    def create_headlines(self) -> HeadlinesDataSource:
        return HeadlinesDataSource()

    def create_poetry(self) -> PoetryDataSource:
        return PoetryDataSource()


_global_registry: SourceRegistry | None = None


def get_source_registry() -> SourceRegistry:
    global _global_registry
    if _global_registry is None:
        _global_registry = SourceRegistry()
    return _global_registry


def init_default_sources() -> SourceRegistry:
    """Initialize the default source registry with standard sources."""
    registry = get_source_registry()
    registry.register(HeadlinesDataSource(), default=True)
    registry.register(PoetryDataSource())
    return registry
|
|
||||||
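The source classes above compose by wrapping: a transform source holds another source and applies an optional filter and map to what it fetches. A minimal, self-contained sketch of that chaining pattern, using stand-in types (`SourceItem` is reduced to a simple value object and `ListDataSource` is a hypothetical in-memory source invented for this sketch; the real `DataSource` base lives elsewhere in the codebase):

```python
from dataclasses import dataclass


@dataclass
class SourceItem:
    # Stand-in for the real SourceItem value object
    text: str


class ListDataSource:
    """Hypothetical in-memory source, used only for this sketch."""

    def __init__(self, items: list[SourceItem]):
        self._items = items
        self.name = "list"

    def fetch(self) -> list[SourceItem]:
        return list(self._items)


class TransformDataSource:
    """Same shape as the class above: optional filter and map over a wrapped source."""

    def __init__(self, source, filter_fn=None, map_fn=None):
        self.source = source
        self.filter_fn = filter_fn
        self.map_fn = map_fn

    def fetch(self) -> list[SourceItem]:
        items = self.source.fetch()
        if self.filter_fn:
            items = [i for i in items if self.filter_fn(i)]
        if self.map_fn:
            items = [self.map_fn(i) for i in items]
        return items


base = ListDataSource([SourceItem("keep me"), SourceItem("drop")])
chain = TransformDataSource(
    base,
    filter_fn=lambda i: "keep" in i.text,
    map_fn=lambda i: SourceItem(i.text.upper()),
)
print([i.text for i in chain.fetch()])  # → ['KEEP ME']
```

Because a transform source exposes the same `fetch()` shape as the source it wraps, chains can nest arbitrarily deep, and a `CompositeDataSource` can hold transformed sources just as easily as raw ones.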
102
engine/display.py
Normal file
@@ -0,0 +1,102 @@
"""
Display output abstraction - allows swapping output backends.

Protocol:
- init(width, height): Initialize display with terminal dimensions
- show(buffer): Render buffer (list of strings) to display
- clear(): Clear the display
- cleanup(): Shutdown display
"""

import time
from typing import Protocol


class Display(Protocol):
    """Protocol for display backends."""

    def init(self, width: int, height: int) -> None:
        """Initialize display with dimensions."""
        ...

    def show(self, buffer: list[str]) -> None:
        """Show buffer on display."""
        ...

    def clear(self) -> None:
        """Clear display."""
        ...

    def cleanup(self) -> None:
        """Shutdown display."""
        ...


def get_monitor():
    """Get the performance monitor."""
    try:
        from engine.effects.performance import get_monitor as _get_monitor

        return _get_monitor()
    except Exception:
        return None


class TerminalDisplay:
    """ANSI terminal display backend."""

    def __init__(self):
        self.width = 80
        self.height = 24

    def init(self, width: int, height: int) -> None:
        from engine.terminal import CURSOR_OFF

        self.width = width
        self.height = height
        print(CURSOR_OFF, end="", flush=True)

    def show(self, buffer: list[str]) -> None:
        import sys

        t0 = time.perf_counter()
        sys.stdout.buffer.write("".join(buffer).encode())
        sys.stdout.flush()
        elapsed_ms = (time.perf_counter() - t0) * 1000

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("terminal_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        from engine.terminal import CLR

        print(CLR, end="", flush=True)

    def cleanup(self) -> None:
        from engine.terminal import CURSOR_ON

        print(CURSOR_ON, end="", flush=True)


class NullDisplay:
    """Headless/null display - discards all output."""

    def init(self, width: int, height: int) -> None:
        self.width = width
        self.height = height

    def show(self, buffer: list[str]) -> None:
        monitor = get_monitor()
        if monitor:
            t0 = time.perf_counter()
            chars_in = sum(len(line) for line in buffer)
            elapsed_ms = (time.perf_counter() - t0) * 1000
            monitor.record_effect("null_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        pass

    def cleanup(self) -> None:
        pass
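The `Display` protocol above relies on structural typing: any object with matching method signatures satisfies it, with no inheritance required. A short sketch of how that works (the `runtime_checkable` decorator and the `MemoryDisplay` backend are added here purely for the demo; the module's real protocol omits the decorator):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Display(Protocol):
    def init(self, width: int, height: int) -> None: ...
    def show(self, buffer: list[str]) -> None: ...
    def clear(self) -> None: ...
    def cleanup(self) -> None: ...


class MemoryDisplay:
    """Hypothetical backend: records frames instead of drawing them."""

    def __init__(self):
        self.frames: list[list[str]] = []

    def init(self, width: int, height: int) -> None:
        self.width, self.height = width, height

    def show(self, buffer: list[str]) -> None:
        self.frames.append(buffer)

    def clear(self) -> None:
        self.frames.clear()

    def cleanup(self) -> None:
        pass


d = MemoryDisplay()
print(isinstance(d, Display))  # → True: structural match, no inheritance
d.init(80, 24)
d.show(["hello"])
print(len(d.frames))  # → 1
```

This is why the main loop can be handed a `TerminalDisplay`, a `NullDisplay`, or a test double interchangeably: the protocol only checks shape.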
@@ -1,275 +0,0 @@
"""
Display backend system with registry pattern.

Allows swapping output backends via the Display protocol.
Supports auto-discovery of display backends.
"""

from typing import Protocol

from engine.display.backends.kitty import KittyDisplay
from engine.display.backends.multi import MultiDisplay
from engine.display.backends.null import NullDisplay
from engine.display.backends.pygame import PygameDisplay
from engine.display.backends.sixel import SixelDisplay
from engine.display.backends.terminal import TerminalDisplay
from engine.display.backends.websocket import WebSocketDisplay


class Display(Protocol):
    """Protocol for display backends.

    All display backends must implement:
    - width, height: Terminal dimensions
    - init(width, height, reuse=False): Initialize the display
    - show(buffer): Render buffer to display
    - clear(): Clear the display
    - cleanup(): Shutdown the display

    Optional methods for keyboard input:
    - is_quit_requested(): Returns True if user pressed Ctrl+C/Q or Escape
    - clear_quit_request(): Clears the quit request flag

    The reuse flag allows attaching to an existing display instance
    rather than creating a new window/connection.

    Keyboard input support by backend:
    - terminal: No native input (relies on signal handler for Ctrl+C)
    - pygame: Supports Ctrl+C, Ctrl+Q, Escape for graceful shutdown
    - websocket: No native input (relies on signal handler for Ctrl+C)
    - sixel: No native input (relies on signal handler for Ctrl+C)
    - null: No native input
    - kitty: Supports Ctrl+C, Ctrl+Q, Escape (via pygame-like handling)
    """

    width: int
    height: int

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, attach to existing display instead of creating new
        """
        ...

    def show(self, buffer: list[str], border: bool = False) -> None:
        """Show buffer on display.

        Args:
            buffer: Buffer to display
            border: If True, render border around buffer (default False)
        """
        ...

    def clear(self) -> None:
        """Clear display."""
        ...

    def cleanup(self) -> None:
        """Shutdown display."""
        ...

    def get_dimensions(self) -> tuple[int, int]:
        """Get current terminal dimensions.

        Returns:
            (width, height) in character cells

        This method is called after show() to check if the display
        was resized. The main loop should compare this to the current
        viewport dimensions and update accordingly.
        """
        ...

    def is_quit_requested(self) -> bool:
        """Check if user requested quit (Ctrl+C, Ctrl+Q, or Escape).

        Returns:
            True if quit was requested, False otherwise

        Optional method - only implemented by backends that support keyboard input.
        """
        ...

    def clear_quit_request(self) -> None:
        """Clear the quit request flag.

        Optional method - only implemented by backends that support keyboard input.
        """
        ...


class DisplayRegistry:
    """Registry for display backends with auto-discovery."""

    _backends: dict[str, type[Display]] = {}
    _initialized = False

    @classmethod
    def register(cls, name: str, backend_class: type[Display]) -> None:
        """Register a display backend."""
        cls._backends[name.lower()] = backend_class

    @classmethod
    def get(cls, name: str) -> type[Display] | None:
        """Get a display backend class by name."""
        return cls._backends.get(name.lower())

    @classmethod
    def list_backends(cls) -> list[str]:
        """List all available display backend names."""
        return list(cls._backends.keys())

    @classmethod
    def create(cls, name: str, **kwargs) -> Display | None:
        """Create a display instance by name."""
        cls.initialize()
        backend_class = cls.get(name)
        if backend_class:
            return backend_class(**kwargs)
        return None

    @classmethod
    def initialize(cls) -> None:
        """Initialize and register all built-in backends."""
        if cls._initialized:
            return

        cls.register("terminal", TerminalDisplay)
        cls.register("null", NullDisplay)
        cls.register("websocket", WebSocketDisplay)
        cls.register("sixel", SixelDisplay)
        cls.register("kitty", KittyDisplay)
        cls.register("pygame", PygameDisplay)

        cls._initialized = True

    @classmethod
    def create_multi(cls, names: list[str]) -> "Display | None":
        """Create a MultiDisplay from a list of backend names.

        Args:
            names: List of display backend names (e.g., ["terminal", "pygame"])

        Returns:
            MultiDisplay instance or None if any backend fails
        """
        from engine.display.backends.multi import MultiDisplay

        displays = []
        for name in names:
            backend = cls.create(name)
            if backend:
                displays.append(backend)
            else:
                return None

        if not displays:
            return None

        return MultiDisplay(displays)


def get_monitor():
    """Get the performance monitor."""
    try:
        from engine.effects.performance import get_monitor as _get_monitor

        return _get_monitor()
    except Exception:
        return None


def _strip_ansi(s: str) -> str:
    """Strip ANSI escape sequences from string for length calculation."""
    import re

    return re.sub(r"\x1b\[[0-9;]*[a-zA-Z]", "", s)


def render_border(
    buf: list[str], width: int, height: int, fps: float = 0.0, frame_time: float = 0.0
) -> list[str]:
    """Render a border around the buffer.

    Args:
        buf: Input buffer (list of strings)
        width: Display width in characters
        height: Display height in rows
        fps: Current FPS to display in top border (optional)
        frame_time: Frame time in ms to display in bottom border (optional)

    Returns:
        Buffer with border applied
    """
    if not buf or width < 3 or height < 3:
        return buf

    inner_w = width - 2
    inner_h = height - 2

    # Crop buffer to fit inside border
    cropped = []
    for i in range(min(inner_h, len(buf))):
        line = buf[i]
        # Calculate visible width (excluding ANSI codes)
        visible_len = len(_strip_ansi(line))
        if visible_len > inner_w:
            # Truncate carefully - this is approximate for ANSI text
            cropped.append(line[:inner_w])
        else:
            cropped.append(line + " " * (inner_w - visible_len))

    # Pad with empty lines if needed
    while len(cropped) < inner_h:
        cropped.append(" " * inner_w)

    # Build borders
    if fps > 0:
        fps_str = f" FPS:{fps:.0f}"
        if len(fps_str) < inner_w:
            right_len = inner_w - len(fps_str)
            top_border = "┌" + "─" * right_len + fps_str + "┐"
        else:
            top_border = "┌" + "─" * inner_w + "┐"
    else:
        top_border = "┌" + "─" * inner_w + "┐"

    if frame_time > 0:
        ft_str = f" {frame_time:.1f}ms"
        if len(ft_str) < inner_w:
            right_len = inner_w - len(ft_str)
            bottom_border = "└" + "─" * right_len + ft_str + "┘"
        else:
            bottom_border = "└" + "─" * inner_w + "┘"
    else:
        bottom_border = "└" + "─" * inner_w + "┘"

    # Build result with left/right borders
    result = [top_border]
    for line in cropped:
        # Ensure exactly inner_w characters before adding right border
        if len(line) < inner_w:
            line = line + " " * (inner_w - len(line))
        elif len(line) > inner_w:
            line = line[:inner_w]
        result.append("│" + line + "│")
    result.append(bottom_border)

    return result


__all__ = [
    "Display",
    "DisplayRegistry",
    "get_monitor",
    "render_border",
    "TerminalDisplay",
    "NullDisplay",
    "WebSocketDisplay",
    "SixelDisplay",
    "MultiDisplay",
]
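The border geometry in `render_border` (a width-2 × height-2 interior, every output row exactly `width` cells) can be checked with a trimmed-down version that drops the ANSI handling and metrics, keeping only the crop/pad and frame logic:

```python
def simple_border(buf: list[str], width: int, height: int) -> list[str]:
    """Minimal sketch of render_border: no ANSI stripping, no FPS/frame-time labels."""
    inner_w, inner_h = width - 2, height - 2
    # Crop or pad each line to the interior width
    lines = [line[:inner_w].ljust(inner_w) for line in buf[:inner_h]]
    # Pad with blank rows to fill the interior height
    while len(lines) < inner_h:
        lines.append(" " * inner_w)
    top = "┌" + "─" * inner_w + "┐"
    bottom = "└" + "─" * inner_w + "┘"
    return [top] + ["│" + line + "│" for line in lines] + [bottom]


out = simple_border(["hi"], width=6, height=4)
print(out)  # → ['┌────┐', '│hi  │', '│    │', '└────┘']
```

The real function differs in one important way: it measures line width with `_strip_ansi`, so a colored line is padded by its visible length rather than its raw length, which is why the full version cannot use a one-line `ljust`.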
@@ -1,180 +0,0 @@
"""
Kitty graphics display backend - renders using kitty's native graphics protocol.
"""

import time

from engine.display.renderer import get_default_font_path, parse_ansi


def _encode_kitty_graphic(image_data: bytes, width: int, height: int) -> bytes:
    """Encode image data using kitty's graphics protocol."""
    import base64

    encoded = base64.b64encode(image_data).decode("ascii")

    chunks = []
    for i in range(0, len(encoded), 4096):
        chunk = encoded[i : i + 4096]
        # The m key is a continuation flag: 1 while more chunks follow, 0 on the last
        more = 1 if i + 4096 < len(encoded) else 0
        if i == 0:
            chunks.append(
                f"\x1b_Gf=100,t=d,s={width},v={height},c=1,r=1,m={more};{chunk}\x1b\\"
            )
        else:
            chunks.append(f"\x1b_Gm={more};{chunk}\x1b\\")

    return "".join(chunks).encode("utf-8")


class KittyDisplay:
    """Kitty graphics display backend using kitty's native protocol."""

    width: int = 80
    height: int = 24

    def __init__(self, cell_width: int = 9, cell_height: int = 16):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self._initialized = False
        self._font_path = None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for KittyDisplay (protocol doesn't support reuse)
        """
        self.width = width
        self.height = height
        self._initialized = True

    def _get_font_path(self) -> str | None:
        """Get font path from env or detect common locations."""
        import os

        if self._font_path:
            return self._font_path

        env_font = os.environ.get("MAINLINE_KITTY_FONT")
        if env_font and os.path.exists(env_font):
            self._font_path = env_font
            return env_font

        font_path = get_default_font_path()
        if font_path:
            self._font_path = font_path

        return self._font_path

    def show(self, buffer: list[str], border: bool = False) -> None:
        import sys

        t0 = time.perf_counter()

        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested
        if border:
            from engine.display import render_border

            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        img_width = self.width * self.cell_width
        img_height = self.height * self.cell_height

        try:
            from PIL import Image, ImageDraw, ImageFont
        except ImportError:
            return

        img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
        draw = ImageDraw.Draw(img)

        font_path = self._get_font_path()
        font = None
        if font_path:
            try:
                font = ImageFont.truetype(font_path, self.cell_height - 2)
            except Exception:
                font = None

        if font is None:
            try:
                font = ImageFont.load_default()
            except Exception:
                font = None

        for row_idx, line in enumerate(buffer[: self.height]):
            tokens = parse_ansi(line)
            x_pos = 0
            y_pos = row_idx * self.cell_height

            for text, fg, bg, bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                    draw.rectangle(bbox, fill=(*bg, 255))

                if bold and font:
                    draw.text((x_pos - 1, y_pos - 1), text, fill=(*fg, 255), font=font)

                draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

                if font:
                    x_pos += draw.textlength(text, font=font)

        from io import BytesIO

        output = BytesIO()
        img.save(output, format="PNG")
        png_data = output.getvalue()

        graphic = _encode_kitty_graphic(png_data, img_width, img_height)

        sys.stdout.buffer.write(graphic)
        sys.stdout.flush()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("kitty_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        import sys

        sys.stdout.buffer.write(b"\x1b_Ga=d\x1b\\")
        sys.stdout.flush()

    def cleanup(self) -> None:
        self.clear()

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions.

        Returns:
            (width, height) in character cells
        """
        return (self.width, self.height)
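The chunking loop in `_encode_kitty_graphic` splits the base64 payload into ≤4096-byte pieces, where every chunk except the last carries the continuation flag `m=1` and the final one carries `m=0`. A standalone sketch of just that framing, with the escape-sequence wrapper dropped and the chunk size shrunk to 8 for readability:

```python
import base64


def chunk_frames(data: bytes, chunk_size: int = 4096) -> list[str]:
    """Split a base64 payload into chunks with kitty-style m= continuation flags."""
    encoded = base64.b64encode(data).decode("ascii")
    frames = []
    for i in range(0, len(encoded), chunk_size):
        chunk = encoded[i : i + chunk_size]
        # m=1 while more chunks follow, m=0 on the final chunk
        more = 1 if i + chunk_size < len(encoded) else 0
        frames.append(f"m={more};{chunk}")
    return frames


frames = chunk_frames(b"0123456789ab", chunk_size=8)
print(frames)  # → ['m=1;MDEyMzQ1', 'm=0;Njc4OWFi']
```

The flag matters because the terminal buffers incoming chunks until it sees `m=0`; a wrong value on the last chunk leaves the image incomplete and never displayed.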
@@ -1,50 +0,0 @@
"""
Multi display backend - forwards to multiple displays.
"""


class MultiDisplay:
    """Display that forwards to multiple displays.

    Supports reuse - passes reuse flag to all child displays.
    """

    width: int = 80
    height: int = 24

    def __init__(self, displays: list):
        self.displays = displays
        self.width = 80
        self.height = 24

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize all child displays with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, use reuse mode for child displays
        """
        self.width = width
        self.height = height
        for d in self.displays:
            d.init(width, height, reuse=reuse)

    def show(self, buffer: list[str], border: bool = False) -> None:
        for d in self.displays:
            d.show(buffer, border=border)

    def clear(self) -> None:
        for d in self.displays:
            d.clear()

    def get_dimensions(self) -> tuple[int, int]:
        """Get dimensions from the first child display that supports it."""
        for d in self.displays:
            if hasattr(d, "get_dimensions"):
                return d.get_dimensions()
        return (self.width, self.height)

    def cleanup(self) -> None:
        for d in self.displays:
            d.cleanup()
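The fan-out behavior above is easy to verify: every protocol call is forwarded to each child in order. A small sketch with a hypothetical `Recorder` child (invented for this demo) and the same `MultiDisplay` shape:

```python
class Recorder:
    """Hypothetical child display that records calls, for this sketch only."""

    def __init__(self):
        self.calls = []

    def init(self, width, height, reuse=False):
        self.calls.append(("init", width, height))

    def show(self, buffer, border=False):
        self.calls.append(("show", tuple(buffer)))

    def clear(self):
        self.calls.append(("clear",))

    def cleanup(self):
        self.calls.append(("cleanup",))


class MultiDisplay:
    """Same fan-out shape as the class above (init/show only, for brevity)."""

    def __init__(self, displays):
        self.displays = displays

    def init(self, width, height, reuse=False):
        for d in self.displays:
            d.init(width, height, reuse=reuse)

    def show(self, buffer, border=False):
        for d in self.displays:
            d.show(buffer, border=border)


a, b = Recorder(), Recorder()
multi = MultiDisplay([a, b])
multi.init(80, 24)
multi.show(["frame"])
print(a.calls == b.calls)  # → True: every call fans out to each child
```

This is what makes `DisplayRegistry.create_multi` work: the composite satisfies the same protocol as its children, so the main loop never knows it is driving more than one backend.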
@@ -1,81 +0,0 @@
"""
Null/headless display backend.
"""

import time


class NullDisplay:
    """Headless/null display - discards all output.

    This display does nothing - useful for headless benchmarking
    or when no display output is needed. Captures last buffer
    for testing purposes.
    """

    width: int = 80
    height: int = 24
    _last_buffer: list[str] | None = None

    def __init__(self):
        self._last_buffer = None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for NullDisplay (no resources to reuse)
        """
        self.width = width
        self.height = height
        self._last_buffer = None

    def show(self, buffer: list[str], border: bool = False) -> None:
        from engine.display import get_monitor, render_border

        # Get FPS for border (if available)
        fps = 0.0
        frame_time = 0.0
        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested (same as terminal display)
        if border:
            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        self._last_buffer = buffer
        if monitor:
            t0 = time.perf_counter()
            chars_in = sum(len(line) for line in buffer)
            elapsed_ms = (time.perf_counter() - t0) * 1000
            monitor.record_effect("null_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        pass

    def cleanup(self) -> None:
        pass

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions.

        Returns:
            (width, height) in character cells
        """
        return (self.width, self.height)

    def is_quit_requested(self) -> bool:
        """Check if quit was requested (optional protocol method)."""
        return False

    def clear_quit_request(self) -> None:
        """Clear quit request (optional protocol method)."""
        pass
@@ -1,289 +0,0 @@
"""
Pygame display backend - renders to a native application window.
"""

import time

from engine.display.renderer import parse_ansi


class PygameDisplay:
    """Pygame display backend - renders to native window.

    Supports reuse mode - when reuse=True, skips SDL initialization
    and reuses the existing pygame window from a previous instance.
    """

    width: int = 80
    height: int = 24
    window_width: int = 800
    window_height: int = 600
    _pygame_initialized: bool = False  # class-level flag consulted by reuse mode

    def __init__(
        self,
        cell_width: int = 10,
        cell_height: int = 18,
        window_width: int = 800,
        window_height: int = 600,
        target_fps: float = 30.0,
    ):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self.window_width = window_width
        self.window_height = window_height
        self.target_fps = target_fps
        self._initialized = False
        self._pygame = None
        self._screen = None
        self._font = None
        self._resized = False
        self._quit_requested = False
        self._last_frame_time = 0.0
        self._frame_period = 1.0 / target_fps if target_fps > 0 else 0
        self._glyph_cache = {}

    def _get_font_path(self) -> str | None:
        """Get font path for rendering."""
        import os
        import sys
        from pathlib import Path

        env_font = os.environ.get("MAINLINE_PYGAME_FONT")
        if env_font and os.path.exists(env_font):
            return env_font

        def search_dir(base_path: str) -> str | None:
            if not os.path.exists(base_path):
                return None
            if os.path.isfile(base_path):
                return base_path
            for font_file in Path(base_path).rglob("*"):
                if font_file.suffix.lower() in (".ttf", ".otf", ".ttc"):
                    name = font_file.stem.lower()
                    if "geist" in name and ("nerd" in name or "mono" in name):
                        return str(font_file)
            return None

        search_dirs = []
        if sys.platform == "darwin":
            search_dirs.append(os.path.expanduser("~/Library/Fonts/"))
        elif sys.platform == "win32":
            search_dirs.append(
                os.path.expanduser("~\\AppData\\Local\\Microsoft\\Windows\\Fonts\\")
            )
        else:
            search_dirs.extend(
                [
                    os.path.expanduser("~/.local/share/fonts/"),
                    os.path.expanduser("~/.fonts/"),
                    "/usr/share/fonts/",
                ]
            )

        for search_dir_path in search_dirs:
            found = search_dir(search_dir_path)
            if found:
                return found

        return None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, attach to existing pygame window instead of creating new
        """
        self.width = width
        self.height = height

        import os

        os.environ["SDL_VIDEODRIVER"] = "x11"

        try:
            import pygame
        except ImportError:
            return

        if reuse and PygameDisplay._pygame_initialized:
            self._pygame = pygame
            self._initialized = True
            return

        pygame.init()
        pygame.display.set_caption("Mainline")

        self._screen = pygame.display.set_mode(
            (self.window_width, self.window_height),
            pygame.RESIZABLE,
        )
        self._pygame = pygame
        PygameDisplay._pygame_initialized = True

        # Calculate character dimensions from actual window size
        self.width = max(1, self.window_width // self.cell_width)
        self.height = max(1, self.window_height // self.cell_height)

        font_path = self._get_font_path()
        if font_path:
            try:
                self._font = pygame.font.Font(font_path, self.cell_height - 2)
            except Exception:
                self._font = pygame.font.SysFont("monospace", self.cell_height - 2)
        else:
            self._font = pygame.font.SysFont("monospace", self.cell_height - 2)

        self._initialized = True

    def show(self, buffer: list[str], border: bool = False) -> None:
        if not self._initialized or not self._pygame:
            return

        t0 = time.perf_counter()

        for event in self._pygame.event.get():
            if event.type == self._pygame.QUIT:
                self._quit_requested = True
            elif event.type == self._pygame.KEYDOWN:
                if event.key in (self._pygame.K_ESCAPE, self._pygame.K_c):
                    # Plain 'c' is not a quit request - only Ctrl+C or Escape
                    if event.key == self._pygame.K_c and not (
                        event.mod & self._pygame.KMOD_LCTRL
                        or event.mod & self._pygame.KMOD_RCTRL
                    ):
                        continue
                    self._quit_requested = True
            elif event.type == self._pygame.VIDEORESIZE:
                self.window_width = event.w
                self.window_height = event.h
                self.width = max(1, self.window_width // self.cell_width)
                self.height = max(1, self.window_height // self.cell_height)
                self._resized = True

        # FPS limiting - skip frame if we're going too fast
        if self._frame_period > 0:
            now = time.perf_counter()
            elapsed = now - self._last_frame_time
            if elapsed < self._frame_period:
                return  # Skip this frame
            self._last_frame_time = now

        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested
        if border:
            from engine.display import render_border

            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        self._screen.fill((0, 0, 0))

        blit_list = []

        for row_idx, line in enumerate(buffer[: self.height]):
            tokens = parse_ansi(line)
            x_pos = 0

            for text, fg, bg, _bold in tokens:
                if not text:
                    continue

                # Use None as key for no background
                bg_key = bg if bg != (0, 0, 0) else None
                cache_key = (text, fg, bg_key)

                if cache_key not in self._glyph_cache:
                    # Render and cache
                    if bg_key is not None:
                        self._glyph_cache[cache_key] = self._font.render(
                            text, True, fg, bg_key
                        )
                    else:
                        self._glyph_cache[cache_key] = self._font.render(text, True, fg)

                surface = self._glyph_cache[cache_key]
                blit_list.append((surface, (x_pos, row_idx * self.cell_height)))
                x_pos += self._font.size(text)[0]

        self._screen.blits(blit_list)
        self._pygame.display.flip()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("pygame_display", elapsed_ms, chars_in, chars_in)
|
|
||||||
|
|
||||||
def clear(self) -> None:
|
|
||||||
if self._screen and self._pygame:
|
|
||||||
self._screen.fill((0, 0, 0))
|
|
||||||
self._pygame.display.flip()
|
|
||||||
|
|
||||||
def get_dimensions(self) -> tuple[int, int]:
|
|
||||||
"""Get current terminal dimensions based on window size.
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
(width, height) in character cells
|
|
||||||
"""
|
|
||||||
# Query actual window size and recalculate character cells
|
|
||||||
if self._screen and self._pygame:
|
|
||||||
try:
|
|
||||||
w, h = self._screen.get_size()
|
|
||||||
if w != self.window_width or h != self.window_height:
|
|
||||||
self.window_width = w
|
|
||||||
self.window_height = h
|
|
||||||
self.width = max(1, w // self.cell_width)
|
|
||||||
self.height = max(1, h // self.cell_height)
|
|
||||||
except Exception:
|
|
||||||
pass
|
|
||||||
return self.width, self.height
|
|
||||||
|
|
||||||
def cleanup(self, quit_pygame: bool = True) -> None:
|
|
||||||
"""Cleanup display resources.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
quit_pygame: If True, quit pygame entirely. Set to False when
|
|
||||||
reusing the display to avoid closing shared window.
|
|
||||||
"""
|
|
||||||
if quit_pygame and self._pygame:
|
|
||||||
self._pygame.quit()
|
|
||||||
PygameDisplay._pygame_initialized = False
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def reset_state(cls) -> None:
|
|
||||||
"""Reset pygame state - useful for testing."""
|
|
||||||
cls._pygame_initialized = False
|
|
||||||
|
|
||||||
def is_quit_requested(self) -> bool:
|
|
||||||
"""Check if user requested quit (Ctrl+C, Ctrl+Q, or Escape).
|
|
||||||
|
|
||||||
Returns True if the user pressed Ctrl+C, Ctrl+Q, or Escape.
|
|
||||||
The main loop should check this and raise KeyboardInterrupt.
|
|
||||||
"""
|
|
||||||
return self._quit_requested
|
|
||||||
|
|
||||||
def clear_quit_request(self) -> bool:
|
|
||||||
"""Clear the quit request flag after handling.
|
|
||||||
|
|
||||||
Returns the previous quit request state.
|
|
||||||
"""
|
|
||||||
was_requested = self._quit_requested
|
|
||||||
self._quit_requested = False
|
|
||||||
return was_requested
|
|
||||||
@@ -1,228 +0,0 @@
"""
Sixel graphics display backend - renders to sixel graphics in terminal.
"""

import time

from engine.display.renderer import get_default_font_path, parse_ansi


def _encode_sixel(image) -> str:
    """Encode a PIL Image to sixel format (pure Python)."""
    img = image.convert("RGBA")
    width, height = img.size
    pixels = img.load()

    palette = []
    pixel_palette_idx = {}

    def get_color_idx(r, g, b, a):
        if a < 128:
            return -1
        key = (r // 32, g // 32, b // 32)
        if key not in pixel_palette_idx:
            idx = len(palette)
            if idx < 256:
                palette.append((r, g, b))
                pixel_palette_idx[key] = idx
        return pixel_palette_idx.get(key, 0)

    for y in range(height):
        for x in range(width):
            r, g, b, a = pixels[x, y]
            get_color_idx(r, g, b, a)

    if not palette:
        return ""

    if len(palette) == 1:
        palette = [palette[0], (0, 0, 0)]

    sixel_data = []
    sixel_data.append(
        f'"{"".join(f"#{i};2;{r};{g};{b}" for i, (r, g, b) in enumerate(palette))}'
    )

    for x in range(width):
        col_data = []
        for y in range(0, height, 6):
            bits = 0
            color_idx = -1
            for dy in range(6):
                if y + dy < height:
                    r, g, b, a = pixels[x, y + dy]
                    if a >= 128:
                        bits |= 1 << dy
                        idx = get_color_idx(r, g, b, a)
                        if color_idx == -1:
                            color_idx = idx
                        elif color_idx != idx:
                            color_idx = -2

            if color_idx >= 0:
                col_data.append(
                    chr(63 + color_idx) + chr(63 + bits)
                    if bits
                    else chr(63 + color_idx) + "?"
                )
            elif color_idx == -2:
                pass

        if col_data:
            sixel_data.append("".join(col_data) + "$")
        else:
            sixel_data.append("-" if x < width - 1 else "$")

    sixel_data.append("\x1b\\")

    return "\x1bPq" + "".join(sixel_data)


class SixelDisplay:
    """Sixel graphics display backend - renders to sixel graphics in terminal."""

    width: int = 80
    height: int = 24

    def __init__(self, cell_width: int = 9, cell_height: int = 16):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self._initialized = False
        self._font_path = None

    def _get_font_path(self) -> str | None:
        """Get font path from env or detect common locations."""
        import os

        if self._font_path:
            return self._font_path

        env_font = os.environ.get("MAINLINE_SIXEL_FONT")
        if env_font and os.path.exists(env_font):
            self._font_path = env_font
            return env_font

        font_path = get_default_font_path()
        if font_path:
            self._font_path = font_path

        return self._font_path

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for SixelDisplay
        """
        self.width = width
        self.height = height
        self._initialized = True

    def show(self, buffer: list[str], border: bool = False) -> None:
        import sys

        t0 = time.perf_counter()

        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested
        if border:
            from engine.display import render_border

            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        img_width = self.width * self.cell_width
        img_height = self.height * self.cell_height

        try:
            from PIL import Image, ImageDraw, ImageFont
        except ImportError:
            return

        img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
        draw = ImageDraw.Draw(img)

        font_path = self._get_font_path()
        font = None
        if font_path:
            try:
                font = ImageFont.truetype(font_path, self.cell_height - 2)
            except Exception:
                font = None

        if font is None:
            try:
                font = ImageFont.load_default()
            except Exception:
                font = None

        for row_idx, line in enumerate(buffer[: self.height]):
            if row_idx >= self.height:
                break

            tokens = parse_ansi(line)
            x_pos = 0
            y_pos = row_idx * self.cell_height

            for text, fg, bg, bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                    draw.rectangle(bbox, fill=(*bg, 255))

                if bold and font:
                    draw.text((x_pos - 1, y_pos - 1), text, fill=(*fg, 255), font=font)

                draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

                if font:
                    x_pos += draw.textlength(text, font=font)

        sixel = _encode_sixel(img)

        sys.stdout.buffer.write(sixel.encode("utf-8"))
        sys.stdout.flush()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("sixel_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        import sys

        sys.stdout.buffer.write(b"\x1b[2J\x1b[H")
        sys.stdout.flush()

    def cleanup(self) -> None:
        pass

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions.

        Returns:
            (width, height) in character cells
        """
        return (self.width, self.height)
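The core of sixel encoding, as used in `_encode_sixel` above, is packing six vertically stacked pixels into one printable character: bit `dy` is set when row `y + dy` is lit, and the character emitted is `chr(63 + bits)`. A minimal, self-contained sketch of just that bit packing (not the full encoder, which also handles palettes and per-band color runs):

```python
# Illustrative sketch of sixel bit packing: one sixel character encodes a
# 1x6 vertical column of on/off pixels as chr(63 + bits).


def pack_sixel_column(lit_rows: list[bool]) -> str:
    """Pack up to six vertical pixels (top to bottom) into one sixel char."""
    assert len(lit_rows) <= 6
    bits = 0
    for dy, lit in enumerate(lit_rows):
        if lit:
            bits |= 1 << dy  # bit 0 is the topmost row of the band
    return chr(63 + bits)


# All six pixels lit -> bits = 0b111111 = 63 -> chr(126) = "~"
full = pack_sixel_column([True] * 6)
# No pixels lit -> chr(63) = "?"
empty = pack_sixel_column([False] * 6)
```

In the real protocol these characters are grouped into bands six rows tall, with `$` as carriage return within a band and `-` advancing to the next band.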
@@ -1,146 +0,0 @@
"""
ANSI terminal display backend.
"""

import os
import time


class TerminalDisplay:
    """ANSI terminal display backend.

    Renders buffer to stdout using ANSI escape codes.
    Supports reuse - when reuse=True, skips re-initializing terminal state.
    Auto-detects terminal dimensions on init.
    """

    width: int = 80
    height: int = 24
    _initialized: bool = False

    def __init__(self, target_fps: float = 30.0):
        self.target_fps = target_fps
        self._frame_period = 1.0 / target_fps if target_fps > 0 else 0
        self._last_frame_time = 0.0
        self._cached_dimensions: tuple[int, int] | None = None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        If width/height are not provided (0/None), auto-detects terminal size.
        Otherwise uses provided dimensions, or falls back to terminal size
        if the provided dimensions exceed terminal capacity.

        Args:
            width: Desired terminal width (0 = auto-detect)
            height: Desired terminal height (0 = auto-detect)
            reuse: If True, skip terminal re-initialization
        """
        from engine.terminal import CURSOR_OFF

        # Auto-detect terminal size (handle case where no terminal)
        try:
            term_size = os.get_terminal_size()
            term_width = term_size.columns
            term_height = term_size.lines
        except OSError:
            # No terminal available (e.g., in tests)
            term_width = width if width > 0 else 80
            term_height = height if height > 0 else 24

        # Use provided dimensions if valid, otherwise use terminal size
        if width > 0 and height > 0:
            self.width = min(width, term_width)
            self.height = min(height, term_height)
        else:
            self.width = term_width
            self.height = term_height

        if not reuse or not self._initialized:
            print(CURSOR_OFF, end="", flush=True)
            self._initialized = True

    def get_dimensions(self) -> tuple[int, int]:
        """Get current terminal dimensions.

        Returns cached dimensions to avoid querying the terminal every frame,
        which can cause inconsistent results. Dimensions are only refreshed
        when they actually change.

        Returns:
            (width, height) in character cells
        """
        try:
            term_size = os.get_terminal_size()
            new_dims = (term_size.columns, term_size.lines)
        except OSError:
            new_dims = (self.width, self.height)

        # Only update cached dimensions if they actually changed
        if self._cached_dimensions is None or self._cached_dimensions != new_dims:
            self._cached_dimensions = new_dims
            self.width = new_dims[0]
            self.height = new_dims[1]

        return self._cached_dimensions

    def show(self, buffer: list[str], border: bool = False) -> None:
        import sys

        from engine.display import get_monitor, render_border

        t0 = time.perf_counter()

        # FPS limiting - skip frame if we're going too fast
        if self._frame_period > 0:
            now = time.perf_counter()
            elapsed = now - self._last_frame_time
            if elapsed < self._frame_period:
                # Skip this frame - too soon
                return
            self._last_frame_time = now

        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested
        if border:
            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        # Write buffer with cursor home + erase down to avoid flicker
        # \033[H = cursor home, \033[J = erase from cursor to end of screen
        output = "\033[H\033[J" + "".join(buffer)
        sys.stdout.buffer.write(output.encode())
        sys.stdout.flush()
        elapsed_ms = (time.perf_counter() - t0) * 1000

        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("terminal_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        from engine.terminal import CLR

        print(CLR, end="", flush=True)

    def cleanup(self) -> None:
        from engine.terminal import CURSOR_ON

        print(CURSOR_ON, end="", flush=True)

    def is_quit_requested(self) -> bool:
        """Check if quit was requested (optional protocol method)."""
        return False

    def clear_quit_request(self) -> None:
        """Clear quit request (optional protocol method)."""
        pass
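Both the pygame and terminal backends gate `show()` with the same frame-period check: a frame arriving sooner than one period after the last drawn frame is silently dropped. A sketch of that gate with the clock passed in explicitly (the `FrameLimiter` name is illustrative, not from the codebase), so the behavior can be exercised without real sleeping:

```python
# Sketch of the FPS-limiting gate used by the show() methods above.
# Frames arriving within one frame period of the last drawn frame are skipped.


class FrameLimiter:
    def __init__(self, target_fps: float):
        # target_fps <= 0 disables limiting, as in the backends.
        self.frame_period = 1.0 / target_fps if target_fps > 0 else 0.0
        self.last_frame_time = 0.0

    def should_draw(self, now: float) -> bool:
        if self.frame_period <= 0:
            return True
        if now - self.last_frame_time < self.frame_period:
            return False  # too soon - skip this frame
        self.last_frame_time = now
        return True


limiter = FrameLimiter(target_fps=30.0)  # period ~33.3 ms
# t=0.100 draws, t=0.110 is only 10 ms later (skipped), t=0.140 draws again.
drawn = [limiter.should_draw(t) for t in (0.100, 0.110, 0.140)]
```

Note that the skipped frame is discarded rather than delayed; the pipeline simply produces the next frame at its own pace.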
@@ -1,276 +0,0 @@
"""
WebSocket display backend - broadcasts frame buffer to connected web clients.
"""

import asyncio
import json
import threading
import time

try:
    import websockets
except ImportError:
    websockets = None


def get_monitor():
    """Get the performance monitor."""
    try:
        from engine.effects.performance import get_monitor as _get_monitor

        return _get_monitor()
    except Exception:
        return None


class WebSocketDisplay:
    """WebSocket display backend - broadcasts to HTML Canvas clients."""

    width: int = 80
    height: int = 24

    def __init__(
        self,
        host: str = "0.0.0.0",
        port: int = 8765,
        http_port: int = 8766,
    ):
        self.host = host
        self.port = port
        self.http_port = http_port
        self.width = 80
        self.height = 24
        self._clients: set = set()
        self._server_running = False
        self._http_running = False
        self._server_thread: threading.Thread | None = None
        self._http_thread: threading.Thread | None = None
        self._available = True
        self._max_clients = 10
        self._client_connected_callback = None
        self._client_disconnected_callback = None
        self._frame_delay = 0.0

        try:
            import websockets as _ws

            self._available = _ws is not None
        except ImportError:
            self._available = False

    def is_available(self) -> bool:
        """Check if WebSocket support is available."""
        return self._available

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions and start server.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: If True, skip starting servers (assume already running)
        """
        self.width = width
        self.height = height

        if not reuse or not self._server_running:
            self.start_server()
            self.start_http_server()

    def show(self, buffer: list[str], border: bool = False) -> None:
        """Broadcast buffer to all connected clients."""
        t0 = time.perf_counter()

        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested
        if border:
            from engine.display import render_border

            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        if self._clients:
            frame_data = {
                "type": "frame",
                "width": self.width,
                "height": self.height,
                "lines": buffer,
            }
            message = json.dumps(frame_data)

            disconnected = set()
            for client in list(self._clients):
                try:
                    asyncio.run(client.send(message))
                except Exception:
                    disconnected.add(client)

            for client in disconnected:
                self._clients.discard(client)
                if self._client_disconnected_callback:
                    self._client_disconnected_callback(client)

        elapsed_ms = (time.perf_counter() - t0) * 1000
        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("websocket_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        """Broadcast clear command to all clients."""
        if self._clients:
            clear_data = {"type": "clear"}
            message = json.dumps(clear_data)
            for client in list(self._clients):
                try:
                    asyncio.run(client.send(message))
                except Exception:
                    pass

    def cleanup(self) -> None:
        """Stop the servers."""
        self.stop_server()
        self.stop_http_server()

    async def _websocket_handler(self, websocket):
        """Handle WebSocket connections."""
        if len(self._clients) >= self._max_clients:
            await websocket.close()
            return

        self._clients.add(websocket)
        if self._client_connected_callback:
            self._client_connected_callback(websocket)

        try:
            async for message in websocket:
                try:
                    data = json.loads(message)
                    if data.get("type") == "resize":
                        self.width = data.get("width", 80)
                        self.height = data.get("height", 24)
                except json.JSONDecodeError:
                    pass
        except Exception:
            pass
        finally:
            self._clients.discard(websocket)
            if self._client_disconnected_callback:
                self._client_disconnected_callback(websocket)

    async def _run_websocket_server(self):
        """Run the WebSocket server."""
        async with websockets.serve(self._websocket_handler, self.host, self.port):
            while self._server_running:
                await asyncio.sleep(0.1)

    async def _run_http_server(self):
        """Run simple HTTP server for the client."""
        import os
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        client_dir = os.path.join(
            os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "client"
        )

        class Handler(SimpleHTTPRequestHandler):
            def __init__(self, *args, **kwargs):
                super().__init__(*args, directory=client_dir, **kwargs)

            def log_message(self, format, *args):
                pass

        httpd = HTTPServer((self.host, self.http_port), Handler)
        while self._http_running:
            httpd.handle_request()

    def _run_async(self, coro):
        """Run coroutine in background."""
        try:
            asyncio.run(coro)
        except Exception as e:
            print(f"WebSocket async error: {e}")

    def start_server(self):
        """Start the WebSocket server in a background thread."""
        if not self._available:
            return
        if self._server_thread is not None:
            return

        self._server_running = True
        self._server_thread = threading.Thread(
            target=self._run_async, args=(self._run_websocket_server(),), daemon=True
        )
        self._server_thread.start()

    def stop_server(self):
        """Stop the WebSocket server."""
        self._server_running = False
        self._server_thread = None

    def start_http_server(self):
        """Start the HTTP server in a background thread."""
        if not self._available:
            return
        if self._http_thread is not None:
            return

        self._http_running = True
        self._http_thread = threading.Thread(
            target=self._run_async, args=(self._run_http_server(),), daemon=True
        )
        self._http_thread.start()

    def stop_http_server(self):
        """Stop the HTTP server."""
        self._http_running = False
        self._http_thread = None

    def client_count(self) -> int:
        """Return number of connected clients."""
        return len(self._clients)

    def get_ws_port(self) -> int:
        """Return WebSocket port."""
        return self.port

    def get_http_port(self) -> int:
        """Return HTTP port."""
        return self.http_port

    def set_frame_delay(self, delay: float) -> None:
        """Set delay between frames in seconds."""
        self._frame_delay = delay

    def get_frame_delay(self) -> float:
        """Get delay between frames."""
        return self._frame_delay

    def set_client_connected_callback(self, callback) -> None:
        """Set callback for client connections."""
        self._client_connected_callback = callback

    def set_client_disconnected_callback(self, callback) -> None:
        """Set callback for client disconnections."""
        self._client_disconnected_callback = callback

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions.

        Returns:
            (width, height) in character cells
        """
        return (self.width, self.height)
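The WebSocket backend's wire protocol is plain JSON: the server broadcasts `frame` (and `clear`) messages, and clients may send back `resize` messages that the handler applies. A sketch of both message shapes, mirroring `show()` and `_websocket_handler` above (the helper function names here are illustrative, not part of the codebase):

```python
# Sketch of the JSON messages exchanged by the WebSocket display backend.

import json


def make_frame_message(width: int, height: int, lines: list[str]) -> str:
    """Serialize a frame the way WebSocketDisplay.show() does."""
    return json.dumps(
        {"type": "frame", "width": width, "height": height, "lines": lines}
    )


def apply_client_message(message: str, dims: dict) -> None:
    """Apply a client message the way _websocket_handler does."""
    try:
        data = json.loads(message)
    except json.JSONDecodeError:
        return  # malformed input is silently ignored, as in the handler
    if data.get("type") == "resize":
        dims["width"] = data.get("width", 80)
        dims["height"] = data.get("height", 24)


frame = json.loads(make_frame_message(4, 2, ["ab", "cd"]))
dims = {"width": 80, "height": 24}
apply_client_message('{"type": "resize", "width": 100, "height": 30}', dims)
```

Because the lines still carry ANSI escape codes, the browser client is responsible for parsing them, mirroring what `parse_ansi` does server-side for the image backends.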
@@ -1,280 +0,0 @@
"""
Shared display rendering utilities.

Provides common functionality for displays that render text to images
(Pygame, Sixel, Kitty displays).
"""

from typing import Any

ANSI_COLORS = {
    0: (0, 0, 0),
    1: (205, 49, 49),
    2: (13, 188, 121),
    3: (229, 229, 16),
    4: (36, 114, 200),
    5: (188, 63, 188),
    6: (17, 168, 205),
    7: (229, 229, 229),
    8: (102, 102, 102),
    9: (241, 76, 76),
    10: (35, 209, 139),
    11: (245, 245, 67),
    12: (59, 142, 234),
    13: (214, 112, 214),
    14: (41, 184, 219),
    15: (255, 255, 255),
}


def parse_ansi(
    text: str,
) -> list[tuple[str, tuple[int, int, int], tuple[int, int, int], bool]]:
    """Parse ANSI escape sequences into text tokens with colors.

    Args:
        text: Text containing ANSI escape sequences

    Returns:
        List of (text, fg_rgb, bg_rgb, bold) tuples
    """
    tokens = []
    current_text = ""
    fg = (204, 204, 204)
    bg = (0, 0, 0)
    bold = False
    i = 0

    ANSI_COLORS_4BIT = {
        0: (0, 0, 0),
        1: (205, 49, 49),
        2: (13, 188, 121),
        3: (229, 229, 16),
        4: (36, 114, 200),
        5: (188, 63, 188),
        6: (17, 168, 205),
        7: (229, 229, 229),
        8: (102, 102, 102),
        9: (241, 76, 76),
        10: (35, 209, 139),
        11: (245, 245, 67),
        12: (59, 142, 234),
        13: (214, 112, 214),
        14: (41, 184, 219),
        15: (255, 255, 255),
    }

    while i < len(text):
        char = text[i]

        if char == "\x1b" and i + 1 < len(text) and text[i + 1] == "[":
            if current_text:
                tokens.append((current_text, fg, bg, bold))
                current_text = ""

            i += 2
            code = ""
            while i < len(text):
                c = text[i]
                if c.isalpha():
                    break
                code += c
                i += 1

            if code:
                codes = code.split(";")
                for c in codes:
                    if c == "0":
                        fg = (204, 204, 204)
                        bg = (0, 0, 0)
                        bold = False
                    elif c == "1":
                        bold = True
                    elif c == "22":
                        bold = False
                    elif c == "39":
                        fg = (204, 204, 204)
                    elif c == "49":
                        bg = (0, 0, 0)
                    elif c.isdigit():
                        color_idx = int(c)
                        if color_idx in ANSI_COLORS_4BIT:
                            fg = ANSI_COLORS_4BIT[color_idx]
                        elif 30 <= color_idx <= 37:
                            fg = ANSI_COLORS_4BIT.get(color_idx - 30, fg)
                        elif 40 <= color_idx <= 47:
                            bg = ANSI_COLORS_4BIT.get(color_idx - 40, bg)
                        elif 90 <= color_idx <= 97:
                            fg = ANSI_COLORS_4BIT.get(color_idx - 90 + 8, fg)
                        elif 100 <= color_idx <= 107:
                            bg = ANSI_COLORS_4BIT.get(color_idx - 100 + 8, bg)
                    elif c.startswith("38;5;"):
                        idx = int(c.split(";")[-1])
                        if idx < 256:
                            if idx < 16:
                                fg = ANSI_COLORS_4BIT.get(idx, fg)
                            elif idx < 232:
                                c_idx = idx - 16
                                fg = (
                                    (c_idx >> 4) * 51,
                                    ((c_idx >> 2) & 7) * 51,
                                    (c_idx & 3) * 85,
                                )
                            else:
                                gray = (idx - 232) * 10 + 8
                                fg = (gray, gray, gray)
                    elif c.startswith("48;5;"):
                        idx = int(c.split(";")[-1])
                        if idx < 256:
                            if idx < 16:
                                bg = ANSI_COLORS_4BIT.get(idx, bg)
                            elif idx < 232:
                                c_idx = idx - 16
                                bg = (
                                    (c_idx >> 4) * 51,
                                    ((c_idx >> 2) & 7) * 51,
                                    (c_idx & 3) * 85,
                                )
                            else:
                                gray = (idx - 232) * 10 + 8
                                bg = (gray, gray, gray)
            i += 1
        else:
            current_text += char
            i += 1

    if current_text:
        tokens.append((current_text, fg, bg, bold))

    return tokens if tokens else [("", fg, bg, bold)]
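The state machine in `parse_ansi` above splits text at each `ESC [` sequence, applies the SGR codes to running foreground/background/bold state, and emits `(text, fg, bg, bold)` runs. A deliberately simplified, self-contained sketch of that tokenizing loop, handling only reset (0), bold (1), and the basic 30-32 foregrounds, and dropping the background channel for brevity:

```python
# Simplified sketch of the tokenizing loop in parse_ansi: split text at
# SGR sequences and emit (text, fg, bold) runs under running state.
# Handles only a small subset of codes; the real parser covers much more.

DEFAULT_FG = (204, 204, 204)
BASIC = {30: (0, 0, 0), 31: (205, 49, 49), 32: (13, 188, 121)}


def tokenize(text: str):
    tokens, current, fg, bold = [], "", DEFAULT_FG, False
    i = 0
    while i < len(text):
        if text.startswith("\x1b[", i):
            if current:
                tokens.append((current, fg, bold))
                current = ""
            end = text.index("m", i)  # assume every sequence is SGR ("...m")
            for c in text[i + 2 : end].split(";"):
                if c == "0":
                    fg, bold = DEFAULT_FG, False
                elif c == "1":
                    bold = True
                elif c.isdigit() and int(c) in BASIC:
                    fg = BASIC[int(c)]
            i = end + 1
        else:
            current += text[i]
            i += 1
    if current:
        tokens.append((current, fg, bold))
    return tokens


toks = tokenize("\x1b[31mred\x1b[0m plain")
```

The key design point, shared with the full parser, is that escape sequences never appear in the output; they only mutate the state attached to the next emitted run.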
def get_default_font_path() -> str | None:
    """Get the path to a default monospace font."""
    import os
    import sys
    from pathlib import Path

    def search_dir(base_path: str) -> str | None:
        if not os.path.exists(base_path):
            return None
        if os.path.isfile(base_path):
            return base_path
        for font_file in Path(base_path).rglob("*"):
            if font_file.suffix.lower() in (".ttf", ".otf", ".ttc"):
                name = font_file.stem.lower()
                if "geist" in name and ("nerd" in name or "mono" in name):
                    return str(font_file)
                if "mono" in name or "courier" in name or "terminal" in name:
                    return str(font_file)
        return None

    search_dirs = []
    if sys.platform == "darwin":
        search_dirs.extend(
            [
                os.path.expanduser("~/Library/Fonts/"),
                "/System/Library/Fonts/",
            ]
        )
    elif sys.platform == "win32":
        search_dirs.extend(
            [
                os.path.expanduser("~\\AppData\\Local\\Microsoft\\Windows\\Fonts\\"),
                "C:\\Windows\\Fonts\\",
            ]
        )
    else:
        search_dirs.extend(
            [
                os.path.expanduser("~/.local/share/fonts/"),
                os.path.expanduser("~/.fonts/"),
                "/usr/share/fonts/",
            ]
        )

    for search_dir_path in search_dirs:
        found = search_dir(search_dir_path)
        if found:
            return found

    if sys.platform != "win32":
        try:
            import subprocess

            for pattern in ["monospace", "DejaVuSansMono", "LiberationMono"]:
                result = subprocess.run(
                    ["fc-match", "-f", "%{file}", pattern],
                    capture_output=True,
                    text=True,
                    timeout=5,
                )
                if result.returncode == 0 and result.stdout.strip():
                    font_file = result.stdout.strip()
                    if os.path.exists(font_file):
                        return font_file
        except Exception:
            pass

    return None


def render_to_pil(
    buffer: list[str],
    width: int,
    height: int,
    cell_width: int = 10,
    cell_height: int = 18,
    font_path: str | None = None,
) -> Any:
    """Render buffer to a PIL Image.

    Args:
        buffer: List of text lines to render
        width: Terminal width in characters
        height: Terminal height in rows
        cell_width: Width of each character cell in pixels
        cell_height: Height of each character cell in pixels
|
|
||||||
font_path: Path to TTF/OTF font file (optional)
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
PIL Image object
|
|
||||||
"""
|
|
||||||
from PIL import Image, ImageDraw, ImageFont
|
|
||||||
|
|
||||||
img_width = width * cell_width
|
|
||||||
img_height = height * cell_height
|
|
||||||
|
|
||||||
img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
|
|
||||||
draw = ImageDraw.Draw(img)
|
|
||||||
|
|
||||||
if font_path:
|
|
||||||
try:
|
|
||||||
font = ImageFont.truetype(font_path, cell_height - 2)
|
|
||||||
except Exception:
|
|
||||||
font = ImageFont.load_default()
|
|
||||||
else:
|
|
||||||
font = ImageFont.load_default()
|
|
||||||
|
|
||||||
for row_idx, line in enumerate(buffer[:height]):
|
|
||||||
if row_idx >= height:
|
|
||||||
break
|
|
||||||
|
|
||||||
tokens = parse_ansi(line)
|
|
||||||
x_pos = 0
|
|
||||||
y_pos = row_idx * cell_height
|
|
||||||
|
|
||||||
for text, fg, bg, _bold in tokens:
|
|
||||||
if not text:
|
|
||||||
continue
|
|
||||||
|
|
||||||
if bg != (0, 0, 0):
|
|
||||||
bbox = draw.textbbox((x_pos, y_pos), text, font=font)
|
|
||||||
draw.rectangle(bbox, fill=(*bg, 255))
|
|
||||||
|
|
||||||
draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)
|
|
||||||
|
|
||||||
if font:
|
|
||||||
x_pos += draw.textlength(text, font=font)
|
|
||||||
|
|
||||||
return img
|
|
||||||
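The grayscale branch of the parser above uses the standard xterm 256-color ramp: indices 232-255 map to gray levels 8-238 in steps of 10. A minimal standalone sketch of that mapping (the function name here is illustrative, not part of the codebase):

```python
def gray_from_ansi256(idx: int) -> tuple[int, int, int]:
    """Map an ANSI 256-color grayscale index (232-255) to an RGB triple.

    Mirrors the `gray = (idx - 232) * 10 + 8` branch of the parser above.
    """
    if not 232 <= idx <= 255:
        raise ValueError("not a grayscale-ramp index")
    gray = (idx - 232) * 10 + 8
    return (gray, gray, gray)
```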
@@ -6,17 +6,18 @@ from engine.effects.legacy import (
     glitch_bar,
     next_headline,
     noise,
-    vis_offset,
     vis_trunc,
 )
 from engine.effects.performance import PerformanceMonitor, get_monitor, set_monitor
 from engine.effects.registry import EffectRegistry, get_registry, set_registry
-from engine.effects.types import (
-    EffectConfig,
-    EffectContext,
-    PipelineConfig,
-    create_effect_context,
-)
+from engine.effects.types import EffectConfig, EffectContext, PipelineConfig
+
+
+def get_effect_chain():
+    from engine.layers import get_effect_chain as _chain
+
+    return _chain()
 
 
 __all__ = [
     "EffectChain",
@@ -24,9 +25,9 @@ __all__ = [
     "EffectConfig",
     "EffectContext",
     "PipelineConfig",
-    "create_effect_context",
     "get_registry",
     "set_registry",
+    "get_effect_chain",
     "get_monitor",
     "set_monitor",
     "PerformanceMonitor",
@@ -38,5 +39,4 @@ __all__ = [
     "noise",
     "next_headline",
     "vis_trunc",
-    "vis_offset",
 ]
@@ -2,7 +2,7 @@ import time
 
 from engine.effects.performance import PerformanceMonitor, get_monitor
 from engine.effects.registry import EffectRegistry
-from engine.effects.types import EffectContext, PartialUpdate
+from engine.effects.types import EffectContext
 
 
 class EffectChain:
@@ -51,18 +51,6 @@ class EffectChain:
         frame_number = ctx.frame_number
         monitor.start_frame(frame_number)
 
-        # Get dirty regions from canvas via context (set by CanvasStage)
-        dirty_rows = ctx.get_state("canvas.dirty_rows")
-
-        # Create PartialUpdate for effects that support it
-        full_buffer = dirty_rows is None or len(dirty_rows) == 0
-        partial = PartialUpdate(
-            rows=None,
-            cols=None,
-            dirty=dirty_rows,
-            full_buffer=full_buffer,
-        )
-
         frame_start = time.perf_counter()
         result = list(buf)
         for name in self._order:
@@ -71,11 +59,7 @@ class EffectChain:
                 chars_in = sum(len(line) for line in result)
                 effect_start = time.perf_counter()
                 try:
-                    # Use process_partial if supported, otherwise fall back to process
-                    if getattr(plugin, "supports_partial_updates", False):
-                        result = plugin.process_partial(result, ctx, partial)
-                    else:
-                        result = plugin.process(result, ctx)
+                    result = plugin.process(result, ctx)
                 except Exception:
                     plugin.config.enabled = False
                 elapsed = time.perf_counter() - effect_start
@@ -6,7 +6,14 @@ _effect_chain_ref = None
 
 def _get_effect_chain():
     global _effect_chain_ref
-    return _effect_chain_ref
+    if _effect_chain_ref is not None:
+        return _effect_chain_ref
+    try:
+        from engine.layers import get_effect_chain as _chain
+
+        return _chain()
+    except Exception:
+        return None
 
 
 def set_effect_chain_ref(chain) -> None:
@@ -1,14 +1,6 @@
 """
 Visual effects: noise, glitch, fade, ANSI-aware truncation, firehose, headline pool.
 Depends on: config, terminal, sources.
-
-These are low-level functional implementations of visual effects. They are used
-internally by the EffectPlugin system (effects_plugins/*.py) and also directly
-by layers.py and scroll.py for rendering.
-
-The plugin system provides a higher-level OOP interface with configuration
-support, while these legacy functions provide direct functional access.
-Both systems coexist - there are no current plans to deprecate the legacy functions.
 """
 
 import random
@@ -82,37 +74,6 @@ def vis_trunc(s, w):
     return "".join(result)
 
 
-def vis_offset(s, offset):
-    """Offset string by skipping first offset visual characters, skipping ANSI escape codes."""
-    if offset <= 0:
-        return s
-    result = []
-    vw = 0
-    i = 0
-    skipping = True
-    while i < len(s):
-        if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
-            j = i + 2
-            while j < len(s) and not s[j].isalpha():
-                j += 1
-            if skipping:
-                i = j + 1
-                continue
-            result.append(s[i : j + 1])
-            i = j + 1
-        else:
-            if skipping:
-                if vw >= offset:
-                    skipping = False
-                    result.append(s[i])
-                vw += 1
-                i += 1
-            else:
-                result.append(s[i])
-                i += 1
-    return "".join(result)
-
-
 def next_headline(pool, items, seen):
     """Pull the next unique headline from pool, refilling as needed."""
     while True:
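The removed `vis_offset` helper advances past a number of *visible* characters while walking over ANSI escape sequences, so scrolled text keeps its styling. A simplified, self-contained sketch of the same idea (not the deleted code verbatim; escapes before the cut point are dropped, escapes after it are kept):

```python
def visible_offset(s: str, offset: int) -> str:
    """Drop the first `offset` visible characters of an ANSI-colored string."""
    if offset <= 0:
        return s
    out: list[str] = []
    visible = 0
    i = 0
    while i < len(s):
        if s[i] == "\033" and i + 1 < len(s) and s[i + 1] == "[":
            # Scan to the terminating letter of the CSI sequence
            j = i + 2
            while j < len(s) and not s[j].isalpha():
                j += 1
            if visible >= offset:
                out.append(s[i : j + 1])  # keep escapes past the cut
            i = j + 1
        else:
            if visible >= offset:
                out.append(s[i])
            visible += 1
            i += 1
    return "".join(out)
```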
@@ -1,105 +0,0 @@
-from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
-
-
-class BorderEffect(EffectPlugin):
-    """Simple border effect for terminal display.
-
-    Draws a border around the buffer and optionally displays
-    performance metrics in the border corners.
-
-    Internally crops to display dimensions to ensure border fits.
-    """
-
-    name = "border"
-    config = EffectConfig(enabled=True, intensity=1.0)
-
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        if not buf:
-            return buf
-
-        # Get actual display dimensions from context
-        display_w = ctx.terminal_width
-        display_h = ctx.terminal_height
-
-        # If dimensions are reasonable, crop first - use slightly smaller to ensure fit
-        if display_w >= 10 and display_h >= 3:
-            # Subtract 2 for border characters (left and right)
-            crop_w = display_w - 2
-            crop_h = display_h - 2
-            buf = self._crop_to_size(buf, crop_w, crop_h)
-            w = display_w
-            h = display_h
-        else:
-            # Use buffer dimensions
-            h = len(buf)
-            w = max(len(line) for line in buf) if buf else 0
-
-        if w < 3 or h < 3:
-            return buf
-
-        inner_w = w - 2
-
-        # Get metrics from context
-        fps = 0.0
-        frame_time = 0.0
-        metrics = ctx.get_state("metrics")
-        if metrics:
-            avg_ms = metrics.get("avg_ms")
-            frame_count = metrics.get("frame_count", 0)
-            if avg_ms and frame_count > 0:
-                fps = 1000.0 / avg_ms
-                frame_time = avg_ms
-
-        # Build borders
-        # Top border: ┌────────────────────┐ or with FPS
-        if fps > 0:
-            fps_str = f" FPS:{fps:.0f}"
-            if len(fps_str) < inner_w:
-                right_len = inner_w - len(fps_str)
-                top_border = "┌" + "─" * right_len + fps_str + "┐"
-            else:
-                top_border = "┌" + "─" * inner_w + "┐"
-        else:
-            top_border = "┌" + "─" * inner_w + "┐"
-
-        # Bottom border: └────────────────────┘ or with frame time
-        if frame_time > 0:
-            ft_str = f" {frame_time:.1f}ms"
-            if len(ft_str) < inner_w:
-                right_len = inner_w - len(ft_str)
-                bottom_border = "└" + "─" * right_len + ft_str + "┘"
-            else:
-                bottom_border = "└" + "─" * inner_w + "┘"
-        else:
-            bottom_border = "└" + "─" * inner_w + "┘"
-
-        # Build result with left/right borders
-        result = [top_border]
-        for line in buf[: h - 2]:
-            if len(line) >= inner_w:
-                result.append("│" + line[:inner_w] + "│")
-            else:
-                result.append("│" + line + " " * (inner_w - len(line)) + "│")
-
-        result.append(bottom_border)
-
-        return result
-
-    def _crop_to_size(self, buf: list[str], w: int, h: int) -> list[str]:
-        """Crop buffer to fit within w x h."""
-        result = []
-        for i in range(min(h, len(buf))):
-            line = buf[i]
-            if len(line) > w:
-                result.append(line[:w])
-            else:
-                result.append(line + " " * (w - len(line)))
-
-        # Pad with empty lines if needed (for border)
-        while len(result) < h:
-            result.append(" " * w)
-
-        return result
-
-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
@@ -1,42 +0,0 @@
-from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
-
-
-class CropEffect(EffectPlugin):
-    """Crop effect that crops the input buffer to fit the display.
-
-    This ensures the output buffer matches the actual display dimensions,
-    useful when the source produces a buffer larger than the viewport.
-    """
-
-    name = "crop"
-    config = EffectConfig(enabled=True, intensity=1.0)
-
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        if not buf:
-            return buf
-
-        # Get actual display dimensions from context
-        w = (
-            ctx.terminal_width
-            if ctx.terminal_width > 0
-            else max(len(line) for line in buf)
-        )
-        h = ctx.terminal_height if ctx.terminal_height > 0 else len(buf)
-
-        # Crop buffer to fit
-        result = []
-        for i in range(min(h, len(buf))):
-            line = buf[i]
-            if len(line) > w:
-                result.append(line[:w])
-            else:
-                result.append(line + " " * (w - len(line)))
-
-        # Pad with empty lines if needed
-        while len(result) < h:
-            result.append(" " * w)
-
-        return result
-
-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
@@ -1,52 +0,0 @@
-import random
-
-from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
-from engine.terminal import DIM, G_LO, RST
-
-
-class GlitchEffect(EffectPlugin):
-    name = "glitch"
-    config = EffectConfig(enabled=True, intensity=1.0)
-
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        if not buf:
-            return buf
-        result = list(buf)
-        intensity = self.config.intensity
-
-        glitch_prob = 0.32 + min(0.9, ctx.mic_excess * 0.16)
-        glitch_prob = glitch_prob * intensity
-        n_hits = 4 + int(ctx.mic_excess / 2)
-        n_hits = int(n_hits * intensity)
-
-        if random.random() < glitch_prob:
-            # Store original visible lengths before any modifications
-            # Strip ANSI codes to get visible length
-            import re
-
-            ansi_pattern = re.compile(r"\x1b\[[0-9;]*[a-zA-Z]")
-            original_lengths = [len(ansi_pattern.sub("", line)) for line in result]
-            for _ in range(min(n_hits, len(result))):
-                gi = random.randint(0, len(result) - 1)
-                result[gi]
-                target_len = original_lengths[gi]  # Use stored original length
-                glitch_bar = self._glitch_bar(target_len)
-                result[gi] = glitch_bar
-        return result
-
-    def _glitch_bar(self, target_len: int) -> str:
-        c = random.choice(["░", "▒", "─", "\xc2"])
-        n = random.randint(3, max(3, target_len // 2))
-        o = random.randint(0, max(0, target_len - n))
-
-        glitch_chars = c * n
-        trailing_spaces = target_len - o - n
-        trailing_spaces = max(0, trailing_spaces)
-
-        glitch_part = f"{G_LO}{DIM}" + glitch_chars + RST
-        result = " " * o + glitch_part + " " * trailing_spaces
-
-        return result
-
-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
@@ -1,102 +0,0 @@
-from engine.effects.types import (
-    EffectConfig,
-    EffectContext,
-    EffectPlugin,
-    PartialUpdate,
-)
-
-
-class HudEffect(EffectPlugin):
-    name = "hud"
-    config = EffectConfig(enabled=True, intensity=1.0)
-    supports_partial_updates = True  # Enable partial update optimization
-
-    # Cache last HUD content to detect changes
-    _last_hud_content: tuple | None = None
-
-    def process_partial(
-        self, buf: list[str], ctx: EffectContext, partial: PartialUpdate
-    ) -> list[str]:
-        # If full buffer requested, process normally
-        if partial.full_buffer:
-            return self.process(buf, ctx)
-
-        # If HUD rows (0, 1, 2) aren't dirty, skip processing
-        if partial.dirty:
-            hud_rows = {0, 1, 2}
-            dirty_hud_rows = partial.dirty & hud_rows
-            if not dirty_hud_rows:
-                return buf  # Nothing for HUD to do
-
-        # Proceed with full processing
-        return self.process(buf, ctx)
-
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        result = list(buf)
-
-        # Read metrics from pipeline context (first-class citizen)
-        # Falls back to global monitor for backwards compatibility
-        metrics = ctx.get_state("metrics")
-        if not metrics:
-            # Fallback to global monitor for backwards compatibility
-            from engine.effects.performance import get_monitor
-
-            monitor = get_monitor()
-            if monitor:
-                stats = monitor.get_stats()
-                if stats and "pipeline" in stats:
-                    metrics = stats
-
-        fps = 0.0
-        frame_time = 0.0
-        if metrics:
-            if "error" in metrics:
-                pass  # No metrics available yet
-            elif "pipeline" in metrics:
-                frame_time = metrics["pipeline"].get("avg_ms", 0.0)
-                frame_count = metrics.get("frame_count", 0)
-                if frame_count > 0 and frame_time > 0:
-                    fps = 1000.0 / frame_time
-            elif "avg_ms" in metrics:
-                # Direct metrics format
-                frame_time = metrics.get("avg_ms", 0.0)
-                frame_count = metrics.get("frame_count", 0)
-                if frame_count > 0 and frame_time > 0:
-                    fps = 1000.0 / frame_time
-
-        effect_name = self.config.params.get("display_effect", "none")
-        effect_intensity = self.config.params.get("display_intensity", 0.0)
-
-        hud_lines = []
-        hud_lines.append(
-            f"\033[1;1H\033[38;5;46mMAINLINE DEMO\033[0m \033[38;5;245m|\033[0m \033[38;5;39mFPS: {fps:.1f}\033[0m \033[38;5;245m|\033[0m \033[38;5;208m{frame_time:.1f}ms\033[0m"
-        )
-
-        bar_width = 20
-        filled = int(bar_width * effect_intensity)
-        bar = (
-            "\033[38;5;82m"
-            + "█" * filled
-            + "\033[38;5;240m"
-            + "░" * (bar_width - filled)
-            + "\033[0m"
-        )
-        hud_lines.append(
-            f"\033[2;1H\033[38;5;45mEFFECT:\033[0m \033[1;38;5;227m{effect_name:12s}\033[0m \033[38;5;245m|\033[0m {bar} \033[38;5;245m|\033[0m \033[38;5;219m{effect_intensity * 100:.0f}%\033[0m"
-        )
-
-        # Get pipeline order from context
-        pipeline_order = ctx.get_state("pipeline_order")
-        pipeline_str = ",".join(pipeline_order) if pipeline_order else "(none)"
-        hud_lines.append(f"\033[3;1H\033[38;5;44mPIPELINE:\033[0m {pipeline_str}")
-
-        for i, line in enumerate(hud_lines):
-            if i < len(result):
-                result[i] = line + result[i][len(line) :]
-            else:
-                result.append(line)
-
-        return result
-
-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
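The removed HUD derives FPS from the average frame time reported by the monitor (`fps = 1000 / avg_ms`), guarded so empty stats read as zero rather than dividing by zero. The conversion in isolation (the function name here is illustrative):

```python
def fps_from_avg_ms(avg_ms: float, frame_count: int) -> float:
    """Convert an average frame time in milliseconds to frames per second.

    Mirrors the HUD's guard: with no frames recorded (or a zero average),
    report 0.0 instead of dividing by zero.
    """
    if frame_count > 0 and avg_ms > 0:
        return 1000.0 / avg_ms
    return 0.0
```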
@@ -1,99 +0,0 @@
-from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
-
-
-class TintEffect(EffectPlugin):
-    """Tint effect that applies an RGB color overlay to the buffer.
-
-    Uses ANSI escape codes to tint text with the specified RGB values.
-    Supports transparency (0-100%) for blending.
-
-    Inlets:
-    - r: Red component (0-255)
-    - g: Green component (0-255)
-    - b: Blue component (0-255)
-    - a: Alpha/transparency (0.0-1.0, where 0.0 = fully transparent)
-    """
-
-    name = "tint"
-    config = EffectConfig(enabled=True, intensity=1.0)
-
-    # Define inlet types for PureData-style typing
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.TEXT_BUFFER}
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.TEXT_BUFFER}
-
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        if not buf:
-            return buf
-
-        # Get tint values from effect params or sensors
-        r = self.config.params.get("r", 255)
-        g = self.config.params.get("g", 255)
-        b = self.config.params.get("b", 255)
-        a = self.config.params.get("a", 0.3)  # Default 30% tint
-
-        # Clamp values
-        r = max(0, min(255, int(r)))
-        g = max(0, min(255, int(g)))
-        b = max(0, min(255, int(b)))
-        a = max(0.0, min(1.0, float(a)))
-
-        if a <= 0:
-            return buf
-
-        # Convert RGB to ANSI 256 color
-        ansi_color = self._rgb_to_ansi256(r, g, b)
-
-        # Apply tint with transparency effect
-        result = []
-        for line in buf:
-            if not line.strip():
-                result.append(line)
-                continue
-
-            # Check if line already has ANSI codes
-            if "\033[" in line:
-                # For lines with existing colors, wrap the whole line
-                result.append(f"\033[38;5;{ansi_color}m{line}\033[0m")
-            else:
-                # Apply tint to plain text lines
-                result.append(f"\033[38;5;{ansi_color}m{line}\033[0m")
-
-        return result
-
-    def _rgb_to_ansi256(self, r: int, g: int, b: int) -> int:
-        """Convert RGB (0-255 each) to ANSI 256 color code."""
-        if r == g == b == 0:
-            return 16
-        if r == g == b == 255:
-            return 231
-
-        # Calculate grayscale
-        gray = int((0.299 * r + 0.587 * g + 0.114 * b) / 255 * 24) + 232
-
-        # Calculate color cube
-        ri = int(r / 51)
-        gi = int(g / 51)
-        bi = int(b / 51)
-        color = 16 + 36 * ri + 6 * gi + bi
-
-        # Use whichever is closer - gray or color
-        gray_dist = abs(r - gray)
-        color_dist = (
-            (r - ri * 51) ** 2 + (g - gi * 51) ** 2 + (b - bi * 51) ** 2
-        ) ** 0.5
-
-        if gray_dist < color_dist:
-            return gray
-        return color
-
-    def configure(self, config: EffectConfig) -> None:
-        self.config = config
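The color-cube branch of the removed `_rgb_to_ansi256` quantizes each channel to six levels (0-5, one level per 51 units) and packs them into the 16-231 cube range. A standalone sketch of just that cube arithmetic, without the competing grayscale comparison:

```python
def rgb_to_cube_index(r: int, g: int, b: int) -> int:
    """Quantize an RGB triple into the ANSI 256-color 6x6x6 cube (codes 16-231).

    Same arithmetic as the cube branch of TintEffect._rgb_to_ansi256.
    """
    ri, gi, bi = r // 51, g // 51, b // 51  # 0..5 per channel
    return 16 + 36 * ri + 6 * gi + bi
```

For the cube's corner colors this reproduces the familiar codes: pure red is 196, pure blue is 21, and white lands on 231.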
@@ -1,47 +1,8 @@
-"""
-Visual effects type definitions and base classes.
-
-EffectPlugin Architecture:
-- Uses ABC (Abstract Base Class) for interface enforcement
-- Runtime discovery via directory scanning (effects_plugins/)
-- Configuration via EffectConfig dataclass
-- Context passed through EffectContext dataclass
-
-Plugin System Research (see AGENTS.md for references):
-- VST: Standardized audio interfaces, chaining, presets (FXP/FXB)
-- Python Entry Points: Namespace packages, importlib.metadata discovery
-- Shadertoy: Shader-based with uniforms as context
-
-Current gaps vs industry patterns:
-- No preset save/load system
-- No external plugin distribution via entry points
-- No plugin metadata (version, author, description)
-"""
-
 from abc import ABC, abstractmethod
 from dataclasses import dataclass, field
 from typing import Any
 
 
-@dataclass
-class PartialUpdate:
-    """Represents a partial buffer update for optimized rendering.
-
-    Instead of processing the full buffer every frame, effects that support
-    partial updates can process only changed regions.
-
-    Attributes:
-        rows: Row indices that changed (None = all rows)
-        cols: Column range that changed (None = full width)
-        dirty: Set of dirty row indices
-    """
-
-    rows: tuple[int, int] | None = None  # (start, end) inclusive
-    cols: tuple[int, int] | None = None  # (start, end) inclusive
-    dirty: set[int] | None = None  # Set of dirty row indices
-    full_buffer: bool = True  # If True, process entire buffer
-
-
 @dataclass
 class EffectContext:
     terminal_width: int
@@ -54,26 +15,6 @@ class EffectContext:
     frame_number: int = 0
     has_message: bool = False
     items: list = field(default_factory=list)
-    _state: dict[str, Any] = field(default_factory=dict, repr=False)
-
-    def get_sensor_value(self, sensor_name: str) -> float | None:
-        """Get a sensor value from context state.
-
-        Args:
-            sensor_name: Name of the sensor (e.g., "mic", "camera")
-
-        Returns:
-            Sensor value as float, or None if not available.
-        """
-        return self._state.get(f"sensor.{sensor_name}")
-
-    def set_state(self, key: str, value: Any) -> None:
-        """Set a state value in the context."""
-        self._state[key] = value
-
-    def get_state(self, key: str, default: Any = None) -> Any:
-        """Get a state value from the context."""
-        return self._state.get(key, default)
 
 
 @dataclass
@@ -84,84 +25,14 @@ class EffectConfig:
 
 
 class EffectPlugin(ABC):
-    """Abstract base class for effect plugins.
-
-    Subclasses must define:
-    - name: str - unique identifier for the effect
-    - config: EffectConfig - current configuration
-
-    Optional class attribute:
-    - param_bindings: dict - Declarative sensor-to-param bindings
-      Example:
-          param_bindings = {
-              "intensity": {"sensor": "mic", "transform": "linear"},
-              "rate": {"sensor": "mic", "transform": "exponential"},
-          }
-
-    And implement:
-    - process(buf, ctx) -> list[str]
-    - configure(config) -> None
-
-    Effect Behavior with ticker_height=0:
-    - NoiseEffect: Returns buffer unchanged (no ticker to apply noise to)
-    - FadeEffect: Returns buffer unchanged (no ticker to fade)
-    - GlitchEffect: Processes normally (doesn't depend on ticker_height)
-    - FirehoseEffect: Returns buffer unchanged if no items in context
-
-    Effects should handle missing or zero context values gracefully by
-    returning the input buffer unchanged rather than raising errors.
-
-    The param_bindings system enables PureData-style signal routing:
-    - Sensors emit values that effects can bind to
-    - Transform functions map sensor values to param ranges
-    - Effects read bound values from context.state["sensor.{name}"]
-    """
-
     name: str
     config: EffectConfig
-    param_bindings: dict[str, dict[str, str | float]] = {}
-    supports_partial_updates: bool = False  # Override in subclasses for optimization
 
     @abstractmethod
-    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
-        """Process the buffer with this effect applied.
-
-        Args:
-            buf: List of lines to process
-            ctx: Effect context with terminal state
-
-        Returns:
-            Processed buffer (may be same object or new list)
-        """
-        ...
-
-    def process_partial(
-        self, buf: list[str], ctx: EffectContext, partial: PartialUpdate
-    ) -> list[str]:
-        """Process a partial buffer for optimized rendering.
-
-        Override this in subclasses that support partial updates for performance.
-        Default implementation falls back to full buffer processing.
-
-        Args:
-            buf: List of lines to process
-            ctx: Effect context with terminal state
-            partial: PartialUpdate indicating which regions changed
-
-        Returns:
-            Processed buffer (may be same object or new list)
-        """
-        # Default: fall back to full processing
-        return self.process(buf, ctx)
+    def process(self, buf: list[str], ctx: EffectContext) -> list[str]: ...
 
     @abstractmethod
-    def configure(self, config: EffectConfig) -> None:
-        """Configure the effect with new settings.
-
-        Args:
-            config: New configuration to apply
-        """
-        ...
+    def configure(self, config: EffectConfig) -> None: ...
 
 
 def create_effect_context(
@@ -169,6 +40,7 @@ def create_effect_context(
     terminal_height: int = 24,
     scroll_cam: int = 0,
     ticker_height: int = 0,
+    camera_x: int = 0,
    mic_excess: float = 0.0,
     grad_offset: float = 0.0,
     frame_number: int = 0,
@@ -181,6 +53,7 @@ def create_effect_context(
         terminal_height=terminal_height,
         scroll_cam=scroll_cam,
         ticker_height=ticker_height,
+        camera_x=camera_x,
         mic_excess=mic_excess,
         grad_offset=grad_offset,
         frame_number=frame_number,
@@ -193,58 +66,3 @@ def create_effect_context(
|
|||||||
class PipelineConfig:
|
class PipelineConfig:
|
||||||
order: list[str] = field(default_factory=list)
|
order: list[str] = field(default_factory=list)
|
||||||
effects: dict[str, EffectConfig] = field(default_factory=dict)
|
effects: dict[str, EffectConfig] = field(default_factory=dict)
|
||||||
|
|
||||||
|
|
||||||
def apply_param_bindings(
|
|
||||||
effect: "EffectPlugin",
|
|
||||||
ctx: EffectContext,
|
|
||||||
) -> EffectConfig:
|
|
||||||
"""Apply sensor bindings to effect config.
|
|
||||||
|
|
||||||
This resolves param_bindings declarations by reading sensor values
|
|
||||||
from the context and applying transform functions.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
effect: The effect with param_bindings to apply
|
|
||||||
ctx: EffectContext containing sensor values
|
|
||||||
|
|
||||||
Returns:
|
|
||||||
Modified EffectConfig with sensor-driven values applied.
|
|
||||||
"""
|
|
||||||
import copy
|
|
||||||
|
|
||||||
if not effect.param_bindings:
|
|
||||||
return effect.config
|
|
||||||
|
|
||||||
config = copy.deepcopy(effect.config)
|
|
||||||
|
|
||||||
for param_name, binding in effect.param_bindings.items():
|
|
||||||
sensor_name: str = binding.get("sensor", "")
|
|
||||||
transform: str = binding.get("transform", "linear")
|
|
||||||
|
|
||||||
if not sensor_name:
|
|
||||||
continue
|
|
||||||
|
|
||||||
sensor_value = ctx.get_sensor_value(sensor_name)
|
|
||||||
if sensor_value is None:
|
|
||||||
continue
|
|
||||||
|
|
||||||
if transform == "linear":
|
|
||||||
applied_value: float = sensor_value
|
|
||||||
elif transform == "exponential":
|
|
||||||
applied_value = sensor_value**2
|
|
||||||
elif transform == "threshold":
|
|
||||||
threshold = float(binding.get("threshold", 0.5))
|
|
||||||
applied_value = 1.0 if sensor_value > threshold else 0.0
|
|
||||||
elif transform == "inverse":
|
|
||||||
applied_value = 1.0 - sensor_value
|
|
||||||
else:
|
|
||||||
applied_value = sensor_value
|
|
||||||
|
|
||||||
config.params[f"{param_name}_sensor"] = applied_value
|
|
||||||
|
|
||||||
if param_name == "intensity":
|
|
||||||
base_intensity = effect.config.intensity
|
|
||||||
config.intensity = base_intensity * (0.5 + applied_value * 0.5)
|
|
||||||
|
|
||||||
return config
|
|
||||||
|
|||||||
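The deleted `apply_param_bindings` above dispatches each sensor reading through one of four named transforms and then rescales the `intensity` param. A standalone sketch of just that arithmetic (hypothetical helper names, not part of the codebase):

```python
def apply_transform(sensor_value: float, transform: str, threshold: float = 0.5) -> float:
    """Mirror of the transform dispatch removed above; unknown names fall back to linear."""
    if transform == "linear":
        return sensor_value
    if transform == "exponential":
        return sensor_value**2
    if transform == "threshold":
        return 1.0 if sensor_value > threshold else 0.0
    if transform == "inverse":
        return 1.0 - sensor_value
    return sensor_value


def scaled_intensity(base: float, applied: float) -> float:
    # Intensity scaling used for the "intensity" param: base * (0.5 + v * 0.5),
    # i.e. the sensor modulates between 50% and 100% of the configured intensity.
    return base * (0.5 + applied * 0.5)
```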
engine/emitters.py — new file (25 lines)
@@ -0,0 +1,25 @@
+"""
+Event emitter protocols - abstract interfaces for event-producing components.
+"""
+
+from collections.abc import Callable
+from typing import Any, Protocol
+
+
+class EventEmitter(Protocol):
+    """Protocol for components that emit events."""
+
+    def subscribe(self, callback: Callable[[Any], None]) -> None: ...
+    def unsubscribe(self, callback: Callable[[Any], None]) -> None: ...
+
+
+class Startable(Protocol):
+    """Protocol for components that can be started."""
+
+    def start(self) -> Any: ...
+
+
+class Stoppable(Protocol):
+    """Protocol for components that can be stopped."""
+
+    def stop(self) -> None: ...
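Because these are `typing.Protocol` classes, conformance is structural: any class with matching method shapes satisfies `EventEmitter` without inheriting from it. A minimal illustration (the `ToyBus` class and the `@runtime_checkable` decorator are additions for the demo, not from the repo — the repo's protocols are not declared runtime-checkable):

```python
from collections.abc import Callable
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class EventEmitter(Protocol):
    def subscribe(self, callback: Callable[[Any], None]) -> None: ...
    def unsubscribe(self, callback: Callable[[Any], None]) -> None: ...


class ToyBus:
    """Structurally satisfies EventEmitter without subclassing it."""

    def __init__(self) -> None:
        self._subs: list[Callable[[Any], None]] = []

    def subscribe(self, callback: Callable[[Any], None]) -> None:
        self._subs.append(callback)

    def unsubscribe(self, callback: Callable[[Any], None]) -> None:
        self._subs.remove(callback)


bus = ToyBus()
print(isinstance(bus, EventEmitter))  # runtime_checkable enables this check
```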
engine/fetch_code.py — new file (67 lines)
@@ -0,0 +1,67 @@
+"""
+Source code feed — reads engine/*.py and emits non-blank, non-comment lines
+as scroll items. Used by --code mode.
+Depends on: nothing (stdlib only).
+"""
+
+import ast
+from pathlib import Path
+
+_ENGINE_DIR = Path(__file__).resolve().parent
+
+
+def _scope_map(source: str) -> dict[int, str]:
+    """Return {line_number: scope_label} for every line in source.
+
+    Nodes are sorted by range size descending so inner scopes overwrite
+    outer ones, guaranteeing the narrowest enclosing scope wins.
+    """
+    try:
+        tree = ast.parse(source)
+    except SyntaxError:
+        return {}
+
+    nodes = []
+    for node in ast.walk(tree):
+        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
+            end = getattr(node, "end_lineno", node.lineno)
+            span = end - node.lineno
+            nodes.append((span, node))
+
+    # Largest range first → inner scopes overwrite on second pass
+    nodes.sort(key=lambda x: x[0], reverse=True)
+
+    scope = {}
+    for _, node in nodes:
+        end = getattr(node, "end_lineno", node.lineno)
+        if isinstance(node, ast.ClassDef):
+            label = node.name
+        else:
+            label = f"{node.name}()"
+        for ln in range(node.lineno, end + 1):
+            scope[ln] = label
+
+    return scope
+
+
+def fetch_code():
+    """Read engine/*.py and return (items, line_count, 0).
+
+    Each item is (text, src, ts) where:
+      text = the code line (rstripped, indentation preserved)
+      src  = enclosing function/class name, e.g. 'stream()' or '<module>'
+      ts   = dotted module path, e.g. 'engine.scroll'
+    """
+    items = []
+    for path in sorted(_ENGINE_DIR.glob("*.py")):
+        module = f"engine.{path.stem}"
+        source = path.read_text(encoding="utf-8")
+        scope = _scope_map(source)
+        for lineno, raw in enumerate(source.splitlines(), start=1):
+            stripped = raw.strip()
+            if not stripped or stripped.startswith("#"):
+                continue
+            label = scope.get(lineno, "<module>")
+            items.append((raw.rstrip(), label, module))
+
+    return items, len(items), 0
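The narrowest-scope rule in `_scope_map` can be checked on a small snippet: sorting spans in descending order means an inner `def` is written after its enclosing `class` and overwrites its label. A self-contained sketch of the same `ast` approach (the function is restated here so it runs standalone):

```python
import ast


def scope_map(source: str) -> dict[int, str]:
    # Same idea as _scope_map in the diff: widest spans first,
    # inner scopes overwrite, so the narrowest label wins.
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return {}
    nodes = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            end = getattr(node, "end_lineno", node.lineno)
            nodes.append((end - node.lineno, node))
    nodes.sort(key=lambda x: x[0], reverse=True)
    scope = {}
    for _, node in nodes:
        end = getattr(node, "end_lineno", node.lineno)
        label = node.name if isinstance(node, ast.ClassDef) else f"{node.name}()"
        for ln in range(node.lineno, end + 1):
            scope[ln] = label
    return scope


src = "class A:\n    def f(self):\n        return 1\n"
print(scope_map(src))  # line 1 -> 'A', lines 2-3 -> 'f()'
```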
@@ -1,73 +0,0 @@
-"""
-Core interfaces for the mainline pipeline architecture.
-
-This module provides all abstract base classes and protocols that define
-the contracts between pipeline components:
-
-- Stage: Base class for pipeline components (imported from pipeline.core)
-- DataSource: Abstract data providers (imported from data_sources.sources)
-- EffectPlugin: Visual effects interface (imported from effects.types)
-- Sensor: Real-time input interface (imported from sensors)
-- Display: Output backend protocol (imported from display)
-
-This module provides a centralized import location for all interfaces.
-"""
-
-from engine.data_sources.sources import (
-    DataSource,
-    ImageItem,
-    SourceItem,
-)
-from engine.display import Display
-from engine.effects.types import (
-    EffectConfig,
-    EffectContext,
-    EffectPlugin,
-    PartialUpdate,
-    PipelineConfig,
-    apply_param_bindings,
-    create_effect_context,
-)
-from engine.pipeline.core import (
-    DataType,
-    Stage,
-    StageConfig,
-    StageError,
-    StageResult,
-    create_stage_error,
-)
-from engine.sensors import (
-    Sensor,
-    SensorStage,
-    SensorValue,
-    create_sensor_stage,
-)
-
-__all__ = [
-    # Stage interfaces
-    "DataType",
-    "Stage",
-    "StageConfig",
-    "StageError",
-    "StageResult",
-    "create_stage_error",
-    # Data source interfaces
-    "DataSource",
-    "ImageItem",
-    "SourceItem",
-    # Effect interfaces
-    "EffectConfig",
-    "EffectContext",
-    "EffectPlugin",
-    "PartialUpdate",
-    "PipelineConfig",
-    "apply_param_bindings",
-    "create_effect_context",
-    # Sensor interfaces
-    "Sensor",
-    "SensorStage",
-    "SensorValue",
-    "create_sensor_stage",
-    # Display protocol
-    "Display",
-]
engine/layers.py — new file (260 lines)
@@ -0,0 +1,260 @@
+"""
+Layer compositing — message overlay, ticker zone, firehose, noise.
+Depends on: config, render, effects.
+"""
+
+import random
+import re
+import time
+from datetime import datetime
+
+from engine import config
+from engine.effects import (
+    EffectChain,
+    EffectContext,
+    fade_line,
+    firehose_line,
+    glitch_bar,
+    noise,
+    vis_trunc,
+)
+from engine.render import big_wrap, lr_gradient, msg_gradient
+from engine.terminal import RST, W_COOL
+
+MSG_META = "\033[38;5;245m"
+MSG_BORDER = "\033[2;38;5;37m"
+
+
+def render_message_overlay(
+    msg: tuple[str, str, float] | None,
+    w: int,
+    h: int,
+    msg_cache: tuple,
+) -> tuple[list[str], tuple]:
+    """Render ntfy message overlay.
+
+    Args:
+        msg: (title, body, timestamp) or None
+        w: terminal width
+        h: terminal height
+        msg_cache: (cache_key, rendered_rows) for caching
+
+    Returns:
+        (list of ANSI strings, updated cache)
+    """
+    overlay = []
+    if msg is None:
+        return overlay, msg_cache
+
+    m_title, m_body, m_ts = msg
+    display_text = m_body or m_title or "(empty)"
+    display_text = re.sub(r"\s+", " ", display_text.upper())
+
+    cache_key = (display_text, w)
+    if msg_cache[0] != cache_key:
+        msg_rows = big_wrap(display_text, w - 4)
+        msg_cache = (cache_key, msg_rows)
+    else:
+        msg_rows = msg_cache[1]
+
+    msg_rows = msg_gradient(
+        msg_rows, (time.monotonic() * config.GRAD_SPEED) % 1.0
+    )
+
+    elapsed_s = int(time.monotonic() - m_ts)
+    remaining = max(0, config.MESSAGE_DISPLAY_SECS - elapsed_s)
+    ts_str = datetime.now().strftime("%H:%M:%S")
+    panel_h = len(msg_rows) + 2
+    panel_top = max(0, (h - panel_h) // 2)
+
+    row_idx = 0
+    for mr in msg_rows:
+        ln = vis_trunc(mr, w)
+        overlay.append(f"\033[{panel_top + row_idx + 1};1H {ln}\033[0m\033[K")
+        row_idx += 1
+
+    meta_parts = []
+    if m_title and m_title != m_body:
+        meta_parts.append(m_title)
+    meta_parts.append(f"ntfy \u00b7 {ts_str} \u00b7 {remaining}s")
+    meta = (
+        " " + " \u00b7 ".join(meta_parts)
+        if len(meta_parts) > 1
+        else " " + meta_parts[0]
+    )
+    overlay.append(f"\033[{panel_top + row_idx + 1};1H{MSG_META}{meta}\033[0m\033[K")
+    row_idx += 1
+
+    bar = "\u2500" * (w - 4)
+    overlay.append(f"\033[{panel_top + row_idx + 1};1H {MSG_BORDER}{bar}\033[0m\033[K")
+
+    return overlay, msg_cache
+
+
+def render_ticker_zone(
+    active: list,
+    scroll_cam: int,
+    ticker_h: int,
+    w: int,
+    noise_cache: dict,
+    grad_offset: float,
+) -> tuple[list[str], dict]:
+    """Render the ticker scroll zone.
+
+    Args:
+        active: list of (content_rows, color, canvas_y, meta_idx)
+        scroll_cam: camera position (viewport top)
+        ticker_h: height of ticker zone
+        w: terminal width
+        noise_cache: dict of cy -> noise string
+        grad_offset: gradient animation offset
+
+    Returns:
+        (list of ANSI strings, updated noise_cache)
+    """
+    buf = []
+    top_zone = max(1, int(ticker_h * 0.25))
+    bot_zone = max(1, int(ticker_h * 0.10))
+
+    def noise_at(cy):
+        if cy not in noise_cache:
+            noise_cache[cy] = noise(w) if random.random() < 0.15 else None
+        return noise_cache[cy]
+
+    for r in range(ticker_h):
+        scr_row = r + 1
+        cy = scroll_cam + r
+        top_f = min(1.0, r / top_zone) if top_zone > 0 else 1.0
+        bot_f = min(1.0, (ticker_h - 1 - r) / bot_zone) if bot_zone > 0 else 1.0
+        row_fade = min(top_f, bot_f)
+        drawn = False
+
+        for content, hc, by, midx in active:
+            cr = cy - by
+            if 0 <= cr < len(content):
+                raw = content[cr]
+                if cr != midx:
+                    colored = lr_gradient([raw], grad_offset)[0]
+                else:
+                    colored = raw
+                ln = vis_trunc(colored, w)
+                if row_fade < 1.0:
+                    ln = fade_line(ln, row_fade)
+
+                if cr == midx:
+                    buf.append(f"\033[{scr_row};1H{W_COOL}{ln}{RST}\033[K")
+                elif ln.strip():
+                    buf.append(f"\033[{scr_row};1H{ln}{RST}\033[K")
+                else:
+                    buf.append(f"\033[{scr_row};1H\033[K")
+                drawn = True
+                break
+
+        if not drawn:
+            n = noise_at(cy)
+            if row_fade < 1.0 and n:
+                n = fade_line(n, row_fade)
+            if n:
+                buf.append(f"\033[{scr_row};1H{n}")
+            else:
+                buf.append(f"\033[{scr_row};1H\033[K")
+
+    return buf, noise_cache
+
+
+def apply_glitch(
+    buf: list[str],
+    ticker_buf_start: int,
+    mic_excess: float,
+    w: int,
+) -> list[str]:
+    """Apply glitch effect to ticker buffer.
+
+    Args:
+        buf: current buffer
+        ticker_buf_start: index where ticker starts in buffer
+        mic_excess: mic level above threshold
+        w: terminal width
+
+    Returns:
+        Updated buffer with glitches applied
+    """
+    glitch_prob = 0.32 + min(0.9, mic_excess * 0.16)
+    n_hits = 4 + int(mic_excess / 2)
+    ticker_buf_len = len(buf) - ticker_buf_start
+
+    if random.random() < glitch_prob and ticker_buf_len > 0:
+        for _ in range(min(n_hits, ticker_buf_len)):
+            gi = random.randint(0, ticker_buf_len - 1)
+            scr_row = gi + 1
+            buf[ticker_buf_start + gi] = f"\033[{scr_row};1H{glitch_bar(w)}"
+
+    return buf
+
+
+def render_firehose(items: list, w: int, fh: int, h: int) -> list[str]:
+    """Render firehose strip at bottom of screen."""
+    buf = []
+    if fh > 0:
+        for fr in range(fh):
+            scr_row = h - fh + fr + 1
+            fline = firehose_line(items, w)
+            buf.append(f"\033[{scr_row};1H{fline}\033[K")
+    return buf
+
+
+_effect_chain = None
+
+
+def init_effects() -> None:
+    """Initialize effect plugins and chain."""
+    global _effect_chain
+    from engine.effects import EffectChain, get_registry
+
+    registry = get_registry()
+
+    import effects_plugins
+
+    effects_plugins.discover_plugins()
+
+    chain = EffectChain(registry)
+    chain.set_order(["noise", "fade", "glitch", "firehose"])
+    _effect_chain = chain
+
+
+def process_effects(
+    buf: list[str],
+    w: int,
+    h: int,
+    scroll_cam: int,
+    ticker_h: int,
+    mic_excess: float,
+    grad_offset: float,
+    frame_number: int,
+    has_message: bool,
+    items: list,
+) -> list[str]:
+    """Process buffer through effect chain."""
+    if _effect_chain is None:
+        init_effects()
+
+    ctx = EffectContext(
+        terminal_width=w,
+        terminal_height=h,
+        scroll_cam=scroll_cam,
+        ticker_height=ticker_h,
+        mic_excess=mic_excess,
+        grad_offset=grad_offset,
+        frame_number=frame_number,
+        has_message=has_message,
+        items=items,
+    )
+    return _effect_chain.process(buf, ctx)
+
+
+def get_effect_chain() -> EffectChain | None:
+    """Get the effect chain instance."""
+    global _effect_chain
+    if _effect_chain is None:
+        init_effects()
+    return _effect_chain
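Worth noting about `apply_glitch` above: the probability term is `0.32 + min(0.9, mic_excess * 0.16)`, which saturates once `mic_excess` reaches 0.9 / 0.16 = 5.625 dB, topping out at 1.22 — past that point `random.random() < glitch_prob` is always true and every frame glitches. A sketch of just that arithmetic (`glitch_params` is a hypothetical helper for illustration):

```python
def glitch_params(mic_excess: float) -> tuple[float, int]:
    # Mirrors the arithmetic in apply_glitch: the probability term caps at
    # 0.32 + 0.9 = 1.22, so sufficiently loud input glitches on every frame.
    glitch_prob = 0.32 + min(0.9, mic_excess * 0.16)
    n_hits = 4 + int(mic_excess / 2)
    return glitch_prob, n_hits


print(glitch_params(0.0))   # quiet: probability 0.32, 4 hits
print(glitch_params(10.0))  # loud: probability saturated at 1.22
```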
engine/mic.py — new file (96 lines)
@@ -0,0 +1,96 @@
+"""
+Microphone input monitor — standalone, no internal dependencies.
+Gracefully degrades if sounddevice/numpy are unavailable.
+"""
+
+import atexit
+from collections.abc import Callable
+from datetime import datetime
+
+try:
+    import numpy as _np
+    import sounddevice as _sd
+
+    _HAS_MIC = True
+except Exception:
+    _HAS_MIC = False
+
+
+from engine.events import MicLevelEvent
+
+
+class MicMonitor:
+    """Background mic stream that exposes current RMS dB level."""
+
+    def __init__(self, threshold_db=50):
+        self.threshold_db = threshold_db
+        self._db = -99.0
+        self._stream = None
+        self._subscribers: list[Callable[[MicLevelEvent], None]] = []
+
+    @property
+    def available(self):
+        """True if sounddevice is importable."""
+        return _HAS_MIC
+
+    @property
+    def db(self):
+        """Current RMS dB level."""
+        return self._db
+
+    @property
+    def excess(self):
+        """dB above threshold (clamped to 0)."""
+        return max(0.0, self._db - self.threshold_db)
+
+    def subscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
+        """Register a callback to be called when mic level changes."""
+        self._subscribers.append(callback)
+
+    def unsubscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
+        """Remove a registered callback."""
+        if callback in self._subscribers:
+            self._subscribers.remove(callback)
+
+    def _emit(self, event: MicLevelEvent) -> None:
+        """Emit an event to all subscribers."""
+        for cb in self._subscribers:
+            try:
+                cb(event)
+            except Exception:
+                pass
+
+    def start(self):
+        """Start background mic stream. Returns True on success, False/None otherwise."""
+        if not _HAS_MIC:
+            return None
+
+        def _cb(indata, frames, t, status):
+            rms = float(_np.sqrt(_np.mean(indata**2)))
+            self._db = 20 * _np.log10(rms) if rms > 0 else -99.0
+            if self._subscribers:
+                event = MicLevelEvent(
+                    db_level=self._db,
+                    excess_above_threshold=max(0.0, self._db - self.threshold_db),
+                    timestamp=datetime.now(),
+                )
+                self._emit(event)
+
+        try:
+            self._stream = _sd.InputStream(
+                callback=_cb, channels=1, samplerate=44100, blocksize=2048
+            )
+            self._stream.start()
+            atexit.register(self.stop)
+            return True
+        except Exception:
+            return False
+
+    def stop(self):
+        """Stop the mic stream if running."""
+        if self._stream:
+            try:
+                self._stream.stop()
+            except Exception:
+                pass
+            self._stream = None
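The callback in `MicMonitor.start` reduces each audio block to RMS and converts it to dB with `20 * log10(rms)`, flooring silence at -99.0. The same formula can be checked without numpy or a sound device (standalone sketch; `rms_db` is a hypothetical helper, not repo code):

```python
import math


def rms_db(samples: list[float]) -> float:
    # Same formula as MicMonitor's callback: RMS of the block, then
    # 20*log10, with a -99.0 floor when the block is silent.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else -99.0


print(rms_db([0.0, 0.0, 0.0]))        # silence -> -99.0
print(round(rms_db([1.0, -1.0]), 1))  # full-scale square wave -> 0.0 dB
```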
@@ -1,107 +0,0 @@
-"""
-Unified Pipeline Architecture.
-
-This module provides a clean, dependency-managed pipeline system:
-- Stage: Base class for all pipeline components
-- Pipeline: DAG-based execution orchestrator
-- PipelineParams: Runtime configuration for animation
-- PipelinePreset: Pre-configured pipeline configurations
-- StageRegistry: Unified registration for all stage types
-
-The pipeline architecture supports:
-- Sources: Data providers (headlines, poetry, pipeline viz)
-- Effects: Post-processors (noise, fade, glitch, hud)
-- Displays: Output backends (terminal, pygame, websocket)
-- Cameras: Viewport controllers (vertical, horizontal, omni)
-
-Example:
-    from engine.pipeline import Pipeline, PipelineConfig, StageRegistry
-
-    pipeline = Pipeline(PipelineConfig(source="headlines", display="terminal"))
-    pipeline.add_stage("source", StageRegistry.create("source", "headlines"))
-    pipeline.add_stage("display", StageRegistry.create("display", "terminal"))
-    pipeline.build().initialize()
-
-    result = pipeline.execute(initial_data)
-"""
-
-from engine.pipeline.controller import (
-    Pipeline,
-    PipelineConfig,
-    PipelineRunner,
-    create_default_pipeline,
-    create_pipeline_from_params,
-)
-from engine.pipeline.core import (
-    PipelineContext,
-    Stage,
-    StageConfig,
-    StageError,
-    StageResult,
-)
-from engine.pipeline.params import (
-    DEFAULT_HEADLINE_PARAMS,
-    DEFAULT_PIPELINE_PARAMS,
-    DEFAULT_PYGAME_PARAMS,
-    PipelineParams,
-)
-from engine.pipeline.presets import (
-    DEMO_PRESET,
-    FIREHOSE_PRESET,
-    PIPELINE_VIZ_PRESET,
-    POETRY_PRESET,
-    PRESETS,
-    SIXEL_PRESET,
-    WEBSOCKET_PRESET,
-    PipelinePreset,
-    create_preset_from_params,
-    get_preset,
-    list_presets,
-)
-from engine.pipeline.registry import (
-    StageRegistry,
-    discover_stages,
-    register_camera,
-    register_display,
-    register_effect,
-    register_source,
-)
-
-__all__ = [
-    # Core
-    "Stage",
-    "StageConfig",
-    "StageError",
-    "StageResult",
-    "PipelineContext",
-    # Controller
-    "Pipeline",
-    "PipelineConfig",
-    "PipelineRunner",
-    "create_default_pipeline",
-    "create_pipeline_from_params",
-    # Params
-    "PipelineParams",
-    "DEFAULT_HEADLINE_PARAMS",
-    "DEFAULT_PIPELINE_PARAMS",
-    "DEFAULT_PYGAME_PARAMS",
-    # Presets
-    "PipelinePreset",
-    "PRESETS",
-    "DEMO_PRESET",
-    "POETRY_PRESET",
-    "PIPELINE_VIZ_PRESET",
-    "WEBSOCKET_PRESET",
-    "SIXEL_PRESET",
-    "FIREHOSE_PRESET",
-    "get_preset",
-    "list_presets",
-    "create_preset_from_params",
-    # Registry
-    "StageRegistry",
-    "discover_stages",
-    "register_source",
-    "register_effect",
-    "register_display",
-    "register_camera",
-]
@@ -1,845 +0,0 @@
-"""
-Stage adapters - Bridge existing components to the Stage interface.
-
-This module provides adapters that wrap existing components
-(EffectPlugin, Display, DataSource, Camera) as Stage implementations.
-"""
-
-from typing import Any
-
-from engine.pipeline.core import PipelineContext, Stage
-
-
-class EffectPluginStage(Stage):
-    """Adapter wrapping EffectPlugin as a Stage."""
-
-    def __init__(self, effect_plugin, name: str = "effect"):
-        self._effect = effect_plugin
-        self.name = name
-        self.category = "effect"
-        self.optional = False
-
-    @property
-    def stage_type(self) -> str:
-        """Return stage_type based on effect name.
-
-        HUD effects are overlays.
-        """
-        if self.name == "hud":
-            return "overlay"
-        return self.category
-
-    @property
-    def render_order(self) -> int:
-        """Return render_order based on effect type.
-
-        HUD effects have high render_order to appear on top.
-        """
-        if self.name == "hud":
-            return 100  # High order for overlays
-        return 0
-
-    @property
-    def is_overlay(self) -> bool:
-        """Return True for HUD effects.
-
-        HUD is an overlay - it composes on top of the buffer
-        rather than transforming it for the next stage.
-        """
-        return self.name == "hud"
-
-    @property
-    def capabilities(self) -> set[str]:
-        return {f"effect.{self.name}"}
-
-    @property
-    def dependencies(self) -> set[str]:
-        return set()
-
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.TEXT_BUFFER}
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.TEXT_BUFFER}
-
-    def process(self, data: Any, ctx: PipelineContext) -> Any:
-        """Process data through the effect."""
-        if data is None:
-            return None
-        from engine.effects.types import EffectContext, apply_param_bindings
-
-        w = ctx.params.viewport_width if ctx.params else 80
-        h = ctx.params.viewport_height if ctx.params else 24
-        frame = ctx.params.frame_number if ctx.params else 0
-
-        effect_ctx = EffectContext(
-            terminal_width=w,
-            terminal_height=h,
-            scroll_cam=0,
-            ticker_height=h,
-            camera_x=0,
-            mic_excess=0.0,
-            grad_offset=(frame * 0.01) % 1.0,
-            frame_number=frame,
-            has_message=False,
-            items=ctx.get("items", []),
-        )
-
-        # Copy sensor state from PipelineContext to EffectContext
-        for key, value in ctx.state.items():
-            if key.startswith("sensor."):
-                effect_ctx.set_state(key, value)
-
-        # Copy metrics from PipelineContext to EffectContext
-        if "metrics" in ctx.state:
-            effect_ctx.set_state("metrics", ctx.state["metrics"])
-
-        # Apply sensor param bindings if effect has them
-        if hasattr(self._effect, "param_bindings") and self._effect.param_bindings:
-            bound_config = apply_param_bindings(self._effect, effect_ctx)
-            self._effect.configure(bound_config)
-
-        return self._effect.process(data, effect_ctx)
-
-
-class DisplayStage(Stage):
-    """Adapter wrapping Display as a Stage."""
-
-    def __init__(self, display, name: str = "terminal"):
-        self._display = display
-        self.name = name
-        self.category = "display"
-        self.optional = False
-
-    @property
-    def capabilities(self) -> set[str]:
-        return {"display.output"}
-
-    @property
-    def dependencies(self) -> set[str]:
-        return {"render.output"}  # Display needs rendered content
-
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.TEXT_BUFFER}  # Display consumes rendered text
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.NONE}  # Display is a terminal stage (no output)
-
-    def init(self, ctx: PipelineContext) -> bool:
-        w = ctx.params.viewport_width if ctx.params else 80
-        h = ctx.params.viewport_height if ctx.params else 24
-        result = self._display.init(w, h, reuse=False)
-        return result is not False
-
-    def process(self, data: Any, ctx: PipelineContext) -> Any:
-        """Output data to display."""
-        if data is not None:
-            self._display.show(data)
-        return data
-
-    def cleanup(self) -> None:
-        self._display.cleanup()
-
-
-class DataSourceStage(Stage):
-    """Adapter wrapping DataSource as a Stage."""
-
-    def __init__(self, data_source, name: str = "headlines"):
-        self._source = data_source
-        self.name = name
-        self.category = "source"
-        self.optional = False
-
-    @property
-    def capabilities(self) -> set[str]:
-        return {f"source.{self.name}"}
-
-    @property
-    def dependencies(self) -> set[str]:
-        return set()
-
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.NONE}  # Sources don't take input
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.SOURCE_ITEMS}
-
-    def process(self, data: Any, ctx: PipelineContext) -> Any:
-        """Fetch data from source."""
-        if hasattr(self._source, "get_items"):
-            return self._source.get_items()
-        return data
-
-
-class PassthroughStage(Stage):
-    """Simple stage that passes data through unchanged.
-
-    Used for sources that already provide the data in the correct format
-    (e.g., pipeline introspection that outputs text directly).
-    """
-
-    def __init__(self, name: str = "passthrough"):
-        self.name = name
-        self.category = "render"
-        self.optional = True
-
-    @property
-    def stage_type(self) -> str:
-        return "render"
-
-    @property
-    def capabilities(self) -> set[str]:
-        return {"render.output"}
-
-    @property
-    def dependencies(self) -> set[str]:
-        return {"source"}
-
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.SOURCE_ITEMS}
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.SOURCE_ITEMS}
-
-    def process(self, data: Any, ctx: PipelineContext) -> Any:
-        """Pass data through unchanged."""
-        return data
-
-
-class SourceItemsToBufferStage(Stage):
-    """Convert SourceItem objects to text buffer.
-
-    Takes a list of SourceItem objects and extracts their content,
-    splitting on newlines to create a proper text buffer for display.
-    """
-
-    def __init__(self, name: str = "items-to-buffer"):
-        self.name = name
-        self.category = "render"
-        self.optional = True
-
-    @property
-    def stage_type(self) -> str:
-        return "render"
-
-    @property
-    def capabilities(self) -> set[str]:
-        return {"render.output"}
-
-    @property
-    def dependencies(self) -> set[str]:
-        return {"source"}
-
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.SOURCE_ITEMS}
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.TEXT_BUFFER}
-
-    def process(self, data: Any, ctx: PipelineContext) -> Any:
-        """Convert SourceItem list to text buffer."""
-        if data is None:
-            return []
-
-        # If already a list of strings, return as-is
-        if isinstance(data, list) and data and isinstance(data[0], str):
-            return data
-
-        # If it's a list of SourceItem, extract content
-        from engine.data_sources import SourceItem
-
-        if isinstance(data, list):
-            result = []
|
||||||
for item in data:
|
|
||||||
if isinstance(item, SourceItem):
|
|
||||||
# Split content by newline to get individual lines
|
|
||||||
lines = item.content.split("\n")
|
|
||||||
result.extend(lines)
|
|
||||||
elif hasattr(item, "content"): # Has content attribute
|
|
||||||
lines = str(item.content).split("\n")
|
|
||||||
result.extend(lines)
|
|
||||||
else:
|
|
||||||
result.append(str(item))
|
|
||||||
return result
|
|
||||||
|
|
||||||
# Single item
|
|
||||||
if isinstance(data, SourceItem):
|
|
||||||
return data.content.split("\n")
|
|
||||||
|
|
||||||
return [str(data)]
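The conversion above can be sketched as a minimal standalone function (the `FakeItem` class is a hypothetical stand-in for `SourceItem`, which lives in `engine.data_sources`):

```python
from dataclasses import dataclass


@dataclass
class FakeItem:
    """Hypothetical stand-in for SourceItem."""
    content: str


def items_to_buffer(data):
    # Mirrors SourceItemsToBufferStage.process: each item's content is
    # split on newlines so multi-line items become multiple buffer rows.
    result = []
    for item in data:
        result.extend(str(getattr(item, "content", item)).split("\n"))
    return result


buffer = items_to_buffer([FakeItem("headline one\nsubtitle"), "plain string"])
# buffer == ["headline one", "subtitle", "plain string"]
```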


class CameraStage(Stage):
    """Adapter wrapping Camera as a Stage."""

    def __init__(self, camera, name: str = "vertical"):
        self._camera = camera
        self.name = name
        self.category = "camera"
        self.optional = True

    @property
    def capabilities(self) -> set[str]:
        return {"camera"}

    @property
    def dependencies(self) -> set[str]:
        return {"render.output"}  # Depend on rendered output from font or render stage

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}  # Camera works on rendered text

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Apply camera transformation to data."""
        if data is None or (isinstance(data, list) and len(data) == 0):
            return data
        if hasattr(self._camera, "apply"):
            viewport_width = ctx.params.viewport_width if ctx.params else 80
            viewport_height = ctx.params.viewport_height if ctx.params else 24
            buffer_height = len(data) if isinstance(data, list) else 0

            # Get global layout height for canvas (enables full scrolling range)
            total_layout_height = ctx.get("total_layout_height", buffer_height)

            # Preserve camera's configured canvas width, but ensure it's at least viewport_width
            # This allows horizontal/omni/floating/bounce cameras to scroll properly
            canvas_width = max(
                viewport_width, getattr(self._camera, "canvas_width", viewport_width)
            )

            # Update camera's viewport dimensions so it knows its actual bounds
            # Set canvas size to achieve desired viewport (viewport = canvas / zoom)
            if hasattr(self._camera, "set_canvas_size"):
                self._camera.set_canvas_size(
                    width=int(viewport_width * self._camera.zoom),
                    height=int(viewport_height * self._camera.zoom),
                )

            # Set canvas to full layout height so camera can scroll through all content
            self._camera.set_canvas_size(width=canvas_width, height=total_layout_height)

            # Update camera position (scroll) - uses global canvas for clamping
            if hasattr(self._camera, "update"):
                self._camera.update(1 / 60)

            # Store camera_y in context for ViewportFilterStage (global y position)
            ctx.set("camera_y", self._camera.y)

            # Apply camera viewport slicing to the partial buffer
            # The buffer starts at render_offset_y in global coordinates
            render_offset_y = ctx.get("render_offset_y", 0)

            # Temporarily shift camera to local buffer coordinates for apply()
            real_y = self._camera.y
            local_y = max(0, real_y - render_offset_y)

            # Temporarily shrink canvas to local buffer size so apply() works correctly
            self._camera.set_canvas_size(width=canvas_width, height=buffer_height)
            self._camera.y = local_y

            # Apply slicing
            result = self._camera.apply(data, viewport_width, viewport_height)

            # Restore global canvas and camera position for next frame
            self._camera.set_canvas_size(width=canvas_width, height=total_layout_height)
            self._camera.y = real_y

            return result
        return data

    def cleanup(self) -> None:
        if hasattr(self._camera, "reset"):
            self._camera.reset()
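The key trick in `CameraStage.process` is the global-to-local coordinate shift: the incoming buffer is only a partial render that begins at `render_offset_y` in global coordinates, so the camera's global `y` is translated into buffer space before slicing. A minimal sketch of that shift (the `slice_viewport` helper is illustrative, not part of the engine):

```python
def slice_viewport(buffer, camera_y, render_offset_y, viewport_height):
    # Shift the camera's global y into local buffer coordinates,
    # then slice one viewport's worth of rows.
    local_y = max(0, camera_y - render_offset_y)
    return buffer[local_y:local_y + viewport_height]


# Partial buffer whose first row sits at global y=100:
rows = [f"row {i}" for i in range(100, 120)]
visible = slice_viewport(rows, camera_y=105, render_offset_y=100, viewport_height=4)
# visible == ["row 105", "row 106", "row 107", "row 108"]
```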


class ViewportFilterStage(Stage):
    """Stage that limits items based on layout calculation.

    Computes cumulative y-offsets for all items using cheap height estimation,
    then returns only items that overlap the camera's viewport window.
    This prevents FontStage from rendering thousands of items when only a few
    are visible, while still allowing camera scrolling through all content.
    """

    def __init__(self, name: str = "viewport-filter"):
        self.name = name
        self.category = "filter"
        self.optional = False
        self._cached_count = 0
        self._layout: list[tuple[int, int]] = []

    @property
    def stage_type(self) -> str:
        return "filter"

    @property
    def capabilities(self) -> set[str]:
        return {f"filter.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return {"source"}

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.SOURCE_ITEMS}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.SOURCE_ITEMS}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Filter items based on layout and camera position."""
        if data is None or not isinstance(data, list):
            return data

        viewport_height = ctx.params.viewport_height if ctx.params else 24
        viewport_width = ctx.params.viewport_width if ctx.params else 80
        camera_y = ctx.get("camera_y", 0)

        # Recompute layout when item count OR viewport width changes
        cached_width = getattr(self, "_cached_width", None)
        if len(data) != self._cached_count or cached_width != viewport_width:
            self._layout = []
            y = 0
            from engine.render.blocks import estimate_block_height

            for item in data:
                if hasattr(item, "content"):
                    title = item.content
                elif isinstance(item, tuple):
                    title = str(item[0]) if item else ""
                else:
                    title = str(item)
                h = estimate_block_height(title, viewport_width)
                self._layout.append((y, h))
                y += h
            self._cached_count = len(data)
            self._cached_width = viewport_width

        # Find items visible in [camera_y - buffer, camera_y + viewport_height + buffer]
        buffer_zone = viewport_height
        vis_start = max(0, camera_y - buffer_zone)
        vis_end = camera_y + viewport_height + buffer_zone

        visible_items = []
        render_offset_y = 0
        first_visible_found = False
        for i, (start_y, height) in enumerate(self._layout):
            item_end = start_y + height
            if item_end > vis_start and start_y < vis_end:
                if not first_visible_found:
                    render_offset_y = start_y
                    first_visible_found = True
                visible_items.append(data[i])

        # Compute total layout height for the canvas
        total_layout_height = 0
        if self._layout:
            last_start, last_height = self._layout[-1]
            total_layout_height = last_start + last_height

        # Store metadata for CameraStage
        ctx.set("render_offset_y", render_offset_y)
        ctx.set("total_layout_height", total_layout_height)

        # Always return at least one item to avoid empty buffer errors
        return visible_items if visible_items else data[:1]
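The windowing logic above reduces to a cumulative layout plus an interval-overlap test, sketched here as a standalone function (a simplification that takes pre-computed heights instead of estimating them):

```python
def visible_window(heights, camera_y, viewport_height):
    # Lay items out by cumulative height, then keep indices whose
    # [start, start+h) span overlaps the viewport padded by one
    # viewport of buffer on each side, as ViewportFilterStage does.
    layout, y = [], 0
    for h in heights:
        layout.append((y, h))
        y += h
    vis_start = max(0, camera_y - viewport_height)
    vis_end = camera_y + 2 * viewport_height
    return [i for i, (start, h) in enumerate(layout)
            if start + h > vis_start and start < vis_end]


# Ten items of height 5, a 10-row viewport scrolled to y=20:
indices = visible_window([5] * 10, camera_y=20, viewport_height=10)
# indices == [2, 3, 4, 5, 6, 7]
```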


class FontStage(Stage):
    """Stage that applies font rendering to content.

    FontStage is a Transform that takes raw content (text, headlines)
    and renders it to an ANSI-formatted buffer using the configured font.

    This decouples font rendering from data sources, allowing:
    - Different fonts per source
    - Runtime font swapping
    - Font as a pipeline stage

    Attributes:
        font_path: Path to font file (None = use config default)
        font_size: Font size in points (None = use config default)
        font_ref: Reference name for registered font ("default", "cjk", etc.)
    """

    def __init__(
        self,
        font_path: str | None = None,
        font_size: int | None = None,
        font_ref: str | None = "default",
        name: str = "font",
    ):
        self.name = name
        self.category = "transform"
        self.optional = False
        self._font_path = font_path
        self._font_size = font_size
        self._font_ref = font_ref
        self._font = None
        self._render_cache: dict[tuple[str, str, str, int], list[str]] = {}

    @property
    def stage_type(self) -> str:
        return "transform"

    @property
    def capabilities(self) -> set[str]:
        return {f"transform.{self.name}", "render.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"source"}

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.SOURCE_ITEMS}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    def init(self, ctx: PipelineContext) -> bool:
        """Initialize font from config or path."""
        from engine import config

        if self._font_path:
            try:
                from PIL import ImageFont

                size = self._font_size or config.FONT_SZ
                self._font = ImageFont.truetype(self._font_path, size)
            except Exception:
                return False
        return True

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Render content with font to buffer."""
        if data is None:
            return None

        from engine.render import make_block

        w = ctx.params.viewport_width if ctx.params else 80

        # If data is already a list of strings (buffer), return as-is
        if isinstance(data, list) and data and isinstance(data[0], str):
            return data

        # If data is a list of items, render each with font
        if isinstance(data, list):
            result = []
            for item in data:
                # Handle SourceItem or tuple (title, source, timestamp)
                if hasattr(item, "content"):
                    title = item.content
                    src = getattr(item, "source", "unknown")
                    ts = getattr(item, "timestamp", "0")
                elif isinstance(item, tuple):
                    title = item[0] if len(item) > 0 else ""
                    src = item[1] if len(item) > 1 else "unknown"
                    ts = str(item[2]) if len(item) > 2 else "0"
                else:
                    title = str(item)
                    src = "unknown"
                    ts = "0"

                # Check cache first
                cache_key = (title, src, ts, w)
                if cache_key in self._render_cache:
                    result.extend(self._render_cache[cache_key])
                    continue

                try:
                    block_lines, color_code, meta_idx = make_block(title, src, ts, w)
                    self._render_cache[cache_key] = block_lines
                    result.extend(block_lines)
                except Exception:
                    result.append(title)

            return result

        return data


class ImageToTextStage(Stage):
    """Transform that converts PIL Image to ASCII text buffer.

    Takes an ImageItem or PIL Image and converts it to a text buffer
    using ASCII character density mapping. The output can be displayed
    directly or further processed by effects.

    Attributes:
        width: Output width in characters
        height: Output height in characters
        charset: Character set for density mapping (default: simple ASCII)
    """

    def __init__(
        self,
        width: int = 80,
        height: int = 24,
        charset: str = " .:-=+*#%@",
        name: str = "image-to-text",
    ):
        self.name = name
        self.category = "transform"
        self.optional = False
        self.width = width
        self.height = height
        self.charset = charset

    @property
    def stage_type(self) -> str:
        return "transform"

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.PIL_IMAGE}  # Accepts PIL Image objects or ImageItem

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    @property
    def capabilities(self) -> set[str]:
        return {f"transform.{self.name}", "render.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"source"}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Convert PIL Image to text buffer."""
        if data is None:
            return None

        from engine.data_sources.sources import ImageItem

        # Extract PIL Image from various input types
        pil_image = None

        if isinstance(data, ImageItem) or hasattr(data, "image"):
            pil_image = data.image
        else:
            # Assume it's already a PIL Image
            pil_image = data

        # Check if it's a PIL Image
        if not hasattr(pil_image, "resize"):
            # Not a PIL Image, return as-is
            return data if isinstance(data, list) else [str(data)]

        # Convert to grayscale and resize
        try:
            if pil_image.mode != "L":
                pil_image = pil_image.convert("L")
        except Exception:
            return ["[image conversion error]"]

        # Calculate cell aspect ratio correction (characters are taller than wide)
        aspect_ratio = 0.5
        target_w = self.width
        target_h = int(self.height * aspect_ratio)

        # Resize image to target dimensions
        try:
            resized = pil_image.resize((target_w, target_h))
        except Exception:
            return ["[image resize error]"]

        # Map pixels to characters
        result = []
        pixels = list(resized.getdata())

        for row in range(target_h):
            line = ""
            for col in range(target_w):
                idx = row * target_w + col
                if idx < len(pixels):
                    brightness = pixels[idx]
                    char_idx = int((brightness / 255) * (len(self.charset) - 1))
                    line += self.charset[char_idx]
                else:
                    line += " "
            result.append(line)

        # Pad or trim to exact height
        while len(result) < self.height:
            result.append(" " * self.width)
        result = result[: self.height]

        # Pad lines to width
        result = [line.ljust(self.width) for line in result]

        return result
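The core of the ASCII conversion is the brightness-to-character mapping: scale a 0-255 grayscale value onto the density ramp so dark pixels become spaces and bright pixels dense glyphs. A minimal sketch using the stage's default charset:

```python
CHARSET = " .:-=+*#%@"  # ImageToTextStage's default density ramp


def brightness_to_char(brightness):
    # Same arithmetic as ImageToTextStage.process: map [0, 255]
    # linearly onto charset indices [0, len-1].
    return CHARSET[int((brightness / 255) * (len(CHARSET) - 1))]


row = "".join(brightness_to_char(b) for b in [0, 64, 128, 192, 255])
# row == " :=*@"
```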


def create_stage_from_display(display, name: str = "terminal") -> DisplayStage:
    """Create a Stage from a Display instance."""
    return DisplayStage(display, name)


def create_stage_from_effect(effect_plugin, name: str) -> EffectPluginStage:
    """Create a Stage from an EffectPlugin."""
    return EffectPluginStage(effect_plugin, name)


def create_stage_from_source(data_source, name: str = "headlines") -> DataSourceStage:
    """Create a Stage from a DataSource."""
    return DataSourceStage(data_source, name)


def create_stage_from_camera(camera, name: str = "vertical") -> CameraStage:
    """Create a Stage from a Camera."""
    return CameraStage(camera, name)


def create_stage_from_font(
    font_path: str | None = None,
    font_size: int | None = None,
    font_ref: str | None = "default",
    name: str = "font",
) -> FontStage:
    """Create a FontStage for rendering content with fonts."""
    return FontStage(
        font_path=font_path, font_size=font_size, font_ref=font_ref, name=name
    )


class CanvasStage(Stage):
    """Stage that manages a Canvas for rendering.

    CanvasStage creates and manages a 2D canvas that can hold rendered content.
    Other stages can write to and read from the canvas via the pipeline context.

    This enables:
    - Pre-rendering content off-screen
    - Multiple cameras viewing different regions
    - Smooth scrolling (camera moves, content stays)
    - Layer compositing

    Usage:
    - Add CanvasStage to pipeline
    - Other stages access canvas via: ctx.get("canvas")
    """

    def __init__(
        self,
        width: int = 80,
        height: int = 24,
        name: str = "canvas",
    ):
        self.name = name
        self.category = "system"
        self.optional = True
        self._width = width
        self._height = height
        self._canvas = None

    @property
    def stage_type(self) -> str:
        return "system"

    @property
    def capabilities(self) -> set[str]:
        return {"canvas"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.ANY}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.ANY}

    def init(self, ctx: PipelineContext) -> bool:
        from engine.canvas import Canvas

        self._canvas = Canvas(width=self._width, height=self._height)
        ctx.set("canvas", self._canvas)
        return True

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Pass through data but ensure canvas is in context."""
        if self._canvas is None:
            from engine.canvas import Canvas

            self._canvas = Canvas(width=self._width, height=self._height)
            ctx.set("canvas", self._canvas)

        # Get dirty regions from canvas and expose via context
        # Effects can access via ctx.get_state("canvas.dirty_rows")
        if self._canvas.is_dirty():
            dirty_rows = self._canvas.get_dirty_rows()
            ctx.set_state("canvas.dirty_rows", dirty_rows)
            ctx.set_state("canvas.dirty_regions", self._canvas.get_dirty_regions())

        return data

    def get_canvas(self):
        """Get the canvas instance."""
        return self._canvas

    def cleanup(self) -> None:
        self._canvas = None
@@ -1,576 +0,0 @@
"""
Pipeline controller - DAG-based pipeline execution.

The Pipeline class orchestrates stages in dependency order, handling
the complete render cycle from source to display.
"""

import time
from dataclasses import dataclass, field
from typing import Any

from engine.pipeline.core import PipelineContext, Stage, StageError, StageResult
from engine.pipeline.params import PipelineParams
from engine.pipeline.registry import StageRegistry


@dataclass
class PipelineConfig:
    """Configuration for a pipeline instance."""

    source: str = "headlines"
    display: str = "terminal"
    camera: str = "vertical"
    effects: list[str] = field(default_factory=list)
    enable_metrics: bool = True


@dataclass
class StageMetrics:
    """Metrics for a single stage execution."""

    name: str
    duration_ms: float
    chars_in: int = 0
    chars_out: int = 0


@dataclass
class FrameMetrics:
    """Metrics for a single frame through the pipeline."""

    frame_number: int
    total_ms: float
    stages: list[StageMetrics] = field(default_factory=list)


class Pipeline:
    """Main pipeline orchestrator.

    Manages the execution of all stages in dependency order,
    handling initialization, processing, and cleanup.
    """

    def __init__(
        self,
        config: PipelineConfig | None = None,
        context: PipelineContext | None = None,
    ):
        self.config = config or PipelineConfig()
        self.context = context or PipelineContext()
        self._stages: dict[str, Stage] = {}
        self._execution_order: list[str] = []
        self._initialized = False

        self._metrics_enabled = self.config.enable_metrics
        self._frame_metrics: list[FrameMetrics] = []
        self._max_metrics_frames = 60
        self._current_frame_number = 0

    def add_stage(self, name: str, stage: Stage) -> "Pipeline":
        """Add a stage to the pipeline."""
        self._stages[name] = stage
        return self

    def remove_stage(self, name: str) -> None:
        """Remove a stage from the pipeline."""
        if name in self._stages:
            del self._stages[name]

    def get_stage(self, name: str) -> Stage | None:
        """Get a stage by name."""
        return self._stages.get(name)

    def build(self) -> "Pipeline":
        """Build execution order based on dependencies."""
        self._capability_map = self._build_capability_map()
        self._execution_order = self._resolve_dependencies()
        self._validate_dependencies()
        self._validate_types()
        self._initialized = True
        return self

    def _build_capability_map(self) -> dict[str, list[str]]:
        """Build a map of capabilities to stage names.

        Returns:
            Dict mapping capability -> list of stage names that provide it
        """
        capability_map: dict[str, list[str]] = {}
        for name, stage in self._stages.items():
            for cap in stage.capabilities:
                if cap not in capability_map:
                    capability_map[cap] = []
                capability_map[cap].append(name)
        return capability_map

    def _find_stage_with_capability(self, capability: str) -> str | None:
        """Find a stage that provides the given capability.

        Supports wildcard matching:
        - "source" matches "source.headlines" (prefix match)
        - "source.*" matches "source.headlines"
        - "source.headlines" matches exactly

        Args:
            capability: The capability to find

        Returns:
            Stage name that provides the capability, or None if not found
        """
        # Exact match
        if capability in self._capability_map:
            return self._capability_map[capability][0]

        # Prefix match (e.g., "source" -> "source.headlines")
        for cap, stages in self._capability_map.items():
            if cap.startswith(capability + "."):
                return stages[0]

        # Wildcard match (e.g., "source.*" -> "source.headlines")
        if ".*" in capability:
            prefix = capability[:-2]  # Remove ".*"
            for cap in self._capability_map:
                if cap.startswith(prefix + "."):
                    return self._capability_map[cap][0]

        return None
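Capability resolution follows three tiers: exact match, prefix match, then explicit wildcard. A standalone sketch of the same lookup over a plain dict (the `find_capability` helper and the sample map are illustrative, not engine APIs):

```python
def find_capability(capability, capability_map):
    # Mirrors Pipeline._find_stage_with_capability: exact match first,
    # then prefix match ("source" -> "source.headlines"), then "*" glob.
    if capability in capability_map:
        return capability_map[capability][0]
    for cap, stages in capability_map.items():
        if cap.startswith(capability + "."):
            return stages[0]
    if capability.endswith(".*"):
        prefix = capability[:-2]
        for cap, stages in capability_map.items():
            if cap.startswith(prefix + "."):
                return stages[0]
    return None


caps = {"source.headlines": ["rss"], "render.output": ["font"]}
find_capability("source", caps)    # "rss" (prefix match)
find_capability("source.*", caps)  # "rss" (wildcard)
find_capability("camera", caps)    # None
```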

    def _resolve_dependencies(self) -> list[str]:
        """Resolve stage execution order using topological sort with capability matching."""
        ordered = []
        visited = set()
        temp_mark = set()

        def visit(name: str) -> None:
            if name in temp_mark:
                raise StageError(name, "Circular dependency detected")
            if name in visited:
                return

            temp_mark.add(name)
            stage = self._stages.get(name)
            if stage:
                for dep in stage.dependencies:
                    # Find a stage that provides this capability
                    dep_stage_name = self._find_stage_with_capability(dep)
                    if dep_stage_name:
                        visit(dep_stage_name)

            temp_mark.remove(name)
            visited.add(name)
            ordered.append(name)

        for name in self._stages:
            if name not in visited:
                visit(name)

        return ordered

    def _validate_dependencies(self) -> None:
        """Validate that all dependencies can be satisfied.

        Raises StageError if any dependency cannot be resolved.
        """
        missing: list[tuple[str, str]] = []  # (stage_name, capability)

        for name, stage in self._stages.items():
            for dep in stage.dependencies:
                if not self._find_stage_with_capability(dep):
                    missing.append((name, dep))

        if missing:
            msgs = [f" - {stage} needs {cap}" for stage, cap in missing]
            raise StageError(
                "validation",
                "Missing capabilities:\n" + "\n".join(msgs),
            )

    def _validate_types(self) -> None:
        """Validate inlet/outlet types between connected stages.

        PureData-style type validation. Each stage declares its inlet_types
        (what it accepts) and outlet_types (what it produces). This method
        validates that connected stages have compatible types.

        Raises StageError if type mismatch is detected.
        """
        from engine.pipeline.core import DataType

        errors: list[str] = []

        for i, name in enumerate(self._execution_order):
            stage = self._stages.get(name)
            if not stage:
                continue

            inlet_types = stage.inlet_types

            # Check against previous stage's outlet types
            if i > 0:
                prev_name = self._execution_order[i - 1]
                prev_stage = self._stages.get(prev_name)
                if prev_stage:
                    prev_outlets = prev_stage.outlet_types

                    # Check if any outlet type is accepted by this inlet
                    compatible = (
                        DataType.ANY in inlet_types
                        or DataType.ANY in prev_outlets
                        or bool(prev_outlets & inlet_types)
                    )

                    if not compatible:
                        errors.append(
                            f" - {name} (inlet: {inlet_types}) "
                            f"← {prev_name} (outlet: {prev_outlets})"
                        )

            # Check display/sink stages (should accept TEXT_BUFFER)
            if (
                stage.category == "display"
                and DataType.TEXT_BUFFER not in inlet_types
                and DataType.ANY not in inlet_types
            ):
                errors.append(f" - {name} is display but doesn't accept TEXT_BUFFER")

        if errors:
            raise StageError(
                "type_validation",
                "Type mismatch in pipeline connections:\n" + "\n".join(errors),
            )
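The type check reduces to set intersection with an `ANY` escape hatch on either side. A self-contained sketch (this `DataType` enum is a hypothetical stand-in for `engine.pipeline.core.DataType`):

```python
from enum import Enum, auto


class DataType(Enum):
    # Hypothetical stand-in for engine.pipeline.core.DataType
    ANY = auto()
    SOURCE_ITEMS = auto()
    TEXT_BUFFER = auto()


def compatible(prev_outlets, inlet_types):
    # Same rule as _validate_types: ANY on either side passes, otherwise
    # outlet and inlet sets must share at least one concrete type.
    return (DataType.ANY in inlet_types
            or DataType.ANY in prev_outlets
            or bool(prev_outlets & inlet_types))


compatible({DataType.SOURCE_ITEMS}, {DataType.SOURCE_ITEMS})  # True
compatible({DataType.TEXT_BUFFER}, {DataType.SOURCE_ITEMS})   # False
compatible({DataType.TEXT_BUFFER}, {DataType.ANY})            # True
```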

    def initialize(self) -> bool:
        """Initialize all stages in execution order."""
        for name in self._execution_order:
            stage = self._stages.get(name)
            if stage and not stage.init(self.context) and not stage.optional:
                return False
        return True

    def execute(self, data: Any | None = None) -> StageResult:
        """Execute the pipeline with the given input data.

        Pipeline execution:
        1. Execute all non-overlay stages in dependency order
        2. Apply overlay stages on top (sorted by render_order)
        """
        import os
        import sys

        debug = os.environ.get("MAINLINE_DEBUG_DATAFLOW") == "1"

        if debug:
            print(
                f"[PIPELINE.execute] Starting with data type: {type(data).__name__ if data else 'None'}",
                file=sys.stderr,
                flush=True,
            )

        if not self._initialized:
            self.build()

        if not self._initialized:
            return StageResult(
                success=False,
                data=None,
                error="Pipeline not initialized",
            )

        current_data = data
        frame_start = time.perf_counter() if self._metrics_enabled else 0
        stage_timings: list[StageMetrics] = []

        # Separate overlay stages from regular stages
        overlay_stages: list[tuple[int, Stage]] = []
        regular_stages: list[str] = []

        for name in self._execution_order:
            stage = self._stages.get(name)
            if not stage or not stage.is_enabled():
                continue

            # Safely check is_overlay - handle MagicMock and other non-bool returns
            try:
                is_overlay = bool(getattr(stage, "is_overlay", False))
            except Exception:
                is_overlay = False

            if is_overlay:
                # Safely get render_order
                try:
                    render_order = int(getattr(stage, "render_order", 0))
                except Exception:
                    render_order = 0
                overlay_stages.append((render_order, stage))
            else:
                regular_stages.append(name)

        # Execute regular stages in dependency order
        for name in regular_stages:
|
|
||||||
stage = self._stages.get(name)
|
|
||||||
if not stage or not stage.is_enabled():
|
|
||||||
continue
|
|
||||||
|
|
||||||
stage_start = time.perf_counter() if self._metrics_enabled else 0
|
|
||||||
|
|
||||||
try:
|
|
||||||
if debug:
|
|
||||||
data_info = type(current_data).__name__
|
|
||||||
if isinstance(current_data, list):
|
|
||||||
data_info += f"[{len(current_data)}]"
|
|
||||||
print(
|
|
||||||
f"[STAGE.{name}] Starting with: {data_info}",
|
|
||||||
file=sys.stderr,
|
|
||||||
flush=True,
|
|
||||||
)
|
|
||||||
|
|
||||||
current_data = stage.process(current_data, self.context)
|
|
||||||
|
|
||||||
if debug:
|
|
||||||
data_info = type(current_data).__name__
|
|
||||||
if isinstance(current_data, list):
|
|
||||||
data_info += f"[{len(current_data)}]"
|
|
||||||
print(
|
|
||||||
f"[STAGE.{name}] Completed, output: {data_info}",
|
|
||||||
file=sys.stderr,
|
|
||||||
flush=True,
|
|
||||||
)
|
|
||||||
except Exception as e:
|
|
||||||
if debug:
|
|
||||||
print(f"[STAGE.{name}] ERROR: {e}", file=sys.stderr, flush=True)
|
|
||||||
if not stage.optional:
|
|
||||||
return StageResult(
|
|
||||||
success=False,
|
|
||||||
data=current_data,
|
|
||||||
error=str(e),
|
|
||||||
stage_name=name,
|
|
||||||
)
|
|
||||||
continue
|
|
||||||
|
|
||||||
if self._metrics_enabled:
|
|
||||||
stage_duration = (time.perf_counter() - stage_start) * 1000
|
|
||||||
chars_in = len(str(data)) if data else 0
|
|
||||||
chars_out = len(str(current_data)) if current_data else 0
|
|
||||||
stage_timings.append(
|
|
||||||
StageMetrics(
|
|
||||||
name=name,
|
|
||||||
duration_ms=stage_duration,
|
|
||||||
chars_in=chars_in,
|
|
||||||
chars_out=chars_out,
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
# Apply overlay stages (sorted by render_order)
|
|
||||||
overlay_stages.sort(key=lambda x: x[0])
|
|
||||||
for render_order, stage in overlay_stages:
|
|
||||||
stage_start = time.perf_counter() if self._metrics_enabled else 0
|
|
||||||
stage_name = f"[overlay]{stage.name}"
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Overlays receive current_data but don't pass their output to next stage
|
|
||||||
# Instead, their output is composited on top
|
|
||||||
overlay_output = stage.process(current_data, self.context)
|
|
||||||
# For now, we just let the overlay output pass through
|
|
||||||
# In a more sophisticated implementation, we'd composite it
|
|
||||||
if overlay_output is not None:
|
|
||||||
current_data = overlay_output
|
|
||||||
except Exception as e:
|
|
||||||
if not stage.optional:
|
|
||||||
return StageResult(
|
|
||||||
success=False,
|
|
||||||
data=current_data,
|
|
||||||
error=str(e),
|
|
||||||
stage_name=stage_name,
|
|
||||||
)
|
|
||||||
|
|
||||||
if self._metrics_enabled:
|
|
||||||
stage_duration = (time.perf_counter() - stage_start) * 1000
|
|
||||||
chars_in = len(str(data)) if data else 0
|
|
||||||
chars_out = len(str(current_data)) if current_data else 0
|
|
||||||
stage_timings.append(
|
|
||||||
StageMetrics(
|
|
||||||
name=stage_name,
|
|
||||||
duration_ms=stage_duration,
|
|
||||||
chars_in=chars_in,
|
|
||||||
chars_out=chars_out,
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
if self._metrics_enabled:
|
|
||||||
total_duration = (time.perf_counter() - frame_start) * 1000
|
|
||||||
self._frame_metrics.append(
|
|
||||||
FrameMetrics(
|
|
||||||
frame_number=self._current_frame_number,
|
|
||||||
total_ms=total_duration,
|
|
||||||
stages=stage_timings,
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
# Store metrics in context for other stages (like HUD)
|
|
||||||
# This makes metrics a first-class pipeline citizen
|
|
||||||
if self.context:
|
|
||||||
self.context.state["metrics"] = self.get_metrics_summary()
|
|
||||||
|
|
||||||
if len(self._frame_metrics) > self._max_metrics_frames:
|
|
||||||
self._frame_metrics.pop(0)
|
|
||||||
self._current_frame_number += 1
|
|
||||||
|
|
||||||
return StageResult(success=True, data=current_data)
|
|
||||||
|
|
||||||
def cleanup(self) -> None:
|
|
||||||
"""Clean up all stages in reverse order."""
|
|
||||||
for name in reversed(self._execution_order):
|
|
||||||
stage = self._stages.get(name)
|
|
||||||
if stage:
|
|
||||||
try:
|
|
||||||
stage.cleanup()
|
|
||||||
except Exception:
|
|
||||||
pass
|
|
||||||
self._stages.clear()
|
|
||||||
self._initialized = False
|
|
||||||
|
|
||||||
@property
|
|
||||||
def stages(self) -> dict[str, Stage]:
|
|
||||||
"""Get all stages."""
|
|
||||||
return self._stages.copy()
|
|
||||||
|
|
||||||
@property
|
|
||||||
def execution_order(self) -> list[str]:
|
|
||||||
"""Get execution order."""
|
|
||||||
return self._execution_order.copy()
|
|
||||||
|
|
||||||
def get_stage_names(self) -> list[str]:
|
|
||||||
"""Get list of stage names."""
|
|
||||||
return list(self._stages.keys())
|
|
||||||
|
|
||||||
def get_overlay_stages(self) -> list[Stage]:
|
|
||||||
"""Get all overlay stages sorted by render_order."""
|
|
||||||
overlays = [stage for stage in self._stages.values() if stage.is_overlay]
|
|
||||||
overlays.sort(key=lambda s: s.render_order)
|
|
||||||
return overlays
|
|
||||||
|
|
||||||
def get_stage_type(self, name: str) -> str:
|
|
||||||
"""Get the stage_type for a stage."""
|
|
||||||
stage = self._stages.get(name)
|
|
||||||
return stage.stage_type if stage else ""
|
|
||||||
|
|
||||||
def get_render_order(self, name: str) -> int:
|
|
||||||
"""Get the render_order for a stage."""
|
|
||||||
stage = self._stages.get(name)
|
|
||||||
return stage.render_order if stage else 0
|
|
||||||
|
|
||||||
def get_metrics_summary(self) -> dict:
|
|
||||||
"""Get summary of collected metrics."""
|
|
||||||
if not self._frame_metrics:
|
|
||||||
return {"error": "No metrics collected"}
|
|
||||||
|
|
||||||
total_times = [f.total_ms for f in self._frame_metrics]
|
|
||||||
avg_total = sum(total_times) / len(total_times)
|
|
||||||
min_total = min(total_times)
|
|
||||||
max_total = max(total_times)
|
|
||||||
|
|
||||||
stage_stats: dict[str, dict] = {}
|
|
||||||
for frame in self._frame_metrics:
|
|
||||||
for stage in frame.stages:
|
|
||||||
if stage.name not in stage_stats:
|
|
||||||
stage_stats[stage.name] = {"times": [], "total_chars": 0}
|
|
||||||
stage_stats[stage.name]["times"].append(stage.duration_ms)
|
|
||||||
stage_stats[stage.name]["total_chars"] += stage.chars_out
|
|
||||||
|
|
||||||
for name, stats in stage_stats.items():
|
|
||||||
times = stats["times"]
|
|
||||||
stats["avg_ms"] = sum(times) / len(times)
|
|
||||||
stats["min_ms"] = min(times)
|
|
||||||
stats["max_ms"] = max(times)
|
|
||||||
del stats["times"]
|
|
||||||
|
|
||||||
return {
|
|
||||||
"frame_count": len(self._frame_metrics),
|
|
||||||
"pipeline": {
|
|
||||||
"avg_ms": avg_total,
|
|
||||||
"min_ms": min_total,
|
|
||||||
"max_ms": max_total,
|
|
||||||
},
|
|
||||||
"stages": stage_stats,
|
|
||||||
}
|
|
||||||
|
|
||||||
def reset_metrics(self) -> None:
|
|
||||||
"""Reset collected metrics."""
|
|
||||||
self._frame_metrics.clear()
|
|
||||||
self._current_frame_number = 0
|
|
||||||
|
|
||||||
def get_frame_times(self) -> list[float]:
|
|
||||||
"""Get historical frame times for sparklines/charts."""
|
|
||||||
return [f.total_ms for f in self._frame_metrics]
|
|
||||||
|
|
||||||
|
|
||||||
class PipelineRunner:
|
|
||||||
"""High-level pipeline runner with animation support."""
|
|
||||||
|
|
||||||
def __init__(
|
|
||||||
self,
|
|
||||||
pipeline: Pipeline,
|
|
||||||
params: PipelineParams | None = None,
|
|
||||||
):
|
|
||||||
self.pipeline = pipeline
|
|
||||||
self.params = params or PipelineParams()
|
|
||||||
self._running = False
|
|
||||||
|
|
||||||
def start(self) -> bool:
|
|
||||||
"""Start the pipeline."""
|
|
||||||
self._running = True
|
|
||||||
return self.pipeline.initialize()
|
|
||||||
|
|
||||||
def step(self, input_data: Any | None = None) -> Any:
|
|
||||||
"""Execute one pipeline step."""
|
|
||||||
self.params.frame_number += 1
|
|
||||||
self.pipeline.context.params = self.params
|
|
||||||
result = self.pipeline.execute(input_data)
|
|
||||||
return result.data if result.success else None
|
|
||||||
|
|
||||||
def stop(self) -> None:
|
|
||||||
"""Stop and clean up the pipeline."""
|
|
||||||
self._running = False
|
|
||||||
self.pipeline.cleanup()
|
|
||||||
|
|
||||||
@property
|
|
||||||
def is_running(self) -> bool:
|
|
||||||
"""Check if runner is active."""
|
|
||||||
return self._running
|
|
||||||
|
|
||||||
|
|
||||||
def create_pipeline_from_params(params: PipelineParams) -> Pipeline:
|
|
||||||
"""Create a pipeline from PipelineParams."""
|
|
||||||
config = PipelineConfig(
|
|
||||||
source=params.source,
|
|
||||||
display=params.display,
|
|
||||||
camera=params.camera_mode,
|
|
||||||
effects=params.effect_order,
|
|
||||||
)
|
|
||||||
return Pipeline(config=config)
|
|
||||||
|
|
||||||
|
|
||||||
def create_default_pipeline() -> Pipeline:
|
|
||||||
"""Create a default pipeline with all standard components."""
|
|
||||||
from engine.data_sources.sources import HeadlinesDataSource
|
|
||||||
from engine.pipeline.adapters import (
|
|
||||||
DataSourceStage,
|
|
||||||
SourceItemsToBufferStage,
|
|
||||||
)
|
|
||||||
|
|
||||||
pipeline = Pipeline()
|
|
||||||
|
|
||||||
# Add source stage (wrapped as Stage)
|
|
||||||
source = HeadlinesDataSource()
|
|
||||||
pipeline.add_stage("source", DataSourceStage(source, name="headlines"))
|
|
||||||
|
|
||||||
# Add render stage to convert items to text buffer
|
|
||||||
pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))
|
|
||||||
|
|
||||||
# Add display stage
|
|
||||||
display = StageRegistry.create("display", "terminal")
|
|
||||||
if display:
|
|
||||||
pipeline.add_stage("display", display)
|
|
||||||
|
|
||||||
return pipeline.build()
|
|
||||||
@@ -1,306 +0,0 @@
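The `execute()` loop above first partitions enabled stages into regular stages (which pass their output downstream) and overlay stages (which are sorted by `render_order` and composited last). A toy sketch of that partitioning, with `FakeStage` and `partition` as hypothetical stand-ins for the engine classes:

```python
from dataclasses import dataclass


@dataclass
class FakeStage:
    name: str
    is_overlay: bool = False
    render_order: int = 0


def partition(stages: list[FakeStage]) -> tuple[list[FakeStage], list[FakeStage]]:
    overlays: list[FakeStage] = []
    regular: list[FakeStage] = []
    for s in stages:
        (overlays if s.is_overlay else regular).append(s)
    # Higher render_order executes later, so it ends up "on top"
    overlays.sort(key=lambda s: s.render_order)
    return regular, overlays


regular, overlays = partition([
    FakeStage("source"),
    FakeStage("hud", is_overlay=True, render_order=10),
    FakeStage("border", is_overlay=True, render_order=5),
    FakeStage("render"),
])
print([s.name for s in regular])   # regular stages keep dependency order
print([s.name for s in overlays])  # overlays re-sorted by render_order
```

Regular stages keep their dependency order; only overlays are re-sorted, which matches the `overlay_stages.sort(key=lambda x: x[0])` call in `execute()`.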
"""
Pipeline core - Unified Stage abstraction and PipelineContext.

This module provides the foundation for a clean, dependency-managed pipeline:
- Stage: Base class for all pipeline components (sources, effects, displays, cameras)
- PipelineContext: Dependency injection context for runtime data exchange
- Capability system: Explicit capability declarations with duck-typing support
- DataType: PureData-style inlet/outlet typing for validation
"""

from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    from engine.pipeline.params import PipelineParams


class DataType(Enum):
    """PureData-style data types for inlet/outlet validation.

    Each type represents a specific data format that flows through the pipeline.
    This enables compile-time-like validation of connections.

    Examples:
        SOURCE_ITEMS: List[SourceItem] - raw items from sources
        ITEM_TUPLES: List[tuple] - (title, source, timestamp) tuples
        TEXT_BUFFER: List[str] - rendered ANSI buffer for display
        RAW_TEXT: str - raw text strings
        PIL_IMAGE: PIL Image object
    """

    SOURCE_ITEMS = auto()  # List[SourceItem] - from DataSource
    ITEM_TUPLES = auto()  # List[tuple] - (title, source, ts)
    TEXT_BUFFER = auto()  # List[str] - ANSI buffer
    RAW_TEXT = auto()  # str - raw text
    PIL_IMAGE = auto()  # PIL Image object
    ANY = auto()  # Accepts any type
    NONE = auto()  # No data (terminator)


@dataclass
class StageConfig:
    """Configuration for a single stage."""

    name: str
    category: str
    enabled: bool = True
    optional: bool = False
    params: dict[str, Any] = field(default_factory=dict)


class Stage(ABC):
    """Abstract base class for all pipeline stages.

    A Stage is a single component in the rendering pipeline. Stages can be:
    - Sources: Data providers (headlines, poetry, pipeline viz)
    - Effects: Post-processors (noise, fade, glitch, hud)
    - Displays: Output backends (terminal, pygame, websocket)
    - Cameras: Viewport controllers (vertical, horizontal, omni)
    - Overlays: UI elements that compose on top (HUD)

    Stages declare:
    - capabilities: What they provide to other stages
    - dependencies: What they need from other stages
    - stage_type: Category of stage (source, effect, overlay, display)
    - render_order: Execution order within category
    - is_overlay: If True, output is composited on top, not passed downstream

    Duck-typing is supported: any class with the required methods can act as a Stage.
    """

    name: str
    category: str  # "source", "effect", "overlay", "display", "camera"
    optional: bool = False  # If True, pipeline continues even if stage fails

    @property
    def stage_type(self) -> str:
        """Category of stage for ordering.

        Valid values: "source", "effect", "overlay", "display", "camera"
        Defaults to category for backwards compatibility.
        """
        return self.category

    @property
    def render_order(self) -> int:
        """Execution order within stage_type group.

        Higher values execute later. Useful for ordering overlays
        or effects that need specific execution order.
        """
        return 0

    @property
    def is_overlay(self) -> bool:
        """If True, this stage's output is composited on top of the buffer.

        Overlay stages don't pass their output to the next stage.
        Instead, their output is layered on top of the final buffer.
        Use this for HUD, status displays, and similar UI elements.
        """
        return False

    @property
    def inlet_types(self) -> set[DataType]:
        """Return set of data types this stage accepts.

        PureData-style inlet typing. If the connected upstream stage's
        outlet_type is not in this set, the pipeline will raise an error.

        Examples:
            - Source stages: {DataType.NONE} (no input needed)
            - Transform stages: {DataType.ITEM_TUPLES, DataType.TEXT_BUFFER}
            - Display stages: {DataType.TEXT_BUFFER}
        """
        return {DataType.ANY}

    @property
    def outlet_types(self) -> set[DataType]:
        """Return set of data types this stage produces.

        PureData-style outlet typing. Downstream stages must accept
        this type in their inlet_types.

        Examples:
            - Source stages: {DataType.SOURCE_ITEMS}
            - Transform stages: {DataType.TEXT_BUFFER}
            - Display stages: {DataType.NONE} (consumes data)
        """
        return {DataType.ANY}

    @property
    def capabilities(self) -> set[str]:
        """Return set of capabilities this stage provides.

        Examples:
            - "source.headlines"
            - "effect.noise"
            - "display.output"
            - "camera"
        """
        return {f"{self.category}.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        """Return set of capability names this stage needs.

        Examples:
            - {"display.output"}
            - {"source.headlines"}
            - {"camera"}
        """
        return set()

    def init(self, ctx: "PipelineContext") -> bool:
        """Initialize stage with pipeline context.

        Args:
            ctx: PipelineContext for accessing services

        Returns:
            True if initialization succeeded, False otherwise
        """
        return True

    @abstractmethod
    def process(self, data: Any, ctx: "PipelineContext") -> Any:
        """Process input data and return output.

        Args:
            data: Input data from previous stage (or initial data for first stage)
            ctx: PipelineContext for accessing services and state

        Returns:
            Processed data for next stage
        """
        ...

    def cleanup(self) -> None:  # noqa: B027
        """Clean up resources when pipeline shuts down."""
        pass

    def get_config(self) -> StageConfig:
        """Return current configuration of this stage."""
        return StageConfig(
            name=self.name,
            category=self.category,
            optional=self.optional,
        )

    def set_enabled(self, enabled: bool) -> None:
        """Enable or disable this stage."""
        self._enabled = enabled  # type: ignore[attr-defined]

    def is_enabled(self) -> bool:
        """Check if stage is enabled."""
        return getattr(self, "_enabled", True)


@dataclass
class StageResult:
    """Result of stage processing, including success/failure info."""

    success: bool
    data: Any
    error: str | None = None
    stage_name: str = ""


class PipelineContext:
    """Dependency injection context passed through the pipeline.

    Provides:
    - services: Named services (display, config, event_bus, etc.)
    - state: Runtime state shared between stages
    - params: PipelineParams for animation-driven config

    Services can be injected at construction time or lazily resolved.
    """

    def __init__(
        self,
        services: dict[str, Any] | None = None,
        initial_state: dict[str, Any] | None = None,
    ):
        self.services: dict[str, Any] = services or {}
        self.state: dict[str, Any] = initial_state or {}
        self._params: PipelineParams | None = None

        # Lazy resolvers for common services
        self._lazy_resolvers: dict[str, Callable[[], Any]] = {
            "config": self._resolve_config,
            "event_bus": self._resolve_event_bus,
        }

    def _resolve_config(self) -> Any:
        from engine.config import get_config

        return get_config()

    def _resolve_event_bus(self) -> Any:
        from engine.eventbus import get_event_bus

        return get_event_bus()

    def get(self, key: str, default: Any = None) -> Any:
        """Get a service or state value by key.

        First checks services, then state, then lazy resolution.
        """
        if key in self.services:
            return self.services[key]
        if key in self.state:
            return self.state[key]
        if key in self._lazy_resolvers:
            try:
                return self._lazy_resolvers[key]()
            except Exception:
                return default
        return default

    def set(self, key: str, value: Any) -> None:
        """Set a service or state value."""
        self.services[key] = value

    def set_state(self, key: str, value: Any) -> None:
        """Set a runtime state value."""
        self.state[key] = value

    def get_state(self, key: str, default: Any = None) -> Any:
        """Get a runtime state value."""
        return self.state.get(key, default)

    @property
    def params(self) -> "PipelineParams | None":
        """Get current pipeline params (for animation)."""
        return self._params

    @params.setter
    def params(self, value: "PipelineParams") -> None:
        """Set pipeline params (from animation controller)."""
        self._params = value

    def has_capability(self, capability: str) -> bool:
        """Check if a capability is available."""
        return capability in self.services or capability in self._lazy_resolvers


class StageError(Exception):
    """Raised when a stage fails to process."""

    def __init__(self, stage_name: str, message: str, is_optional: bool = False):
        self.stage_name = stage_name
        self.message = message
        self.is_optional = is_optional
        super().__init__(f"Stage '{stage_name}' failed: {message}")


def create_stage_error(
    stage_name: str, error: Exception, is_optional: bool = False
) -> StageError:
    """Helper to create a StageError from an exception."""
    return StageError(stage_name, str(error), is_optional)
@@ -1,145 +0,0 @@
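The `PipelineContext.get()` method above resolves a key in a fixed order: injected services first, then shared state, then lazy resolvers. A simplified, self-contained sketch of that lookup order (`MiniContext` is a hypothetical stand-in, not the engine class, and its `config` resolver returns dummy data):

```python
from collections.abc import Callable
from typing import Any


class MiniContext:
    """Toy version of PipelineContext's service/state/lazy lookup chain."""

    def __init__(self) -> None:
        self.services: dict[str, Any] = {}
        self.state: dict[str, Any] = {}
        # Hypothetical lazy resolver; the real context resolves config/event_bus
        self._lazy: dict[str, Callable[[], Any]] = {"config": lambda: {"fps": 60}}

    def get(self, key: str, default: Any = None) -> Any:
        if key in self.services:       # 1. explicit services win
            return self.services[key]
        if key in self.state:          # 2. then shared runtime state
            return self.state[key]
        if key in self._lazy:          # 3. then lazy resolution, guarded
            try:
                return self._lazy[key]()
            except Exception:
                return default
        return default                 # 4. finally the caller's default


ctx = MiniContext()
ctx.services["display"] = "terminal"
ctx.state["metrics"] = {"avg_ms": 1.2}
print(ctx.get("display"))          # found in services
print(ctx.get("metrics"))          # falls through to state
print(ctx.get("config"))           # lazily resolved
print(ctx.get("missing", "n/a"))   # default
```

Swallowing resolver exceptions (step 3) mirrors the real `get()`: a broken lazy service degrades to the default instead of crashing a stage mid-frame.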
"""
Pipeline parameters - Runtime configuration layer for animation control.

PipelineParams is the target for AnimationController - animation events
modify these params, which the pipeline then applies to its stages.
"""

from dataclasses import dataclass, field
from typing import Any


@dataclass
class PipelineParams:
    """Runtime configuration for the pipeline.

    This is the canonical config object that AnimationController modifies.
    Stages read from these params to adjust their behavior.
    """

    # Source config
    source: str = "headlines"
    source_refresh_interval: float = 60.0

    # Display config
    display: str = "terminal"
    border: bool = False

    # Camera config
    camera_mode: str = "vertical"
    camera_speed: float = 1.0
    camera_x: int = 0  # For horizontal scrolling

    # Effect config
    effect_order: list[str] = field(
        default_factory=lambda: ["noise", "fade", "glitch", "firehose"]
    )
    effect_enabled: dict[str, bool] = field(default_factory=dict)
    effect_intensity: dict[str, float] = field(default_factory=dict)

    # Animation-driven state (set by AnimationController)
    pulse: float = 0.0
    current_effect: str | None = None
    path_progress: float = 0.0

    # Viewport
    viewport_width: int = 80
    viewport_height: int = 24

    # Firehose
    firehose_enabled: bool = False

    # Runtime state
    frame_number: int = 0
    fps: float = 60.0

    def get_effect_config(self, name: str) -> tuple[bool, float]:
        """Get (enabled, intensity) for an effect."""
        enabled = self.effect_enabled.get(name, True)
        intensity = self.effect_intensity.get(name, 1.0)
        return enabled, intensity

    def set_effect_config(self, name: str, enabled: bool, intensity: float) -> None:
        """Set effect configuration."""
        self.effect_enabled[name] = enabled
        self.effect_intensity[name] = intensity

    def is_effect_enabled(self, name: str) -> bool:
        """Check if an effect is enabled."""
        if name not in self.effect_enabled:
            return True  # Default to enabled
        return self.effect_enabled.get(name, True)

    def get_effect_intensity(self, name: str) -> float:
        """Get effect intensity (0.0 to 1.0)."""
        return self.effect_intensity.get(name, 1.0)

    def to_dict(self) -> dict[str, Any]:
        """Convert to dictionary for serialization."""
        return {
            "source": self.source,
            "display": self.display,
            "camera_mode": self.camera_mode,
            "camera_speed": self.camera_speed,
            "effect_order": self.effect_order,
            "effect_enabled": self.effect_enabled.copy(),
            "effect_intensity": self.effect_intensity.copy(),
            "pulse": self.pulse,
            "current_effect": self.current_effect,
            "viewport_width": self.viewport_width,
            "viewport_height": self.viewport_height,
            "firehose_enabled": self.firehose_enabled,
        }

    @classmethod
    def from_dict(cls, data: dict[str, Any]) -> "PipelineParams":
        """Create from dictionary."""
        params = cls()
        for key, value in data.items():
            if hasattr(params, key):
                setattr(params, key, value)
        return params

    def copy(self) -> "PipelineParams":
        """Create a copy of this params object."""
        params = PipelineParams()
        params.source = self.source
        params.display = self.display
        params.camera_mode = self.camera_mode
        params.camera_speed = self.camera_speed
        params.camera_x = self.camera_x
        params.effect_order = self.effect_order.copy()
        params.effect_enabled = self.effect_enabled.copy()
        params.effect_intensity = self.effect_intensity.copy()
        params.pulse = self.pulse
        params.current_effect = self.current_effect
        params.path_progress = self.path_progress
        params.viewport_width = self.viewport_width
        params.viewport_height = self.viewport_height
        params.firehose_enabled = self.firehose_enabled
        params.frame_number = self.frame_number
        params.fps = self.fps
        return params


# Default params for different modes
DEFAULT_HEADLINE_PARAMS = PipelineParams(
    source="headlines",
    display="terminal",
    camera_mode="vertical",
    effect_order=["noise", "fade", "glitch", "firehose"],
)

DEFAULT_PYGAME_PARAMS = PipelineParams(
    source="headlines",
    display="pygame",
    camera_mode="vertical",
    effect_order=["noise", "fade", "glitch", "firehose"],
)

DEFAULT_PIPELINE_PARAMS = PipelineParams(
    source="pipeline",
    display="pygame",
    camera_mode="trace",
    effect_order=[],  # No effects for pipeline viz
)
@@ -1,300 +0,0 @@
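The effect-config accessors above share one convention: an effect is enabled with intensity 1.0 unless explicitly overridden in `effect_enabled` / `effect_intensity`. A minimal stand-in (`MiniParams` is hypothetical, not the engine's `PipelineParams`) showing those default semantics:

```python
from dataclasses import dataclass, field


@dataclass
class MiniParams:
    """Toy sketch of PipelineParams' effect-config defaults."""

    effect_enabled: dict[str, bool] = field(default_factory=dict)
    effect_intensity: dict[str, float] = field(default_factory=dict)

    def get_effect_config(self, name: str) -> tuple[bool, float]:
        # Absent keys fall back to (enabled=True, intensity=1.0)
        return (
            self.effect_enabled.get(name, True),
            self.effect_intensity.get(name, 1.0),
        )


p = MiniParams()
p.effect_enabled["glitch"] = False   # explicitly disabled
p.effect_intensity["noise"] = 0.25   # explicitly dimmed
print(p.get_effect_config("noise"))   # enabled by default, custom intensity
print(p.get_effect_config("glitch"))  # disabled, default intensity
print(p.get_effect_config("fade"))    # untouched: both defaults apply
```

Because the two dicts are consulted independently, an effect can be dimmed without being listed in `effect_enabled`, which is what lets `AnimationController` modulate intensity alone.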
"""
Pipeline introspection demo controller - 3-phase animation system.

Phase 1: Toggle each effect on/off one at a time (3s each, 1s gap)
Phase 2: LFO drives intensity default → max → min → default for each effect
Phase 3: All effects with shared LFO driving full waveform

This controller manages the animation and updates the pipeline accordingly.
"""

import time
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any

from engine.effects import get_registry
from engine.sensors.oscillator import OscillatorSensor


class DemoPhase(Enum):
    """The three phases of the pipeline introspection demo."""

    PHASE_1_TOGGLE = auto()  # Toggle each effect on/off
    PHASE_2_LFO = auto()  # LFO drives intensity up/down
    PHASE_3_SHARED_LFO = auto()  # All effects with shared LFO


@dataclass
class PhaseState:
    """State for a single phase of the demo."""

    phase: DemoPhase
    start_time: float
    current_effect_index: int = 0
    effect_start_time: float = 0.0
    lfo_phase: float = 0.0  # 0.0 to 1.0


@dataclass
class DemoConfig:
    """Configuration for the demo animation."""

    effect_cycle_duration: float = 3.0  # seconds per effect
    gap_duration: float = 1.0  # seconds between effects
    lfo_duration: float = (
        4.0  # seconds for full LFO cycle (default → max → min → default)
    )
    phase_2_effect_duration: float = 4.0  # seconds per effect in phase 2
    phase_3_lfo_duration: float = 6.0  # seconds for full waveform in phase 3


class PipelineIntrospectionDemo:
    """Controller for the 3-phase pipeline introspection demo.

    Manages effect toggling and LFO modulation across the pipeline.
    """

    def __init__(
        self,
        pipeline: Any,
        effect_names: list[str] | None = None,
        config: DemoConfig | None = None,
    ):
        self._pipeline = pipeline
        self._config = config or DemoConfig()
        self._effect_names = effect_names or ["noise", "fade", "glitch", "firehose"]
        self._phase = DemoPhase.PHASE_1_TOGGLE
        self._phase_state = PhaseState(
            phase=DemoPhase.PHASE_1_TOGGLE,
            start_time=time.time(),
        )
        self._shared_oscillator: OscillatorSensor | None = None
        self._frame = 0

        # Register shared oscillator for phase 3
        self._shared_oscillator = OscillatorSensor(
            name="demo-lfo",
            waveform="sine",
            frequency=1.0 / self._config.phase_3_lfo_duration,
        )

    @property
    def phase(self) -> DemoPhase:
        return self._phase

    @property
    def phase_display(self) -> str:
        """Get a human-readable phase description."""
        phase_num = {
            DemoPhase.PHASE_1_TOGGLE: 1,
            DemoPhase.PHASE_2_LFO: 2,
            DemoPhase.PHASE_3_SHARED_LFO: 3,
        }
        return f"Phase {phase_num[self._phase]}"

    @property
    def effect_names(self) -> list[str]:
        return self._effect_names

    @property
    def shared_oscillator(self) -> OscillatorSensor | None:
        return self._shared_oscillator

    def update(self) -> dict[str, Any]:
        """Update the demo state and return current parameters.

        Returns:
            dict with current effect settings for the pipeline
        """
        self._frame += 1
        current_time = time.time()
        elapsed = current_time - self._phase_state.start_time

        # Phase transition logic
        phase_duration = self._get_phase_duration()
        if elapsed >= phase_duration:
            self._advance_phase()

        # Update based on current phase
        if self._phase == DemoPhase.PHASE_1_TOGGLE:
            return self._update_phase_1(current_time)
        elif self._phase == DemoPhase.PHASE_2_LFO:
            return self._update_phase_2(current_time)
        else:
            return self._update_phase_3(current_time)

    def _get_phase_duration(self) -> float:
        """Get duration of current phase in seconds."""
        if self._phase == DemoPhase.PHASE_1_TOGGLE:
|
|
||||||
# Duration = (effect_time + gap) * num_effects + final_gap
|
|
||||||
return (
|
|
||||||
self._config.effect_cycle_duration + self._config.gap_duration
|
|
||||||
) * len(self._effect_names) + self._config.gap_duration
|
|
||||||
elif self._phase == DemoPhase.PHASE_2_LFO:
|
|
||||||
return self._config.phase_2_effect_duration * len(self._effect_names)
|
|
||||||
else:
|
|
||||||
# Phase 3 runs indefinitely
|
|
||||||
return float("inf")
|
|
||||||
|
|
||||||
def _advance_phase(self) -> None:
|
|
||||||
"""Advance to the next phase."""
|
|
||||||
if self._phase == DemoPhase.PHASE_1_TOGGLE:
|
|
||||||
self._phase = DemoPhase.PHASE_2_LFO
|
|
||||||
elif self._phase == DemoPhase.PHASE_2_LFO:
|
|
||||||
self._phase = DemoPhase.PHASE_3_SHARED_LFO
|
|
||||||
# Start the shared oscillator
|
|
||||||
if self._shared_oscillator:
|
|
||||||
self._shared_oscillator.start()
|
|
||||||
else:
|
|
||||||
# Phase 3 loops indefinitely - reset for demo replay after long time
|
|
||||||
self._phase = DemoPhase.PHASE_1_TOGGLE
|
|
||||||
|
|
||||||
self._phase_state = PhaseState(
|
|
||||||
phase=self._phase,
|
|
||||||
start_time=time.time(),
|
|
||||||
)
|
|
||||||
|
|
||||||
def _update_phase_1(self, current_time: float) -> dict[str, Any]:
|
|
||||||
"""Phase 1: Toggle each effect on/off one at a time."""
|
|
||||||
effect_time = current_time - self._phase_state.effect_start_time
|
|
||||||
|
|
||||||
# Check if we should move to next effect
|
|
||||||
cycle_time = self._config.effect_cycle_duration + self._config.gap_duration
|
|
||||||
effect_index = int((current_time - self._phase_state.start_time) / cycle_time)
|
|
||||||
|
|
||||||
# Clamp to valid range
|
|
||||||
if effect_index >= len(self._effect_names):
|
|
||||||
effect_index = len(self._effect_names) - 1
|
|
||||||
|
|
||||||
# Calculate current effect state
|
|
||||||
in_gap = effect_time >= self._config.effect_cycle_duration
|
|
||||||
|
|
||||||
# Build effect states
|
|
||||||
effect_states: dict[str, dict[str, Any]] = {}
|
|
||||||
for i, name in enumerate(self._effect_names):
|
|
||||||
if i < effect_index:
|
|
||||||
# Past effects - leave at default
|
|
||||||
effect_states[name] = {"enabled": False, "intensity": 0.5}
|
|
||||||
elif i == effect_index:
|
|
||||||
# Current effect - toggle on/off
|
|
||||||
if in_gap:
|
|
||||||
effect_states[name] = {"enabled": False, "intensity": 0.5}
|
|
||||||
else:
|
|
||||||
effect_states[name] = {"enabled": True, "intensity": 1.0}
|
|
||||||
else:
|
|
||||||
# Future effects - off
|
|
||||||
effect_states[name] = {"enabled": False, "intensity": 0.5}
|
|
||||||
|
|
||||||
# Apply to effect registry
|
|
||||||
self._apply_effect_states(effect_states)
|
|
||||||
|
|
||||||
return {
|
|
||||||
"phase": "PHASE_1_TOGGLE",
|
|
||||||
"phase_display": self.phase_display,
|
|
||||||
"current_effect": self._effect_names[effect_index]
|
|
||||||
if effect_index < len(self._effect_names)
|
|
||||||
else None,
|
|
||||||
"effect_states": effect_states,
|
|
||||||
"frame": self._frame,
|
|
||||||
}
|
|
||||||
|
|
||||||
def _update_phase_2(self, current_time: float) -> dict[str, Any]:
|
|
||||||
"""Phase 2: LFO drives intensity default → max → min → default."""
|
|
||||||
elapsed = current_time - self._phase_state.start_time
|
|
||||||
effect_index = int(elapsed / self._config.phase_2_effect_duration)
|
|
||||||
effect_index = min(effect_index, len(self._effect_names) - 1)
|
|
||||||
|
|
||||||
# Calculate LFO position (0 → 1 → 0)
|
|
||||||
effect_elapsed = elapsed % self._config.phase_2_effect_duration
|
|
||||||
lfo_position = effect_elapsed / self._config.phase_2_effect_duration
|
|
||||||
|
|
||||||
# LFO: 0 → 1 → 0 (triangle wave)
|
|
||||||
if lfo_position < 0.5:
|
|
||||||
lfo_value = lfo_position * 2 # 0 → 1
|
|
||||||
else:
|
|
||||||
lfo_value = 2 - lfo_position * 2 # 1 → 0
|
|
||||||
|
|
||||||
# Map to intensity: 0.3 (default) → 1.0 (max) → 0.0 (min) → 0.3 (default)
|
|
||||||
if lfo_position < 0.25:
|
|
||||||
# 0.3 → 1.0
|
|
||||||
intensity = 0.3 + (lfo_position / 0.25) * 0.7
|
|
||||||
elif lfo_position < 0.75:
|
|
||||||
# 1.0 → 0.0
|
|
||||||
intensity = 1.0 - ((lfo_position - 0.25) / 0.5) * 1.0
|
|
||||||
else:
|
|
||||||
# 0.0 → 0.3
|
|
||||||
intensity = ((lfo_position - 0.75) / 0.25) * 0.3
|
|
||||||
|
|
||||||
# Build effect states
|
|
||||||
effect_states: dict[str, dict[str, Any]] = {}
|
|
||||||
for i, name in enumerate(self._effect_names):
|
|
||||||
if i < effect_index:
|
|
||||||
# Past effects - default
|
|
||||||
effect_states[name] = {"enabled": True, "intensity": 0.5}
|
|
||||||
elif i == effect_index:
|
|
||||||
# Current effect - LFO modulated
|
|
||||||
effect_states[name] = {"enabled": True, "intensity": intensity}
|
|
||||||
else:
|
|
||||||
# Future effects - off
|
|
||||||
effect_states[name] = {"enabled": False, "intensity": 0.5}
|
|
||||||
|
|
||||||
# Apply to effect registry
|
|
||||||
self._apply_effect_states(effect_states)
|
|
||||||
|
|
||||||
return {
|
|
||||||
"phase": "PHASE_2_LFO",
|
|
||||||
"phase_display": self.phase_display,
|
|
||||||
"current_effect": self._effect_names[effect_index],
|
|
||||||
"lfo_value": lfo_value,
|
|
||||||
"intensity": intensity,
|
|
||||||
"effect_states": effect_states,
|
|
||||||
"frame": self._frame,
|
|
||||||
}
|
|
||||||
|
|
||||||
def _update_phase_3(self, current_time: float) -> dict[str, Any]:
|
|
||||||
"""Phase 3: All effects with shared LFO driving full waveform."""
|
|
||||||
# Read shared oscillator
|
|
||||||
lfo_value = 0.5 # Default
|
|
||||||
if self._shared_oscillator:
|
|
||||||
sensor_val = self._shared_oscillator.read()
|
|
||||||
if sensor_val:
|
|
||||||
lfo_value = sensor_val.value
|
|
||||||
|
|
||||||
# All effects enabled with shared LFO
|
|
||||||
effect_states: dict[str, dict[str, Any]] = {}
|
|
||||||
for name in self._effect_names:
|
|
||||||
effect_states[name] = {"enabled": True, "intensity": lfo_value}
|
|
||||||
|
|
||||||
# Apply to effect registry
|
|
||||||
self._apply_effect_states(effect_states)
|
|
||||||
|
|
||||||
return {
|
|
||||||
"phase": "PHASE_3_SHARED_LFO",
|
|
||||||
"phase_display": self.phase_display,
|
|
||||||
"lfo_value": lfo_value,
|
|
||||||
"effect_states": effect_states,
|
|
||||||
"frame": self._frame,
|
|
||||||
}
|
|
||||||
|
|
||||||
def _apply_effect_states(self, effect_states: dict[str, dict[str, Any]]) -> None:
|
|
||||||
"""Apply effect states to the effect registry."""
|
|
||||||
try:
|
|
||||||
registry = get_registry()
|
|
||||||
for name, state in effect_states.items():
|
|
||||||
effect = registry.get(name)
|
|
||||||
if effect:
|
|
||||||
effect.config.enabled = state["enabled"]
|
|
||||||
effect.config.intensity = state["intensity"]
|
|
||||||
except Exception:
|
|
||||||
pass # Silently fail if registry not available
|
|
||||||
|
|
||||||
def cleanup(self) -> None:
|
|
||||||
"""Clean up resources."""
|
|
||||||
if self._shared_oscillator:
|
|
||||||
self._shared_oscillator.stop()
|
|
||||||
|
|
||||||
# Reset all effects to default
|
|
||||||
self._apply_effect_states(
|
|
||||||
{name: {"enabled": False, "intensity": 0.5} for name in self._effect_names}
|
|
||||||
)
|
|
||||||
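The piecewise intensity mapping in `_update_phase_2` can be sketched as a standalone pure function, which makes the breakpoints easy to check. The function name `phase_2_intensity` is illustrative only, not part of the module; the constants match the mapping in the deleted code (0.3 default, 1.0 max, 0.0 min).

```python
def phase_2_intensity(lfo_position: float) -> float:
    """Map a normalized LFO position in [0, 1) to an effect intensity.

    Traverses 0.3 (default) → 1.0 (max) → 0.0 (min) → 0.3 (default),
    mirroring the piecewise mapping in _update_phase_2.
    """
    if lfo_position < 0.25:
        return 0.3 + (lfo_position / 0.25) * 0.7  # 0.3 → 1.0
    elif lfo_position < 0.75:
        return 1.0 - ((lfo_position - 0.25) / 0.5)  # 1.0 → 0.0
    else:
        return ((lfo_position - 0.75) / 0.25) * 0.3  # 0.0 → 0.3


print(phase_2_intensity(0.0))   # 0.3
print(phase_2_intensity(0.25))  # 1.0
print(phase_2_intensity(0.75))  # 0.0
```

Because `lfo_position` comes from a modulo over the cycle duration, it never quite reaches 1.0, so the curve closes back toward 0.3 without a discontinuity.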
@@ -1,280 +0,0 @@
"""
Preset loader - Loads presets from TOML files.

Supports:
- Built-in presets.toml in the package
- User overrides in ~/.config/mainline/presets.toml
- Local override in ./presets.toml
- Fallback DEFAULT_PRESET when loading fails
"""

import os
from pathlib import Path
from typing import Any

import tomllib

DEFAULT_PRESET: dict[str, Any] = {
    "description": "Default fallback preset",
    "source": "headlines",
    "display": "terminal",
    "camera": "vertical",
    "effects": [],
    "viewport": {"width": 80, "height": 24},
    "camera_speed": 1.0,
    "firehose_enabled": False,
}


def get_preset_paths() -> list[Path]:
    """Get list of preset file paths in load order (later overrides earlier)."""
    paths = []

    builtin = Path(__file__).parent.parent / "presets.toml"
    if builtin.exists():
        paths.append(builtin)

    user_config = Path(os.path.expanduser("~/.config/mainline/presets.toml"))
    if user_config.exists():
        paths.append(user_config)

    local = Path("presets.toml")
    if local.exists():
        paths.append(local)

    return paths


def load_presets() -> dict[str, Any]:
    """Load all presets, merging from multiple sources."""
    merged: dict[str, Any] = {"presets": {}, "sensors": {}, "effect_configs": {}}

    for path in get_preset_paths():
        try:
            with open(path, "rb") as f:
                data = tomllib.load(f)

            if "presets" in data:
                merged["presets"].update(data["presets"])

            if "sensors" in data:
                merged["sensors"].update(data["sensors"])

            if "effect_configs" in data:
                merged["effect_configs"].update(data["effect_configs"])

        except Exception as e:
            print(f"Warning: Failed to load presets from {path}: {e}")

    return merged


def get_preset(name: str) -> dict[str, Any] | None:
    """Get a preset by name."""
    presets = load_presets()
    return presets.get("presets", {}).get(name)


def list_preset_names() -> list[str]:
    """List all available preset names."""
    presets = load_presets()
    return list(presets.get("presets", {}).keys())


def get_sensor_config(name: str) -> dict[str, Any] | None:
    """Get sensor configuration by name."""
    sensors = load_presets()
    return sensors.get("sensors", {}).get(name)


def get_effect_config(name: str) -> dict[str, Any] | None:
    """Get effect configuration by name."""
    configs = load_presets()
    return configs.get("effect_configs", {}).get(name)


def get_all_effect_configs() -> dict[str, Any]:
    """Get all effect configurations."""
    configs = load_presets()
    return configs.get("effect_configs", {})


def get_preset_or_default(name: str) -> dict[str, Any]:
    """Get a preset by name, or return DEFAULT_PRESET if not found."""
    preset = get_preset(name)
    if preset is not None:
        return preset
    return DEFAULT_PRESET.copy()


def ensure_preset_available(name: str | None) -> dict[str, Any]:
    """Ensure a preset is available, falling back to DEFAULT_PRESET."""
    if name is None:
        return DEFAULT_PRESET.copy()
    return get_preset_or_default(name)


class PresetValidationError(Exception):
    """Raised when preset validation fails."""


def validate_preset(preset: dict[str, Any]) -> list[str]:
    """Validate a preset and return list of errors (empty if valid)."""
    errors: list[str] = []

    required_fields = ["source", "display", "effects"]
    for field in required_fields:
        if field not in preset:
            errors.append(f"Missing required field: {field}")

    if "effects" in preset:
        if not isinstance(preset["effects"], list):
            errors.append("'effects' must be a list")
        else:
            for effect in preset["effects"]:
                if not isinstance(effect, str):
                    errors.append(
                        f"Effect must be string, got {type(effect)}: {effect}"
                    )

    if "viewport" in preset:
        viewport = preset["viewport"]
        if not isinstance(viewport, dict):
            errors.append("'viewport' must be a dict")
        else:
            if "width" in viewport and not isinstance(viewport["width"], int):
                errors.append("'viewport.width' must be an int")
            if "height" in viewport and not isinstance(viewport["height"], int):
                errors.append("'viewport.height' must be an int")

    return errors
def validate_signal_flow(stages: list[dict]) -> list[str]:
    """Validate signal flow based on inlet/outlet types.

    This validates that the preset's stage configuration produces valid
    data flow using the PureData-style type system.

    Args:
        stages: List of stage configs with 'name', 'category', 'inlet_types', 'outlet_types'

    Returns:
        List of errors (empty if valid)
    """
    errors: list[str] = []

    if not stages:
        errors.append("Signal flow is empty")
        return errors

    # Define expected types for each category
    type_map = {
        "source": {"inlet": "NONE", "outlet": "SOURCE_ITEMS"},
        "data": {"inlet": "ANY", "outlet": "SOURCE_ITEMS"},
        "transform": {"inlet": "SOURCE_ITEMS", "outlet": "TEXT_BUFFER"},
        "effect": {"inlet": "TEXT_BUFFER", "outlet": "TEXT_BUFFER"},
        "overlay": {"inlet": "TEXT_BUFFER", "outlet": "TEXT_BUFFER"},
        "camera": {"inlet": "TEXT_BUFFER", "outlet": "TEXT_BUFFER"},
        "display": {"inlet": "TEXT_BUFFER", "outlet": "NONE"},
        "render": {"inlet": "SOURCE_ITEMS", "outlet": "TEXT_BUFFER"},
    }

    # Check stage order and type compatibility
    for i, stage in enumerate(stages):
        category = stage.get("category", "unknown")
        name = stage.get("name", f"stage_{i}")

        if category not in type_map:
            continue  # Skip unknown categories

        expected = type_map[category]

        # Check against previous stage
        if i > 0:
            prev = stages[i - 1]
            prev_category = prev.get("category", "unknown")
            if prev_category in type_map:
                prev_outlet = type_map[prev_category]["outlet"]
                inlet = expected["inlet"]

                # Validate type compatibility
                if inlet != "ANY" and prev_outlet != "ANY" and inlet != prev_outlet:
                    errors.append(
                        f"Type mismatch at '{name}': "
                        f"expects {inlet} but previous stage outputs {prev_outlet}"
                    )

    return errors


def validate_signal_path(stages: list[str]) -> list[str]:
    """Validate signal path for circular dependencies and connectivity.

    Args:
        stages: List of stage names in execution order

    Returns:
        List of errors (empty if valid)
    """
    errors: list[str] = []

    if not stages:
        errors.append("Signal path is empty")
        return errors

    seen: set[str] = set()
    for i, stage in enumerate(stages):
        if stage in seen:
            errors.append(
                f"Circular dependency: '{stage}' appears multiple times at index {i}"
            )
        seen.add(stage)

    return errors


def generate_preset_toml(
    name: str,
    source: str = "headlines",
    display: str = "terminal",
    effects: list[str] | None = None,
    viewport_width: int = 80,
    viewport_height: int = 24,
    camera: str = "vertical",
    camera_speed: float = 1.0,
    firehose_enabled: bool = False,
) -> str:
    """Generate a TOML preset skeleton with default values.

    Args:
        name: Preset name
        source: Data source name
        display: Display backend
        effects: List of effect names
        viewport_width: Viewport width in columns
        viewport_height: Viewport height in rows
        camera: Camera mode
        camera_speed: Camera scroll speed
        firehose_enabled: Enable firehose mode

    Returns:
        TOML string for the preset
    """

    if effects is None:
        effects = ["fade"]

    output = []
    output.append(f"[presets.{name}]")
    output.append(f'description = "Auto-generated preset: {name}"')
    output.append(f'source = "{source}"')
    output.append(f'display = "{display}"')
    output.append(f'camera = "{camera}"')
    output.append(f"effects = {effects}")
    output.append(f"viewport_width = {viewport_width}")
    output.append(f"viewport_height = {viewport_height}")
    output.append(f"camera_speed = {camera_speed}")
    output.append(f"firehose_enabled = {str(firehose_enabled).lower()}")

    return "\n".join(output)
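The core of `validate_signal_flow` is a pairwise inlet/outlet check over adjacent stages: a stage is valid after its predecessor only if its inlet type matches the predecessor's outlet type, with `ANY` as a wildcard. A condensed standalone sketch (the `check` helper and the reduced `type_map` here are illustrative, not the module's API):

```python
# Reduced type table from validate_signal_flow: sources emit SOURCE_ITEMS,
# transforms turn them into TEXT_BUFFER, displays consume TEXT_BUFFER.
type_map = {
    "source": {"inlet": "NONE", "outlet": "SOURCE_ITEMS"},
    "transform": {"inlet": "SOURCE_ITEMS", "outlet": "TEXT_BUFFER"},
    "display": {"inlet": "TEXT_BUFFER", "outlet": "NONE"},
}


def check(categories: list[str]) -> list[str]:
    """Return type-mismatch errors for each adjacent stage pair."""
    errors = []
    for prev, cur in zip(categories, categories[1:]):
        outlet = type_map[prev]["outlet"]
        inlet = type_map[cur]["inlet"]
        if inlet != "ANY" and outlet != "ANY" and inlet != outlet:
            errors.append(f"{prev} -> {cur}: expects {inlet}, got {outlet}")
    return errors


print(check(["source", "transform", "display"]))  # [] (valid chain)
print(check(["source", "display"]))  # one mismatch: SOURCE_ITEMS vs TEXT_BUFFER
```

This is why a display cannot directly follow a source in a preset: some transform/render stage must convert `SOURCE_ITEMS` into a `TEXT_BUFFER` first.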
@@ -1,182 +0,0 @@
"""
Pipeline presets - Pre-configured pipeline configurations.

Provides PipelinePreset as a unified preset system.
Presets can be loaded from TOML files (presets.toml) or defined in code.

Loading order:
1. Built-in presets.toml in the package
2. User config ~/.config/mainline/presets.toml
3. Local ./presets.toml (overrides earlier)
"""

from dataclasses import dataclass, field
from typing import Any

from engine.pipeline.params import PipelineParams


def _load_toml_presets() -> dict[str, Any]:
    """Load presets from TOML file."""
    try:
        from engine.pipeline.preset_loader import load_presets

        return load_presets()
    except Exception:
        return {}


# Pre-load TOML presets
_YAML_PRESETS = _load_toml_presets()


@dataclass
class PipelinePreset:
    """Pre-configured pipeline with stages and animation.

    A PipelinePreset packages:
    - Initial params: Starting configuration
    - Stages: List of stage configurations to create

    This is the new unified preset that works with the Pipeline class.
    """

    name: str
    description: str = ""
    source: str = "headlines"
    display: str = "terminal"
    camera: str = "scroll"
    effects: list[str] = field(default_factory=list)
    border: bool = False

    def to_params(self) -> PipelineParams:
        """Convert to PipelineParams."""
        params = PipelineParams()
        params.source = self.source
        params.display = self.display
        params.border = self.border
        params.camera_mode = self.camera
        params.effect_order = self.effects.copy()
        return params

    @classmethod
    def from_yaml(cls, name: str, data: dict[str, Any]) -> "PipelinePreset":
        """Create a PipelinePreset from YAML data."""
        return cls(
            name=name,
            description=data.get("description", ""),
            source=data.get("source", "headlines"),
            display=data.get("display", "terminal"),
            camera=data.get("camera", "vertical"),
            effects=data.get("effects", []),
            border=data.get("border", False),
        )


# Built-in presets
DEMO_PRESET = PipelinePreset(
    name="demo",
    description="Demo mode with effect cycling and camera modes",
    source="headlines",
    display="pygame",
    camera="scroll",
    effects=["noise", "fade", "glitch", "firehose"],
)

POETRY_PRESET = PipelinePreset(
    name="poetry",
    description="Poetry feed with subtle effects",
    source="poetry",
    display="pygame",
    camera="scroll",
    effects=["fade"],
)

PIPELINE_VIZ_PRESET = PipelinePreset(
    name="pipeline",
    description="Pipeline visualization mode",
    source="pipeline",
    display="terminal",
    camera="trace",
    effects=[],
)

WEBSOCKET_PRESET = PipelinePreset(
    name="websocket",
    description="WebSocket display mode",
    source="headlines",
    display="websocket",
    camera="scroll",
    effects=["noise", "fade", "glitch"],
)

SIXEL_PRESET = PipelinePreset(
    name="sixel",
    description="Sixel graphics display mode",
    source="headlines",
    display="sixel",
    camera="scroll",
    effects=["noise", "fade", "glitch"],
)

FIREHOSE_PRESET = PipelinePreset(
    name="firehose",
    description="High-speed firehose mode",
    source="headlines",
    display="pygame",
    camera="scroll",
    effects=["noise", "fade", "glitch", "firehose"],
)


# Build presets from YAML data
def _build_presets() -> dict[str, PipelinePreset]:
    """Build preset dictionary from all sources."""
    result = {}

    # Add YAML presets
    yaml_presets = _YAML_PRESETS.get("presets", {})
    for name, data in yaml_presets.items():
        result[name] = PipelinePreset.from_yaml(name, data)

    # Add built-in presets as fallback (if not in YAML)
    builtins = {
        "demo": DEMO_PRESET,
        "poetry": POETRY_PRESET,
        "pipeline": PIPELINE_VIZ_PRESET,
        "websocket": WEBSOCKET_PRESET,
        "sixel": SIXEL_PRESET,
        "firehose": FIREHOSE_PRESET,
    }

    for name, preset in builtins.items():
        if name not in result:
            result[name] = preset

    return result


PRESETS: dict[str, PipelinePreset] = _build_presets()


def get_preset(name: str) -> PipelinePreset | None:
    """Get a preset by name."""
    return PRESETS.get(name)


def list_presets() -> list[str]:
    """List all available preset names."""
    return list(PRESETS.keys())


def create_preset_from_params(
    params: PipelineParams, name: str = "custom"
) -> PipelinePreset:
    """Create a preset from PipelineParams."""
    return PipelinePreset(
        name=name,
        source=params.source,
        display=params.display,
        camera=params.camera_mode,
        effects=params.effect_order.copy() if hasattr(params, "effect_order") else [],
    )
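The precedence in `_build_presets()` is worth spelling out: presets parsed from the TOML layers win outright, and the code-defined built-ins only fill in names the TOML did not define. A standalone sketch of that precedence, with placeholder preset dicts standing in for the real `PipelinePreset` objects:

```python
# Hypothetical inputs: one preset came from TOML, two are code built-ins.
toml_presets = {"demo": {"display": "sixel"}}
builtins = {
    "demo": {"display": "pygame"},
    "poetry": {"display": "pygame"},
}

# TOML presets first; built-ins only fill missing names, never override.
result = dict(toml_presets)
for name, preset in builtins.items():
    if name not in result:
        result[name] = preset

print(result["demo"]["display"])  # sixel (TOML wins over the built-in)
print(sorted(result))  # ['demo', 'poetry']
```

The design choice mirrors the loader's layering: user-editable files always take priority over values baked into the package.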
@@ -1,181 +0,0 @@
|
|||||||
"""
|
|
||||||
Stage registry - Unified registration for all pipeline stages.
|
|
||||||
|
|
||||||
Provides a single registry for sources, effects, displays, and cameras.
|
|
||||||
"""
|
|
||||||
|
|
||||||
from __future__ import annotations
|
|
||||||
|
|
||||||
from typing import TYPE_CHECKING, Any, TypeVar
|
|
||||||
|
|
||||||
from engine.pipeline.core import Stage
|
|
||||||
|
|
||||||
if TYPE_CHECKING:
|
|
||||||
from engine.pipeline.core import Stage
|
|
||||||
|
|
||||||
T = TypeVar("T")
|
|
||||||
|
|
||||||
|
|
||||||
class StageRegistry:
|
|
||||||
"""Unified registry for all pipeline stage types."""
|
|
||||||
|
|
||||||
_categories: dict[str, dict[str, type[Any]]] = {}
|
|
||||||
_discovered: bool = False
|
|
||||||
_instances: dict[str, Stage] = {}
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def register(cls, category: str, stage_class: type[Any]) -> None:
|
|
||||||
"""Register a stage class in a category.
|
|
||||||
|
|
||||||
Args:
|
|
||||||
category: Category name (source, effect, display, camera)
|
|
||||||
stage_class: Stage subclass to register
|
|
||||||
"""
|
|
||||||
if category not in cls._categories:
|
|
||||||
cls._categories[category] = {}
|
|
||||||
|
|
||||||
key = getattr(stage_class, "__name__", stage_class.__class__.__name__)
|
|
||||||
cls._categories[category][key] = stage_class
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def get(cls, category: str, name: str) -> type[Any] | None:
|
|
||||||
"""Get a stage class by category and name."""
|
|
||||||
return cls._categories.get(category, {}).get(name)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def list(cls, category: str) -> list[str]:
|
|
||||||
"""List all stage names in a category."""
|
|
||||||
return list(cls._categories.get(category, {}).keys())
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def list_categories(cls) -> list[str]:
|
|
||||||
"""List all registered categories."""
|
|
||||||
return list(cls._categories.keys())
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def create(cls, category: str, name: str, **kwargs) -> Stage | None:
|
|
||||||
"""Create a stage instance by category and name."""
|
|
||||||
stage_class = cls.get(category, name)
|
|
||||||
if stage_class:
|
|
||||||
return stage_class(**kwargs)
|
|
||||||
return None
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def create_instance(cls, stage: Stage | type[Stage], **kwargs) -> Stage:
|
|
||||||
"""Create an instance from a stage class or return as-is."""
|
|
||||||
if isinstance(stage, Stage):
|
|
||||||
return stage
|
|
||||||
if isinstance(stage, type) and issubclass(stage, Stage):
|
|
||||||
return stage(**kwargs)
|
|
||||||
raise TypeError(f"Expected Stage class or instance, got {type(stage)}")
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def register_instance(cls, name: str, stage: Stage) -> None:
|
|
||||||
"""Register a stage instance by name."""
|
|
||||||
cls._instances[name] = stage
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def get_instance(cls, name: str) -> Stage | None:
|
|
||||||
"""Get a registered stage instance by name."""
|
|
||||||
return cls._instances.get(name)
|
|
||||||
|
|
||||||
|
|
||||||
def discover_stages() -> None:
|
|
||||||
"""Auto-discover and register all stage implementations."""
|
|
||||||
if StageRegistry._discovered:
|
|
||||||
return
|
|
||||||
|
|
||||||
# Import and register all stage implementations
|
|
||||||
try:
|
|
||||||
from engine.data_sources.sources import (
|
|
||||||
HeadlinesDataSource,
|
|
||||||
PoetryDataSource,
|
|
||||||
)
|
|
||||||
|
|
||||||
StageRegistry.register("source", HeadlinesDataSource)
|
|
||||||
StageRegistry.register("source", PoetryDataSource)
|
|
||||||
|
|
||||||
StageRegistry._categories["source"]["headlines"] = HeadlinesDataSource
|
|
||||||
StageRegistry._categories["source"]["poetry"] = PoetryDataSource
|
|
||||||
except ImportError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
# Register pipeline introspection source
|
|
||||||
try:
|
|
||||||
from engine.data_sources.pipeline_introspection import (
|
|
||||||
PipelineIntrospectionSource,
|
|
||||||
)
|
|
||||||
|
|
||||||
StageRegistry.register("source", PipelineIntrospectionSource)
|
|
||||||
StageRegistry._categories["source"]["pipeline-inspect"] = (
|
|
||||||
PipelineIntrospectionSource
|
|
||||||
)
|
|
||||||
except ImportError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
try:
|
|
        from engine.effects.types import EffectPlugin  # noqa: F401
    except ImportError:
        pass

    # Register display stages
    _register_display_stages()

    StageRegistry._discovered = True


def _register_display_stages() -> None:
    """Register display backends as stages."""
    try:
        from engine.display import DisplayRegistry
    except ImportError:
        return

    DisplayRegistry.initialize()

    for backend_name in DisplayRegistry.list_backends():
        factory = _DisplayStageFactory(backend_name)
        StageRegistry._categories.setdefault("display", {})[backend_name] = factory


class _DisplayStageFactory:
    """Factory that creates DisplayStage instances for a specific backend."""

    def __init__(self, backend_name: str):
        self._backend_name = backend_name

    def __call__(self):
        from engine.display import DisplayRegistry
        from engine.pipeline.adapters import DisplayStage

        display = DisplayRegistry.create(self._backend_name)
        if display is None:
            raise RuntimeError(
                f"Failed to create display backend: {self._backend_name}"
            )
        return DisplayStage(display, name=self._backend_name)

    @property
    def __name__(self) -> str:
        return self._backend_name.capitalize() + "Stage"


# Convenience functions
def register_source(stage_class: type[Stage]) -> None:
    """Register a source stage."""
    StageRegistry.register("source", stage_class)


def register_effect(stage_class: type[Stage]) -> None:
    """Register an effect stage."""
    StageRegistry.register("effect", stage_class)


def register_display(stage_class: type[Stage]) -> None:
    """Register a display stage."""
    StageRegistry.register("display", stage_class)


def register_camera(stage_class: type[Stage]) -> None:
    """Register a camera stage."""
    StageRegistry.register("camera", stage_class)
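The `_DisplayStageFactory` above registers a callable object (rather than a class) under the `display` category, so the registry can treat classes and per-backend factories uniformly. A minimal self-contained sketch of that pattern, using toy stand-ins (`ToyRegistry`, `ToyDisplayFactory` are illustrative names, not Mainline's actual API):

```python
class ToyRegistry:
    """Toy stand-in for StageRegistry: category -> name -> factory callable."""
    _categories: dict[str, dict[str, object]] = {}

    @classmethod
    def add(cls, category: str, name: str, factory) -> None:
        cls._categories.setdefault(category, {})[name] = factory

    @classmethod
    def create(cls, category: str, name: str):
        # Classes and factory instances are both just callables here.
        return cls._categories[category][name]()


class ToyDisplayFactory:
    """Per-backend factory: remembers the backend name, builds a stage on call."""

    def __init__(self, backend_name: str):
        self._backend_name = backend_name

    def __call__(self):
        return f"stage:{self._backend_name}"

    @property
    def __name__(self) -> str:
        # Mimic a class name so registry listings stay readable.
        return self._backend_name.capitalize() + "Stage"


for backend in ("terminal", "pygame"):
    ToyRegistry.add("display", backend, ToyDisplayFactory(backend))

print(ToyRegistry.create("display", "terminal"))  # stage:terminal
print(ToyDisplayFactory("pygame").__name__)       # PygameStage
```

The `__name__` property is the trick that lets a factory instance pass for a class wherever the registry logs or lists stage names.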
@@ -1,6 +1,7 @@
-"""Block rendering core - Font loading, text rasterization, word-wrap, and headline assembly.
-
-Provides PIL font-based rendering to terminal half-block characters.
-
+"""
+OTF → terminal half-block rendering pipeline.
+Font loading, text rasterization, word-wrap, gradient coloring, headline block assembly.
+Depends on: config, terminal, sources, translate.
 """
 
 import random
@@ -11,50 +12,74 @@ from PIL import Image, ImageDraw, ImageFont
 from engine import config
 from engine.sources import NO_UPPER, SCRIPT_FONTS, SOURCE_LANGS
+from engine.terminal import RST
 from engine.translate import detect_location_language, translate_headline
 
 
-def estimate_block_height(title: str, width: int, fnt=None) -> int:
-    """Estimate rendered block height without full PIL rendering.
-
-    Uses font bbox measurement to count wrapped lines, then computes:
-    height = num_lines * RENDER_H + (num_lines - 1) + 2
-
-    Args:
-        title: Headline text to measure
-        width: Terminal width in characters
-        fnt: Optional PIL font (uses default if None)
-
-    Returns:
-        Estimated height in terminal rows
-    """
-    if fnt is None:
-        fnt = font()
-    text = re.sub(r"\s+", " ", title.upper())
-    words = text.split()
-    lines = 0
-    cur = ""
-    for word in words:
-        test = f"{cur} {word}".strip() if cur else word
-        bbox = fnt.getbbox(test)
-        if bbox:
-            img_h = bbox[3] - bbox[1] + 8
-            pix_h = config.RENDER_H * 2
-            scale = pix_h / max(img_h, 1)
-            term_w = int((bbox[2] - bbox[0] + 8) * scale)
-        else:
-            term_w = 0
-        max_term_w = width - 4 - 4
-        if term_w > max_term_w and cur:
-            lines += 1
-            cur = word
-        else:
-            cur = test
-    if cur:
-        lines += 1
-    if lines == 0:
-        lines = 1
-    return lines * config.RENDER_H + max(0, lines - 1) + 2
+# ─── GRADIENT ─────────────────────────────────────────────
+def _color_codes_to_ansi(color_codes):
+    """Convert a list of 256-color codes to ANSI escape code strings.
+
+    Pattern: first 2 are bold, middle 8 are normal, last 2 are dim.
+
+    Args:
+        color_codes: List of 12 integers (256-color palette codes)
+
+    Returns:
+        List of ANSI escape code strings
+    """
+    if not color_codes or len(color_codes) != 12:
+        # Fallback to default green if invalid
+        return _default_green_gradient()
+
+    result = []
+    for i, code in enumerate(color_codes):
+        if i < 2:
+            # Bold for first 2 (bright leading edge)
+            result.append(f"\033[1;38;5;{code}m")
+        elif i < 10:
+            # Normal for middle 8
+            result.append(f"\033[38;5;{code}m")
+        else:
+            # Dim for last 2 (dark trailing edge)
+            result.append(f"\033[2;38;5;{code}m")
+    return result
+
+
+def _default_green_gradient():
+    """Return the default 12-color green gradient for fallback when no theme is active."""
+    return [
+        "\033[1;38;5;231m",  # white
+        "\033[1;38;5;195m",  # pale cyan-white
+        "\033[38;5;123m",  # bright cyan
+        "\033[38;5;118m",  # bright lime
+        "\033[38;5;82m",  # lime
+        "\033[38;5;46m",  # bright green
+        "\033[38;5;40m",  # green
+        "\033[38;5;34m",  # medium green
+        "\033[38;5;28m",  # dark green
+        "\033[38;5;22m",  # deep green
+        "\033[2;38;5;22m",  # dim deep green
+        "\033[2;38;5;235m",  # near black
+    ]
+
+
+def _default_magenta_gradient():
+    """Return the default 12-color magenta gradient for fallback when no theme is active."""
+    return [
+        "\033[1;38;5;231m",  # white
+        "\033[1;38;5;225m",  # pale pink-white
+        "\033[38;5;219m",  # bright pink
+        "\033[38;5;213m",  # hot pink
+        "\033[38;5;207m",  # magenta
+        "\033[38;5;201m",  # bright magenta
+        "\033[38;5;165m",  # orchid-red
+        "\033[38;5;161m",  # ruby-magenta
+        "\033[38;5;125m",  # dark magenta
+        "\033[38;5;89m",  # deep maroon-magenta
+        "\033[2;38;5;89m",  # dim deep maroon-magenta
+        "\033[2;38;5;235m",  # near black
+    ]
 
 
 # ─── FONT LOADING ─────────────────────────────────────────
@@ -198,22 +223,65 @@ def big_wrap(text, max_w, fnt=None):
     return out
 
 
-# ─── HEADLINE BLOCK ASSEMBLY ─────────────────────────────
-def make_block(title, src, ts, w):
-    """Render a headline into a content block with color.
-
-    Args:
-        title: Headline text to render
-        src: Source identifier (for metadata)
-        ts: Timestamp string (for metadata)
-        w: Width constraint in terminal characters
-
-    Returns:
-        tuple: (content_lines, color_code, meta_row_index)
-        - content_lines: List of rendered text lines
-        - color_code: ANSI color code for display
-        - meta_row_index: Row index of metadata line
-    """
+def lr_gradient(rows, offset=0.0, cols=None):
+    """Color each non-space block character with a shifting left-to-right gradient."""
+    if cols is None:
+        from engine import config
+
+        if config.ACTIVE_THEME:
+            cols = _color_codes_to_ansi(config.ACTIVE_THEME.main_gradient)
+        else:
+            cols = _default_green_gradient()
+    n = len(cols)
+    max_x = max((len(r.rstrip()) for r in rows if r.strip()), default=1)
+    out = []
+    for row in rows:
+        if not row.strip():
+            out.append(row)
+            continue
+        buf = []
+        for x, ch in enumerate(row):
+            if ch == " ":
+                buf.append(" ")
+            else:
+                shifted = (x / max(max_x - 1, 1) + offset) % 1.0
+                idx = min(round(shifted * (n - 1)), n - 1)
+                buf.append(f"{cols[idx]}{ch}{RST}")
+        out.append("".join(buf))
+    return out
+
+
+def lr_gradient_opposite(rows, offset=0.0):
+    """Complementary (opposite wheel) gradient used for queue message panels."""
+    return lr_gradient(rows, offset, _default_magenta_gradient())
+
+
+def msg_gradient(rows, offset):
+    """Apply message (ntfy) gradient using theme complementary colors.
+
+    Returns colored rows using ACTIVE_THEME.message_gradient if available,
+    falling back to default magenta if no theme is set.
+
+    Args:
+        rows: List of text strings to colorize
+        offset: Gradient offset (0.0-1.0) for animation
+
+    Returns:
+        List of rows with ANSI color codes applied
+    """
+    from engine import config
+
+    cols = (
+        _color_codes_to_ansi(config.ACTIVE_THEME.message_gradient)
+        if config.ACTIVE_THEME
+        else _default_magenta_gradient()
+    )
+    return lr_gradient(rows, offset, cols)
+
+
+# ─── HEADLINE BLOCK ASSEMBLY ─────────────────────────────
+def make_block(title, src, ts, w):
+    """Render a headline into a content block with color."""
     target_lang = (
         (SOURCE_LANGS.get(src) or detect_location_language(title))
         if config.MODE == "news"
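The per-character mapping inside `lr_gradient` can be checked in isolation. A minimal sketch of just the `shifted`/`idx` arithmetic, without the ANSI escapes (the function name here is illustrative):

```python
def gradient_index(x: int, max_x: int, offset: float, n: int) -> int:
    """Map column x of a row to a palette index, shifted by an animation offset.

    Mirrors the arithmetic in lr_gradient: normalize the column to [0, 1),
    add the animation offset, wrap, then scale to the palette size.
    """
    shifted = (x / max(max_x - 1, 1) + offset) % 1.0
    return min(round(shifted * (n - 1)), n - 1)


# With a 12-color palette and no offset, the first column maps to the
# bright leading edge and the next-to-last column to the dark trailing edge.
n, max_x = 12, 40
assert gradient_index(0, max_x, 0.0, n) == 0
assert gradient_index(38, max_x, 0.0, n) == 11
# The very last column reaches exactly 1.0, which the modulo wraps to 0 —
# the sweep restarts at white there.
assert gradient_index(39, max_x, 0.0, n) == 0
# A nonzero offset shifts the whole sweep, which is what animates it.
assert gradient_index(0, max_x, 0.25, n) == 3
```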
@@ -1,37 +0,0 @@
-"""Modern block rendering system - OTF font to terminal half-block conversion.
-
-This module provides the core rendering capabilities for big block letters
-and styled text output using PIL fonts and ANSI terminal rendering.
-
-Exports:
-- make_block: Render a headline into a content block with color
-- big_wrap: Word-wrap text and render with OTF font
-- render_line: Render a line of text as terminal rows using half-blocks
-- font_for_lang: Get appropriate font for a language
-- clear_font_cache: Reset cached font objects
-- lr_gradient: Color block characters with left-to-right gradient
-- lr_gradient_opposite: Complementary gradient coloring
-"""
-
-from engine.render.blocks import (
-    big_wrap,
-    clear_font_cache,
-    font_for_lang,
-    list_font_faces,
-    load_font_face,
-    make_block,
-    render_line,
-)
-from engine.render.gradient import lr_gradient, lr_gradient_opposite
-
-__all__ = [
-    "big_wrap",
-    "clear_font_cache",
-    "font_for_lang",
-    "list_font_faces",
-    "load_font_face",
-    "lr_gradient",
-    "lr_gradient_opposite",
-    "make_block",
-    "render_line",
-]
@@ -1,82 +0,0 @@
-"""Gradient coloring for rendered block characters.
-
-Provides left-to-right and complementary gradient effects for terminal display.
-"""
-
-from engine.terminal import RST
-
-# Left → right: white-hot leading edge fades to near-black
-GRAD_COLS = [
-    "\033[1;38;5;231m",  # white
-    "\033[1;38;5;195m",  # pale cyan-white
-    "\033[38;5;123m",  # bright cyan
-    "\033[38;5;118m",  # bright lime
-    "\033[38;5;82m",  # lime
-    "\033[38;5;46m",  # bright green
-    "\033[38;5;40m",  # green
-    "\033[38;5;34m",  # medium green
-    "\033[38;5;28m",  # dark green
-    "\033[38;5;22m",  # deep green
-    "\033[2;38;5;22m",  # dim deep green
-    "\033[2;38;5;235m",  # near black
-]
-
-# Complementary sweep for queue messages (opposite hue family from ticker greens)
-MSG_GRAD_COLS = [
-    "\033[1;38;5;231m",  # white
-    "\033[1;38;5;225m",  # pale pink-white
-    "\033[38;5;219m",  # bright pink
-    "\033[38;5;213m",  # hot pink
-    "\033[38;5;207m",  # magenta
-    "\033[38;5;201m",  # bright magenta
-    "\033[38;5;165m",  # orchid-red
-    "\033[38;5;161m",  # ruby-magenta
-    "\033[38;5;125m",  # dark magenta
-    "\033[38;5;89m",  # deep maroon-magenta
-    "\033[2;38;5;89m",  # dim deep maroon-magenta
-    "\033[2;38;5;235m",  # near black
-]
-
-
-def lr_gradient(rows, offset=0.0, grad_cols=None):
-    """Color each non-space block character with a shifting left-to-right gradient.
-
-    Args:
-        rows: List of text lines with block characters
-        offset: Gradient offset (0.0-1.0) for animation
-        grad_cols: List of ANSI color codes (default: GRAD_COLS)
-
-    Returns:
-        List of lines with gradient coloring applied
-    """
-    cols = grad_cols or GRAD_COLS
-    n = len(cols)
-    max_x = max((len(r.rstrip()) for r in rows if r.strip()), default=1)
-    out = []
-    for row in rows:
-        if not row.strip():
-            out.append(row)
-            continue
-        buf = []
-        for x, ch in enumerate(row):
-            if ch == " ":
-                buf.append(" ")
-            else:
-                shifted = (x / max(max_x - 1, 1) + offset) % 1.0
-                idx = min(round(shifted * (n - 1)), n - 1)
-                buf.append(f"{cols[idx]}{ch}{RST}")
-        out.append("".join(buf))
-    return out
-
-
-def lr_gradient_opposite(rows, offset=0.0):
-    """Complementary (opposite wheel) gradient used for queue message panels.
-
-    Args:
-        rows: List of text lines with block characters
-        offset: Gradient offset (0.0-1.0) for animation
-
-    Returns:
-        List of lines with complementary gradient coloring applied
-    """
-    return lr_gradient(rows, offset, MSG_GRAD_COLS)
141 engine/scroll.py Normal file
@@ -0,0 +1,141 @@
+"""
+Render engine — ticker content, scroll motion, message panel, and firehose overlay.
+Orchestrates viewport, frame timing, and layers.
+"""
+
+import random
+import time
+
+from engine import config
+from engine.display import (
+    Display,
+    TerminalDisplay,
+)
+from engine.display import (
+    get_monitor as _get_display_monitor,
+)
+from engine.frame import calculate_scroll_step
+from engine.layers import (
+    apply_glitch,
+    process_effects,
+    render_firehose,
+    render_message_overlay,
+    render_ticker_zone,
+)
+from engine.viewport import th, tw
+
+USE_EFFECT_CHAIN = True
+
+
+def stream(items, ntfy_poller, mic_monitor, display: Display | None = None):
+    """Main render loop with four layers: message, ticker, scroll motion, firehose."""
+    if display is None:
+        display = TerminalDisplay()
+    random.shuffle(items)
+    pool = list(items)
+    seen = set()
+    queued = 0
+
+    time.sleep(0.5)
+    w, h = tw(), th()
+    display.init(w, h)
+    display.clear()
+    fh = config.FIREHOSE_H if config.FIREHOSE else 0
+    ticker_view_h = h - fh
+    GAP = 3
+    scroll_step_interval = calculate_scroll_step(config.SCROLL_DUR, ticker_view_h)
+
+    active = []
+    scroll_cam = 0
+    ticker_next_y = ticker_view_h
+    noise_cache = {}
+    scroll_motion_accum = 0.0
+    msg_cache = (None, None)
+    frame_number = 0
+
+    while True:
+        if queued >= config.HEADLINE_LIMIT and not active:
+            break
+
+        t0 = time.monotonic()
+        w, h = tw(), th()
+        fh = config.FIREHOSE_H if config.FIREHOSE else 0
+        ticker_view_h = h - fh
+        scroll_step_interval = calculate_scroll_step(config.SCROLL_DUR, ticker_view_h)
+
+        msg = ntfy_poller.get_active_message()
+        msg_overlay, msg_cache = render_message_overlay(msg, w, h, msg_cache)
+
+        buf = []
+        ticker_h = ticker_view_h
+
+        scroll_motion_accum += config.FRAME_DT
+        while scroll_motion_accum >= scroll_step_interval:
+            scroll_motion_accum -= scroll_step_interval
+            scroll_cam += 1
+
+        while (
+            ticker_next_y < scroll_cam + ticker_view_h + 10
+            and queued < config.HEADLINE_LIMIT
+        ):
+            from engine.effects import next_headline
+            from engine.render import make_block
+
+            t, src, ts = next_headline(pool, items, seen)
+            ticker_content, hc, midx = make_block(t, src, ts, w)
+            active.append((ticker_content, hc, ticker_next_y, midx))
+            ticker_next_y += len(ticker_content) + GAP
+            queued += 1
+
+        active = [
+            (c, hc, by, mi) for c, hc, by, mi in active if by + len(c) > scroll_cam
+        ]
+        for k in list(noise_cache):
+            if k < scroll_cam:
+                del noise_cache[k]
+
+        grad_offset = (time.monotonic() * config.GRAD_SPEED) % 1.0
+        ticker_buf_start = len(buf)
+
+        ticker_buf, noise_cache = render_ticker_zone(
+            active, scroll_cam, ticker_h, w, noise_cache, grad_offset
+        )
+        buf.extend(ticker_buf)
+
+        mic_excess = mic_monitor.excess
+        render_start = time.perf_counter()
+
+        if USE_EFFECT_CHAIN:
+            buf = process_effects(
+                buf,
+                w,
+                h,
+                scroll_cam,
+                ticker_h,
+                mic_excess,
+                grad_offset,
+                frame_number,
+                msg is not None,
+                items,
+            )
+        else:
+            buf = apply_glitch(buf, ticker_buf_start, mic_excess, w)
+        firehose_buf = render_firehose(items, w, fh, h)
+        buf.extend(firehose_buf)
+
+        if msg_overlay:
+            buf.extend(msg_overlay)
+
+        render_elapsed = (time.perf_counter() - render_start) * 1000
+        monitor = _get_display_monitor()
+        if monitor:
+            chars = sum(len(line) for line in buf)
+            monitor.record_effect("render", render_elapsed, chars, chars)
+
+        display.show(buf)
+
+        elapsed = time.monotonic() - t0
+        time.sleep(max(0, config.FRAME_DT - elapsed))
+        frame_number += 1

+    display.cleanup()
@@ -1,203 +0,0 @@
-"""
-Sensor framework - PureData-style real-time input system.
-
-Sensors are data sources that emit values over time, similar to how
-PureData objects emit signals. Effects can bind to sensors to modulate
-their parameters dynamically.
-
-Architecture:
-- Sensor: Base class for all sensors (mic, camera, ntfy, OSC, etc.)
-- SensorRegistry: Global registry for sensor discovery
-- SensorStage: Pipeline stage wrapper for sensors
-- Effect param_bindings: Declarative sensor-to-param routing
-
-Example:
-    class GlitchEffect(EffectPlugin):
-        param_bindings = {
-            "intensity": {"sensor": "mic", "transform": "linear"},
-        }
-
-    This binds the mic sensor to the glitch intensity parameter.
-"""
-
-from abc import ABC, abstractmethod
-from dataclasses import dataclass
-from typing import TYPE_CHECKING, Any
-
-if TYPE_CHECKING:
-    from engine.pipeline.core import PipelineContext
-
-
-@dataclass
-class SensorValue:
-    """A sensor reading with metadata."""
-
-    sensor_name: str
-    value: float
-    timestamp: float
-    unit: str = ""
-
-
-class Sensor(ABC):
-    """Abstract base class for sensors.
-
-    Sensors are real-time data sources that emit values. They can be:
-    - Physical: mic, camera, joystick, MIDI, OSC
-    - Virtual: ntfy, timer, random, noise
-
-    Each sensor has a name and emits SensorValue objects.
-    """
-
-    name: str
-    unit: str = ""
-
-    @property
-    def available(self) -> bool:
-        """Whether the sensor is currently available."""
-        return True
-
-    @abstractmethod
-    def read(self) -> SensorValue | None:
-        """Read current sensor value.
-
-        Returns:
-            SensorValue if available, None if sensor is not ready.
-        """
-        ...
-
-    @abstractmethod
-    def start(self) -> bool:
-        """Start the sensor.
-
-        Returns:
-            True if started successfully.
-        """
-        ...
-
-    @abstractmethod
-    def stop(self) -> None:
-        """Stop the sensor and release resources."""
-        ...
-
-
-class SensorRegistry:
-    """Global registry for sensors.
-
-    Provides:
-    - Registration of sensor instances
-    - Lookup by name
-    - Global start/stop
-    """
-
-    _sensors: dict[str, Sensor] = {}
-    _started: bool = False
-
-    @classmethod
-    def register(cls, sensor: Sensor) -> None:
-        """Register a sensor instance."""
-        cls._sensors[sensor.name] = sensor
-
-    @classmethod
-    def get(cls, name: str) -> Sensor | None:
-        """Get a sensor by name."""
-        return cls._sensors.get(name)
-
-    @classmethod
-    def list_sensors(cls) -> list[str]:
-        """List all registered sensor names."""
-        return list(cls._sensors.keys())
-
-    @classmethod
-    def start_all(cls) -> bool:
-        """Start all sensors.
-
-        Returns:
-            True if all sensors started successfully.
-        """
-        if cls._started:
-            return True
-
-        all_started = True
-        for sensor in cls._sensors.values():
-            if sensor.available and not sensor.start():
-                all_started = False
-
-        cls._started = all_started
-        return all_started
-
-    @classmethod
-    def stop_all(cls) -> None:
-        """Stop all sensors."""
-        for sensor in cls._sensors.values():
-            sensor.stop()
-        cls._started = False
-
-    @classmethod
-    def read_all(cls) -> dict[str, float]:
-        """Read all sensor values.
-
-        Returns:
-            Dict mapping sensor name to current value.
-        """
-        result = {}
-        for name, sensor in cls._sensors.items():
-            value = sensor.read()
-            if value:
-                result[name] = value.value
-        return result
-
-
-class SensorStage:
-    """Pipeline stage wrapper for sensors.
-
-    Provides sensor data to the pipeline context.
-    Sensors don't transform data - they inject sensor values into context.
-    """
-
-    def __init__(self, sensor: Sensor, name: str | None = None):
-        self._sensor = sensor
-        self.name = name or sensor.name
-        self.category = "sensor"
-        self.optional = True
-
-    @property
-    def stage_type(self) -> str:
-        return "sensor"
-
-    @property
-    def inlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.ANY}
-
-    @property
-    def outlet_types(self) -> set:
-        from engine.pipeline.core import DataType
-
-        return {DataType.ANY}
-
-    @property
-    def capabilities(self) -> set[str]:
-        return {f"sensor.{self.name}"}
-
-    @property
-    def dependencies(self) -> set[str]:
-        return set()
-
-    def init(self, ctx: "PipelineContext") -> bool:
-        return self._sensor.start()
-
-    def process(self, data: Any, ctx: "PipelineContext") -> Any:
-        value = self._sensor.read()
-        if value:
-            ctx.set_state(f"sensor.{self.name}", value.value)
-            ctx.set_state(f"sensor.{self.name}.full", value)
-        return data
-
-    def cleanup(self) -> None:
-        self._sensor.stop()
-
-
-def create_sensor_stage(sensor: Sensor, name: str | None = None) -> SensorStage:
-    """Create a pipeline stage from a sensor."""
-    return SensorStage(sensor, name)
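The framework removed above routes each sensor's latest reading into pipeline state under a `sensor.<name>` key via `read_all`. A self-contained toy showing that read/registry flow (simplified stand-ins, not the real `Sensor`/`SensorRegistry` API):

```python
import time


class ConstantSensor:
    """Toy sensor: always reports the same value, like a frozen mic level."""

    def __init__(self, name: str, value: float):
        self.name = name
        self._value = value

    def read(self) -> tuple[str, float, float]:
        # (sensor_name, value, timestamp) stands in for SensorValue.
        return (self.name, self._value, time.time())


sensors: dict[str, ConstantSensor] = {}


def register(sensor: ConstantSensor) -> None:
    sensors[sensor.name] = sensor


def read_all() -> dict[str, float]:
    # Mirrors SensorRegistry.read_all: name -> latest numeric value.
    return {name: s.read()[1] for name, s in sensors.items()}


register(ConstantSensor("mic", 3.5))
register(ConstantSensor("osc", 0.25))
print(read_all())  # {'mic': 3.5, 'osc': 0.25}
```

An effect's `param_bindings` entry like `{"sensor": "mic"}` then only needs the name as a lookup key into this dict each frame.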
@@ -1,145 +0,0 @@
-"""
-Mic sensor - audio input as a pipeline sensor.
-
-Self-contained implementation that handles audio input directly,
-with graceful degradation if sounddevice is unavailable.
-"""
-
-import atexit
-import time
-from collections.abc import Callable
-from dataclasses import dataclass
-from datetime import datetime
-from typing import Any
-
-try:
-    import numpy as np
-    import sounddevice as sd
-
-    _HAS_AUDIO = True
-except Exception:
-    np = None  # type: ignore
-    sd = None  # type: ignore
-    _HAS_AUDIO = False
-
-from engine.events import MicLevelEvent
-from engine.sensors import Sensor, SensorRegistry, SensorValue
-
-
-@dataclass
-class AudioConfig:
-    """Configuration for audio input."""
-
-    threshold_db: float = 50.0
-    sample_rate: float = 44100.0
-    block_size: int = 1024
-
-
-class MicSensor(Sensor):
-    """Microphone sensor for pipeline integration.
-
-    Self-contained implementation with graceful degradation.
-    No external dependencies required - works with or without sounddevice.
-    """
-
-    def __init__(self, threshold_db: float = 50.0, name: str = "mic"):
-        self.name = name
-        self.unit = "dB"
-        self._config = AudioConfig(threshold_db=threshold_db)
-        self._db: float = -99.0
-        self._stream: Any = None
-        self._subscribers: list[Callable[[MicLevelEvent], None]] = []
-
-    @property
-    def available(self) -> bool:
-        """Check if audio input is available."""
-        return _HAS_AUDIO and self._stream is not None
-
-    def start(self) -> bool:
-        """Start the microphone stream."""
-        if not _HAS_AUDIO or sd is None:
-            return False
-
-        try:
-            self._stream = sd.InputStream(
-                samplerate=self._config.sample_rate,
-                blocksize=self._config.block_size,
-                channels=1,
-                callback=self._audio_callback,
-            )
-            self._stream.start()
-            atexit.register(self.stop)
-            return True
-        except Exception:
-            return False
-
-    def stop(self) -> None:
-        """Stop the microphone stream."""
-        if self._stream:
-            try:
-                self._stream.stop()
-                self._stream.close()
-            except Exception:
-                pass
-            self._stream = None
-
-    def _audio_callback(self, indata, frames, time_info, status) -> None:
-        """Process audio data from sounddevice."""
-        if not _HAS_AUDIO or np is None:
-            return
-
-        rms = np.sqrt(np.mean(indata**2))
-        if rms > 0:
-            db = 20 * np.log10(rms)
-        else:
-            db = -99.0
-
-        self._db = db
-
-        excess = max(0.0, db - self._config.threshold_db)
-        event = MicLevelEvent(
-            db_level=db, excess_above_threshold=excess, timestamp=datetime.now()
-        )
-        self._emit(event)
-
-    def _emit(self, event: MicLevelEvent) -> None:
-        """Emit event to all subscribers."""
-        for callback in self._subscribers:
-            try:
-                callback(event)
-            except Exception:
-                pass
-
-    def subscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
-        """Subscribe to mic level events."""
-        if callback not in self._subscribers:
-            self._subscribers.append(callback)
-
-    def unsubscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
-        """Unsubscribe from mic level events."""
-        if callback in self._subscribers:
-            self._subscribers.remove(callback)
-
-    def read(self) -> SensorValue | None:
-        """Read current mic level as sensor value."""
-        if not self.available:
-            return None
-
-        excess = max(0.0, self._db - self._config.threshold_db)
-        return SensorValue(
-            sensor_name=self.name,
-            value=excess,
-            timestamp=time.time(),
-            unit=self.unit,
-        )
-
-
-def register_mic_sensor() -> None:
-    """Register the mic sensor with the global registry."""
-    sensor = MicSensor()
-    SensorRegistry.register(sensor)
-
-
-# Auto-register when imported
-register_mic_sensor()
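The `_audio_callback` above converts an RMS amplitude to decibels with `20 * log10(rms)` and reports only the excess above a trigger threshold. The same arithmetic, worked in isolation without numpy or a live stream:

```python
import math


def db_and_excess(rms: float, threshold_db: float = 50.0) -> tuple[float, float]:
    """Level in dB from an RMS amplitude, plus the excess above a threshold.

    Matches the callback's formula: 20 * log10(rms), with -99.0 as the
    silence floor when rms is zero, and excess clamped at 0.
    """
    db = 20 * math.log10(rms) if rms > 0 else -99.0
    return db, max(0.0, db - threshold_db)


# A tenfold increase in amplitude adds exactly 20 dB.
print(db_and_excess(1.0, threshold_db=-10.0))  # (0.0, 10.0)
print(db_and_excess(0.0))                      # (-99.0, 0.0)
```

Note the sensor's `read()` exposes only the excess, so downstream effects see 0.0 until the room gets louder than the threshold.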
@@ -1,161 +0,0 @@
-"""
-Oscillator sensor - Modular synth-style oscillator as a pipeline sensor.
-
-Provides various waveforms that can be:
-1. Self-driving (phase accumulates over time)
-2. Sensor-driven (phase modulated by external sensor)
-
-Built-in waveforms:
-- sine: Pure sine wave
-- square: Square wave (0 to 1)
-- sawtooth: Rising sawtooth (0 to 1, wraps)
-- triangle: Triangle wave (0 to 1 to 0)
-- noise: Random values (0 to 1)
-
-Example usage:
-    osc = OscillatorSensor(waveform="sine", frequency=0.5)
-    # Or driven by mic sensor:
-    osc = OscillatorSensor(waveform="sine", frequency=1.0, input_sensor="mic")
-"""
-
-import math
-import random
-import time
-from enum import Enum
-
-from engine.sensors import Sensor, SensorRegistry, SensorValue
-
-
-class Waveform(Enum):
-    """Built-in oscillator waveforms."""
-
-    SINE = "sine"
-    SQUARE = "square"
-    SAWTOOTH = "sawtooth"
-    TRIANGLE = "triangle"
-    NOISE = "noise"
-
-
-class OscillatorSensor(Sensor):
-    """Oscillator sensor that generates periodic or random values.
-
-    Can run in two modes:
-    - Self-driving: phase accumulates based on frequency
-    - Sensor-driven: phase modulated by external sensor value
-    """
-
-    WAVEFORMS = {
-        "sine": lambda p: (math.sin(2 * math.pi * p) + 1) / 2,
-        "square": lambda p: 1.0 if (p % 1.0) < 0.5 else 0.0,
-        "sawtooth": lambda p: p % 1.0,
-        "triangle": lambda p: 2 * abs(2 * (p % 1.0) - 1) - 1,
-        "noise": lambda _: random.random(),
-    }
-
-    def __init__(
-        self,
-        name: str = "osc",
-        waveform: str = "sine",
-        frequency: float = 1.0,
-        input_sensor: str | None = None,
-        input_scale: float = 1.0,
-    ):
-        """Initialize oscillator sensor.
-
-        Args:
-            name: Sensor name
-            waveform: Waveform type (sine, square, sawtooth, triangle, noise)
-            frequency: Frequency in Hz (self-driving mode)
-            input_sensor: Optional sensor name to drive phase
-            input_scale: Scale factor for input sensor
-        """
-        self.name = name
-        self.unit = ""
-        self._waveform = waveform
-        self._frequency = frequency
-        self._input_sensor = input_sensor
-        self._input_scale = input_scale
-        self._phase = 0.0
-        self._start_time = time.time()
-
-    @property
-    def available(self) -> bool:
-        return True
-
-    @property
-    def waveform(self) -> str:
-        return self._waveform
-
-    @waveform.setter
-    def waveform(self, value: str) -> None:
-        if value not in self.WAVEFORMS:
-            raise ValueError(f"Unknown waveform: {value}")
-        self._waveform = value
-
-    @property
-    def frequency(self) -> float:
-        return self._frequency
-
-    @frequency.setter
-    def frequency(self, value: float) -> None:
-        self._frequency = max(0.0, value)
-
-    def start(self) -> bool:
-        self._phase = 0.0
-        self._start_time = time.time()
-        return True
-
-    def stop(self) -> None:
-        pass
-
-    def _get_input_value(self) -> float:
-        """Get value from input sensor if configured."""
-        if self._input_sensor:
-            from engine.sensors import SensorRegistry
-
-            sensor = SensorRegistry.get(self._input_sensor)
-            if sensor:
-                reading = sensor.read()
-                if reading:
-                    return reading.value * self._input_scale
-        return 0.0
-
-    def read(self) -> SensorValue | None:
-        current_time = time.time()
-        elapsed = current_time - self._start_time
-
-        if self._input_sensor:
-            input_val = self._get_input_value()
-            phase_increment = (self._frequency * elapsed) + input_val
-        else:
-            phase_increment = self._frequency * elapsed
-
-        self._phase += phase_increment
-
-        waveform_fn = self.WAVEFORMS.get(self._waveform)
-        if waveform_fn is None:
-            return None
-
-        value = waveform_fn(self._phase)
-        value = max(0.0, min(1.0, value))
-
-        return SensorValue(
-            sensor_name=self.name,
-            value=value,
-            timestamp=current_time,
-            unit=self.unit,
|
|
||||||
)
|
|
||||||
|
|
||||||
def set_waveform(self, waveform: str) -> None:
|
|
||||||
"""Change waveform at runtime."""
|
|
||||||
self.waveform = waveform
|
|
||||||
|
|
||||||
def set_frequency(self, frequency: float) -> None:
|
|
||||||
"""Change frequency at runtime."""
|
|
||||||
self.frequency = frequency
|
|
||||||
|
|
||||||
|
|
||||||
def register_oscillator_sensor(name: str = "osc", **kwargs) -> None:
|
|
||||||
"""Register an oscillator sensor with the global registry."""
|
|
||||||
sensor = OscillatorSensor(name=name, **kwargs)
|
|
||||||
SensorRegistry.register(sensor)
|
|
||||||
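The WAVEFORMS table maps a normalized phase to a value in [0, 1]. A standalone sketch of the three simplest mappings, copied from the table above and decoupled from the Sensor machinery, shows the shape of each cycle:

```python
import math

# Standalone copy of three waveform mappings for illustration: each
# function takes a phase p (in cycles) and returns a value in [0, 1].
WAVEFORMS = {
    "sine": lambda p: (math.sin(2 * math.pi * p) + 1) / 2,
    "square": lambda p: 1.0 if (p % 1.0) < 0.5 else 0.0,
    "sawtooth": lambda p: p % 1.0,
}

print(WAVEFORMS["sine"](0.25))      # 1.0 (peak of the sine cycle)
print(WAVEFORMS["square"](0.75))    # 0.0 (second half of the square cycle)
print(WAVEFORMS["sawtooth"](1.25))  # 0.25 (phase wraps every cycle)
```

Because phase is taken modulo 1.0 inside each lambda, callers can pass an ever-growing phase and still get a periodic signal.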
@@ -1,114 +0,0 @@
"""
Pipeline metrics sensor - Exposes pipeline performance data as sensor values.

This sensor reads metrics from a Pipeline instance and provides them
as sensor values that can drive effect parameters.

Example:
    sensor = PipelineMetricsSensor(pipeline)
    sensor.read()  # Returns SensorValue with total_ms, fps, etc.
"""

from typing import TYPE_CHECKING

from engine.sensors import Sensor, SensorValue

if TYPE_CHECKING:
    from engine.pipeline.controller import Pipeline


class PipelineMetricsSensor(Sensor):
    """Sensor that reads metrics from a Pipeline instance.

    Provides real-time performance data:
    - total_ms: Total frame time in milliseconds
    - fps: Calculated frames per second
    - stage_timings: Dict of stage name -> duration_ms

    Can be bound to effect parameters for reactive visuals.
    """

    def __init__(self, pipeline: "Pipeline | None" = None, name: str = "pipeline"):
        self._pipeline = pipeline
        self.name = name
        self.unit = "ms"
        self._last_values: dict[str, float] = {
            "total_ms": 0.0,
            "fps": 0.0,
            "avg_ms": 0.0,
            "min_ms": 0.0,
            "max_ms": 0.0,
        }

    @property
    def available(self) -> bool:
        return self._pipeline is not None

    def set_pipeline(self, pipeline: "Pipeline") -> None:
        """Set or update the pipeline to read metrics from."""
        self._pipeline = pipeline

    def read(self) -> SensorValue | None:
        """Read current metrics from the pipeline."""
        if not self._pipeline:
            return None

        try:
            metrics = self._pipeline.get_metrics_summary()
        except Exception:
            return None

        if not metrics or "error" in metrics:
            return None

        self._last_values["total_ms"] = metrics.get("total_ms", 0.0)
        self._last_values["fps"] = metrics.get("fps", 0.0)
        self._last_values["avg_ms"] = metrics.get("avg_ms", 0.0)
        self._last_values["min_ms"] = metrics.get("min_ms", 0.0)
        self._last_values["max_ms"] = metrics.get("max_ms", 0.0)

        # Provide total_ms as primary value (for LFO-style effects)
        return SensorValue(
            sensor_name=self.name,
            value=self._last_values["total_ms"],
            timestamp=0.0,
            unit=self.unit,
        )

    def get_stage_timing(self, stage_name: str) -> float:
        """Get timing for a specific stage."""
        if not self._pipeline:
            return 0.0
        try:
            metrics = self._pipeline.get_metrics_summary()
            stages = metrics.get("stages", {})
            return stages.get(stage_name, {}).get("avg_ms", 0.0)
        except Exception:
            return 0.0

    def get_all_timings(self) -> dict[str, float]:
        """Get all stage timings as a dict."""
        if not self._pipeline:
            return {}
        try:
            metrics = self._pipeline.get_metrics_summary()
            return metrics.get("stages", {})
        except Exception:
            return {}

    def get_frame_history(self) -> list[float]:
        """Get historical frame times for sparklines."""
        if not self._pipeline:
            return []
        try:
            return self._pipeline.get_frame_times()
        except Exception:
            return []

    def start(self) -> bool:
        """Start the sensor (no-op for read-only metrics)."""
        return True

    def stop(self) -> None:
        """Stop the sensor (no-op for read-only metrics)."""
        pass
60 engine/themes.py Normal file
@@ -0,0 +1,60 @@
"""
Theme definitions with color gradients for terminal rendering.

This module is data-only and does not import config or render
to prevent circular dependencies.
"""


class Theme:
    """Represents a color theme with two gradients."""

    def __init__(self, name, main_gradient, message_gradient):
        """Initialize a theme with name and color gradients.

        Args:
            name: Theme identifier string
            main_gradient: List of 12 ANSI 256-color codes for main gradient
            message_gradient: List of 12 ANSI 256-color codes for message gradient
        """
        self.name = name
        self.main_gradient = main_gradient
        self.message_gradient = message_gradient


# ─── GRADIENT DEFINITIONS ─────────────────────────────────────────────────
# Each gradient is 12 ANSI 256-color codes in sequence
# Format: [light...] → [medium...] → [dark...] → [black]

_GREEN_MAIN = [231, 195, 123, 118, 82, 46, 40, 34, 28, 22, 22, 235]
_GREEN_MSG = [231, 225, 219, 213, 207, 201, 165, 161, 125, 89, 89, 235]

_ORANGE_MAIN = [231, 215, 209, 208, 202, 166, 130, 94, 58, 94, 94, 235]
_ORANGE_MSG = [231, 195, 33, 27, 21, 21, 21, 18, 18, 18, 18, 235]

_PURPLE_MAIN = [231, 225, 177, 171, 165, 135, 129, 93, 57, 57, 57, 235]
_PURPLE_MSG = [231, 226, 226, 220, 220, 184, 184, 178, 178, 172, 172, 235]


# ─── THEME REGISTRY ───────────────────────────────────────────────────────

THEME_REGISTRY = {
    "green": Theme("green", _GREEN_MAIN, _GREEN_MSG),
    "orange": Theme("orange", _ORANGE_MAIN, _ORANGE_MSG),
    "purple": Theme("purple", _PURPLE_MAIN, _PURPLE_MSG),
}


def get_theme(theme_id):
    """Retrieve a theme by ID.

    Args:
        theme_id: Theme identifier string

    Returns:
        Theme object matching the ID

    Raises:
        KeyError: If theme_id is not in registry
    """
    return THEME_REGISTRY[theme_id]
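Each gradient entry is an ANSI 256-color code. A hedged sketch of turning codes into terminal escape sequences (the project's render module is not part of this diff, so fg and paint here are illustrative names, not the engine's API):

```python
def fg(code: int) -> str:
    """Build a 256-color foreground escape sequence for one gradient entry."""
    return f"\x1b[38;5;{code}m"

RESET = "\x1b[0m"

def paint(text: str, gradient: list[int]) -> str:
    """Paint each character with successive gradient colors, then reset."""
    return "".join(
        fg(gradient[i % len(gradient)]) + ch for i, ch in enumerate(text)
    ) + RESET

print(repr(fg(46)))  # '\x1b[38;5;46m'
```

Cycling through the 12-entry gradient per character is one plausible use; the actual renderer may map gradient position to depth or age instead.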
BIN fonts/Kapiler.otf Normal file
Binary file not shown.
BIN fonts/Kapiler.ttf Normal file
Binary file not shown.
3 hk.pkl
@@ -22,9 +22,6 @@ hooks {
     prefix = "uv run"
     check = "ruff check engine/ tests/"
   }
-  ["benchmark"] {
-    check = "uv run python -m engine.benchmark --hook --displays null --iterations 20"
-  }
 }
 }
 }
65 mise.toml
@@ -2,39 +2,28 @@
 python = "3.12"
 hk = "latest"
 pkl = "latest"
-uv = "latest"

 [tasks]
 # =====================
-# Core
+# Development
 # =====================

 test = "uv run pytest"
-test-cov = { run = "uv run pytest --cov=engine --cov-report=term-missing", depends = ["sync-all"] }
+test-v = "uv run pytest -v"
+test-cov = "uv run pytest --cov=engine --cov-report=term-missing --cov-report=html"
+test-cov-open = "uv run pytest --cov=engine --cov-report=term-missing --cov-report=html && open htmlcov/index.html"

 lint = "uv run ruff check engine/ mainline.py"
+lint-fix = "uv run ruff check --fix engine/ mainline.py"
 format = "uv run ruff format engine/ mainline.py"

 # =====================
-# Run
+# Runtime
 # =====================

 run = "uv run mainline.py"
-run-pygame = { run = "uv run mainline.py --display pygame", depends = ["sync-all"] }
-run-terminal = { run = "uv run mainline.py --display terminal", depends = ["sync-all"] }
+run-poetry = "uv run mainline.py --poetry"
+run-firehose = "uv run mainline.py --firehose"

-# =====================
-# Presets
-# =====================
-
-run-demo = { run = "uv run mainline.py --preset demo --display pygame", depends = ["sync-all"] }
-
-# =====================
-# Daemon
-# =====================
-
-daemon = "nohup uv run mainline.py > nohup.out 2>&1 &"
-daemon-stop = "pkill -f 'uv run mainline.py' 2>/dev/null || true"
-daemon-restart = "mise run daemon-stop && sleep 2 && mise run daemon"
-
 # =====================
 # Environment
@@ -42,38 +31,22 @@ daemon-restart = "mise run daemon-stop && sleep 2 && mise run daemon"

 sync = "uv sync"
 sync-all = "uv sync --all-extras"
-install = "mise run sync"
-clean = "rm -rf .venv htmlcov .coverage tests/.pytest_cache .mainline_cache_*.json nohup.out"
-clobber = "git clean -fdx && rm -rf .venv htmlcov .coverage tests/.pytest_cache .mainline_cache_*.json nohup.out"
+install = "uv sync"
+install-dev = "uv sync --group dev"
+bootstrap = "uv sync && uv run mainline.py --help"

+clean = "rm -rf .venv htmlcov .coverage tests/.pytest_cache"

 # =====================
-# CI
+# CI/CD
 # =====================

-ci = { run = "mise run topics-init && mise run lint && mise run test-cov", depends = ["topics-init", "lint", "test-cov"] }
-topics-init = "curl -s -d 'init' https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd > /dev/null && curl -s -d 'init' https://ntfy.sh/klubhaus_terminal_mainline_cc_resp > /dev/null && curl -s -d 'init' https://ntfy.sh/klubhaus_terminal_mainline > /dev/null"
+ci = "uv sync --group dev && uv run pytest --cov=engine --cov-report=term-missing --cov-report=xml"
+ci-lint = "uv run ruff check engine/ mainline.py"

 # =====================
-# Hooks
+# Git Hooks (via hk)
 # =====================

 pre-commit = "hk run pre-commit"

-# =====================
-# Diagrams
-# =====================
-
-# Render Mermaid diagrams to ASCII art
-diagram-ascii = "python3 scripts/render-diagrams.py docs/ARCHITECTURE.md"
-
-# Validate Mermaid syntax in docs (check all diagrams parse)
-# Note: classDiagram not supported by mermaid-ascii but works in GitHub/GitLab
-diagram-validate = """
-python3 scripts/validate-diagrams.py
-"""
-
-# Render diagrams and check they match expected output
-diagram-check = "mise run diagram-validate"
-
-[env]
-KAGI_API_KEY = "lOp6AGyX6TUB0kGzAli1BlAx5-VjlIN1OPCPYEXDdQc.FOKLieOa7NgWUUZi4mTZvHmrW2uNnOr8hfgv7jMvRQM"
336 presets.toml
@@ -1,336 +0,0 @@
# Mainline Presets Configuration
# Human- and machine-readable preset definitions
#
# Format: TOML
# Usage: mainline --preset <name>
#
# Built-in presets can be overridden by user presets in:
# - ~/.config/mainline/presets.toml
# - ./presets.toml (local override)

# ============================================
# TEST PRESETS
# ============================================

[presets.test-single-item]
description = "Test: Single item to isolate rendering stage issues"
source = "empty"
display = "terminal"
camera = "feed"
effects = []
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.test-single-item-border]
description = "Test: Single item with border effect only"
source = "empty"
display = "terminal"
camera = "feed"
effects = ["border"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.test-headlines]
description = "Test: Headlines from cache with border effect"
source = "headlines"
display = "terminal"
camera = "feed"
effects = ["border"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.test-headlines-noise]
description = "Test: Headlines from cache with noise effect"
source = "headlines"
display = "terminal"
camera = "feed"
effects = ["noise"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.test-demo-effects]
description = "Test: All demo effects with terminal display"
source = "headlines"
display = "terminal"
camera = "feed"
effects = ["noise", "fade", "firehose"]
camera_speed = 0.3
viewport_width = 80
viewport_height = 24

# ============================================
# DATA SOURCE GALLERY
# ============================================

[presets.gallery-sources]
description = "Gallery: Headlines data source"
source = "headlines"
display = "pygame"
camera = "feed"
effects = []
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-sources-poetry]
description = "Gallery: Poetry data source"
source = "poetry"
display = "pygame"
camera = "feed"
effects = ["fade"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-sources-pipeline]
description = "Gallery: Pipeline introspection"
source = "pipeline-inspect"
display = "pygame"
camera = "scroll"
effects = []
camera_speed = 0.3
viewport_width = 100
viewport_height = 35

[presets.gallery-sources-empty]
description = "Gallery: Empty source (for border tests)"
source = "empty"
display = "terminal"
camera = "feed"
effects = ["border"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

# ============================================
# EFFECT GALLERY
# ============================================

[presets.gallery-effect-noise]
description = "Gallery: Noise effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["noise"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-fade]
description = "Gallery: Fade effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["fade"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-glitch]
description = "Gallery: Glitch effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["glitch"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-firehose]
description = "Gallery: Firehose effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["firehose"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-hud]
description = "Gallery: HUD effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["hud"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-tint]
description = "Gallery: Tint effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["tint"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-border]
description = "Gallery: Border effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["border"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-effect-crop]
description = "Gallery: Crop effect"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["crop"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

# ============================================
# CAMERA GALLERY
# ============================================

[presets.gallery-camera-feed]
description = "Gallery: Feed camera (rapid single-item)"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["noise"]
camera_speed = 1.0
viewport_width = 80
viewport_height = 24

[presets.gallery-camera-scroll]
description = "Gallery: Scroll camera (smooth)"
source = "headlines"
display = "pygame"
camera = "scroll"
effects = ["noise"]
camera_speed = 0.3
viewport_width = 80
viewport_height = 24

[presets.gallery-camera-horizontal]
description = "Gallery: Horizontal camera"
source = "headlines"
display = "pygame"
camera = "horizontal"
effects = ["noise"]
camera_speed = 0.5
viewport_width = 80
viewport_height = 24

[presets.gallery-camera-omni]
description = "Gallery: Omni camera"
source = "headlines"
display = "pygame"
camera = "omni"
effects = ["noise"]
camera_speed = 0.5
viewport_width = 80
viewport_height = 24

[presets.gallery-camera-floating]
description = "Gallery: Floating camera"
source = "headlines"
display = "pygame"
camera = "floating"
effects = ["noise"]
camera_speed = 1.0
viewport_width = 80
viewport_height = 24

[presets.gallery-camera-bounce]
description = "Gallery: Bounce camera"
source = "headlines"
display = "pygame"
camera = "bounce"
effects = ["noise"]
camera_speed = 1.0
viewport_width = 80
viewport_height = 24

# ============================================
# DISPLAY GALLERY
# ============================================

[presets.gallery-display-terminal]
description = "Gallery: Terminal display"
source = "headlines"
display = "terminal"
camera = "feed"
effects = ["noise"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-display-pygame]
description = "Gallery: Pygame display"
source = "headlines"
display = "pygame"
camera = "feed"
effects = ["noise"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-display-websocket]
description = "Gallery: WebSocket display"
source = "headlines"
display = "websocket"
camera = "feed"
effects = ["noise"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

[presets.gallery-display-multi]
description = "Gallery: MultiDisplay (terminal + pygame)"
source = "headlines"
display = "multi:terminal,pygame"
camera = "feed"
effects = ["noise"]
camera_speed = 0.1
viewport_width = 80
viewport_height = 24

# ============================================
# SENSOR CONFIGURATION
# ============================================

[sensors.mic]
enabled = false
threshold_db = 50.0

[sensors.oscillator]
enabled = false
waveform = "sine"
frequency = 1.0

# ============================================
# EFFECT CONFIGURATIONS
# ============================================

[effect_configs.noise]
enabled = true
intensity = 1.0

[effect_configs.fade]
enabled = true
intensity = 1.0

[effect_configs.glitch]
enabled = true
intensity = 0.5

[effect_configs.firehose]
enabled = true
intensity = 1.0

[effect_configs.hud]
enabled = true
intensity = 1.0
@@ -23,7 +23,6 @@ dependencies = [
     "feedparser>=6.0.0",
     "Pillow>=10.0.0",
     "pyright>=1.1.408",
-    "numpy>=1.24.0",
 ]

 [project.optional-dependencies]
@@ -31,21 +30,8 @@ mic = [
     "sounddevice>=0.4.0",
     "numpy>=1.24.0",
 ]
-websocket = [
-    "websockets>=12.0",
-]
-sixel = [
-    "Pillow>=10.0.0",
-]
-pygame = [
-    "pygame>=2.0.0",
-]
-browser = [
-    "playwright>=1.40.0",
-]
 dev = [
     "pytest>=8.0.0",
-    "pytest-benchmark>=4.0.0",
     "pytest-cov>=4.1.0",
     "pytest-mock>=3.12.0",
     "ruff>=0.1.0",
@@ -61,7 +47,6 @@ build-backend = "hatchling.build"
 [dependency-groups]
 dev = [
     "pytest>=8.0.0",
-    "pytest-benchmark>=4.0.0",
     "pytest-cov>=4.1.0",
     "pytest-mock>=3.12.0",
     "ruff>=0.1.0",
@@ -76,12 +61,6 @@ addopts = [
     "--tb=short",
     "-v",
 ]
-markers = [
-    "benchmark: marks tests as performance benchmarks (may be slow)",
-    "e2e: marks tests as end-to-end tests (require network/display)",
-    "integration: marks tests as integration tests (require external services)",
-    "ntfy: marks tests that require ntfy service",
-]
 filterwarnings = [
     "ignore::DeprecationWarning",
 ]
4 requirements-dev.txt Normal file
@@ -0,0 +1,4 @@
pytest>=8.0.0
pytest-cov>=4.1.0
pytest-mock>=3.12.0
ruff>=0.1.0

4 requirements.txt Normal file
@@ -0,0 +1,4 @@
feedparser>=6.0.0
Pillow>=10.0.0
sounddevice>=0.4.0
numpy>=1.24.0
@@ -1,49 +0,0 @@
#!/usr/bin/env python3
"""Render Mermaid diagrams in markdown files to ASCII art."""

import re
import subprocess
import sys


def extract_mermaid_blocks(content: str) -> list[str]:
    """Extract mermaid blocks from markdown."""
    return re.findall(r"```mermaid\n(.*?)\n```", content, re.DOTALL)


def render_diagram(block: str) -> str:
    """Render a single mermaid block to ASCII."""
    result = subprocess.run(
        ["mermaid-ascii", "-f", "-"],
        input=block,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return f"ERROR: {result.stderr}"
    return result.stdout


def main():
    if len(sys.argv) < 2:
        print("Usage: render-diagrams.py <markdown-file>")
        sys.exit(1)

    filename = sys.argv[1]
    content = open(filename).read()
    blocks = extract_mermaid_blocks(content)

    print(f"Found {len(blocks)} mermaid diagram(s) in {filename}")
    print()

    for i, block in enumerate(blocks):
        # Skip if empty
        if not block.strip():
            continue

        print(f"=== Diagram {i + 1} ===")
        print(render_diagram(block))


if __name__ == "__main__":
    main()
@@ -1,64 +0,0 @@
#!/usr/bin/env python3
"""Validate Mermaid diagrams in markdown files."""

import glob
import re
import sys


# Diagram types that are valid in Mermaid
VALID_TYPES = {
    "flowchart",
    "graph",
    "classDiagram",
    "sequenceDiagram",
    "stateDiagram",
    "stateDiagram-v2",
    "erDiagram",
    "gantt",
    "pie",
    "mindmap",
    "journey",
    "gitGraph",
    "requirementDiagram",
}


def extract_mermaid_blocks(content: str) -> list[tuple[int, str]]:
    """Extract mermaid blocks with their positions."""
    blocks = []
    for match in re.finditer(r"```mermaid\n(.*?)\n```", content, re.DOTALL):
        blocks.append((match.start(), match.group(1)))
    return blocks


def validate_block(block: str) -> bool:
    """Check if a mermaid block has a valid diagram type."""
    if not block.strip():
        return True  # Empty block is OK
    first_line = block.strip().split("\n")[0]
    return any(first_line.startswith(t) for t in VALID_TYPES)


def main():
    md_files = glob.glob("docs/*.md")

    errors = []
    for filepath in md_files:
        content = open(filepath).read()
        blocks = extract_mermaid_blocks(content)

        for i, (_, block) in enumerate(blocks):
            if not validate_block(block):
                errors.append(f"{filepath}: invalid diagram type in block {i + 1}")

    if errors:
        for e in errors:
            print(f"ERROR: {e}")
        sys.exit(1)

    print(f"Validated {len(md_files)} markdown files - all OK")


if __name__ == "__main__":
    main()
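The extraction regex used by both scripts can be exercised standalone. A sketch with a hypothetical markdown snippet (SAMPLE is invented for illustration):

```python
import re

# Hypothetical markdown containing one fenced mermaid block.
SAMPLE = "Intro.\n\n```mermaid\nflowchart LR\n    A --> B\n```\n"

# Same pattern as scripts/validate-diagrams.py: non-greedy capture of
# everything between the ```mermaid fence and the closing fence.
blocks = re.findall(r"```mermaid\n(.*?)\n```", SAMPLE, re.DOTALL)
print(len(blocks))               # 1
print(blocks[0].split("\n")[0])  # flowchart LR
```

The captured first line is what validate_block compares against VALID_TYPES.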
@@ -1,36 +0,0 @@
"""
Pytest configuration for mainline.
"""

import pytest


def pytest_configure(config):
    """Configure pytest to skip integration tests by default."""
    config.addinivalue_line(
        "markers",
        "integration: marks tests as integration tests (require external services)",
    )
    config.addinivalue_line("markers", "ntfy: marks tests that require ntfy service")


def pytest_collection_modifyitems(config, items):
    """Skip integration/e2e tests unless explicitly requested with -m."""
    # Get the current marker expression
    marker_expr = config.getoption("-m", default="")

    # If explicitly running integration or e2e, don't skip them
    if marker_expr in ("integration", "e2e", "integration or e2e"):
        return

    # Skip integration tests
    skip_integration = pytest.mark.skip(reason="need -m integration to run")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip_integration)

    # Skip e2e tests by default (they require browser/display)
    skip_e2e = pytest.mark.skip(reason="need -m e2e to run")
    for item in items:
        if "e2e" in item.keywords and "integration" not in item.keywords:
            item.add_marker(skip_e2e)
@@ -1,133 +0,0 @@
"""
End-to-end tests for web client with headless browser.
"""

import os
import socketserver
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

import pytest

CLIENT_DIR = Path(__file__).parent.parent.parent / "client"


class ThreadedHTTPServer(socketserver.ThreadingMixIn, HTTPServer):
    """Threaded HTTP server for handling concurrent requests."""

    daemon_threads = True


@pytest.fixture(scope="module")
def http_server():
    """Start a local HTTP server for the client."""
    os.chdir(CLIENT_DIR)

    handler = SimpleHTTPRequestHandler
    server = ThreadedHTTPServer(("127.0.0.1", 0), handler)
    port = server.server_address[1]

    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()

    yield f"http://127.0.0.1:{port}"

    server.shutdown()


class TestWebClient:
    """Tests for the web client using Playwright."""

    @pytest.fixture(autouse=True)
    def setup_browser(self):
        """Set up browser for tests."""
        pytest.importorskip("playwright")
        from playwright.sync_api import sync_playwright

        self.playwright = sync_playwright().start()
        self.browser = self.playwright.chromium.launch(headless=True)
        self.context = self.browser.new_context()
        self.page = self.context.new_page()

        yield

        self.page.close()
        self.context.close()
        self.browser.close()
        self.playwright.stop()

    def test_client_loads(self, http_server):
        """Web client loads without errors."""
        response = self.page.goto(http_server)
        assert response.status == 200, f"Page load failed with status {response.status}"

        self.page.wait_for_load_state("domcontentloaded")

        content = self.page.content()
        assert "<canvas" in content, "Canvas element not found in page"

        canvas = self.page.locator("#terminal")
        assert canvas.count() > 0, "Canvas not found"

    def test_status_shows_connecting(self, http_server):
        """Status shows connecting initially."""
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        status = self.page.locator("#status")
        assert status.count() > 0, "Status element not found"

    def test_canvas_has_dimensions(self, http_server):
        """Canvas has correct dimensions after load."""
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        canvas = self.page.locator("#terminal")
        assert canvas.count() > 0, "Canvas not found"

    def test_no_console_errors_on_load(self, http_server):
        """No JavaScript errors on page load (websocket errors are expected without server)."""
        js_errors = []

        def handle_console(msg):
            if msg.type == "error":
                text = msg.text
                if "WebSocket" not in text:
                    js_errors.append(text)

        self.page.on("console", handle_console)
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        assert len(js_errors) == 0, f"JavaScript errors: {js_errors}"


class TestWebClientProtocol:
    """Tests for WebSocket protocol handling in client."""

    @pytest.fixture(autouse=True)
    def setup_browser(self):
        """Set up browser for tests."""
        pytest.importorskip("playwright")
        from playwright.sync_api import sync_playwright

        self.playwright = sync_playwright().start()
        self.browser = self.playwright.chromium.launch(headless=True)
        self.context = self.browser.new_context()
        self.page = self.context.new_page()

        yield

        self.page.close()
        self.context.close()
        self.browser.close()
        self.playwright.stop()

    def test_websocket_reconnection(self, http_server):
        """Client attempts reconnection on disconnect."""
        self.page.goto(http_server)
        self.page.wait_for_load_state("domcontentloaded")

        status = self.page.locator("#status")
        assert status.count() > 0, "Status element not found"
@@ -1,31 +0,0 @@
#!/usr/bin/env python3
"""Test script for Kitty graphics display."""

import sys


def test_kitty_simple():
    """Test simple Kitty graphics output with embedded PNG."""
    import base64

    # Minimal 1x1 red pixel PNG (pre-encoded)
    # This is a tiny valid PNG with a red pixel
    png_red_1x1 = (
        b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00"
        b"\x01\x00\x00\x00\x01\x08\x02\x00\x00\x00\x90wS\xde"
        b"\x00\x00\x00\x0cIDATx\x9cc\xf8\xcf\xc0\x00\x00\x00"
        b"\x03\x00\x01\x00\x05\xfe\xd4\x00\x00\x00\x00IEND\xaeB`\x82"
    )

    encoded = base64.b64encode(png_red_1x1).decode("ascii")

    graphic = f"\x1b_Gf=100,t=d,s=1,v=1,c=1,r=1;{encoded}\x1b\\"
    sys.stdout.buffer.write(graphic.encode("utf-8"))
    sys.stdout.flush()

    print("\n[If you see a red dot above, Kitty graphics is working!]")
    print("[If you see nothing or garbage, it's not working]")


if __name__ == "__main__":
    test_kitty_simple()
@@ -1,345 +0,0 @@
"""
Tests for engine/pipeline/adapters.py - Stage adapters for the pipeline.

Tests Stage adapters that bridge existing components to the Stage interface:
- DataSourceStage: Wraps DataSource objects
- DisplayStage: Wraps Display backends
- PassthroughStage: Simple pass-through stage for pre-rendered data
- SourceItemsToBufferStage: Converts SourceItem objects to text buffers
- EffectPluginStage: Wraps effect plugins
"""

from unittest.mock import MagicMock

from engine.data_sources.sources import SourceItem
from engine.pipeline.adapters import (
    DataSourceStage,
    DisplayStage,
    EffectPluginStage,
    PassthroughStage,
    SourceItemsToBufferStage,
)
from engine.pipeline.core import PipelineContext


class TestDataSourceStage:
    """Test DataSourceStage adapter."""

    def test_datasource_stage_name(self):
        """DataSourceStage stores name correctly."""
        mock_source = MagicMock()
        stage = DataSourceStage(mock_source, name="headlines")
        assert stage.name == "headlines"

    def test_datasource_stage_category(self):
        """DataSourceStage has 'source' category."""
        mock_source = MagicMock()
        stage = DataSourceStage(mock_source, name="headlines")
        assert stage.category == "source"

    def test_datasource_stage_capabilities(self):
        """DataSourceStage advertises source capability."""
        mock_source = MagicMock()
        stage = DataSourceStage(mock_source, name="headlines")
        assert "source.headlines" in stage.capabilities

    def test_datasource_stage_dependencies(self):
        """DataSourceStage has no dependencies."""
        mock_source = MagicMock()
        stage = DataSourceStage(mock_source, name="headlines")
        assert stage.dependencies == set()

    def test_datasource_stage_process_calls_get_items(self):
        """DataSourceStage.process() calls source.get_items()."""
        mock_items = [
            SourceItem(content="Item 1", source="headlines", timestamp="12:00"),
        ]
        mock_source = MagicMock()
        mock_source.get_items.return_value = mock_items

        stage = DataSourceStage(mock_source, name="headlines")
        ctx = PipelineContext()
        result = stage.process(None, ctx)

        assert result == mock_items
        mock_source.get_items.assert_called_once()

    def test_datasource_stage_process_fallback_returns_data(self):
        """DataSourceStage.process() returns data if no get_items method."""
        mock_source = MagicMock(spec=[])  # No get_items method
        stage = DataSourceStage(mock_source, name="headlines")
        ctx = PipelineContext()
        test_data = [{"content": "test"}]

        result = stage.process(test_data, ctx)
        assert result == test_data


class TestDisplayStage:
    """Test DisplayStage adapter."""

    def test_display_stage_name(self):
        """DisplayStage stores name correctly."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")
        assert stage.name == "terminal"

    def test_display_stage_category(self):
        """DisplayStage has 'display' category."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")
        assert stage.category == "display"

    def test_display_stage_capabilities(self):
        """DisplayStage advertises display capability."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")
        assert "display.output" in stage.capabilities

    def test_display_stage_dependencies(self):
        """DisplayStage depends on render.output."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")
        assert "render.output" in stage.dependencies

    def test_display_stage_init(self):
        """DisplayStage.init() calls display.init() with dimensions."""
        mock_display = MagicMock()
        mock_display.init.return_value = True
        stage = DisplayStage(mock_display, name="terminal")

        ctx = PipelineContext()
        ctx.params = MagicMock()
        ctx.params.viewport_width = 100
        ctx.params.viewport_height = 30

        result = stage.init(ctx)

        assert result is True
        mock_display.init.assert_called_once_with(100, 30, reuse=False)

    def test_display_stage_init_uses_defaults(self):
        """DisplayStage.init() uses defaults when params missing."""
        mock_display = MagicMock()
        mock_display.init.return_value = True
        stage = DisplayStage(mock_display, name="terminal")

        ctx = PipelineContext()
        ctx.params = None

        result = stage.init(ctx)

        assert result is True
        mock_display.init.assert_called_once_with(80, 24, reuse=False)

    def test_display_stage_process_calls_show(self):
        """DisplayStage.process() calls display.show() with data."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")

        test_buffer = [[["A", "red"] for _ in range(80)] for _ in range(24)]
        ctx = PipelineContext()
        result = stage.process(test_buffer, ctx)

        assert result == test_buffer
        mock_display.show.assert_called_once_with(test_buffer)

    def test_display_stage_process_skips_none_data(self):
        """DisplayStage.process() skips show() if data is None."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")

        ctx = PipelineContext()
        result = stage.process(None, ctx)

        assert result is None
        mock_display.show.assert_not_called()

    def test_display_stage_cleanup(self):
        """DisplayStage.cleanup() calls display.cleanup()."""
        mock_display = MagicMock()
        stage = DisplayStage(mock_display, name="terminal")

        stage.cleanup()

        mock_display.cleanup.assert_called_once()


class TestPassthroughStage:
    """Test PassthroughStage adapter."""

    def test_passthrough_stage_name(self):
        """PassthroughStage stores name correctly."""
        stage = PassthroughStage(name="test")
        assert stage.name == "test"

    def test_passthrough_stage_category(self):
        """PassthroughStage has 'render' category."""
        stage = PassthroughStage()
        assert stage.category == "render"

    def test_passthrough_stage_is_optional(self):
        """PassthroughStage is optional."""
        stage = PassthroughStage()
        assert stage.optional is True

    def test_passthrough_stage_capabilities(self):
        """PassthroughStage advertises render output capability."""
        stage = PassthroughStage()
        assert "render.output" in stage.capabilities

    def test_passthrough_stage_dependencies(self):
        """PassthroughStage depends on source."""
        stage = PassthroughStage()
        assert "source" in stage.dependencies

    def test_passthrough_stage_process_returns_data_unchanged(self):
        """PassthroughStage.process() returns data unchanged."""
        stage = PassthroughStage()
        ctx = PipelineContext()

        test_data = [
            SourceItem(content="Line 1", source="test", timestamp="12:00"),
        ]
        result = stage.process(test_data, ctx)

        assert result == test_data
        assert result is test_data


class TestSourceItemsToBufferStage:
    """Test SourceItemsToBufferStage adapter."""

    def test_source_items_to_buffer_stage_name(self):
        """SourceItemsToBufferStage stores name correctly."""
        stage = SourceItemsToBufferStage(name="custom-name")
        assert stage.name == "custom-name"

    def test_source_items_to_buffer_stage_category(self):
        """SourceItemsToBufferStage has 'render' category."""
        stage = SourceItemsToBufferStage()
        assert stage.category == "render"

    def test_source_items_to_buffer_stage_is_optional(self):
        """SourceItemsToBufferStage is optional."""
        stage = SourceItemsToBufferStage()
        assert stage.optional is True

    def test_source_items_to_buffer_stage_capabilities(self):
        """SourceItemsToBufferStage advertises render output capability."""
        stage = SourceItemsToBufferStage()
        assert "render.output" in stage.capabilities

    def test_source_items_to_buffer_stage_dependencies(self):
        """SourceItemsToBufferStage depends on source."""
        stage = SourceItemsToBufferStage()
        assert "source" in stage.dependencies

    def test_source_items_to_buffer_stage_process_single_line_item(self):
        """SourceItemsToBufferStage converts single-line SourceItem."""
        stage = SourceItemsToBufferStage()
        ctx = PipelineContext()

        items = [
            SourceItem(content="Single line content", source="test", timestamp="12:00"),
        ]
        result = stage.process(items, ctx)

        assert isinstance(result, list)
        assert len(result) >= 1
        # Result should be lines of text
        assert all(isinstance(line, str) for line in result)

    def test_source_items_to_buffer_stage_process_multiline_item(self):
        """SourceItemsToBufferStage splits multiline SourceItem content."""
        stage = SourceItemsToBufferStage()
        ctx = PipelineContext()

        content = "Line 1\nLine 2\nLine 3"
        items = [
            SourceItem(content=content, source="test", timestamp="12:00"),
        ]
        result = stage.process(items, ctx)

        # Should have at least 3 lines
        assert len(result) >= 3
        assert all(isinstance(line, str) for line in result)

    def test_source_items_to_buffer_stage_process_multiple_items(self):
        """SourceItemsToBufferStage handles multiple SourceItems."""
        stage = SourceItemsToBufferStage()
        ctx = PipelineContext()

        items = [
            SourceItem(content="Item 1", source="test", timestamp="12:00"),
            SourceItem(content="Item 2", source="test", timestamp="12:01"),
            SourceItem(content="Item 3", source="test", timestamp="12:02"),
        ]
        result = stage.process(items, ctx)

        # Should have at least 3 lines (one per item, possibly more)
        assert len(result) >= 3
        assert all(isinstance(line, str) for line in result)


class TestEffectPluginStage:
    """Test EffectPluginStage adapter."""

    def test_effect_plugin_stage_name(self):
        """EffectPluginStage stores name correctly."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="blur")
        assert stage.name == "blur"

    def test_effect_plugin_stage_category(self):
        """EffectPluginStage has 'effect' category."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="blur")
        assert stage.category == "effect"

    def test_effect_plugin_stage_is_not_optional(self):
        """EffectPluginStage is required when configured."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="blur")
        assert stage.optional is False

    def test_effect_plugin_stage_capabilities(self):
        """EffectPluginStage advertises effect capability with name."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="blur")
        assert "effect.blur" in stage.capabilities

    def test_effect_plugin_stage_dependencies(self):
        """EffectPluginStage has no static dependencies."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="blur")
        # EffectPluginStage has empty dependencies - they are resolved dynamically
        assert stage.dependencies == set()

    def test_effect_plugin_stage_stage_type(self):
        """EffectPluginStage.stage_type returns effect for non-HUD."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="blur")
        assert stage.stage_type == "effect"

    def test_effect_plugin_stage_hud_special_handling(self):
        """EffectPluginStage has special handling for HUD effect."""
        mock_effect = MagicMock()
        stage = EffectPluginStage(mock_effect, name="hud")
        assert stage.stage_type == "overlay"
        assert stage.is_overlay is True
        assert stage.render_order == 100

    def test_effect_plugin_stage_process(self):
        """EffectPluginStage.process() calls effect.process()."""
        mock_effect = MagicMock()
        mock_effect.process.return_value = "processed_data"

        stage = EffectPluginStage(mock_effect, name="blur")
        ctx = PipelineContext()
        test_buffer = "test_buffer"

        result = stage.process(test_buffer, ctx)

        assert result == "processed_data"
        mock_effect.process.assert_called_once()
@@ -1,205 +0,0 @@
"""
Integration tests for engine/app.py - pipeline orchestration.

Tests the main entry point and pipeline mode initialization.
"""

import sys
from unittest.mock import Mock, patch

import pytest

from engine.app import main, run_pipeline_mode
from engine.pipeline import get_preset


class TestMain:
    """Test main() entry point."""

    def test_main_calls_run_pipeline_mode_with_default_preset(self):
        """main() runs default preset (demo) when no args provided."""
        with patch("engine.app.run_pipeline_mode") as mock_run:
            sys.argv = ["mainline.py"]
            main()
            mock_run.assert_called_once_with("demo")

    def test_main_calls_run_pipeline_mode_with_config_preset(self):
        """main() uses PRESET from config if set."""
        with (
            patch("engine.app.config") as mock_config,
            patch("engine.app.run_pipeline_mode") as mock_run,
        ):
            mock_config.PIPELINE_DIAGRAM = False
            mock_config.PRESET = "gallery-sources"
            mock_config.PIPELINE_MODE = False
            sys.argv = ["mainline.py"]
            main()
            mock_run.assert_called_once_with("gallery-sources")

    def test_main_exits_on_unknown_preset(self):
        """main() exits with error for unknown preset."""
        with (
            patch("engine.app.config") as mock_config,
            patch("engine.app.list_presets", return_value=["demo", "poetry"]),
        ):
            mock_config.PIPELINE_DIAGRAM = False
            mock_config.PRESET = "nonexistent"
            mock_config.PIPELINE_MODE = False
            sys.argv = ["mainline.py"]
            with pytest.raises(SystemExit) as exc_info:
                main()
            assert exc_info.value.code == 1


class TestRunPipelineMode:
    """Test run_pipeline_mode() initialization."""

    def test_run_pipeline_mode_loads_valid_preset(self):
        """run_pipeline_mode() loads a valid preset."""
        preset = get_preset("demo")
        assert preset is not None
        assert preset.name == "demo"
        assert preset.source == "headlines"

    def test_run_pipeline_mode_exits_on_invalid_preset(self):
        """run_pipeline_mode() exits if preset not found."""
        with pytest.raises(SystemExit) as exc_info:
            run_pipeline_mode("invalid-preset-xyz")
        assert exc_info.value.code == 1

    def test_run_pipeline_mode_exits_when_no_content_available(self):
        """run_pipeline_mode() exits if no content can be fetched."""
        with (
            patch("engine.app.load_cache", return_value=None),
            patch("engine.app.fetch_all", return_value=([], None, None)),
            patch("engine.app.effects_plugins"),
            pytest.raises(SystemExit) as exc_info,
        ):
            run_pipeline_mode("demo")
        assert exc_info.value.code == 1

    def test_run_pipeline_mode_uses_cache_over_fetch(self):
        """run_pipeline_mode() uses cached content if available."""
        cached = ["cached_item"]
        with (
            patch("engine.app.load_cache", return_value=cached) as mock_load,
            patch("engine.app.fetch_all") as mock_fetch,
            patch("engine.app.DisplayRegistry.create") as mock_create,
        ):
            mock_display = Mock()
            mock_display.init = Mock()
            mock_display.get_dimensions = Mock(return_value=(80, 24))
            mock_display.is_quit_requested = Mock(return_value=True)
            mock_display.clear_quit_request = Mock()
            mock_display.show = Mock()
            mock_display.cleanup = Mock()
            mock_create.return_value = mock_display

            try:
                run_pipeline_mode("demo")
            except (KeyboardInterrupt, SystemExit):
                pass

            # Verify fetch_all was NOT called (cache was used)
            mock_fetch.assert_not_called()
            mock_load.assert_called_once()

    def test_run_pipeline_mode_creates_display(self):
        """run_pipeline_mode() creates a display backend."""
        with (
            patch("engine.app.load_cache", return_value=["item"]),
            patch("engine.app.DisplayRegistry.create") as mock_create,
        ):
            mock_display = Mock()
            mock_display.init = Mock()
            mock_display.get_dimensions = Mock(return_value=(80, 24))
            mock_display.is_quit_requested = Mock(return_value=True)
            mock_display.clear_quit_request = Mock()
            mock_display.show = Mock()
            mock_display.cleanup = Mock()
            mock_create.return_value = mock_display

            try:
                run_pipeline_mode("gallery-display-terminal")
            except (KeyboardInterrupt, SystemExit):
                pass

            # Verify display was created with 'terminal' (preset display)
            mock_create.assert_called_once_with("terminal")

    def test_run_pipeline_mode_respects_display_cli_flag(self):
        """run_pipeline_mode() uses --display CLI flag if provided."""
        sys.argv = ["mainline.py", "--display", "websocket"]

        with (
            patch("engine.app.load_cache", return_value=["item"]),
            patch("engine.app.DisplayRegistry.create") as mock_create,
        ):
            mock_display = Mock()
            mock_display.init = Mock()
            mock_display.get_dimensions = Mock(return_value=(80, 24))
            mock_display.is_quit_requested = Mock(return_value=True)
            mock_display.clear_quit_request = Mock()
            mock_display.show = Mock()
            mock_display.cleanup = Mock()
            mock_create.return_value = mock_display

            try:
                run_pipeline_mode("demo")
            except (KeyboardInterrupt, SystemExit):
                pass

            # Verify display was created with CLI override
            mock_create.assert_called_once_with("websocket")

    def test_run_pipeline_mode_fetches_poetry_for_poetry_source(self):
        """run_pipeline_mode() fetches poetry for poetry preset."""
        with (
            patch("engine.app.load_cache", return_value=None),
            patch(
                "engine.app.fetch_poetry", return_value=(["poem"], None, None)
            ) as mock_fetch_poetry,
            patch("engine.app.fetch_all") as mock_fetch_all,
            patch("engine.app.DisplayRegistry.create") as mock_create,
        ):
            mock_display = Mock()
            mock_display.init = Mock()
            mock_display.get_dimensions = Mock(return_value=(80, 24))
            mock_display.is_quit_requested = Mock(return_value=True)
            mock_display.clear_quit_request = Mock()
            mock_display.show = Mock()
            mock_display.cleanup = Mock()
            mock_create.return_value = mock_display

            try:
                run_pipeline_mode("poetry")
            except (KeyboardInterrupt, SystemExit):
                pass

            # Verify fetch_poetry was called, not fetch_all
            mock_fetch_poetry.assert_called_once()
            mock_fetch_all.assert_not_called()

    def test_run_pipeline_mode_discovers_effect_plugins(self):
        """run_pipeline_mode() discovers available effect plugins."""
        with (
            patch("engine.app.load_cache", return_value=["item"]),
            patch("engine.app.effects_plugins") as mock_effects,
            patch("engine.app.DisplayRegistry.create") as mock_create,
        ):
            mock_display = Mock()
            mock_display.init = Mock()
            mock_display.get_dimensions = Mock(return_value=(80, 24))
            mock_display.is_quit_requested = Mock(return_value=True)
            mock_display.clear_quit_request = Mock()
            mock_display.show = Mock()
            mock_display.cleanup = Mock()
            mock_create.return_value = mock_display

            try:
                run_pipeline_mode("demo")
            except (KeyboardInterrupt, SystemExit):
                pass

            # Verify effects_plugins.discover_plugins was called
            mock_effects.discover_plugins.assert_called_once()
@@ -1,100 +0,0 @@
"""
Tests for engine.benchmark module - performance regression tests.
"""

from unittest.mock import patch

import pytest

from engine.display import NullDisplay


class TestBenchmarkNullDisplay:
    """Performance tests for NullDisplay - regression tests."""

    @pytest.mark.benchmark
    def test_null_display_minimum_fps(self):
        """NullDisplay should meet minimum performance threshold."""
        import time

        display = NullDisplay()
        display.init(80, 24)
        buffer = ["x" * 80 for _ in range(24)]

        iterations = 1000
        start = time.perf_counter()
        for _ in range(iterations):
            display.show(buffer)
        elapsed = time.perf_counter() - start

        fps = iterations / elapsed
        min_fps = 20000

        assert fps >= min_fps, f"NullDisplay FPS {fps:.0f} below minimum {min_fps}"

    @pytest.mark.benchmark
    def test_effects_minimum_throughput(self):
        """Effects should meet minimum processing throughput."""
        import time

        from engine.effects import EffectContext, get_registry
        from engine.effects.plugins import discover_plugins

        discover_plugins()
        registry = get_registry()
        effect = registry.get("noise")
        assert effect is not None, "Noise effect should be registered"

        buffer = ["x" * 80 for _ in range(24)]
        ctx = EffectContext(
            terminal_width=80,
            terminal_height=24,
            scroll_cam=0,
            ticker_height=20,
            mic_excess=0.0,
            grad_offset=0.0,
            frame_number=0,
            has_message=False,
        )

        iterations = 500
        start = time.perf_counter()
        for _ in range(iterations):
            effect.process(buffer, ctx)
        elapsed = time.perf_counter() - start

        fps = iterations / elapsed
        min_fps = 10000

        assert fps >= min_fps, (
            f"Effect processing FPS {fps:.0f} below minimum {min_fps}"
        )


class TestBenchmarkWebSocketDisplay:
    """Performance tests for WebSocketDisplay."""

    @pytest.mark.benchmark
    def test_websocket_display_minimum_fps(self):
        """WebSocketDisplay should meet minimum performance threshold."""
        import time

        with patch("engine.display.backends.websocket.websockets", None):
            from engine.display import WebSocketDisplay

            display = WebSocketDisplay()
            display.init(80, 24)
            buffer = ["x" * 80 for _ in range(24)]

            iterations = 500
            start = time.perf_counter()
            for _ in range(iterations):
                display.show(buffer)
            elapsed = time.perf_counter() - start

            fps = iterations / elapsed
            min_fps = 10000

            assert fps >= min_fps, (
                f"WebSocketDisplay FPS {fps:.0f} below minimum {min_fps}"
            )
@@ -1,111 +0,0 @@
"""
Tests for BorderEffect.
"""

from engine.effects.plugins.border import BorderEffect
from engine.effects.types import EffectContext


def make_ctx(terminal_width: int = 80, terminal_height: int = 24) -> EffectContext:
    """Create a mock EffectContext."""
    return EffectContext(
        terminal_width=terminal_width,
        terminal_height=terminal_height,
        scroll_cam=0,
        ticker_height=terminal_height,
    )


class TestBorderEffect:
    """Tests for BorderEffect."""

    def test_basic_init(self):
        """BorderEffect initializes with defaults."""
        effect = BorderEffect()
        assert effect.name == "border"
        assert effect.config.enabled is True

    def test_adds_border(self):
        """BorderEffect adds border around content."""
        effect = BorderEffect()
        buf = [
            "Hello World",
            "Test Content",
            "Third Line",
        ]
        ctx = make_ctx(terminal_width=20, terminal_height=10)

        result = effect.process(buf, ctx)

        # Should have top and bottom borders
        assert len(result) >= 3
        # First line should start with border character
        assert result[0][0] in "┌┎┍"
        # Last line should end with border character
        assert result[-1][-1] in "┘┖┚"

    def test_border_with_small_buffer(self):
        """BorderEffect handles small buffer (too small for border)."""
        effect = BorderEffect()
        buf = ["ab"]  # Too small for proper border
        ctx = make_ctx(terminal_width=10, terminal_height=5)

        result = effect.process(buf, ctx)

        # Should still try to add border but result may differ
        # At minimum should have output
        assert len(result) >= 1

    def test_metrics_in_border(self):
        """BorderEffect includes FPS and frame time in border."""
        effect = BorderEffect()
        buf = ["x" * 10] * 5
        ctx = make_ctx(terminal_width=20, terminal_height=10)

        # Add metrics to context
        ctx.set_state(
            "metrics",
            {
                "avg_ms": 16.5,
                "frame_count": 100,
                "fps": 60.0,
            },
        )

        result = effect.process(buf, ctx)

        # Check for FPS in top border
        top_line = result[0]
        assert "FPS" in top_line or "60" in top_line

        # Check for frame time in bottom border
        bottom_line = result[-1]
        assert "ms" in bottom_line or "16" in bottom_line

    def test_no_metrics(self):
        """BorderEffect works without metrics."""
        effect = BorderEffect()
        buf = ["content"] * 5
        ctx = make_ctx(terminal_width=20, terminal_height=10)
        # No metrics set

        result = effect.process(buf, ctx)

        # Should still have border characters
        assert len(result) >= 3
        assert result[0][0] in "┌┎┍"

    def test_crops_before_bordering(self):
        """BorderEffect crops input before adding border."""
        effect = BorderEffect()
        buf = ["x" * 100] * 50  # Very large buffer
        ctx = make_ctx(terminal_width=20, terminal_height=10)

        result = effect.process(buf, ctx)

        # Should be cropped to fit, then bordered
        # Result should be <= terminal_height with border
        assert len(result) <= ctx.terminal_height
        # Each line should be <= terminal_width
        for line in result:
            assert len(line) <= ctx.terminal_width
@@ -1,68 +0,0 @@
from engine.camera import Camera, CameraMode


def test_camera_vertical_default():
    """Test default vertical camera."""
    cam = Camera()
    assert cam.mode == CameraMode.FEED
    assert cam.x == 0
    assert cam.y == 0


def test_camera_vertical_factory():
    """Test vertical factory method."""
    cam = Camera.feed(speed=2.0)
    assert cam.mode == CameraMode.FEED
    assert cam.speed == 2.0


def test_camera_horizontal():
    """Test horizontal camera."""
    cam = Camera.horizontal(speed=1.5)
    assert cam.mode == CameraMode.HORIZONTAL
    cam.update(1.0)
    assert cam.x > 0


def test_camera_omni():
    """Test omnidirectional camera."""
    cam = Camera.omni(speed=1.0)
    assert cam.mode == CameraMode.OMNI
    cam.update(1.0)
    assert cam.x > 0
    assert cam.y > 0


def test_camera_floating():
    """Test floating camera with sinusoidal motion."""
    cam = Camera.floating(speed=1.0)
    assert cam.mode == CameraMode.FLOATING
    y_before = cam.y
    cam.update(0.5)
    y_after = cam.y
    assert y_before != y_after


def test_camera_reset():
    """Test camera reset."""
    cam = Camera.vertical()
    cam.update(1.0)
    assert cam.y > 0
    cam.reset()
    assert cam.x == 0
    assert cam.y == 0


def test_camera_custom_update():
    """Test custom update function."""
    call_count = 0

    def custom_update(camera, dt):
        nonlocal call_count
        call_count += 1
        camera.x += int(10 * dt)

    cam = Camera.custom(custom_update)
    cam.update(1.0)
    assert call_count == 1
    assert cam.x == 10
tests/test_controller.py (new file, 117 lines)
@@ -0,0 +1,117 @@
"""
Tests for engine.controller module.
"""

from unittest.mock import MagicMock, patch

from engine import config
from engine.controller import StreamController


class TestStreamController:
    """Tests for StreamController class."""

    def test_init_default_config(self):
        """StreamController initializes with default config."""
        controller = StreamController()
        assert controller.config is not None
        assert isinstance(controller.config, config.Config)

    def test_init_custom_config(self):
        """StreamController accepts custom config."""
        custom_config = config.Config(headline_limit=500)
        controller = StreamController(config=custom_config)
        assert controller.config.headline_limit == 500

    def test_init_sources_none_by_default(self):
        """Sources are None until initialized."""
        controller = StreamController()
        assert controller.mic is None
        assert controller.ntfy is None

    @patch("engine.controller.MicMonitor")
    @patch("engine.controller.NtfyPoller")
    def test_initialize_sources(self, mock_ntfy, mock_mic):
        """initialize_sources creates mic and ntfy instances."""
        mock_mic_instance = MagicMock()
        mock_mic_instance.available = True
        mock_mic_instance.start.return_value = True
        mock_mic.return_value = mock_mic_instance

        mock_ntfy_instance = MagicMock()
        mock_ntfy_instance.start.return_value = True
        mock_ntfy.return_value = mock_ntfy_instance

        controller = StreamController()
        mic_ok, ntfy_ok = controller.initialize_sources()

        assert mic_ok is True
        assert ntfy_ok is True
        assert controller.mic is not None
        assert controller.ntfy is not None

    @patch("engine.controller.MicMonitor")
    @patch("engine.controller.NtfyPoller")
    def test_initialize_sources_mic_unavailable(self, mock_ntfy, mock_mic):
        """initialize_sources handles unavailable mic."""
        mock_mic_instance = MagicMock()
        mock_mic_instance.available = False
        mock_mic.return_value = mock_mic_instance

        mock_ntfy_instance = MagicMock()
        mock_ntfy_instance.start.return_value = True
        mock_ntfy.return_value = mock_ntfy_instance

        controller = StreamController()
        mic_ok, ntfy_ok = controller.initialize_sources()

        assert mic_ok is False
        assert ntfy_ok is True


class TestStreamControllerCleanup:
    """Tests for StreamController cleanup."""

    @patch("engine.controller.MicMonitor")
    def test_cleanup_stops_mic(self, mock_mic):
        """cleanup stops the microphone if running."""
        mock_mic_instance = MagicMock()
        mock_mic.return_value = mock_mic_instance

        controller = StreamController()
        controller.mic = mock_mic_instance
        controller.cleanup()

        mock_mic_instance.stop.assert_called_once()


class TestStreamControllerWarmup:
    """Tests for StreamController topic warmup."""

    def test_warmup_topics_idempotent(self):
        """warmup_topics can be called multiple times."""
        StreamController._topics_warmed = False

        with patch("urllib.request.urlopen") as mock_urlopen:
            StreamController.warmup_topics()
            StreamController.warmup_topics()

        assert mock_urlopen.call_count >= 3

    def test_warmup_topics_sets_flag(self):
        """warmup_topics sets the warmed flag."""
        StreamController._topics_warmed = False

        with patch("urllib.request.urlopen"):
            StreamController.warmup_topics()

        assert StreamController._topics_warmed is True

    def test_warmup_topics_skips_after_first(self):
        """warmup_topics skips after first call."""
        StreamController._topics_warmed = True

        with patch("urllib.request.urlopen") as mock_urlopen:
            StreamController.warmup_topics()

        mock_urlopen.assert_not_called()
@@ -1,99 +0,0 @@
"""
Tests for CropEffect.
"""

from engine.effects.plugins.crop import CropEffect
from engine.effects.types import EffectContext


def make_ctx(terminal_width: int = 80, terminal_height: int = 24) -> EffectContext:
    """Create a mock EffectContext."""
    return EffectContext(
        terminal_width=terminal_width,
        terminal_height=terminal_height,
        scroll_cam=0,
        ticker_height=terminal_height,
    )


class TestCropEffect:
    """Tests for CropEffect."""

    def test_basic_init(self):
        """CropEffect initializes with defaults."""
        effect = CropEffect()
        assert effect.name == "crop"
        assert effect.config.enabled is True

    def test_crop_wider_buffer(self):
        """CropEffect crops wide buffer to terminal width."""
        effect = CropEffect()
        buf = [
            "This is a very long line that exceeds the terminal width of eighty characters!",
            "Another long line that should also be cropped to fit within the terminal bounds!",
            "Short",
        ]
        ctx = make_ctx(terminal_width=40, terminal_height=10)

        result = effect.process(buf, ctx)

        # Lines should be cropped to 40 chars
        assert len(result[0]) == 40
        assert len(result[1]) == 40
        assert result[2] == "Short" + " " * 35  # padded to width

    def test_crop_taller_buffer(self):
        """CropEffect crops tall buffer to terminal height."""
        effect = CropEffect()
        buf = ["line"] * 30  # 30 lines
        ctx = make_ctx(terminal_width=80, terminal_height=10)

        result = effect.process(buf, ctx)

        # Should be cropped to 10 lines
        assert len(result) == 10

    def test_pad_shorter_lines(self):
        """CropEffect pads lines shorter than width."""
        effect = CropEffect()
        buf = ["short", "medium length", ""]
        ctx = make_ctx(terminal_width=20, terminal_height=5)

        result = effect.process(buf, ctx)

        assert len(result[0]) == 20  # padded
        assert len(result[1]) == 20  # padded
        assert len(result[2]) == 20  # padded (was empty)

    def test_pad_to_height(self):
        """CropEffect pads with empty lines if buffer is too short."""
        effect = CropEffect()
        buf = ["line1", "line2"]
        ctx = make_ctx(terminal_width=20, terminal_height=10)

        result = effect.process(buf, ctx)

        # Should have 10 lines
        assert len(result) == 10
        # Last 8 should be empty padding
        for i in range(2, 10):
            assert result[i] == " " * 20

    def test_empty_buffer(self):
        """CropEffect handles empty buffer."""
        effect = CropEffect()
        ctx = make_ctx()

        result = effect.process([], ctx)

        assert result == []

    def test_uses_context_dimensions(self):
        """CropEffect uses context terminal_width/terminal_height."""
        effect = CropEffect()
        buf = ["x" * 100]
        ctx = make_ctx(terminal_width=50, terminal_height=1)

        result = effect.process(buf, ctx)

        assert len(result[0]) == 50
@@ -1,220 +0,0 @@
"""
Tests for engine/data_sources/sources.py - data source implementations.

Tests HeadlinesDataSource, PoetryDataSource, EmptyDataSource, and the
base DataSource class functionality.
"""

from unittest.mock import patch

import pytest

from engine.data_sources.sources import (
    EmptyDataSource,
    HeadlinesDataSource,
    PoetryDataSource,
    SourceItem,
)


class TestSourceItem:
    """Test SourceItem dataclass."""

    def test_source_item_creation(self):
        """SourceItem can be created with required fields."""
        item = SourceItem(
            content="Test headline",
            source="test_source",
            timestamp="2024-01-01",
        )
        assert item.content == "Test headline"
        assert item.source == "test_source"
        assert item.timestamp == "2024-01-01"
        assert item.metadata is None

    def test_source_item_with_metadata(self):
        """SourceItem can include optional metadata."""
        metadata = {"author": "John", "category": "tech"}
        item = SourceItem(
            content="Test",
            source="test",
            timestamp="2024-01-01",
            metadata=metadata,
        )
        assert item.metadata == metadata


class TestEmptyDataSource:
    """Test EmptyDataSource."""

    def test_empty_source_name(self):
        """EmptyDataSource has correct name."""
        source = EmptyDataSource()
        assert source.name == "empty"

    def test_empty_source_is_not_dynamic(self):
        """EmptyDataSource is static, not dynamic."""
        source = EmptyDataSource()
        assert source.is_dynamic is False

    def test_empty_source_fetch_returns_blank_content(self):
        """EmptyDataSource.fetch() returns blank lines."""
        source = EmptyDataSource(width=80, height=24)
        items = source.fetch()

        assert len(items) == 1
        assert isinstance(items[0], SourceItem)
        assert items[0].source == "empty"
        # Content should be 24 lines of 80 spaces
        lines = items[0].content.split("\n")
        assert len(lines) == 24
        assert all(len(line) == 80 for line in lines)

    def test_empty_source_get_items_caches_result(self):
        """EmptyDataSource.get_items() caches the result."""
        source = EmptyDataSource()
        items1 = source.get_items()
        items2 = source.get_items()
        # Should return same cached items (same object reference)
        assert items1 is items2


class TestHeadlinesDataSource:
    """Test HeadlinesDataSource."""

    def test_headlines_source_name(self):
        """HeadlinesDataSource has correct name."""
        source = HeadlinesDataSource()
        assert source.name == "headlines"

    def test_headlines_source_is_static(self):
        """HeadlinesDataSource is static."""
        source = HeadlinesDataSource()
        assert source.is_dynamic is False

    def test_headlines_fetch_returns_source_items(self):
        """HeadlinesDataSource.fetch() returns SourceItem list."""
        mock_items = [
            ("Test Article 1", "source1", "10:30"),
            ("Test Article 2", "source2", "11:45"),
        ]
        with patch("engine.fetch.fetch_all") as mock_fetch_all:
            mock_fetch_all.return_value = (mock_items, 2, 0)

            source = HeadlinesDataSource()
            items = source.fetch()

            assert len(items) == 2
            assert all(isinstance(item, SourceItem) for item in items)
            assert items[0].content == "Test Article 1"
            assert items[0].source == "source1"
            assert items[0].timestamp == "10:30"

    def test_headlines_fetch_with_empty_feed(self):
        """HeadlinesDataSource handles empty feeds gracefully."""
        with patch("engine.fetch.fetch_all") as mock_fetch_all:
            mock_fetch_all.return_value = ([], 0, 1)

            source = HeadlinesDataSource()
            items = source.fetch()

            # Should return empty list
            assert isinstance(items, list)
            assert len(items) == 0

    def test_headlines_get_items_caches_result(self):
        """HeadlinesDataSource.get_items() caches the result."""
        mock_items = [("Test Article", "source", "12:00")]
        with patch("engine.fetch.fetch_all") as mock_fetch_all:
            mock_fetch_all.return_value = (mock_items, 1, 0)

            source = HeadlinesDataSource()
            items1 = source.get_items()
            items2 = source.get_items()

            # Should only call fetch once (cached)
            assert mock_fetch_all.call_count == 1
            assert items1 is items2

    def test_headlines_refresh_clears_cache(self):
        """HeadlinesDataSource.refresh() clears cache and refetches."""
        mock_items = [("Test Article", "source", "12:00")]
        with patch("engine.fetch.fetch_all") as mock_fetch_all:
            mock_fetch_all.return_value = (mock_items, 1, 0)

            source = HeadlinesDataSource()
            source.get_items()
            source.refresh()
            source.get_items()

            # Should call fetch twice (once for initial, once for refresh)
            assert mock_fetch_all.call_count == 2


class TestPoetryDataSource:
    """Test PoetryDataSource."""

    def test_poetry_source_name(self):
        """PoetryDataSource has correct name."""
        source = PoetryDataSource()
        assert source.name == "poetry"

    def test_poetry_source_is_static(self):
        """PoetryDataSource is static."""
        source = PoetryDataSource()
        assert source.is_dynamic is False

    def test_poetry_fetch_returns_source_items(self):
        """PoetryDataSource.fetch() returns SourceItem list."""
        mock_items = [
            ("Poetry line 1", "Poetry Source 1", ""),
            ("Poetry line 2", "Poetry Source 2", ""),
        ]
        with patch("engine.fetch.fetch_poetry") as mock_fetch_poetry:
            mock_fetch_poetry.return_value = (mock_items, 2, 0)

            source = PoetryDataSource()
            items = source.fetch()

            assert len(items) == 2
            assert all(isinstance(item, SourceItem) for item in items)
            assert items[0].content == "Poetry line 1"
            assert items[0].source == "Poetry Source 1"

    def test_poetry_get_items_caches_result(self):
        """PoetryDataSource.get_items() caches result."""
        mock_items = [("Poetry line", "Poetry Source", "")]
        with patch("engine.fetch.fetch_poetry") as mock_fetch_poetry:
            mock_fetch_poetry.return_value = (mock_items, 1, 0)

            source = PoetryDataSource()
            items1 = source.get_items()
            items2 = source.get_items()

            # Should only fetch once (cached)
            assert mock_fetch_poetry.call_count == 1
            assert items1 is items2


class TestDataSourceInterface:
    """Test DataSource base class interface."""

    def test_data_source_stream_not_implemented(self):
        """DataSource.stream() raises NotImplementedError."""
        source = EmptyDataSource()
        with pytest.raises(NotImplementedError):
            source.stream()

    def test_data_source_is_dynamic_defaults_false(self):
        """DataSource.is_dynamic defaults to False."""
        source = EmptyDataSource()
        assert source.is_dynamic is False

    def test_data_source_refresh_updates_cache(self):
        """DataSource.refresh() updates internal cache."""
        source = EmptyDataSource()
        source.get_items()
        items_refreshed = source.refresh()

        # refresh() should return new items
        assert isinstance(items_refreshed, list)
@@ -2,13 +2,7 @@
|
|||||||
Tests for engine.display module.
|
Tests for engine.display module.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
import sys
|
from engine.display import NullDisplay, TerminalDisplay
|
||||||
from unittest.mock import MagicMock, patch
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
|
|
||||||
from engine.display import DisplayRegistry, NullDisplay, TerminalDisplay, render_border
|
|
||||||
from engine.display.backends.multi import MultiDisplay
|
|
||||||
|
|
||||||
|
|
||||||
class TestDisplayProtocol:
|
class TestDisplayProtocol:
|
||||||
@@ -31,66 +25,6 @@ class TestDisplayProtocol:
|
|||||||
assert hasattr(display, "cleanup")
|
assert hasattr(display, "cleanup")
|
||||||
|
|
||||||
|
|
||||||
class TestDisplayRegistry:
|
|
||||||
"""Tests for DisplayRegistry class."""
|
|
||||||
|
|
||||||
def setup_method(self):
|
|
||||||
"""Reset registry before each test."""
|
|
||||||
DisplayRegistry._backends = {}
|
|
||||||
DisplayRegistry._initialized = False
|
|
||||||
|
|
||||||
def test_register_adds_backend(self):
|
|
||||||
"""register adds a backend to the registry."""
|
|
||||||
DisplayRegistry.register("test", TerminalDisplay)
|
|
||||||
assert DisplayRegistry.get("test") == TerminalDisplay
|
|
||||||
|
|
||||||
def test_register_case_insensitive(self):
|
|
||||||
"""register is case insensitive."""
|
|
||||||
DisplayRegistry.register("TEST", TerminalDisplay)
|
|
||||||
assert DisplayRegistry.get("test") == TerminalDisplay
|
|
||||||
|
|
||||||
def test_get_returns_none_for_unknown(self):
|
|
||||||
"""get returns None for unknown backend."""
|
|
||||||
assert DisplayRegistry.get("unknown") is None
|
|
||||||
|
|
||||||
def test_list_backends_returns_all(self):
|
|
||||||
"""list_backends returns all registered backends."""
|
|
||||||
DisplayRegistry.register("a", TerminalDisplay)
|
|
||||||
DisplayRegistry.register("b", NullDisplay)
|
|
||||||
backends = DisplayRegistry.list_backends()
|
|
||||||
assert "a" in backends
|
|
||||||
assert "b" in backends
|
|
||||||
|
|
||||||
def test_create_returns_instance(self):
|
|
||||||
"""create returns a display instance."""
|
|
||||||
DisplayRegistry.register("test", NullDisplay)
|
|
||||||
display = DisplayRegistry.create("test")
|
|
||||||
assert isinstance(display, NullDisplay)
|
|
||||||
|
|
||||||
def test_create_returns_none_for_unknown(self):
|
|
||||||
"""create returns None for unknown backend."""
|
|
||||||
display = DisplayRegistry.create("unknown")
|
|
||||||
assert display is None
|
|
||||||
|
|
||||||
def test_initialize_registers_defaults(self):
|
|
||||||
"""initialize registers default backends."""
|
|
||||||
DisplayRegistry.initialize()
|
|
||||||
assert DisplayRegistry.get("terminal") == TerminalDisplay
|
|
||||||
assert DisplayRegistry.get("null") == NullDisplay
|
|
||||||
from engine.display.backends.sixel import SixelDisplay
|
|
||||||
from engine.display.backends.websocket import WebSocketDisplay
|
|
||||||
|
|
||||||
assert DisplayRegistry.get("websocket") == WebSocketDisplay
|
|
||||||
assert DisplayRegistry.get("sixel") == SixelDisplay
|
|
||||||
|
|
||||||
def test_initialize_idempotent(self):
|
|
||||||
"""initialize can be called multiple times safely."""
|
|
||||||
DisplayRegistry.initialize()
|
|
||||||
DisplayRegistry._backends["custom"] = TerminalDisplay
|
|
||||||
DisplayRegistry.initialize()
|
|
||||||
assert "custom" in DisplayRegistry.list_backends()
|
|
||||||
|
|
||||||
|
|
||||||
class TestTerminalDisplay:
|
class TestTerminalDisplay:
|
||||||
"""Tests for TerminalDisplay class."""
|
"""Tests for TerminalDisplay class."""
|
||||||
|
|
||||||
@@ -118,115 +52,6 @@ class TestTerminalDisplay:
|
|||||||
display = TerminalDisplay()
|
display = TerminalDisplay()
|
||||||
display.cleanup()
|
display.cleanup()
|
||||||
|
|
||||||
def test_get_dimensions_returns_cached_value(self):
|
|
||||||
"""get_dimensions returns cached dimensions for stability."""
|
|
||||||
display = TerminalDisplay()
|
|
||||||
display.init(80, 24)
|
|
||||||
|
|
||||||
# First call should set cache
|
|
||||||
d1 = display.get_dimensions()
|
|
||||||
assert d1 == (80, 24)
|
|
||||||
|
|
||||||
def test_show_clears_screen_before_each_frame(self):
|
|
||||||
"""show clears previous frame to prevent visual wobble.
|
|
||||||
|
|
||||||
Regression test: Previously show() didn't clear the screen,
|
|
||||||
causing old content to remain and creating visual wobble.
|
|
||||||
The fix adds \\033[H\\033[J (cursor home + erase down) before each frame.
|
|
||||||
"""
|
|
||||||
from io import BytesIO
|
|
||||||
|
|
||||||
display = TerminalDisplay()
|
|
||||||
display.init(80, 24)
|
|
||||||
|
|
||||||
buffer = ["line1", "line2", "line3"]
|
|
||||||
|
|
||||||
fake_buffer = BytesIO()
|
|
||||||
fake_stdout = MagicMock()
|
|
||||||
fake_stdout.buffer = fake_buffer
|
|
||||||
with patch.object(sys, "stdout", fake_stdout):
|
|
||||||
display.show(buffer)
|
|
||||||
|
|
||||||
output = fake_buffer.getvalue().decode("utf-8")
|
|
||||||
assert output.startswith("\033[H\033[J"), (
|
|
||||||
f"Output should start with clear sequence, got: {repr(output[:20])}"
|
|
||||||
)
|
|
||||||
|
|
||||||
def test_show_clears_screen_on_subsequent_frames(self):
|
|
||||||
"""show clears screen on every frame, not just the first.
|
|
||||||
|
|
||||||
Regression test: Ensures each show() call includes the clear sequence.
|
|
||||||
"""
|
|
||||||
from io import BytesIO
|
|
||||||
|
|
||||||
# Use target_fps=0 to disable frame skipping in test
|
|
||||||
display = TerminalDisplay(target_fps=0)
|
|
||||||
display.init(80, 24)
|
|
||||||
|
|
||||||
buffer = ["line1", "line2"]
|
|
||||||
|
|
||||||
for i in range(3):
|
|
||||||
fake_buffer = BytesIO()
|
|
||||||
fake_stdout = MagicMock()
|
|
||||||
fake_stdout.buffer = fake_buffer
|
|
||||||
with patch.object(sys, "stdout", fake_stdout):
|
|
||||||
display.show(buffer)
|
|
||||||
|
|
||||||
output = fake_buffer.getvalue().decode("utf-8")
|
|
||||||
assert output.startswith("\033[H\033[J"), (
|
|
||||||
f"Frame {i} should start with clear sequence"
|
|
||||||
)
|
|
||||||
|
|
||||||
    def test_get_dimensions_stable_across_rapid_calls(self):
        """get_dimensions should not fluctuate when called rapidly.

        This test catches the bug where os.get_terminal_size() returns
        inconsistent values, causing visual wobble.
        """
        display = TerminalDisplay()
        display.init(80, 24)

        # Get dimensions 10 times rapidly (simulating frame loop)
        dims = [display.get_dimensions() for _ in range(10)]

        # All should be the same - this would fail if os.get_terminal_size()
        # returns different values each call
        assert len(set(dims)) == 1, f"Dimensions should be stable, got: {set(dims)}"

    def test_show_with_border_uses_render_border(self):
        """show with border=True calls render_border with FPS."""
        from unittest.mock import MagicMock

        display = TerminalDisplay()
        display.init(80, 24)

        buffer = ["line1", "line2"]

        # Mock get_monitor to provide FPS
        mock_monitor = MagicMock()
        mock_monitor.get_stats.return_value = {
            "pipeline": {"avg_ms": 16.5},
            "frame_count": 100,
        }

        # Mock render_border to verify it's called
        with (
            patch("engine.display.get_monitor", return_value=mock_monitor),
            patch("engine.display.render_border", wraps=render_border) as mock_render,
        ):
            display.show(buffer, border=True)

        # Verify render_border was called with correct arguments
        assert mock_render.called
        args, kwargs = mock_render.call_args
        # Arguments: buffer, width, height, fps, frame_time (positional)
        assert args[0] == buffer
        assert args[1] == 80
        assert args[2] == 24
        assert args[3] == pytest.approx(60.6, rel=0.1)  # fps = 1000 / 16.5
        assert args[4] == pytest.approx(16.5, rel=0.1)
        assert kwargs == {}  # no keyword arguments


class TestNullDisplay:
    """Tests for NullDisplay class."""
@@ -252,178 +77,3 @@ class TestNullDisplay:
        """cleanup does nothing."""
        display = NullDisplay()
        display.cleanup()

    def test_show_stores_last_buffer(self):
        """show stores last buffer for testing inspection."""
        display = NullDisplay()
        display.init(80, 24)

        buffer = ["line1", "line2", "line3"]
        display.show(buffer)

        assert display._last_buffer == buffer

    def test_show_tracks_last_buffer_across_calls(self):
        """show updates last_buffer on each call."""
        display = NullDisplay()
        display.init(80, 24)

        display.show(["first"])
        assert display._last_buffer == ["first"]

        display.show(["second"])
        assert display._last_buffer == ["second"]


class TestRenderBorder:
    """Tests for render_border function."""

    def test_render_border_adds_corners(self):
        """render_border adds corner characters."""
        from engine.display import render_border

        buffer = ["hello", "world"]
        result = render_border(buffer, width=10, height=5)

        assert result[0][0] in "┌┎┍"  # top-left
        assert result[0][-1] in "┐┒┓"  # top-right
        assert result[-1][0] in "└┚┖"  # bottom-left
        assert result[-1][-1] in "┘┛┙"  # bottom-right

    def test_render_border_dimensions(self):
        """render_border output matches requested dimensions."""
        from engine.display import render_border

        buffer = ["line1", "line2", "line3"]
        result = render_border(buffer, width=20, height=10)

        # Output should be exactly height lines
        assert len(result) == 10
        # Each line should be exactly width characters
        for line in result:
            assert len(line) == 20

    def test_render_border_with_fps(self):
        """render_border includes FPS in top border when provided."""
        from engine.display import render_border

        buffer = ["test"]
        result = render_border(buffer, width=20, height=5, fps=60.0)

        top_line = result[0]
        assert "FPS:60" in top_line or "FPS: 60" in top_line

    def test_render_border_with_frame_time(self):
        """render_border includes frame time in bottom border when provided."""
        from engine.display import render_border

        buffer = ["test"]
        result = render_border(buffer, width=20, height=5, frame_time=16.5)

        bottom_line = result[-1]
        assert "16.5ms" in bottom_line

    def test_render_border_crops_content_to_fit(self):
        """render_border crops content to fit within borders."""
        from engine.display import render_border

        # Buffer larger than viewport
        buffer = ["x" * 100] * 50
        result = render_border(buffer, width=20, height=10)

        # Result shrinks to fit viewport
        assert len(result) == 10
        for line in result[1:-1]:  # Skip border lines
            assert len(line) == 20

    def test_render_border_preserves_content(self):
        """render_border preserves content within borders."""
        from engine.display import render_border

        buffer = ["hello world", "test line"]
        result = render_border(buffer, width=20, height=5)

        # Content should appear in the middle rows
        content_lines = result[1:-1]
        assert any("hello world" in line for line in content_lines)

    def test_render_border_with_small_buffer(self):
        """render_border handles buffers smaller than viewport."""
        from engine.display import render_border

        buffer = ["hi"]
        result = render_border(buffer, width=20, height=10)

        # Should still produce full viewport with padding
        assert len(result) == 10
        # All lines should be full width
        for line in result:
            assert len(line) == 20
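Taken together, the TestRenderBorder cases pin down a contract: the output is exactly `height` lines of exactly `width` characters, content is cropped or padded to the inner viewport, and optional FPS and frame-time labels are embedded in the top and bottom borders. A minimal sketch that satisfies those assertions — an assumption for illustration, not the actual engine.display implementation:

```python
def render_border(buffer, width, height, fps=None, frame_time=None):
    """Minimal border renderer matching the tested contract (sketch only)."""
    inner_w, inner_h = width - 2, height - 2

    # Crop to the inner viewport, then pad short buffers with blank lines.
    lines = [line[:inner_w] for line in buffer[:inner_h]]
    lines += [""] * (inner_h - len(lines))

    # Optional labels woven into the top/bottom border runs.
    top_label = f"FPS:{fps:.0f}" if fps is not None else ""
    bottom_label = f"{frame_time}ms" if frame_time is not None else ""

    top = "┌" + top_label + "─" * (inner_w - len(top_label)) + "┐"
    bottom = "└" + "─" * (inner_w - len(bottom_label)) + bottom_label + "┘"
    body = ["│" + line.ljust(inner_w) + "│" for line in lines]
    return [top] + body + [bottom]
```

The sketch assumes labels fit inside `inner_w`; a production version would also need to truncate labels and handle degenerate widths.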
class TestMultiDisplay:
    """Tests for MultiDisplay class."""

    def test_init_stores_dimensions(self):
        """init stores dimensions and forwards to displays."""
        mock_display1 = MagicMock()
        mock_display2 = MagicMock()
        multi = MultiDisplay([mock_display1, mock_display2])

        multi.init(120, 40)

        assert multi.width == 120
        assert multi.height == 40
        mock_display1.init.assert_called_once_with(120, 40, reuse=False)
        mock_display2.init.assert_called_once_with(120, 40, reuse=False)

    def test_show_forwards_to_all_displays(self):
        """show forwards buffer to all displays."""
        mock_display1 = MagicMock()
        mock_display2 = MagicMock()
        multi = MultiDisplay([mock_display1, mock_display2])

        buffer = ["line1", "line2"]
        multi.show(buffer, border=False)

        mock_display1.show.assert_called_once_with(buffer, border=False)
        mock_display2.show.assert_called_once_with(buffer, border=False)

    def test_clear_forwards_to_all_displays(self):
        """clear forwards to all displays."""
        mock_display1 = MagicMock()
        mock_display2 = MagicMock()
        multi = MultiDisplay([mock_display1, mock_display2])

        multi.clear()

        mock_display1.clear.assert_called_once()
        mock_display2.clear.assert_called_once()

    def test_cleanup_forwards_to_all_displays(self):
        """cleanup forwards to all displays."""
        mock_display1 = MagicMock()
        mock_display2 = MagicMock()
        multi = MultiDisplay([mock_display1, mock_display2])

        multi.cleanup()

        mock_display1.cleanup.assert_called_once()
        mock_display2.cleanup.assert_called_once()

    def test_empty_displays_list(self):
        """handles empty displays list gracefully."""
        multi = MultiDisplay([])
        multi.init(80, 24)
        multi.show(["test"])
        multi.clear()
        multi.cleanup()

    def test_init_with_reuse(self):
        """init passes reuse flag to child displays."""
        mock_display = MagicMock()
        multi = MultiDisplay([mock_display])

        multi.init(80, 24, reuse=True)

        mock_display.init.assert_called_once_with(80, 24, reuse=True)
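The fan-out behavior the TestMultiDisplay cases verify is a plain composite: every lifecycle call is forwarded to each child display. A sketch of that shape, assumed for illustration — the real engine.display.MultiDisplay may differ in details:

```python
class MultiDisplay:
    """Composite display that forwards every call to its children (sketch)."""

    def __init__(self, displays):
        self.displays = displays
        self.width = 0
        self.height = 0

    def init(self, width, height, reuse=False):
        # Remember dimensions locally, then fan out to children.
        self.width, self.height = width, height
        for d in self.displays:
            d.init(width, height, reuse=reuse)

    def show(self, buffer, border=False):
        for d in self.displays:
            d.show(buffer, border=border)

    def clear(self):
        for d in self.displays:
            d.clear()

    def cleanup(self):
        for d in self.displays:
            d.cleanup()
```

An empty `displays` list is handled for free: every loop body simply runs zero times, which is exactly what `test_empty_displays_list` requires.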
@@ -5,10 +5,8 @@ Tests for engine.effects.controller module.
from unittest.mock import MagicMock, patch

from engine.effects.controller import (
    _format_stats,
    handle_effects_command,
    set_effect_chain_ref,
    show_effects_menu,
)

@@ -94,29 +92,6 @@ class TestHandleEffectsCommand:
        assert "Reordered pipeline" in result
        mock_chain_instance.reorder.assert_called_once_with(["noise", "fade"])

    def test_reorder_failure(self):
        """reorder returns error on failure."""
        with patch("engine.effects.controller.get_registry") as mock_registry:
            mock_registry.return_value.list_all.return_value = {}

            with patch("engine.effects.controller._get_effect_chain") as mock_chain:
                mock_chain_instance = MagicMock()
                mock_chain_instance.reorder.return_value = False
                mock_chain.return_value = mock_chain_instance

                result = handle_effects_command("/effects reorder bad")

                assert "Failed to reorder" in result

    def test_unknown_effect(self):
        """unknown effect returns error."""
        with patch("engine.effects.controller.get_registry") as mock_registry:
            mock_registry.return_value.list_all.return_value = {}

            result = handle_effects_command("/effects unknown on")

            assert "Unknown effect" in result

    def test_unknown_command(self):
        """unknown command returns error."""
        result = handle_effects_command("/unknown")
@@ -127,105 +102,6 @@ class TestHandleEffectsCommand:
        result = handle_effects_command("not a command")
        assert "Unknown command" in result

    def test_invalid_intensity_value(self):
        """invalid intensity value returns error."""
        with patch("engine.effects.controller.get_registry") as mock_registry:
            mock_plugin = MagicMock()
            mock_registry.return_value.get.return_value = mock_plugin
            mock_registry.return_value.list_all.return_value = {"noise": mock_plugin}

            result = handle_effects_command("/effects noise intensity bad")

            assert "Invalid intensity" in result

    def test_missing_action(self):
        """missing action returns usage."""
        with patch("engine.effects.controller.get_registry") as mock_registry:
            mock_plugin = MagicMock()
            mock_registry.return_value.get.return_value = mock_plugin
            mock_registry.return_value.list_all.return_value = {"noise": mock_plugin}

            result = handle_effects_command("/effects noise")

            assert "Usage" in result

    def test_stats_command(self):
        """stats command returns formatted stats."""
        with patch("engine.effects.controller.get_monitor") as mock_monitor:
            mock_monitor.return_value.get_stats.return_value = {
                "frame_count": 100,
                "pipeline": {"avg_ms": 1.5, "min_ms": 1.0, "max_ms": 2.0},
                "effects": {},
            }

            result = handle_effects_command("/effects stats")

            assert "Performance Stats" in result

    def test_list_only_effects(self):
        """list command works with just /effects."""
        with patch("engine.effects.controller.get_registry") as mock_registry:
            mock_plugin = MagicMock()
            mock_plugin.config.enabled = False
            mock_plugin.config.intensity = 0.5
            mock_registry.return_value.list_all.return_value = {"noise": mock_plugin}

            with patch("engine.effects.controller._get_effect_chain") as mock_chain:
                mock_chain.return_value = None

                result = handle_effects_command("/effects")

                assert "noise: OFF" in result


class TestShowEffectsMenu:
    """Tests for show_effects_menu function."""

    def test_returns_formatted_menu(self):
        """returns formatted effects menu."""
        with patch("engine.effects.controller.get_registry") as mock_registry:
            mock_plugin = MagicMock()
            mock_plugin.config.enabled = True
            mock_plugin.config.intensity = 0.75
            mock_registry.return_value.list_all.return_value = {"noise": mock_plugin}

            with patch("engine.effects.controller._get_effect_chain") as mock_chain:
                mock_chain_instance = MagicMock()
                mock_chain_instance.get_order.return_value = ["noise"]
                mock_chain.return_value = mock_chain_instance

                result = show_effects_menu()

                assert "EFFECTS MENU" in result
                assert "noise" in result


class TestFormatStats:
    """Tests for _format_stats function."""

    def test_returns_error_when_no_monitor(self):
        """returns error when monitor unavailable."""
        with patch("engine.effects.controller.get_monitor") as mock_monitor:
            mock_monitor.return_value.get_stats.return_value = {"error": "No data"}

            result = _format_stats()

            assert "No data" in result

    def test_formats_pipeline_stats(self):
        """formats pipeline stats correctly."""
        with patch("engine.effects.controller.get_monitor") as mock_monitor:
            mock_monitor.return_value.get_stats.return_value = {
                "frame_count": 50,
                "pipeline": {"avg_ms": 2.5, "min_ms": 2.0, "max_ms": 3.0},
                "effects": {"noise": {"avg_ms": 0.5, "min_ms": 0.4, "max_ms": 0.6}},
            }

            result = _format_stats()

            assert "Pipeline" in result
            assert "noise" in result


class TestSetEffectChainRef:
    """Tests for set_effect_chain_ref function."""
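The TestHandleEffectsCommand cases exercise a small slash-command grammar: `/effects`, `/effects stats`, `/effects reorder <names...>`, and `/effects <name> <on|off|intensity N>`. A hedged sketch of a parser for that grammar — `parse_effects_command` is hypothetical, written only to mirror the tested inputs, not the controller's actual code:

```python
def parse_effects_command(cmd: str):
    """Hypothetical parser for the /effects grammar the tests exercise."""
    parts = cmd.split()
    if not parts or parts[0] != "/effects":
        return ("error", "Unknown command")
    if len(parts) == 1:
        return ("list",)                      # bare /effects lists all effects
    if parts[1] == "stats":
        return ("stats",)
    if parts[1] == "reorder":
        return ("reorder", parts[2:])         # remaining tokens = new order
    name = parts[1]
    if len(parts) < 3:
        return ("error", "Usage: /effects <name> <on|off|intensity N>")
    if parts[2] == "intensity":
        try:
            return ("intensity", name, float(parts[3]))
        except (IndexError, ValueError):
            return ("error", "Invalid intensity")
    return (parts[2], name)                   # on/off toggles
```

This only splits the command into an action tuple; the real controller also resolves `name` against the registry and the effect chain, which is why the tests patch `get_registry` and `_get_effect_chain`.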
tests/test_emitters.py (new file, 69 lines)
@@ -0,0 +1,69 @@
"""
Tests for engine.emitters module.
"""

from engine.emitters import EventEmitter, Startable, Stoppable


class TestEventEmitterProtocol:
    """Tests for EventEmitter protocol."""

    def test_protocol_exists(self):
        """EventEmitter protocol is defined."""
        assert EventEmitter is not None

    def test_protocol_has_subscribe_method(self):
        """EventEmitter has subscribe method in protocol."""
        assert hasattr(EventEmitter, "subscribe")

    def test_protocol_has_unsubscribe_method(self):
        """EventEmitter has unsubscribe method in protocol."""
        assert hasattr(EventEmitter, "unsubscribe")


class TestStartableProtocol:
    """Tests for Startable protocol."""

    def test_protocol_exists(self):
        """Startable protocol is defined."""
        assert Startable is not None

    def test_protocol_has_start_method(self):
        """Startable has start method in protocol."""
        assert hasattr(Startable, "start")


class TestStoppableProtocol:
    """Tests for Stoppable protocol."""

    def test_protocol_exists(self):
        """Stoppable protocol is defined."""
        assert Stoppable is not None

    def test_protocol_has_stop_method(self):
        """Stoppable has stop method in protocol."""
        assert hasattr(Stoppable, "stop")


class TestProtocolCompliance:
    """Tests that existing classes comply with protocols."""

    def test_ntfy_poller_complies_with_protocol(self):
        """NtfyPoller implements EventEmitter protocol."""
        from engine.ntfy import NtfyPoller

        poller = NtfyPoller("http://example.com/topic")
        assert hasattr(poller, "subscribe")
        assert hasattr(poller, "unsubscribe")
        assert callable(poller.subscribe)
        assert callable(poller.unsubscribe)

    def test_mic_monitor_complies_with_protocol(self):
        """MicMonitor implements EventEmitter and Startable protocols."""
        from engine.mic import MicMonitor

        monitor = MicMonitor()
        assert hasattr(monitor, "subscribe")
        assert hasattr(monitor, "unsubscribe")
        assert hasattr(monitor, "start")
        assert hasattr(monitor, "stop")
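The new tests/test_emitters.py checks the protocols only via `hasattr`. One plausible way engine.emitters could declare them is with `typing.Protocol`; the sketch below is an assumption about that shape, not the module's actual source. Because protocols are structural, implementers such as NtfyPoller and MicMonitor would not need to inherit from them:

```python
from typing import Any, Callable, Protocol, runtime_checkable


@runtime_checkable
class EventEmitter(Protocol):
    """Anything that lets callers register and unregister event callbacks."""

    def subscribe(self, callback: Callable[..., Any]) -> None: ...
    def unsubscribe(self, callback: Callable[..., Any]) -> None: ...


@runtime_checkable
class Startable(Protocol):
    def start(self) -> None: ...


@runtime_checkable
class Stoppable(Protocol):
    def stop(self) -> None: ...


# Structural compliance: matching method names suffice, no inheritance needed.
class Dummy:
    def subscribe(self, callback): pass
    def unsubscribe(self, callback): pass


assert isinstance(Dummy(), EventEmitter)
```

`@runtime_checkable` is what makes the `isinstance` check legal; it verifies only that the methods exist, which matches the `hasattr`-style assertions in the tests.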
@@ -1,234 +0,0 @@
"""
Tests for engine.fetch module.
"""

import json
from unittest.mock import MagicMock, patch

from engine.fetch import (
    _fetch_gutenberg,
    fetch_all,
    fetch_feed,
    fetch_poetry,
    load_cache,
    save_cache,
)


class TestFetchFeed:
    """Tests for fetch_feed function."""

    @patch("engine.fetch.urllib.request.urlopen")
    def test_fetch_success(self, mock_urlopen):
        """Successful feed fetch returns parsed feed."""
        mock_response = MagicMock()
        mock_response.read.return_value = b"<rss>test</rss>"
        mock_urlopen.return_value = mock_response

        result = fetch_feed("http://example.com/feed")

        assert result is not None

    @patch("engine.fetch.urllib.request.urlopen")
    def test_fetch_network_error(self, mock_urlopen):
        """Network error returns None."""
        mock_urlopen.side_effect = Exception("Network error")

        result = fetch_feed("http://example.com/feed")

        assert result is None


class TestFetchAll:
    """Tests for fetch_all function."""

    @patch("engine.fetch.fetch_feed")
    @patch("engine.fetch.strip_tags")
    @patch("engine.fetch.skip")
    @patch("engine.fetch.boot_ln")
    def test_fetch_all_success(self, mock_boot, mock_skip, mock_strip, mock_fetch_feed):
        """Successful fetch returns items."""
        mock_feed = MagicMock()
        mock_feed.bozo = False
        mock_feed.entries = [
            {"title": "Headline 1", "published_parsed": (2024, 1, 1, 12, 0, 0)},
            {"title": "Headline 2", "updated_parsed": (2024, 1, 2, 12, 0, 0)},
        ]
        mock_fetch_feed.return_value = mock_feed
        mock_skip.return_value = False
        mock_strip.side_effect = lambda x: x

        items, linked, failed = fetch_all()

        assert linked > 0
        assert failed == 0

    @patch("engine.fetch.fetch_feed")
    @patch("engine.fetch.boot_ln")
    def test_fetch_all_feed_error(self, mock_boot, mock_fetch_feed):
        """Feed error increments failed count."""
        mock_fetch_feed.return_value = None

        items, linked, failed = fetch_all()

        assert failed > 0

    @patch("engine.fetch.fetch_feed")
    @patch("engine.fetch.strip_tags")
    @patch("engine.fetch.skip")
    @patch("engine.fetch.boot_ln")
    def test_fetch_all_skips_filtered(
        self, mock_boot, mock_skip, mock_strip, mock_fetch_feed
    ):
        """Filtered headlines are skipped."""
        mock_feed = MagicMock()
        mock_feed.bozo = False
        mock_feed.entries = [
            {"title": "Sports scores"},
            {"title": "Valid headline"},
        ]
        mock_fetch_feed.return_value = mock_feed
        mock_skip.side_effect = lambda x: x == "Sports scores"
        mock_strip.side_effect = lambda x: x

        items, linked, failed = fetch_all()

        assert any("Valid headline" in item[0] for item in items)


class TestFetchGutenberg:
    """Tests for _fetch_gutenberg function."""

    @patch("engine.fetch.urllib.request.urlopen")
    def test_gutenberg_success(self, mock_urlopen):
        """Successful gutenberg fetch returns items."""
        text = """Project Gutenberg

*** START OF THE PROJECT GUTENBERG ***
This is a test poem with multiple lines
that should be parsed as a block.

Another stanza with more content here.

*** END OF THE PROJECT GUTENBERG ***
"""
        mock_response = MagicMock()
        mock_response.read.return_value = text.encode("utf-8")
        mock_urlopen.return_value = mock_response

        result = _fetch_gutenberg("http://example.com/test", "Test")

        assert len(result) > 0

    @patch("engine.fetch.urllib.request.urlopen")
    def test_gutenberg_network_error(self, mock_urlopen):
        """Network error returns empty list."""
        mock_urlopen.side_effect = Exception("Network error")

        result = _fetch_gutenberg("http://example.com/test", "Test")

        assert result == []

    @patch("engine.fetch.urllib.request.urlopen")
    def test_gutenberg_skips_short_blocks(self, mock_urlopen):
        """Blocks shorter than 20 chars are skipped."""
        text = """*** START OF THE ***
Short
*** END OF THE ***
"""
        mock_response = MagicMock()
        mock_response.read.return_value = text.encode("utf-8")
        mock_urlopen.return_value = mock_response

        result = _fetch_gutenberg("http://example.com/test", "Test")

        assert result == []

    @patch("engine.fetch.urllib.request.urlopen")
    def test_gutenberg_skips_all_caps_headers(self, mock_urlopen):
        """All-caps lines are skipped as headers."""
        text = """*** START OF THE ***
THIS IS ALL CAPS HEADER
more content here
*** END OF THE ***
"""
        mock_response = MagicMock()
        mock_response.read.return_value = text.encode("utf-8")
        mock_urlopen.return_value = mock_response

        result = _fetch_gutenberg("http://example.com/test", "Test")

        assert len(result) > 0


class TestFetchPoetry:
    """Tests for fetch_poetry function."""

    @patch("engine.fetch._fetch_gutenberg")
    @patch("engine.fetch.boot_ln")
    def test_fetch_poetry_success(self, mock_boot, mock_fetch):
        """Successful poetry fetch returns items."""
        mock_fetch.return_value = [
            ("Stanza 1 content here", "Test", ""),
            ("Stanza 2 content here", "Test", ""),
        ]

        items, linked, failed = fetch_poetry()

        assert linked > 0
        assert failed == 0

    @patch("engine.fetch._fetch_gutenberg")
    @patch("engine.fetch.boot_ln")
    def test_fetch_poetry_failure(self, mock_boot, mock_fetch):
        """Failed fetch increments failed count."""
        mock_fetch.return_value = []

        items, linked, failed = fetch_poetry()

        assert failed > 0


class TestCache:
    """Tests for cache functions."""

    @patch("engine.fetch._cache_path")
    def test_load_cache_success(self, mock_path):
        """Successful cache load returns items."""
        mock_path.return_value.__str__ = MagicMock(return_value="/tmp/cache")
        mock_path.return_value.exists.return_value = True
        mock_path.return_value.read_text.return_value = json.dumps(
            {"items": [("title", "source", "time")]}
        )

        result = load_cache()

        assert result is not None

    @patch("engine.fetch._cache_path")
    def test_load_cache_missing_file(self, mock_path):
        """Missing cache file returns None."""
        mock_path.return_value.exists.return_value = False

        result = load_cache()

        assert result is None

    @patch("engine.fetch._cache_path")
    def test_load_cache_invalid_json(self, mock_path):
        """Invalid JSON returns None."""
        mock_path.return_value.exists.return_value = True
        mock_path.return_value.read_text.side_effect = json.JSONDecodeError("", "", 0)

        result = load_cache()

        assert result is None

    @patch("engine.fetch._cache_path")
    def test_save_cache_success(self, mock_path):
        """Save cache writes to file."""
        mock_path.return_value.__truediv__ = MagicMock(
            return_value=mock_path.return_value
        )

        save_cache([("title", "source", "time")])
Some files were not shown because too many files have changed in this diff.