forked from genewildish/Mainline
Compare commits: 124 commits, feature/ve ... feature/gr
.gitignore (vendored): 5 changes
@@ -10,3 +10,8 @@ htmlcov/
 .pytest_cache/
 *.egg-info/
 coverage.xml
+*.dot
+*.png
+test-reports/
+.opencode/
+tests/comparison_output/
.opencode/skills/mainline-architecture/SKILL.md (new file, 97 lines)
@@ -0,0 +1,97 @@
---
name: mainline-architecture
description: Pipeline stages, capability resolution, and core architecture patterns
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's pipeline architecture - the Stage-based system for dependency resolution, data flow, and component composition.

## Key Concepts

### Stage Class (engine/pipeline/core.py)

The `Stage` ABC is the foundation. All pipeline components inherit from it:

```python
class Stage(ABC):
    name: str
    category: str  # "source", "effect", "overlay", "display", "camera"
    optional: bool = False

    @property
    def capabilities(self) -> set[str]:
        """What this stage provides (e.g., 'source.headlines')"""
        return set()

    @property
    def dependencies(self) -> set[str]:
        """What this stage needs (e.g., {'source'})"""
        return set()
```
### Capability-Based Dependencies

The Pipeline resolves dependencies using **prefix matching**:

- `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
- `"camera.state"` matches the camera state capability
- This allows flexible composition without hardcoding specific stage names
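As a sketch, prefix resolution can be pictured like this. The names `satisfies`, `find_providers`, and `DemoStage` are illustrative, not the actual controller API, and the exact matching rule (dot-boundary vs. plain prefix) is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class DemoStage:  # stand-in for a real Stage subclass
    name: str
    capabilities: set = field(default_factory=set)

def satisfies(capability: str, dependency: str) -> bool:
    # "source" accepts "source.headlines" but not "sources.other"
    return capability == dependency or capability.startswith(dependency + ".")

def find_providers(dependency: str, stages: list) -> list:
    return [s for s in stages if any(satisfies(c, dependency) for c in s.capabilities)]

stages = [DemoStage("headlines", {"source.headlines"}),
          DemoStage("terminal", {"display.output"})]
print([s.name for s in find_providers("source", stages)])  # ['headlines']
```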
### Minimum Capabilities

The pipeline requires these minimum capabilities to function:

- `"source"` - Data source capability
- `"render.output"` - Rendered content capability
- `"display.output"` - Display output capability
- `"camera.state"` - Camera state for viewport filtering

These are automatically injected if missing (auto-injection).

### DataType Enum

PureData-style data types for inlet/outlet validation:

- `SOURCE_ITEMS`: List[SourceItem] - raw items from sources
- `ITEM_TUPLES`: List[tuple] - (title, source, timestamp) tuples
- `TEXT_BUFFER`: List[str] - rendered ANSI buffer
- `RAW_TEXT`: str - raw text strings
- `PIL_IMAGE`: PIL Image object

### Pipeline Execution

The Pipeline (engine/pipeline/controller.py):

1. Collects all stages from StageRegistry
2. Resolves dependencies using prefix matching
3. Executes stages in dependency order
4. Handles errors for non-optional stages

### Canvas & Camera

- **Canvas** (`engine/canvas.py`): 2D rendering surface with dirty region tracking
- **Camera** (`engine/camera.py`): Viewport controller for scrolling content

Canvas tracks dirty regions automatically when content is written via `put_region`, `put_text`, and `fill`, enabling partial buffer updates.

## Adding New Stages

1. Create a class inheriting from `Stage`
2. Define `capabilities` and `dependencies` properties
3. Implement required abstract methods
4. Register in StageRegistry or use as adapter
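The steps above can be sketched as follows. The `Stage` stub is a stand-in so the example runs outside the repo; the real base class also has abstract methods this sketch omits:

```python
from abc import ABC

class Stage(ABC):  # stand-in for engine.pipeline.core.Stage
    name: str
    category: str
    optional: bool = False

    @property
    def capabilities(self) -> set:
        return set()

    @property
    def dependencies(self) -> set:
        return set()

class HeadlineSource(Stage):
    """Provides 'source.headlines' and needs nothing upstream."""
    name = "headlines"
    category = "source"

    @property
    def capabilities(self) -> set:
        return {"source.headlines"}

stage = HeadlineSource()
print(stage.capabilities)  # {'source.headlines'}
```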
## Common Patterns

- Use adapters (engine/pipeline/adapters.py) to wrap existing components as stages
- Set `optional=True` for stages that can fail gracefully
- Use `stage_type` and `render_order` for execution ordering
- Clock stages update state independently of data flow

## Sources

- engine/pipeline/core.py - Stage base class
- engine/pipeline/controller.py - Pipeline implementation
- engine/pipeline/adapters/ - Stage adapters
- docs/PIPELINE.md - Pipeline documentation
.opencode/skills/mainline-display/SKILL.md (new file, 163 lines)
@@ -0,0 +1,163 @@
---
name: mainline-display
description: Display backend implementation and the Display protocol
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's display backend system - how to implement new display backends and how the Display protocol works.

## Key Concepts

### Display Protocol

All backends implement a common Display protocol (in `engine/display/__init__.py`):

```python
class Display(Protocol):
    width: int
    height: int

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize the display"""
        ...

    def show(self, buf: list[str], border: bool = False) -> None:
        """Display the buffer"""
        ...

    def clear(self) -> None:
        """Clear the display"""
        ...

    def cleanup(self) -> None:
        """Clean up resources"""
        ...

    def get_dimensions(self) -> tuple[int, int]:
        """Return (width, height)"""
        ...
```
### DisplayRegistry

Discovers and manages backends:

```python
from engine.display import DisplayRegistry

display = DisplayRegistry.create("terminal")  # or "websocket", "null", "multi"
```

### Available Backends

| Backend | File | Description |
|---------|------|-------------|
| terminal | backends/terminal.py | ANSI terminal output |
| websocket | backends/websocket.py | Web browser via WebSocket |
| null | backends/null.py | Headless for testing |
| multi | backends/multi.py | Forwards to multiple displays |
| moderngl | backends/moderngl.py | GPU-accelerated OpenGL rendering (optional) |
### WebSocket Backend

- WebSocket server: port 8765
- HTTP server: port 8766 (serves client/index.html)
- Client has ANSI color parsing and fullscreen support

### Multi Backend

Forwards to multiple displays simultaneously - useful for `terminal + websocket`.

## Adding a New Backend

1. Create `engine/display/backends/my_backend.py`
2. Implement the Display protocol methods
3. Register in `engine/display/__init__.py`'s `DisplayRegistry`

Required methods:

- `init(width: int, height: int, reuse: bool = False)` - Initialize display
- `show(buf: list[str], border: bool = False)` - Display buffer
- `clear()` - Clear screen
- `cleanup()` - Clean up resources
- `get_dimensions() -> tuple[int, int]` - Get terminal dimensions

Optional methods:

- `title(text: str)` - Set window title
- `cursor(show: bool)` - Control cursor
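A minimal backend satisfying the required methods might look like this - an illustrative in-memory recorder in the spirit of the null backend, not actual repo code:

```python
class MemoryDisplay:
    """Headless backend that records frames instead of drawing them."""

    def __init__(self) -> None:
        self.width = 0
        self.height = 0
        self.frames: list[list[str]] = []

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        self.width, self.height = width, height

    def show(self, buf: list[str], border: bool = False) -> None:
        self.frames.append(list(buf))  # copy so later mutation can't corrupt history

    def clear(self) -> None:
        self.frames.clear()

    def cleanup(self) -> None:
        pass  # nothing to release

    def get_dimensions(self) -> tuple[int, int]:
        return (self.width, self.height)

d = MemoryDisplay()
d.init(80, 24)
d.show(["hello"])
print(d.get_dimensions(), len(d.frames))  # (80, 24) 1
```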
## Usage

```bash
python mainline.py --display terminal   # default
python mainline.py --display websocket
python mainline.py --display moderngl   # GPU-accelerated (requires moderngl)
```
## Common Bugs and Patterns

### BorderMode.OFF Enum Bug

**Problem**: `BorderMode.OFF` has enum value `1` (not `0`), and Python enums are always truthy.

**Incorrect Code**:

```python
if border:
    buffer = render_border(buffer, width, height, fps, frame_time)
```

**Correct Code**:

```python
from engine.display import BorderMode

if border and border != BorderMode.OFF:
    buffer = render_border(buffer, width, height, fps, frame_time)
```

**Why**: Checking `if border:` evaluates to `True` even when `border == BorderMode.OFF` because enum members are always truthy in Python.
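The truthiness behavior is easy to demonstrate with a stand-in enum whose values mirror the description above:

```python
from enum import Enum

class BorderMode(Enum):  # stand-in for engine.display.BorderMode
    OFF = 1
    ON = 2

border = BorderMode.OFF
print(bool(border))                               # True - enum members are truthy
print(bool(border and border != BorderMode.OFF))  # False - explicit comparison works
```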
### Context Type Mismatch

**Problem**: `PipelineContext` and `EffectContext` have different APIs for storing data.

- `PipelineContext`: Uses `set()`/`get()` for services
- `EffectContext`: Uses `set_state()`/`get_state()` for state

**Pattern for Passing Data**:

```python
# In pipeline setup (uses PipelineContext)
ctx.set("pipeline_order", pipeline.execution_order)

# In EffectPluginStage (must copy to EffectContext)
effect_ctx.set_state("pipeline_order", ctx.get("pipeline_order"))
```
### Terminal Display ANSI Patterns

**Screen Clearing**:

```python
output = "\033[H\033[J" + "".join(buffer)
```

**Cursor Positioning** (used by HUD effect):

- `\033[row;colH` - Move cursor to row, column
- Example: `\033[1;1H` - Move to row 1, column 1

**Key Insight**: Terminal display joins buffer lines WITHOUT newlines, relying on ANSI cursor positioning codes to move the cursor to the correct location for each line.
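A sketch of that approach - each row carries its own cursor-move escape, so the joined frame needs no newlines. `frame_output` is a hypothetical helper, not the actual backend code:

```python
def frame_output(lines: list[str]) -> str:
    """Build one write: home + clear, then an explicit cursor move per row
    instead of newlines (ANSI rows/columns are 1-based)."""
    positioned = [f"\033[{row};1H{text}" for row, text in enumerate(lines, start=1)]
    return "\033[H\033[J" + "".join(positioned)

out = frame_output(["hello", "world"])
print(out.count("\n"))  # 0 - no newlines anywhere in the frame
```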
### EffectPluginStage Context Copying

**Problem**: When effects need access to pipeline services (like `pipeline_order`), they must be copied from `PipelineContext` to `EffectContext`.

**Pattern**:

```python
# In EffectPluginStage.process()
# Copy pipeline_order from PipelineContext services to EffectContext state
pipeline_order = ctx.get("pipeline_order")
if pipeline_order:
    effect_ctx.set_state("pipeline_order", pipeline_order)
```

This ensures effects can access `ctx.get_state("pipeline_order")` in their process method.
.opencode/skills/mainline-effects/SKILL.md (new file, 113 lines)
@@ -0,0 +1,113 @@
---
name: mainline-effects
description: How to add new effect plugins to Mainline's effect system
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's effect plugin system - how to create, configure, and integrate visual effects into the pipeline.

## Key Concepts

### EffectPlugin ABC (engine/effects/types.py)

All effects must inherit from `EffectPlugin` and implement:

```python
class EffectPlugin(ABC):
    name: str
    config: EffectConfig
    param_bindings: dict[str, dict[str, str | float]] = {}
    supports_partial_updates: bool = False

    @abstractmethod
    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        """Process buffer with effect applied"""
        ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None:
        """Configure the effect"""
        ...
```
### EffectContext

Passed to every effect's process method:

```python
@dataclass
class EffectContext:
    terminal_width: int
    terminal_height: int
    scroll_cam: int
    ticker_height: int
    camera_x: int = 0
    mic_excess: float = 0.0
    grad_offset: float = 0.0
    frame_number: int = 0
    has_message: bool = False
    items: list = field(default_factory=list)
    _state: dict[str, Any] = field(default_factory=dict)
```

Access sensor values via `ctx.get_sensor_value("sensor_name")`.

### EffectConfig

Configuration dataclass:

```python
@dataclass
class EffectConfig:
    enabled: bool = True
    intensity: float = 1.0
    params: dict[str, Any] = field(default_factory=dict)
```

### Partial Updates

For performance optimization, set `supports_partial_updates = True` and implement `process_partial`:

```python
class MyEffect(EffectPlugin):
    supports_partial_updates = True

    def process_partial(self, buf, ctx, partial: PartialUpdate) -> list[str]:
        # Only process changed regions
        ...
```
## Adding a New Effect

1. Create a file in `effects_plugins/my_effect.py`
2. Inherit from `EffectPlugin`
3. Implement `process()` and `configure()`
4. Add to `effects_plugins/__init__.py` (runtime discovery via issubclass checks)
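A minimal end-to-end sketch of those steps. The `EffectPlugin`/`EffectConfig` stubs stand in for `engine/effects/types.py` so the example runs standalone, and `InvertEffect` is a made-up effect:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any

@dataclass
class EffectConfig:  # stand-in for engine.effects.types.EffectConfig
    enabled: bool = True
    intensity: float = 1.0
    params: dict[str, Any] = field(default_factory=dict)

class EffectPlugin(ABC):  # stand-in for engine.effects.types.EffectPlugin
    name: str

    @abstractmethod
    def process(self, buf: list[str], ctx: Any) -> list[str]: ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None: ...

class InvertEffect(EffectPlugin):
    """Made-up effect: swap the case of every character in the buffer."""
    name = "invert"

    def configure(self, config: EffectConfig) -> None:
        self.config = config

    def process(self, buf: list[str], ctx: Any = None) -> list[str]:
        if not self.config.enabled:
            return buf
        return [line.swapcase() for line in buf]

effect = InvertEffect()
effect.configure(EffectConfig())
print(effect.process(["Hello World"]))  # ['hELLO wORLD']
```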
## Param Bindings

Declarative sensor-to-param mappings:

```python
param_bindings = {
    "intensity": {"sensor": "mic", "transform": "linear"},
    "rate": {"sensor": "oscillator", "transform": "exponential"},
}
```

Transforms: `linear`, `exponential`, `threshold`

## Effect Chain

Effects are chained via `engine/effects/chain.py` - each effect is processed in order, and its output is passed to the next.
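Conceptually, the chain is a fold over the effect list. A sketch with toy effects, not the actual chain.py code:

```python
from functools import reduce

class Upper:
    def process(self, buf, ctx):
        return [line.upper() for line in buf]

class Exclaim:
    def process(self, buf, ctx):
        return [line + "!" for line in buf]

def run_chain(effects, buf, ctx=None):
    """Fold the buffer through each effect in order."""
    return reduce(lambda b, effect: effect.process(b, ctx), effects, buf)

print(run_chain([Upper(), Exclaim()], ["breaking news"]))  # ['BREAKING NEWS!']
```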
## Existing Effects

See `effects_plugins/`:

- noise.py, fade.py, glitch.py, firehose.py
- border.py, crop.py, tint.py, hud.py
.opencode/skills/mainline-presets/SKILL.md (new file, 103 lines)
@@ -0,0 +1,103 @@
---
name: mainline-presets
description: Creating pipeline presets in TOML format for Mainline
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers how to create pipeline presets in TOML format for Mainline's rendering pipeline.

## Key Concepts

### Preset Loading Order

Presets are loaded from multiple locations (later overrides earlier):

1. Built-in: `engine/presets.toml`
2. User config: `~/.config/mainline/presets.toml`
3. Local override: `./presets.toml`
### PipelinePreset Dataclass

```python
@dataclass
class PipelinePreset:
    name: str
    description: str = ""
    source: str = "headlines"  # Data source
    display: str = "terminal"  # Display backend
    camera: str = "scroll"     # Camera mode
    effects: list[str] = field(default_factory=list)
    border: bool = False
```

### TOML Format

```toml
[presets.my-preset]
description = "My custom pipeline"
source = "headlines"
display = "terminal"
camera = "scroll"
effects = ["noise", "fade"]
border = true
```
## Creating a Preset

### Option 1: User Config

Create or edit `~/.config/mainline/presets.toml`:

```toml
[presets.my-cool-preset]
description = "Noise and glitch effects"
source = "headlines"
display = "terminal"
effects = ["noise", "glitch"]
```

### Option 2: Local Override

Create `./presets.toml` in the project root:

```toml
[presets.dev-inspect]
description = "Pipeline introspection for development"
source = "headlines"
display = "terminal"
effects = ["hud"]
```

### Option 3: Built-in

Edit `engine/presets.toml` (requires a PR to the repository).

## Available Sources

- `headlines` - RSS news feeds
- `poetry` - Literature mode
- `pipeline-inspect` - Live DAG visualization

## Available Displays

- `terminal` - ANSI terminal
- `websocket` - Web browser
- `null` - Headless
- `moderngl` - GPU-accelerated (optional)

## Available Effects

See `effects_plugins/`:

- noise, fade, glitch, firehose
- border, crop, tint, hud

## Validation Functions

Use these from `engine/pipeline/presets.py`:

- `validate_preset()` - Validate preset structure
- `validate_signal_path()` - Detect circular dependencies
- `generate_preset_toml()` - Generate a skeleton preset
.opencode/skills/mainline-sensors/SKILL.md (new file, 136 lines)
@@ -0,0 +1,136 @@
---
name: mainline-sensors
description: Sensor framework for real-time input in Mainline
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers Mainline's sensor framework - how to use, create, and integrate sensors for real-time input.

## Key Concepts

### Sensor Base Class (engine/sensors/__init__.py)

```python
class Sensor(ABC):
    name: str
    unit: str = ""

    @property
    def available(self) -> bool:
        """Whether sensor is currently available"""
        return True

    @abstractmethod
    def read(self) -> SensorValue | None:
        """Read current sensor value"""
        ...

    def start(self) -> None:
        """Initialize sensor (optional)"""
        pass

    def stop(self) -> None:
        """Clean up sensor (optional)"""
        pass
```

### SensorValue Dataclass

```python
@dataclass
class SensorValue:
    sensor_name: str
    value: float
    timestamp: float
    unit: str = ""
```
### SensorRegistry

Discovers and manages sensors globally:

```python
from engine.sensors import SensorRegistry

registry = SensorRegistry()
sensor = registry.get("mic")
```

### SensorStage

Pipeline adapter that provides sensor values to effects:

```python
from engine.pipeline.adapters import SensorStage

stage = SensorStage(sensor_name="mic")
```

## Built-in Sensors

| Sensor | File | Description |
|--------|------|-------------|
| MicSensor | sensors/mic.py | Microphone input (RMS dB) |
| OscillatorSensor | sensors/oscillator.py | Test sine wave generator |
| PipelineMetricsSensor | sensors/pipeline_metrics.py | FPS, frame time, etc. |
## Param Bindings

Effects declare sensor-to-param mappings:

```python
class GlitchEffect(EffectPlugin):
    param_bindings = {
        "intensity": {"sensor": "mic", "transform": "linear"},
    }
```

### Transform Functions

- `linear` - Direct mapping to param range
- `exponential` - Exponential scaling
- `threshold` - Binary on/off
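Plausible shapes for the three transforms - the exact signatures and curve constants in Mainline may differ:

```python
import math

def linear(value: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """Map a normalized sensor value straight onto [lo, hi]."""
    return lo + (hi - lo) * value

def exponential(value: float, lo: float = 0.0, hi: float = 1.0, k: float = 4.0) -> float:
    """Exponential curve: small inputs barely move the param, large ones dominate."""
    return lo + (hi - lo) * (math.expm1(k * value) / math.expm1(k))

def threshold(value: float, cutoff: float = 0.5) -> float:
    """Binary on/off around a cutoff."""
    return 1.0 if value >= cutoff else 0.0

print(linear(0.5), threshold(0.7))  # 0.5 1.0
```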
## Adding a New Sensor

1. Create `engine/sensors/my_sensor.py`
2. Inherit from the `Sensor` ABC
3. Implement required methods
4. Register in `SensorRegistry`

Example:

```python
class MySensor(Sensor):
    name = "my-sensor"
    unit = "units"

    def read(self) -> SensorValue | None:
        return SensorValue(
            sensor_name=self.name,
            value=self._read_hardware(),
            timestamp=time.time(),
            unit=self.unit,
        )
```

## Using Sensors in Effects

Access sensor values via EffectContext:

```python
def process(self, buf, ctx):
    mic_level = ctx.get_sensor_value("mic")
    if mic_level and mic_level > 0.5:
        # Apply intense effect
        ...
```

Or via param_bindings (automatic):

```python
# If intensity is bound to "mic", it's automatically
# available in self.config.intensity
```
.opencode/skills/mainline-sources/SKILL.md (new file, 87 lines)
@@ -0,0 +1,87 @@
---
name: mainline-sources
description: Adding new RSS feeds and data sources to Mainline
compatibility: opencode
metadata:
  audience: developers
  source_type: codebase
---

## What This Skill Covers

This skill covers how to add new data sources (RSS feeds, poetry) to Mainline.

## Key Concepts

### Feeds Dictionary (engine/sources.py)

All feeds are defined in a simple dictionary:

```python
FEEDS = {
    "Feed Name": "https://example.com/feed.xml",
    # Category comments help organize:
    # Science & Technology
    # Economics & Business
    # World & Politics
    # Culture & Ideas
}
```

### Poetry Sources

Project Gutenberg URLs for public domain literature:

```python
POETRY_SOURCES = {
    "Author Name": "https://www.gutenberg.org/cache/epub/1234/pg1234.txt",
}
```

### Language & Script Mapping

sources.py also contains language/script detection mappings used for auto-translation and font selection.

## Adding a New RSS Feed

1. Edit `engine/sources.py`
2. Add an entry to the `FEEDS` dict under the appropriate category:

   ```python
   "My Feed": "https://example.com/feed.xml",
   ```

3. The feed will be automatically discovered on the next run
### Feed Requirements

- Must be valid RSS or Atom XML
- Should have `<title>` elements for items
- Must be HTTP/HTTPS accessible

## Adding Poetry Sources

1. Edit `engine/sources.py`
2. Add to the `POETRY_SOURCES` dict:

   ```python
   "Author": "https://www.gutenberg.org/cache/epub/XXXX/pgXXXX.txt",
   ```

### Poetry Requirements

- Plain text (UTF-8)
- Project Gutenberg format preferred
- No DRM-protected sources

## Data Flow

Feeds are fetched via `engine/fetch.py`:

- `fetch_feed(url)` - Fetches and parses RSS/Atom
- Results are cached for fast restarts
- Filtered via `engine/filter.py` for content cleaning
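For intuition, title extraction from a simple RSS document can be sketched with the stdlib. `extract_titles` is illustrative only; the real `fetch_feed()` presumably also handles Atom, namespaces, caching, and network errors:

```python
import xml.etree.ElementTree as ET

def extract_titles(xml_text: str) -> list[str]:
    """Pull item titles out of a minimal RSS document."""
    root = ET.fromstring(xml_text)
    # iter() walks in document order: channel title first, then item titles
    return [t.text for t in root.iter("title")][1:]

rss = """<rss><channel><title>Demo</title>
<item><title>First headline</title></item>
<item><title>Second headline</title></item>
</channel></rss>"""
print(extract_titles(rss))  # ['First headline', 'Second headline']
```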
## Categories

Organize new feeds by category using comments:

- Science & Technology
- Economics & Business
- World & Politics
- Culture & Ideas
AGENTS.md (422 changes)
@@ -4,145 +4,208 @@
 This project uses:
 - **mise** (mise.jdx.dev) - tool version manager and task runner
 - **hk** (hk.jdx.dev) - git hook manager
 - **uv** - fast Python package installer
-- **ruff** - linter and formatter
-- **pytest** - test runner
+- **ruff** - linter and formatter (line-length 88, target Python 3.10)
+- **pytest** - test runner with strict marker enforcement
 
 ### Setup
 
 ```bash
-# Install dependencies
-mise run install
-
-# Or equivalently:
-uv sync --all-extras  # includes mic, websocket, sixel support
+mise run install  # Install dependencies
+# Or: uv sync --all-extras  # includes mic, websocket support
 ```
 ### Available Commands
 
 ```bash
-mise run test    # Run tests
-mise run test-v  # Run tests verbose
-mise run lint    # Run ruff linter
+# Testing
+mise run test          # Run all tests
+mise run test-cov      # Run tests with coverage report
+mise run test-browser  # Run e2e browser tests (requires playwright)
+pytest tests/test_foo.py::TestClass::test_method  # Run single test
+
+# Linting & Formatting
+mise run lint      # Run ruff linter
+mise run lint-fix  # Run ruff with auto-fix
+mise run format    # Run ruff formatter
+
+# CI
+mise run ci  # Full CI pipeline (topics-init + lint + test-cov)
 ```
 
-### Runtime Commands
+### Running a Single Test
 
 ```bash
-mise run run            # Run mainline (terminal)
-mise run run-poetry     # Run with poetry feed
-mise run run-firehose   # Run in firehose mode
-mise run run-websocket  # Run with WebSocket display only
-mise run run-sixel      # Run with Sixel graphics display
-mise run run-both       # Run with both terminal and WebSocket
-mise run run-client     # Run both + open browser
-mise run cmd            # Run C&C command interface
+# Run a specific test function
+pytest tests/test_eventbus.py::TestEventBusInit::test_init_creates_empty_subscribers
+
+# Run all tests in a file
+pytest tests/test_eventbus.py
+
+# Run tests matching a pattern
+pytest -k "test_subscribe"
 ```
-## Git Hooks
+### Git Hooks
 
-**At the start of every agent session**, verify hooks are installed:
+Install hooks at start of session:
 
 ```bash
-ls -la .git/hooks/pre-commit
+ls -la .git/hooks/pre-commit  # Verify installed
+hk init --mise                # Install if missing
+mise run pre-commit           # Run manually
 ```
 
-If hooks are not installed, install them with:
-
-```bash
-hk init --mise
-mise run pre-commit
-```
+## Code Style Guidelines
+### Imports (three sections, alphabetical within each)
+
+```python
+# 1. Standard library
+import os
+import threading
+from abc import ABC, abstractmethod
+from collections import defaultdict
+from collections.abc import Callable
+from dataclasses import dataclass, field
+from typing import Any
+
+# 2. Third-party
+
+# 3. Local project
+from engine.events import EventType
+```
-**IMPORTANT**: Always review the hk documentation before modifying `hk.pkl`:
-- [hk Configuration Guide](https://hk.jdx.dev/configuration.html)
-- [hk Hooks Reference](https://hk.jdx.dev/hooks.html)
-- [hk Builtins](https://hk.jdx.dev/builtins.html)
-
-The project uses hk configured in `hk.pkl`:
-- **pre-commit**: runs ruff-format and ruff (with auto-fix)
-- **pre-push**: runs ruff check + benchmark hook
+### Type Hints
 
+- Use type hints for all function signatures (parameters and return)
+- Use `|` for unions (Python 3.10+): `EventType | None`
+- Use `dict[K, V]`, `list[V]` (generic syntax): `dict[str, list[int]]`
+- Use `Callable[[ArgType], ReturnType]` for callbacks
 
-## Benchmark Runner
-
-Run performance benchmarks:
-
-```bash
-mise run benchmark         # Run all benchmarks (text output)
-mise run benchmark-json    # Run benchmarks (JSON output)
-mise run benchmark-report  # Run benchmarks (Markdown report)
-```
+```python
+def subscribe(self, event_type: EventType, callback: Callable[[Any], None]) -> None:
+    ...
+
+def get_sensor_value(self, sensor_name: str) -> float | None:
+    return self._state.get(f"sensor.{sensor_name}")
+```
-### Benchmark Commands
+### Naming Conventions
 
-```bash
-# Run benchmarks
-uv run python -m engine.benchmark
-
-# Run with specific displays/effects
-uv run python -m engine.benchmark --displays null,terminal --effects fade,glitch
-
-# Save baseline for hook comparisons
-uv run python -m engine.benchmark --baseline
-
-# Run in hook mode (compares against baseline)
-uv run python -m engine.benchmark --hook
-
-# Hook mode with custom threshold (default: 20% degradation)
-uv run python -m engine.benchmark --hook --threshold 0.3
-
-# Custom baseline location
-uv run python -m engine.benchmark --hook --cache /path/to/cache.json
-```
+- **Classes**: `PascalCase` (e.g., `EventBus`, `EffectPlugin`)
+- **Functions/methods**: `snake_case` (e.g., `get_event_bus`, `process_partial`)
+- **Constants**: `SCREAMING_SNAKE_CASE` (e.g., `CURSOR_OFF`)
+- **Private methods**: `_snake_case` prefix (e.g., `_initialize`)
+- **Type variables**: `PascalCase` (e.g., `T`, `EffectT`)
+
+### Dataclasses
+
+Use `@dataclass` for simple data containers:
+
+```python
+@dataclass
+class EffectContext:
+    terminal_width: int
+    terminal_height: int
+    scroll_cam: int
+    ticker_height: int = 0
+    _state: dict[str, Any] = field(default_factory=dict, repr=False)
+```
|
||||
### Hook Mode
|
||||
### Abstract Base Classes
|
||||
|
||||
The `--hook` mode compares current benchmarks against a saved baseline. If performance degrades beyond the threshold (default 20%), it exits with code 1. This is useful for preventing performance regressions in feature branches.
|
||||
Use ABC for interface enforcement:
|
||||
|
||||
The pre-push hook runs benchmark in hook mode to catch performance regressions before pushing.

### Abstract Base Classes

Use ABC for interface enforcement:

```python
class EffectPlugin(ABC):
    name: str
    config: EffectConfig

    @abstractmethod
    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None:
        ...
```
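A minimal concrete plugin satisfying this interface might look like the following sketch (the `EffectConfig` stub and the effect itself are illustrative, not part of the codebase):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class EffectConfig:
    # Illustrative stub; the real EffectConfig lives in the engine
    intensity: float = 1.0


class EffectPlugin(ABC):
    name: str
    config: EffectConfig

    @abstractmethod
    def process(self, buf: list[str], ctx) -> list[str]: ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None: ...


class UppercaseEffect(EffectPlugin):
    name = "uppercase"

    def process(self, buf: list[str], ctx) -> list[str]:
        # A trivial transform: shout every line in the buffer
        return [line.upper() for line in buf]

    def configure(self, config: EffectConfig) -> None:
        self.config = config


fx = UppercaseEffect()
fx.configure(EffectConfig())
out = fx.process(["hello", "world"], ctx=None)
```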

### Error Handling

- Catch specific exceptions, not bare `Exception`
- Use `try/except` with fallbacks for optional features
- Silent pass in event callbacks to prevent one handler from breaking others

```python
# Good: specific exception
try:
    term_width = os.get_terminal_size().columns
except OSError:
    term_width = 80

# Good: silent pass in callbacks
for callback in callbacks:
    try:
        callback(event)
    except Exception:
        pass
```

### Thread Safety

Use locks for shared state:

```python
class EventBus:
    def __init__(self):
        self._lock = threading.Lock()

    def publish(self, event_type: EventType, event: Any = None) -> None:
        with self._lock:
            callbacks = list(self._subscribers.get(event_type, []))
```
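A self-contained sketch of the full pattern — snapshot the subscriber list under the lock, then invoke callbacks outside it so a slow or re-entrant handler cannot deadlock the bus:

```python
import threading
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        with self._lock:
            self._subscribers[event_type].append(callback)

    def publish(self, event_type, event=None):
        with self._lock:
            # Copy so callbacks run outside the lock
            callbacks = list(self._subscribers.get(event_type, []))
        for cb in callbacks:
            try:
                cb(event)
            except Exception:
                pass  # one failing handler must not break the others


received = []
bus = EventBus()
bus.subscribe("ntfy", received.append)
bus.publish("ntfy", {"msg": "hi"})
```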

### Comments

- **DO NOT ADD comments** unless explicitly required
- Let code be self-documenting with good naming
- Use docstrings only for public APIs or complex logic

### Testing Patterns

Follow pytest conventions:

```python
class TestEventBusSubscribe:
    """Tests for EventBus.subscribe method."""

    def test_subscribe_adds_callback(self):
        """subscribe() adds a callback for an event type."""
        bus = EventBus()

        def callback(e):
            return None

        bus.subscribe(EventType.NTFY_MESSAGE, callback)
        assert bus.subscriber_count(EventType.NTFY_MESSAGE) == 1
```

- Use classes to group related tests (`Test<ClassName>`, `Test<method_name>`)
- Test docstrings follow the `"<method>() <action>"` pattern
- Use descriptive assertion messages via pytest behavior
## Workflow Rules

### Before Committing

1. **Always run the test suite** - never commit code that fails tests:
   ```bash
   mise run test
   ```

2. **Always run the linter**:
   ```bash
   mise run lint
   ```

3. **Fix any lint errors** before committing (or let the pre-commit hook handle it).

4. **Review your changes** using `git diff` to understand what will be committed.

### On Failing Tests

When tests fail, **determine whether it's an out-of-date test or a correctly failing test**:

- **Out-of-date test**: The test was written for old behavior that has legitimately changed. Update the test to match the new expected behavior.
- **Correctly failing test**: The test correctly identifies a broken contract. Fix the implementation, not the test.

**Never** modify a test to make it pass without understanding why it failed.

### Code Review

Before committing significant changes:

- Run `git diff` to review all changes
- Ensure new code follows existing patterns in the codebase
- Check that type hints are added for new functions
- Verify that tests exist for new functionality

## Testing

Tests live in `tests/` and follow the pattern `test_*.py`.

The project uses pytest with strict marker enforcement.

### Test Coverage Strategy

Current coverage: 56% (463 tests)

Key areas with lower coverage (acceptable for now):

- **app.py** (8%): Main entry point - integration heavy, requires terminal
- **scroll.py** (10%): Terminal-dependent rendering logic (unused)
- **benchmark.py** (0%): Standalone benchmark tool, runs separately

Key areas with good coverage:

- **display/backends/null.py** (95%): Easy to test headlessly
Performance regression tests are in `tests/test_benchmark.py` with `@pytest.mark` markers.

## Architecture Notes

- **ntfy.py** - standalone notification poller with zero internal dependencies
- **sensors/** - sensor framework (MicSensor, OscillatorSensor) for real-time input
- **eventbus.py** provides thread-safe event publishing for decoupled communication
- **controller.py** coordinates ntfy/mic monitoring and event publishing
- **effects/** - plugin architecture with performance monitoring
- The pipeline architecture: source → render → effects → display

#### Canvas & Camera

- **Canvas** (`engine/canvas.py`): 2D rendering surface with dirty region tracking
- **Camera** (`engine/camera.py`): Viewport controller for scrolling content

The Canvas tracks dirty regions automatically when content is written (via `put_region`, `put_text`, `fill`), enabling partial buffer updates for optimized effect processing.

### Pipeline Architecture

The Stage-based pipeline architecture provides capability-based dependency resolution:

- **Stage** (`engine/pipeline/core.py`): Base class for pipeline stages
- **Pipeline** (`engine/pipeline/controller.py`): Executes stages with capability-based dependency resolution
- **PipelineConfig** (`engine/pipeline/controller.py`): Configuration for a pipeline instance
- **StageRegistry** (`engine/pipeline/registry.py`): Discovers and registers stages
- **Stage Adapters** (`engine/pipeline/adapters.py`): Wrap existing components as stages

#### Pipeline Configuration

The `PipelineConfig` dataclass configures pipeline behavior:

```python
@dataclass
class PipelineConfig:
    source: str = "headlines"  # Data source identifier
    display: str = "terminal"  # Display backend identifier
    camera: str = "vertical"   # Camera mode identifier
    effects: list[str] = field(default_factory=list)  # List of effect names
    enable_metrics: bool = True  # Enable performance metrics
```

**Available sources**: `headlines`, `poetry`, `empty`, `list`, `image`, `metrics`, `cached`, `transform`, `composite`, `pipeline-inspect`

**Available displays**: `terminal`, `null`, `replay`, `websocket`, `pygame`, `moderngl`, `multi`

**Available camera modes**: `FEED`, `SCROLL`, `HORIZONTAL`, `OMNI`, `FLOATING`, `BOUNCE`, `RADIAL`
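Constructing a config is plain dataclass usage; for example (the dataclass is repeated here so the snippet runs on its own):

```python
from dataclasses import dataclass, field


@dataclass
class PipelineConfig:
    source: str = "headlines"
    display: str = "terminal"
    camera: str = "vertical"
    effects: list[str] = field(default_factory=list)
    enable_metrics: bool = True


cfg = PipelineConfig(source="poetry", display="null", effects=["fade", "glitch"])
```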

#### Capability-Based Dependencies

Stages declare capabilities (what they provide) and dependencies (what they need). The Pipeline resolves dependencies using prefix matching:

- `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
- `"camera.state"` matches the camera state capability
- This allows flexible composition without hardcoding specific stage names
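The matching rule can be sketched as follows (the function name is hypothetical):

```python
def matches(dependency: str, capability: str) -> bool:
    # A dependency matches a capability that equals it or is namespaced under it
    return capability == dependency or capability.startswith(dependency + ".")


caps = ["source.headlines", "source.poetry", "camera.state", "display.output"]
source_providers = [c for c in caps if matches("source", c)]
```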

#### Minimum Capabilities

The pipeline requires these minimum capabilities to function:

- `"source"` - Data source capability
- `"render.output"` - Rendered content capability
- `"display.output"` - Display output capability
- `"camera.state"` - Camera state for viewport filtering

These are automatically injected if missing by the `ensure_minimum_capabilities()` method.
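The injection step can be sketched as follows (everything other than the four capability names is hypothetical):

```python
MINIMUM = ["source", "render.output", "display.output", "camera.state"]


def ensure_minimum_capabilities(stages: dict, defaults: dict) -> dict:
    # Inject a default provider for any minimum capability nobody provides
    out = dict(stages)
    for cap in MINIMUM:
        provided = any(p == cap or p.startswith(cap + ".") for p in out)
        if not provided:
            out[cap] = defaults[cap]
    return out


stages = ensure_minimum_capabilities(
    {"source.headlines": "HeadlineSource"},
    {
        "source": "EmptySource",
        "render.output": "Renderer",
        "display.output": "NullDisplay",
        "camera.state": "Camera",
    },
)
```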

#### Sensor Framework

- **Sensor** (`engine/sensors/__init__.py`): Base class for real-time input sensors
- **SensorRegistry**: Discovers available sensors
- **SensorStage**: Pipeline adapter that provides sensor values to effects
- **MicSensor** (`engine/sensors/mic.py`): Self-contained microphone input
- **OscillatorSensor** (`engine/sensors/oscillator.py`): Test sensor for development
- **PipelineMetricsSensor** (`engine/sensors/pipeline_metrics.py`): Exposes pipeline metrics as sensor values

Sensors support param bindings to drive effect parameters in real time.
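The binding mechanics can be sketched as follows (the sensor and binding shapes here are illustrative, not the engine's actual API):

```python
import math


class OscillatorSensor:
    # Illustrative stand-in for engine/sensors/oscillator.py
    def __init__(self):
        self.t = 0

    def read(self) -> float:
        self.t += 1
        return (math.sin(self.t) + 1) / 2  # normalized to 0..1


def apply_bindings(params: dict, bindings: dict) -> dict:
    # Each frame, copy the latest sensor reading onto its bound effect parameter
    for (effect, param), sensor in bindings.items():
        params[(effect, param)] = sensor.read()
    return params


bindings = {("glitch", "intensity"): OscillatorSensor()}
params = apply_bindings({}, bindings)
```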

#### Pipeline Introspection

- **PipelineIntrospectionSource** (`engine/data_sources/pipeline_introspection.py`): Renders a live ASCII visualization of the pipeline DAG with metrics
- **PipelineIntrospectionDemo** (`engine/pipeline/pipeline_introspection_demo.py`): 3-phase demo controller for effect animation

Preset: `pipeline-inspect` - live pipeline introspection with DAG and performance metrics

#### Partial Update Support

Effect plugins can opt in to partial buffer updates for performance optimization:

- Set `supports_partial_updates = True` on the effect class
- Implement a `process_partial(buf, ctx, partial)` method
- The `PartialUpdate` dataclass indicates which regions changed
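An opted-in effect might look like this sketch (the `PartialUpdate` stub below only models a dirty-row list; the real dataclass lives in the engine):

```python
from dataclasses import dataclass


@dataclass
class PartialUpdate:
    # Illustrative stub: rows whose content changed since the last frame
    dirty_rows: list[int]


class MirrorEffect:
    supports_partial_updates = True

    def process(self, buf, ctx):
        return [line[::-1] for line in buf]

    def process_partial(self, buf, ctx, partial):
        # Reprocess only dirty rows; untouched rows pass through unchanged
        out = list(buf)
        for row in partial.dirty_rows:
            out[row] = buf[row][::-1]
        return out


fx = MirrorEffect()
result = fx.process_partial(["abc", "def", "ghi"], None, PartialUpdate(dirty_rows=[1]))
```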

### Preset System

Presets use TOML format (no external dependencies):

- Built-in: `engine/presets.toml`
- User config: `~/.config/mainline/presets.toml`
- Local override: `./presets.toml`

Components:

- **Preset loader** (`engine/pipeline/preset_loader.py`): Loads and validates presets
- **PipelinePreset** (`engine/pipeline/presets.py`): Dataclass for preset configuration

Functions:

- `validate_preset()` - Validate preset structure
- `validate_signal_path()` - Detect circular dependencies
- `generate_preset_toml()` - Generate a skeleton preset

### Display System

- **Display abstraction** (`engine/display/`): swap display backends via the Display protocol
  - `display/backends/terminal.py` - ANSI terminal output
  - `display/backends/websocket.py` - broadcasts to web clients via WebSocket
  - `display/backends/sixel.py` - renders to Sixel graphics (pure Python, no C dependency)
  - `display/backends/null.py` - headless display for testing
  - `display/backends/multi.py` - forwards to multiple displays simultaneously
  - `display/backends/moderngl.py` - GPU-accelerated OpenGL rendering (optional)
  - `display/__init__.py` - DisplayRegistry for backend discovery
- **WebSocket display** (`engine/display/backends/websocket.py`): real-time frame broadcasting to web browsers
- **Display modes** (`--display` flag):
  - `terminal` - Default ANSI terminal output
  - `websocket` - Web browser display (requires the websockets package)
  - `sixel` - Sixel graphics in supported terminals (iTerm2, mintty, etc.)
  - `both` - Terminal + WebSocket simultaneously
  - `moderngl` - GPU-accelerated rendering (requires the moderngl package)

### Effect Plugin System
The rendering pipeline is documented in `docs/PIPELINE.md` using Mermaid diagrams.

**IMPORTANT**: When making significant architectural changes to the rendering pipeline (new layers, effects, display backends), update `docs/PIPELINE.md` to reflect the changes:

1. Edit `docs/PIPELINE.md` with the new architecture
2. If adding new SVG diagrams, render them manually using an external tool (e.g., Mermaid Live Editor)
3. Commit both the markdown and any new diagram files
### Pipeline Mutation API

The Pipeline class supports dynamic mutation at runtime via the mutation API.

**Core Methods:**

- `add_stage(name, stage, initialize=True)` - Add a stage to the pipeline
- `remove_stage(name, cleanup=True)` - Remove a stage and rebuild execution order
- `replace_stage(name, new_stage, preserve_state=True)` - Replace a stage with another
- `swap_stages(name1, name2)` - Swap two stages
- `move_stage(name, after=None, before=None)` - Move a stage in execution order
- `enable_stage(name)` - Enable a stage
- `disable_stage(name)` - Disable a stage

**New Methods (Issue #35):**

- `cleanup_stage(name)` - Clean up a specific stage without removing it
- `remove_stage_safe(name, cleanup=True)` - Alias for `remove_stage` that explicitly rebuilds
- `can_hot_swap(name)` - Check whether a stage can be safely hot-swapped
  - Returns `False` for a stage that is the sole provider of a minimum capability
  - Returns `True` for swappable stages

**WebSocket Commands:**

Commands can be sent via WebSocket to mutate the pipeline at runtime:

```json
{"action": "remove_stage", "stage": "stage_name"}
{"action": "swap_stages", "stage1": "name1", "stage2": "name2"}
{"action": "enable_stage", "stage": "stage_name"}
{"action": "disable_stage", "stage": "stage_name"}
{"action": "cleanup_stage", "stage": "stage_name"}
{"action": "can_hot_swap", "stage": "stage_name"}
```
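Each command is a single JSON object; building one is straightforward (the helper name is hypothetical):

```python
import json


def mutation_command(action: str, **fields) -> str:
    # Serialize one mutation command in the shape shown above
    return json.dumps({"action": action, **fields})


msg = mutation_command("swap_stages", stage1="fade", stage2="glitch")
```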

**Implementation Files:**

- `engine/pipeline/controller.py` - Pipeline class with mutation methods
- `engine/app/pipeline_runner.py` - `_handle_pipeline_mutation()` function
- `engine/pipeline/ui.py` - `execute_command()` with docstrings
- `tests/test_pipeline_mutation_commands.py` - Integration tests

## Skills Library

A skills library MCP server (`skills`) is available for capturing and tracking learned knowledge. Skills are stored in `~/.skills/`.

### Workflow

**Before starting work:**

1. Run `local_skills_list_skills` to see available skills
2. Use `local_skills_peek_skill({name: "skill-name"})` to preview relevant skills
3. Use `local_skills_skill_slice({name: "skill-name", query: "your question"})` to get relevant sections

**While working:**

- If a skill was wrong or incomplete: `local_skills_update_skill` → `local_skills_record_assessment` → `local_skills_report_outcome({quality: 1})`
- If a skill worked correctly: `local_skills_report_outcome({quality: 4})` (normal) or `quality: 5` (perfect)

**End of session:**

- Run `local_skills_reflect_on_session({context_summary: "what you did"})` to identify new skills to capture
- Use `local_skills_create_skill` to add new skills
- Use `local_skills_record_assessment` to score them

### Useful Tools

- `local_skills_review_stale_skills()` - Skills due for review (negative days_until_due)
- `local_skills_skills_report()` - Overview of the entire collection
- `local_skills_validate_skill({name: "skill-name"})` - Load a skill for review with sources

### Agent Skills

This project also has Agent Skills (SKILL.md files) in `.opencode/skills/`. Use the `skill` tool to load them:

- `skill({name: "mainline-architecture"})` - Pipeline stages, capability resolution
- `skill({name: "mainline-effects"})` - How to add new effect plugins
- `skill({name: "mainline-display"})` - Display backend implementation
- `skill({name: "mainline-sources"})` - Adding new RSS feeds
- `skill({name: "mainline-presets"})` - Creating pipeline presets
- `skill({name: "mainline-sensors"})` - Sensor framework usage
## README.md
```bash
python3 mainline.py --poetry             # literary consciousness mode
python3 mainline.py -p                   # same
python3 mainline.py --firehose           # dense rapid-fire headline mode
python3 mainline.py --display websocket  # web browser display only
python3 mainline.py --display both       # terminal + web browser
python3 mainline.py --no-font-picker     # skip interactive font picker
python3 mainline.py --font-file path.otf # use a specific font file
python3 mainline.py --font-dir ~/fonts   # scan a different font folder
```
Mainline supports multiple display backends:

- **Terminal** (`--display terminal`): ANSI terminal output (default)
- **WebSocket** (`--display websocket`): Stream to web browser clients
- **Sixel** (`--display sixel`): Sixel graphics in supported terminals (iTerm2, mintty)
- **Both** (`--display both`): Terminal + WebSocket simultaneously
- **ModernGL** (`--display moderngl`): GPU-accelerated rendering (optional)

WebSocket mode serves a web client at http://localhost:8766 with ANSI color support and fullscreen mode.
```
engine/
  backends/
    terminal.py    ANSI terminal display
    websocket.py   WebSocket server for browser clients
    sixel.py       Sixel graphics (pure Python)
    null.py        headless display for testing
    multi.py       forwards to multiple displays
    moderngl.py    GPU-accelerated OpenGL rendering
  benchmark.py     performance benchmarking tool
```
```bash
mise run format        # ruff format

mise run run           # terminal display
mise run run-websocket # web display only
mise run run-sixel     # sixel graphics
mise run run-both      # terminal + web
mise run run-client    # terminal + web

mise run cmd           # C&C command interface
mise run cmd-stats     # watch effects stats
```
# REPL Usage Guide

The REPL (Read-Eval-Print Loop) effect provides an interactive command-line interface for controlling Mainline's pipeline in real time.

## How to Access the REPL

### Method 1: Using CLI Arguments (Recommended)

Run Mainline with the `repl` effect added to the effects list:

```bash
# With empty source (for testing)
python mainline.py --pipeline-source empty --pipeline-effects repl

# With headlines source (requires network)
python mainline.py --pipeline-source headlines --pipeline-effects repl

# With poetry source
python mainline.py --pipeline-source poetry --pipeline-effects repl
```

### Method 2: Using a Preset

Add a preset to your `~/.config/mainline/presets.toml` or `./presets.toml`:

```toml
[presets.repl]
description = "Interactive REPL control"
source = "headlines"
display = "terminal"
effects = ["repl"]
viewport_width = 80
viewport_height = 24
```

Then run:

```bash
python mainline.py --preset repl
```

### Method 3: Using Graph Config

Create a TOML file (e.g., `repl_config.toml`):

```toml
source = "empty"
display = "terminal"
effects = ["repl"]
```

Then run:

```bash
python mainline.py --graph-config repl_config.toml
```

## REPL Commands

Once the REPL is active, you can type commands:

- **help** - Show available commands
- **status** - Show pipeline status and metrics
- **effects** - List all effects in the pipeline
- **effect \<name\> \<on|off\>** - Toggle an effect
- **param \<effect\> \<param\> \<value\>** - Set an effect parameter
- **pipeline** - Show current pipeline order
- **clear** - Clear output buffer
- **quit/exit** - Show exit message (use Ctrl+C to actually exit)
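Commands follow a simple `<verb> [args...]` shape; a parser for them can be sketched as follows (illustrative, not the engine's actual implementation):

```python
def parse_command(line: str):
    # Split a REPL line into its verb and remaining arguments
    parts = line.strip().split()
    if not parts:
        return None, []
    return parts[0], parts[1:]


cmd, args = parse_command("param glitch intensity 0.5")
```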

## Keyboard Controls

- **Enter** - Execute command
- **Up/Down arrows** - Navigate command history
- **Backspace** - Delete last character
- **Ctrl+C** - Exit Mainline

## Visual Features

The REPL displays:

- **HUD header** (top 3 lines): Shows FPS, frame time, command count, and output buffer size
- **Content area**: Main content from the data source
- **Separator line**: Visual divider
- **REPL area**: Output buffer and input prompt

## Example Session

```
MAINLINE REPL | FPS: 60.0 | 12.5ms
COMMANDS: 3 | [2/3]
OUTPUT: 5 lines
────────────────────────────────────────
Content from source appears here...
More content...
────────────────────────────────────────
> help
Available commands:
  help - Show this help
  status - Show pipeline status
  effects - List all effects
  effect <name> <on|off> - Toggle effect
  param <effect> <param> <value> - Set parameter
  pipeline - Show current pipeline order
  clear - Clear output buffer
  quit - Show exit message
> effects
Pipeline effects:
  1. repl
> effect repl off
Effect 'repl' set to off
```

## Scrolling Support

The REPL output buffer supports scrolling through command history:

**Keyboard Controls:**

- **PageUp** - Scroll up 10 lines
- **PageDown** - Scroll down 10 lines
- **Mouse wheel up** - Scroll up 3 lines
- **Mouse wheel down** - Scroll down 3 lines

**Scroll Features:**

- **Scroll percentage** shown in HUD (like vim, e.g., "50%")
- **Scroll position** shown in output line (e.g., "(5/20)")
- **Auto-reset** - Scroll resets to bottom when new output arrives
- **Max buffer** - 50 lines (excluding empty lines)

## Notes

- The REPL effect needs a content source to overlay on (e.g., headlines, poetry, empty)
- The REPL uses the terminal display with raw input mode
- Command history is preserved across sessions (up to 50 commands)
- Pipeline mutations (enabling/disabling effects) are handled automatically
# Tasks

## Documentation Updates

- [x] Remove references to removed display backends (sixel, kitty) from all documentation
- [x] Remove references to the deprecated "both" display mode
- [x] Update AGENTS.md to reflect current architecture and remove merge conflicts
- [x] Update Agent Skills (.opencode/skills/) to match current codebase
- [x] Update docs/ARCHITECTURE.md to remove SixelDisplay references
- [x] Verify ModernGL backend is properly documented and registered
- [ ] Update docs/PIPELINE.md to reflect Stage-based architecture (outdated legacy flowchart) [#41](https://git.notsosm.art/david/Mainline/issues/41)

## Code & Features

- [ ] Check if a luminance implementation exists for shade/tint effects (see [#26](https://git.notsosm.art/david/Mainline/issues/26); need to verify that render/blocks.py has a luminance calculation)
- [x] Add entropy/chaos score metadata to effects for auto-categorization and intensity control [#32](https://git.notsosm.art/david/Mainline/issues/32) (closed - completed)
- [ ] Finish ModernGL display backend: integrate window system, implement glyph caching, add event handling, and support border modes [#42](https://git.notsosm.art/david/Mainline/issues/42)
- [x] Integrate UIPanel with pipeline: register stages, link parameter schemas, handle events, implement hot-reload
- [x] Move cached fixture headlines to engine/fixtures/headlines.json and update default source to use the fixture
- [x] Add interactive UI panel for pipeline configuration (right-side panel) with stage toggles and param sliders
- [x] Enumerate all effect plugin parameters automatically for UI control (intensity, decay, etc.)
- [ ] Implement pipeline hot-rebuild when stage toggles or params change, preserving camera and display state [#43](https://git.notsosm.art/david/Mainline/issues/43)

## Test Suite Cleanup & Feature Implementation

### Phase 1: Test Suite Cleanup (In Progress)

- [x] Port figment feature to modern pipeline architecture
- [x] Create `engine/effects/plugins/figment.py` (full port)
- [x] Add `figment.py` to `engine/effects/plugins/`
- [x] Copy SVG files to `figments/` directory
- [x] Update `pyproject.toml` with figment extra
- [x] Add `test-figment` preset to `presets.toml`
- [x] Update pipeline adapters for overlay effects
- [x] Clean up `test_adapters.py` (removed 18 mock-only tests)
- [x] Verify all tests pass (652 passing, 20 skipped, 58% coverage)
- [ ] Review remaining mock-heavy tests in `test_pipeline.py`
- [ ] Review `test_effects.py` for implementation detail tests
- [ ] Identify additional tests to remove/consolidate
- [ ] Target: ~600 tests total

### Phase 2: Acceptance Test Expansion (Planned)

- [ ] Create `test_message_overlay.py` for message rendering
- [ ] Create `test_firehose.py` for firehose rendering
- [ ] Create `test_pipeline_order.py` for execution order verification
- [ ] Expand `test_figment_effect.py` for animation phases
- [ ] Target: 10-15 new acceptance tests

### Phase 3: Post-Branch Features (Planned)

- [ ] Port message overlay system from `upstream_layers.py`
- [ ] Port firehose rendering from `upstream_layers.py`
- [ ] Create `MessageOverlayStage` for pipeline integration
- [ ] Verify figment renders in correct order (effects → figment → messages → display)

### Phase 4: Visual Quality Improvements (Planned)

- [ ] Compare upstream vs current pipeline output
- [ ] Implement easing functions for figment animations
- [ ] Add animated gradient shifts
- [ ] Improve strobe effect patterns
- [ ] Use introspection to match visual style

## Gitea Issues Tracking

- [#37](https://git.notsosm.art/david/Mainline/issues/37): Refactor app.py and adapter.py for better maintainability
- [#35](https://git.notsosm.art/david/Mainline/issues/35): Epic: Pipeline Mutation API for Stage Hot-Swapping
- [#34](https://git.notsosm.art/david/Mainline/issues/34): Improve benchmarking system and performance tests
- [#33](https://git.notsosm.art/david/Mainline/issues/33): Add web-based pipeline editor UI
- [#26](https://git.notsosm.art/david/Mainline/issues/26): Add Streaming display backend
# Visual Output Comparison: Upstream/Main vs Sideline

## Summary

A comprehensive comparison of visual output between `upstream/main` and the sideline branch (`feature/capability-based-deps`) reveals fundamental architectural differences in how content is rendered and displayed.

## Captured Outputs

### Sideline (Pipeline Architecture)

- **File**: `output/sideline_demo.json`
- **Format**: Plain text lines without ANSI cursor positioning
- **Content**: Readable headlines with gradient colors applied

### Upstream/Main (Monolithic Architecture)

- **File**: `output/upstream_demo.json`
- **Format**: Lines with explicit ANSI cursor positioning codes
- **Content**: Cursor positioning codes + block characters + ANSI colors

## Key Architectural Differences

### 1. Buffer Content Structure

**Sideline Pipeline:**

```python
# Each line is plain text with ANSI colors
buffer = [
    "The Download: OpenAI is building...",
    "OpenAI is throwing everything...",
    # ... more lines
]
```

**Upstream Monolithic:**

```python
# Each line includes cursor positioning
buffer = [
    "\033[10;1H \033[2;38;5;238mユ\033[0m \033[2;38;5;37mモ\033[0m ...",
    "\033[11;1H\033[K",  # Clear line 11
    # ... more lines with positioning
]
```

### 2. Rendering Approach

**Sideline (Pipeline Architecture):**

- Stages produce plain text buffers
- Display backend handles cursor positioning
- `TerminalDisplay.show()` prepends `\033[H\033[J` (home + clear)
- Lines are appended sequentially

**Upstream (Monolithic Architecture):**

- `render_ticker_zone()` produces buffers with explicit positioning
- Each line includes `\033[{row};1H` to position the cursor
- Display backend writes the buffer directly to stdout
- Lines are positioned explicitly in the buffer

### 3. Content Rendering

**Sideline:**

- Headlines rendered as plain text
- Gradient colors applied via ANSI codes
- Ticker effect via camera/viewport filtering

**Upstream:**

- Headlines rendered as block characters (▀, ▄, █, etc.)
- Japanese katakana glyphs used for the glitch effect
- Explicit row positioning for each line
## Visual Output Analysis

### Sideline Frame 0 (First 5 lines):

```
Line 0: 'The Download: OpenAI is building a fully automated researcher...'
Line 1: 'OpenAI is throwing everything into building a fully automated...'
Line 2: 'Mind-altering substances are (still) falling short in clinical...'
Line 3: 'The Download: Quantum computing for health...'
Line 4: 'Can quantum computers now solve health care problems...'
```

### Upstream Frame 0 (First 5 lines):

```
Line 0: ''
Line 1: '\x1b[2;1H\x1b[K'
Line 2: '\x1b[3;1H\x1b[K'
Line 3: '\x1b[4;1H\x1b[2;38;5;238m \x1b[0m \x1b[2;38;5;238mリ\x1b[0m ...'
Line 4: '\x1b[5;1H\x1b[K'
```

## Implications for Visual Comparison

### Challenges with Direct Comparison

1. **Different buffer formats**: Plain text vs. positioned ANSI codes
2. **Different rendering pipelines**: Pipeline stages vs. monolithic functions
3. **Different content generation**: Headlines vs. block characters

### Approaches for Visual Verification

#### Option 1: Render and Compare Terminal Output

- Run both branches with `TerminalDisplay`
- Capture terminal output (not the buffer)
- Compare visual rendering
- **Challenge**: Requires actual terminal rendering

#### Option 2: Normalize Buffers for Comparison

- Convert upstream positioned buffers to plain text
- Strip ANSI cursor positioning codes
- Compare normalized content
- **Challenge**: Loses positioning information
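Option 2's normalization step is a small regex pass — strip CSI escape sequences (cursor moves, colors, erases) and keep only the visible text:

```python
import re

# Matches CSI sequences such as \x1b[10;1H, \x1b[0m, and \x1b[K
ANSI_CSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")


def normalize(line: str) -> str:
    return ANSI_CSI.sub("", line)


plain = normalize("\x1b[10;1H\x1b[2;38;5;238mユ\x1b[0m done")
```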
#### Option 3: Functional Equivalence Testing

- Verify features work the same way
- Test message overlay rendering
- Test effect application
- **Challenge**: doesn't verify an exact visual match

## Recommendations

### For Exact Visual Match

1. **Update sideline to match the upstream architecture**:
   - Change `MessageOverlayStage` to return positioned buffers
   - Update the terminal display to handle positioned buffers
   - This requires significant refactoring

2. **Accept the architectural differences**:
   - The sideline pipeline architecture is fundamentally different
   - Visual differences are expected and acceptable
   - Focus on functional equivalence

### For Functional Verification

1. **Test message overlay rendering**:
   - Verify the message appears in the correct position
   - Verify gradient colors are applied
   - Verify the metadata bar is displayed

2. **Test effect rendering**:
   - Verify the glitch effect applies block characters
   - Verify the firehose effect renders correctly
   - Verify the figment effect integrates properly

3. **Test pipeline execution**:
   - Verify stage execution order
   - Verify capability resolution
   - Verify dependency injection
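Functional-equivalence checks like these compare observable properties rather than exact bytes. The sketch below is illustrative only — the helper and the sample buffers are invented for this example, not project test code:

```python
def message_position(buffer, message):
    """Return (row, col) of a message in a rendered text buffer, or None.
    Comparing positions, not raw bytes, tolerates styling differences."""
    for row, line in enumerate(buffer):
        col = line.find(message)
        if col != -1:
            return (row, col)
    return None

# Two renderings that differ internally but agree functionally:
sideline_frame = ["", "  BREAKING NEWS  ", ""]
upstream_frame = ["", "  BREAKING NEWS  ", ""]

assert message_position(sideline_frame, "BREAKING NEWS") == \
       message_position(upstream_frame, "BREAKING NEWS")
```

The same pattern extends to effect checks, e.g. asserting that a glitch pass introduced block characters without changing buffer dimensions.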
## Conclusion

The visual output comparison reveals that `sideline` and `upstream/main` use fundamentally different rendering architectures:

- **Upstream**: explicit cursor positioning in the buffer, monolithic rendering
- **Sideline**: plain-text buffer, positioning handled by the display, pipeline rendering

These differences are **architectural**, not bugs. The sideline branch has successfully adapted the upstream features to the new pipeline architecture.

### Next Steps

1. ✅ Document the architectural differences (this file)
2. ⏳ Create functional tests for visual verification
3. ⏳ Update Gitea issue #50 with the findings
4. ⏳ Decide whether to adapt sideline to match the upstream rendering style
313 client/editor.html Normal file
@@ -0,0 +1,313 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Mainline Pipeline Editor</title>
  <style>
    * { margin: 0; padding: 0; box-sizing: border-box; }
    body {
      font-family: 'Fira Code', 'Cascadia Code', 'Consolas', monospace;
      background: #1a1a1a;
      color: #eee;
      display: flex;
      min-height: 100vh;
    }
    #sidebar {
      width: 300px;
      background: #222;
      padding: 15px;
      border-right: 1px solid #333;
      overflow-y: auto;
    }
    #main {
      flex: 1;
      padding: 20px;
      overflow-y: auto;
    }
    h2 {
      font-size: 14px;
      color: #888;
      margin-bottom: 10px;
      text-transform: uppercase;
    }
    .section {
      margin-bottom: 20px;
    }
    .stage-list {
      list-style: none;
    }
    .stage-item {
      display: flex;
      align-items: center;
      padding: 6px 8px;
      background: #333;
      margin-bottom: 2px;
      cursor: pointer;
      border-radius: 4px;
    }
    .stage-item:hover { background: #444; }
    .stage-item.selected { background: #0066cc; }
    .stage-item input[type="checkbox"] {
      margin-right: 8px;
      transform: scale(1.2);
    }
    .stage-name {
      flex: 1;
      font-size: 13px;
    }
    .param-group {
      background: #2a2a2a;
      padding: 10px;
      border-radius: 4px;
    }
    .param-row {
      display: flex;
      align-items: center;
      margin-bottom: 8px;
      font-size: 12px;
    }
    .param-name {
      width: 100px;
      color: #aaa;
    }
    .param-slider {
      flex: 1;
      margin: 0 10px;
    }
    .param-value {
      width: 50px;
      text-align: right;
      color: #4f4;
    }
    .preset-list {
      display: flex;
      flex-wrap: wrap;
      gap: 6px;
    }
    .preset-btn {
      background: #333;
      border: 1px solid #444;
      color: #ccc;
      padding: 4px 10px;
      border-radius: 3px;
      cursor: pointer;
      font-size: 11px;
    }
    .preset-btn:hover { background: #444; }
    .preset-btn.active { background: #0066cc; border-color: #0077ff; color: #fff; }
    button.action-btn {
      background: #0066cc;
      border: none;
      color: white;
      padding: 8px 12px;
      border-radius: 4px;
      cursor: pointer;
      font-size: 12px;
      margin-right: 5px;
      margin-bottom: 5px;
    }
    button.action-btn:hover { background: #0077ee; }
    #status {
      position: fixed;
      bottom: 10px;
      left: 10px;
      font-size: 11px;
      color: #666;
    }
    #status.connected { color: #4f4; }
    #status.disconnected { color: #f44; }
    #pipeline-view {
      margin-top: 10px;
    }
    .pipeline-node {
      display: inline-block;
      padding: 4px 8px;
      margin: 2px;
      background: #333;
      border-radius: 3px;
      font-size: 11px;
    }
    .pipeline-node.enabled { border-left: 3px solid #4f4; }
    .pipeline-node.disabled { border-left: 3px solid #666; opacity: 0.6; }
  </style>
</head>
<body>
  <div id="sidebar">
    <div class="section">
      <h2>Preset</h2>
      <div id="preset-list" class="preset-list"></div>
    </div>
    <div class="section">
      <h2>Stages</h2>
      <ul id="stage-list" class="stage-list"></ul>
    </div>
    <div class="section">
      <h2>Parameters</h2>
      <div id="param-editor" class="param-group"></div>
    </div>
  </div>
  <div id="main">
    <h2>Pipeline</h2>
    <div id="pipeline-view"></div>
    <div style="margin-top: 20px;">
      <button class="action-btn" onclick="sendCommand({action: 'cycle_preset', direction: 1})">Next Preset</button>
      <button class="action-btn" onclick="sendCommand({action: 'cycle_preset', direction: -1})">Prev Preset</button>
    </div>
  </div>
  <div id="status">Disconnected</div>

  <script>
    // The socket is created inside connect() so that the onclose retry
    // can open a fresh connection; a top-level const would never reconnect.
    let ws = null;
    let state = { stages: {}, preset: '', presets: [], selected_stage: null };

    function updateStatus(connected) {
      const status = document.getElementById('status');
      status.textContent = connected ? 'Connected' : 'Disconnected';
      status.className = connected ? 'connected' : 'disconnected';
    }

    function connect() {
      ws = new WebSocket(`ws://${location.hostname}:8765`);
      ws.onopen = () => {
        updateStatus(true);
        // Request initial state
        ws.send(JSON.stringify({ type: 'state_request' }));
      };
      ws.onclose = () => {
        updateStatus(false);
        setTimeout(connect, 2000);
      };
      ws.onerror = () => {
        updateStatus(false);
      };
      ws.onmessage = (event) => {
        try {
          const data = JSON.parse(event.data);
          if (data.type === 'state') {
            state = data.state;
            render();
          }
        } catch (e) {
          console.error('Parse error:', e);
        }
      };
    }

    function sendCommand(command) {
      // Guard against sending while disconnected or reconnecting
      if (ws && ws.readyState === WebSocket.OPEN) {
        ws.send(JSON.stringify({ type: 'command', command }));
      }
    }

    function render() {
      renderPresets();
      renderStageList();
      renderPipeline();
      renderParams();
    }

    function renderPresets() {
      const container = document.getElementById('preset-list');
      container.innerHTML = '';
      (state.presets || []).forEach(preset => {
        const btn = document.createElement('button');
        btn.className = 'preset-btn' + (preset === state.preset ? ' active' : '');
        btn.textContent = preset;
        btn.onclick = () => sendCommand({ action: 'change_preset', preset });
        container.appendChild(btn);
      });
    }

    function renderStageList() {
      const list = document.getElementById('stage-list');
      list.innerHTML = '';
      Object.entries(state.stages || {}).forEach(([name, info]) => {
        const li = document.createElement('li');
        li.className = 'stage-item' + (name === state.selected_stage ? ' selected' : '');
        li.innerHTML = `
          <input type="checkbox" ${info.enabled ? 'checked' : ''}
                 onchange="sendCommand({action: 'toggle_stage', stage: '${name}'})">
          <span class="stage-name">${name}</span>
        `;
        li.onclick = (e) => {
          if (e.target.type !== 'checkbox') {
            sendCommand({ action: 'select_stage', stage: name });
          }
        };
        list.appendChild(li);
      });
    }

    function renderPipeline() {
      const view = document.getElementById('pipeline-view');
      view.innerHTML = '';
      const stages = Object.entries(state.stages || {});
      if (stages.length === 0) {
        view.textContent = '(No stages)';
        return;
      }
      stages.forEach(([name, info]) => {
        const span = document.createElement('span');
        span.className = 'pipeline-node' + (info.enabled ? ' enabled' : ' disabled');
        span.textContent = name;
        view.appendChild(span);
      });
    }

    function renderParams() {
      const container = document.getElementById('param-editor');
      container.innerHTML = '';
      const selected = state.selected_stage;
      if (!selected || !state.stages[selected]) {
        container.innerHTML = '<div style="color:#666;font-size:11px;">(select a stage)</div>';
        return;
      }
      const stage = state.stages[selected];
      if (!stage.params || Object.keys(stage.params).length === 0) {
        container.innerHTML = '<div style="color:#666;font-size:11px;">(no params)</div>';
        return;
      }
      Object.entries(stage.params).forEach(([key, value]) => {
        const row = document.createElement('div');
        row.className = 'param-row';
        // Infer min/max/step from typical ranges
        let min = 0, max = 1, step = 0.1;
        if (typeof value === 'number') {
          if (value > 1) { max = value * 2; step = 1; }
          else { max = 1; step = 0.1; }
        }
        row.innerHTML = `
          <div class="param-name">${key}</div>
          <input type="range" class="param-slider" min="${min}" max="${max}" step="${step}"
                 value="${value}"
                 oninput="adjustParam('${key}', this.value)">
          <div class="param-value">${typeof value === 'number' ? Number(value).toFixed(2) : value}</div>
        `;
        container.appendChild(row);
      });
    }

    function adjustParam(param, newValue) {
      const selected = state.selected_stage;
      if (!selected) return;
      // Update display immediately for responsiveness
      const num = parseFloat(newValue);
      if (!isNaN(num)) {
        // Show updated value
        document.querySelectorAll('.param-value').forEach(el => {
          if (el.parentElement.querySelector('.param-name').textContent === param) {
            el.textContent = num.toFixed(2);
          }
        });
      }
      // Send the new value as a delta relative to the last known state
      sendCommand({
        action: 'adjust_param',
        stage: selected,
        param: param,
        delta: num - (state.stages[selected].params[param] || 0)
      });
    }

    connect();
  </script>
</body>
</html>
@@ -277,6 +277,9 @@
        } else if (data.type === 'clear') {
          ctx.fillStyle = '#000';
          ctx.fillRect(0, 0, canvas.width, canvas.height);
        } else if (data.type === 'state') {
          // Log state updates for debugging (can be extended for UI)
          console.log('State update:', data.state);
        }
      } catch (e) {
        console.error('Failed to parse message:', e);
106 completion/mainline-completion.bash Normal file
@@ -0,0 +1,106 @@
# Mainline bash completion script
#
# To install:
#   source /path/to/completion/mainline-completion.bash
#
# Or add to ~/.bashrc:
#   source /path/to/completion/mainline-completion.bash

_mainline_completion() {
    local cur prev words cword
    _init_completion || return

    # Get current word and previous word
    cur="${COMP_WORDS[COMP_CWORD]}"
    prev="${COMP_WORDS[COMP_CWORD-1]}"

    # Completion options based on previous word
    case "${prev}" in
        --display)
            # Display backends
            COMPREPLY=($(compgen -W "terminal null replay websocket pygame moderngl" -- "${cur}"))
            return
            ;;
        --pipeline-source)
            # Available sources
            COMPREPLY=($(compgen -W "headlines poetry empty fixture pipeline-inspect" -- "${cur}"))
            return
            ;;
        --pipeline-effects)
            # Available effects (comma-separated)
            local effects="afterimage border crop fade firehose glitch hud motionblur noise tint"
            COMPREPLY=($(compgen -W "${effects}" -- "${cur}"))
            return
            ;;
        --pipeline-camera)
            # Camera modes
            COMPREPLY=($(compgen -W "feed scroll horizontal omni floating bounce radial" -- "${cur}"))
            return
            ;;
        --pipeline-border)
            # Border modes
            COMPREPLY=($(compgen -W "off simple ui" -- "${cur}"))
            return
            ;;
        --pipeline-display)
            # Display backends (same as --display)
            COMPREPLY=($(compgen -W "terminal null replay websocket pygame moderngl" -- "${cur}"))
            return
            ;;
        --theme)
            # Theme colors
            COMPREPLY=($(compgen -W "green orange purple blue red" -- "${cur}"))
            return
            ;;
        --viewport)
            # Viewport size suggestions
            COMPREPLY=($(compgen -W "80x24 100x30 120x40 60x20" -- "${cur}"))
            return
            ;;
        --preset)
            # Presets (would need to query available presets)
            COMPREPLY=($(compgen -W "demo demo-base demo-pygame demo-camera-showcase poetry headlines empty test-basic test-border test-scroll-camera" -- "${cur}"))
            return
            ;;
        --positioning)
            # Positioning modes
            COMPREPLY=($(compgen -W "absolute relative mixed" -- "${cur}"))
            return
            ;;
    esac

    # Flag completion (words starting with --)
    if [[ "${cur}" == -* ]]; then
        COMPREPLY=($(compgen -W "
            --display
            --pipeline-source
            --pipeline-effects
            --pipeline-camera
            --pipeline-display
            --pipeline-ui
            --pipeline-border
            --viewport
            --preset
            --theme
            --positioning
            --websocket
            --websocket-port
            --allow-unsafe
            --help
        " -- "${cur}"))
        return
    fi
}

complete -F _mainline_completion mainline.py
complete -F _mainline_completion python\ -m\ engine.app
complete -F _mainline_completion python\ -m\ mainline
81 completion/mainline-completion.fish Normal file
@@ -0,0 +1,81 @@
# Fish completion script for Mainline
#
# To install:
#   source /path/to/completion/mainline-completion.fish
#
# Or copy to ~/.config/fish/completions/mainline.fish

# Candidate values, expanded once when this file is sourced.
set -l display_backends terminal null replay websocket pygame moderngl
set -l sources headlines poetry empty fixture pipeline-inspect
set -l effects afterimage border crop fade firehose glitch hud motionblur noise tint
set -l cameras feed scroll horizontal omni floating bounce radial
set -l borders off simple ui
set -l themes green orange purple blue red
set -l presets demo demo-base demo-pygame demo-camera-showcase poetry headlines empty test-basic test-border test-scroll-camera test-figment test-message-overlay

# Completions are registered at the top level so the `set -l` lists above
# are in scope; the -n conditions are evaluated at completion time.
# `__fish_seen_argument` takes long option names via -l.

# Values for options already present on the command line
complete -c mainline.py -n '__fish_seen_argument -l display' -a "$display_backends" -d 'Display backend'
complete -c mainline.py -n '__fish_seen_argument -l pipeline-source' -a "$sources" -d 'Data source'
complete -c mainline.py -n '__fish_seen_argument -l pipeline-effects' -a "$effects" -d 'Effect plugin'
complete -c mainline.py -n '__fish_seen_argument -l pipeline-camera' -a "$cameras" -d 'Camera mode'
complete -c mainline.py -n '__fish_seen_argument -l pipeline-display' -a "$display_backends" -d 'Display backend'
complete -c mainline.py -n '__fish_seen_argument -l pipeline-border' -a "$borders" -d 'Border mode'
complete -c mainline.py -n '__fish_seen_argument -l theme' -a "$themes" -d 'Color theme'
complete -c mainline.py -n '__fish_seen_argument -l preset' -a "$presets" -d 'Preset name'
complete -c mainline.py -n '__fish_seen_argument -l viewport' -a '80x24 100x30 120x40 60x20' -d 'Viewport size (WxH)'

# Flag options
complete -c mainline.py -n 'not __fish_seen_argument -l display' -l display -d 'Display backend' -a "$display_backends"
complete -c mainline.py -n 'not __fish_seen_argument -l preset' -l preset -d 'Preset to use' -a "$presets"
complete -c mainline.py -n 'not __fish_seen_argument -l viewport' -l viewport -d 'Viewport size (WxH)' -a '80x24 100x30 120x40 60x20'
complete -c mainline.py -n 'not __fish_seen_argument -l theme' -l theme -d 'Color theme' -a "$themes"
complete -c mainline.py -l websocket -d 'Enable WebSocket server'
complete -c mainline.py -n 'not __fish_seen_argument -l websocket-port' -l websocket-port -d 'WebSocket port' -a '8765'
complete -c mainline.py -l allow-unsafe -d 'Allow unsafe pipeline configuration'
complete -c mainline.py -n 'not __fish_seen_argument -l help' -l help -d 'Show help'

# Pipeline-specific flags
complete -c mainline.py -n 'not __fish_seen_argument -l pipeline-source' -l pipeline-source -d 'Data source' -a "$sources"
complete -c mainline.py -n 'not __fish_seen_argument -l pipeline-effects' -l pipeline-effects -d 'Effect plugins (comma-separated)' -a "$effects"
complete -c mainline.py -n 'not __fish_seen_argument -l pipeline-camera' -l pipeline-camera -d 'Camera mode' -a "$cameras"
complete -c mainline.py -n 'not __fish_seen_argument -l pipeline-display' -l pipeline-display -d 'Display backend' -a "$display_backends"
complete -c mainline.py -l pipeline-ui -d 'Enable UI panel'
complete -c mainline.py -n 'not __fish_seen_argument -l pipeline-border' -l pipeline-border -d 'Border mode' -a "$borders"
48 completion/mainline-completion.zsh Normal file
@@ -0,0 +1,48 @@
#compdef mainline.py

# Mainline zsh completion script
#
# To install:
#   source /path/to/completion/mainline-completion.zsh
#
# Or add to ~/.zshrc:
#   source /path/to/completion/mainline-completion.zsh

# Define completion function
_mainline() {
    local curcontext="$curcontext" state line ret=1
    typeset -A opt_args

    # All options go in a single _arguments call; a second call would
    # discard the results of the first.
    _arguments -C \
        '(-h --help)'{-h,--help}'[Show help]' \
        '--display=[Display backend]:backend:(terminal null replay websocket pygame moderngl)' \
        '--preset=[Preset to use]:preset:(demo demo-base demo-pygame demo-camera-showcase poetry headlines empty test-basic test-border test-scroll-camera test-figment test-message-overlay)' \
        '--viewport=[Viewport size]:size:(80x24 100x30 120x40 60x20)' \
        '--theme=[Color theme]:theme:(green orange purple blue red)' \
        '--websocket[Enable WebSocket server]' \
        '--websocket-port=[WebSocket port]:port:' \
        '--allow-unsafe[Allow unsafe pipeline configuration]' \
        '--pipeline-source=[Data source]:source:(headlines poetry empty fixture pipeline-inspect)' \
        '--pipeline-effects=[Effect plugins]:effects:(afterimage border crop fade firehose glitch hud motionblur noise tint)' \
        '--pipeline-camera=[Camera mode]:camera:(feed scroll horizontal omni floating bounce radial)' \
        '--pipeline-display=[Display backend]:backend:(terminal null replay websocket pygame moderngl)' \
        '--pipeline-ui[Enable UI panel]' \
        '--pipeline-border=[Border mode]:mode:(off simple ui)' \
        '*:file:_files' \
        && ret=0

    return ret
}

# Register completion function
compdef _mainline mainline.py
compdef _mainline "python -m engine.app"
compdef _mainline "python -m mainline"
153 docs/ARCHITECTURE.md Normal file
@@ -0,0 +1,153 @@
# Mainline Architecture Diagrams

> These diagrams use Mermaid. Render with `npx @mermaid-js/mermaid-cli -i ARCHITECTURE.md`, or view in GitHub/GitLab/Notion.

## Class Hierarchy (Mermaid)

```mermaid
classDiagram
    class Stage {
        <<abstract>>
        +str name
        +set[str] capabilities
        +set[str] dependencies
        +process(data, ctx) Any
    }

    Stage <|-- DataSourceStage
    Stage <|-- CameraStage
    Stage <|-- FontStage
    Stage <|-- ViewportFilterStage
    Stage <|-- EffectPluginStage
    Stage <|-- DisplayStage
    Stage <|-- SourceItemsToBufferStage
    Stage <|-- PassthroughStage
    Stage <|-- ImageToTextStage
    Stage <|-- CanvasStage

    class EffectPlugin {
        <<abstract>>
        +str name
        +EffectConfig config
        +process(buf, ctx) list[str]
        +configure(config) None
    }

    EffectPlugin <|-- NoiseEffect
    EffectPlugin <|-- FadeEffect
    EffectPlugin <|-- GlitchEffect
    EffectPlugin <|-- FirehoseEffect
    EffectPlugin <|-- CropEffect
    EffectPlugin <|-- TintEffect

    class Display {
        <<protocol>>
        +int width
        +int height
        +init(width, height, reuse)
        +show(buffer, border)
        +clear() None
        +cleanup() None
    }

    Display <|.. TerminalDisplay
    Display <|.. NullDisplay
    Display <|.. PygameDisplay
    Display <|.. WebSocketDisplay

    class Camera {
        +int viewport_width
        +int viewport_height
        +CameraMode mode
        +apply(buffer, width, height) list[str]
    }

    class Pipeline {
        +dict[str, Stage] stages
        +PipelineContext context
        +execute(data) StageResult
    }

    Pipeline --> Stage
    Stage --> Display
```

## Data Flow (Mermaid)

```mermaid
flowchart LR
    DataSource[Data Source] --> DataSourceStage
    DataSourceStage --> FontStage
    FontStage --> CameraStage
    CameraStage --> EffectStages
    EffectStages --> DisplayStage
    DisplayStage --> TerminalDisplay
    DisplayStage --> BrowserWebSocket
    DisplayStage --> SixelDisplay
    DisplayStage --> NullDisplay
```

## Effect Chain (Mermaid)

```mermaid
flowchart LR
    InputBuffer --> NoiseEffect
    NoiseEffect --> FadeEffect
    FadeEffect --> GlitchEffect
    GlitchEffect --> FirehoseEffect
    FirehoseEffect --> Output
```

> **Note:** Each effect must preserve buffer dimensions (line count and visible width).
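To make the dimension-preserving contract concrete, here is a minimal illustrative effect. This is a sketch only — `demo_effect` is invented for this example and the real `EffectPlugin` interface may differ:

```python
import random

def demo_effect(buf, intensity=0.5, seed=0):
    """Toy effect: randomly uppercase characters. The output keeps the
    same number of lines and the same width as the input buffer."""
    rng = random.Random(seed)
    out = []
    for line in buf:
        out.append("".join(
            ch.upper() if rng.random() < intensity else ch
            for ch in line
        ))
    return out

frame = ["hello world", "mainline   "]
result = demo_effect(frame)
# The contract: transform characters, never shape.
assert len(result) == len(frame)
assert all(len(a) == len(b) for a, b in zip(result, frame))
```

An effect that shrinks or grows the buffer would break every stage downstream of it in the chain, which is why the contract is enforced at the chain level.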
## Stage Capabilities

```mermaid
flowchart TB
    subgraph "Capability Resolution"
        D[DataSource<br/>provides: source.*]
        C[Camera<br/>provides: render.output]
        E[Effects<br/>provides: render.effect]
        DIS[Display<br/>provides: display.output]
    end
```

---

## Legacy ASCII Diagrams

### Stage Inheritance
```
Stage(ABC)
├── DataSourceStage
├── CameraStage
├── FontStage
├── ViewportFilterStage
├── EffectPluginStage
├── DisplayStage
├── SourceItemsToBufferStage
├── PassthroughStage
├── ImageToTextStage
└── CanvasStage
```

### Display Backends
```
Display(Protocol)
├── TerminalDisplay
├── NullDisplay
├── PygameDisplay
├── WebSocketDisplay
└── MultiDisplay
```

### Camera Modes
```
Camera
├── FEED        # Static view
├── SCROLL      # Horizontal scroll
├── VERTICAL    # Vertical scroll
├── HORIZONTAL  # Same as scroll
├── OMNI        # Omnidirectional
├── FLOATING    # Floating particles
└── BOUNCE      # Bouncing camera
```
178 docs/GRAPH_SYSTEM_SUMMARY.md Normal file
@@ -0,0 +1,178 @@
# Graph-Based Pipeline System - Implementation Summary

## Overview

Implemented a graph-based scripting language to replace the verbose `XYZStage` naming convention in Mainline's pipeline architecture. The new system represents pipelines as nodes and connections, providing a more intuitive way to define, configure, and orchestrate pipelines.

## Files Created

### Core Graph System
- `engine/pipeline/graph.py` - Core graph abstraction (Node, Connection, Graph classes)
- `engine/pipeline/graph_adapter.py` - Adapter to convert a Graph to a Pipeline using the existing Stage classes
- `engine/pipeline/graph_toml.py` - TOML-based graph configuration loader

### Tests
- `tests/test_graph_pipeline.py` - Comprehensive test suite (17 tests, all passing)
- `examples/graph_dsl_demo.py` - Demo script showing the new DSL
- `examples/test_graph_integration.py` - Integration test verifying pipeline execution
- `examples/pipeline_graph.toml` - Example TOML configuration file

### Documentation
- `docs/graph-dsl.md` - Complete DSL documentation with examples
- `docs/GRAPH_SYSTEM_SUMMARY.md` - This summary document

## Key Features

### 1. Graph Abstraction
- **Node Types**: `source`, `camera`, `effect`, `position`, `display`, `render`, `overlay`
- **Connections**: Directed edges between nodes with automatic dependency resolution
- **Validation**: Cycle detection and disconnected-node warnings

### 2. DSL Syntax Options

#### TOML Configuration
```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.camera]
type = "camera"
mode = "scroll"

[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.5

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> camera -> noise -> display"]
```

#### Python API
```python
from engine.pipeline.graph import Graph, NodeType
from engine.pipeline.graph_adapter import graph_to_pipeline

graph = Graph()
graph.node("source", NodeType.SOURCE, source="headlines")
graph.node("camera", NodeType.CAMERA, mode="scroll")
graph.node("noise", NodeType.EFFECT, effect="noise", intensity=0.5)
graph.node("display", NodeType.DISPLAY, backend="terminal")
graph.chain("source", "camera", "noise", "display")

pipeline = graph_to_pipeline(graph)
```

#### Dictionary/JSON Input
```python
from engine.pipeline.graph_adapter import dict_to_pipeline

data = {
    "nodes": {
        "source": "headlines",
        "noise": {"type": "effect", "effect": "noise", "intensity": 0.5},
        "display": {"type": "display", "backend": "terminal"}
    },
    "connections": ["source -> noise -> display"]
}

pipeline = dict_to_pipeline(data)
```

### 3. Pipeline Integration

The graph system integrates with the existing pipeline architecture:

- **Auto-injection**: The Pipeline automatically injects required stages (camera_update, render, etc.)
- **Capability Resolution**: Uses the existing capability-based dependency system
- **Type Safety**: Validates data flow between stages (TEXT_BUFFER, SOURCE_ITEMS, etc.)
- **Backward Compatible**: Works alongside the existing preset system

### 4. Node Configuration

| Node Type | Config Options | Example |
|-----------|----------------|---------|
| `source` | `source`: "headlines", "poetry", "empty" | `{"type": "source", "source": "headlines"}` |
| `camera` | `mode`: "scroll", "feed", "horizontal", etc.<br>`speed`: float | `{"type": "camera", "mode": "scroll", "speed": 1.0}` |
| `effect` | `effect`: effect name<br>`intensity`: 0.0-1.0 | `{"type": "effect", "effect": "noise", "intensity": 0.5}` |
| `position` | `mode`: "absolute", "relative", "mixed" | `{"type": "position", "mode": "mixed"}` |
| `display` | `backend`: "terminal", "null", "websocket" | `{"type": "display", "backend": "terminal"}` |

## Implementation Details

### Graph Adapter Logic

1. **Node Mapping**: Converts graph nodes to the appropriate Stage classes
2. **Effect Intensity**: Sets effect intensity globally (consistent with the existing architecture)
3. **Camera Creation**: Maps mode strings to Camera factory methods
4. **Dependencies**: Effects automatically depend on `render.output`
5. **Type Flow**: Ensures TEXT_BUFFER flows between render and effects

### Validation

- **Disconnected Nodes**: Warns about nodes without connections
- **Cycle Detection**: Detects circular dependencies using DFS
- **Type Validation**: The Pipeline validates inlet/outlet type compatibility
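The cycle detection described above is the standard three-color DFS. The following is a generic sketch of that technique, not the project's actual `graph.py` code — the adjacency-dict representation is an assumption:

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as {node: [successors]}.
    Three-color DFS: WHITE = unvisited, GRAY = on the stack, BLACK = done."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in edges}

    def visit(n):
        color[n] = GRAY
        for m in edges.get(n, []):
            if color.get(m, WHITE) == GRAY:   # back edge -> cycle
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in edges)

print(has_cycle({"source": ["camera"], "camera": ["display"], "display": []}))  # False
print(has_cycle({"a": ["b"], "b": ["a"]}))  # True
```

A GRAY node reached again means the DFS found an edge back into the current path, which is exactly a circular dependency between pipeline nodes.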
## Files Modified

### Core Pipeline
- `engine/pipeline/controller.py` - Pipeline class (no changes needed; uses the existing architecture)
- `engine/pipeline/graph_adapter.py` - Added effect-intensity setting; fixed PositionStage creation
- `engine/app/pipeline_runner.py` - Added graph config support

### Documentation
- `AGENTS.md` - Updated with task tracking

## Test Results

```
17 tests passed in 0.23s
- Graph creation and manipulation
- Connection handling and validation
- TOML loading and parsing
- Pipeline conversion and execution
- Effect intensity configuration
- Camera mode mapping
- Positioning mode support
```

## Usage Examples

### Running with a Graph Config
```bash
python -c "
from engine.effects.plugins import discover_plugins
from engine.pipeline.graph_toml import load_pipeline_from_toml

discover_plugins()
pipeline = load_pipeline_from_toml('examples/pipeline_graph.toml')
"
```

### Integration with the Pipeline Runner
```bash
# The pipeline runner now supports graph configs
# (Implementation in progress)
```

## Benefits

1. **Simplified Configuration**: No need to create Stage instances manually
2. **Visual Representation**: The graph structure is easier to understand than the class hierarchy
3. **Automatic Dependency Resolution**: The Pipeline handles stage ordering automatically
4. **Flexible Composition**: Easy to add, remove, or modify pipeline stages
5. **Backward Compatible**: Existing presets and stages continue to work

## Future Enhancements

1. **CLI Integration**: Add a `--graph-config` flag to the mainline command
2. **Visual Builder**: Web-based drag-and-drop pipeline editor
3. **Script Execution**: Support for loops, conditionals, and timing in graph scripts
4. **Parameter Binding**: Real-time sensor-to-parameter bindings in graph config
5. **Pipeline Inspection**: Visual DAG representation with metrics

docs/PIPELINE.md
@@ -2,136 +2,160 @@

## Architecture Overview

The Mainline pipeline uses a **Stage-based architecture** with **capability-based dependency resolution**. Stages declare capabilities (what they provide) and dependencies (what they need), and the Pipeline resolves dependencies using prefix matching.

```
Sources (static/dynamic) → Fetch → Prepare → Scroll → Effects → Render → Display
                                      ↓
                          NtfyPoller ← MicMonitor (async)
Source Stage → Render Stage → Effect Stages → Display Stage
                   ↓
Camera Stage (provides camera.state capability)
```

### Data Source Abstraction (sources_v2.py)
### Capability-Based Dependency Resolution

- **Static sources**: Data fetched once and cached (HeadlinesDataSource, PoetryDataSource)
- **Dynamic sources**: Idempotent fetch for runtime updates (PipelineDataSource)
- **SourceRegistry**: Discovery and management of data sources

Stages declare capabilities and dependencies:
- **Capabilities**: What the stage provides (e.g., `source`, `render.output`, `display.output`, `camera.state`)
- **Dependencies**: What the stage needs (e.g., `source`, `render.output`, `camera.state`)

### Camera Modes
The Pipeline resolves dependencies using **prefix matching**:
- `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
- `"camera.state"` matches the camera state capability provided by `CameraClockStage`
- This allows flexible composition without hardcoding specific stage names
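The prefix-matching rule described above can be sketched in a few lines; this is an illustrative stand-in, not the Pipeline's actual resolver:

```python
# Illustrative sketch of prefix-based capability matching; the real resolver
# in the Pipeline may differ in structure.
def matches(dependency: str, capability: str) -> bool:
    """A dependency matches an exact capability or any dotted sub-capability."""
    return capability == dependency or capability.startswith(dependency + ".")

providers = ["source.headlines", "source.poetry", "camera.state", "render.output"]
hits = [cap for cap in providers if matches("source", cap)]
# hits now contains both source.* capabilities
```

Note the `dependency + "."` guard: it prevents `"source"` from accidentally matching an unrelated capability such as `"sourceful"`.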

- **Vertical**: Scroll up (default)
- **Horizontal**: Scroll left
- **Omni**: Diagonal scroll
- **Floating**: Sinusoidal bobbing
- **Trace**: Follow network path node-by-node (for pipeline viz)

### Minimum Capabilities

## Content to Display Rendering Pipeline
The pipeline requires these minimum capabilities to function:
- `"source"` - Data source capability (provides raw items)
- `"render.output"` - Rendered content capability
- `"display.output"` - Display output capability
- `"camera.state"` - Camera state for viewport filtering

If any of these are missing, they are automatically injected by the `ensure_minimum_capabilities()` method.
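A minimal sketch of what such injection could look like; the `DEFAULT_STAGES` table and dict shapes here are assumptions for illustration, not Mainline's real implementation:

```python
# Hedged sketch of injecting default stages for missing required capabilities;
# DEFAULT_STAGES and the return shape are assumptions, not Mainline's real code.
REQUIRED = ["source", "render.output", "display.output", "camera.state"]
DEFAULT_STAGES = {
    "source": "EmptySourceStage",
    "render.output": "FontStage",
    "display.output": "NullDisplayStage",
    "camera.state": "CameraClockStage",
}

def ensure_minimum_capabilities(provided: set) -> dict:
    """Return a default stage to inject for each missing required capability."""
    def satisfied(req):
        # Same prefix rule used by dependency resolution.
        return any(cap == req or cap.startswith(req + ".") for cap in provided)
    return {req: DEFAULT_STAGES[req] for req in REQUIRED if not satisfied(req)}
```

The check reuses the prefix rule, so `source.headlines` satisfies the bare `source` requirement.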

### Stage Registry

The `StageRegistry` discovers and registers stages automatically:
- Scans `engine/stages/` for stage implementations
- Registers stages by their declared capabilities
- Enables runtime stage discovery and composition
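A capability-keyed registry of this kind could look like the following sketch; the class and method names are illustrative stand-ins for Mainline's `StageRegistry`:

```python
# Minimal sketch of a capability-keyed stage registry; class and method names
# are illustrative stand-ins, not the real StageRegistry API.
class StageRegistry:
    def __init__(self):
        self._by_capability = {}

    def register(self, capability: str, stage_cls):
        """Index a stage class under the capability it declares."""
        self._by_capability.setdefault(capability, []).append(stage_cls)

    def lookup(self, dependency: str):
        """All stage classes whose capability prefix-matches the dependency."""
        return [cls
                for cap, classes in self._by_capability.items()
                if cap == dependency or cap.startswith(dependency + ".")
                for cls in classes]

registry = StageRegistry()
registry.register("source.headlines", "HeadlinesSource")
registry.register("display.terminal", "TerminalDisplay")
```

`lookup("source")` then finds every registered source stage without hardcoding names, which is what makes runtime composition possible.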

## Stage-Based Pipeline Flow

```mermaid
flowchart TD
    subgraph Sources["Data Sources (v2)"]
        Headlines[HeadlinesDataSource]
        Poetry[PoetryDataSource]
        Pipeline[PipelineDataSource]
        Registry[SourceRegistry]
    end
    subgraph Stages["Stage Pipeline"]
        subgraph SourceStage["Source Stage (provides: source.*)"]
            Headlines[HeadlinesSource]
            Poetry[PoetrySource]
            Pipeline[PipelineSource]
        end

    subgraph SourcesLegacy["Data Sources (legacy)"]
        RSS[("RSS Feeds")]
        PoetryFeed[("Poetry Feed")]
        Ntfy[("Ntfy Messages")]
        Mic[("Microphone")]
    end
        subgraph RenderStage["Render Stage (provides: render.*)"]
            Render[RenderStage]
            Canvas[Canvas]
            Camera[Camera]
        end

    subgraph Fetch["Fetch Layer"]
        FC[fetch_all]
        FP[fetch_poetry]
        Cache[(Cache)]
    end

    subgraph Prepare["Prepare Layer"]
        MB[make_block]
        Strip[strip_tags]
        Trans[translate]
    end

    subgraph Scroll["Scroll Engine"]
        SC[StreamController]
        CAM[Camera]
        RTZ[render_ticker_zone]
        Msg[render_message_overlay]
        Grad[lr_gradient]
        VT[vis_trunc / vis_offset]
    end

    subgraph Effects["Effect Pipeline"]
        subgraph EffectsPlugins["Effect Plugins"]
        subgraph EffectStages["Effect Stages (provides: effect.*)"]
            Noise[NoiseEffect]
            Fade[FadeEffect]
            Glitch[GlitchEffect]
            Firehose[FirehoseEffect]
            Hud[HudEffect]
        end
        EC[EffectChain]
        ER[EffectRegistry]

        subgraph DisplayStage["Display Stage (provides: display.*)"]
            Terminal[TerminalDisplay]
            Pygame[PygameDisplay]
            WebSocket[WebSocketDisplay]
            Null[NullDisplay]
        end
    end

    subgraph Render["Render Layer"]
        BW[big_wrap]
        RL[render_line]
    subgraph Capabilities["Capability Map"]
        SourceCaps["source.headlines<br/>source.poetry<br/>source.pipeline"]
        RenderCaps["render.output<br/>render.canvas"]
        EffectCaps["effect.noise<br/>effect.fade<br/>effect.glitch"]
        DisplayCaps["display.output<br/>display.terminal"]
    end

    subgraph Display["Display Backends"]
        TD[TerminalDisplay]
        PD[PygameDisplay]
        SD[SixelDisplay]
        KD[KittyDisplay]
        WSD[WebSocketDisplay]
        ND[NullDisplay]
    end
    SourceStage --> RenderStage
    RenderStage --> EffectStages
    EffectStages --> DisplayStage

    subgraph Async["Async Sources"]
        NTFY[NtfyPoller]
        MIC[MicMonitor]
    end
    SourceStage --> SourceCaps
    RenderStage --> RenderCaps
    EffectStages --> EffectCaps
    DisplayStage --> DisplayCaps

    subgraph Animation["Animation System"]
        AC[AnimationController]
        PR[Preset]
    end

    Sources --> Fetch
    RSS --> FC
    PoetryFeed --> FP
    FC --> Cache
    FP --> Cache
    Cache --> MB
    Strip --> MB
    Trans --> MB
    MB --> SC
    NTFY --> SC
    SC --> RTZ
    CAM --> RTZ
    Grad --> RTZ
    VT --> RTZ
    RTZ --> EC
    EC --> ER
    ER --> EffectsPlugins
    EffectsPlugins --> BW
    BW --> RL
    RL --> Display
    Ntfy --> RL
    Mic --> RL
    MIC --> RL

    style Sources fill:#f9f,stroke:#333
    style Fetch fill:#bbf,stroke:#333
    style Prepare fill:#bff,stroke:#333
    style Scroll fill:#bfb,stroke:#333
    style Effects fill:#fbf,stroke:#333
    style Render fill:#ffb,stroke:#333
    style Display fill:#bbf,stroke:#333
    style Async fill:#fbb,stroke:#333
    style Animation fill:#bfb,stroke:#333
    style SourceStage fill:#f9f,stroke:#333
    style RenderStage fill:#bbf,stroke:#333
    style EffectStages fill:#fbf,stroke:#333
    style DisplayStage fill:#bfb,stroke:#333
```

## Stage Adapters

Existing components are wrapped as Stages via adapters:

### Source Stage Adapter
- Wraps `HeadlinesDataSource`, `PoetryDataSource`, etc.
- Provides `source.*` capabilities
- Fetches data and outputs to the pipeline buffer

### Render Stage Adapter
- Wraps `StreamController`, `Camera`, `render_ticker_zone`
- Provides the `render.output` capability
- Processes content and renders to the canvas

### Effect Stage Adapter
- Wraps `EffectChain` and individual effect plugins
- Provides `effect.*` capabilities
- Applies visual effects to rendered content

### Display Stage Adapter
- Wraps `TerminalDisplay`, `PygameDisplay`, etc.
- Provides `display.*` capabilities
- Outputs the final buffer to the display backend
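The adapter pattern described above can be sketched generically; the `capabilities`/`dependencies`/`process` shape here is an assumption about the Stage interface, not Mainline's exact API, and `FakeBackend` is purely illustrative:

```python
# Hypothetical shape of a stage adapter wrapping an existing component; the
# attribute and method names are assumptions for illustration.
class DisplayStageAdapter:
    capabilities = ("display.output",)
    dependencies = ("render.output",)

    def __init__(self, backend):
        self.backend = backend          # e.g. a TerminalDisplay instance

    def process(self, inputs: dict) -> dict:
        frame = inputs["render.output"]
        self.backend.show(frame)        # delegate to the wrapped component
        return {"display.output": frame}

class FakeBackend:
    """Stand-in backend that records every frame it is asked to show."""
    def __init__(self):
        self.frames = []
    def show(self, frame):
        self.frames.append(frame)

backend = FakeBackend()
adapter = DisplayStageAdapter(backend)
out = adapter.process({"render.output": "FRAME"})
```

The adapter owns no rendering logic of its own: it translates between the capability-keyed pipeline buffers and the wrapped component's native interface.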

## Pipeline Mutation API

The Pipeline supports dynamic mutation at runtime:

### Core Methods
- `add_stage(name, stage, initialize=True)` - Add a stage
- `remove_stage(name, cleanup=True)` - Remove a stage and rebuild the execution order
- `replace_stage(name, new_stage, preserve_state=True)` - Replace a stage
- `swap_stages(name1, name2)` - Swap two stages
- `move_stage(name, after=None, before=None)` - Move a stage in the execution order
- `enable_stage(name)` / `disable_stage(name)` - Enable/disable stages
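The reordering semantics of `move_stage(name, after=..., before=...)` can be illustrated over a plain list; this list-based model is an assumption about the execution-order structure, not the Pipeline's internal representation:

```python
# Sketch of the reordering logic behind move_stage(name, after=..., before=...);
# the list-of-names model is an assumption for illustration.
def move_stage(order: list, name: str, after=None, before=None) -> list:
    """Return a new execution order with `name` repositioned."""
    order = [s for s in order if s != name]      # remove from current slot
    if after is not None:
        idx = order.index(after) + 1             # insert just after the anchor
    elif before is not None:
        idx = order.index(before)                # insert just before the anchor
    else:
        idx = len(order)                         # default: append to the end
    return order[:idx] + [name] + order[idx:]
```

Only one of `after`/`before` is meant to be given at a time; `after` wins if both are passed, mirroring the argument order in the signature.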

### Safety Checks
- `can_hot_swap(name)` - Check if a stage can be safely hot-swapped
- `cleanup_stage(name)` - Clean up a specific stage without removing it

### WebSocket Commands
The mutation API is accessible via WebSocket for remote control:
```json
{"action": "remove_stage", "stage": "stage_name"}
{"action": "swap_stages", "stage1": "name1", "stage2": "name2"}
{"action": "enable_stage", "stage": "stage_name"}
{"action": "cleanup_stage", "stage": "stage_name"}
```

## Camera Modes

The Camera supports the following modes:

- **FEED**: Single item view (static or rapid cycling)
- **SCROLL**: Smooth vertical scrolling (movie credits style)
- **HORIZONTAL**: Left/right movement
- **OMNI**: Combination of vertical and horizontal
- **FLOATING**: Sinusoidal/bobbing motion
- **BOUNCE**: DVD-style bouncing off edges
- **RADIAL**: Polar coordinate scanning (radar sweep)

Note: Camera state is provided by `CameraClockStage` (capability: `camera.state`), which updates independently of data flow. The `CameraStage` applies viewport transformations (capability: `camera`).

## Animation & Presets

```mermaid
@@ -161,7 +185,7 @@ flowchart LR
    Triggers --> Events
```

## Camera Modes
## Camera Modes State Diagram

```mermaid
stateDiagram-v2
```

@@ -1,11 +1,18 @@
# Refactor mainline.py into modular package

## Problem

`mainline.py` is a single 1085-line file with ~10 interleaved concerns. This prevents:

* Reusing the ntfy doorbell interrupt in other visualizers
* Importing the render pipeline from `serve.py` (future ESP32 HTTP server)
* Testing any concern in isolation
* Porting individual layers to Rust independently

## Target structure

```warp-runnable-command
mainline.py   # thin entrypoint: venv bootstrap → engine.app.main()
engine/
@@ -23,8 +30,11 @@ engine/
  scroll.py   # stream() frame loop + message rendering
  app.py      # main(), TITLE art, boot sequence, signal handler
```

The package is named `engine/` to avoid a naming conflict with the `mainline.py` entrypoint.

## Module dependency graph

```warp-runnable-command
config  ← (nothing)
sources ← (nothing)
@@ -39,64 +49,92 @@ mic ← (nothing — sounddevice only)
scroll  ← config, terminal, render, effects, ntfy, mic
app     ← everything above
```

Critical property: **ntfy.py and mic.py have zero internal dependencies**, making ntfy reusable by any visualizer.

## Module details

### mainline.py (entrypoint — slimmed down)

Keeps only the venv bootstrap (lines 10-38), which must run before any third-party imports. After bootstrap, delegates to `engine.app.main()`.

### engine/config.py

From current mainline.py:

* `HEADLINE_LIMIT`, `FEED_TIMEOUT`, `MIC_THRESHOLD_DB` (lines 55-57)
* `MODE`, `FIREHOSE` CLI flag parsing (lines 58-59)
* `NTFY_TOPIC`, `NTFY_POLL_INTERVAL`, `MESSAGE_DISPLAY_SECS` (lines 62-64)
* `_FONT_PATH`, `_FONT_SZ`, `_RENDER_H` (lines 147-150)
* `_SCROLL_DUR`, `_FRAME_DT`, `FIREHOSE_H` (lines 505-507)
* `GLITCH`, `KATA` glyph tables (lines 143-144)

### engine/sources.py

Pure data, no logic:

* `FEEDS` dict (lines 102-140)
* `POETRY_SOURCES` dict (lines 67-80)
* `SOURCE_LANGS` dict (lines 258-266)
* `_LOCATION_LANGS` dict (lines 269-289)
* `_SCRIPT_FONTS` dict (lines 153-165)
* `_NO_UPPER` set (line 167)

### engine/terminal.py

ANSI primitives and terminal I/O:

* All ANSI constants: `RST`, `BOLD`, `DIM`, `G_HI`, `G_MID`, `G_LO`, `G_DIM`, `W_COOL`, `W_DIM`, `W_GHOST`, `C_DIM`, `CLR`, `CURSOR_OFF`, `CURSOR_ON` (lines 83-99)
* `tw()`, `th()` (lines 223-234)
* `type_out()`, `slow_print()`, `boot_ln()` (lines 355-386)

### engine/filter.py

* `_Strip` HTML parser class (lines 205-214)
* `strip_tags()` (lines 217-220)
* `_SKIP_RE` compiled regex (lines 322-346)
* `_skip()` predicate (lines 349-351)

### engine/translate.py

* `_TRANSLATE_CACHE` (line 291)
* `_detect_location_language()` (lines 294-300) — imports `_LOCATION_LANGS` from sources
* `_translate_headline()` (lines 303-319)

### engine/render.py

The OTF→terminal pipeline. This is exactly what `serve.py` will import to produce 1-bit bitmaps for the ESP32.

* `_GRAD_COLS` gradient table (lines 169-182)
* `_font()`, `_font_for_lang()` with lazy-load + cache (lines 185-202)
* `_render_line()` — OTF text → half-block terminal rows (lines 567-605)
* `_big_wrap()` — word-wrap + render (lines 608-636)
* `_lr_gradient()` — apply left→right color gradient (lines 639-656)
* `_make_block()` — composite: translate → render → colorize a headline (lines 718-756). Imports from translate, sources.

### engine/effects.py

Visual effects applied during the frame loop:

* `noise()` (lines 237-245)
* `glitch_bar()` (lines 248-252)
* `_fade_line()` — probabilistic character dissolve (lines 659-680)
* `_vis_trunc()` — ANSI-aware width truncation (lines 683-701)
* `_firehose_line()` (lines 759-801) — imports config.MODE, sources.FEEDS/POETRY_SOURCES
* `_next_headline()` — pool management (lines 704-715)

### engine/fetch.py

* `fetch_feed()` (lines 390-396)
* `fetch_all()` (lines 399-426) — imports filter._skip, filter.strip_tags, terminal.boot_ln
* `_fetch_gutenberg()` (lines 429-456)
* `fetch_poetry()` (lines 459-472)
* `_cache_path()`, `_load_cache()`, `_save_cache()` (lines 476-501)

### engine/ntfy.py — standalone, reusable

Refactored from the current globals + thread (lines 531-564) and the message rendering section of `stream()` (lines 845-909) into a class:

```python
class NtfyPoller:
    def __init__(self, topic_url, poll_interval=15, display_secs=30):
@@ -108,8 +146,10 @@ class NtfyPoller:
    def dismiss(self):
        """Manually dismiss current message."""
```

Dependencies: `urllib.request`, `json`, `threading`, `time` — all stdlib. No internal imports.
Other visualizers use it like:

```python
from engine.ntfy import NtfyPoller

poller = NtfyPoller("https://ntfy.sh/my_topic/json?since=20s&poll=1")
@@ -120,8 +160,11 @@ if msg:
    title, body, ts = msg
    render_my_message(title, body)  # visualizer-specific
```

### engine/mic.py — standalone

Refactored from the current globals (lines 508-528) into a class:

```python
class MicMonitor:
    def __init__(self, threshold_db=50):
@@ -137,41 +180,75 @@ class MicMonitor:
    def excess(self) -> float:
        """dB above threshold (clamped to 0)."""
```

Dependencies: `sounddevice`, `numpy` (both optional — graceful fallback).

### engine/scroll.py

The `stream()` function (lines 804-990). Receives its dependencies via arguments or imports:

* `stream(items, ntfy_poller, mic_monitor, config)` or similar
* Message rendering (lines 855-909) stays here since it's terminal-display-specific — a different visualizer would render messages differently

### engine/app.py

The orchestrator:

* `TITLE` ASCII art (lines 994-1001)
* `main()` (lines 1004-1084): CLI handling, signal setup, boot animation, fetch, wire up ntfy/mic/scroll

## Execution order

### Step 1: Create engine/ package skeleton

Create `engine/__init__.py` and all empty module files.

### Step 2: Extract pure data modules (zero-dep)

Move constants and data dicts into `config.py`, `sources.py`. These have no logic dependencies.

### Step 3: Extract terminal.py

Move ANSI codes and terminal I/O helpers. No internal deps.

### Step 4: Extract filter.py and translate.py

Both are small, self-contained. translate imports from sources.

### Step 5: Extract render.py

Font loading + the OTF→half-block pipeline. Imports from config, terminal, sources. This is the module `serve.py` will later import.

### Step 6: Extract effects.py

Visual effects. Imports from config, terminal, sources.

### Step 7: Extract fetch.py

Feed/Gutenberg fetching + caching. Imports from config, sources, filter, terminal.

### Step 8: Extract ntfy.py and mic.py

Refactor globals+threads into classes. Zero internal deps.

### Step 9: Extract scroll.py

The frame loop. Last to extract because it depends on everything above.

### Step 10: Extract app.py

The `main()` function, boot sequence, signal handler. Wire up all modules.

### Step 11: Slim down mainline.py

Keep only venv bootstrap + `from engine.app import main; main()`.

### Step 12: Verify

Run `python3 mainline.py`, `python3 mainline.py --poetry`, and `python3 mainline.py --firehose` to confirm identical behavior. No behavioral changes in this refactor.

## What this enables

* **serve.py** (future): `from engine.render import _render_line, _big_wrap` + `from engine.fetch import fetch_all` — imports the pipeline directly
* **Other visualizers**: `from engine.ntfy import NtfyPoller` — doorbell feature with no coupling to mainline's scroll engine
* **Rust port**: Clear boundaries for what to port first (ntfy client, render pipeline) vs what stays in Python (fetching, caching — the server side)

docs/SUMMARY.md
@@ -0,0 +1,30 @@
# Mainline Documentation Summary

## Core Architecture

- [Pipeline Architecture](PIPELINE.md) - Pipeline stages, capability resolution, DAG execution
- [Graph-Based DSL](graph-dsl.md) - New graph abstraction for pipeline configuration

## Pipeline Configuration

- [Hybrid Config](hybrid-config.md) - **Recommended**: Preset simplicity + graph flexibility
- [Graph DSL](graph-dsl.md) - Verbose node-based graph definition
- [Presets Usage](presets-usage.md) - Creating and using pipeline presets

## Feature Documentation

- [Positioning Analysis](positioning-analysis.md) - Positioning modes and tradeoffs
- [Pipeline Introspection](pipeline_introspection.md) - Live pipeline visualization

## Implementation Details

- [Graph System Summary](GRAPH_SYSTEM_SUMMARY.md) - Complete implementation overview

## Quick Start

**Recommended: Hybrid Configuration**

```toml
[pipeline]
source = "headlines"
camera = { mode = "scroll" }
effects = [{ name = "noise", intensity = 0.3 }]
display = { backend = "terminal" }
```

See `docs/hybrid-config.md` for details.

docs/analysis_graph_dsl_duplicative.md
@@ -0,0 +1,236 @@
# Analysis: Graph DSL Duplicative Issue

## Executive Summary

The current Graph DSL implementation in Mainline is **duplicative** because:

1. **Node definitions are repeated**: Every node requires a full `[nodes.name]` block with `type` and specific config, even when the type can often be inferred
2. **Connections are separate**: The `[connections]` list must manually reference node names that were just defined
3. **Type specification is redundant**: The `type = "effect"` is always the same as the key name prefix
4. **No implicit connections**: Even linear pipelines require explicit connection strings

This creates significant verbosity compared to the preset system.

---

## What Makes the Script Feel "Duplicative"

### 1. Type Specification Redundancy

```toml
[nodes.noise]
type = "effect"   # ← Redundant: already know it's an effect from context
effect = "noise"
intensity = 0.3
```

**Why it's redundant:**
- The `[nodes.noise]` section name suggests it's a custom node
- The `effect = "noise"` key implies it's an effect type
- The parser could infer the type from the presence of the `effect` key

### 2. Connection String Redundancy

```toml
[connections]
list = ["source -> camera -> noise -> fade -> glitch -> firehose -> display"]
```

**Why it's redundant:**
- All node names were already defined in individual blocks above
- For linear pipelines, the natural flow is obvious
- The connection order matches the definition order

### 3. Verbosity Comparison

**Preset System (10 lines):**
```toml
[presets.upstream-default]
source = "headlines"
display = "terminal"
camera = "scroll"
effects = ["noise", "fade", "glitch", "firehose"]
camera_speed = 1.0
viewport_width = 80
viewport_height = 24
```

**Graph DSL (39 lines):**
- 3.9x more lines for the same pipeline
- Each effect requires 4 lines instead of 1 line in the preset system
- The connection string repeats all node names

---

## Syntactic Sugar Options

### Option 1: Type Inference (Immediate)

**Current:**
```toml
[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.3
```

**Proposed:**
```toml
[nodes.noise]
effect = "noise"   # Type inferred from 'effect' key
intensity = 0.3
```

**Implementation:** Modify `graph_toml.py` to infer the node type from its keys:
- `effect` key → type = "effect"
- `backend` key → type = "display"
- `source` key → type = "source"
- `mode` key → type = "camera"
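The key-to-type mapping above can be sketched directly; the table follows the bullets, but the function itself is hypothetical, not existing `graph_toml.py` code:

```python
# Sketch of the proposed key-based type inference; the key→type table follows
# the documented bullets, but this function is hypothetical.
KEY_TO_TYPE = {"effect": "effect", "backend": "display",
               "source": "source", "mode": "camera"}

def infer_type(node_config: dict) -> str:
    explicit = node_config.get("type")
    if explicit:
        return explicit                 # an explicit type always wins
    for key, node_type in KEY_TO_TYPE.items():
        if key in node_config:
            return node_type
    raise ValueError(f"cannot infer node type: {node_config!r}")
```

Keeping the explicit `type` as an override makes the change backward compatible: existing configs parse unchanged, while new ones may omit the field.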

### Option 2: Implicit Linear Connections

**Current:**
```toml
[connections]
list = ["source -> camera -> noise -> fade -> display"]
```

**Proposed:**
```toml
[connections]
implicit = true   # Auto-connect all nodes in definition order
```

**Implementation:** If `implicit = true`, automatically create connections between consecutive nodes.
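Implicit linear wiring reduces to pairing each node with its successor; this helper is a hypothetical sketch (TOML tables preserve definition order when parsed, which is what makes it well-defined):

```python
# Sketch of implicit linear wiring: connect nodes in definition order.
# The helper name is hypothetical, not part of graph_toml.py.
def implicit_connections(node_names: list) -> list:
    """Pair each node with its successor: [a, b, c] -> ['a -> b', 'b -> c']."""
    return [f"{a} -> {b}" for a, b in zip(node_names, node_names[1:])]
```

A single node (or an empty list) yields no connections, so the option degrades gracefully for trivial graphs.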

### Option 3: Inline Node Definitions

**Current:**
```toml
[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.3

[nodes.fade]
type = "effect"
effect = "fade"
intensity = 0.5
```

**Proposed:**
```toml
[graph]
nodes = [
    { name = "source", source = "headlines" },
    { name = "noise", effect = "noise", intensity = 0.3 },
    { name = "fade", effect = "fade", intensity = 0.5 },
    { name = "display", backend = "terminal" }
]
connections = ["source -> noise -> fade -> display"]
```

### Option 4: Hybrid Preset-Graph System

```toml
[presets.custom]
source = "headlines"
display = "terminal"
camera = "scroll"
effects = [
    { name = "noise", intensity = 0.3 },
    { name = "fade", intensity = 0.5 }
]
```

---

## Comparative Analysis: Other Systems

### GitHub Actions
```yaml
steps:
  - uses: actions/checkout@v2
  - uses: actions/setup-node@v2
  - run: npm install
```
- Steps run in order, no explicit connection syntax
- Type inference from `uses` or `run`

### Apache Airflow
```python
task1 = PythonOperator(...)
task2 = PythonOperator(...)
task1 >> task2   # Minimal connection syntax
```

### Jenkins Pipeline
```groovy
stages {
    stage('Build') { steps { sh 'make' } }
    stage('Test') { steps { sh 'make test' } }
}
```
- Implicit sequential execution

---

## Recommended Improvements

### Immediate (Backward Compatible)

1. **Type Inference** - Make the `type` field optional
2. **Implicit Connections** - Add an `implicit = true` option
3. **Array Format** - Support a `nodes = ["a", "b", "c"]` format

### Example: Improved Configuration

**Current (39 lines):**
```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.camera]
type = "camera"
mode = "scroll"
speed = 1.0

[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.3

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> camera -> noise -> display"]
```

**Improved (13 lines, 67% reduction):**
```toml
[graph]
nodes = [
    { name = "source", source = "headlines" },
    { name = "camera", mode = "scroll", speed = 1.0 },
    { name = "noise", effect = "noise", intensity = 0.3 },
    { name = "display", backend = "terminal" }
]

[connections]
implicit = true   # Auto-connects: source -> camera -> noise -> display
```

---

## Conclusion

The Graph DSL's duplicative nature stems from:
1. **Explicit type specification** when it could be inferred
2. **Separate connection definitions** that repeat node names
3. **Verbose node definitions** for simple cases
4. **Lack of implicit defaults** for linear pipelines

The recommended improvements focus on **type inference** and **implicit connections** as immediate wins that reduce verbosity by 50%+ while maintaining full flexibility for complex pipelines.

docs/graph-dsl.md
@@ -0,0 +1,210 @@
# Graph-Based Pipeline DSL

This document describes the new graph-based DSL for defining pipelines in Mainline.

## Overview

The graph DSL represents pipelines as nodes and connections, replacing the verbose `XYZStage` naming convention with a more intuitive graph abstraction.

## TOML Syntax

### Basic Pipeline

```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.camera]
type = "camera"
mode = "scroll"

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> camera -> display"]
```
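The `"a -> b -> c"` connection-string format used throughout these configs can be parsed into directed edges with a short helper; this is an illustrative sketch, and the real parser in `graph_toml.py` may differ:

```python
# Sketch of parsing an "a -> b -> c" connection string into directed edges;
# illustrative only — the actual parser in graph_toml.py may differ.
def parse_connection(spec: str) -> list:
    """Split a chain string into (src, dst) edge pairs."""
    names = [part.strip() for part in spec.split("->")]
    return list(zip(names, names[1:]))
```

Each chain of N node names yields N-1 edges, so a single connection string can describe an entire linear pipeline.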

### With Effects

```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.5

[nodes.fade]
type = "effect"
effect = "fade"
intensity = 0.8

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> noise -> fade -> display"]
```

### With Positioning

```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.position]
type = "position"
mode = "mixed"

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> position -> display"]
```

## Python API

### Basic Construction

```python
from engine.pipeline.graph import Graph, NodeType
from engine.pipeline.graph_adapter import graph_to_pipeline

graph = Graph()
graph.node("source", NodeType.SOURCE, source="headlines")
graph.node("camera", NodeType.CAMERA, mode="scroll")
graph.node("display", NodeType.DISPLAY, backend="terminal")
graph.chain("source", "camera", "display")

pipeline = graph_to_pipeline(graph)
```

### With Effects

```python
from engine.pipeline.graph import Graph, NodeType
from engine.pipeline.graph_adapter import graph_to_pipeline

graph = Graph()
graph.node("source", NodeType.SOURCE, source="headlines")
graph.node("noise", NodeType.EFFECT, effect="noise", intensity=0.5)
graph.node("fade", NodeType.EFFECT, effect="fade", intensity=0.8)
graph.node("display", NodeType.DISPLAY, backend="terminal")
graph.chain("source", "noise", "fade", "display")

pipeline = graph_to_pipeline(graph)
```

### Dictionary/JSON Input

```python
from engine.pipeline.graph_adapter import dict_to_pipeline

data = {
    "nodes": {
        "source": "headlines",
        "noise": {"type": "effect", "effect": "noise", "intensity": 0.5},
        "display": {"type": "display", "backend": "terminal"}
    },
    "connections": ["source -> noise -> display"]
}

pipeline = dict_to_pipeline(data)
```

## CLI Usage

### Using a Graph Config File

```bash
mainline --graph-config pipeline.toml
```

### Inline Graph Definition

```bash
mainline --graph 'source:headlines -> noise:noise:0.5 -> display:terminal'
```

### With Preset Override

```bash
mainline --preset demo --graph-modify 'add:noise:0.5 after:source'
```

## Node Types

| Type | Description | Config Options |
|------|-------------|----------------|
| `source` | Data source | `source`: "headlines", "poetry", "empty", etc. |
| `camera` | Viewport camera | `mode`: "scroll", "feed", "horizontal", etc.; `speed`: float |
| `effect` | Visual effect | `effect`: effect name; `intensity`: 0.0-1.0 |
| `position` | Positioning mode | `mode`: "absolute", "relative", "mixed" |
| `display` | Output backend | `backend`: "terminal", "null", "websocket", etc. |
| `render` | Text rendering | (auto-injected) |
| `overlay` | Message overlay | (auto-injected) |
## Advanced Features

### Conditional Connections

```toml
[connections]
list = ["source -> camera -> display"]
# Effects can be conditionally enabled/disabled
```

### Parameter Binding

```toml
[nodes.noise]
type = "effect"
effect = "noise"
intensity = 1.0
# intensity can be bound to sensor values at runtime
```

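The runtime binding mentioned in the comment could look something like the following sketch. This is purely illustrative: the function name, the scaling rule, and the clamping are assumptions, not Mainline's actual binding API.

```python
def bind_intensity(base_intensity: float, sensor_value: float) -> float:
    """Scale a configured intensity by a live sensor reading in [0, 1].

    Illustrative only; the real binding mechanism may differ.
    """
    return max(0.0, min(1.0, base_intensity * sensor_value))


# With intensity = 1.0 from the TOML above and a sensor reading of 0.5:
bind_intensity(1.0, 0.5)  # -> 0.5
```
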
### Pipeline Inspection

```toml
[nodes.inspect]
type = "pipeline-inspect"
# Renders live pipeline visualization
```

## Comparison with Stage-Based Approach

### Old (Stage-Based)

```python
pipeline = Pipeline()
pipeline.add_stage("source", DataSourceStage(HeadlinesDataSource()))
pipeline.add_stage("camera", CameraStage(Camera.scroll()))
pipeline.add_stage("render", FontStage())
pipeline.add_stage("noise", EffectPluginStage(noise_effect))
pipeline.add_stage("display", DisplayStage(terminal_display))
```

### New (Graph-Based)

```python
graph = Graph()
graph.node("source", NodeType.SOURCE, source="headlines")
graph.node("camera", NodeType.CAMERA, mode="scroll")
graph.node("noise", NodeType.EFFECT, effect="noise")
graph.node("display", NodeType.DISPLAY, backend="terminal")
graph.chain("source", "camera", "noise", "display")
pipeline = graph_to_pipeline(graph)
```

The graph system automatically:
- Inserts the render stage between camera and effects
- Handles capability-based dependency resolution
- Auto-injects required stages (camera_update, render, etc.)

docs/hybrid-config.md (new file, 267 lines)
@@ -0,0 +1,267 @@

# Hybrid Preset-Graph Configuration

The hybrid configuration format combines the simplicity of presets with the flexibility of graphs, providing a concise way to define pipelines.

## Overview

The hybrid format takes roughly **half the lines** of the verbose node-based DSL (20 vs. 39 in the comparison below) while providing the same functionality.

### Comparison

**Verbose Node DSL (39 lines):**
```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.camera]
type = "camera"
mode = "scroll"
speed = 1.0

[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.3

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> camera -> noise -> display"]
```

**Hybrid Config (20 lines):**
```toml
[pipeline]
source = "headlines"

camera = { mode = "scroll", speed = 1.0 }

effects = [
    { name = "noise", intensity = 0.5 }
]

display = { backend = "terminal" }
```

## Syntax

### Basic Structure

```toml
[pipeline]
source = "headlines"
camera = { mode = "scroll", speed = 1.0 }
effects = [
    { name = "noise", intensity = 0.3 },
    { name = "fade", intensity = 0.5 }
]
display = { backend = "terminal", positioning = "mixed" }
```

### Configuration Options

#### Source
```toml
source = "headlines"  # Built-in source: headlines, poetry, empty, etc.
```

#### Camera
```toml
# Inline object notation
camera = { mode = "scroll", speed = 1.0 }

# Or shorthand (uses defaults)
camera = "scroll"
```

Available modes: `scroll`, `feed`, `horizontal`, `omni`, `floating`, `bounce`, `radial`

#### Effects
```toml
# Array of effect configurations
effects = [
    { name = "noise", intensity = 0.3 },
    { name = "fade", intensity = 0.5, enabled = true }
]

# Or shorthand (uses defaults)
effects = ["noise", "fade"]
```

Available effects: `noise`, `fade`, `glitch`, `firehose`, `tint`, `hud`, etc.
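Internally, both forms presumably normalize to the same structure. A sketch of that normalization (the default values `intensity = 1.0` and `enabled = true` are assumptions, not documented defaults):

```python
def normalize_effect(entry):
    """Expand shorthand effect entries into full configuration dicts.

    Sketch only; assumes defaults of intensity=1.0, enabled=True.
    """
    if isinstance(entry, str):
        entry = {"name": entry}
    # Entry-specific keys override the assumed defaults.
    return {"intensity": 1.0, "enabled": True, **entry}


normalize_effect("noise")
# -> {"intensity": 1.0, "enabled": True, "name": "noise"}
```
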

#### Display
```toml
# Inline object notation
display = { backend = "terminal", positioning = "mixed" }

# Or shorthand
display = "terminal"
```

Available backends: `terminal`, `null`, `websocket`, `pygame`

### Viewport Settings
```toml
[pipeline]
viewport_width = 80
viewport_height = 24
```

## Usage Examples

### Minimal Configuration
```toml
[pipeline]
source = "headlines"
display = "terminal"
```

### With Camera and Effects
```toml
[pipeline]
source = "headlines"
camera = { mode = "scroll", speed = 1.0 }
effects = [
    { name = "noise", intensity = 0.3 },
    { name = "fade", intensity = 0.5 }
]
display = { backend = "terminal", positioning = "mixed" }
```

### Full Configuration
```toml
[pipeline]
source = "poetry"
camera = { mode = "scroll", speed = 1.5 }
effects = [
    { name = "noise", intensity = 0.2 },
    { name = "fade", intensity = 0.4 },
    { name = "glitch", intensity = 0.3 },
    { name = "firehose", intensity = 0.5 }
]
display = { backend = "terminal", positioning = "mixed" }
viewport_width = 100
viewport_height = 30
```

## Python API

### Loading from TOML File
```python
from engine.pipeline.hybrid_config import load_hybrid_config

config = load_hybrid_config("examples/hybrid_config.toml")
pipeline = config.to_pipeline()
```

### Creating Config Programmatically
```python
from engine.pipeline.hybrid_config import (
    PipelineConfig,
    CameraConfig,
    EffectConfig,
    DisplayConfig,
)

config = PipelineConfig(
    source="headlines",
    camera=CameraConfig(mode="scroll", speed=1.0),
    effects=[
        EffectConfig(name="noise", intensity=0.3),
        EffectConfig(name="fade", intensity=0.5),
    ],
    display=DisplayConfig(backend="terminal", positioning="mixed"),
)

pipeline = config.to_pipeline(viewport_width=80, viewport_height=24)
```

### Converting to Graph
```python
from engine.pipeline.hybrid_config import PipelineConfig

config = PipelineConfig(source="headlines", display={"backend": "terminal"})
graph = config.to_graph()  # Returns Graph object for further manipulation
```

## How It Works

The hybrid config system:

1. **Parses TOML** into a `PipelineConfig` dataclass
2. **Converts to Graph** internally using automatic linear connections
3. **Reuses existing adapter** to convert graph to pipeline stages
4. **Maintains backward compatibility** with verbose node DSL

### Automatic Connection Logic

The system automatically creates linear connections:
```
source -> camera -> effects[0] -> effects[1] -> ... -> display
```

This covers 90% of use cases. For complex DAGs, use the verbose node DSL.
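The ordering rule can be sketched as a small function (node names here are illustrative; the real adapter may name graph nodes differently):

```python
def linear_order(has_camera: bool, effect_names: list[str]) -> list[str]:
    """Return node names in automatic linear-connection order:
    source -> (camera) -> effects... -> display."""
    order = ["source"]
    if has_camera:
        order.append("camera")
    order.extend(effect_names)
    order.append("display")
    return order


linear_order(True, ["noise", "fade"])
# -> ["source", "camera", "noise", "fade", "display"]
```
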

## Migration Guide

### From Presets
The hybrid format is very similar to presets:

**Preset:**
```toml
[presets.custom]
source = "headlines"
effects = ["noise", "fade"]
display = "terminal"
```

**Hybrid:**
```toml
[pipeline]
source = "headlines"
effects = ["noise", "fade"]
display = "terminal"
```

The main difference is using `[pipeline]` instead of `[presets.custom]`.

### From Verbose Node DSL

**Old (39 lines):**
```toml
[nodes.source]
type = "source"
source = "headlines"

[nodes.camera]
type = "camera"
mode = "scroll"

[nodes.noise]
type = "effect"
effect = "noise"
intensity = 0.3

[nodes.display]
type = "display"
backend = "terminal"

[connections]
list = ["source -> camera -> noise -> display"]
```

**New (14 lines):**
```toml
[pipeline]
source = "headlines"
camera = { mode = "scroll" }
effects = [{ name = "noise", intensity = 0.3 }]
display = { backend = "terminal" }
```

## When to Use Each Format

| Format | Use When | Lines (example) |
|--------|----------|-----------------|
| **Preset** | Simple configurations, no effect intensity tuning | 10 |
| **Hybrid** | Most common use cases, need intensity tuning | 20 |
| **Verbose Node DSL** | Complex DAGs, branching, custom connections | 39 |
| **Python API** | Dynamic configuration, programmatic generation | N/A |

## Examples

See `examples/hybrid_config.toml` for a complete working example.

Run the demo:
```bash
python examples/hybrid_visualization.py
```

docs/positioning-analysis.md (new file, 303 lines)
@@ -0,0 +1,303 @@

# ANSI Positioning Approaches Analysis

## Current Positioning Methods in Mainline

### 1. Absolute Positioning (Cursor Positioning Codes)

**Syntax**: `\033[row;colH` (move cursor to row, column)

**Used by Effects**:
- **HUD Effect**: `\033[1;1H`, `\033[2;1H`, `\033[3;1H` - Places HUD at fixed rows
- **Firehose Effect**: `\033[{scr_row};1H` - Places firehose content at bottom rows
- **Figment Effect**: `\033[{scr_row};{center_col + 1}H` - Centers content

**Example**:
```
\033[1;1HMAINLINE DEMO | FPS: 60.0 | 16.7ms
\033[2;1HEFFECT: hud | ████████████████░░░░ | 100%
\033[3;1HPIPELINE: source,camera,render,effect
```

**Characteristics**:
- Each line has explicit row/column coordinates
- Cursor moves to exact position before writing
- Overlay effects can place content at specific locations
- Independent of buffer line order
- Used by effects that need to overlay on top of content
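The cursor-positioning syntax is easy to demonstrate in isolation. A small helper that builds the standard ANSI CUP (cursor position) prefix; this is generic ANSI handling, not Mainline-specific code:

```python
def move_to(row: int, col: int) -> str:
    """Return the ANSI cursor-position (CUP) sequence for (row, col), 1-based."""
    return f"\033[{row};{col}H"


# Build the first HUD line from the example above:
hud_line = move_to(1, 1) + "MAINLINE DEMO | FPS: 60.0 | 16.7ms"
```
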

### 2. Relative Positioning (Newline-Based)

**Syntax**: `\n` (move cursor to next line)

**Used by Base Content**:
- Camera output: Plain text lines
- Render output: Block character lines
- Joined with newlines in terminal display

**Example**:
```
\033[H\033[Jline1\nline2\nline3
```

**Characteristics**:
- Lines are in sequence (top to bottom)
- Cursor moves down one line after each `\n`
- Content flows naturally from top to bottom
- Cannot place content at a specific row without empty lines
- Used by base content from camera/render

### 3. Mixed Positioning (Current Implementation)

**Current Flow**:
```
Terminal display: \033[H\033[J + \n.join(buffer)
Buffer structure: [line1, line2, \033[1;1HHUD line, ...]
```

**Behavior**:
1. `\033[H\033[J` - Move to (1,1), clear screen
2. `line1\n` - Write line1, move to line 2
3. `line2\n` - Write line2, move to line 3
4. `\033[1;1H` - Move back to (1,1)
5. Write HUD content

**Issue**: Overlapping cursor movements can cause visual glitches
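The five steps above amount to concatenating one string per frame. A minimal sketch (the function name and the single HUD line are illustrative):

```python
def mixed_frame(base_lines: list[str], hud_text: str) -> str:
    """Build one mixed-mode frame: home + clear, relative base content,
    then an absolutely positioned HUD line at (1,1)."""
    return "\033[H\033[J" + "\n".join(base_lines) + "\033[1;1H" + hud_text


frame = mixed_frame(["line1", "line2"], "HUD")
# frame == "\033[H\033[Jline1\nline2\033[1;1HHUD"
```
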

---

## Performance Analysis

### Absolute Positioning Performance

**Advantages**:
- Precise control over output position
- No need for empty buffer lines
- Effects can overlay without affecting base content
- Efficient for static overlays (HUD, status bars)

**Disadvantages**:
- More ANSI codes mean a larger output size
- Each line requires a `\033[row;colH` prefix
- Can cause redraw issues if not cleared properly
- Terminal must parse more escape sequences

**Output Size Comparison** (24 lines):
- Absolute: ~1,200 bytes (24 lines x ~50 chars, including the ~30 ANSI positioning codes)
- Relative: ~960 bytes (avg ~40 chars/line x 24 lines)
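These byte figures are rough estimates; they can be sanity-checked with a sketch that counts one frame's output (clear-screen prefix, line content, newline separators, and optionally a per-line positioning prefix):

```python
def frame_size(lines: list[str], absolute: bool = False) -> int:
    """Approximate byte count of one frame of terminal output."""
    size = len("\033[H\033[J")  # home + clear-screen prefix (6 bytes)
    for i, line in enumerate(lines):
        if absolute:
            size += len(f"\033[{i + 1};1H")  # per-line cursor positioning
        size += len(line)
    size += max(0, len(lines) - 1)  # newline separators
    return size
```

For a 24-line buffer this makes the absolute variant larger by exactly the sum of the per-line positioning prefixes.
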

### Relative Positioning Performance

**Advantages**:
- Minimal ANSI codes (only colors, no positioning)
- Smaller output size
- Terminal renders faster (less parsing)
- Natural flow for scrolling content

**Disadvantages**:
- Requires empty lines for spacing
- Cannot overlay content without buffer manipulation
- Limited control over exact positioning
- Harder to implement HUD/status overlays

**Output Size Comparison** (24 lines):
- Base content: ~1,920 bytes (80 chars x 24 lines)
- With colors only: ~2,400 bytes (adds color codes)

### Mixed Positioning Performance

**Current Implementation**:
- Base content uses relative (newlines)
- Effects use absolute (cursor positioning)
- Combined output has both methods

**Trade-offs**:
- Medium output size
- Flexible positioning
- Potential visual conflicts if not coordinated

---

## Animation Performance Implications

### Scrolling Animations (Camera Feed/Scroll)

**Best Approach**: Relative positioning with newlines
- **Why**: Smooth scrolling requires continuous buffer updates
- **Alternative**: Absolute positioning would require recalculating all coordinates

**Performance**:
- Relative: 60 FPS achievable with 80x24 buffer
- Absolute: 55-60 FPS (slightly slower due to more ANSI codes)
- Mixed: 58-60 FPS (negligible difference for small buffers)

### Static Overlay Animations (HUD, Status Bars)

**Best Approach**: Absolute positioning
- **Why**: HUD content doesn't change position, only content
- **Alternative**: Could use fixed buffer positions with relative, but less flexible

**Performance**:
- Absolute: Minimal overhead (3 lines with ANSI codes)
- Relative: Requires maintaining fixed positions in buffer (more complex)

### Particle/Effect Animations (Firehose, Figment)

**Best Approach**: Mixed positioning
- **Why**: Base content flows normally, particles overlay at specific positions
- **Alternative**: All absolute would be overkill

**Performance**:
- Mixed: Optimal balance
- Particles at bottom: `\033[{row};1H` (only affected lines)
- Base content: `\n` (natural flow)

---

## Proposed Design: PositionStage

### Capability Definition

```python
from enum import Enum


class PositioningMode(Enum):
    """Positioning mode for terminal rendering."""

    ABSOLUTE = "absolute"  # Use cursor positioning codes for all lines
    RELATIVE = "relative"  # Use newlines for all lines
    MIXED = "mixed"        # Base content relative, effects absolute (current)
```

### PositionStage Implementation

```python
class PositionStage(Stage):
    """Applies positioning mode to buffer before display."""

    def __init__(self, mode: PositioningMode = PositioningMode.RELATIVE):
        self.mode = mode
        self.name = f"position-{mode.value}"
        self.category = "position"

    @property
    def capabilities(self) -> set[str]:
        return {"position.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"render.output"}  # Needs content before positioning

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        if self.mode == PositioningMode.ABSOLUTE:
            return self._to_absolute(data, ctx)
        elif self.mode == PositioningMode.RELATIVE:
            return self._to_relative(data, ctx)
        else:  # MIXED
            return data  # No transformation needed

    def _to_absolute(self, data: list[str], ctx: PipelineContext) -> list[str]:
        """Convert buffer to absolute positioning (all lines have cursor codes)."""
        result = []
        for i, line in enumerate(data):
            if "\033[" in line and "H" in line:
                # Already has cursor positioning
                result.append(line)
            else:
                # Add cursor positioning for this line
                result.append(f"\033[{i + 1};1H{line}")
        return result

    def _to_relative(self, data: list[str], ctx: PipelineContext) -> list[str]:
        """Convert buffer to relative positioning (use newlines)."""
        # For relative mode, we need to ensure cursor positioning codes are removed
        # This is complex because some effects need them
        return data  # Leave as-is, terminal display handles newlines
```
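Applied to a plain two-line buffer, `_to_absolute` prefixes each line with its row coordinate. A standalone version of the same loop, for illustration:

```python
def to_absolute(lines: list[str]) -> list[str]:
    """Prefix each line with a cursor-positioning code (same loop as above)."""
    out = []
    for i, line in enumerate(lines):
        if "\033[" in line and "H" in line:
            out.append(line)  # already positioned; leave untouched
        else:
            out.append(f"\033[{i + 1};1H{line}")
    return out


to_absolute(["line1", "line2"])
# -> ["\033[1;1Hline1", "\033[2;1Hline2"]
```

Note the containment check is deliberately loose: any line holding both an escape introducer and an `H` is treated as already positioned.
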

### Usage in Pipeline

```toml
# Demo: Absolute positioning (for comparison)
[presets.demo-absolute]
display = "terminal"
positioning = "absolute"  # New parameter
effects = ["hud", "firehose"]  # Effects still work with absolute

# Demo: Relative positioning (default)
[presets.demo-relative]
display = "terminal"
positioning = "relative"  # New parameter
effects = ["hud", "firehose"]  # Effects must adapt
```

### Terminal Display Integration

```python
def show(
    self,
    buffer: list[str],
    border: BorderMode = BorderMode.OFF,
    mode: PositioningMode | None = None,
) -> None:
    # Apply border if requested (fps/frame_time assumed tracked on the display)
    if border != BorderMode.OFF:
        buffer = render_border(buffer, self.width, self.height, self.fps, self.frame_time)

    # Apply positioning based on mode
    if mode == PositioningMode.ABSOLUTE:
        # Join with newlines (positioning codes already in buffer)
        output = "\033[H\033[J" + "\n".join(buffer)
    elif mode == PositioningMode.RELATIVE:
        # Join with newlines
        output = "\033[H\033[J" + "\n".join(buffer)
    else:  # MIXED
        # Current implementation
        output = "\033[H\033[J" + "\n".join(buffer)

    sys.stdout.buffer.write(output.encode())
    sys.stdout.flush()
```

---

## Recommendations

### For Different Animation Types

1. **Scrolling/Feed Animations**:
   - **Recommended**: Relative positioning
   - **Why**: Natural flow, smaller output, better for continuous motion
   - **Example**: Camera feed mode, scrolling headlines

2. **Static Overlay Animations (HUD, Status)**:
   - **Recommended**: Mixed positioning (current)
   - **Why**: HUD at fixed positions, content flows naturally
   - **Example**: FPS counter, effect intensity bar

3. **Particle/Chaos Animations**:
   - **Recommended**: Mixed positioning
   - **Why**: Particles overlay at specific positions, content flows
   - **Example**: Firehose, glitch effects

4. **Precise Layout Animations**:
   - **Recommended**: Absolute positioning
   - **Why**: Complete control over exact positions
   - **Example**: Grid layouts, precise positioning

### Implementation Priority

1. **Phase 1**: Document current behavior (done)
2. **Phase 2**: Create PositionStage with configurable mode
3. **Phase 3**: Update terminal display to respect positioning mode
4. **Phase 4**: Create presets for different positioning modes
5. **Phase 5**: Performance testing and optimization

### Key Considerations

- **Backward Compatibility**: Keep mixed positioning as the default
- **Performance**: Relative is ~20% faster for large buffers
- **Flexibility**: Absolute allows precise control but increases output size
- **Simplicity**: Mixed provides the best balance for typical use cases

---

## Next Steps

1. Implement the `PositioningMode` enum
2. Create the `PositionStage` class with mode configuration
3. Update the terminal display to accept a positioning mode parameter
4. Create test presets for each positioning mode
5. Benchmark the performance of each approach
6. Document best practices for choosing a positioning mode

docs/presets-usage.md (new file, 219 lines)
@@ -0,0 +1,219 @@

# Presets Usage Guide

## Overview

The sideline branch introduces a new preset system that allows you to easily configure different pipeline behaviors. This guide explains the available presets and how to use them.

## Available Presets

### 1. upstream-default

**Purpose:** Matches the default upstream Mainline operation for comparison.

**Configuration:**
- **Display:** Terminal (not pygame)
- **Camera:** Scroll mode
- **Effects:** noise, fade, glitch, firehose (the classic four effects)
- **Positioning:** Mixed mode
- **Message Overlay:** Disabled (matches upstream)

**Usage:**
```bash
python -m mainline --preset upstream-default --display terminal
```

**Best for:**
- Comparing sideline vs upstream behavior
- Legacy terminal-based operation
- Baseline performance testing

### 2. demo

**Purpose:** Showcases sideline features including hotswappable effects and sensors.

**Configuration:**
- **Display:** Pygame (graphical display)
- **Camera:** Scroll mode
- **Effects:** noise, fade, glitch, firehose, hud (with visual feedback)
- **Positioning:** Mixed mode
- **Message Overlay:** Enabled (with ntfy integration)

**Features:**
- **Hotswappable Effects:** Effects can be toggled and modified at runtime
- **LFO Sensor Modulation:** Oscillator sensor provides smooth intensity modulation
- **Visual Feedback:** HUD effect shows current effect state and pipeline info
- **Mixed Positioning:** Optimal balance of performance and control

**Usage:**
```bash
python -m mainline --preset demo --display pygame
```

**Best for:**
- Exploring sideline capabilities
- Testing effect hotswapping
- Demonstrating sensor integration

### 3. demo-base / demo-pygame

**Purpose:** Base presets for custom effect hotswapping experiments.

**Configuration:**
- **Display:** Terminal (base) or Pygame (pygame variant)
- **Camera:** Feed mode
- **Effects:** Empty (add your own)
- **Positioning:** Mixed mode

**Usage:**
```bash
python -m mainline --preset demo-pygame --display pygame
```

### 4. Other Presets

- `poetry`: Poetry feed with subtle effects
- `firehose`: High-speed firehose mode
- `ui`: Interactive UI mode with control panel
- `fixture`: Uses cached headline fixtures
- `websocket`: WebSocket display mode

## Positioning Modes

The `--positioning` flag controls how text is positioned in the terminal:

```bash
# Relative positioning (newlines, good for scrolling)
python -m mainline --positioning relative --preset demo

# Absolute positioning (cursor codes, good for overlays)
python -m mainline --positioning absolute --preset demo

# Mixed positioning (default, optimal balance)
python -m mainline --positioning mixed --preset demo
```

## Pipeline Stages

### Upstream-Default Pipeline

1. **Source Stage:** Headlines data source
2. **Viewport Filter:** Filters items to viewport height
3. **Font Stage:** Renders headlines as block characters
4. **Camera Stages:** Scrolling animation
5. **Effect Stages:** noise, fade, glitch, firehose
6. **Display Stage:** Terminal output

### Demo Pipeline

1. **Source Stage:** Headlines data source
2. **Viewport Filter:** Filters items to viewport height
3. **Font Stage:** Renders headlines as block characters
4. **Camera Stages:** Scrolling animation
5. **Effect Stages:** noise, fade, glitch, firehose, hud
6. **Message Overlay:** Ntfy message integration
7. **Display Stage:** Pygame output

## Command-Line Examples

### Basic Usage

```bash
# Run upstream-default preset
python -m mainline --preset upstream-default --display terminal

# Run demo preset
python -m mainline --preset demo --display pygame

# Run with custom positioning
python -m mainline --preset demo --display pygame --positioning absolute
```

### Comparison Testing

```bash
# Capture upstream output
python -m mainline --preset upstream-default --display null --viewport 80x24

# Capture sideline output
python -m mainline --preset demo --display null --viewport 80x24
```

### Hotswapping Effects

The demo preset supports hotswapping effects at runtime:
- Use the WebSocket display to send commands
- Toggle effects on/off
- Adjust intensity values in real-time

## Configuration Files

### Built-in Presets

Location: `engine/pipeline/presets.py` (Python code)

### User Presets

Location: `~/.config/mainline/presets.toml` or `./presets.toml`

Example user preset:
```toml
[presets.my-custom-preset]
description = "My custom configuration"
source = "headlines"
display = "terminal"
camera = "scroll"
effects = ["noise", "fade"]
positioning = "mixed"
viewport_width = 100
viewport_height = 30
```

## Sensor Configuration

### Oscillator Sensor (LFO)

The oscillator sensor provides Low Frequency Oscillator modulation:

```toml
[sensors.oscillator]
enabled = true
waveform = "sine"  # sine, square, triangle, sawtooth
frequency = 0.05   # 20 second cycle (gentle)
amplitude = 0.5    # 50% modulation
```
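The sine waveform above is a simple function of time; with `frequency = 0.05` one full cycle takes 20 seconds, matching the comment. A sketch (the exact output range of Mainline's sensor is an assumption here):

```python
import math


def lfo(t: float, frequency: float = 0.05, amplitude: float = 0.5) -> float:
    """Sine LFO value at time t (seconds); output in [-amplitude, +amplitude]."""
    return amplitude * math.sin(2 * math.pi * frequency * t)


lfo(0.0)  # -> 0.0
lfo(5.0)  # quarter cycle: approximately +0.5 (the positive peak)
```
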

### Effect Configuration

Effect intensities can be configured with initial values:

```toml
[effect_configs.noise]
enabled = true
intensity = 1.0

[effect_configs.fade]
enabled = true
intensity = 1.0

[effect_configs.glitch]
enabled = true
intensity = 0.5
```

## Troubleshooting

### No Display Output

- Check if the display backend is available (pygame, terminal, etc.)
- Use `--display null` for headless testing

### Effects Not Modulating

- Ensure the sensor is enabled in presets.toml
- Check effect intensity values in the configuration

### Performance Issues

- Use `--positioning relative` for large buffers
- Reduce viewport height for better performance
- Use the null display for testing without rendering

docs/proposals/adr-preset-scripting-language.md (new file, 217 lines)
@@ -0,0 +1,217 @@

# ADR: Preset Scripting Language for Mainline

## Status: Draft

## Context

We need to evaluate whether to add a scripting language for authoring presets in Mainline, replacing or augmenting the current TOML-based preset system. The goals are:

1. **Expressiveness**: More powerful than TOML for describing dynamic, procedural, or dataflow-based presets
2. **Live coding**: Support hot-reloading of presets during runtime (like TidalCycles or Sonic Pi)
3. **Testing**: Include an assertion language to package tests alongside presets
4. **Toolchain**: Consider packaging and build processes

### Current State

The current preset system uses TOML files (`presets.toml`) with a simple structure:

```toml
[presets.demo-base]
description = "Demo: Base preset for effect hot-swapping"
source = "headlines"
display = "terminal"
camera = "feed"
effects = []  # Demo script will add/remove effects dynamically
camera_speed = 0.1
viewport_width = 80
viewport_height = 24
```

This is declarative and static. It cannot express:
- Conditional logic based on runtime state
- Dataflow between pipeline stages
- Procedural generation of stage configurations
- Assertions or validation of preset behavior

### Problems with TOML

- No way to express dependencies between effects or stages
- Cannot describe temporal/animated behavior
- No support for sensor bindings or parametric animations
- Static configuration cannot adapt to runtime conditions
- No built-in testing/assertion mechanism

## Approaches

### 1. Visual Dataflow Language (PureData-style)

Inspired by Pure Data (Pd), Max/MSP, and TouchDesigner:

**Pros:**
- Intuitive for creative coding and live performance
- Strong model for real-time parameter modulation
- Matches the "patcher" paradigm already seen in the pipeline architecture
- Rich ecosystem of visual programming tools

**Cons:**
- Complex to implement from scratch
- Requires a dedicated GUI editor
- Harder to version control (binary/graph formats)
- Mermaid diagrams alone aren't sufficient for this

**Tools to explore:**
- libpd (Pure Data bindings for other languages)
- Node-based frameworks (node-red, various DSP tools)
- TouchDesigner-like approaches

### 2. Textual DSL (TidalCycles-style)

Domain-specific language focused on pattern transformation:

**Pros:**
- Lightweight, fast iteration
- Easy to version control (text files)
- Can express complex patterns with minimal syntax
- Proven in the livecoding community

**Cons:**
- Learning curve for non-programmers
- Less visual than the PureData approach

**Example (hypothetical):**
```
preset my-show {
    source: headlines

    every 8s {
        effect noise: intensity = (0.5 <-> 1.0)
    }

    on mic.level > 0.7 {
        effect glitch: intensity += 0.2
    }
}
```

### 3. Embed Existing Language
|
||||
|
||||
Embed Lua, Python, or JavaScript:
|
||||
|
||||
**Pros:**
|
||||
- Full power of general-purpose language
|
||||
- Existing tooling, testing frameworks
|
||||
- Easy to integrate (many embeddable interpreters)
|
||||
|
||||
**Cons:**
|
||||
- Security concerns with running user code
|
||||
- May be overkill for simple presets
|
||||
- Testing/assertion system must be built on top
|
||||
|
||||
**Tools:**
|
||||
- Lua (lightweight, fast)
|
||||
- Python (rich ecosystem, but heavier)
|
||||
- QuickJS (small, embeddable JS)
|
||||
|
||||
### 4. Hybrid Approach

A visual editor generates a textual DSL that compiles to Python:

**Pros:**
- Best of both worlds
- Can start with a simple DSL and add the editor later

**Cons:**
- More complex initial implementation
## Requirements Analysis

### Must Have
- [ ] Express pipeline stage configurations (source, effects, camera, display)
- [ ] Support parameter bindings to sensors
- [ ] Hot-reloading during runtime
- [ ] Integration with existing Pipeline architecture

### Should Have
- [ ] Basic assertion language for testing
- [ ] Ability to define custom abstractions/modules
- [ ] Version control friendly (text-based)

### Could Have
- [ ] Visual node-based editor
- [ ] Real-time visualization of dataflow
- [ ] MIDI/OSC support for external controllers
## User Stories (Proposed)

### Spike Stories (Investigation)

**Story 1: Evaluate DSL Parsing Tools**
> As a developer, I want to understand the available Python DSL parsing libraries (Lark, parsy, pyparsing) so that I can choose the right tool for implementing a preset DSL.
>
> **Acceptance**: Document pros/cons of 3+ parsing libraries with small proof-of-concept experiments

**Story 2: Research Livecoding Languages**
> As a developer, I want to understand how TidalCycles, Sonic Pi, and PureData handle hot-reloading and pattern generation so that I can apply similar techniques to Mainline.
>
> **Acceptance**: Document key architectural patterns from 2+ livecoding systems

**Story 3: Prototype Textual DSL**
> As a preset author, I want to write presets in a simple textual DSL that supports basic conditionals and sensor bindings.
>
> **Acceptance**: Create a prototype DSL that can parse a sample preset and convert to PipelineConfig
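A stdlib-only sketch of what the Story 3 prototype could start from, using the hypothetical `preset { ... }` syntax from the example above. `parse_preset` and the block-header grammar are placeholders for this spike, not the final design; a real prototype would emit a `PipelineConfig` rather than a plain dict:

```python
import re

# Block headers ("preset NAME {", "every 8s {", "on EXPR {") open nested
# dicts; "key: value" lines become entries; a bare "}" closes a block.
HEADER = re.compile(r"^(preset|every|on)\s+(.+?)\s*\{$")

def parse_preset(text: str) -> dict:
    root: dict = {}
    stack = [root]  # innermost open block is stack[-1]
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if m := HEADER.match(line):
            block: dict = {}
            stack[-1][f"{m.group(1)} {m.group(2)}"] = block
            stack.append(block)
        elif line == "}":
            stack.pop()
        elif ":" in line:
            key, _, value = line.partition(":")
            stack[-1][key.strip()] = value.strip()
    return root

sample = """
preset my-show {
  source: headlines
  every 8s {
    effect noise: intensity = (0.5 <-> 1.0)
  }
}
"""
tree = parse_preset(sample)
print(tree["preset my-show"]["source"])  # headlines
```

Twenty-odd lines covers nesting and key/value bindings; the spike would then decide whether expressions like `(0.5 <-> 1.0)` justify a real grammar via Lark or pyparsing.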

**Story 4: Investigate Assertion/Testing Approaches**
> As a quality engineer, I want to include assertions with presets so that preset behavior can be validated automatically.
>
> **Acceptance**: Survey testing patterns in livecoding and propose assertion syntax

### Implementation Stories (Future)

**Story 5: Implement Core DSL Parser**
> As a preset author, I want to write presets in a textual DSL that supports sensors, conditionals, and parameter bindings.
>
> **Acceptance**: DSL parser handles the core syntax, produces valid PipelineConfig

**Story 6: Hot-Reload System**
> As a performer, I want to edit preset files and see changes reflected in real-time without restarting.
>
> **Acceptance**: File watcher + pipeline mutation API integration works

**Story 7: Assertion Language**
> As a preset author, I want to include assertions that validate sensor values or pipeline state.
>
> **Acceptance**: Assertions can run as part of preset execution and report pass/fail
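One possible shape for Story 7: assertions as comparison expressions over named sensor/pipeline values, evaluated against a snapshot dict each frame or on demand. The `check` helper, the `name OP number` syntax, and the snapshot keys are all placeholders for this sketch:

```python
import operator

# Supported comparison operators; ">=" and "<=" must be listed (and split
# off) before ">" and "<" would match if this ever grows a real tokenizer.
OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge,
       "<=": operator.le, "==": operator.eq}

def check(assertion: str, snapshot: dict) -> bool:
    """Evaluate a 'name OP number' assertion, e.g. 'mic.level > 0.7'."""
    name, op, rhs = assertion.split()
    return OPS[op](snapshot[name], float(rhs))

snapshot = {"mic.level": 0.9, "fps": 58.0}
results = {a: check(a, snapshot)
           for a in ["mic.level > 0.7", "fps >= 60"]}
print(results)  # {'mic.level > 0.7': True, 'fps >= 60': False}
```

Pass/fail results could then be surfaced through the HUD effect or the REPL rather than raising, so a failing assertion never kills a live performance.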

**Story 8: Toolchain/Packaging**
> As a preset distributor, I want to package presets with dependencies for easy sharing.
>
> **Acceptance**: Can create, build, and install a preset package

## Decision

**Recommend: Start with the textual DSL approach (Option 2/4)**

Rationale:
- Lowest barrier to entry (text files, version control)
- Can evolve to the hybrid later if a visual editor is needed
- Strong precedents in the livecoding community (TidalCycles, Sonic Pi)
- Enables hot-reloading naturally
- Assertion language can be part of the DSL syntax

**Not recommending Mermaid**: Mermaid is excellent for documentation and visualization, but it's a diagramming tool, not a programming language. It cannot express the logic, conditionals, and sensor bindings we need.

## Next Steps

1. Execute Spike Stories 1-4 to reduce uncertainty
2. Create a minimal viable DSL syntax
3. Prototype hot-reloading with the existing preset system
4. Evaluate whether a visual editor adds sufficient value to warrant the complexity

## References

- Pure Data: https://puredata.info/
- TidalCycles: https://tidalcycles.org/
- Sonic Pi: https://sonic-pi.net/
- Lark parser: https://lark-parser.readthedocs.io/
- Mainline Pipeline Architecture: `engine/pipeline/`
- Current Presets: `presets.toml`
@@ -1,145 +0,0 @@

# README Update Design — 2026-03-15

## Goal

Restructure and expand `README.md` to:
1. Align with the current codebase (Python 3.10+, uv/mise/pytest/ruff toolchain, 6 new fonts)
2. Add extensibility-focused content (`Extending` section)
3. Add developer workflow coverage (`Development` section)
4. Improve navigability via top-level grouping (Approach C)

---

## Proposed Structure

```
# MAINLINE
> tagline + description

## Using
### Run
### Config
### Feeds
### Fonts
### ntfy.sh

## Internals
### How it works
### Architecture

## Extending
### NtfyPoller
### MicMonitor
### Render pipeline

## Development
### Setup
### Tasks
### Testing
### Linting

## Roadmap

---
*footer*
```

---

## Section-by-section design

### Using

All existing content preserved verbatim. Two changes:
- **Run**: add `uv run mainline.py` as an alternative invocation; expand the bootstrap note to mention `uv sync` / `uv sync --all-extras`
- **ntfy.sh**: remove the `NtfyPoller` reuse code example (moves to Extending); keep push instructions and topic config

Subsections moved into Using (currently standalone):
- `Feeds` — it's configuration, not a concept
- `ntfy.sh` (usage half)

### Internals

All existing content preserved verbatim. One change:
- **Architecture**: append the `tests/` directory listing to the module tree

### Extending

Entirely new section. Three subsections:

**NtfyPoller**
- Minimal working import + usage example
- Note: stdlib-only dependencies

```python
from engine.ntfy import NtfyPoller

poller = NtfyPoller("https://ntfy.sh/my_topic/json?since=20s&poll=1")
poller.start()

# in your render loop:
msg = poller.get_active_message()  # → (title, body, timestamp) or None
if msg:
    title, body, ts = msg
    render_my_message(title, body)  # visualizer-specific
```

**MicMonitor**
- Minimal working import + usage example
- Note: sounddevice/numpy optional, degrades gracefully

```python
from engine.mic import MicMonitor

mic = MicMonitor(threshold_db=50)
if mic.start():  # returns False if sounddevice unavailable
    excess = mic.excess  # dB above threshold, clamped to 0
    db = mic.db  # raw RMS dB level
```

**Render pipeline**
- Brief prose about `engine.render` as an importable pipeline
- Minimal sketch of the serve.py / ESP32 usage pattern
- Reference to `Mainline Renderer + ntfy Message Queue for ESP32.md`

### Development

Entirely new section. Four subsections:

**Setup**
- Hard requirements: Python 3.10+, uv
- `uv sync` / `uv sync --all-extras` / `uv sync --group dev`

**Tasks** (via mise)
- `mise run test`, `test-cov`, `lint`, `lint-fix`, `format`, `run`, `run-poetry`, `run-firehose`

**Testing**
- Tests in `tests/` covering config, filter, mic, ntfy, sources, terminal
- `uv run pytest` and `uv run pytest --cov=engine --cov-report=term-missing`

**Linting**
- `uv run ruff check` and `uv run ruff format`
- Note: pre-commit hooks run lint via `hk`

### Roadmap

Existing `## Ideas / Future` content preserved verbatim. Only change: rename the heading to `## Roadmap`.

### Footer

Update `Python 3.9+` → `Python 3.10+`.

---

## Files changed

- `README.md` — restructured and expanded as above
- No other files

---

## What is not changing

- All existing prose, examples, and config table values — preserved verbatim where retained
- The Ideas/Future content — kept intact under the new Roadmap heading
- The cyberpunk voice and terse style of the existing README
@@ -1,37 +0,0 @@
import random

from engine import config
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
from engine.terminal import C_DIM, DIM, G_DIM, G_LO, RST


class GlitchEffect(EffectPlugin):
    name = "glitch"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not buf:
            return buf
        result = list(buf)
        intensity = self.config.intensity

        glitch_prob = 0.32 + min(0.9, ctx.mic_excess * 0.16)
        glitch_prob = glitch_prob * intensity
        n_hits = 4 + int(ctx.mic_excess / 2)
        n_hits = int(n_hits * intensity)

        if random.random() < glitch_prob:
            for _ in range(min(n_hits, len(result))):
                gi = random.randint(0, len(result) - 1)
                scr_row = gi + 1
                result[gi] = f"\033[{scr_row};1H{self._glitch_bar(ctx.terminal_width)}"
        return result

    def _glitch_bar(self, w: int) -> str:
        c = random.choice(["░", "▒", "─", "\xc2"])
        n = random.randint(3, w // 2)
        o = random.randint(0, w - n)
        return " " * o + f"{G_LO}{DIM}" + c * n + RST

    def configure(self, config: EffectConfig) -> None:
        self.config = config
@@ -1,63 +0,0 @@
from engine.effects.performance import get_monitor
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class HudEffect(EffectPlugin):
    name = "hud"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        result = list(buf)
        monitor = get_monitor()

        fps = 0.0
        frame_time = 0.0
        if monitor:
            stats = monitor.get_stats()
            if stats and "pipeline" in stats:
                frame_time = stats["pipeline"].get("avg_ms", 0.0)
                frame_count = stats.get("frame_count", 0)
                if frame_count > 0 and frame_time > 0:
                    fps = 1000.0 / frame_time

        w = ctx.terminal_width
        h = ctx.terminal_height

        effect_name = self.config.params.get("display_effect", "none")
        effect_intensity = self.config.params.get("display_intensity", 0.0)

        hud_lines = []
        hud_lines.append(
            f"\033[1;1H\033[38;5;46mMAINLINE DEMO\033[0m \033[38;5;245m|\033[0m \033[38;5;39mFPS: {fps:.1f}\033[0m \033[38;5;245m|\033[0m \033[38;5;208m{frame_time:.1f}ms\033[0m"
        )

        bar_width = 20
        filled = int(bar_width * effect_intensity)
        bar = (
            "\033[38;5;82m"
            + "█" * filled
            + "\033[38;5;240m"
            + "░" * (bar_width - filled)
            + "\033[0m"
        )
        hud_lines.append(
            f"\033[2;1H\033[38;5;45mEFFECT:\033[0m \033[1;38;5;227m{effect_name:12s}\033[0m \033[38;5;245m|\033[0m {bar} \033[38;5;245m|\033[0m \033[38;5;219m{effect_intensity * 100:.0f}%\033[0m"
        )

        from engine.effects import get_effect_chain

        chain = get_effect_chain()
        order = chain.get_order()
        pipeline_str = ",".join(order) if order else "(none)"
        hud_lines.append(f"\033[3;1H\033[38;5;44mPIPELINE:\033[0m {pipeline_str}")

        for i, line in enumerate(hud_lines):
            if i < len(result):
                result[i] = line + result[i][len(line) :]
            else:
                result.append(line)

        return result

    def configure(self, config: EffectConfig) -> None:
        self.config = config
@@ -1 +1,10 @@
# engine — modular internals for mainline

# Import submodules to make them accessible via engine.<name>
# This is required for unittest.mock.patch to work with "engine.<module>.<function>"
# strings and for direct attribute access on the engine package.
import engine.config  # noqa: F401
import engine.fetch  # noqa: F401
import engine.filter  # noqa: F401
import engine.sources  # noqa: F401
import engine.terminal  # noqa: F401
@@ -1,340 +0,0 @@
"""
Animation system - Clock, events, triggers, durations, and animation controller.
"""

import time
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Any


class Clock:
    """High-resolution clock for animation timing."""

    def __init__(self):
        self._start_time = time.perf_counter()
        self._paused = False
        self._pause_offset = 0.0
        self._pause_start = 0.0

    def reset(self) -> None:
        self._start_time = time.perf_counter()
        self._paused = False
        self._pause_offset = 0.0
        self._pause_start = 0.0

    def elapsed(self) -> float:
        if self._paused:
            return self._pause_start - self._start_time - self._pause_offset
        return time.perf_counter() - self._start_time - self._pause_offset

    def elapsed_ms(self) -> float:
        return self.elapsed() * 1000

    def elapsed_frames(self, fps: float = 60.0) -> int:
        return int(self.elapsed() * fps)

    def pause(self) -> None:
        if not self._paused:
            self._paused = True
            self._pause_start = time.perf_counter()

    def resume(self) -> None:
        if self._paused:
            self._pause_offset += time.perf_counter() - self._pause_start
            self._paused = False


class TriggerType(Enum):
    TIME = auto()  # Trigger after elapsed time
    FRAME = auto()  # Trigger after N frames
    CYCLE = auto()  # Trigger on cycle repeat
    CONDITION = auto()  # Trigger when condition is met
    MANUAL = auto()  # Trigger manually


@dataclass
class Trigger:
    """Event trigger configuration."""

    type: TriggerType
    value: float | int = 0
    condition: Callable[["AnimationController"], bool] | None = None
    repeat: bool = False
    repeat_interval: float = 0.0


@dataclass
class Event:
    """An event with trigger, duration, and action."""

    name: str
    trigger: Trigger
    action: Callable[["AnimationController", float], None]
    duration: float = 0.0
    ease: Callable[[float], float] | None = None

    def __post_init__(self):
        if self.ease is None:
            self.ease = linear_ease


def linear_ease(t: float) -> float:
    return t


def ease_in_out(t: float) -> float:
    return t * t * (3 - 2 * t)


def ease_out_bounce(t: float) -> float:
    if t < 1 / 2.75:
        return 7.5625 * t * t
    elif t < 2 / 2.75:
        t -= 1.5 / 2.75
        return 7.5625 * t * t + 0.75
    elif t < 2.5 / 2.75:
        t -= 2.25 / 2.75
        return 7.5625 * t * t + 0.9375
    else:
        t -= 2.625 / 2.75
        return 7.5625 * t * t + 0.984375


class AnimationController:
    """Controls animation parameters with clock and events."""

    def __init__(self, fps: float = 60.0):
        self.clock = Clock()
        self.fps = fps
        self.frame = 0
        self._events: list[Event] = []
        self._active_events: dict[str, float] = {}
        self._params: dict[str, Any] = {}
        self._cycled = 0

    def add_event(self, event: Event) -> "AnimationController":
        self._events.append(event)
        return self

    def set_param(self, key: str, value: Any) -> None:
        self._params[key] = value

    def get_param(self, key: str, default: Any = None) -> Any:
        return self._params.get(key, default)

    def update(self) -> dict[str, Any]:
        """Update animation state, return current params."""
        elapsed = self.clock.elapsed()

        for event in self._events:
            triggered = False

            if event.trigger.type == TriggerType.TIME:
                if self.clock.elapsed() >= event.trigger.value:
                    triggered = True
            elif event.trigger.type == TriggerType.FRAME:
                if self.frame >= event.trigger.value:
                    triggered = True
            elif event.trigger.type == TriggerType.CYCLE:
                cycle_duration = event.trigger.value
                if cycle_duration > 0:
                    current_cycle = int(elapsed / cycle_duration)
                    if current_cycle > self._cycled:
                        self._cycled = current_cycle
                        triggered = True
            elif event.trigger.type == TriggerType.CONDITION:
                if event.trigger.condition and event.trigger.condition(self):
                    triggered = True
            elif event.trigger.type == TriggerType.MANUAL:
                pass

            if triggered:
                if event.name not in self._active_events:
                    self._active_events[event.name] = 0.0

                progress = 0.0
                if event.duration > 0:
                    self._active_events[event.name] += 1 / self.fps
                    progress = min(
                        1.0, self._active_events[event.name] / event.duration
                    )
                    eased_progress = event.ease(progress)
                    event.action(self, eased_progress)

                    if progress >= 1.0:
                        if event.trigger.repeat:
                            self._active_events[event.name] = 0.0
                        else:
                            del self._active_events[event.name]
                else:
                    event.action(self, 1.0)
                    if not event.trigger.repeat:
                        del self._active_events[event.name]
                    else:
                        self._active_events[event.name] = 0.0

        self.frame += 1
        return dict(self._params)


@dataclass
class PipelineParams:
    """Snapshot of pipeline parameters for animation."""

    effect_enabled: dict[str, bool] = field(default_factory=dict)
    effect_intensity: dict[str, float] = field(default_factory=dict)
    camera_mode: str = "vertical"
    camera_speed: float = 1.0
    camera_x: int = 0
    camera_y: int = 0
    display_backend: str = "terminal"
    scroll_speed: float = 1.0


class Preset:
    """Packages a starting pipeline config + animation controller."""

    def __init__(
        self,
        name: str,
        description: str = "",
        initial_params: PipelineParams | None = None,
        animation: AnimationController | None = None,
    ):
        self.name = name
        self.description = description
        self.initial_params = initial_params or PipelineParams()
        self.animation = animation or AnimationController()

    def create_controller(self) -> AnimationController:
        controller = AnimationController()
        for key, value in self.initial_params.__dict__.items():
            controller.set_param(key, value)
        for event in self.animation._events:
            controller.add_event(event)
        return controller


def create_demo_preset() -> Preset:
    """Create the demo preset with effect cycling and camera modes."""
    animation = AnimationController(fps=60)

    effects = ["noise", "fade", "glitch", "firehose"]
    camera_modes = ["vertical", "horizontal", "omni", "floating", "trace"]

    def make_effect_action(eff):
        def action(ctrl, t):
            ctrl.set_param("current_effect", eff)
            ctrl.set_param("effect_intensity", t)

        return action

    def make_camera_action(cam_mode):
        def action(ctrl, t):
            ctrl.set_param("camera_mode", cam_mode)

        return action

    for i, effect in enumerate(effects):
        effect_duration = 5.0

        animation.add_event(
            Event(
                name=f"effect_{effect}",
                trigger=Trigger(
                    type=TriggerType.TIME,
                    value=i * effect_duration,
                    repeat=True,
                    repeat_interval=len(effects) * effect_duration,
                ),
                duration=effect_duration,
                action=make_effect_action(effect),
                ease=ease_in_out,
            )
        )

    for i, mode in enumerate(camera_modes):
        camera_duration = 10.0
        animation.add_event(
            Event(
                name=f"camera_{mode}",
                trigger=Trigger(
                    type=TriggerType.TIME,
                    value=i * camera_duration,
                    repeat=True,
                    repeat_interval=len(camera_modes) * camera_duration,
                ),
                duration=0.5,
                action=make_camera_action(mode),
            )
        )

    animation.add_event(
        Event(
            name="pulse",
            trigger=Trigger(type=TriggerType.CYCLE, value=2.0, repeat=True),
            duration=1.0,
            action=lambda ctrl, t: ctrl.set_param("pulse", t),
            ease=ease_out_bounce,
        )
    )

    return Preset(
        name="demo",
        description="Demo mode with effect cycling and camera modes",
        initial_params=PipelineParams(
            effect_enabled={
                "noise": False,
                "fade": False,
                "glitch": False,
                "firehose": False,
                "hud": True,
            },
            effect_intensity={
                "noise": 0.0,
                "fade": 0.0,
                "glitch": 0.0,
                "firehose": 0.0,
            },
            camera_mode="vertical",
            camera_speed=1.0,
            display_backend="pygame",
        ),
        animation=animation,
    )


def create_pipeline_preset() -> Preset:
    """Create preset for pipeline visualization."""
    animation = AnimationController(fps=60)

    animation.add_event(
        Event(
            name="camera_trace",
            trigger=Trigger(type=TriggerType.CYCLE, value=8.0, repeat=True),
            duration=8.0,
            action=lambda ctrl, t: ctrl.set_param("camera_mode", "trace"),
        )
    )

    animation.add_event(
        Event(
            name="highlight_path",
            trigger=Trigger(type=TriggerType.CYCLE, value=4.0, repeat=True),
            duration=4.0,
            action=lambda ctrl, t: ctrl.set_param("path_progress", t),
        )
    )

    return Preset(
        name="pipeline",
        description="Pipeline visualization with trace camera",
        initial_params=PipelineParams(
            camera_mode="trace",
            camera_speed=1.0,
            display_backend="pygame",
        ),
        animation=animation,
    )
engine/app.py (1091 lines)
File diff suppressed because it is too large
engine/app/__init__.py (34 lines, new file)
@@ -0,0 +1,34 @@
"""
Application orchestrator — pipeline mode entry point.

This package contains the main application logic for the pipeline mode,
including pipeline construction, UI controller setup, and the main render loop.
"""

# Re-export from engine for backward compatibility with tests
# Re-export effects plugins for backward compatibility with tests
import engine.effects.plugins as effects_plugins
from engine import config

# Re-export display registry for backward compatibility with tests
from engine.display import DisplayRegistry

# Re-export fetch functions for backward compatibility with tests
from engine.fetch import fetch_all, fetch_poetry, load_cache
from engine.pipeline import list_presets

from .main import main, run_pipeline_mode_direct
from .pipeline_runner import run_pipeline_mode

__all__ = [
    "config",
    "list_presets",
    "main",
    "run_pipeline_mode",
    "run_pipeline_mode_direct",
    "fetch_all",
    "fetch_poetry",
    "load_cache",
    "DisplayRegistry",
    "effects_plugins",
]
engine/app/main.py (618 lines, new file)
@@ -0,0 +1,618 @@
|
||||
"""
|
||||
Main entry point and CLI argument parsing for the application.
|
||||
"""
|
||||
|
||||
import sys
|
||||
import time
|
||||
|
||||
from engine import config
|
||||
from engine.display import BorderMode, DisplayRegistry
|
||||
from engine.effects import get_registry
|
||||
from engine.fetch import fetch_all, fetch_all_fast, fetch_poetry, load_cache, save_cache
|
||||
from engine.pipeline import (
|
||||
Pipeline,
|
||||
PipelineConfig,
|
||||
PipelineContext,
|
||||
list_presets,
|
||||
)
|
||||
from engine.pipeline.adapters import (
|
||||
CameraStage,
|
||||
DataSourceStage,
|
||||
EffectPluginStage,
|
||||
create_stage_from_display,
|
||||
create_stage_from_effect,
|
||||
)
|
||||
from engine.pipeline.params import PipelineParams
|
||||
from engine.pipeline.ui import UIConfig, UIPanel
|
||||
from engine.pipeline.validation import validate_pipeline_config
|
||||
|
||||
try:
|
||||
from engine.display.backends.websocket import WebSocketDisplay
|
||||
except ImportError:
|
||||
WebSocketDisplay = None
|
||||
|
||||
from .pipeline_runner import run_pipeline_mode
|
||||
|
||||
|
||||
def _handle_pipeline_mutation(pipeline: Pipeline, command: dict) -> bool:
|
||||
"""Handle pipeline mutation commands from REPL or other external control.
|
||||
|
||||
Args:
|
||||
pipeline: The pipeline to mutate
|
||||
command: Command dictionary with 'action' and other parameters
|
||||
|
||||
Returns:
|
||||
True if command was successfully handled, False otherwise
|
||||
"""
|
||||
action = command.get("action")
|
||||
|
||||
if action == "add_stage":
|
||||
stage_name = command.get("stage")
|
||||
stage_type = command.get("stage_type")
|
||||
print(
|
||||
f" [Pipeline] add_stage command received: name='{stage_name}', type='{stage_type}'"
|
||||
)
|
||||
# Note: Dynamic stage creation is complex and requires stage factory support
|
||||
# For now, we acknowledge the command but don't actually add the stage
|
||||
return True
|
||||
|
||||
elif action == "remove_stage":
|
||||
stage_name = command.get("stage")
|
||||
if stage_name:
|
||||
result = pipeline.remove_stage(stage_name)
|
||||
print(f" [Pipeline] Removed stage '{stage_name}': {result is not None}")
|
||||
return result is not None
|
||||
|
||||
elif action == "replace_stage":
|
||||
stage_name = command.get("stage")
|
||||
print(f" [Pipeline] replace_stage command received: {command}")
|
||||
return True
|
||||
|
||||
elif action == "swap_stages":
|
||||
stage1 = command.get("stage1")
|
||||
stage2 = command.get("stage2")
|
||||
if stage1 and stage2:
|
||||
result = pipeline.swap_stages(stage1, stage2)
|
||||
print(f" [Pipeline] Swapped stages '{stage1}' and '{stage2}': {result}")
|
||||
return result
|
||||
|
||||
elif action == "move_stage":
|
||||
stage_name = command.get("stage")
|
||||
after = command.get("after")
|
||||
before = command.get("before")
|
||||
if stage_name:
|
||||
result = pipeline.move_stage(stage_name, after, before)
|
||||
print(f" [Pipeline] Moved stage '{stage_name}': {result}")
|
||||
return result
|
||||
|
||||
elif action == "enable_stage":
|
||||
stage_name = command.get("stage")
|
||||
if stage_name:
|
||||
result = pipeline.enable_stage(stage_name)
|
||||
print(f" [Pipeline] Enabled stage '{stage_name}': {result}")
|
||||
return result
|
||||
|
||||
elif action == "disable_stage":
|
||||
stage_name = command.get("stage")
|
||||
if stage_name:
|
||||
result = pipeline.disable_stage(stage_name)
|
||||
print(f" [Pipeline] Disabled stage '{stage_name}': {result}")
|
||||
return result
|
||||
|
||||
elif action == "cleanup_stage":
|
||||
stage_name = command.get("stage")
|
||||
if stage_name:
|
||||
pipeline.cleanup_stage(stage_name)
|
||||
print(f" [Pipeline] Cleaned up stage '{stage_name}'")
|
||||
return True
|
||||
|
||||
elif action == "can_hot_swap":
|
||||
stage_name = command.get("stage")
|
||||
if stage_name:
|
||||
can_swap = pipeline.can_hot_swap(stage_name)
|
||||
print(f" [Pipeline] Can hot-swap '{stage_name}': {can_swap}")
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
|
||||
def main():
|
||||
"""Main entry point - all modes now use presets or CLI construction."""
|
||||
if config.PIPELINE_DIAGRAM:
|
||||
try:
|
||||
from engine.pipeline import generate_pipeline_diagram
|
||||
except ImportError:
|
||||
print("Error: pipeline diagram not available")
|
||||
return
|
||||
print(generate_pipeline_diagram())
|
||||
return
|
||||
|
||||
# Check for direct pipeline construction flags
|
||||
if "--pipeline-source" in sys.argv:
|
||||
# Construct pipeline directly from CLI args
|
||||
run_pipeline_mode_direct()
|
||||
return
|
||||
|
||||
preset_name = None
|
||||
|
||||
if config.PRESET:
|
||||
preset_name = config.PRESET
|
||||
elif config.PIPELINE_MODE:
|
||||
preset_name = config.PIPELINE_PRESET
|
||||
else:
|
||||
preset_name = "demo"
|
||||
|
||||
available = list_presets()
|
||||
if preset_name not in available:
|
||||
print(f"Error: Unknown preset '{preset_name}'")
|
||||
print(f"Available presets: {', '.join(available)}")
|
||||
sys.exit(1)
|
||||
|
||||
run_pipeline_mode(preset_name)
|
||||
|
||||
|
||||
def run_pipeline_mode_direct():
|
||||
"""Construct and run a pipeline directly from CLI arguments.
|
||||
|
||||
Usage:
|
||||
python -m engine.app --pipeline-source headlines --pipeline-effects noise,fade --display null
|
||||
python -m engine.app --pipeline-source fixture --pipeline-effects glitch --pipeline-ui --display null
|
||||
|
||||
Flags:
|
||||
--pipeline-source <source>: Headlines, fixture, poetry, empty, pipeline-inspect
|
||||
--pipeline-effects <effects>: Comma-separated list (noise, fade, glitch, firehose, hud, tint, border, crop)
|
||||
--pipeline-camera <type>: scroll, feed, horizontal, omni, floating, bounce
|
||||
--pipeline-display <display>: terminal, pygame, websocket, null, multi:term,pygame
|
||||
--pipeline-ui: Enable UI panel (BorderMode.UI)
|
||||
--pipeline-border <mode>: off, simple, ui
|
||||
"""
|
||||
import engine.effects.plugins as effects_plugins
|
||||
from engine.camera import Camera
|
||||
from engine.data_sources.pipeline_introspection import PipelineIntrospectionSource
|
||||
from engine.data_sources.sources import EmptyDataSource, ListDataSource
|
||||
from engine.pipeline.adapters import (
|
||||
FontStage,
|
||||
ViewportFilterStage,
|
||||
)
|
||||
|
||||
# Discover and register all effect plugins
|
||||
effects_plugins.discover_plugins()
|
||||
|
||||
# Parse CLI arguments
|
||||
source_name = None
|
||||
effect_names = []
|
||||
camera_type = None
|
||||
display_name = None
|
||||
ui_enabled = False
|
||||
border_mode = BorderMode.OFF
|
||||
source_items = None
|
||||
allow_unsafe = False
|
||||
viewport_width = None
|
||||
viewport_height = None
|
||||
|
||||
    i = 1
    argv = sys.argv
    while i < len(argv):
        arg = argv[i]
        if arg == "--pipeline-source" and i + 1 < len(argv):
            source_name = argv[i + 1]
            i += 2
        elif arg == "--pipeline-effects" and i + 1 < len(argv):
            effect_names = [e.strip() for e in argv[i + 1].split(",") if e.strip()]
            i += 2
        elif arg == "--pipeline-camera" and i + 1 < len(argv):
            camera_type = argv[i + 1]
            i += 2
        elif arg == "--viewport" and i + 1 < len(argv):
            vp = argv[i + 1]
            try:
                viewport_width, viewport_height = map(int, vp.split("x"))
            except ValueError:
                print("Error: Invalid viewport format. Use WxH (e.g., 40x15)")
                sys.exit(1)
            i += 2
        elif arg == "--pipeline-display" and i + 1 < len(argv):
            display_name = argv[i + 1]
            i += 2
        elif arg == "--pipeline-ui":
            ui_enabled = True
            i += 1
        elif arg == "--pipeline-border" and i + 1 < len(argv):
            mode = argv[i + 1]
            if mode == "simple":
                border_mode = BorderMode.SIMPLE
            elif mode == "ui":
                border_mode = BorderMode.UI
            else:
                border_mode = BorderMode.OFF
            i += 2
        elif arg == "--allow-unsafe":
            allow_unsafe = True
            i += 1
        else:
            i += 1

    if not source_name:
        print("Error: --pipeline-source is required")
        print(
            "Usage: python -m engine.app --pipeline-source <source> [--pipeline-effects <effects>] ..."
        )
        sys.exit(1)

    print(" \033[38;5;245mDirect pipeline construction\033[0m")
    print(f" Source: {source_name}")
    print(f" Effects: {effect_names}")
    print(f" Camera: {camera_type}")
    print(f" Display: {display_name}")
    print(f" UI Enabled: {ui_enabled}")

    # Create initial config and params
    params = PipelineParams()
    params.source = source_name
    params.camera_mode = camera_type if camera_type is not None else ""
    params.effect_order = effect_names
    params.border = border_mode

    # Create minimal config for validation
    config_obj = PipelineConfig(
        source=source_name,
        display=display_name or "",  # Will be filled by validation
        camera=camera_type if camera_type is not None else "",
        effects=effect_names,
    )

    # Run MVP validation
    result = validate_pipeline_config(config_obj, params, allow_unsafe=allow_unsafe)

    if result.warnings and not allow_unsafe:
        print(" \033[38;5;226mWarning: MVP validation found issues:\033[0m")
        for warning in result.warnings:
            print(f" - {warning}")

    if result.changes:
        print(" \033[38;5;226mApplied MVP defaults:\033[0m")
        for change in result.changes:
            print(f" {change}")

    if not result.valid:
        print(
            " \033[38;5;196mPipeline configuration invalid and could not be fixed\033[0m"
        )
        sys.exit(1)

    # Show MVP summary
    print(" \033[38;5;245mMVP Configuration:\033[0m")
    print(f" Source: {result.config.source}")
    print(f" Display: {result.config.display}")
    print(f" Camera: {result.config.camera or 'static (none)'}")
    print(f" Effects: {result.config.effects if result.config.effects else 'none'}")
    print(f" Border: {result.params.border}")

    # Load source items
    if source_name == "headlines":
        cached = load_cache()
        if cached:
            source_items = cached
        else:
            source_items = fetch_all_fast()
            if source_items:
                import threading

                def background_fetch():
                    full_items, _, _ = fetch_all()
                    save_cache(full_items)

                background_thread = threading.Thread(
                    target=background_fetch, daemon=True
                )
                background_thread.start()
    elif source_name == "fixture":
        source_items = load_cache()
        if not source_items:
            print(" \033[38;5;196mNo fixture cache available\033[0m")
            sys.exit(1)
    elif source_name == "poetry":
        source_items, _, _ = fetch_poetry()
    elif source_name in ("empty", "pipeline-inspect"):
        source_items = []
    else:
        print(f" \033[38;5;196mUnknown source: {source_name}\033[0m")
        sys.exit(1)

    if source_items is not None:
        print(f" \033[38;5;82mLoaded {len(source_items)} items\033[0m")

    # Set border mode
    if ui_enabled:
        border_mode = BorderMode.UI

    # Build pipeline using validated config and params
    params = result.params
    params.viewport_width = viewport_width if viewport_width is not None else 80
    params.viewport_height = viewport_height if viewport_height is not None else 24

    ctx = PipelineContext()
    ctx.params = params

    # Warn if the display is about to be auto-selected (none explicitly specified)
    if not result.config.display:
        print(
            " \033[38;5;226mWarning: No --pipeline-display specified, using default: terminal\033[0m"
        )
        print(
            " \033[38;5;245mTip: Use --pipeline-display null for headless mode (useful for testing)\033[0m"
        )

    # Create display using validated display name, defaulting to terminal if empty
    display_name = result.config.display or "terminal"
    display = DisplayRegistry.create(display_name)
    if not display:
        print(f" \033[38;5;196mFailed to create display: {display_name}\033[0m")
        sys.exit(1)

    # Set positioning mode
    if "--positioning" in sys.argv:
        idx = sys.argv.index("--positioning")
        if idx + 1 < len(sys.argv):
            params.positioning = sys.argv[idx + 1]

    display.init(0, 0)

    # Create pipeline using validated config
    pipeline = Pipeline(config=result.config, context=ctx)

    # Add stages
    # Source stage
    if source_name == "pipeline-inspect":
        introspection_source = PipelineIntrospectionSource(
            pipeline=None,
            viewport_width=params.viewport_width,
            viewport_height=params.viewport_height,
        )
        pipeline.add_stage(
            "source", DataSourceStage(introspection_source, name="pipeline-inspect")
        )
    elif source_name == "empty":
        empty_source = EmptyDataSource(
            width=params.viewport_width, height=params.viewport_height
        )
        pipeline.add_stage("source", DataSourceStage(empty_source, name="empty"))
    else:
        list_source = ListDataSource(source_items, name=source_name)
        pipeline.add_stage("source", DataSourceStage(list_source, name=source_name))

    # Add viewport filter and font for headline sources
    if source_name in ["headlines", "poetry", "fixture"]:
        pipeline.add_stage(
            "viewport_filter", ViewportFilterStage(name="viewport-filter")
        )
        pipeline.add_stage("font", FontStage(name="font"))
    else:
        # Fall back to simple conversion for other sources
        from engine.pipeline.adapters import SourceItemsToBufferStage

        pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))

    # Add camera
    speed = getattr(params, "camera_speed", 1.0)
    camera = None
    if camera_type == "feed":
        camera = Camera.feed(speed=speed)
    elif camera_type == "scroll":
        camera = Camera.scroll(speed=speed)
    elif camera_type == "horizontal":
        camera = Camera.horizontal(speed=speed)
    elif camera_type == "omni":
        camera = Camera.omni(speed=speed)
    elif camera_type == "floating":
        camera = Camera.floating(speed=speed)
    elif camera_type == "bounce":
        camera = Camera.bounce(speed=speed)

    if camera:
        pipeline.add_stage("camera", CameraStage(camera, name=camera_type))

    # Add effects
    effect_registry = get_registry()
    for effect_name in effect_names:
        effect = effect_registry.get(effect_name)
        if effect:
            pipeline.add_stage(
                f"effect_{effect_name}", create_stage_from_effect(effect, effect_name)
            )

    # Add display
    pipeline.add_stage("display", create_stage_from_display(display, display_name))

    pipeline.build()

    if not pipeline.initialize():
        print(" \033[38;5;196mFailed to initialize pipeline\033[0m")
        sys.exit(1)

    # Create UI panel if border mode is UI
    ui_panel = None
    if params.border == BorderMode.UI:
        ui_panel = UIPanel(UIConfig(panel_width=24, start_with_preset_picker=True))
        # Enable raw mode for terminal input if supported
        if hasattr(display, "set_raw_mode"):
            display.set_raw_mode(True)
        for stage in pipeline.stages.values():
            if isinstance(stage, EffectPluginStage):
                effect = stage._effect
                enabled = effect.config.enabled if hasattr(effect, "config") else True
                stage_control = ui_panel.register_stage(stage, enabled=enabled)
                stage_control.effect = effect  # type: ignore[attr-defined]

        if ui_panel.stages:
            first_stage = next(iter(ui_panel.stages))
            ui_panel.select_stage(first_stage)
            ctrl = ui_panel.stages[first_stage]
            if hasattr(ctrl, "effect"):
                effect = ctrl.effect
                if hasattr(effect, "config"):
                    config = effect.config
                    try:
                        import dataclasses

                        if dataclasses.is_dataclass(config):
                            # dataclasses.fields() yields Field objects, not pairs
                            for field_obj in dataclasses.fields(config):
                                field_name = field_obj.name
                                if field_name == "enabled":
                                    continue
                                value = getattr(config, field_name, None)
                                if value is not None:
                                    ctrl.params[field_name] = value
                                    ctrl.param_schema[field_name] = {
                                        "type": type(value).__name__,
                                        "min": 0
                                        if isinstance(value, (int, float))
                                        else None,
                                        "max": 1 if isinstance(value, float) else None,
                                        "step": 0.1 if isinstance(value, float) else 1,
                                    }
                    except Exception:
                        pass

    # Check for REPL effect in pipeline
    repl_effect = None
    for stage in pipeline.stages.values():
        if isinstance(stage, EffectPluginStage) and stage._effect.name == "repl":
            repl_effect = stage._effect
            print(
                " \033[38;5;46mREPL effect detected - Interactive mode enabled\033[0m"
            )
            break

    # Enable raw mode for REPL if present and not already enabled
    # (UI border mode already enabled it above)
    if repl_effect and ui_panel is None and hasattr(display, "set_raw_mode"):
        display.set_raw_mode(True)

    # Run pipeline loop
    from engine.display import render_ui_panel

    ctx.set("display", display)
    ctx.set("items", source_items)
    ctx.set("pipeline", pipeline)
    ctx.set("pipeline_order", pipeline.execution_order)

    current_width = params.viewport_width
    current_height = params.viewport_height

    # Only take dimensions from the display if --viewport wasn't explicitly set
    if "--viewport" not in sys.argv and hasattr(display, "get_dimensions"):
        current_width, current_height = display.get_dimensions()
        params.viewport_width = current_width
        params.viewport_height = current_height

    print(" \033[38;5;82mStarting pipeline...\033[0m")
    print(" \033[38;5;245mPress Ctrl+C to exit\033[0m\n")

    try:
        frame = 0
        while True:
            params.frame_number = frame
            ctx.params = params

            result = pipeline.execute(source_items)
            if not result.success:
                error_msg = f" ({result.error})" if result.error else ""
                print(f" \033[38;5;196mPipeline execution failed{error_msg}\033[0m")
                break

            # Render with UI panel
            if ui_panel is not None:
                buf = render_ui_panel(
                    result.data, current_width, current_height, ui_panel
                )
                display.show(buf, border=False)
            else:
                display.show(result.data, border=border_mode)

            # Handle keyboard events if UI is enabled
            if ui_panel is not None:
                # Try pygame first
                if hasattr(display, "_pygame"):
                    try:
                        import pygame

                        for event in pygame.event.get():
                            if event.type == pygame.KEYDOWN:
                                ui_panel.process_key_event(event.key, event.mod)
                    except Exception:
                        pass
                # Try terminal input
                elif hasattr(display, "get_input_keys"):
                    try:
                        keys = display.get_input_keys()
                        for key in keys:
                            ui_panel.process_key_event(key, 0)
                    except Exception:
                        pass

            # --- REPL Input Handling ---
            if repl_effect and hasattr(display, "get_input_keys"):
                # Get keyboard input (non-blocking)
                keys = display.get_input_keys(timeout=0.0)

                for key in keys:
                    if key == "ctrl_c":
                        # Request quit when Ctrl+C is pressed
                        if hasattr(display, "request_quit"):
                            display.request_quit()
                        else:
                            raise KeyboardInterrupt()
                    elif key == "return":
                        # Get command string before processing
                        cmd_str = repl_effect.state.current_command
                        if cmd_str:
                            repl_effect.process_command(cmd_str, ctx)
                            # Check for pending pipeline mutations
                            pending = repl_effect.get_pending_command()
                            if pending:
                                _handle_pipeline_mutation(pipeline, pending)
                    elif key == "up":
                        repl_effect.navigate_history(-1)
                    elif key == "down":
                        repl_effect.navigate_history(1)
                    elif key == "page_up":
                        # Positive = scroll up (back in time)
                        repl_effect.scroll_output(10)
                    elif key == "page_down":
                        # Negative = scroll down (forward in time)
                        repl_effect.scroll_output(-10)
                    elif key == "backspace":
                        repl_effect.backspace()
                    elif key.startswith("mouse:"):
                        # Mouse event format: mouse:button:x:y
                        parts = key.split(":")
                        if len(parts) >= 2:
                            button = int(parts[1])
                            if button == 64:  # Wheel up
                                repl_effect.scroll_output(3)
                            elif button == 65:  # Wheel down
                                repl_effect.scroll_output(-3)
                    elif len(key) == 1:
                        repl_effect.append_to_command(key)
            # --- End REPL Input Handling ---

            # Check for quit request
            if hasattr(display, "is_quit_requested") and display.is_quit_requested():
                if hasattr(display, "clear_quit_request"):
                    display.clear_quit_request()
                raise KeyboardInterrupt()

            time.sleep(1 / 60)
            frame += 1

    except KeyboardInterrupt:
        pass

    pipeline.cleanup()
    display.cleanup()
    print("\n \033[38;5;245mPipeline stopped\033[0m")
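The flag loop in `run_pipeline_mode_direct` above uses a hand-rolled two-token consume pattern rather than argparse: value-taking flags advance the index by two, boolean flags by one, and unknown arguments are skipped. A minimal standalone sketch of that pattern (the function and returned keys here are illustrative, not part of the engine API):

```python
def parse_flags(argv):
    """Parse --pipeline-* style flags using the two-token consume pattern."""
    opts = {"source": None, "effects": [], "ui": False}
    i = 1  # skip program name
    while i < len(argv):
        arg = argv[i]
        if arg == "--pipeline-source" and i + 1 < len(argv):
            opts["source"] = argv[i + 1]
            i += 2  # consume the flag and its value
        elif arg == "--pipeline-effects" and i + 1 < len(argv):
            opts["effects"] = [e.strip() for e in argv[i + 1].split(",") if e.strip()]
            i += 2
        elif arg == "--pipeline-ui":
            opts["ui"] = True
            i += 1  # boolean flag consumes only itself
        else:
            i += 1  # ignore unknown arguments
    return opts
```

The `i + 1 < len(argv)` guard on value-taking flags is what keeps a trailing flag with no value from raising an IndexError; it silently falls through to the unknown-argument branch instead.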
1080 engine/app/pipeline_runner.py Normal file
File diff suppressed because it is too large Load Diff
@@ -1,730 +1,73 @@
#!/usr/bin/env python3
"""
Benchmark runner for mainline - tests performance across effects and displays.

Usage:
    python -m engine.benchmark                 # Run all benchmarks
    python -m engine.benchmark --output report.md
    python -m engine.benchmark --displays terminal,websocket --effects glitch,fade
    python -m engine.benchmark --format json --output benchmark.json
    python -m engine.benchmark --hook          # Run benchmarks in hook mode (for CI)
    python -m engine.benchmark --displays null --iterations 20

Headless mode (default): all terminal output is suppressed during benchmarks.
"""

import argparse
import json
import sys
import time
from dataclasses import dataclass, field
from datetime import datetime
from io import StringIO
from pathlib import Path
from typing import Any

import numpy as np


@dataclass
class BenchmarkResult:
    """Result of a single benchmark run."""

    name: str
    display: str
    effect: str | None
    iterations: int
    total_time_ms: float
    avg_time_ms: float
    std_dev_ms: float
    min_ms: float
    max_ms: float
    fps: float
    chars_processed: int
    chars_per_sec: float


@dataclass
class BenchmarkReport:
    """Complete benchmark report."""

    timestamp: str
    python_version: str
    results: list[BenchmarkResult] = field(default_factory=list)
    summary: dict[str, Any] = field(default_factory=dict)


def get_sample_buffer(width: int = 80, height: int = 24) -> list[str]:
    """Generate a sample buffer for benchmarking."""
    lines = []
    for i in range(height):
        line = f"\x1b[32mLine {i}\x1b[0m " + "A" * (width - 10)
        lines.append(line)
    return lines

def benchmark_display(
    display_class,
    buffer: list[str],
    iterations: int = 100,
    display=None,
    reuse: bool = False,
) -> BenchmarkResult | None:
    """Benchmark a single display.

    Args:
        display_class: Display class to instantiate
        buffer: Buffer to display
        iterations: Number of iterations
        display: Optional existing display instance to reuse
        reuse: If True and display provided, use reuse mode
    """
    old_stdout = sys.stdout
    old_stderr = sys.stderr

    try:
        # Suppress display output during the benchmark
        sys.stdout = StringIO()
        sys.stderr = StringIO()

        if display is None:
            display = display_class()
            display.init(80, 24, reuse=False)
            should_cleanup = True
        else:
            should_cleanup = False

        times = []
        chars = sum(len(line) for line in buffer)

        for _ in range(iterations):
            t0 = time.perf_counter()
            display.show(buffer)
            elapsed = (time.perf_counter() - t0) * 1000
            times.append(elapsed)

        if should_cleanup and hasattr(display, "cleanup"):
            display.cleanup(quit_pygame=False)

    except Exception:
        return None
    finally:
        sys.stdout = old_stdout
        sys.stderr = old_stderr

    times_arr = np.array(times)

    return BenchmarkResult(
        name=f"display_{display_class.__name__}",
        display=display_class.__name__,
        effect=None,
        iterations=iterations,
        total_time_ms=sum(times),
        avg_time_ms=float(np.mean(times_arr)),
        std_dev_ms=float(np.std(times_arr)),
        min_ms=float(np.min(times_arr)),
        max_ms=float(np.max(times_arr)),
        fps=float(1000.0 / np.mean(times_arr)) if np.mean(times_arr) > 0 else 0.0,
        chars_processed=chars * iterations,
        chars_per_sec=float((chars * iterations) / (sum(times) / 1000))
        if sum(times) > 0
        else 0.0,
    )

def benchmark_effect_with_display(
    effect_class, display, buffer: list[str], iterations: int = 100, reuse: bool = False
) -> BenchmarkResult | None:
    """Benchmark an effect with a display.

    Args:
        effect_class: Effect class to instantiate
        display: Display instance to use
        buffer: Buffer to process and display
        iterations: Number of iterations
        reuse: If True, use reuse mode for display
    """
    old_stdout = sys.stdout
    old_stderr = sys.stderr

    try:
        from engine.effects.types import EffectConfig, EffectContext

        sys.stdout = StringIO()
        sys.stderr = StringIO()

        effect = effect_class()
        effect.configure(EffectConfig(enabled=True, intensity=1.0))

        ctx = EffectContext(
            terminal_width=80,
            terminal_height=24,
            scroll_cam=0,
            ticker_height=0,
            mic_excess=0.0,
            grad_offset=0.0,
            frame_number=0,
            has_message=False,
        )

        times = []
        chars = sum(len(line) for line in buffer)

        for _ in range(iterations):
            processed = effect.process(buffer, ctx)
            t0 = time.perf_counter()
            display.show(processed)
            elapsed = (time.perf_counter() - t0) * 1000
            times.append(elapsed)

        if not reuse and hasattr(display, "cleanup"):
            display.cleanup(quit_pygame=False)

    except Exception:
        return None
    finally:
        sys.stdout = old_stdout
        sys.stderr = old_stderr

    times_arr = np.array(times)

    return BenchmarkResult(
        name=f"effect_{effect_class.__name__}_with_{display.__class__.__name__}",
        display=display.__class__.__name__,
        effect=effect_class.__name__,
        iterations=iterations,
        total_time_ms=sum(times),
        avg_time_ms=float(np.mean(times_arr)),
        std_dev_ms=float(np.std(times_arr)),
        min_ms=float(np.min(times_arr)),
        max_ms=float(np.max(times_arr)),
        fps=float(1000.0 / np.mean(times_arr)) if np.mean(times_arr) > 0 else 0.0,
        chars_processed=chars * iterations,
        chars_per_sec=float((chars * iterations) / (sum(times) / 1000))
        if sum(times) > 0
        else 0.0,
    )

def get_available_displays():
    """Get available display classes."""
    from engine.display import (
        DisplayRegistry,
        NullDisplay,
        TerminalDisplay,
    )

    DisplayRegistry.initialize()

    displays = [
        ("null", NullDisplay),
        ("terminal", TerminalDisplay),
    ]

    try:
        from engine.display.backends.websocket import WebSocketDisplay

        displays.append(("websocket", WebSocketDisplay))
    except Exception:
        pass

    try:
        from engine.display.backends.sixel import SixelDisplay

        displays.append(("sixel", SixelDisplay))
    except Exception:
        pass

    try:
        from engine.display.backends.pygame import PygameDisplay

        displays.append(("pygame", PygameDisplay))
    except Exception:
        pass

    return displays

def get_available_effects():
    """Get available effect classes."""
    try:
        from engine.effects import get_registry

        try:
            from engine.effects.plugins import discover_plugins

            discover_plugins()
        except Exception:
            pass
    except Exception:
        return []

    effects = []
    registry = get_registry()

    for name, effect in registry.list_all().items():
        if effect:
            effect_cls = type(effect)
            effects.append((name, effect_cls))

    return effects

def run_benchmarks(
    displays: list[tuple[str, Any]] | None = None,
    effects: list[tuple[str, Any]] | None = None,
    iterations: int = 100,
    verbose: bool = False,
) -> BenchmarkReport:
    """Run all benchmarks and return report."""
    if displays is None:
        displays = get_available_displays()

    if effects is None:
        effects = get_available_effects()

    buffer = get_sample_buffer(80, 24)
    results = []

    if verbose:
        print(f"Running benchmarks ({iterations} iterations each)...")

    pygame_display = None
    for name, display_class in displays:
        if verbose:
            print(f"Benchmarking display: {name}")

        result = benchmark_display(display_class, buffer, iterations)
        if result:
            results.append(result)
            if verbose:
                print(f"  {result.fps:.1f} FPS, {result.avg_time_ms:.2f}ms avg")

            if name == "pygame":
                pygame_display = result

    if verbose:
        print()

    # Pygame can only be initialized once per process, so reuse one instance
    # for all effect benchmarks if the display benchmark succeeded.
    pygame_instance = None
    if pygame_display:
        try:
            from engine.display.backends.pygame import PygameDisplay

            PygameDisplay.reset_state()
            pygame_instance = PygameDisplay()
            pygame_instance.init(80, 24, reuse=False)
        except Exception:
            pygame_instance = None

    for effect_name, effect_class in effects:
        for display_name, display_class in displays:
            if display_name == "websocket":
                continue

            if verbose:
                print(f"Benchmarking effect: {effect_name} with {display_name}")

            if display_name == "pygame":
                if pygame_instance:
                    result = benchmark_effect_with_display(
                        effect_class, pygame_instance, buffer, iterations, reuse=True
                    )
                    if result:
                        results.append(result)
                        if verbose:
                            print(
                                f"  {result.fps:.1f} FPS, {result.avg_time_ms:.2f}ms avg"
                            )
                continue

            display = display_class()
            display.init(80, 24)
            result = benchmark_effect_with_display(
                effect_class, display, buffer, iterations
            )
            if result:
                results.append(result)
                if verbose:
                    print(f"  {result.fps:.1f} FPS, {result.avg_time_ms:.2f}ms avg")

    if pygame_instance:
        try:
            pygame_instance.cleanup(quit_pygame=True)
        except Exception:
            pass

    summary = generate_summary(results)

    return BenchmarkReport(
        timestamp=datetime.now().isoformat(),
        python_version=sys.version,
        results=results,
        summary=summary,
    )

def generate_summary(results: list[BenchmarkResult]) -> dict[str, Any]:
    """Generate summary statistics from results."""
    by_display: dict[str, list[BenchmarkResult]] = {}
    by_effect: dict[str, list[BenchmarkResult]] = {}

    for r in results:
        if r.display not in by_display:
            by_display[r.display] = []
        by_display[r.display].append(r)

        if r.effect:
            if r.effect not in by_effect:
                by_effect[r.effect] = []
            by_effect[r.effect].append(r)

    summary = {
        "by_display": {},
        "by_effect": {},
        "overall": {
            "total_tests": len(results),
            "displays_tested": len(by_display),
            "effects_tested": len(by_effect),
        },
    }

    for display, res in by_display.items():
        fps_values = [r.fps for r in res]
        summary["by_display"][display] = {
            "avg_fps": float(np.mean(fps_values)),
            "min_fps": float(np.min(fps_values)),
            "max_fps": float(np.max(fps_values)),
            "tests": len(res),
        }

    for effect, res in by_effect.items():
        fps_values = [r.fps for r in res]
        summary["by_effect"][effect] = {
            "avg_fps": float(np.mean(fps_values)),
            "min_fps": float(np.min(fps_values)),
            "max_fps": float(np.max(fps_values)),
            "tests": len(res),
        }

    return summary


DEFAULT_CACHE_PATH = Path.home() / ".mainline_benchmark_cache.json"

def load_baseline(cache_path: Path | None = None) -> dict[str, Any] | None:
    """Load baseline benchmark results from cache."""
    path = cache_path or DEFAULT_CACHE_PATH
    if not path.exists():
        return None
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return None


def save_baseline(
    results: list[BenchmarkResult],
    cache_path: Path | None = None,
) -> None:
    """Save benchmark results as baseline to cache."""
    path = cache_path or DEFAULT_CACHE_PATH
    baseline = {
        "timestamp": datetime.now().isoformat(),
        "results": {
            r.name: {
                "fps": r.fps,
                "avg_time_ms": r.avg_time_ms,
                "chars_per_sec": r.chars_per_sec,
            }
            for r in results
        },
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    with open(path, "w") as f:
        json.dump(baseline, f, indent=2)

def compare_with_baseline(
    results: list[BenchmarkResult],
    baseline: dict[str, Any],
    threshold: float = 0.2,
    verbose: bool = True,
) -> tuple[bool, list[str]]:
    """Compare current results with baseline. Returns (pass, messages)."""
    baseline_results = baseline.get("results", {})
    failures = []
    warnings = []

    for r in results:
        if r.name not in baseline_results:
            warnings.append(f"New test: {r.name} (no baseline)")
            continue

        b = baseline_results[r.name]
        if b["fps"] == 0:
            continue

        degradation = (b["fps"] - r.fps) / b["fps"]
        if degradation > threshold:
            failures.append(
                f"{r.name}: FPS degraded {degradation * 100:.1f}% "
                f"(baseline: {b['fps']:.1f}, current: {r.fps:.1f})"
            )
        elif verbose:
            print(f"  {r.name}: {r.fps:.1f} FPS (baseline: {b['fps']:.1f})")

    passed = len(failures) == 0
    messages = []
    if failures:
        messages.extend(failures)
    if warnings:
        messages.extend(warnings)

    return passed, messages

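The regression check in compare_with_baseline boils down to a relative-drop formula, degradation = (baseline_fps - current_fps) / baseline_fps, with a zero-FPS baseline skipped and a failure raised only above the threshold (0.2 by default). A standalone sketch of just that arithmetic (helper names here are illustrative, not part of the module):

```python
def fps_degradation(baseline_fps: float, current_fps: float) -> float:
    """Relative FPS drop versus baseline; negative values mean improvement."""
    return (baseline_fps - current_fps) / baseline_fps


def regressed(baseline_fps: float, current_fps: float, threshold: float = 0.2) -> bool:
    """True when FPS dropped by more than the threshold fraction."""
    if baseline_fps == 0:  # no meaningful baseline; skip, as the runner does
        return False
    return fps_degradation(baseline_fps, current_fps) > threshold
```

Note the strict `>` comparison: a drop of exactly the threshold (e.g. 80 FPS against a 100 FPS baseline at threshold 0.2) still passes, matching the `degradation > threshold` test above.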
def run_hook_mode(
    displays: list[tuple[str, Any]] | None = None,
    effects: list[tuple[str, Any]] | None = None,
    iterations: int = 20,
    threshold: float = 0.2,
    cache_path: Path | None = None,
    verbose: bool = False,
) -> int:
    """Run in hook mode: compare against baseline, exit 0 on pass, 1 on fail."""
    baseline = load_baseline(cache_path)

    if baseline is None:
        print("No baseline found. Run with --baseline to create one.")
        return 1

    report = run_benchmarks(displays, effects, iterations, verbose)

    passed, messages = compare_with_baseline(
        report.results, baseline, threshold, verbose
    )

    print("\n=== Benchmark Hook Results ===")
    if passed:
        print("PASSED - No significant performance degradation")
        return 0
    else:
        print("FAILED - Performance degradation detected:")
        for msg in messages:
            print(f"  - {msg}")
        return 1

def format_report_text(report: BenchmarkReport) -> str:
    """Format report as human-readable text."""
    lines = [
        "# Mainline Performance Benchmark Report",
        "",
        f"Generated: {report.timestamp}",
        f"Python: {report.python_version}",
        "",
        "## Summary",
        "",
        f"Total tests: {report.summary['overall']['total_tests']}",
        f"Displays tested: {report.summary['overall']['displays_tested']}",
        f"Effects tested: {report.summary['overall']['effects_tested']}",
        "",
        "## By Display",
        "",
    ]

    for display, stats in report.summary["by_display"].items():
        lines.append(f"### {display}")
        lines.append(f"- Avg FPS: {stats['avg_fps']:.1f}")
        lines.append(f"- Min FPS: {stats['min_fps']:.1f}")
        lines.append(f"- Max FPS: {stats['max_fps']:.1f}")
        lines.append(f"- Tests: {stats['tests']}")
        lines.append("")

    if report.summary["by_effect"]:
        lines.append("## By Effect")
        lines.append("")

        for effect, stats in report.summary["by_effect"].items():
            lines.append(f"### {effect}")
            lines.append(f"- Avg FPS: {stats['avg_fps']:.1f}")
            lines.append(f"- Min FPS: {stats['min_fps']:.1f}")
            lines.append(f"- Max FPS: {stats['max_fps']:.1f}")
            lines.append(f"- Tests: {stats['tests']}")
            lines.append("")

    lines.append("## Detailed Results")
    lines.append("")
    lines.append("| Display | Effect | FPS | Avg ms | StdDev ms | Min ms | Max ms |")
    lines.append("|---------|--------|-----|--------|-----------|--------|--------|")

    for r in report.results:
        effect_col = r.effect if r.effect else "-"
        lines.append(
            f"| {r.display} | {effect_col} | {r.fps:.1f} | {r.avg_time_ms:.2f} | "
            f"{r.std_dev_ms:.2f} | {r.min_ms:.2f} | {r.max_ms:.2f} |"
        )

    return "\n".join(lines)


def format_report_json(report: BenchmarkReport) -> str:
    """Format report as JSON."""
    data = {
        "timestamp": report.timestamp,
        "python_version": report.python_version,
        "summary": report.summary,
        "results": [
            {
                "name": r.name,
                "display": r.display,
                "effect": r.effect,
                "iterations": r.iterations,
                "total_time_ms": r.total_time_ms,
                "avg_time_ms": r.avg_time_ms,
                "std_dev_ms": r.std_dev_ms,
                "min_ms": r.min_ms,
                "max_ms": r.max_ms,
                "fps": r.fps,
                "chars_processed": r.chars_processed,
                "chars_per_sec": r.chars_per_sec,
            }
            for r in report.results
        ],
    }
    return json.dumps(data, indent=2)


def main():
    parser = argparse.ArgumentParser(description="Run performance benchmarks")
    parser.add_argument(
        "--displays",
        help="Comma-separated list of displays to test (default: all)",
    )
    parser.add_argument(
        "--effects",
        help="Comma-separated list of effects to test (default: all)",
    )
    parser.add_argument(
        "--iterations",
        type=int,
        default=100,
        help="Number of iterations per test (default: 100)",
    )
    parser.add_argument(
        "--output",
        help="Output file path (default: stdout)",
    )
    parser.add_argument(
        "--format",
        choices=["text", "json"],
        default="text",
        help="Output format (default: text)",
    )
    parser.add_argument(
        "--verbose",
        "-v",
        action="store_true",
        help="Show progress during benchmarking",
    )
    parser.add_argument(
        "--hook",
        action="store_true",
        help="Run in hook mode: compare against baseline, exit 0 pass, 1 fail",
    )
    parser.add_argument(
        "--baseline",
        action="store_true",
        help="Save current results as baseline for future hook comparisons",
    )
    parser.add_argument(
        "--threshold",
        type=float,
        default=0.2,
        help="Performance degradation threshold for hook mode (default: 0.2 = 20%%)",
    )
    parser.add_argument(
        "--cache",
        type=str,
        default=None,
        help="Path to baseline cache file (default: ~/.mainline_benchmark_cache.json)",
    )

    args = parser.parse_args()

    cache_path = Path(args.cache) if args.cache else DEFAULT_CACHE_PATH

    if args.hook:
        displays = None
        if args.displays:
            display_map = dict(get_available_displays())
            displays = [
                (name, display_map[name])
                for name in args.displays.split(",")
                if name in display_map
            ]

        effects = None
        if args.effects:
            effect_map = dict(get_available_effects())
            effects = [
                (name, effect_map[name])
                for name in args.effects.split(",")
                if name in effect_map
            ]

        return run_hook_mode(
            displays,
            effects,
            iterations=args.iterations,
            threshold=args.threshold,
            cache_path=cache_path,
            verbose=args.verbose,
        )

    displays = None
    if args.displays:
        display_map = dict(get_available_displays())
        displays = [
            (name, display_map[name])
            for name in args.displays.split(",")
            if name in display_map
        ]

    effects = None
    if args.effects:
        effect_map = dict(get_available_effects())
        effects = [
            (name, effect_map[name])
            for name in args.effects.split(",")
            if name in effect_map
        ]

    report = run_benchmarks(displays, effects, args.iterations, args.verbose)

    if args.baseline:
        save_baseline(report.results, cache_path)
        print(f"Baseline saved to {cache_path}")
        return 0

    if args.format == "json":
        output = format_report_json(report)
    else:
        output = format_report_text(report)

    if args.output:
        with open(args.output, "w") as f:
            f.write(output)
    else:
        print(output)

    return 0


if __name__ == "__main__":
    sys.exit(main())

386 engine/camera.py
@@ -6,6 +6,8 @@ Provides abstraction for camera motion in different modes:
- Horizontal: left/right movement
- Omni: combination of both
- Floating: sinusoidal/bobbing motion

The camera defines a visible viewport into a larger Canvas.
"""

import math
@@ -15,31 +17,130 @@ from enum import Enum, auto


class CameraMode(Enum):
    FEED = auto()  # Single item view (static or rapid cycling)
    SCROLL = auto()  # Smooth vertical scrolling (movie credits style)
    HORIZONTAL = auto()
    OMNI = auto()
    FLOATING = auto()
    BOUNCE = auto()
    RADIAL = auto()  # Polar coordinates (r, theta) for radial scanning


@dataclass
class CameraViewport:
    """Represents the visible viewport."""

    x: int
    y: int
    width: int
    height: int


@dataclass
class Camera:
    """Camera for viewport scrolling.

    The camera defines a visible viewport into a Canvas.
    It can be smaller than the canvas to allow scrolling,
    and supports zoom to scale the view.

    Attributes:
        x: Current horizontal offset (positive = scroll left)
        y: Current vertical offset (positive = scroll up)
        mode: Current camera mode
        speed: Base scroll speed
        zoom: Zoom factor (1.0 = 100%, 2.0 = 200% zoom out)
        canvas_width: Width of the canvas being viewed
        canvas_height: Height of the canvas being viewed
        custom_update: Optional custom update function
    """

    x: int = 0
    y: int = 0
    mode: CameraMode = CameraMode.FEED
    speed: float = 1.0
    zoom: float = 1.0
    canvas_width: int = 200  # Larger than viewport for scrolling
    canvas_height: int = 200
    custom_update: Callable[["Camera", float], None] | None = None
    _x_float: float = field(default=0.0, repr=False)
    _y_float: float = field(default=0.0, repr=False)
    _time: float = field(default=0.0, repr=False)

    @property
    def w(self) -> int:
        """Shorthand for viewport_width."""
        return self.viewport_width

    def set_speed(self, speed: float) -> None:
        """Set the camera scroll speed dynamically.

        This allows camera speed to be modulated during runtime
        via PipelineParams or directly.

        Args:
            speed: New speed value (0.0 = stopped, >0 = movement)
        """
        self.speed = max(0.0, speed)

    @property
    def h(self) -> int:
        """Shorthand for viewport_height."""
        return self.viewport_height

    @property
    def viewport_width(self) -> int:
        """Get the visible viewport width.

        This is the canvas width divided by zoom.
        """
        return max(1, int(self.canvas_width / self.zoom))

    @property
    def viewport_height(self) -> int:
        """Get the visible viewport height.

        This is the canvas height divided by zoom.
        """
        return max(1, int(self.canvas_height / self.zoom))

    def get_viewport(self, viewport_height: int | None = None) -> CameraViewport:
        """Get the current viewport bounds.

        Args:
            viewport_height: Optional viewport height to use instead of camera's viewport_height

        Returns:
            CameraViewport with position and size (clamped to canvas bounds)
        """
        vw = self.viewport_width
        vh = viewport_height if viewport_height is not None else self.viewport_height

        clamped_x = max(0, min(self.x, self.canvas_width - vw))
        clamped_y = max(0, min(self.y, self.canvas_height - vh))

        return CameraViewport(
            x=clamped_x,
            y=clamped_y,
            width=vw,
            height=vh,
        )

    def set_zoom(self, zoom: float) -> None:
        """Set the zoom factor.

        Args:
            zoom: Zoom factor (1.0 = 100%, 2.0 = zoomed out 2x, 0.5 = zoomed in 2x)
        """
        self.zoom = max(0.1, min(10.0, zoom))

    def update(self, dt: float) -> None:
        """Update camera position based on mode.

@@ -52,18 +153,50 @@ class Camera:
            self.custom_update(self, dt)
            return

        if self.mode == CameraMode.FEED:
            self._update_feed(dt)
        elif self.mode == CameraMode.SCROLL:
            self._update_scroll(dt)
        elif self.mode == CameraMode.HORIZONTAL:
            self._update_horizontal(dt)
        elif self.mode == CameraMode.OMNI:
            self._update_omni(dt)
        elif self.mode == CameraMode.FLOATING:
            self._update_floating(dt)
        elif self.mode == CameraMode.BOUNCE:
            self._update_bounce(dt)
        elif self.mode == CameraMode.RADIAL:
            self._update_radial(dt)

        # Bounce mode handles its own bounds checking
        if self.mode != CameraMode.BOUNCE:
            self._clamp_to_bounds()

    def _clamp_to_bounds(self) -> None:
        """Clamp camera position to stay within canvas bounds.

        Only clamps if the viewport is smaller than the canvas.
        If viewport equals canvas (no scrolling needed), allows any position
        for backwards compatibility with original behavior.
        """
        vw = self.viewport_width
        vh = self.viewport_height

        # Only clamp if there's room to scroll
        if vw < self.canvas_width:
            self.x = max(0, min(self.x, self.canvas_width - vw))
        if vh < self.canvas_height:
            self.y = max(0, min(self.y, self.canvas_height - vh))

    def _update_feed(self, dt: float) -> None:
        """Feed mode: rapid scrolling (1 row per frame at speed=1.0)."""
        self.y += int(self.speed * dt * 60)

    def _update_scroll(self, dt: float) -> None:
        """Scroll mode: smooth vertical scrolling with float accumulation."""
        self._y_float += self.speed * dt * 60
        self.y = int(self._y_float)

    def _update_horizontal(self, dt: float) -> None:
        self.x += int(self.speed * dt * 60)

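The difference between FEED's per-frame `int()` truncation and SCROLL's float accumulation matters for sub-integer speeds: truncating each frame discards any fractional step entirely, while accumulating preserves it. A minimal standalone sketch of the two approaches (the constants mirror the `speed * dt * 60` step above):

```python
def scroll(speed: float, dt: float, frames: int) -> tuple[int, int]:
    """Compare per-frame int truncation vs float accumulation over many frames."""
    y_int_only = 0
    y_float = 0.0
    for _ in range(frames):
        y_int_only += int(speed * dt * 60)  # truncates a sub-1.0 step to 0 every frame
        y_float += speed * dt * 60          # accumulates fractions, truncate once at the end
    return y_int_only, int(y_float)


# Half-speed at 64 fps: each step is 0.46875 rows, so the naive
# integer camera never moves while the accumulator covers 60 rows.
print(scroll(0.5, 1 / 64, 128))  # → (0, 60)
```

This is exactly why `_y_float` exists on the dataclass: `Camera.scroll()` defaults to `speed=0.5`, which would be motionless under per-frame truncation.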
@@ -77,31 +210,262 @@ class Camera:
        self.y = int(math.sin(self._time * 2) * base)
        self.x = int(math.cos(self._time * 1.5) * base * 0.5)

    def _update_bounce(self, dt: float) -> None:
        """Bouncing DVD-style camera that bounces off canvas edges."""
        vw = self.viewport_width
        vh = self.viewport_height

        # Initialize direction if not set
        if not hasattr(self, "_bounce_dx"):
            self._bounce_dx = 1
            self._bounce_dy = 1

        # Calculate max positions
        max_x = max(0, self.canvas_width - vw)
        max_y = max(0, self.canvas_height - vh)

        # Move
        move_speed = self.speed * dt * 60
        self.x += int(move_speed * self._bounce_dx)
        self.y += int(move_speed * self._bounce_dy)

        # Bounce horizontally - reverse direction when hitting bounds
        if self.x <= 0:
            self.x = 0
            self._bounce_dx = 1
        elif self.x >= max_x:
            self.x = max_x
            self._bounce_dx = -1

        # Bounce vertically
        if self.y <= 0:
            self.y = 0
            self._bounce_dy = 1
        elif self.y >= max_y:
            self.y = max_y
            self._bounce_dy = -1

    def _update_radial(self, dt: float) -> None:
        """Radial camera mode: polar coordinate scrolling (r, theta).

        The camera rotates around the center of the canvas while optionally
        moving outward/inward along rays. This enables:
        - Radar sweep animations
        - Pendulum view oscillation
        - Spiral scanning motion

        Uses polar coordinates internally:
        - _r_float: radial distance from center (accumulates smoothly)
        - _theta_float: angle in radians (accumulates smoothly)
        - Updates x, y based on conversion from polar to Cartesian
        """
        # Initialize radial state if needed
        if not hasattr(self, "_r_float"):
            self._r_float = 0.0
            self._theta_float = 0.0

        # Update angular position (rotation around center)
        # Speed controls rotation rate
        theta_speed = self.speed * dt * 1.0  # radians per second
        self._theta_float += theta_speed

        # Update radial position (inward/outward from center)
        # Can be modulated by external sensor
        if hasattr(self, "_radial_input"):
            r_input = self._radial_input
        else:
            # Default: slow outward drift
            r_input = 0.0

        r_speed = self.speed * dt * 20.0  # pixels per second
        self._r_float += r_input + r_speed * 0.01

        # Clamp radial position to canvas bounds
        max_r = min(self.canvas_width, self.canvas_height) / 2
        self._r_float = max(0.0, min(self._r_float, max_r))

        # Convert polar to Cartesian, centered at canvas center
        center_x = self.canvas_width / 2
        center_y = self.canvas_height / 2

        self.x = int(center_x + self._r_float * math.cos(self._theta_float))
        self.y = int(center_y + self._r_float * math.sin(self._theta_float))

        # Clamp to canvas bounds
        self._clamp_to_bounds()

    def set_radial_input(self, value: float) -> None:
        """Set radial input for sensor-driven radius modulation.

        Args:
            value: Sensor value (0-1) that modulates radial distance
        """
        self._radial_input = value * 10.0  # Scale to reasonable pixel range

    def set_radial_angle(self, angle: float) -> None:
        """Set radial angle directly (for OSC integration).

        Args:
            angle: Angle in radians (0 to 2π)
        """
        self._theta_float = angle

    def reset(self) -> None:
        """Reset camera position and state."""
        self.x = 0
        self.y = 0
        self._time = 0.0
        self.zoom = 1.0
        # Reset bounce direction state
        if hasattr(self, "_bounce_dx"):
            self._bounce_dx = 1
            self._bounce_dy = 1
        # Reset radial state
        if hasattr(self, "_r_float"):
            self._r_float = 0.0
            self._theta_float = 0.0

    def set_canvas_size(self, width: int, height: int) -> None:
        """Set the canvas size and clamp position if needed.

        Args:
            width: New canvas width
            height: New canvas height
        """
        self.canvas_width = width
        self.canvas_height = height
        self._clamp_to_bounds()

    def apply(
        self, buffer: list[str], viewport_width: int, viewport_height: int | None = None
    ) -> list[str]:
        """Apply camera viewport to a text buffer.

        Slices the buffer based on camera position (x, y) and viewport dimensions.
        Handles ANSI escape codes correctly for colored/styled text.

        Args:
            buffer: List of strings representing lines of text
            viewport_width: Width of the visible viewport in characters
            viewport_height: Height of the visible viewport (overrides camera's viewport_height if provided)

        Returns:
            Sliced buffer containing only the visible lines and columns
        """
        import re

        from engine.effects.legacy import vis_offset, vis_trunc

        if not buffer:
            return buffer

        # Get current viewport bounds (clamped to canvas size)
        viewport = self.get_viewport(viewport_height)

        # Use provided viewport_height if given, otherwise use camera's viewport
        vh = viewport_height if viewport_height is not None else viewport.height

        # Vertical slice: extract lines that fit in viewport height
        start_y = viewport.y
        end_y = min(viewport.y + vh, len(buffer))

        if start_y >= len(buffer):
            # Scrolled past end of buffer, return empty viewport
            return [""] * vh

        vertical_slice = buffer[start_y:end_y]

        # Horizontal slice: apply horizontal offset and truncate to width
        horizontal_slice = []
        for line in vertical_slice:
            # Apply horizontal offset (skip first x characters, handling ANSI)
            offset_line = vis_offset(line, viewport.x)
            # Truncate to viewport width (handling ANSI)
            truncated_line = vis_trunc(offset_line, viewport_width)

            # Pad line to full viewport width to prevent ghosting when panning
            # Skip padding for empty lines to preserve intentional blank lines
            visible_len = len(re.sub(r"\x1b\[[0-9;]*m", "", truncated_line))
            if visible_len < viewport_width and visible_len > 0:
                truncated_line += " " * (viewport_width - visible_len)

            horizontal_slice.append(truncated_line)

        # Pad with empty lines if needed to fill viewport height
        while len(horizontal_slice) < vh:
            horizontal_slice.append("")

        return horizontal_slice

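The padding step in `apply` measures on-screen width by stripping SGR escape sequences with the regex `\x1b\[[0-9;]*m`, so color codes do not count toward the viewport width. Isolated as a standalone sketch:

```python
import re

# Matches only SGR (color/style) escapes, same pattern as the padding step above
ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")


def visible_len(line: str) -> int:
    """Length of a line as it appears on screen, ignoring SGR escapes."""
    return len(ANSI_SGR.sub("", line))


colored = "\x1b[32mhello\x1b[0m world"  # green "hello", reset, " world"
print(visible_len(colored))  # → 11
print(len(colored))          # raw length is larger because of the escapes
```

Note the pattern only covers `...m`-terminated SGR sequences; cursor-movement escapes (e.g. `\x1b[2J`) would need a broader pattern if they ever appeared in the buffer.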
    @classmethod
    def feed(cls, speed: float = 1.0) -> "Camera":
        """Create a feed camera (rapid single-item scrolling, 1 row/frame at speed=1.0)."""
        return cls(mode=CameraMode.FEED, speed=speed, canvas_height=200)

    @classmethod
    def scroll(cls, speed: float = 0.5) -> "Camera":
        """Create a smooth scrolling camera (movie credits style).

        Uses float accumulation for sub-integer speeds.
        Sets canvas_width=0 so it matches viewport_width for proper text wrapping.
        """
        return cls(
            mode=CameraMode.SCROLL, speed=speed, canvas_width=0, canvas_height=200
        )

    @classmethod
    def vertical(cls, speed: float = 1.0) -> "Camera":
        """Deprecated: Use feed() or scroll() instead."""
        return cls(mode=CameraMode.FEED, speed=speed, canvas_height=200)

    @classmethod
    def horizontal(cls, speed: float = 1.0) -> "Camera":
        """Create a horizontal scrolling camera."""
        return cls(mode=CameraMode.HORIZONTAL, speed=speed, canvas_width=200)

    @classmethod
    def omni(cls, speed: float = 1.0) -> "Camera":
        """Create an omnidirectional scrolling camera."""
        return cls(
            mode=CameraMode.OMNI, speed=speed, canvas_width=200, canvas_height=200
        )

    @classmethod
    def floating(cls, speed: float = 1.0) -> "Camera":
        """Create a floating/bobbing camera."""
        return cls(
            mode=CameraMode.FLOATING, speed=speed, canvas_width=200, canvas_height=200
        )

    @classmethod
    def bounce(cls, speed: float = 1.0) -> "Camera":
        """Create a bouncing DVD-style camera that bounces off canvas edges."""
        return cls(
            mode=CameraMode.BOUNCE, speed=speed, canvas_width=200, canvas_height=200
        )

    @classmethod
    def radial(cls, speed: float = 1.0) -> "Camera":
        """Create a radial camera (polar coordinate scanning).

        The camera rotates around the center of the canvas with smooth angular motion.
        Enables radar sweep, pendulum view, and spiral scanning animations.

        Args:
            speed: Rotation speed (higher = faster rotation)

        Returns:
            Camera configured for radial polar coordinate scanning
        """
        cam = cls(
            mode=CameraMode.RADIAL, speed=speed, canvas_width=200, canvas_height=200
        )
        # Initialize radial state
        cam._r_float = 0.0
        cam._theta_float = 0.0
        return cam

    @classmethod
    def custom(cls, update_fn: Callable[["Camera", float], None]) -> "Camera":

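The polar-to-Cartesian step inside `_update_radial` can be sketched on its own (the center constants assume the 200x200 canvas the `radial()` factory configures; this is an illustration, not the engine's code):

```python
import math


def polar_step(r: float, theta: float, speed: float, dt: float):
    """Advance the angle by speed*dt, then convert (r, theta) to canvas x, y."""
    theta += speed * dt          # angular update, radians per second as in _update_radial
    cx, cy = 100.0, 100.0        # center of an assumed 200x200 canvas
    x = int(cx + r * math.cos(theta))
    y = int(cy + r * math.sin(theta))
    return x, y, theta


# A quarter turn at radius 50 lands the camera directly "below" center
# (positive y is downward in canvas coordinates).
x, y, theta = polar_step(r=50.0, theta=0.0, speed=math.pi / 2, dt=1.0)
print(x, y)  # → 100 150
```

Sweeping `theta` continuously while holding `r` fixed gives the radar-sweep motion; modulating `r` per frame (as `set_radial_input` does) turns it into a spiral.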
186 engine/canvas.py Normal file
@@ -0,0 +1,186 @@
"""
Canvas - 2D surface for rendering.

The Canvas represents a full rendered surface that can be larger than the display.
The Camera then defines the visible viewport into this canvas.
"""

from dataclasses import dataclass


@dataclass
class CanvasRegion:
    """A rectangular region on the canvas."""

    x: int
    y: int
    width: int
    height: int

    def is_valid(self) -> bool:
        """Check if region has positive dimensions."""
        return self.width > 0 and self.height > 0

    def rows(self) -> set[int]:
        """Return set of row indices in this region."""
        return set(range(self.y, self.y + self.height))


class Canvas:
    """2D canvas for rendering content.

    The canvas is a 2D grid of cells that can hold text content.
    It can be larger than the visible viewport (display).

    Attributes:
        width: Total width in characters
        height: Total height in characters
    """

    def __init__(self, width: int = 80, height: int = 24):
        self.width = width
        self.height = height
        self._grid: list[list[str]] = [
            [" " for _ in range(width)] for _ in range(height)
        ]
        self._dirty_regions: list[CanvasRegion] = []  # Track dirty regions

    def clear(self) -> None:
        """Clear the entire canvas."""
        self._grid = [[" " for _ in range(self.width)] for _ in range(self.height)]
        self._dirty_regions = [CanvasRegion(0, 0, self.width, self.height)]

    def mark_dirty(self, x: int, y: int, width: int, height: int) -> None:
        """Mark a region as dirty (caller declares what they changed)."""
        self._dirty_regions.append(CanvasRegion(x, y, width, height))

    def get_dirty_regions(self) -> list[CanvasRegion]:
        """Get all dirty regions and clear the set."""
        regions = self._dirty_regions
        self._dirty_regions = []
        return regions

    def get_dirty_rows(self) -> set[int]:
        """Get union of all dirty rows."""
        rows: set[int] = set()
        for region in self._dirty_regions:
            rows.update(region.rows())
        return rows

    def is_dirty(self) -> bool:
        """Check if any region is dirty."""
        return len(self._dirty_regions) > 0

    def get_region(self, x: int, y: int, width: int, height: int) -> list[list[str]]:
        """Get a rectangular region from the canvas.

        Args:
            x: Left position
            y: Top position
            width: Region width
            height: Region height

        Returns:
            2D list of characters (height rows, width columns)
        """
        region: list[list[str]] = []
        for py in range(y, y + height):
            row: list[str] = []
            for px in range(x, x + width):
                if 0 <= py < self.height and 0 <= px < self.width:
                    row.append(self._grid[py][px])
                else:
                    row.append(" ")
            region.append(row)
        return region

    def get_region_flat(self, x: int, y: int, width: int, height: int) -> list[str]:
        """Get a rectangular region as flat list of lines.

        Args:
            x: Left position
            y: Top position
            width: Region width
            height: Region height

        Returns:
            List of strings (one per row)
        """
        region = self.get_region(x, y, width, height)
        return ["".join(row) for row in region]

    def put_region(self, x: int, y: int, content: list[list[str]]) -> None:
        """Put content into a rectangular region on the canvas.

        Args:
            x: Left position
            y: Top position
            content: 2D list of characters to place
        """
        height = len(content) if content else 0
        width = len(content[0]) if height > 0 else 0

        for py, row in enumerate(content):
            for px, char in enumerate(row):
                canvas_x = x + px
                canvas_y = y + py
                if 0 <= canvas_y < self.height and 0 <= canvas_x < self.width:
                    self._grid[canvas_y][canvas_x] = char

        if width > 0 and height > 0:
            self.mark_dirty(x, y, width, height)

    def put_text(self, x: int, y: int, text: str) -> None:
        """Put a single line of text at position.

        Args:
            x: Left position
            y: Row position
            text: Text to place
        """
        text_len = len(text)
        for i, char in enumerate(text):
            canvas_x = x + i
            if 0 <= canvas_x < self.width and 0 <= y < self.height:
                self._grid[y][canvas_x] = char

        if text_len > 0:
            self.mark_dirty(x, y, text_len, 1)

    def fill(self, x: int, y: int, width: int, height: int, char: str = " ") -> None:
        """Fill a rectangular region with a character.

        Args:
            x: Left position
            y: Top position
            width: Region width
            height: Region height
            char: Character to fill with
        """
        for py in range(y, y + height):
            for px in range(x, x + width):
                if 0 <= py < self.height and 0 <= px < self.width:
                    self._grid[py][px] = char

        if width > 0 and height > 0:
            self.mark_dirty(x, y, width, height)

    def resize(self, width: int, height: int) -> None:
        """Resize the canvas.

        Args:
            width: New width
            height: New height
        """
        if width == self.width and height == self.height:
            return

        new_grid: list[list[str]] = [[" " for _ in range(width)] for _ in range(height)]

        for py in range(min(self.height, height)):
            for px in range(min(self.width, width)):
                new_grid[py][px] = self._grid[py][px]

        self.width = width
        self.height = height
        self._grid = new_grid

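The dirty-region bookkeeping above lets a renderer redraw only the rows that changed. A standalone sketch of the row-union logic (`Region` is a minimal stand-in re-declared here, not an import of the engine's `CanvasRegion`):

```python
from dataclasses import dataclass


@dataclass
class Region:  # minimal stand-in for CanvasRegion
    x: int
    y: int
    width: int
    height: int

    def rows(self) -> set[int]:
        """Row indices this region covers, same as CanvasRegion.rows()."""
        return set(range(self.y, self.y + self.height))


# Two overlapping writes dirty rows 2-3 and 3-5; the union is what
# get_dirty_rows() would hand to the renderer.
dirty = [Region(0, 2, 10, 2), Region(5, 3, 3, 3)]
rows: set[int] = set()
for r in dirty:
    rows |= r.rows()
print(sorted(rows))  # → [2, 3, 4, 5]
```

Using a set makes overlapping regions free: each row is redrawn at most once per frame regardless of how many writes touched it.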
@@ -130,8 +130,10 @@ class Config:
    script_fonts: dict[str, str] = field(default_factory=_get_platform_font_paths)

    display: str = "pygame"
    positioning: str = "mixed"
    websocket: bool = False
    websocket_port: int = 8765
    theme: str = "green"

    @classmethod
    def from_args(cls, argv: list[str] | None = None) -> "Config":
@@ -173,8 +175,10 @@ class Config:
            kata_glyphs="ハミヒーウシナモニサワツオリアホテマケメエカキムユラセネスタヌヘ",
            script_fonts=_get_platform_font_paths(),
            display=_arg_value("--display", argv) or "terminal",
            positioning=_arg_value("--positioning", argv) or "mixed",
            websocket="--websocket" in argv,
            websocket_port=_arg_int("--websocket-port", 8765, argv),
            theme=_arg_value("--theme", argv) or "green",
        )

@@ -246,6 +250,40 @@ DEMO = "--demo" in sys.argv
DEMO_EFFECT_DURATION = 5.0  # seconds per effect
PIPELINE_DEMO = "--pipeline-demo" in sys.argv

# ─── THEME MANAGEMENT ─────────────────────────────────────────
ACTIVE_THEME = None


def set_active_theme(theme_id: str = "green"):
    """Set the active theme by ID.

    Args:
        theme_id: Theme identifier from theme registry (e.g., "green", "orange", "purple")

    Raises:
        KeyError: If theme_id is not in the theme registry

    Side Effects:
        Sets the ACTIVE_THEME global variable
    """
    global ACTIVE_THEME
    from engine import themes

    ACTIVE_THEME = themes.get_theme(theme_id)


# Initialize theme on module load (lazy to avoid circular dependency)
def _init_theme():
    theme_id = _arg_value("--theme", sys.argv) or "green"
    try:
        set_active_theme(theme_id)
    except KeyError:
        pass  # Theme not found, keep None


_init_theme()


# ─── PIPELINE MODE (new unified architecture) ─────────────
PIPELINE_MODE = "--pipeline" in sys.argv
PIPELINE_PRESET = _arg_value("--pipeline-preset", sys.argv) or "demo"
@@ -256,6 +294,9 @@ PRESET = _arg_value("--preset", sys.argv)
# ─── PIPELINE DIAGRAM ────────────────────────────────────
PIPELINE_DIAGRAM = "--pipeline-diagram" in sys.argv

# ─── THEME ──────────────────────────────────────────────────
THEME = _arg_value("--theme", sys.argv) or "green"


def set_font_selection(font_path=None, font_index=None):
    """Set runtime primary font selection."""

@@ -1,181 +0,0 @@
"""
Stream controller - manages input sources and orchestrates the render stream.
"""

from engine.config import Config, get_config
from engine.display import (
    DisplayRegistry,
    KittyDisplay,
    MultiDisplay,
    NullDisplay,
    PygameDisplay,
    SixelDisplay,
    TerminalDisplay,
    WebSocketDisplay,
)
from engine.effects.controller import handle_effects_command
from engine.eventbus import EventBus
from engine.events import EventType, StreamEvent
from engine.mic import MicMonitor
from engine.ntfy import NtfyPoller
from engine.scroll import stream


def _get_display(config: Config):
    """Get the appropriate display based on config."""
    DisplayRegistry.initialize()
    display_mode = config.display.lower()

    displays = []

    if display_mode in ("terminal", "both"):
        displays.append(TerminalDisplay())

    if display_mode in ("websocket", "both"):
        ws = WebSocketDisplay(host="0.0.0.0", port=config.websocket_port)
        ws.start_server()
        ws.start_http_server()
        displays.append(ws)

    if display_mode == "sixel":
        displays.append(SixelDisplay())

    if display_mode == "kitty":
        displays.append(KittyDisplay())

    if display_mode == "pygame":
        displays.append(PygameDisplay())

    if not displays:
        return NullDisplay()

    if len(displays) == 1:
        return displays[0]

    return MultiDisplay(displays)
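The tail of `_get_display` falls back to `NullDisplay`, unwraps a single backend, and otherwise fans out through `MultiDisplay`. A minimal sketch of that fan-out pattern (the `MultiDisplay` body and `Recorder` stub here are illustrative, not the project's actual classes):

```python
class MultiDisplay:
    """Forward every display call to a list of child displays."""

    def __init__(self, displays):
        self.displays = displays

    def show(self, buffer):
        for d in self.displays:
            d.show(buffer)

    def cleanup(self):
        for d in self.displays:
            d.cleanup()


class Recorder:
    """Stub backend that records what it was asked to show."""

    def __init__(self):
        self.frames = []

    def show(self, buffer):
        self.frames.append(list(buffer))

    def cleanup(self):
        pass


a, b = Recorder(), Recorder()
multi = MultiDisplay([a, b])
multi.show(["hello"])
# both children received the same frame
assert a.frames == b.frames == [["hello"]]
```

Because every backend exposes the same `show`/`cleanup` surface, callers never need to know whether they hold one display or many.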


class StreamController:
    """Controls the stream lifecycle - initializes sources and runs the stream."""

    _topics_warmed = False

    def __init__(self, config: Config | None = None, event_bus: EventBus | None = None):
        self.config = config or get_config()
        self.event_bus = event_bus
        self.mic: MicMonitor | None = None
        self.ntfy: NtfyPoller | None = None
        self.ntfy_cc: NtfyPoller | None = None

    @classmethod
    def warmup_topics(cls) -> None:
        """Warm up ntfy topics lazily (creates them if they don't exist)."""
        if cls._topics_warmed:
            return

        import urllib.request

        topics = [
            "https://ntfy.sh/klubhaus_terminal_mainline_cc_cmd",
            "https://ntfy.sh/klubhaus_terminal_mainline_cc_resp",
            "https://ntfy.sh/klubhaus_terminal_mainline",
        ]

        for topic in topics:
            try:
                req = urllib.request.Request(
                    topic,
                    data=b"init",
                    headers={
                        "User-Agent": "mainline/0.1",
                        "Content-Type": "text/plain",
                    },
                    method="POST",
                )
                urllib.request.urlopen(req, timeout=5)
            except Exception:
                pass

        cls._topics_warmed = True
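`warmup_topics` is a one-shot class method: the class-level `_topics_warmed` flag makes repeat calls free. The same idempotent-initialization pattern, with the network POSTs replaced by a counter purely for illustration:

```python
class Warmup:
    """One-shot initialization guarded by a class-level flag."""

    _warmed = False
    calls = 0

    @classmethod
    def warm(cls):
        if cls._warmed:
            return
        cls.calls += 1  # stands in for the one-time topic POSTs
        cls._warmed = True


Warmup.warm()
Warmup.warm()
Warmup.warm()
assert Warmup.calls == 1  # the work ran exactly once
```

Keeping the flag on the class (not the instance) means every `StreamController` shares the warm-up, which is the right scope for a process-wide side effect.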

    def initialize_sources(self) -> tuple[bool, bool]:
        """Initialize microphone and ntfy sources.

        Returns:
            (mic_ok, ntfy_ok) - success status for each source
        """
        self.mic = MicMonitor(threshold_db=self.config.mic_threshold_db)
        mic_ok = self.mic.start() if self.mic.available else False

        self.ntfy = NtfyPoller(
            self.config.ntfy_topic,
            reconnect_delay=self.config.ntfy_reconnect_delay,
            display_secs=self.config.message_display_secs,
        )
        ntfy_ok = self.ntfy.start()

        self.ntfy_cc = NtfyPoller(
            self.config.ntfy_cc_cmd_topic,
            reconnect_delay=self.config.ntfy_reconnect_delay,
            display_secs=5,
        )
        self.ntfy_cc.subscribe(self._handle_cc_message)
        ntfy_cc_ok = self.ntfy_cc.start()

        return bool(mic_ok), ntfy_ok and ntfy_cc_ok

    def _handle_cc_message(self, event) -> None:
        """Handle incoming C&C message - like a serial port control interface."""
        import urllib.request

        cmd = event.body.strip() if hasattr(event, "body") else str(event).strip()
        if not cmd.startswith("/"):
            return

        response = handle_effects_command(cmd)

        topic_url = self.config.ntfy_cc_resp_topic.replace("/json", "")
        data = response.encode("utf-8")
        req = urllib.request.Request(
            topic_url,
            data=data,
            headers={"User-Agent": "mainline/0.1", "Content-Type": "text/plain"},
            method="POST",
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except Exception:
            pass

    def run(self, items: list) -> None:
        """Run the stream with initialized sources."""
        if self.mic is None or self.ntfy is None:
            self.initialize_sources()

        if self.event_bus:
            self.event_bus.publish(
                EventType.STREAM_START,
                StreamEvent(
                    event_type=EventType.STREAM_START,
                    headline_count=len(items),
                ),
            )

        display = _get_display(self.config)
        stream(items, self.ntfy, self.mic, display)
        if display:
            display.cleanup()

        if self.event_bus:
            self.event_bus.publish(
                EventType.STREAM_END,
                StreamEvent(
                    event_type=EventType.STREAM_END,
                    headline_count=len(items),
                ),
            )

    def cleanup(self) -> None:
        """Clean up resources."""
        if self.mic:
            self.mic.stop()
engine/data_sources/__init__.py (new file, 12 lines)
@@ -0,0 +1,12 @@
"""
Data source implementations for the pipeline architecture.

Import directly from submodules:
    from engine.data_sources.sources import DataSource, SourceItem, HeadlinesDataSource
    from engine.data_sources.pipeline_introspection import PipelineIntrospectionSource
"""

# Re-export for convenience
from engine.data_sources.sources import ImageItem, SourceItem

__all__ = ["ImageItem", "SourceItem"]
engine/data_sources/checkerboard.py (new file, 60 lines)
@@ -0,0 +1,60 @@
"""Checkerboard data source for visual pattern generation."""

from engine.data_sources.sources import DataSource, SourceItem


class CheckerboardDataSource(DataSource):
    """Data source that generates a checkerboard pattern.

    Creates a grid of alternating characters, useful for testing motion effects
    and camera movement. The pattern is static; movement comes from camera panning.
    """

    def __init__(
        self,
        width: int = 200,
        height: int = 200,
        square_size: int = 10,
        char_a: str = "#",
        char_b: str = " ",
    ):
        """Initialize checkerboard data source.

        Args:
            width: Total pattern width in characters
            height: Total pattern height in lines
            square_size: Size of each checker square in characters
            char_a: Character for "filled" squares (default: '#')
            char_b: Character for "empty" squares (default: ' ')
        """
        self.width = width
        self.height = height
        self.square_size = square_size
        self.char_a = char_a
        self.char_b = char_b

    @property
    def name(self) -> str:
        return "checkerboard"

    @property
    def is_dynamic(self) -> bool:
        return False

    def fetch(self) -> list[SourceItem]:
        """Generate the checkerboard pattern as a single SourceItem."""
        lines = []
        for y in range(self.height):
            line_chars = []
            for x in range(self.width):
                # Determine which square this position belongs to
                square_x = x // self.square_size
                square_y = y // self.square_size
                # Alternate pattern based on parity of square coordinates
                if (square_x + square_y) % 2 == 0:
                    line_chars.append(self.char_a)
                else:
                    line_chars.append(self.char_b)
            lines.append("".join(line_chars))
        content = "\n".join(lines)
        return [SourceItem(content=content, source="checkerboard", timestamp="0")]
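The parity test `(square_x + square_y) % 2` is the whole trick: squares whose grid coordinates sum to an even number get `char_a`, the rest get `char_b`. A condensed, standalone version of the same generator:

```python
def checkerboard(width, height, square_size, char_a="#", char_b=" "):
    """Rows of a checkerboard; squares alternate on coordinate parity."""
    rows = []
    for y in range(height):
        row = "".join(
            char_a if (x // square_size + y // square_size) % 2 == 0 else char_b
            for x in range(width)
        )
        rows.append(row)
    return rows


# 4x4 board with 2-character squares
assert checkerboard(4, 4, 2) == ["##  ", "##  ", "  ##", "  ##"]
```

Integer division maps each cell to its square, so changing `square_size` rescales the pattern without touching the parity logic.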

engine/data_sources/pipeline_introspection.py (new file, 312 lines)
@@ -0,0 +1,312 @@
"""
Pipeline introspection source - renders a live visualization of the pipeline DAG and metrics.

This DataSource introspects one or more Pipeline instances and renders
an ASCII visualization showing:
- Stage DAG with signal flow connections
- Per-stage execution times
- Sparkline of frame times
- Stage breakdown bars

Example:
    source = PipelineIntrospectionSource(pipeline=my_pipeline)
    items = source.fetch()  # Returns ASCII visualization
"""

from typing import TYPE_CHECKING

from engine.data_sources.sources import DataSource, SourceItem

if TYPE_CHECKING:
    from engine.pipeline.controller import Pipeline


SPARKLINE_CHARS = " ▁▂▃▄▅▆▇█"
BAR_CHARS = " ▁▂▃▄▅▆▇█"


class PipelineIntrospectionSource(DataSource):
    """Data source that renders a live pipeline introspection visualization.

    Renders:
    - DAG of stages with signal flow
    - Per-stage execution times
    - Sparkline of frame history
    - Stage breakdown bars
    """

    def __init__(
        self,
        pipeline: "Pipeline | None" = None,
        viewport_width: int = 100,
        viewport_height: int = 35,
    ):
        self._pipeline = pipeline  # May be None initially, set later via set_pipeline()
        self.viewport_width = viewport_width
        self.viewport_height = viewport_height
        self.frame = 0
        # Ready as soon as at least one pipeline is attached
        self._ready = pipeline is not None

    def set_pipeline(self, pipeline: "Pipeline") -> None:
        """Set the pipeline to introspect (call after pipeline is built)."""
        self._pipeline = [pipeline]  # Wrap in list for iteration
        self._ready = True

    @property
    def ready(self) -> bool:
        """Check if source is ready to fetch."""
        return self._ready

    @property
    def name(self) -> str:
        return "pipeline-inspect"

    @property
    def is_dynamic(self) -> bool:
        return True

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.NONE}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.SOURCE_ITEMS}

    def add_pipeline(self, pipeline: "Pipeline") -> None:
        """Add a pipeline to visualize."""
        if self._pipeline is None:
            self._pipeline = [pipeline]
        elif isinstance(self._pipeline, list):
            self._pipeline.append(pipeline)
        else:
            self._pipeline = [self._pipeline, pipeline]
        self._ready = True

    def remove_pipeline(self, pipeline: "Pipeline") -> None:
        """Remove a pipeline from visualization."""
        if self._pipeline is None:
            return
        elif isinstance(self._pipeline, list):
            self._pipeline = [p for p in self._pipeline if p is not pipeline]
            if not self._pipeline:
                self._pipeline = None
                self._ready = False
        elif self._pipeline is pipeline:
            self._pipeline = None
            self._ready = False

    def fetch(self) -> list[SourceItem]:
        """Fetch the introspection visualization."""
        if not self._ready:
            # Return a placeholder until ready
            return [
                SourceItem(
                    content="Initializing...",
                    source="pipeline-inspect",
                    timestamp="init",
                )
            ]

        lines = self._render()
        self.frame += 1
        content = "\n".join(lines)
        return [
            SourceItem(
                content=content, source="pipeline-inspect", timestamp=f"f{self.frame}"
            )
        ]

    def get_items(self) -> list[SourceItem]:
        return self.fetch()

    def _render(self) -> list[str]:
        """Render the full visualization."""
        lines: list[str] = []

        # Header
        lines.extend(self._render_header())

        # Render pipeline(s) if ready
        if self._ready:
            for pipeline in self._pipelines:
                lines.extend(self._render_pipeline(pipeline))

        # Footer with sparkline
        lines.extend(self._render_footer())

        return lines

    @property
    def _pipelines(self) -> list:
        """Return pipelines as a list for iteration."""
        if self._pipeline is None:
            return []
        elif isinstance(self._pipeline, list):
            return self._pipeline
        else:
            return [self._pipeline]

    def _render_header(self) -> list[str]:
        """Render the header with frame info and metrics summary."""
        lines: list[str] = []

        if not self._pipeline:
            return ["PIPELINE INTROSPECTION"]

        # Aggregate metrics across pipelines
        total_ms = 0.0
        fps = 0.0
        frame_count = 0

        for pipeline in self._pipelines:
            try:
                metrics = pipeline.get_metrics_summary()
                if metrics and "error" not in metrics:
                    # Get avg_ms from pipeline metrics
                    pipeline_avg = metrics.get("pipeline", {}).get("avg_ms", 0)
                    total_ms = max(total_ms, pipeline_avg)
                    # Derive FPS from avg_ms
                    if pipeline_avg > 0:
                        fps = max(fps, 1000.0 / pipeline_avg)
                    frame_count = max(frame_count, metrics.get("frame_count", 0))
            except Exception:
                pass

        header = f"PIPELINE INTROSPECTION -- frame: {self.frame} -- avg: {total_ms:.1f}ms -- fps: {fps:.1f}"
        lines.append(header)

        return lines

    def _render_pipeline(self, pipeline: "Pipeline") -> list[str]:
        """Render a single pipeline's DAG."""
        lines: list[str] = []

        stages = pipeline.stages
        execution_order = pipeline.execution_order

        if not stages:
            lines.append(" (no stages)")
            return lines

        # Fetch the metrics summary once rather than once per stage
        try:
            metrics = pipeline.get_metrics_summary()
        except Exception:
            metrics = {}

        # Build stage info
        stage_infos: list[dict] = []
        for name in execution_order:
            stage = stages.get(name)
            if not stage:
                continue

            stage_ms = metrics.get("stages", {}).get(name, {}).get("avg_ms", 0.0)

            stage_infos.append(
                {
                    "name": name,
                    "category": stage.category,
                    "ms": stage_ms,
                }
            )

        # Calculate total time for percentages
        total_time = sum(s["ms"] for s in stage_infos) or 1.0

        # Render DAG - group by category
        lines.append("")
        lines.append(" Signal Flow:")

        # Group stages by category for display
        categories: dict[str, list[dict]] = {}
        for info in stage_infos:
            cat = info["category"]
            if cat not in categories:
                categories[cat] = []
            categories[cat].append(info)

        # Render categories in order
        cat_order = ["source", "render", "effect", "overlay", "display", "system"]

        for cat in cat_order:
            if cat not in categories:
                continue

            cat_stages = categories[cat]
            cat_names = [s["name"] for s in cat_stages]
            lines.append(f" {cat}: {' → '.join(cat_names)}")

        # Render timing breakdown
        lines.append("")
        lines.append(" Stage Timings:")

        for info in stage_infos:
            name = info["name"]
            ms = info["ms"]
            pct = (ms / total_time) * 100
            bar = self._render_bar(pct, 20)
            lines.append(f" {name:12s} {ms:6.2f}ms {bar} {pct:5.1f}%")

        lines.append("")

        return lines

    def _render_footer(self) -> list[str]:
        """Render the footer with sparkline."""
        lines: list[str] = []

        # Get frame history from first pipeline
        pipelines = self._pipelines
        if pipelines:
            try:
                frame_times = pipelines[0].get_frame_times()
            except Exception:
                frame_times = []
        else:
            frame_times = []

        if frame_times:
            sparkline = self._render_sparkline(frame_times[-60:], 50)
            lines.append(f" Frame Time History (last {len(frame_times[-60:])} frames)")
            lines.append(f" {sparkline}")
        else:
            lines.append(" Frame Time History")
            lines.append(" (collecting data...)")

        lines.append("")

        return lines

    def _render_bar(self, percentage: float, width: int) -> str:
        """Render a horizontal bar for percentage."""
        filled = int((percentage / 100.0) * width)
        return "█" * filled + "░" * (width - filled)

    def _render_sparkline(self, values: list[float], width: int) -> str:
        """Render a sparkline from values."""
        if not values:
            return " " * width

        min_val = min(values)
        max_val = max(values)
        range_val = max_val - min_val or 1.0

        result = []
        for v in values[-width:]:
            normalized = (v - min_val) / range_val
            idx = int(normalized * (len(SPARKLINE_CHARS) - 1))
            idx = max(0, min(idx, len(SPARKLINE_CHARS) - 1))
            result.append(SPARKLINE_CHARS[idx])

        # Pad to width
        while len(result) < width:
            result.insert(0, " ")
        return "".join(result[:width])
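`_render_sparkline` normalizes each value into a 9-glyph ramp and left-pads short histories. The same logic as a standalone function, with `rjust` standing in for the pad loop:

```python
SPARKLINE_CHARS = " ▁▂▃▄▅▆▇█"


def sparkline(values, width):
    """Map values onto the block-character ramp, left-padded to width."""
    if not values:
        return " " * width
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on flat input
    out = []
    for v in values[-width:]:
        idx = int((v - lo) / span * (len(SPARKLINE_CHARS) - 1))
        out.append(SPARKLINE_CHARS[max(0, min(idx, len(SPARKLINE_CHARS) - 1))])
    return "".join(out).rjust(width)


assert sparkline([0, 1, 2, 3], 4) == " ▂▅█"
```

The min/max normalization means the sparkline always uses the full ramp, so it shows relative jitter rather than absolute frame times.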
@@ -1,11 +1,11 @@
 """
-Data source abstraction - Treat data sources as first-class citizens in the pipeline.
+Data sources for the pipeline architecture.
 
-Each data source implements a common interface:
-- name: Display name for the source
-- fetch(): Fetch fresh data
-- stream(): Stream data continuously (optional)
-- get_items(): Get current items
+This module contains all DataSource implementations:
+- DataSource: Abstract base class
+- SourceItem, ImageItem: Data containers
+- HeadlinesDataSource, PoetryDataSource, ImageDataSource: Concrete sources
+- SourceRegistry: Registry for source discovery
 """
 
 from abc import ABC, abstractmethod
@@ -24,6 +24,17 @@ class SourceItem:
     metadata: dict[str, Any] | None = None
 
 
+@dataclass
+class ImageItem:
+    """An image item from a data source - wraps a PIL Image."""
+
+    image: Any  # PIL Image
+    source: str
+    timestamp: str
+    path: str | None = None  # File path or URL if applicable
+    metadata: dict[str, Any] | None = None
+
+
 class DataSource(ABC):
     """Abstract base class for data sources.
 
@@ -80,6 +91,70 @@ class HeadlinesDataSource(DataSource):
         return [SourceItem(content=t, source=s, timestamp=ts) for t, s, ts in items]
 
 
+class EmptyDataSource(DataSource):
+    """Empty data source that produces blank lines for testing.
+
+    Useful for testing display borders, effects, and other pipeline
+    components without needing actual content.
+    """
+
+    def __init__(self, width: int = 80, height: int = 24):
+        self.width = width
+        self.height = height
+
+    @property
+    def name(self) -> str:
+        return "empty"
+
+    @property
+    def is_dynamic(self) -> bool:
+        return False
+
+    def fetch(self) -> list[SourceItem]:
+        # Return empty lines as content
+        content = "\n".join([" " * self.width for _ in range(self.height)])
+        return [SourceItem(content=content, source="empty", timestamp="0")]
+
+
+class ListDataSource(DataSource):
+    """Data source that wraps a pre-fetched list of items.
+
+    Used for bootstrap loading when items are already available in memory.
+    This is a simple wrapper for already-fetched data.
+    """
+
+    def __init__(self, items, name: str = "list"):
+        self._raw_items = items  # Store raw items separately
+        self._items = None  # Cache for converted SourceItem objects
+        self._name = name
+
+    @property
+    def name(self) -> str:
+        return self._name
+
+    @property
+    def is_dynamic(self) -> bool:
+        return False
+
+    def fetch(self) -> list[SourceItem]:
+        # Convert tuple items to SourceItem if needed
+        result = []
+        for item in self._raw_items:
+            if isinstance(item, SourceItem):
+                result.append(item)
+            elif isinstance(item, tuple) and len(item) >= 3:
+                # Assume (content, source, timestamp) tuple format
+                result.append(
+                    SourceItem(content=item[0], source=item[1], timestamp=str(item[2]))
+                )
+            else:
+                # Fallback: treat as string content
+                result.append(
+                    SourceItem(content=str(item), source="list", timestamp="0")
+                )
+        return result
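`ListDataSource.fetch` normalizes three input shapes - ready-made `SourceItem`s, `(content, source, timestamp)` tuples, and bare values - into a uniform list. A self-contained sketch of that normalization, using a minimal stand-in dataclass for the real `SourceItem`:

```python
from dataclasses import dataclass


@dataclass
class SourceItem:  # minimal stand-in for the real dataclass
    content: str
    source: str
    timestamp: str


def normalize(items):
    """Coerce SourceItem / tuple / bare values into SourceItem objects."""
    out = []
    for item in items:
        if isinstance(item, SourceItem):
            out.append(item)  # already in the right shape
        elif isinstance(item, tuple) and len(item) >= 3:
            out.append(SourceItem(item[0], item[1], str(item[2])))
        else:
            out.append(SourceItem(str(item), "list", "0"))
    return out


items = normalize([("hello", "rss", 123), "plain string"])
assert items[0] == SourceItem("hello", "rss", "123")
assert items[1] == SourceItem("plain string", "list", "0")
```

Accepting tuples here is what lets older call sites keep handing the pipeline raw `(content, source, timestamp)` triples during the migration.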
 
 
 class PoetryDataSource(DataSource):
     """Data source for Poetry DB."""
 
@@ -94,36 +169,92 @@ class PoetryDataSource(DataSource):
         return [SourceItem(content=t, source=s, timestamp=ts) for t, s, ts in items]
 
 
-class PipelineDataSource(DataSource):
-    """Data source for pipeline visualization (demo mode). Dynamic - updates every frame."""
-
-    def __init__(self, viewport_width: int = 80, viewport_height: int = 24):
-        self.viewport_width = viewport_width
-        self.viewport_height = viewport_height
-        self.frame = 0
+class ImageDataSource(DataSource):
+    """Data source that loads PNG images from file paths or URLs.
+
+    Supports:
+    - Local file paths (e.g., /path/to/image.png)
+    - URLs (e.g., https://example.com/image.png)
+
+    Yields ImageItem objects containing PIL Image objects that can be
+    converted to text buffers by an ImageToTextTransform stage.
+    """
+
+    def __init__(
+        self,
+        path: str | list[str] | None = None,
+        urls: str | list[str] | None = None,
+    ):
+        """
+        Args:
+            path: Single path or list of paths to PNG files
+            urls: Single URL or list of URLs to PNG images
+        """
+        self._paths = [path] if isinstance(path, str) else (path or [])
+        self._urls = [urls] if isinstance(urls, str) else (urls or [])
+        self._images: list[ImageItem] = []
+        self._load_images()
+
+    def _load_images(self) -> None:
+        """Load all images from paths and URLs."""
+        from datetime import datetime
+        from io import BytesIO
+        from urllib.request import urlopen
+
+        timestamp = datetime.now().isoformat()
+
+        for path in self._paths:
+            try:
+                from PIL import Image
+
+                img = Image.open(path)
+                if img.mode != "RGBA":
+                    img = img.convert("RGBA")
+                self._images.append(
+                    ImageItem(
+                        image=img,
+                        source=f"file:{path}",
+                        timestamp=timestamp,
+                        path=path,
+                    )
+                )
+            except Exception:
+                pass
+
+        for url in self._urls:
+            try:
+                from PIL import Image
+
+                with urlopen(url) as response:
+                    img = Image.open(BytesIO(response.read()))
+                if img.mode != "RGBA":
+                    img = img.convert("RGBA")
+                self._images.append(
+                    ImageItem(
+                        image=img,
+                        source=f"url:{url}",
+                        timestamp=timestamp,
+                        path=url,
+                    )
+                )
+            except Exception:
+                pass
 
     @property
     def name(self) -> str:
-        return "pipeline"
+        return "image"
 
     @property
     def is_dynamic(self) -> bool:
-        return True
+        return False  # Static images, not updating
 
-    def fetch(self) -> list[SourceItem]:
-        from engine.pipeline_viz import generate_large_network_viewport
-
-        buffer = generate_large_network_viewport(
-            self.viewport_width, self.viewport_height, self.frame
-        )
-        self.frame += 1
-        content = "\n".join(buffer)
-        return [
-            SourceItem(content=content, source="pipeline", timestamp=f"f{self.frame}")
-        ]
-
-    def get_items(self) -> list[SourceItem]:
-        return self.fetch()
+    def fetch(self) -> list[ImageItem]:
+        """Return loaded images as ImageItem list."""
+        return self._images
+
+    def get_items(self) -> list[ImageItem]:
+        """Return current image items."""
+        return self._images
 
 
 class MetricsDataSource(DataSource):
@@ -340,9 +471,6 @@ class SourceRegistry:
     def create_poetry(self) -> PoetryDataSource:
         return PoetryDataSource()
 
-    def create_pipeline(self, width: int = 80, height: int = 24) -> PipelineDataSource:
-        return PipelineDataSource(width, height)
-
 
 _global_registry: SourceRegistry | None = None
 
@@ -5,56 +5,59 @@ Allows swapping output backends via the Display protocol.
 Supports auto-discovery of display backends.
 """
 
+from enum import Enum, auto
 from typing import Protocol
 
 from engine.display.backends.kitty import KittyDisplay
+
+# Optional backend - requires moderngl package
+try:
+    from engine.display.backends.moderngl import ModernGLDisplay
+
+    _MODERNGL_AVAILABLE = True
+except ImportError:
+    ModernGLDisplay = None
+    _MODERNGL_AVAILABLE = False
+
 from engine.display.backends.multi import MultiDisplay
 from engine.display.backends.null import NullDisplay
 from engine.display.backends.pygame import PygameDisplay
 from engine.display.backends.sixel import SixelDisplay
+from engine.display.backends.replay import ReplayDisplay
 from engine.display.backends.terminal import TerminalDisplay
 from engine.display.backends.websocket import WebSocketDisplay
 
 
+class BorderMode(Enum):
+    """Border rendering modes for displays."""
+
+    OFF = auto()  # No border
+    SIMPLE = auto()  # Traditional border with FPS/frame time
+    UI = auto()  # Right-side UI panel with interactive controls
+
+
 class Display(Protocol):
     """Protocol for display backends.
 
-    All display backends must implement:
-    - width, height: Terminal dimensions
-    - init(width, height, reuse=False): Initialize the display
-    - show(buffer): Render buffer to display
-    - clear(): Clear the display
-    - cleanup(): Shutdown the display
+    Required attributes:
+    - width: int
+    - height: int
 
-    The reuse flag allows attaching to an existing display instance
-    rather than creating a new window/connection.
+    Required methods (duck typing - actual signatures may vary):
+    - init(width, height, reuse=False)
+    - show(buffer, border=False)
+    - clear()
+    - cleanup()
+    - get_dimensions() -> (width, height)
+
+    Optional attributes (for UI mode):
+    - ui_panel: UIPanel instance (set by app when border=UI)
+
+    Optional methods:
+    - is_quit_requested() -> bool
+    - clear_quit_request() -> None
     """
 
     width: int
     height: int
 
     def init(self, width: int, height: int, reuse: bool = False) -> None:
         """Initialize display with dimensions.
 
         Args:
             width: Terminal width in characters
             height: Terminal height in rows
             reuse: If True, attach to existing display instead of creating new
         """
         ...
 
     def show(self, buffer: list[str]) -> None:
         """Show buffer on display."""
         ...
 
     def clear(self) -> None:
         """Clear display."""
         ...
 
     def cleanup(self) -> None:
         """Shutdown display."""
         ...
 
 
 class DisplayRegistry:
     """Registry for display backends with auto-discovery."""
@@ -64,22 +67,18 @@ class DisplayRegistry:
 
     @classmethod
     def register(cls, name: str, backend_class: type[Display]) -> None:
         """Register a display backend."""
         cls._backends[name.lower()] = backend_class
 
     @classmethod
     def get(cls, name: str) -> type[Display] | None:
         """Get a display backend class by name."""
         return cls._backends.get(name.lower())
 
     @classmethod
     def list_backends(cls) -> list[str]:
         """List all available display backend names."""
         return list(cls._backends.keys())
 
     @classmethod
     def create(cls, name: str, **kwargs) -> Display | None:
         """Create a display instance by name."""
         cls.initialize()
         backend_class = cls.get(name)
         if backend_class:
@@ -88,19 +87,30 @@ class DisplayRegistry:
 
     @classmethod
     def initialize(cls) -> None:
         """Initialize and register all built-in backends."""
         if cls._initialized:
             return
 
         cls.register("terminal", TerminalDisplay)
         cls.register("null", NullDisplay)
+        cls.register("replay", ReplayDisplay)
         cls.register("websocket", WebSocketDisplay)
         cls.register("sixel", SixelDisplay)
         cls.register("kitty", KittyDisplay)
         cls.register("pygame", PygameDisplay)
+
+        if _MODERNGL_AVAILABLE:
+            cls.register("moderngl", ModernGLDisplay)  # type: ignore[arg-type]
         cls._initialized = True
 
     @classmethod
     def create_multi(cls, names: list[str]) -> MultiDisplay | None:
         displays = []
         for name in names:
             backend = cls.create(name)
             if backend:
                 displays.append(backend)
             else:
                 return None
         if not displays:
             return None
         return MultiDisplay(displays)
 
 
 def get_monitor():
     """Get the performance monitor."""
@@ -112,13 +122,169 @@ def get_monitor():
     return None
 
 
+def _strip_ansi(s: str) -> str:
+    """Strip ANSI escape sequences from string for length calculation."""
+    import re
+
+    return re.sub(r"\x1b\[[0-9;]*[a-zA-Z]", "", s)
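`_strip_ansi` matters because `len()` counts escape bytes, so a colored line would otherwise be padded short inside the border. Precompiling the same pattern (a small variation on the helper, shown for illustration) makes the visible-length calculation explicit:

```python
import re

# CSI sequences: ESC [ <params> <final letter>
ANSI_RE = re.compile(r"\x1b\[[0-9;]*[a-zA-Z]")


def visible_len(s: str) -> int:
    """Length of the string as the terminal will actually render it."""
    return len(ANSI_RE.sub("", s))


colored = "\x1b[31mred\x1b[0m"
assert ANSI_RE.sub("", colored) == "red"
assert len(colored) == 12   # raw length includes the escape bytes
assert visible_len(colored) == 3
```

The pattern covers the common CSI color/style sequences; it is not a full ANSI parser (OSC sequences, for example, are out of scope).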
+
+
+def _render_simple_border(
+    buf: list[str], width: int, height: int, fps: float = 0.0, frame_time: float = 0.0
+) -> list[str]:
+    """Render a traditional border around the buffer."""
+    if not buf or width < 3 or height < 3:
+        return buf
+
+    inner_w = width - 2
+    inner_h = height - 2
+
+    cropped = []
+    for i in range(min(inner_h, len(buf))):
+        line = buf[i]
+        visible_len = len(_strip_ansi(line))
+        if visible_len > inner_w:
+            cropped.append(line[:inner_w])
+        else:
+            cropped.append(line + " " * (inner_w - visible_len))
+
+    while len(cropped) < inner_h:
+        cropped.append(" " * inner_w)
+
+    if fps > 0:
+        fps_str = f" FPS:{fps:.0f}"
+        if len(fps_str) < inner_w:
+            right_len = inner_w - len(fps_str)
+            top_border = "┌" + "─" * right_len + fps_str + "┐"
+        else:
+            top_border = "┌" + "─" * inner_w + "┐"
+    else:
+        top_border = "┌" + "─" * inner_w + "┐"
+
+    if frame_time > 0:
+        ft_str = f" {frame_time:.1f}ms"
+        if len(ft_str) < inner_w:
+            right_len = inner_w - len(ft_str)
+            bottom_border = "└" + "─" * right_len + ft_str + "┘"
+        else:
+            bottom_border = "└" + "─" * inner_w + "┘"
+    else:
+        bottom_border = "└" + "─" * inner_w + "┘"
+
+    result = [top_border]
+    for line in cropped:
+        if len(line) < inner_w:
+            line = line + " " * (inner_w - len(line))
+        elif len(line) > inner_w:
+            line = line[:inner_w]
+        result.append("│" + line + "│")
+    result.append(bottom_border)
+
+    return result
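Stripped to its essentials, the border routine pads or crops each line to the inner width and wraps it in box-drawing characters. A minimal sketch without the FPS/frame-time labels or ANSI handling:

```python
def border(buf, width):
    """Wrap buffer lines in a box; pad or crop each line to fit."""
    inner = width - 2  # two columns are spent on the vertical edges
    body = ["│" + line.ljust(inner)[:inner] + "│" for line in buf]
    return ["┌" + "─" * inner + "┐"] + body + ["└" + "─" * inner + "┘"]


framed = border(["hi"], 6)
assert framed == ["┌────┐", "│hi  │", "└────┘"]
```

`ljust` then slice handles both short and long lines in one expression; the full version above has to use `_strip_ansi` instead of `len` because escape sequences occupy bytes but no columns.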
|
||||
|
||||
|
||||
def render_ui_panel(
    buf: list[str],
    width: int,
    height: int,
    ui_panel,
    fps: float = 0.0,
    frame_time: float = 0.0,
) -> list[str]:
    """Render buffer with a right-side UI panel."""
    from engine.pipeline.ui import UIPanel

    if not isinstance(ui_panel, UIPanel):
        return _render_simple_border(buf, width, height, fps, frame_time)

    panel_width = min(ui_panel.config.panel_width, width - 4)
    main_width = width - panel_width - 1

    panel_lines = ui_panel.render(panel_width, height)

    main_buf = buf[: height - 2]
    main_result = _render_simple_border(
        main_buf, main_width + 2, height, fps, frame_time
    )

    combined = []
    for i in range(height):
        if i < len(main_result):
            main_line = main_result[i]
            if len(main_line) >= 2:
                main_content = (
                    main_line[1:-1] if main_line[-1] in "│┌┐└┘" else main_line[1:]
                )
                main_content = main_content.ljust(main_width)[:main_width]
            else:
                main_content = " " * main_width
        else:
            main_content = " " * main_width

        panel_idx = i
        panel_line = (
            panel_lines[panel_idx][:panel_width].ljust(panel_width)
            if panel_idx < len(panel_lines)
            else " " * panel_width
        )

        separator = "│" if 0 < i < height - 1 else "┼" if i == 0 else "┴"
        combined.append(main_content + separator + panel_line)

    return combined

def render_border(
    buf: list[str],
    width: int,
    height: int,
    fps: float = 0.0,
    frame_time: float = 0.0,
    border_mode: BorderMode | bool = BorderMode.SIMPLE,
) -> list[str]:
    """Render a border or UI panel around the buffer.

    Args:
        buf: Input buffer
        width: Display width
        height: Display height
        fps: FPS for top border
        frame_time: Frame time for bottom border
        border_mode: Border rendering mode

    Returns:
        Buffer with border/panel applied
    """
    # Normalize border_mode to BorderMode enum
    if isinstance(border_mode, bool):
        border_mode = BorderMode.SIMPLE if border_mode else BorderMode.OFF

    if border_mode == BorderMode.UI:
        # UI mode needs a UIPanel instance, which is injected separately:
        # displays that own a ui_panel attribute call render_ui_panel directly.
        # This signature has no ui_panel, so fall back to a simple border.
        return _render_simple_border(buf, width, height, fps, frame_time)
    elif border_mode == BorderMode.SIMPLE:
        return _render_simple_border(buf, width, height, fps, frame_time)
    else:
        return buf


__all__ = [
    "Display",
    "DisplayRegistry",
    "get_monitor",
    "render_border",
    "render_ui_panel",
    "BorderMode",
    "TerminalDisplay",
    "NullDisplay",
    "ReplayDisplay",
    "WebSocketDisplay",
    "SixelDisplay",
    "MultiDisplay",
    "PygameDisplay",
]

if _MODERNGL_AVAILABLE:
    __all__.append("ModernGLDisplay")

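The bool-to-enum normalization at the top of `render_border` keeps legacy callers working. A standalone sketch of that pattern (the `BorderMode` here is a stand-in for the module's real enum):

```python
from enum import Enum


class BorderMode(Enum):
    OFF = "off"
    SIMPLE = "simple"
    UI = "ui"


def normalize(border_mode: "BorderMode | bool") -> BorderMode:
    # Legacy callers pass a bool: True maps to SIMPLE, False to OFF.
    # Enum values pass through untouched.
    if isinstance(border_mode, bool):
        return BorderMode.SIMPLE if border_mode else BorderMode.OFF
    return border_mode
```

The `isinstance(..., bool)` check must run before any enum comparison, since `True == BorderMode.SIMPLE` is never true and a bool would otherwise fall through to the "off" branch.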
engine/display/backends/animation_report.py  (new file, 656 lines)
@@ -0,0 +1,656 @@
"""
Animation Report Display Backend

Captures frames from pipeline stages and generates an interactive HTML report
showing before/after states for each transformative stage.
"""

import time
from dataclasses import dataclass, field
from datetime import datetime
from pathlib import Path
from typing import Any

from engine.display.streaming import compute_diff


@dataclass
class CapturedFrame:
    """A captured frame with metadata."""

    stage: str
    buffer: list[str]
    timestamp: float
    frame_number: int
    diff_from_previous: dict[str, Any] | None = None


@dataclass
class StageCapture:
    """Captures frames for a single pipeline stage."""

    name: str
    frames: list[CapturedFrame] = field(default_factory=list)
    start_time: float = field(default_factory=time.time)
    end_time: float = 0.0

    def add_frame(
        self,
        buffer: list[str],
        frame_number: int,
        previous_buffer: list[str] | None = None,
    ) -> None:
        """Add a captured frame."""
        timestamp = time.time()
        diff = None
        if previous_buffer is not None:
            diff_data = compute_diff(previous_buffer, buffer)
            diff = {
                "changed_lines": len(diff_data.changed_lines),
                "total_lines": len(buffer),
                "width": diff_data.width,
                "height": diff_data.height,
            }

        frame = CapturedFrame(
            stage=self.name,
            buffer=list(buffer),
            timestamp=timestamp,
            frame_number=frame_number,
            diff_from_previous=diff,
        )
        self.frames.append(frame)

    def finish(self) -> None:
        """Mark capture as finished."""
        self.end_time = time.time()

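`add_frame` above records a per-frame summary from `engine.display.streaming.compute_diff`. A minimal stand-in for the changed-line count it stores (the helper name is hypothetical, not the library's API):

```python
def count_changed_lines(prev: list[str], cur: list[str]) -> int:
    """Count buffer lines that differ between two frames."""
    common = min(len(prev), len(cur))
    changed = sum(1 for i in range(common) if prev[i] != cur[i])
    # Lines present in only one buffer all count as changed.
    return changed + abs(len(prev) - len(cur))
```

This is the quantity the report surfaces as `Δ changed/total` in each frame card.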
class AnimationReportDisplay:
    """
    Display backend that captures frames for animation report generation.

    Instead of rendering to terminal, this display captures the buffer at each
    stage and stores it for later HTML report generation.
    """

    width: int = 80
    height: int = 24

    def __init__(self, output_dir: str = "./reports"):
        """
        Initialize the animation report display.

        Args:
            output_dir: Directory where reports will be saved
        """
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(parents=True, exist_ok=True)

        self._stages: dict[str, StageCapture] = {}
        self._current_stage: str = ""
        self._previous_buffer: list[str] | None = None
        self._frame_number: int = 0
        self._total_frames: int = 0
        self._start_time: float = 0.0

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions."""
        self.width = width
        self.height = height
        self._start_time = time.time()

    def show(self, buffer: list[str], border: bool = False) -> None:
        """
        Capture a frame for the current stage.

        Args:
            buffer: The frame buffer to capture
            border: Border flag (ignored)
        """
        if not self._current_stage:
            # If no stage is set, use a default name
            self._current_stage = "final"

        if self._current_stage not in self._stages:
            self._stages[self._current_stage] = StageCapture(self._current_stage)

        stage = self._stages[self._current_stage]
        stage.add_frame(buffer, self._frame_number, self._previous_buffer)

        self._previous_buffer = list(buffer)
        self._frame_number += 1
        self._total_frames += 1

    def start_stage(self, stage_name: str) -> None:
        """
        Start capturing frames for a new stage.

        Args:
            stage_name: Name of the stage (e.g., "noise", "fade", "firehose")
        """
        if self._current_stage and self._current_stage in self._stages:
            # Finish previous stage
            self._stages[self._current_stage].finish()

        self._current_stage = stage_name
        self._previous_buffer = None  # Reset for new stage

    def clear(self) -> None:
        """Clear the display (no-op for report display)."""
        pass

    def cleanup(self) -> None:
        """Cleanup resources."""
        # Finish current stage
        if self._current_stage and self._current_stage in self._stages:
            self._stages[self._current_stage].finish()

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions."""
        return (self.width, self.height)

    def get_stages(self) -> dict[str, StageCapture]:
        """Get all captured stages."""
        return self._stages

    def generate_report(self, title: str = "Animation Report") -> Path:
        """
        Generate an HTML report with captured frames and animations.

        Args:
            title: Title of the report

        Returns:
            Path to the generated HTML file
        """
        report_path = self.output_dir / f"animation_report_{int(time.time())}.html"
        html_content = self._build_html(title)
        report_path.write_text(html_content)
        return report_path

    def _build_html(self, title: str) -> str:
        """Build the HTML content for the report."""
        # Collect all frames across stages, ordered by capture time
        all_frames = []
        for stage in self._stages.values():
            all_frames.extend(stage.frames)
        all_frames.sort(key=lambda f: f.timestamp)

        # Build stage sections
        stages_html = ""
        for stage_name, stage in self._stages.items():
            stages_html += self._build_stage_section(stage_name, stage)

        # Build full HTML
        html = f"""
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>{title}</title>
    <style>
        * {{
            box-sizing: border-box;
            margin: 0;
            padding: 0;
        }}
        body {{
            font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
            background: #1a1a2e;
            color: #eee;
            padding: 20px;
            line-height: 1.6;
        }}
        .container {{
            max-width: 1400px;
            margin: 0 auto;
        }}
        .header {{
            background: linear-gradient(135deg, #16213e 0%, #1a1a2e 100%);
            padding: 30px;
            border-radius: 12px;
            margin-bottom: 30px;
            text-align: center;
            box-shadow: 0 4px 20px rgba(0,0,0,0.3);
        }}
        .header h1 {{
            font-size: 2.5em;
            margin-bottom: 10px;
            background: linear-gradient(90deg, #00d4ff, #00ff88);
            -webkit-background-clip: text;
            -webkit-text-fill-color: transparent;
            background-clip: text;
        }}
        .header .meta {{
            color: #888;
            font-size: 0.9em;
        }}
        .stats-grid {{
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
            gap: 15px;
            margin: 20px 0;
        }}
        .stat-card {{
            background: #16213e;
            padding: 15px;
            border-radius: 8px;
            text-align: center;
        }}
        .stat-value {{
            font-size: 1.8em;
            font-weight: bold;
            color: #00ff88;
        }}
        .stat-label {{
            color: #888;
            font-size: 0.85em;
            margin-top: 5px;
        }}
        .stage-section {{
            background: #16213e;
            border-radius: 12px;
            margin-bottom: 25px;
            overflow: hidden;
            box-shadow: 0 2px 10px rgba(0,0,0,0.2);
        }}
        .stage-header {{
            background: #1f2a48;
            padding: 15px 20px;
            display: flex;
            justify-content: space-between;
            align-items: center;
            cursor: pointer;
            user-select: none;
        }}
        .stage-header:hover {{
            background: #253252;
        }}
        .stage-name {{
            font-weight: bold;
            font-size: 1.1em;
            color: #00d4ff;
        }}
        .stage-info {{
            color: #888;
            font-size: 0.9em;
        }}
        .stage-content {{
            padding: 20px;
        }}
        /* Toggled on .stage-content by the stage-header click handler */
        .hidden {{
            display: none;
        }}
        .frames-container {{
            display: grid;
            grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
            gap: 15px;
        }}
        .frame-card {{
            background: #0f0f1a;
            border-radius: 8px;
            overflow: hidden;
            border: 1px solid #333;
            transition: transform 0.2s, box-shadow 0.2s;
        }}
        .frame-card:hover {{
            transform: translateY(-2px);
            box-shadow: 0 4px 15px rgba(0,212,255,0.2);
        }}
        .frame-header {{
            background: #1a1a2e;
            padding: 10px 15px;
            font-size: 0.85em;
            color: #888;
            border-bottom: 1px solid #333;
            display: flex;
            justify-content: space-between;
        }}
        .frame-number {{
            color: #00ff88;
        }}
        .frame-diff {{
            color: #ff6b6b;
        }}
        .frame-content {{
            padding: 10px;
            font-family: 'Fira Code', 'Consolas', 'Monaco', monospace;
            font-size: 11px;
            line-height: 1.3;
            white-space: pre;
            overflow-x: auto;
            max-height: 200px;
            overflow-y: auto;
        }}
        .timeline-section {{
            background: #16213e;
            border-radius: 12px;
            padding: 20px;
            margin-bottom: 25px;
        }}
        .timeline-header {{
            display: flex;
            justify-content: space-between;
            align-items: center;
            margin-bottom: 15px;
        }}
        .timeline-title {{
            font-weight: bold;
            color: #00d4ff;
        }}
        .timeline-controls {{
            display: flex;
            gap: 10px;
        }}
        .timeline-controls button {{
            background: #1f2a48;
            border: 1px solid #333;
            color: #eee;
            padding: 8px 15px;
            border-radius: 6px;
            cursor: pointer;
            transition: all 0.2s;
        }}
        .timeline-controls button:hover {{
            background: #253252;
            border-color: #00d4ff;
        }}
        .timeline-controls button.active {{
            background: #00d4ff;
            color: #000;
        }}
        .timeline-canvas {{
            width: 100%;
            height: 100px;
            background: #0f0f1a;
            border-radius: 8px;
            position: relative;
            overflow: hidden;
        }}
        .timeline-track {{
            position: absolute;
            top: 50%;
            left: 0;
            right: 0;
            height: 4px;
            background: #333;
            transform: translateY(-50%);
        }}
        .timeline-marker {{
            position: absolute;
            top: 50%;
            transform: translate(-50%, -50%);
            width: 12px;
            height: 12px;
            /* Per-stage color is set inline via --stage-color by the marker JS */
            background: var(--stage-color, #00d4ff);
            border-radius: 50%;
            cursor: pointer;
            transition: all 0.2s;
        }}
        .timeline-marker:hover {{
            transform: translate(-50%, -50%) scale(1.3);
            box-shadow: 0 0 10px #00d4ff;
        }}
        .comparison-view {{
            display: grid;
            grid-template-columns: 1fr 1fr;
            gap: 20px;
            margin-top: 20px;
        }}
        .comparison-panel {{
            background: #0f0f1a;
            border-radius: 8px;
            padding: 15px;
            border: 1px solid #333;
        }}
        .comparison-panel h4 {{
            color: #888;
            margin-bottom: 10px;
            font-size: 0.9em;
        }}
        .comparison-content {{
            font-family: 'Fira Code', 'Consolas', 'Monaco', monospace;
            font-size: 11px;
            line-height: 1.3;
            white-space: pre;
        }}
        .diff-added {{
            background: rgba(0, 255, 136, 0.2);
        }}
        .diff-removed {{
            background: rgba(255, 107, 107, 0.2);
        }}
        @keyframes pulse {{
            0%, 100% {{ opacity: 1; }}
            50% {{ opacity: 0.7; }}
        }}
        .animating {{
            animation: pulse 1s infinite;
        }}
        .footer {{
            text-align: center;
            color: #666;
            padding: 20px;
            font-size: 0.9em;
        }}
    </style>
</head>
<body>
    <div class="container">
        <div class="header">
            <h1>🎬 {title}</h1>
            <div class="meta">
                Generated: {datetime.now().strftime("%Y-%m-%d %H:%M:%S")} |
                Total Frames: {self._total_frames} |
                Duration: {time.time() - self._start_time:.2f}s
            </div>
            <div class="stats-grid">
                <div class="stat-card">
                    <div class="stat-value">{len(self._stages)}</div>
                    <div class="stat-label">Pipeline Stages</div>
                </div>
                <div class="stat-card">
                    <div class="stat-value">{self._total_frames}</div>
                    <div class="stat-label">Total Frames</div>
                </div>
                <div class="stat-card">
                    <div class="stat-value">{time.time() - self._start_time:.2f}s</div>
                    <div class="stat-label">Capture Duration</div>
                </div>
                <div class="stat-card">
                    <div class="stat-value">{self.width}x{self.height}</div>
                    <div class="stat-label">Resolution</div>
                </div>
            </div>
        </div>

        <div class="timeline-section">
            <div class="timeline-header">
                <div class="timeline-title">Timeline</div>
                <div class="timeline-controls">
                    <button onclick="playAnimation()">▶ Play</button>
                    <button onclick="pauseAnimation()">⏸ Pause</button>
                    <button onclick="stepForward()">⏭ Step</button>
                </div>
            </div>
            <div class="timeline-canvas" id="timeline">
                <div class="timeline-track"></div>
                <!-- Timeline markers will be added by JavaScript -->
            </div>
        </div>

        {stages_html}

        <div class="footer">
            <p>Animation Report generated by Mainline</p>
            <p>Use the timeline controls above to play/pause the animation</p>
        </div>
    </div>

    <script>
        // Animation state
        let currentFrame = 0;
        let isPlaying = false;
        let animationInterval = null;
        const totalFrames = {len(all_frames)};

        // Stage colors for timeline markers
        const stageColors = {{
            {self._build_stage_colors()}
        }};

        // Initialize timeline
        function initTimeline() {{
            const timeline = document.getElementById('timeline');
            const track = timeline.querySelector('.timeline-track');

            {self._build_timeline_markers(all_frames)}
        }}

        function playAnimation() {{
            if (isPlaying) return;
            isPlaying = true;
            animationInterval = setInterval(() => {{
                currentFrame = (currentFrame + 1) % totalFrames;
                updateFrameDisplay();
            }}, 100);
        }}

        function pauseAnimation() {{
            isPlaying = false;
            if (animationInterval) {{
                clearInterval(animationInterval);
                animationInterval = null;
            }}
        }}

        function stepForward() {{
            currentFrame = (currentFrame + 1) % totalFrames;
            updateFrameDisplay();
        }}

        function updateFrameDisplay() {{
            // Highlight current frame in timeline
            const markers = document.querySelectorAll('.timeline-marker');
            markers.forEach((marker, index) => {{
                if (index === currentFrame) {{
                    marker.style.transform = 'translate(-50%, -50%) scale(1.5)';
                    marker.style.boxShadow = '0 0 15px #00ff88';
                }} else {{
                    marker.style.transform = 'translate(-50%, -50%) scale(1)';
                    marker.style.boxShadow = 'none';
                }}
            }});
        }}

        // Initialize on page load
        document.addEventListener('DOMContentLoaded', initTimeline);
    </script>
</body>
</html>
"""
        return html

    def _build_stage_section(self, stage_name: str, stage: StageCapture) -> str:
        """Build HTML for a single stage section."""
        frames_html = ""
        for frame in stage.frames:
            diff_info = ""
            if frame.diff_from_previous:
                changed = frame.diff_from_previous.get("changed_lines", 0)
                total = frame.diff_from_previous.get("total_lines", 0)
                diff_info = f'<span class="frame-diff">Δ {changed}/{total}</span>'

            # Join with newlines so the white-space: pre block keeps row breaks.
            frame_text = self._escape_html("\n".join(frame.buffer))
            frames_html += f"""
            <div class="frame-card">
                <div class="frame-header">
                    <span>Frame <span class="frame-number">{frame.frame_number}</span></span>
                    {diff_info}
                </div>
                <div class="frame-content">{frame_text}</div>
            </div>
            """

        return f"""
        <div class="stage-section">
            <div class="stage-header" onclick="this.nextElementSibling.classList.toggle('hidden')">
                <span class="stage-name">{stage_name}</span>
                <span class="stage-info">{len(stage.frames)} frames</span>
            </div>
            <div class="stage-content">
                <div class="frames-container">
                    {frames_html}
                </div>
            </div>
        </div>
        """

    def _build_timeline(self, all_frames: list[CapturedFrame]) -> str:
        """Build timeline HTML."""
        if not all_frames:
            return ""

        markers_html = ""
        for i, frame in enumerate(all_frames):
            left_percent = (i / len(all_frames)) * 100
            markers_html += f'<div class="timeline-marker" style="left: {left_percent}%" data-frame="{i}"></div>'

        return markers_html

    def _build_stage_colors(self) -> str:
        """Build stage color mapping for JavaScript."""
        colors = [
            "#00d4ff",
            "#00ff88",
            "#ff6b6b",
            "#ffd93d",
            "#a855f7",
            "#ec4899",
            "#14b8a6",
            "#f97316",
            "#8b5cf6",
            "#06b6d4",
        ]
        color_map = ""
        for i, stage_name in enumerate(self._stages.keys()):
            color = colors[i % len(colors)]
            color_map += f'            "{stage_name}": "{color}",\n'
        return color_map.rstrip(",\n")

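The palette assignment in `_build_stage_colors` reduces to modular indexing over a fixed list, so stages beyond the palette length wrap around. A dict-returning stand-in (the helper name is illustrative):

```python
def assign_stage_colors(stage_names: list[str], palette: list[str]) -> dict[str, str]:
    """Cycle through a fixed palette so every stage gets a deterministic color."""
    return {name: palette[i % len(palette)] for i, name in enumerate(stage_names)}
```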
    def _build_timeline_markers(self, all_frames: list[CapturedFrame]) -> str:
        """Build timeline markers in JavaScript."""
        if not all_frames:
            return ""

        markers_js = ""
        for i, frame in enumerate(all_frames):
            left_percent = (i / len(all_frames)) * 100
            stage_color = f"stageColors['{frame.stage}']"
            markers_js += f"""
            const marker{i} = document.createElement('div');
            marker{i}.className = 'timeline-marker stage-{frame.stage}';
            marker{i}.style.left = '{left_percent}%';
            marker{i}.style.setProperty('--stage-color', {stage_color});
            marker{i}.onclick = () => {{
                currentFrame = {i};
                updateFrameDisplay();
            }};
            timeline.appendChild(marker{i});
            """

        return markers_js

    def _escape_html(self, text: str) -> str:
        """Escape HTML special characters."""
        return (
            text.replace("&", "&amp;")
            .replace("<", "&lt;")
            .replace(">", "&gt;")
            .replace('"', "&quot;")
            .replace("'", "&#39;")
        )

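Order matters in the escape chain above: `&` must be replaced first, otherwise the `&` characters introduced by the later replacements would themselves get rewritten into `&amp;lt;` and similar. A standalone copy of the chain (hypothetical free-function form) makes that checkable:

```python
def escape_html(text: str) -> str:
    # '&' first: escaping it later would corrupt the entities
    # produced by the other replacements.
    return (
        text.replace("&", "&amp;")
        .replace("<", "&lt;")
        .replace(">", "&gt;")
        .replace('"', "&quot;")
        .replace("'", "&#39;")
    )
```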
@@ -1,152 +0,0 @@
"""
Kitty graphics display backend - renders using kitty's native graphics protocol.
"""

import time

from engine.display.renderer import get_default_font_path, parse_ansi


def _encode_kitty_graphic(image_data: bytes, width: int, height: int) -> bytes:
    """Encode image data using kitty's graphics protocol."""
    import base64

    encoded = base64.b64encode(image_data).decode("ascii")

    chunks = []
    for i in range(0, len(encoded), 4096):
        chunk = encoded[i : i + 4096]
        if i == 0:
            chunks.append(f"\x1b_Gf=100,t=d,s={width},v={height},c=1,r=1;{chunk}\x1b\\")
        else:
            chunks.append(f"\x1b_Gm={height};{chunk}\x1b\\")

    return "".join(chunks).encode("utf-8")


class KittyDisplay:
    """Kitty graphics display backend using kitty's native protocol."""

    width: int = 80
    height: int = 24

    def __init__(self, cell_width: int = 9, cell_height: int = 16):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self._initialized = False
        self._font_path = None

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for KittyDisplay (protocol doesn't support reuse)
        """
        self.width = width
        self.height = height
        self._initialized = True

    def _get_font_path(self) -> str | None:
        """Get font path from env or detect common locations."""
        import os

        if self._font_path:
            return self._font_path

        env_font = os.environ.get("MAINLINE_KITTY_FONT")
        if env_font and os.path.exists(env_font):
            self._font_path = env_font
            return env_font

        font_path = get_default_font_path()
        if font_path:
            self._font_path = font_path

        return self._font_path

    def show(self, buffer: list[str]) -> None:
        import sys

        t0 = time.perf_counter()

        img_width = self.width * self.cell_width
        img_height = self.height * self.cell_height

        try:
            from PIL import Image, ImageDraw, ImageFont
        except ImportError:
            return

        img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
        draw = ImageDraw.Draw(img)

        font_path = self._get_font_path()
        font = None
        if font_path:
            try:
                font = ImageFont.truetype(font_path, self.cell_height - 2)
            except Exception:
                font = None

        if font is None:
            try:
                font = ImageFont.load_default()
            except Exception:
                font = None

        for row_idx, line in enumerate(buffer[: self.height]):
            if row_idx >= self.height:
                break

            tokens = parse_ansi(line)
            x_pos = 0
            y_pos = row_idx * self.cell_height

            for text, fg, bg, bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                    draw.rectangle(bbox, fill=(*bg, 255))

                if bold and font:
                    draw.text((x_pos - 1, y_pos - 1), text, fill=(*fg, 255), font=font)

                draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

                if font:
                    x_pos += draw.textlength(text, font=font)

        from io import BytesIO

        output = BytesIO()
        img.save(output, format="PNG")
        png_data = output.getvalue()

        graphic = _encode_kitty_graphic(png_data, img_width, img_height)

        sys.stdout.buffer.write(graphic)
        sys.stdout.flush()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("kitty_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        import sys

        sys.stdout.buffer.write(b"\x1b_Ga=d\x1b\\")
        sys.stdout.flush()

    def cleanup(self) -> None:
        self.clear()

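The removed backend above streams the PNG as base64 in 4096-character escape-sequence chunks. A standalone sketch of that chunking, using the kitty protocol's `m=1`/`m=0` continuation flags (the deleted code sent `m={height}` on continuation chunks, which looks like a bug; the function name here is illustrative):

```python
import base64


def kitty_chunks(image_data: bytes, width: int, height: int) -> list[str]:
    """Split a base64 payload into <=4096-char kitty graphics escape sequences."""
    encoded = base64.b64encode(image_data).decode("ascii")
    pieces = [encoded[i : i + 4096] for i in range(0, len(encoded), 4096)]
    chunks = []
    for idx, piece in enumerate(pieces):
        # m=1 marks "more chunks follow"; the final chunk carries m=0.
        more = 0 if idx == len(pieces) - 1 else 1
        if idx == 0:
            # Only the first chunk carries the full control data.
            chunks.append(
                f"\x1b_Gf=100,t=d,s={width},v={height},m={more};{piece}\x1b\\"
            )
        else:
            chunks.append(f"\x1b_Gm={more};{piece}\x1b\\")
    return chunks
```

Concatenating the payloads back together must reproduce the original base64 stream, which is what the terminal does on receipt.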
@@ -30,14 +30,21 @@ class MultiDisplay:
         for d in self.displays:
             d.init(width, height, reuse=reuse)
 
-    def show(self, buffer: list[str]) -> None:
+    def show(self, buffer: list[str], border: bool = False) -> None:
         for d in self.displays:
-            d.show(buffer)
+            d.show(buffer, border=border)
 
     def clear(self) -> None:
         for d in self.displays:
             d.clear()
 
+    def get_dimensions(self) -> tuple[int, int]:
+        """Get dimensions from the first child display that supports it."""
+        for d in self.displays:
+            if hasattr(d, "get_dimensions"):
+                return d.get_dimensions()
+        return (self.width, self.height)
+
     def cleanup(self) -> None:
         for d in self.displays:
             d.cleanup()
@@ -2,18 +2,30 @@
 Null/headless display backend.
 """
 
+import json
 import time
+from pathlib import Path
+from typing import Any
 
 
 class NullDisplay:
     """Headless/null display - discards all output.
 
     This display does nothing - useful for headless benchmarking
-    or when no display output is needed.
+    or when no display output is needed. Captures last buffer
+    for testing purposes. Supports frame recording for replay
+    and file export/import.
     """
 
     width: int = 80
     height: int = 24
+    _last_buffer: list[str] | None = None
 
     def __init__(self):
         self._last_buffer = None
+        self._is_recording = False
+        self._recorded_frames: list[dict[str, Any]] = []
+        self._frame_count = 0
 
     def init(self, width: int, height: int, reuse: bool = False) -> None:
         """Initialize display with dimensions.
@@ -25,19 +37,147 @@ class NullDisplay:
         """
         self.width = width
         self.height = height
+        self._last_buffer = None
 
-    def show(self, buffer: list[str]) -> None:
-        from engine.display import get_monitor
+    def show(self, buffer: list[str], border: bool = False) -> None:
+        import sys
+
+        from engine.display import get_monitor, render_border
+
+        fps = 0.0
+        frame_time = 0.0
         monitor = get_monitor()
+        if monitor:
+            stats = monitor.get_stats()
+            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
+            frame_count = stats.get("frame_count", 0) if stats else 0
+            if avg_ms and frame_count > 0:
+                fps = 1000.0 / avg_ms
+                frame_time = avg_ms
+
+        if border:
+            buffer = render_border(buffer, self.width, self.height, fps, frame_time)
+
+        self._last_buffer = buffer
+
+        if self._is_recording:
+            self._recorded_frames.append(
+                {
+                    "frame_number": self._frame_count,
+                    "buffer": buffer,
+                    "width": self.width,
+                    "height": self.height,
+                }
+            )
+
+        if self._frame_count <= 5 or self._frame_count % 10 == 0:
+            sys.stdout.write("\n" + "=" * 80 + "\n")
+            sys.stdout.write(
+                f"Frame {self._frame_count} (buffer height: {len(buffer)})\n"
+            )
+            sys.stdout.write("=" * 80 + "\n")
+            for i, line in enumerate(buffer[:30]):
+                sys.stdout.write(f"{i:2}: {line}\n")
+            if len(buffer) > 30:
+                sys.stdout.write(f"... ({len(buffer) - 30} more lines)\n")
+            sys.stdout.flush()
 
         if monitor:
             t0 = time.perf_counter()
             chars_in = sum(len(line) for line in buffer)
             elapsed_ms = (time.perf_counter() - t0) * 1000
             monitor.record_effect("null_display", elapsed_ms, chars_in, chars_in)
 
+        self._frame_count += 1
+
+    def start_recording(self) -> None:
+        """Begin recording frames."""
+        self._is_recording = True
+        self._recorded_frames = []
+
+    def stop_recording(self) -> None:
+        """Stop recording frames."""
+        self._is_recording = False
+
+    def get_frames(self) -> list[list[str]]:
+        """Get recorded frames as list of buffers.
+
+        Returns:
+            List of buffers, each buffer is a list of strings (lines)
+        """
+        return [frame["buffer"] for frame in self._recorded_frames]
+
+    def get_recorded_data(self) -> list[dict[str, Any]]:
+        """Get full recorded data including metadata.
+
+        Returns:
+            List of frame dicts with 'frame_number', 'buffer', 'width', 'height'
+        """
+        return self._recorded_frames
+
+    def clear_recording(self) -> None:
+        """Clear recorded frames."""
+        self._recorded_frames = []
+
+    def save_recording(self, filepath: str | Path) -> None:
+        """Save recorded frames to a JSON file.
+
+        Args:
+            filepath: Path to save the recording
+        """
+        path = Path(filepath)
+        data = {
+            "version": 1,
+            "display": "null",
+            "width": self.width,
+            "height": self.height,
+            "frame_count": len(self._recorded_frames),
+            "frames": self._recorded_frames,
+        }
+        path.write_text(json.dumps(data, indent=2))
+
+    def load_recording(self, filepath: str | Path) -> list[dict[str, Any]]:
+        """Load recorded frames from a JSON file.
+
+        Args:
+            filepath: Path to load the recording from
+
+        Returns:
+            List of frame dicts
+        """
+        path = Path(filepath)
+        data = json.loads(path.read_text())
+        self._recorded_frames = data.get("frames", [])
+        self.width = data.get("width", 80)
+        self.height = data.get("height", 24)
+        return self._recorded_frames
+
+    def replay_frames(self) -> list[list[str]]:
+        """Get frames for replay.
+
+        Returns:
+            List of buffers for replay
+        """
+        return self.get_frames()
+
     def clear(self) -> None:
         pass
 
     def cleanup(self) -> None:
         pass
 
+    def get_dimensions(self) -> tuple[int, int]:
+        """Get current dimensions.
+
+        Returns:
+            (width, height) in character cells
+        """
+        return (self.width, self.height)
+
+    def is_quit_requested(self) -> bool:
+        """Check if quit was requested (optional protocol method)."""
+        return False
+
+    def clear_quit_request(self) -> None:
+        """Clear quit request (optional protocol method)."""
+        pass
@@ -17,7 +17,6 @@ class PygameDisplay:
|
||||
width: int = 80
|
||||
window_width: int = 800
|
||||
window_height: int = 600
|
||||
_pygame_initialized: bool = False
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
@@ -25,6 +24,7 @@ class PygameDisplay:
|
||||
cell_height: int = 18,
|
||||
window_width: int = 800,
|
||||
window_height: int = 600,
|
||||
target_fps: float = 30.0,
|
||||
):
|
||||
self.width = 80
|
||||
self.height = 24
|
||||
@@ -32,11 +32,16 @@ class PygameDisplay:
|
||||
self.cell_height = cell_height
|
||||
self.window_width = window_width
|
||||
self.window_height = window_height
|
||||
self.target_fps = target_fps
|
||||
self._initialized = False
|
||||
self._pygame = None
|
||||
self._screen = None
|
||||
self._font = None
|
||||
self._resized = False
|
||||
self._quit_requested = False
|
||||
self._last_frame_time = 0.0
|
||||
self._frame_period = 1.0 / target_fps if target_fps > 0 else 0
|
||||
self._glyph_cache = {}
|
||||
|
||||
def _get_font_path(self) -> str | None:
|
||||
"""Get font path for rendering."""
|
||||
@@ -94,10 +99,6 @@ class PygameDisplay:
|
||||
self.width = width
|
||||
self.height = height
|
||||
|
||||
import os
|
||||
|
||||
os.environ["SDL_VIDEODRIVER"] = "x11"
|
||||
|
||||
try:
|
||||
import pygame
|
||||
except ImportError:
|
||||
@@ -118,6 +119,10 @@ class PygameDisplay:
|
||||
self._pygame = pygame
|
||||
PygameDisplay._pygame_initialized = True
|
||||
|
||||
# Calculate character dimensions from actual window size
|
||||
self.width = max(1, self.window_width // self.cell_width)
|
||||
self.height = max(1, self.window_height // self.cell_height)
|
||||
|
||||
font_path = self._get_font_path()
|
||||
if font_path:
|
||||
try:
|
||||
@@ -127,11 +132,24 @@ class PygameDisplay:
|
||||
else:
|
||||
self._font = pygame.font.SysFont("monospace", self.cell_height - 2)
|
||||
|
||||
# Check if font supports box-drawing characters; if not, try to find one
|
||||
self._use_fallback_border = False
|
||||
if self._font:
|
||||
try:
|
||||
# Test rendering some key box-drawing characters
|
||||
test_chars = ["┌", "─", "┐", "│", "└", "┘"]
|
||||
for ch in test_chars:
|
||||
surf = self._font.render(ch, True, (255, 255, 255))
|
||||
# If surface is empty (width=0 or all black), font lacks glyph
|
||||
if surf.get_width() == 0:
|
||||
raise ValueError("Missing glyph")
|
||||
except Exception:
|
||||
# Font doesn't support box-drawing, will use line drawing fallback
|
||||
self._use_fallback_border = True
|
||||
|
||||
self._initialized = True
|
||||
|
||||
def show(self, buffer: list[str]) -> None:
|
||||
import sys
|
||||
|
||||
def show(self, buffer: list[str], border: bool = False) -> None:
|
||||
if not self._initialized or not self._pygame:
|
||||
return
|
||||
|
||||
@@ -139,7 +157,15 @@ class PygameDisplay:
|
||||
|
||||
for event in self._pygame.event.get():
|
||||
if event.type == self._pygame.QUIT:
|
||||
sys.exit(0)
|
||||
self._quit_requested = True
|
||||
elif event.type == self._pygame.KEYDOWN:
|
||||
if event.key in (self._pygame.K_ESCAPE, self._pygame.K_c):
|
||||
if event.key == self._pygame.K_c and not (
|
||||
event.mod & self._pygame.KMOD_LCTRL
|
||||
or event.mod & self._pygame.KMOD_RCTRL
|
||||
):
|
||||
continue
|
||||
self._quit_requested = True
|
||||
elif event.type == self._pygame.VIDEORESIZE:
|
||||
self.window_width = event.w
|
||||
self.window_height = event.h
|
||||
@@ -147,39 +173,144 @@ class PygameDisplay:
|
||||
self.height = max(1, self.window_height // self.cell_height)
|
||||
self._resized = True
|
||||
|
||||
# FPS limiting - skip frame if we're going too fast
|
||||
if self._frame_period > 0:
|
||||
now = time.perf_counter()
|
||||
elapsed = now - self._last_frame_time
|
||||
if elapsed < self._frame_period:
|
||||
return # Skip this frame
|
||||
self._last_frame_time = now
|
||||
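The frame-skip logic above can be isolated into a tiny limiter. A sketch under the assumption that the caller supplies timestamps (the class and names here are illustrative, not part of the PR):

```python
class FrameLimiter:
    """Skip frames that arrive faster than the target period."""

    def __init__(self, target_fps: float):
        self.period = 1.0 / target_fps if target_fps > 0 else 0.0
        self.last = 0.0

    def should_render(self, now: float) -> bool:
        # Mirrors the check in show(): render only when a full period has elapsed
        if self.period > 0 and (now - self.last) < self.period:
            return False
        self.last = now
        return True


limiter = FrameLimiter(target_fps=10.0)  # 0.1 s period
assert limiter.should_render(1.0) is True    # first frame renders
assert limiter.should_render(1.05) is False  # too soon, skipped
assert limiter.should_render(1.2) is True    # full period elapsed
```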
        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        self._screen.fill((0, 0, 0))

        # If border requested but font lacks box-drawing glyphs, use graphical fallback
        if border and self._use_fallback_border:
            self._draw_fallback_border(fps, frame_time)
            # Adjust content area to fit inside border
            content_offset_x = self.cell_width
            content_offset_y = self.cell_height
            self.window_width - 2 * self.cell_width
            self.window_height - 2 * self.cell_height
        else:
            # Normal rendering (with or without text border)
            content_offset_x = 0
            content_offset_y = 0

            if border:
                from engine.display import render_border

                buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        blit_list = []

        for row_idx, line in enumerate(buffer[: self.height]):
            if row_idx >= self.height:
                break

            tokens = parse_ansi(line)
            x_pos = 0
            x_pos = content_offset_x

            for text, fg, bg, _bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bg_surface = self._font.render(text, True, fg, bg)
                    self._screen.blit(bg_surface, (x_pos, row_idx * self.cell_height))
                else:
                    text_surface = self._font.render(text, True, fg)
                    self._screen.blit(text_surface, (x_pos, row_idx * self.cell_height))
                # Use None as key for no background
                bg_key = bg if bg != (0, 0, 0) else None
                cache_key = (text, fg, bg_key)

                if cache_key not in self._glyph_cache:
                    # Render and cache
                    if bg_key is not None:
                        self._glyph_cache[cache_key] = self._font.render(
                            text, True, fg, bg_key
                        )
                    else:
                        self._glyph_cache[cache_key] = self._font.render(text, True, fg)

                surface = self._glyph_cache[cache_key]
                blit_list.append(
                    (surface, (x_pos, content_offset_y + row_idx * self.cell_height))
                )
                x_pos += self._font.size(text)[0]

        self._screen.blits(blit_list)

        # Draw fallback border using graphics if needed
        if border and self._use_fallback_border:
            self._draw_fallback_border(fps, frame_time)

        self._pygame.display.flip()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("pygame_display", elapsed_ms, chars_in, chars_in)

    def _draw_fallback_border(self, fps: float, frame_time: float) -> None:
        """Draw border using pygame graphics primitives instead of text."""
        if not self._screen or not self._pygame:
            return

        # Colors
        border_color = (0, 255, 0)  # Green (like terminal border)
        text_color = (255, 255, 255)

        # Calculate dimensions
        x1 = 0
        y1 = 0
        x2 = self.window_width - 1
        y2 = self.window_height - 1

        # Draw outer rectangle
        self._pygame.draw.rect(
            self._screen, border_color, (x1, y1, x2 - x1 + 1, y2 - y1 + 1), 1
        )

        # Draw top border with FPS
        if fps > 0:
            fps_text = f" FPS:{fps:.0f}"
        else:
            fps_text = ""
        # We need to render this text with a fallback font that has basic ASCII
        # Use system font which should have these characters
        try:
            font = self._font  # May not have box chars but should have alphanumeric
            text_surf = font.render(fps_text, True, text_color, (0, 0, 0))
            text_rect = text_surf.get_rect()
            # Position on top border, right-aligned
            text_x = x2 - text_rect.width - 5
            text_y = y1 + 2
            self._screen.blit(text_surf, (text_x, text_y))
        except Exception:
            pass

        # Draw bottom border with frame time
        if frame_time > 0:
            ft_text = f" {frame_time:.1f}ms"
            try:
                ft_surf = self._font.render(ft_text, True, text_color, (0, 0, 0))
                ft_rect = ft_surf.get_rect()
                ft_x = x2 - ft_rect.width - 5
                ft_y = y2 - ft_rect.height - 2
                self._screen.blit(ft_surf, (ft_x, ft_y))
            except Exception:
                pass

    def clear(self) -> None:
        if self._screen and self._pygame:
            self._screen.fill((0, 0, 0))
@@ -191,8 +322,17 @@ class PygameDisplay:
        Returns:
            (width, height) in character cells
        """
        if self._resized:
            self._resized = False
        # Query actual window size and recalculate character cells
        if self._screen and self._pygame:
            try:
                w, h = self._screen.get_size()
                if w != self.window_width or h != self.window_height:
                    self.window_width = w
                    self.window_height = h
                    self.width = max(1, w // self.cell_width)
                    self.height = max(1, h // self.cell_height)
            except Exception:
                pass
        return self.width, self.height

    def cleanup(self, quit_pygame: bool = True) -> None:
@@ -210,3 +350,20 @@ class PygameDisplay:
    def reset_state(cls) -> None:
        """Reset pygame state - useful for testing."""
        cls._pygame_initialized = False

    def is_quit_requested(self) -> bool:
        """Check if user requested quit (Ctrl+C, Ctrl+Q, or Escape).

        Returns True if the user pressed Ctrl+C, Ctrl+Q, or Escape.
        The main loop should check this and raise KeyboardInterrupt.
        """
        return self._quit_requested

    def clear_quit_request(self) -> bool:
        """Clear the quit request flag after handling.

        Returns the previous quit request state.
        """
        was_requested = self._quit_requested
        self._quit_requested = False
        return was_requested
122
engine/display/backends/replay.py
Normal file
@@ -0,0 +1,122 @@
"""
Replay display backend - plays back recorded frames.
"""

from typing import Any


class ReplayDisplay:
    """Replay display - plays back recorded frames.

    This display reads frames from a recording (list of frame data)
    and yields them sequentially, useful for testing and demo purposes.
    """

    width: int = 80
    height: int = 24

    def __init__(self):
        self._frames: list[dict[str, Any]] = []
        self._current_frame = 0
        self._playback_index = 0
        self._loop = False

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for ReplayDisplay
        """
        self.width = width
        self.height = height

    def set_frames(self, frames: list[dict[str, Any]]) -> None:
        """Set frames to replay.

        Args:
            frames: List of frame dicts with 'buffer', 'width', 'height'
        """
        self._frames = frames
        self._current_frame = 0
        self._playback_index = 0

    def set_loop(self, loop: bool) -> None:
        """Set loop playback mode.

        Args:
            loop: True to loop, False to stop at end
        """
        self._loop = loop

    def show(self, buffer: list[str], border: bool = False) -> None:
        """Display a frame (ignored in replay mode).

        Args:
            buffer: Buffer to display (ignored)
            border: Border flag (ignored)
        """
        pass

    def get_next_frame(self) -> list[str] | None:
        """Get the next frame in the recording.

        Returns:
            Buffer list of strings, or None if playback is done
        """
        if not self._frames:
            return None

        if self._playback_index >= len(self._frames):
            if self._loop:
                self._playback_index = 0
            else:
                return None

        frame = self._frames[self._playback_index]
        self._playback_index += 1
        return frame.get("buffer")

    def reset(self) -> None:
        """Reset playback to the beginning."""
        self._playback_index = 0

    def seek(self, index: int) -> None:
        """Seek to a specific frame.

        Args:
            index: Frame index to seek to
        """
        if 0 <= index < len(self._frames):
            self._playback_index = index

    def is_finished(self) -> bool:
        """Check if playback is finished.

        Returns:
            True if at end of frames and not looping
        """
        return not self._loop and self._playback_index >= len(self._frames)

    def clear(self) -> None:
        pass

    def cleanup(self) -> None:
        pass

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions.

        Returns:
            (width, height) in character cells
        """
        return (self.width, self.height)

    def is_quit_requested(self) -> bool:
        """Check if quit was requested (optional protocol method)."""
        return False

    def clear_quit_request(self) -> None:
        """Clear quit request (optional protocol method)."""
        pass
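ReplayDisplay's playback cursor wraps to frame 0 only when looping; otherwise it stops at the end of the recording. The core of `get_next_frame` can be sketched as a standalone function (names here are illustrative):

```python
from typing import Any


def next_frame(frames: list[dict[str, Any]], index: int, loop: bool):
    """Return (buffer, new_index); buffer is None when playback is done.

    Mirrors ReplayDisplay.get_next_frame(): wrap to frame 0 when looping,
    otherwise stop at the end of the recording.
    """
    if not frames:
        return None, index
    if index >= len(frames):
        if not loop:
            return None, index
        index = 0  # looping: restart from the first frame
    return frames[index].get("buffer"), index + 1


frames = [{"buffer": ["a"]}, {"buffer": ["b"]}]
buf, i = next_frame(frames, 0, loop=False)  # -> ["a"], 1
buf, i = next_frame(frames, i, loop=False)  # -> ["b"], 2
buf, i = next_frame(frames, i, loop=False)  # -> None (finished)
```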
@@ -1,200 +0,0 @@
"""
Sixel graphics display backend - renders to sixel graphics in terminal.
"""

import time

from engine.display.renderer import get_default_font_path, parse_ansi


def _encode_sixel(image) -> str:
    """Encode a PIL Image to sixel format (pure Python)."""
    img = image.convert("RGBA")
    width, height = img.size
    pixels = img.load()

    palette = []
    pixel_palette_idx = {}

    def get_color_idx(r, g, b, a):
        if a < 128:
            return -1
        key = (r // 32, g // 32, b // 32)
        if key not in pixel_palette_idx:
            idx = len(palette)
            if idx < 256:
                palette.append((r, g, b))
                pixel_palette_idx[key] = idx
        return pixel_palette_idx.get(key, 0)

    for y in range(height):
        for x in range(width):
            r, g, b, a = pixels[x, y]
            get_color_idx(r, g, b, a)

    if not palette:
        return ""

    if len(palette) == 1:
        palette = [palette[0], (0, 0, 0)]

    sixel_data = []
    sixel_data.append(
        f'"{"".join(f"#{i};2;{r};{g};{b}" for i, (r, g, b) in enumerate(palette))}'
    )

    for x in range(width):
        col_data = []
        for y in range(0, height, 6):
            bits = 0
            color_idx = -1
            for dy in range(6):
                if y + dy < height:
                    r, g, b, a = pixels[x, y + dy]
                    if a >= 128:
                        bits |= 1 << dy
                        idx = get_color_idx(r, g, b, a)
                        if color_idx == -1:
                            color_idx = idx
                        elif color_idx != idx:
                            color_idx = -2

            if color_idx >= 0:
                col_data.append(
                    chr(63 + color_idx) + chr(63 + bits)
                    if bits
                    else chr(63 + color_idx) + "?"
                )
            elif color_idx == -2:
                pass

        if col_data:
            sixel_data.append("".join(col_data) + "$")
        else:
            sixel_data.append("-" if x < width - 1 else "$")

    sixel_data.append("\x1b\\")

    return "\x1bPq" + "".join(sixel_data)
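In the encoder above, each output character packs six vertically stacked pixels into one byte: bit `dy` is set when row `y + dy` is opaque, and the character emitted is `chr(63 + bits)`. A tiny standalone illustration of just that packing step (helper name is illustrative):

```python
def sixel_char(rows: list) -> str:
    """Pack up to six vertical pixels (top to bottom) into one sixel character."""
    bits = 0
    for dy, on in enumerate(rows[:6]):
        if on:
            bits |= 1 << dy  # bit 0 is the topmost pixel of the band
    return chr(63 + bits)


# All six pixels off -> '?' (chr(63)); all six on -> '~' (chr(63 + 0b111111))
assert sixel_char([False] * 6) == "?"
assert sixel_char([True] * 6) == "~"
```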

class SixelDisplay:
    """Sixel graphics display backend - renders to sixel graphics in terminal."""

    width: int = 80
    height: int = 24

    def __init__(self, cell_width: int = 9, cell_height: int = 16):
        self.width = 80
        self.height = 24
        self.cell_width = cell_width
        self.cell_height = cell_height
        self._initialized = False
        self._font_path = None

    def _get_font_path(self) -> str | None:
        """Get font path from env or detect common locations."""
        import os

        if self._font_path:
            return self._font_path

        env_font = os.environ.get("MAINLINE_SIXEL_FONT")
        if env_font and os.path.exists(env_font):
            self._font_path = env_font
            return env_font

        font_path = get_default_font_path()
        if font_path:
            self._font_path = font_path

        return self._font_path

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            reuse: Ignored for SixelDisplay
        """
        self.width = width
        self.height = height
        self._initialized = True

    def show(self, buffer: list[str]) -> None:
        import sys

        t0 = time.perf_counter()

        img_width = self.width * self.cell_width
        img_height = self.height * self.cell_height

        try:
            from PIL import Image, ImageDraw, ImageFont
        except ImportError:
            return

        img = Image.new("RGBA", (img_width, img_height), (0, 0, 0, 255))
        draw = ImageDraw.Draw(img)

        font_path = self._get_font_path()
        font = None
        if font_path:
            try:
                font = ImageFont.truetype(font_path, self.cell_height - 2)
            except Exception:
                font = None

        if font is None:
            try:
                font = ImageFont.load_default()
            except Exception:
                font = None

        for row_idx, line in enumerate(buffer[: self.height]):
            if row_idx >= self.height:
                break

            tokens = parse_ansi(line)
            x_pos = 0
            y_pos = row_idx * self.cell_height

            for text, fg, bg, bold in tokens:
                if not text:
                    continue

                if bg != (0, 0, 0):
                    bbox = draw.textbbox((x_pos, y_pos), text, font=font)
                    draw.rectangle(bbox, fill=(*bg, 255))

                if bold and font:
                    draw.text((x_pos - 1, y_pos - 1), text, fill=(*fg, 255), font=font)

                draw.text((x_pos, y_pos), text, fill=(*fg, 255), font=font)

                if font:
                    x_pos += draw.textlength(text, font=font)

        sixel = _encode_sixel(img)

        sys.stdout.buffer.write(sixel.encode("utf-8"))
        sys.stdout.flush()

        elapsed_ms = (time.perf_counter() - t0) * 1000

        from engine.display import get_monitor

        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("sixel_display", elapsed_ms, chars_in, chars_in)

    def clear(self) -> None:
        import sys

        sys.stdout.buffer.write(b"\x1b[2J\x1b[H")
        sys.stdout.flush()

    def cleanup(self) -> None:
        pass
@@ -2,7 +2,11 @@
ANSI terminal display backend.
"""

import time
import os
import select
import sys
import termios
import tty


class TerminalDisplay:
@@ -10,43 +14,140 @@ class TerminalDisplay:

    Renders buffer to stdout using ANSI escape codes.
    Supports reuse - when reuse=True, skips re-initializing terminal state.
    Auto-detects terminal dimensions on init.
    """

    width: int = 80
    height: int = 24
    _initialized: bool = False

    def __init__(self, target_fps: float = 30.0):
        self.target_fps = target_fps
        self._frame_period = 1.0 / target_fps if target_fps > 0 else 0
        self._last_frame_time = 0.0
        self._cached_dimensions: tuple[int, int] | None = None
        self._raw_mode_enabled: bool = False
        self._original_termios: list = []
        self._quit_requested: bool = False

    def init(self, width: int, height: int, reuse: bool = False) -> None:
        """Initialize display with dimensions.

        If width/height are not provided (0/None), auto-detects terminal size.
        Otherwise uses provided dimensions or falls back to terminal size
        if the provided dimensions exceed terminal capacity.

        Args:
            width: Terminal width in characters
            height: Terminal height in rows
            width: Desired terminal width (0 = auto-detect)
            height: Desired terminal height (0 = auto-detect)
            reuse: If True, skip terminal re-initialization
        """
        from engine.terminal import CURSOR_OFF

        self.width = width
        self.height = height
        # Auto-detect terminal size (handle case where no terminal)
        try:
            term_size = os.get_terminal_size()
            term_width = term_size.columns
            term_height = term_size.lines
        except OSError:
            # No terminal available (e.g., in tests)
            term_width = width if width > 0 else 80
            term_height = height if height > 0 else 24

        # Use provided dimensions if valid, otherwise use terminal size
        if width > 0 and height > 0:
            self.width = min(width, term_width)
            self.height = min(height, term_height)
        else:
            self.width = term_width
            self.height = term_height

        if not reuse or not self._initialized:
            print(CURSOR_OFF, end="", flush=True)
            self._initialized = True

    def show(self, buffer: list[str]) -> None:
    def get_dimensions(self) -> tuple[int, int]:
        """Get current terminal dimensions.

        Returns cached dimensions to avoid querying terminal every frame,
        which can cause inconsistent results. Dimensions are only refreshed
        when they actually change.

        Returns:
            (width, height) in character cells
        """
        try:
            term_size = os.get_terminal_size()
            new_dims = (term_size.columns, term_size.lines)
        except OSError:
            new_dims = (self.width, self.height)

        # Only update cached dimensions if they actually changed
        if self._cached_dimensions is None or self._cached_dimensions != new_dims:
            self._cached_dimensions = new_dims
            self.width = new_dims[0]
            self.height = new_dims[1]

        return self._cached_dimensions

    def show(
        self, buffer: list[str], border: bool = False, positioning: str = "mixed"
    ) -> None:
        """Display buffer with optional border and positioning mode.

        Args:
            buffer: List of lines to display
            border: Whether to apply border
            positioning: Positioning mode - "mixed" (default), "absolute", or "relative"
        """
        import sys

        t0 = time.perf_counter()
        sys.stdout.buffer.write("".join(buffer).encode())
        sys.stdout.flush()
        elapsed_ms = (time.perf_counter() - t0) * 1000
        from engine.display import get_monitor, render_border

        from engine.display import get_monitor
        # Note: Frame rate limiting is handled by the caller (e.g., FrameTimer).
        # This display renders every frame it receives.

        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("terminal_display", elapsed_ms, chars_in, chars_in)
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Apply border if requested
        from engine.display import BorderMode

        if border and border != BorderMode.OFF:
            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        # Apply positioning based on mode
        if positioning == "absolute":
            # All lines should have cursor positioning codes
            # Join with newlines (cursor codes already in buffer)
            output = "\033[H\033[J" + "\n".join(buffer)
        elif positioning == "relative":
            # Remove cursor positioning codes (except colors) and join with newlines
            import re

            cleaned_buffer = []
            for line in buffer:
                # Remove cursor positioning codes but keep color codes
                # Pattern: \033[row;colH or \033[row;col;...H
                cleaned = re.sub(r"\033\[[0-9;]*H", "", line)
                cleaned_buffer.append(cleaned)
            output = "\033[H\033[J" + "\n".join(cleaned_buffer)
        else:  # mixed (default)
            # Current behavior: join with newlines
            # Effects that need absolute positioning have their own cursor codes
            output = "\033[H\033[J" + "\n".join(buffer)

        sys.stdout.buffer.write(output.encode())
        sys.stdout.flush()
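The "relative" branch strips only cursor-positioning sequences (`ESC [ … H`) while leaving SGR color codes intact. For example:

```python
import re

# A line carrying a cursor-positioning code, a red color code, and a reset
line = "\033[3;7H\033[31mhello\033[0m"
cleaned = re.sub(r"\033\[[0-9;]*H", "", line)
assert cleaned == "\033[31mhello\033[0m"  # colors survive, positioning removed
```

The pattern works because positioning sequences end in `H` while color sequences end in `m`, so the two never collide.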
    def clear(self) -> None:
        from engine.terminal import CLR
@@ -56,4 +157,182 @@ class TerminalDisplay:
    def cleanup(self) -> None:
        from engine.terminal import CURSOR_ON

        # Disable mouse tracking if enabled
        self.disable_mouse_tracking()

        # Restore normal terminal mode if raw mode was enabled
        self.set_raw_mode(False)

        print(CURSOR_ON, end="", flush=True)

    def is_quit_requested(self) -> bool:
        """Check if quit was requested (optional protocol method)."""
        return self._quit_requested

    def clear_quit_request(self) -> None:
        """Clear quit request (optional protocol method)."""
        self._quit_requested = False

    def request_quit(self) -> None:
        """Request quit (e.g., when Ctrl+C is pressed)."""
        self._quit_requested = True

    def enable_mouse_tracking(self) -> None:
        """Enable SGR mouse tracking mode."""
        try:
            # SGR mouse mode: \x1b[?1006h
            sys.stdout.write("\x1b[?1006h")
            sys.stdout.flush()
        except (OSError, AttributeError):
            pass  # Terminal might not support mouse tracking

    def disable_mouse_tracking(self) -> None:
        """Disable SGR mouse tracking mode."""
        try:
            # Disable SGR mouse mode: \x1b[?1006l
            sys.stdout.write("\x1b[?1006l")
            sys.stdout.flush()
        except (OSError, AttributeError):
            pass

    def set_raw_mode(self, enable: bool = True) -> None:
        """Enable/disable raw terminal mode for input capture.

        When raw mode is enabled:
        - Keystrokes are read immediately without echo
        - Special keys (arrows, Ctrl+C, etc.) are captured
        - Terminal is not in cooked/canonical mode

        Args:
            enable: True to enable raw mode, False to restore normal mode
        """
        try:
            if enable and not self._raw_mode_enabled:
                # Save original terminal settings
                self._original_termios = termios.tcgetattr(sys.stdin)
                # Set raw mode
                tty.setraw(sys.stdin.fileno())
                self._raw_mode_enabled = True
                # Enable mouse tracking
                self.enable_mouse_tracking()
            elif not enable and self._raw_mode_enabled:
                # Disable mouse tracking
                self.disable_mouse_tracking()
                # Restore original terminal settings
                if self._original_termios:
                    termios.tcsetattr(
                        sys.stdin, termios.TCSADRAIN, self._original_termios
                    )
                self._raw_mode_enabled = False
        except (termios.error, OSError):
            # Terminal might not support raw mode (e.g., in tests)
            pass

    def get_input_keys(self, timeout: float = 0.0) -> list[str]:
        """Get available keyboard input.

        Reads available keystrokes from stdin. Should be called
        with raw mode enabled for best results.

        Args:
            timeout: Maximum time to wait for input (seconds)

        Returns:
            List of key symbols as strings
        """
        keys = []

        try:
            # Check if input is available
            if select.select([sys.stdin], [], [], timeout)[0]:
                char = sys.stdin.read(1)

                if char == "\x1b":  # Escape sequence
                    # Read next characters to determine key
                    # Try to read up to 10 chars for longer sequences
                    seq = sys.stdin.read(10)

                    # PageUp: \x1b[5~
                    if seq.startswith("[5~"):
                        keys.append("page_up")
                    # PageDown: \x1b[6~
                    elif seq.startswith("[6~"):
                        keys.append("page_down")
                    # Mouse events: \x1b[<B;X;Ym or \x1b[<B;X;YM
                    # (checked before the generic "[" branch, which would
                    # otherwise shadow it and make this case unreachable)
                    elif seq.startswith("[<"):
                        mouse_seq = "\x1b" + seq
                        mouse_data = self._parse_mouse_event(mouse_seq)
                        if mouse_data:
                            keys.append(mouse_data)
                        else:
                            # Unknown escape sequence
                            keys.append("escape")
                    # Arrow keys: \x1b[A, \x1b[B, etc.
                    elif seq.startswith("["):
                        if seq[1] == "A":
                            keys.append("up")
                        elif seq[1] == "B":
                            keys.append("down")
                        elif seq[1] == "C":
                            keys.append("right")
                        elif seq[1] == "D":
                            keys.append("left")
                        else:
                            # Unknown escape sequence
                            keys.append("escape")
                elif char == "\n" or char == "\r":
                    keys.append("return")
                elif char == "\t":
                    keys.append("tab")
                elif char == " ":
                    keys.append(" ")
                elif char == "\x7f" or char == "\x08":  # Backspace or Ctrl+H
                    keys.append("backspace")
                elif char == "\x03":  # Ctrl+C
                    keys.append("ctrl_c")
                elif char == "\x04":  # Ctrl+D
                    keys.append("ctrl_d")
                elif char.isprintable():
                    keys.append(char)
        except OSError:
            pass

        return keys

    def _parse_mouse_event(self, data: str) -> str | None:
        """Parse SGR mouse event sequence.

        Format: \x1b[<B;X;Ym (release) or \x1b[<B;X;YM (press)
        B = button number (0=left, 1=middle, 2=right, 64=wheel up, 65=wheel down)
        X, Y = coordinates (1-indexed)

        Returns:
            Mouse event string like "mouse:64:10:5" or None if not a mouse event
        """
        if not data.startswith("\x1b[<"):
            return None

        # Find the ending 'm' or 'M'
        end_pos = data.rfind("m")
        if end_pos == -1:
            end_pos = data.rfind("M")
            if end_pos == -1:
                return None

        inner = data[3:end_pos]  # Remove \x1b[< and trailing m/M
        parts = inner.split(";")

        if len(parts) >= 3:
            try:
                button = int(parts[0])
                x = int(parts[1]) - 1  # Convert to 0-indexed
                y = int(parts[2]) - 1
                return f"mouse:{button}:{x}:{y}"
            except ValueError:
                pass

        return None
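Assuming the SGR format described above, a wheel-up press at column 10, row 5 parses like this. This is a standalone sketch of `_parse_mouse_event`'s logic, not the method itself (the helper name is illustrative):

```python
def parse_sgr_mouse(data: str):
    """Parse an SGR sequence '\\x1b[<B;X;Ym' / '...M' into 'mouse:B:X:Y' (0-indexed)."""
    if not data.startswith("\x1b[<") or data[-1] not in "mM":
        return None
    parts = data[3:-1].split(";")  # strip ESC[< prefix and trailing m/M
    if len(parts) != 3:
        return None
    try:
        button, x, y = (int(p) for p in parts)
    except ValueError:
        return None
    return f"mouse:{button}:{x - 1}:{y - 1}"  # convert 1-indexed coords to 0-indexed


assert parse_sgr_mouse("\x1b[<64;10;5M") == "mouse:64:9:4"  # wheel up at (9, 4)
assert parse_sgr_mouse("\x1b[A") is None                    # not a mouse event
```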
    def is_raw_mode_enabled(self) -> bool:
        """Check if raw mode is currently enabled."""
        return self._raw_mode_enabled

@@ -1,12 +1,44 @@
"""
WebSocket display backend - broadcasts frame buffer to connected web clients.

Supports streaming protocols:
- Full frame (JSON) - default for compatibility
- Binary streaming - efficient binary protocol
- Diff streaming - only sends changed lines

TODO: Transform to a true streaming backend with:
- Proper WebSocket message streaming (currently sends full buffer each frame)
- Connection pooling and backpressure handling
- Binary protocol for efficiency (instead of JSON)
- Client management with proper async handling
- Mark for deprecation if replaced by a new streaming implementation

Current implementation: Simple broadcast of text frames to all connected clients.
"""

import asyncio
import base64
import json
import threading
import time
from typing import Protocol
from enum import IntFlag

from engine.display.streaming import (
    MessageType,
    compress_frame,
    compute_diff,
    encode_binary_message,
    encode_diff_message,
)


class StreamingMode(IntFlag):
    """Streaming modes for WebSocket display."""

    JSON = 0x01  # Full JSON frames (default, compatible)
    BINARY = 0x02  # Binary compression
    DIFF = 0x04  # Differential updates
|
||||
try:
|
||||
import websockets
|
||||
@@ -14,29 +46,6 @@ except ImportError:
|
||||
websockets = None
|
||||
|
||||
|
||||
class Display(Protocol):
|
||||
"""Protocol for display backends."""
|
||||
|
||||
width: int
|
||||
height: int
|
||||
|
||||
def init(self, width: int, height: int) -> None:
|
||||
"""Initialize display with dimensions."""
|
||||
...
|
||||
|
||||
def show(self, buffer: list[str]) -> None:
|
||||
"""Show buffer on display."""
|
||||
...
|
||||
|
||||
def clear(self) -> None:
|
||||
"""Clear display."""
|
||||
...
|
||||
|
||||
def cleanup(self) -> None:
|
||||
"""Shutdown display."""
|
||||
...
|
||||
|
||||
|
||||
def get_monitor():
|
||||
"""Get the performance monitor."""
|
||||
try:
|
||||
@@ -58,6 +67,7 @@ class WebSocketDisplay:
|
||||
host: str = "0.0.0.0",
|
||||
port: int = 8765,
|
||||
http_port: int = 8766,
|
||||
streaming_mode: StreamingMode = StreamingMode.JSON,
|
||||
):
|
||||
self.host = host
|
||||
self.port = port
|
||||
@@ -73,7 +83,15 @@ class WebSocketDisplay:
|
||||
self._max_clients = 10
|
||||
self._client_connected_callback = None
|
||||
self._client_disconnected_callback = None
|
||||
self._command_callback = None
|
||||
self._controller = None # Reference to UI panel or pipeline controller
|
||||
self._frame_delay = 0.0
|
||||
self._httpd = None # HTTP server instance
|
||||
|
||||
# Streaming configuration
|
||||
self._streaming_mode = streaming_mode
|
||||
self._last_buffer: list[str] = []
|
||||
self._client_capabilities: dict = {} # Track client capabilities
|
||||
|
||||
try:
|
||||
import websockets as _ws
|
||||
@@ -101,37 +119,104 @@ class WebSocketDisplay:
|
||||
self.start_server()
|
||||
self.start_http_server()
|
||||
|
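The `StreamingMode` flags are combined and tested bitwise; a quick standalone check (stdlib only, re-declaring the enum for illustration):

```python
from enum import IntFlag

class StreamingMode(IntFlag):
    JSON = 0x01
    BINARY = 0x02
    DIFF = 0x04

# A client may advertise several capabilities at once
client_mode = StreamingMode.BINARY | StreamingMode.DIFF
assert client_mode & StreamingMode.DIFF        # truthy: diff supported
assert not (client_mode & StreamingMode.JSON)  # JSON bit not set
```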
    def show(self, buffer: list[str]) -> None:
        """Broadcast buffer to all connected clients."""
    def show(self, buffer: list[str], border: bool = False) -> None:
        """Broadcast buffer to all connected clients using streaming protocol."""
        t0 = time.perf_counter()

        if self._clients:
            frame_data = {
                "type": "frame",
                "width": self.width,
                "height": self.height,
                "lines": buffer,
            }
            message = json.dumps(frame_data)
        # Get metrics for border display
        fps = 0.0
        frame_time = 0.0
        monitor = get_monitor()
        if monitor:
            stats = monitor.get_stats()
            avg_ms = stats.get("pipeline", {}).get("avg_ms", 0) if stats else 0
            frame_count = stats.get("frame_count", 0) if stats else 0
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

            disconnected = set()
            for client in list(self._clients):
                try:
                    asyncio.run(client.send(message))
                except Exception:
                    disconnected.add(client)
        # Apply border if requested
        if border:
            from engine.display import render_border

            for client in disconnected:
                self._clients.discard(client)
                if self._client_disconnected_callback:
                    self._client_disconnected_callback(client)
            buffer = render_border(buffer, self.width, self.height, fps, frame_time)

        if not self._clients:
            self._last_buffer = buffer
            return

        # Send to each client based on their capabilities
        disconnected = set()
        for client in list(self._clients):
            try:
                client_id = id(client)
                client_mode = self._client_capabilities.get(
                    client_id, StreamingMode.JSON
                )

                if client_mode & StreamingMode.DIFF:
                    self._send_diff_frame(client, buffer)
                elif client_mode & StreamingMode.BINARY:
                    self._send_binary_frame(client, buffer)
                else:
                    self._send_json_frame(client, buffer)
            except Exception:
                disconnected.add(client)

        for client in disconnected:
            self._clients.discard(client)
            if self._client_disconnected_callback:
                self._client_disconnected_callback(client)

        self._last_buffer = buffer

        elapsed_ms = (time.perf_counter() - t0) * 1000
        monitor = get_monitor()
        if monitor:
            chars_in = sum(len(line) for line in buffer)
            monitor.record_effect("websocket_display", elapsed_ms, chars_in, chars_in)

    def _send_json_frame(self, client, buffer: list[str]) -> None:
        """Send frame as JSON."""
        frame_data = {
            "type": "frame",
            "width": self.width,
            "height": self.height,
            "lines": buffer,
        }
        message = json.dumps(frame_data)
        asyncio.run(client.send(message))

    def _send_binary_frame(self, client, buffer: list[str]) -> None:
        """Send frame as compressed binary."""
        compressed = compress_frame(buffer)
        message = encode_binary_message(
            MessageType.FULL_FRAME, self.width, self.height, compressed
        )
        encoded = base64.b64encode(message).decode("utf-8")
        asyncio.run(client.send(encoded))

    def _send_diff_frame(self, client, buffer: list[str]) -> None:
        """Send frame as diff."""
        diff = compute_diff(self._last_buffer, buffer)

        if not diff.changed_lines:
            return

        diff_payload = encode_diff_message(diff)
        message = encode_binary_message(
            MessageType.DIFF_FRAME, self.width, self.height, diff_payload
        )
        encoded = base64.b64encode(message).decode("utf-8")
        asyncio.run(client.send(encoded))

    def set_streaming_mode(self, mode: StreamingMode) -> None:
        """Set the default streaming mode for new clients."""
        self._streaming_mode = mode

    def get_streaming_mode(self) -> StreamingMode:
        """Get the current streaming mode."""
        return self._streaming_mode

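The full binary send path (compress, binary header, base64) can be round-tripped in one place; a self-contained sketch assuming the header layout documented in `encode_binary_message` (`!BHHI`):

```python
import base64
import struct
import zlib

FULL_FRAME = 1  # MessageType.FULL_FRAME

def encode_wire_frame(buffer, width, height, level=6):
    # zlib-compress the joined lines, prepend type/width/height/length header,
    # then base64 so it survives a text WebSocket channel
    payload = zlib.compress("\n".join(buffer).encode("utf-8"), level)
    header = struct.pack("!BHHI", FULL_FRAME, width, height, len(payload))
    return base64.b64encode(header + payload).decode("utf-8")

def decode_wire_frame(text):
    raw = base64.b64decode(text)
    msg_type, width, height, plen = struct.unpack("!BHHI", raw[:9])
    lines = zlib.decompress(raw[9:9 + plen]).decode("utf-8").split("\n")
    return msg_type, width, height, lines

buf = ["hello", "world"]
wire = encode_wire_frame(buf, 5, 2)
assert decode_wire_frame(wire) == (FULL_FRAME, 5, 2, buf)
```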
    def clear(self) -> None:
        """Broadcast clear command to all clients."""
        if self._clients:
@@ -162,9 +247,21 @@
            async for message in websocket:
                try:
                    data = json.loads(message)
                    if data.get("type") == "resize":
                    msg_type = data.get("type")

                    if msg_type == "resize":
                        self.width = data.get("width", 80)
                        self.height = data.get("height", 24)
                    elif msg_type == "command" and self._command_callback:
                        # Forward commands to the pipeline controller
                        command = data.get("command", {})
                        self._command_callback(command)
                    elif msg_type == "state_request":
                        # Send current state snapshot
                        state = self._get_state_snapshot()
                        if state:
                            response = {"type": "state", "state": state}
                            await websocket.send(json.dumps(response))
                except json.JSONDecodeError:
                    pass
        except Exception:
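The control messages handled above are plain JSON objects; for illustration, messages a web client might send (the `command` payload fields here are hypothetical, only `type`/`command`/`width`/`height` are read by the handler):

```python
import json

resize = json.dumps({"type": "resize", "width": 100, "height": 30})
command = json.dumps(
    {"type": "command", "command": {"action": "toggle", "stage": "fade"}}
)

# The handler parses and dispatches on data.get("type")
data = json.loads(resize)
assert data.get("type") == "resize"
assert data.get("width", 80) == 100
assert json.loads(command).get("command", {})["action"] == "toggle"
```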
@@ -176,6 +273,8 @@ class WebSocketDisplay:

    async def _run_websocket_server(self):
        """Run the WebSocket server."""
        if not websockets:
            return
        async with websockets.serve(self._websocket_handler, self.host, self.port):
            while self._server_running:
                await asyncio.sleep(0.1)
@@ -185,9 +284,23 @@ class WebSocketDisplay:
        import os
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        client_dir = os.path.join(
            os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "client"
        )
        # Find the project root by locating 'engine' directory in the path
        websocket_file = os.path.abspath(__file__)
        parts = websocket_file.split(os.sep)
        if "engine" in parts:
            engine_idx = parts.index("engine")
            project_root = os.sep.join(parts[:engine_idx])
            client_dir = os.path.join(project_root, "client")
        else:
            # Fallback: go up 4 levels from websocket.py
            # websocket.py: .../engine/display/backends/websocket.py
            # We need: .../client
            client_dir = os.path.join(
                os.path.dirname(
                    os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
                ),
                "client",
            )

        class Handler(SimpleHTTPRequestHandler):
            def __init__(self, *args, **kwargs):
@@ -197,8 +310,10 @@ class WebSocketDisplay:
                pass

        httpd = HTTPServer((self.host, self.http_port), Handler)
        while self._http_running:
            httpd.handle_request()
        # Store reference for shutdown
        self._httpd = httpd
        # Serve requests continuously
        httpd.serve_forever()

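The root-discovery logic above can be checked in isolation (hypothetical install path, stdlib only):

```python
import os

# Hypothetical installed location of websocket.py
path = os.sep.join(["", "srv", "app", "engine", "display", "backends", "websocket.py"])
parts = path.split(os.sep)

# Everything before the "engine" component is the project root
if "engine" in parts:
    project_root = os.sep.join(parts[: parts.index("engine")])
    client_dir = os.path.join(project_root, "client")

assert client_dir.endswith("client")
assert "engine" not in client_dir
```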
    def _run_async(self, coro):
        """Run coroutine in background."""
@@ -243,6 +358,8 @@ class WebSocketDisplay:
    def stop_http_server(self):
        """Stop the HTTP server."""
        self._http_running = False
        if hasattr(self, "_httpd") and self._httpd:
            self._httpd.shutdown()
        self._http_thread = None

    def client_count(self) -> int:
@@ -272,3 +389,76 @@ class WebSocketDisplay:
    def set_client_disconnected_callback(self, callback) -> None:
        """Set callback for client disconnections."""
        self._client_disconnected_callback = callback

    def set_command_callback(self, callback) -> None:
        """Set callback for incoming command messages from clients."""
        self._command_callback = callback

    def set_controller(self, controller) -> None:
        """Set controller (UI panel or pipeline) for state queries and command execution."""
        self._controller = controller

    def broadcast_state(self, state: dict) -> None:
        """Broadcast state update to all connected clients.

        Args:
            state: Dictionary containing state data to send to clients
        """
        if not self._clients:
            return

        message = json.dumps({"type": "state", "state": state})

        disconnected = set()
        for client in list(self._clients):
            try:
                asyncio.run(client.send(message))
            except Exception:
                disconnected.add(client)

        for client in disconnected:
            self._clients.discard(client)
            if self._client_disconnected_callback:
                self._client_disconnected_callback(client)

    def _get_state_snapshot(self) -> dict | None:
        """Get current state snapshot from controller."""
        if not self._controller:
            return None

        try:
            # Expect controller to have methods we need
            state = {}

            # Get stages info if UIPanel
            if hasattr(self._controller, "stages"):
                state["stages"] = {
                    name: {
                        "enabled": ctrl.enabled,
                        "params": ctrl.params,
                        "selected": ctrl.selected,
                    }
                    for name, ctrl in self._controller.stages.items()
                }

            # Get current preset
            if hasattr(self._controller, "_current_preset"):
                state["preset"] = self._controller._current_preset
            if hasattr(self._controller, "_presets"):
                state["presets"] = self._controller._presets

            # Get selected stage
            if hasattr(self._controller, "selected_stage"):
                state["selected_stage"] = self._controller.selected_stage

            return state
        except Exception:
            return None

    def get_dimensions(self) -> tuple[int, int]:
        """Get current dimensions.

        Returns:
            (width, height) in character cells
        """
        return (self.width, self.height)

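`_get_state_snapshot` probes the controller duck-typed via `hasattr`; a minimal standalone sketch (the `Panel` class and its attribute values are hypothetical):

```python
class Panel:
    # Hypothetical controller exposing some of the attributes the snapshot probes
    def __init__(self):
        self.selected_stage = "fade"
        self._current_preset = "default"

def snapshot(controller) -> dict:
    # Collect only the attributes the controller actually has
    state = {}
    for attr, key in [("stages", "stages"),
                      ("_current_preset", "preset"),
                      ("selected_stage", "selected_stage")]:
        if hasattr(controller, attr):
            state[key] = getattr(controller, attr)
    return state

assert snapshot(Panel()) == {"preset": "default", "selected_stage": "fade"}
```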
engine/display/streaming.py (new file, 268 lines)
@@ -0,0 +1,268 @@
"""
Streaming protocol utilities for efficient frame transmission.

Provides:
- Frame differencing: Only send changed lines
- Run-length encoding: Compress repeated lines
- Binary encoding: Compact message format
"""

import json
import zlib
from dataclasses import dataclass
from enum import IntEnum


class MessageType(IntEnum):
    """Message types for streaming protocol."""

    FULL_FRAME = 1
    DIFF_FRAME = 2
    STATE = 3
    CLEAR = 4
    PING = 5
    PONG = 6


@dataclass
class FrameDiff:
    """Represents a diff between two frames."""

    width: int
    height: int
    changed_lines: list[tuple[int, str]]  # (line_index, content)


def compute_diff(old_buffer: list[str], new_buffer: list[str]) -> FrameDiff:
    """Compute differences between old and new buffer.

    Args:
        old_buffer: Previous frame buffer
        new_buffer: Current frame buffer

    Returns:
        FrameDiff with only changed lines
    """
    height = len(new_buffer)
    changed_lines = []

    for i, line in enumerate(new_buffer):
        if i >= len(old_buffer) or line != old_buffer[i]:
            changed_lines.append((i, line))

    return FrameDiff(
        width=len(new_buffer[0]) if new_buffer else 0,
        height=height,
        changed_lines=changed_lines,
    )

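A standalone round trip of `compute_diff` with the matching `apply_diff` defined later in this file (both re-declared here so the sketch runs on its own):

```python
from dataclasses import dataclass

@dataclass
class FrameDiff:
    width: int
    height: int
    changed_lines: list  # (line_index, content) pairs

def compute_diff(old, new):
    changed = [(i, line) for i, line in enumerate(new)
               if i >= len(old) or line != old[i]]
    return FrameDiff(width=len(new[0]) if new else 0,
                     height=len(new), changed_lines=changed)

def apply_diff(old, diff):
    buf = list(old)
    for i, content in diff.changed_lines:
        if i < len(buf):
            buf[i] = content
        else:
            while len(buf) < i:
                buf.append("")
            buf.append(content)
    while len(buf) < diff.height:
        buf.append("")
    return buf[: diff.height]

old = ["aaa", "bbb", "ccc"]
new = ["aaa", "BBB", "ccc"]
d = compute_diff(old, new)
assert d.changed_lines == [(1, "BBB")]   # only the changed line is carried
assert apply_diff(old, d) == new         # receiver reconstructs the frame
```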
def encode_rle(lines: list[tuple[int, str]]) -> list[tuple[int, str, int]]:
    """Run-length encode consecutive identical lines.

    Args:
        lines: List of (index, content) tuples (must be sorted by index)

    Returns:
        List of (start_index, content, run_length) tuples
    """
    if not lines:
        return []

    encoded = []
    start_idx = lines[0][0]
    current_line = lines[0][1]
    current_rle = 1

    for idx, line in lines[1:]:
        # Extend the run only for index-consecutive identical lines;
        # otherwise decode_rle would reassemble them at the wrong indices
        if line == current_line and idx == start_idx + current_rle:
            current_rle += 1
        else:
            encoded.append((start_idx, current_line, current_rle))
            start_idx = idx
            current_line = line
            current_rle = 1

    encoded.append((start_idx, current_line, current_rle))
    return encoded


def decode_rle(encoded: list[tuple[int, str, int]]) -> list[tuple[int, str]]:
    """Decode run-length encoded lines.

    Args:
        encoded: List of (start_index, content, run_length) tuples

    Returns:
        List of (index, content) tuples
    """
    result = []
    for start_idx, line, rle in encoded:
        for i in range(rle):
            result.append((start_idx + i, line))
    return result

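A quick round trip of the run-length scheme above (self-contained sketch; note a run is only extended for index-consecutive lines, so decoding restores the original positions):

```python
def rle_encode(lines):
    """(index, content) pairs -> (start_index, content, run_length)."""
    if not lines:
        return []
    encoded = []
    start, cur, run = lines[0][0], lines[0][1], 1
    for idx, line in lines[1:]:
        # extend only index-consecutive identical lines
        if line == cur and idx == start + run:
            run += 1
        else:
            encoded.append((start, cur, run))
            start, cur, run = idx, line, 1
    encoded.append((start, cur, run))
    return encoded

def rle_decode(encoded):
    return [(s + i, line) for s, line, run in encoded for i in range(run)]

changed = [(2, "~~~"), (3, "~~~"), (4, "~~~"), (7, "hi")]
assert rle_encode(changed) == [(2, "~~~", 3), (7, "hi", 1)]
assert rle_decode(rle_encode(changed)) == changed
```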
def compress_frame(buffer: list[str], level: int = 6) -> bytes:
    """Compress a frame buffer using zlib.

    Args:
        buffer: Frame buffer (list of lines)
        level: Compression level (0-9)

    Returns:
        Compressed bytes
    """
    content = "\n".join(buffer)
    return zlib.compress(content.encode("utf-8"), level)


def decompress_frame(data: bytes, height: int) -> list[str]:
    """Decompress a frame buffer.

    Args:
        data: Compressed bytes
        height: Number of lines in original buffer

    Returns:
        Frame buffer (list of lines)
    """
    content = zlib.decompress(data).decode("utf-8")
    lines = content.split("\n")
    if len(lines) > height:
        lines = lines[:height]
    while len(lines) < height:
        lines.append("")
    return lines

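The compress/decompress pair above also normalizes the line count to the requested height; a standalone check:

```python
import zlib

def compress_frame(buffer, level=6):
    return zlib.compress("\n".join(buffer).encode("utf-8"), level)

def decompress_frame(data, height):
    lines = zlib.decompress(data).decode("utf-8").split("\n")
    lines = lines[:height]              # truncate if too tall
    lines += [""] * (height - len(lines))  # pad with blanks if too short
    return lines

frame = ["hello", "world"]
packed = compress_frame(frame)
assert decompress_frame(packed, 2) == ["hello", "world"]
assert decompress_frame(packed, 4) == ["hello", "world", "", ""]  # padded
assert decompress_frame(packed, 1) == ["hello"]                   # truncated
```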
def encode_binary_message(
    msg_type: MessageType, width: int, height: int, payload: bytes
) -> bytes:
    """Encode a binary message.

    Message format:
    - 1 byte: message type
    - 2 bytes: width (uint16)
    - 2 bytes: height (uint16)
    - 4 bytes: payload length (uint32)
    - N bytes: payload

    Args:
        msg_type: Message type
        width: Frame width
        height: Frame height
        payload: Message payload

    Returns:
        Encoded binary message
    """
    import struct

    header = struct.pack("!BHHI", msg_type.value, width, height, len(payload))
    return header + payload


def decode_binary_message(data: bytes) -> tuple[MessageType, int, int, bytes]:
    """Decode a binary message.

    Args:
        data: Binary message data

    Returns:
        Tuple of (msg_type, width, height, payload)
    """
    import struct

    msg_type_val, width, height, payload_len = struct.unpack("!BHHI", data[:9])
    payload = data[9 : 9 + payload_len]
    return MessageType(msg_type_val), width, height, payload


def encode_diff_message(diff: FrameDiff, use_rle: bool = True) -> bytes:
    """Encode a diff message for transmission.

    Args:
        diff: Frame diff
        use_rle: Whether to use run-length encoding

    Returns:
        Encoded diff payload
    """
    if use_rle:
        encoded_lines = encode_rle(diff.changed_lines)
        data = [[idx, line, rle] for idx, line, rle in encoded_lines]
    else:
        data = [[idx, line] for idx, line in diff.changed_lines]

    payload = json.dumps(data).encode("utf-8")
    return payload


def decode_diff_message(payload: bytes, use_rle: bool = True) -> list[tuple[int, str]]:
    """Decode a diff message.

    Args:
        payload: Encoded diff payload
        use_rle: Whether run-length encoding was used

    Returns:
        List of (line_index, content) tuples
    """
    data = json.loads(payload.decode("utf-8"))

    if use_rle:
        return decode_rle([(idx, line, rle) for idx, line, rle in data])
    else:
        return [(idx, line) for idx, line in data]


def should_use_diff(
    old_buffer: list[str], new_buffer: list[str], threshold: float = 0.3
) -> bool:
    """Determine if diff or full frame is more efficient.

    Args:
        old_buffer: Previous frame
        new_buffer: Current frame
        threshold: Max changed ratio to use diff (0.0-1.0)

    Returns:
        True if diff is more efficient
    """
    if not old_buffer or not new_buffer:
        return False

    diff = compute_diff(old_buffer, new_buffer)
    total_lines = len(new_buffer)
    changed_ratio = len(diff.changed_lines) / total_lines if total_lines > 0 else 1.0

    return changed_ratio <= threshold


def apply_diff(old_buffer: list[str], diff: FrameDiff) -> list[str]:
    """Apply a diff to an old buffer to get the new buffer.

    Args:
        old_buffer: Previous frame buffer
        diff: Frame diff to apply

    Returns:
        New frame buffer
    """
    new_buffer = list(old_buffer)

    for line_idx, content in diff.changed_lines:
        if line_idx < len(new_buffer):
            new_buffer[line_idx] = content
        else:
            while len(new_buffer) < line_idx:
                new_buffer.append("")
            new_buffer.append(content)

    while len(new_buffer) < diff.height:
        new_buffer.append("")

    return new_buffer[: diff.height]
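The `should_use_diff` heuristic compares the changed-line ratio against a threshold; a standalone sketch:

```python
def should_use_diff(old, new, threshold=0.3):
    # No baseline (or empty frame): a diff has nothing to build on
    if not old or not new:
        return False
    changed = sum(1 for i, line in enumerate(new)
                  if i >= len(old) or line != old[i])
    return changed / len(new) <= threshold

old = ["a"] * 10
assert should_use_diff(old, ["a"] * 9 + ["b"])   # 1/10 changed -> diff wins
assert not should_use_diff(old, ["b"] * 10)      # everything changed -> full frame
assert not should_use_diff([], ["a"])            # no baseline -> full frame
```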
@@ -18,13 +18,6 @@ from engine.effects.types import (
    create_effect_context,
)


def get_effect_chain():
    from engine.layers import get_effect_chain as _chain

    return _chain()


__all__ = [
    "EffectChain",
    "EffectRegistry",
@@ -34,7 +27,6 @@ __all__ = [
    "create_effect_context",
    "get_registry",
    "set_registry",
    "get_effect_chain",
    "get_monitor",
    "set_monitor",
    "PerformanceMonitor",

@@ -2,7 +2,7 @@ import time

from engine.effects.performance import PerformanceMonitor, get_monitor
from engine.effects.registry import EffectRegistry
from engine.effects.types import EffectContext
from engine.effects.types import EffectContext, PartialUpdate


class EffectChain:
@@ -51,6 +51,18 @@ class EffectChain:
        frame_number = ctx.frame_number
        monitor.start_frame(frame_number)

        # Get dirty regions from canvas via context (set by CanvasStage)
        dirty_rows = ctx.get_state("canvas.dirty_rows")

        # Create PartialUpdate for effects that support it
        full_buffer = dirty_rows is None or len(dirty_rows) == 0
        partial = PartialUpdate(
            rows=None,
            cols=None,
            dirty=dirty_rows,
            full_buffer=full_buffer,
        )

        frame_start = time.perf_counter()
        result = list(buf)
        for name in self._order:
@@ -59,7 +71,11 @@ class EffectChain:
            chars_in = sum(len(line) for line in result)
            effect_start = time.perf_counter()
            try:
                result = plugin.process(result, ctx)
                # Use process_partial if supported, otherwise fall back to process
                if getattr(plugin, "supports_partial_updates", False):
                    result = plugin.process_partial(result, ctx, partial)
                else:
                    result = plugin.process(result, ctx)
            except Exception:
                plugin.config.enabled = False
            elapsed = time.perf_counter() - effect_start
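The capability check above is plain duck typing; a minimal standalone sketch (the plugin classes are hypothetical, and a bare dirty-row list stands in for the pipeline's `PartialUpdate`):

```python
class UppercaseAll:
    supports_partial_updates = False
    def process(self, buf, ctx):
        return [line.upper() for line in buf]

class UppercaseDirty:
    supports_partial_updates = True
    def process_partial(self, buf, ctx, dirty_rows):
        out = list(buf)
        for i in dirty_rows:  # only touch rows marked dirty
            out[i] = out[i].upper()
        return out

def run(plugin, buf, dirty_rows):
    # Same dispatch as the chain: prefer partial processing when advertised
    if getattr(plugin, "supports_partial_updates", False):
        return plugin.process_partial(buf, None, dirty_rows)
    return plugin.process(buf, None)

buf = ["ab", "cd", "ef"]
assert run(UppercaseAll(), buf, [1]) == ["AB", "CD", "EF"]
assert run(UppercaseDirty(), buf, [1]) == ["ab", "CD", "ef"]
```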
@@ -6,14 +6,7 @@ _effect_chain_ref = None


def _get_effect_chain():
    global _effect_chain_ref
    if _effect_chain_ref is not None:
        return _effect_chain_ref
    try:
        from engine.layers import get_effect_chain as _chain

        return _chain()
    except Exception:
        return None
    return _effect_chain_ref


def set_effect_chain_ref(chain) -> None:

@@ -18,7 +18,7 @@ def discover_plugins():
            continue

        try:
            module = __import__(f"effects_plugins.{module_name}", fromlist=[""])
            module = __import__(f"engine.effects.plugins.{module_name}", fromlist=[""])
            for attr_name in dir(module):
                attr = getattr(module, attr_name)
                if (
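The `__import__(..., fromlist=[""])` pattern above imports a dotted submodule and scans it for matching classes; the same pattern against a stdlib module for illustration:

```python
def find_subclasses(module_path: str, base: type) -> list[type]:
    # fromlist forces __import__ to return the leaf module, not the package root
    module = __import__(module_path, fromlist=[""])
    found = []
    for attr_name in dir(module):
        attr = getattr(module, attr_name)
        if isinstance(attr, type) and issubclass(attr, base) and attr is not base:
            found.append(attr)
    return found

from json import JSONDecodeError

# json.decoder defines JSONDecodeError, a ValueError subclass
assert JSONDecodeError in find_subclasses("json.decoder", ValueError)
```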
engine/effects/plugins/afterimage.py (new file, 122 lines)
@@ -0,0 +1,122 @@
"""Afterimage effect using previous frame."""

from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class AfterimageEffect(EffectPlugin):
    """Show a faint ghost of the previous frame.

    This effect requires a FrameBufferStage to be present in the pipeline.
    It shows a dimmed version of the previous frame superimposed on the
    current frame.

    Attributes:
        name: "afterimage"
        config: EffectConfig with intensity parameter (0.0-1.0)
        param_bindings: Optional sensor bindings for intensity modulation

    Example:
        >>> effect = AfterimageEffect()
        >>> effect.configure(EffectConfig(intensity=0.3))
        >>> result = effect.process(buffer, ctx)
    """

    name = "afterimage"
    config: EffectConfig = EffectConfig(enabled=True, intensity=0.3)
    param_bindings: dict[str, dict[str, str | float]] = {}
    supports_partial_updates = False

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        """Apply afterimage effect using the previous frame.

        Args:
            buf: Current text buffer (list of strings)
            ctx: Effect context with access to framebuffer history

        Returns:
            Buffer with ghost of previous frame overlaid
        """
        if not buf:
            return buf

        # Get framebuffer history from context
        history = None

        for key in ctx.state:
            if key.startswith("framebuffer.") and key.endswith(".history"):
                history = ctx.state[key]
                break

        if not history or len(history) < 1:
            # No previous frame available
            return buf

        # Get intensity from config
        intensity = self.config.params.get("intensity", self.config.intensity)
        intensity = max(0.0, min(1.0, intensity))

        if intensity <= 0.0:
            return buf

        # Get the previous frame (index 1, since index 0 is current)
        prev_frame = history[1] if len(history) > 1 else None
        if not prev_frame:
            return buf

        # Blend current and previous frames
        viewport_height = ctx.terminal_height - ctx.ticker_height
        result = []

        for row in range(len(buf)):
            if row >= viewport_height:
                result.append(buf[row])
                continue

            current_line = buf[row]
            prev_line = prev_frame[row] if row < len(prev_frame) else ""

            if not prev_line:
                result.append(current_line)
                continue

            # Apply dimming effect by reducing ANSI color intensity or adding transparency
            # For a simple text version, we'll use a blend strategy
            blended = self._blend_lines(current_line, prev_line, intensity)
            result.append(blended)

        return result

    def _blend_lines(self, current: str, previous: str, intensity: float) -> str:
        """Blend current and previous line with given intensity.

        For text with ANSI codes, true blending is complex. This is a simplified
        version that uses color averaging when possible.

        A more sophisticated implementation would:
        1. Parse ANSI color codes from both lines
        2. Blend RGB values based on intensity
        3. Reconstruct the line with blended colors

        For now, we'll use a heuristic: if lines are similar, return current.
        If they differ, we alternate or use the previous as a faint overlay.
        """
        if current == previous:
            return current

        # Simple blending: intensity determines mix
        # intensity=1.0 => fully current
        # intensity=0.3 => 70% previous ghost, 30% current

        if intensity > 0.7:
            return current
        elif intensity < 0.3:
            # Show previous but dimmed (simulate by adding faint color/gray)
            return previous  # Would need to dim ANSI colors
        else:
            # For medium intensity, alternate based on character pattern
            # This is a placeholder for proper blending
            return current

    def configure(self, config: EffectConfig) -> None:
        """Configure the effect."""
        self.config = config
engine/effects/plugins/border.py (new file, 105 lines)
@@ -0,0 +1,105 @@
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class BorderEffect(EffectPlugin):
    """Simple border effect for terminal display.

    Draws a border around the buffer and optionally displays
    performance metrics in the border corners.

    Internally crops to display dimensions to ensure border fits.
    """

    name = "border"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not buf:
            return buf

        # Get actual display dimensions from context
        display_w = ctx.terminal_width
        display_h = ctx.terminal_height

        # If dimensions are reasonable, crop first - use slightly smaller to ensure fit
        if display_w >= 10 and display_h >= 3:
            # Subtract 2 for border characters (left and right)
            crop_w = display_w - 2
            crop_h = display_h - 2
            buf = self._crop_to_size(buf, crop_w, crop_h)
            w = display_w
            h = display_h
        else:
            # Use buffer dimensions
            h = len(buf)
            w = max(len(line) for line in buf) if buf else 0

        if w < 3 or h < 3:
            return buf

        inner_w = w - 2

        # Get metrics from context
        fps = 0.0
        frame_time = 0.0
        metrics = ctx.get_state("metrics")
        if metrics:
            avg_ms = metrics.get("avg_ms")
            frame_count = metrics.get("frame_count", 0)
            if avg_ms and frame_count > 0:
                fps = 1000.0 / avg_ms
                frame_time = avg_ms

        # Build borders
        # Top border: ┌────────────────────┐ or with FPS
        if fps > 0:
            fps_str = f" FPS:{fps:.0f}"
            if len(fps_str) < inner_w:
                right_len = inner_w - len(fps_str)
                top_border = "┌" + "─" * right_len + fps_str + "┐"
            else:
                top_border = "┌" + "─" * inner_w + "┐"
        else:
            top_border = "┌" + "─" * inner_w + "┐"

        # Bottom border: └────────────────────┘ or with frame time
        if frame_time > 0:
            ft_str = f" {frame_time:.1f}ms"
            if len(ft_str) < inner_w:
                right_len = inner_w - len(ft_str)
                bottom_border = "└" + "─" * right_len + ft_str + "┘"
            else:
                bottom_border = "└" + "─" * inner_w + "┘"
        else:
            bottom_border = "└" + "─" * inner_w + "┘"

        # Build result with left/right borders
        result = [top_border]
        for line in buf[: h - 2]:
            if len(line) >= inner_w:
                result.append("│" + line[:inner_w] + "│")
            else:
                result.append("│" + line + " " * (inner_w - len(line)) + "│")

        result.append(bottom_border)

        return result

    def _crop_to_size(self, buf: list[str], w: int, h: int) -> list[str]:
        """Crop buffer to fit within w x h."""
        result = []
        for i in range(min(h, len(buf))):
            line = buf[i]
            if len(line) > w:
                result.append(line[:w])
            else:
                result.append(line + " " * (w - len(line)))

        # Pad with empty lines if needed (for border)
        while len(result) < h:
            result.append(" " * w)

        return result

    def configure(self, config: EffectConfig) -> None:
        self.config = config
engine/effects/plugins/crop.py (new file, 42 lines)
@@ -0,0 +1,42 @@
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class CropEffect(EffectPlugin):
    """Crop effect that crops the input buffer to fit the display.

    This ensures the output buffer matches the actual display dimensions,
    useful when the source produces a buffer larger than the viewport.
    """

    name = "crop"
    config = EffectConfig(enabled=True, intensity=1.0)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not buf:
            return buf

        # Get actual display dimensions from context
        w = (
            ctx.terminal_width
            if ctx.terminal_width > 0
            else max(len(line) for line in buf)
        )
        h = ctx.terminal_height if ctx.terminal_height > 0 else len(buf)

        # Crop buffer to fit
        result = []
        for i in range(min(h, len(buf))):
            line = buf[i]
            if len(line) > w:
                result.append(line[:w])
            else:
                result.append(line + " " * (w - len(line)))

        # Pad with empty lines if needed
        while len(result) < h:
            result.append(" " * w)

        return result

    def configure(self, config: EffectConfig) -> None:
        self.config = config
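The crop-and-pad loop above normalizes any buffer to an exact w x h rectangle; a standalone check:

```python
def crop_pad(buf: list[str], w: int, h: int) -> list[str]:
    # Truncate long lines, right-pad short ones, pad missing rows with blanks
    out = [(line[:w] if len(line) > w else line.ljust(w)) for line in buf[:h]]
    out += [" " * w] * (h - len(out))
    return out

assert crop_pad(["abcdef", "x"], 4, 3) == ["abcd", "x   ", "    "]
assert all(len(line) == 4 for line in crop_pad(["abcdef", "x"], 4, 3))
```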
@@ -5,7 +5,7 @@ from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class FadeEffect(EffectPlugin):
    name = "fade"
    config = EffectConfig(enabled=True, intensity=1.0)
    config = EffectConfig(enabled=True, intensity=1.0, entropy=0.1)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not ctx.ticker_height:
@@ -36,7 +36,7 @@ class FadeEffect(EffectPlugin):
        if fade >= 1.0:
            return s
        if fade <= 0.0:
            return ""
            return s  # Preserve original line length - don't return empty
        result = []
        i = 0
        while i < len(s):
engine/effects/plugins/figment.py (new file, 332 lines)
@@ -0,0 +1,332 @@
"""
Figment overlay effect for modern pipeline architecture.

Provides periodic SVG glyph overlays with reveal/hold/dissolve animation phases.
Integrates directly with the pipeline's effect system without legacy dependencies.
"""

import random
from dataclasses import dataclass
from enum import Enum, auto
from pathlib import Path

from engine import config
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
from engine.figment_render import rasterize_svg
from engine.figment_trigger import FigmentAction, FigmentCommand, FigmentTrigger
from engine.terminal import RST
from engine.themes import THEME_REGISTRY


class FigmentPhase(Enum):
    """Animation phases for figment overlay."""

    REVEAL = auto()
    HOLD = auto()
    DISSOLVE = auto()


@dataclass
class FigmentState:
    """State of a figment overlay at a given frame."""

    phase: FigmentPhase
    progress: float
    rows: list[str]
    gradient: list[int]
    center_row: int
    center_col: int


def _color_codes_to_ansi(gradient: list[int]) -> list[str]:
    """Convert gradient list to ANSI color codes.

    Args:
        gradient: List of 256-color palette codes

    Returns:
        List of ANSI escape code strings
    """
    codes = []
    for color in gradient:
        if isinstance(color, int):
            codes.append(f"\033[38;5;{color}m")
        else:
            # Fallback to green
            codes.append("\033[38;5;46m")
    return codes if codes else ["\033[38;5;46m"]


def render_figment_overlay(figment_state: FigmentState, w: int, h: int) -> list[str]:
    """Render figment overlay as ANSI cursor-positioning commands.

    Args:
        figment_state: FigmentState with phase, progress, rows, gradient, centering.
        w: terminal width
        h: terminal height

    Returns:
        List of ANSI strings to append to display buffer.
    """
    rows = figment_state.rows
    if not rows:
        return []

    phase = figment_state.phase
    progress = figment_state.progress
    gradient = figment_state.gradient
    center_row = figment_state.center_row
    center_col = figment_state.center_col

    cols = _color_codes_to_ansi(gradient)

    # Build a list of non-space cell positions
    cell_positions = []
    for r_idx, row in enumerate(rows):
        for c_idx, ch in enumerate(row):
            if ch != " ":
                cell_positions.append((r_idx, c_idx))

    n_cells = len(cell_positions)
    if n_cells == 0:
        return []

    # Use a deterministic seed so the reveal/dissolve pattern is stable per-figment
    rng = random.Random(hash(tuple(rows[0][:10])) if rows[0] else 42)
    shuffled = list(cell_positions)
    rng.shuffle(shuffled)

    # Phase-dependent visibility
    if phase == FigmentPhase.REVEAL:
        visible_count = int(n_cells * progress)
        visible = set(shuffled[:visible_count])
    elif phase == FigmentPhase.HOLD:
        visible = set(cell_positions)
        # Strobe: dim some cells periodically
        if int(progress * 20) % 3 == 0:
            dim_count = int(n_cells * 0.3)
            visible -= set(shuffled[:dim_count])
    elif phase == FigmentPhase.DISSOLVE:
        remaining_count = int(n_cells * (1.0 - progress))
        visible = set(shuffled[:remaining_count])
    else:
        visible = set(cell_positions)

    # Build overlay commands
    overlay: list[str] = []
    n_cols = len(cols)
    max_x = max((len(r.rstrip()) for r in rows if r.strip()), default=1)

    for r_idx, row in enumerate(rows):
        scr_row = center_row + r_idx + 1  # 1-indexed
        if scr_row < 1 or scr_row > h:
            continue

        line_buf: list[str] = []
        has_content = False

        for c_idx, ch in enumerate(row):
            scr_col = center_col + c_idx + 1
            if scr_col < 1 or scr_col > w:
                continue

            if ch != " " and (r_idx, c_idx) in visible:
                # Apply gradient color
                shifted = (c_idx / max(max_x - 1, 1)) % 1.0
                idx = min(round(shifted * (n_cols - 1)), n_cols - 1)
                line_buf.append(f"{cols[idx]}{ch}{RST}")
                has_content = True
            else:
                line_buf.append(" ")

        if has_content:
            line_str = "".join(line_buf).rstrip()
            if line_str.strip():
                overlay.append(f"\033[{scr_row};{center_col + 1}H{line_str}{RST}")

    return overlay


class FigmentEffect(EffectPlugin):
    """Figment overlay effect for pipeline architecture.

    Provides periodic SVG overlays with reveal/hold/dissolve animation.
    """

    name = "figment"
    config = EffectConfig(
        enabled=True,
        intensity=1.0,
        params={
            "interval_secs": 60,
            "display_secs": 4.5,
            "figment_dir": "figments",
        },
    )
    supports_partial_updates = False
    is_overlay = True  # Figment is an overlay effect that composes on top of the buffer

    def __init__(
        self,
        figment_dir: str | None = None,
        triggers: list[FigmentTrigger] | None = None,
    ):
        self.config = EffectConfig(
            enabled=True,
            intensity=1.0,
            params={
                "interval_secs": 60,
                "display_secs": 4.5,
                "figment_dir": figment_dir or "figments",
            },
        )
        self._triggers = triggers or []
        self._phase: FigmentPhase | None = None
        self._progress: float = 0.0
        self._rows: list[str] = []
        self._gradient: list[int] = []
        self._center_row: int = 0
        self._center_col: int = 0
        self._timer: float = 0.0
        self._last_svg: str | None = None
        self._svg_files: list[str] = []
        self._scan_svgs()

    def _scan_svgs(self) -> None:
        """Scan figment directory for SVG files."""
        figment_dir = Path(self.config.params["figment_dir"])
        if figment_dir.is_dir():
            self._svg_files = sorted(str(p) for p in figment_dir.glob("*.svg"))

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        """Add figment overlay to buffer."""
        if not self.config.enabled:
            return buf

        # Get figment state using frame number from context
        figment_state = self.get_figment_state(
            ctx.frame_number, ctx.terminal_width, ctx.terminal_height
        )

        if figment_state:
            # Render overlay and append to buffer
            overlay = render_figment_overlay(
                figment_state, ctx.terminal_width, ctx.terminal_height
            )
            buf = buf + overlay

        return buf

    def configure(self, config: EffectConfig) -> None:
        """Configure the effect."""
        # Preserve figment_dir if the new config doesn't supply one
        figment_dir = config.params.get(
            "figment_dir", self.config.params.get("figment_dir", "figments")
        )
        self.config = config
        if "figment_dir" not in self.config.params:
            self.config.params["figment_dir"] = figment_dir
        self._scan_svgs()

    def trigger(self, w: int, h: int) -> None:
        """Manually trigger a figment display."""
        if not self._svg_files:
            return

        # Pick a random SVG, avoid repeating
        candidates = [s for s in self._svg_files if s != self._last_svg]
        if not candidates:
            candidates = self._svg_files
        svg_path = random.choice(candidates)
        self._last_svg = svg_path

        # Rasterize
        try:
            self._rows = rasterize_svg(svg_path, w, h)
        except Exception:
            return

        # Pick random theme gradient
        theme_key = random.choice(list(THEME_REGISTRY.keys()))
        self._gradient = THEME_REGISTRY[theme_key].main_gradient

        # Center in viewport
        figment_h = len(self._rows)
        figment_w = max((len(r) for r in self._rows), default=0)
        self._center_row = max(0, (h - figment_h) // 2)
        self._center_col = max(0, (w - figment_w) // 2)

        # Start reveal phase
        self._phase = FigmentPhase.REVEAL
        self._progress = 0.0

    def get_figment_state(
        self, frame_number: int, w: int, h: int
    ) -> FigmentState | None:
        """Tick the state machine and return current state, or None if idle."""
        if not self.config.enabled:
            return None

        # Poll triggers
        for trig in self._triggers:
            cmd = trig.poll()
            if cmd is not None:
                self._handle_command(cmd, w, h)

        # Tick timer when idle
        if self._phase is None:
            self._timer += config.FRAME_DT
            interval = self.config.params.get("interval_secs", 60)
            if self._timer >= interval:
                self._timer = 0.0
                self.trigger(w, h)

        # Tick animation: snapshot current phase/progress, then advance
        if self._phase is not None:
            # Capture the state at the start of this frame
            current_phase = self._phase
            current_progress = self._progress

            # Advance for next frame
            display_secs = self.config.params.get("display_secs", 4.5)
            phase_duration = display_secs / 3.0
            self._progress += config.FRAME_DT / phase_duration

            if self._progress >= 1.0:
                self._progress = 0.0
                if self._phase == FigmentPhase.REVEAL:
                    self._phase = FigmentPhase.HOLD
                elif self._phase == FigmentPhase.HOLD:
                    self._phase = FigmentPhase.DISSOLVE
                elif self._phase == FigmentPhase.DISSOLVE:
                    self._phase = None

            return FigmentState(
                phase=current_phase,
                progress=current_progress,
                rows=self._rows,
                gradient=self._gradient,
                center_row=self._center_row,
                center_col=self._center_col,
            )

        return None

    def _handle_command(self, cmd: FigmentCommand, w: int, h: int) -> None:
        """Handle a figment command."""
        if cmd.action == FigmentAction.TRIGGER:
            self.trigger(w, h)
        elif cmd.action == FigmentAction.SET_INTENSITY and isinstance(
            cmd.value, (int, float)
        ):
            self.config.intensity = float(cmd.value)
        elif cmd.action == FigmentAction.SET_INTERVAL and isinstance(
            cmd.value, (int, float)
        ):
            self.config.params["interval_secs"] = float(cmd.value)
        elif cmd.action == FigmentAction.SET_COLOR and isinstance(cmd.value, str):
            if cmd.value in THEME_REGISTRY:
                self._gradient = THEME_REGISTRY[cmd.value].main_gradient
        elif cmd.action == FigmentAction.STOP:
            self._phase = None
            self._progress = 0.0
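The reveal/hold/dissolve cycle splits `display_secs` evenly into three phases, and each frame advances progress by `FRAME_DT / phase_duration`. A quick sketch of the resulting per-phase frame count (standalone; the 1/30 s frame period here is an assumption, the engine defines its own `config.FRAME_DT`):

```python
FRAME_DT = 1 / 30  # assumed frame period; the engine's config.FRAME_DT may differ

def frames_per_phase(display_secs: float) -> int:
    # display_secs is split evenly across reveal, hold and dissolve;
    # a phase ends once accumulated progress reaches 1.0.
    phase_duration = display_secs / 3.0
    return round(phase_duration / FRAME_DT)

# With the default display_secs=4.5, each phase lasts 1.5 s: 45 frames at 30 FPS.
```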
@@ -9,7 +9,7 @@ from engine.terminal import C_DIM, G_DIM, G_LO, RST, W_GHOST

 class FirehoseEffect(EffectPlugin):
     name = "firehose"
-    config = EffectConfig(enabled=True, intensity=1.0)
+    config = EffectConfig(enabled=True, intensity=1.0, entropy=0.9)

     def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
         firehose_h = config.FIREHOSE_H if config.FIREHOSE else 0
engine/effects/plugins/glitch.py (new file, 52 lines)
@@ -0,0 +1,52 @@
import random
import re

from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
from engine.terminal import DIM, G_LO, RST


class GlitchEffect(EffectPlugin):
    name = "glitch"
    config = EffectConfig(enabled=True, intensity=1.0, entropy=0.8)

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not buf:
            return buf
        result = list(buf)
        intensity = self.config.intensity

        glitch_prob = 0.32 + min(0.9, ctx.mic_excess * 0.16)
        glitch_prob = glitch_prob * intensity
        n_hits = 4 + int(ctx.mic_excess / 2)
        n_hits = int(n_hits * intensity)

        if random.random() < glitch_prob:
            # Store original visible lengths before any modifications;
            # strip ANSI codes to get the visible length
            ansi_pattern = re.compile(r"\x1b\[[0-9;]*[a-zA-Z]")
            original_lengths = [len(ansi_pattern.sub("", line)) for line in result]
            for _ in range(min(n_hits, len(result))):
                gi = random.randint(0, len(result) - 1)
                target_len = original_lengths[gi]  # Use stored original length
                result[gi] = self._glitch_bar(target_len)
        return result

    def _glitch_bar(self, target_len: int) -> str:
        c = random.choice(["░", "▒", "─", "\xc2"])
        n = random.randint(3, max(3, target_len // 2))
        o = random.randint(0, max(0, target_len - n))

        glitch_chars = c * n
        trailing_spaces = max(0, target_len - o - n)

        glitch_part = f"{G_LO}{DIM}" + glitch_chars + RST
        return " " * o + glitch_part + " " * trailing_spaces

    def configure(self, config: EffectConfig) -> None:
        self.config = config
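The glitch pass measures a line's visible length by stripping ANSI escape sequences first, so colored lines glitch to the same width they render at. The same regex in isolation (a sketch, not the plugin itself):

```python
import re

# Matches CSI sequences such as \x1b[38;5;46m or \x1b[0m
ANSI = re.compile(r"\x1b\[[0-9;]*[a-zA-Z]")

def visible_len(line: str) -> int:
    # Length of the text the terminal actually shows, ignoring escape codes
    return len(ANSI.sub("", line))

colored = "\x1b[38;5;46mhello\x1b[0m"
# visible_len(colored) counts only the five letters of "hello"
```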
engine/effects/plugins/hud.py (new file, 102 lines)
@@ -0,0 +1:102 @@
|
||||
from engine.effects.types import (
|
||||
EffectConfig,
|
||||
EffectContext,
|
||||
EffectPlugin,
|
||||
PartialUpdate,
|
||||
)
|
||||
|
||||
|
||||
class HudEffect(EffectPlugin):
|
||||
name = "hud"
|
||||
config = EffectConfig(enabled=True, intensity=1.0)
|
||||
supports_partial_updates = True # Enable partial update optimization
|
||||
|
||||
# Cache last HUD content to detect changes
|
||||
_last_hud_content: tuple | None = None
|
||||
|
||||
def process_partial(
|
||||
self, buf: list[str], ctx: EffectContext, partial: PartialUpdate
|
||||
) -> list[str]:
|
||||
# If full buffer requested, process normally
|
||||
if partial.full_buffer:
|
||||
return self.process(buf, ctx)
|
||||
|
||||
# If HUD rows (0, 1, 2) aren't dirty, skip processing
|
||||
if partial.dirty:
|
||||
hud_rows = {0, 1, 2}
|
||||
dirty_hud_rows = partial.dirty & hud_rows
|
||||
if not dirty_hud_rows:
|
||||
return buf # Nothing for HUD to do
|
||||
|
||||
# Proceed with full processing
|
||||
return self.process(buf, ctx)
|
||||
|
||||
def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
|
||||
result = list(buf)
|
||||
|
||||
# Read metrics from pipeline context (first-class citizen)
|
||||
# Falls back to global monitor for backwards compatibility
|
||||
metrics = ctx.get_state("metrics")
|
||||
if not metrics:
|
||||
# Fallback to global monitor for backwards compatibility
|
||||
from engine.effects.performance import get_monitor
|
||||
|
||||
monitor = get_monitor()
|
||||
if monitor:
|
||||
stats = monitor.get_stats()
|
||||
if stats and "pipeline" in stats:
|
||||
metrics = stats
|
||||
|
||||
fps = 0.0
|
||||
frame_time = 0.0
|
||||
if metrics:
|
||||
if "error" in metrics:
|
||||
pass # No metrics available yet
|
||||
elif "pipeline" in metrics:
|
||||
frame_time = metrics["pipeline"].get("avg_ms", 0.0)
|
||||
frame_count = metrics.get("frame_count", 0)
|
||||
if frame_count > 0 and frame_time > 0:
|
||||
fps = 1000.0 / frame_time
|
||||
elif "avg_ms" in metrics:
|
||||
# Direct metrics format
|
||||
frame_time = metrics.get("avg_ms", 0.0)
|
||||
frame_count = metrics.get("frame_count", 0)
|
||||
if frame_count > 0 and frame_time > 0:
|
||||
fps = 1000.0 / frame_time
|
||||
|
||||
effect_name = self.config.params.get("display_effect", "none")
|
||||
effect_intensity = self.config.params.get("display_intensity", 0.0)
|
||||
|
||||
hud_lines = []
|
||||
hud_lines.append(
|
||||
f"\033[1;1H\033[38;5;46mMAINLINE DEMO\033[0m \033[38;5;245m|\033[0m \033[38;5;39mFPS: {fps:.1f}\033[0m \033[38;5;245m|\033[0m \033[38;5;208m{frame_time:.1f}ms\033[0m"
|
||||
)
|
||||
|
||||
bar_width = 20
|
||||
filled = int(bar_width * effect_intensity)
|
||||
bar = (
|
||||
"\033[38;5;82m"
|
||||
+ "█" * filled
|
||||
+ "\033[38;5;240m"
|
||||
+ "░" * (bar_width - filled)
|
||||
+ "\033[0m"
|
||||
)
|
||||
hud_lines.append(
|
||||
f"\033[2;1H\033[38;5;45mEFFECT:\033[0m \033[1;38;5;227m{effect_name:12s}\033[0m \033[38;5;245m|\033[0m {bar} \033[38;5;245m|\033[0m \033[38;5;219m{effect_intensity * 100:.0f}%\033[0m"
|
||||
)
|
||||
|
||||
# Get pipeline order from context
|
||||
pipeline_order = ctx.get_state("pipeline_order")
|
||||
pipeline_str = ",".join(pipeline_order) if pipeline_order else "(none)"
|
||||
hud_lines.append(f"\033[3;1H\033[38;5;44mPIPELINE:\033[0m {pipeline_str}")
|
||||
|
||||
for i, line in enumerate(hud_lines):
|
||||
if i < len(result):
|
||||
result[i] = line
|
||||
else:
|
||||
result.append(line)
|
||||
|
||||
return result
|
||||
|
||||
def configure(self, config: EffectConfig) -> None:
|
||||
self.config = config
|
||||
engine/effects/plugins/motionblur.py (new file, 119 lines)
@@ -0,0 +1,119 @@
|
||||
"""Motion blur effect using frame history."""
|
||||
|
||||
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin
|
||||
|
||||
|
||||
class MotionBlurEffect(EffectPlugin):
|
||||
"""Apply motion blur by blending current frame with previous frames.
|
||||
|
||||
This effect requires a FrameBufferStage to be present in the pipeline.
|
||||
The framebuffer provides frame history which is blended with the current
|
||||
frame based on intensity.
|
||||
|
||||
Attributes:
|
||||
name: "motionblur"
|
||||
config: EffectConfig with intensity parameter (0.0-1.0)
|
||||
param_bindings: Optional sensor bindings for intensity modulation
|
||||
|
||||
Example:
|
||||
>>> effect = MotionBlurEffect()
|
||||
>>> effect.configure(EffectConfig(intensity=0.5))
|
||||
>>> result = effect.process(buffer, ctx)
|
||||
"""
|
||||
|
||||
name = "motionblur"
|
||||
config: EffectConfig = EffectConfig(enabled=True, intensity=0.5)
|
||||
param_bindings: dict[str, dict[str, str | float]] = {}
|
||||
supports_partial_updates = False
|
||||
|
||||
def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
|
||||
"""Apply motion blur by blending with previous frames.
|
||||
|
||||
Args:
|
||||
buf: Current text buffer (list of strings)
|
||||
ctx: Effect context with access to framebuffer history
|
||||
|
||||
Returns:
|
||||
Blended buffer with motion blur effect applied
|
||||
"""
|
||||
if not buf:
|
||||
return buf
|
||||
|
||||
# Get framebuffer history from context
|
||||
# We'll look for the first available framebuffer history
|
||||
history = None
|
||||
|
||||
for key in ctx.state:
|
||||
if key.startswith("framebuffer.") and key.endswith(".history"):
|
||||
history = ctx.state[key]
|
||||
break
|
||||
|
||||
if not history:
|
||||
# No framebuffer available, return unchanged
|
||||
return buf
|
||||
|
||||
# Get intensity from config
|
||||
intensity = self.config.params.get("intensity", self.config.intensity)
|
||||
intensity = max(0.0, min(1.0, intensity))
|
||||
|
||||
if intensity <= 0.0:
|
||||
return buf
|
||||
|
||||
# Get decay factor (how quickly older frames fade)
|
||||
decay = self.config.params.get("decay", 0.7)
|
||||
|
||||
# Build output buffer
|
||||
result = []
|
||||
viewport_height = ctx.terminal_height - ctx.ticker_height
|
||||
|
||||
# Determine how many frames to blend (up to history depth)
|
||||
max_frames = min(len(history), 5) # Cap at 5 frames for performance
|
||||
|
||||
for row in range(len(buf)):
|
||||
if row >= viewport_height:
|
||||
# Beyond viewport, just copy
|
||||
result.append(buf[row])
|
||||
continue
|
||||
|
||||
# Start with current frame
|
||||
blended = buf[row]
|
||||
|
||||
# Blend with historical frames
|
||||
weight_sum = 1.0
|
||||
if max_frames > 0 and intensity > 0:
|
||||
for i in range(max_frames):
|
||||
frame_weight = intensity * (decay**i)
|
||||
if frame_weight < 0.01: # Skip negligible weights
|
||||
break
|
||||
|
||||
hist_row = history[i][row] if row < len(history[i]) else ""
|
||||
# Simple string blending: we'll concatenate with space
|
||||
# For a proper effect, we'd need to blend ANSI colors
|
||||
# This is a simplified version that just adds the frames
|
||||
blended = self._blend_strings(blended, hist_row, frame_weight)
|
||||
weight_sum += frame_weight
|
||||
|
||||
result.append(blended)
|
||||
|
||||
return result
|
||||
|
||||
def _blend_strings(self, current: str, historical: str, weight: float) -> str:
|
||||
"""Blend two strings with given weight.
|
||||
|
||||
This is a simplified blending that works with ANSI codes.
|
||||
For proper blending we'd need to parse colors, but for now
|
||||
we use a heuristic: if strings are identical, return one.
|
||||
If they differ, we alternate or concatenate based on weight.
|
||||
"""
|
||||
if current == historical:
|
||||
return current
|
||||
|
||||
# If weight is high, show current; if low, show historical
|
||||
if weight > 0.5:
|
||||
return current
|
||||
else:
|
||||
return historical
|
||||
|
||||
def configure(self, config: EffectConfig) -> None:
|
||||
"""Configure the effect."""
|
||||
self.config = config
|
||||
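History frame i contributes weight intensity * decay**i, and the blend loop stops as soon as a weight drops below the 0.01 cutoff. A small sketch of how many history frames actually participate under the defaults:

```python
def active_weights(intensity: float, decay: float, max_frames: int = 5) -> list[float]:
    # Mirrors the loop in MotionBlurEffect.process: weights fall off
    # geometrically and the loop breaks below the 0.01 cutoff.
    weights = []
    for i in range(max_frames):
        w = intensity * decay**i
        if w < 0.01:
            break
        weights.append(w)
    return weights

# With the defaults (intensity 0.5, decay 0.7) all 5 capped frames stay
# above the cutoff: 0.5, 0.35, 0.245, ...
```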
@@ -7,7 +7,7 @@ from engine.terminal import C_DIM, G_DIM, G_LO, RST, W_GHOST

 class NoiseEffect(EffectPlugin):
     name = "noise"
-    config = EffectConfig(enabled=True, intensity=0.15)
+    config = EffectConfig(enabled=True, intensity=0.15, entropy=0.4)

     def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
         if not ctx.ticker_height:
@@ -19,7 +19,8 @@ class NoiseEffect(EffectPlugin):
         for r in range(len(result)):
             cy = ctx.scroll_cam + r
             if random.random() < probability:
-                result[r] = self._generate_noise(ctx.terminal_width, cy)
+                original_line = result[r]
+                result[r] = self._generate_noise(len(original_line), cy)
         return result

     def _generate_noise(self, w: int, cy: int) -> str:
engine/effects/plugins/repl.py (new file, 605 lines)
@@ -0,0 +1,605 @@
|
||||
"""REPL Effect Plugin
|
||||
|
||||
A HUD-style command-line interface for interactive pipeline control.
|
||||
|
||||
This effect provides a Read-Eval-Print Loop (REPL) that allows users to:
|
||||
- View pipeline status and metrics
|
||||
- Toggle effects on/off
|
||||
- Adjust effect parameters in real-time
|
||||
- Inspect pipeline configuration
|
||||
- Execute commands for pipeline manipulation
|
||||
|
||||
Usage:
|
||||
Add 'repl' to the effects list in your configuration.
|
||||
|
||||
Commands:
|
||||
help - Show available commands
|
||||
status - Show pipeline status
|
||||
effects - List all effects
|
||||
effect <name> <on|off> - Toggle an effect
|
||||
param <effect> <param> <value> - Set effect parameter
|
||||
pipeline - Show current pipeline order
|
||||
clear - Clear output buffer
|
||||
quit - Exit REPL
|
||||
|
||||
Keyboard:
|
||||
Enter - Execute command
|
||||
Up/Down - Navigate command history
|
||||
Tab - Auto-complete (if implemented)
|
||||
Ctrl+C - Clear current input
|
||||
"""
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
|
||||
from engine.effects.types import (
|
||||
EffectConfig,
|
||||
EffectContext,
|
||||
EffectPlugin,
|
||||
PartialUpdate,
|
||||
)
|
||||
|
||||
|
||||
@dataclass
|
||||
class REPLState:
|
||||
"""State of the REPL interface."""
|
||||
|
||||
command_history: list[str] = field(default_factory=list)
|
||||
current_command: str = ""
|
||||
history_index: int = -1
|
||||
output_buffer: list[str] = field(default_factory=list)
|
||||
scroll_offset: int = 0 # Manual scroll position (0 = bottom of buffer)
|
||||
max_history: int = 50
|
||||
max_output_lines: int = 50 # 50 lines excluding empty lines
|
||||
|
||||
|
||||
class ReplEffect(EffectPlugin):
|
||||
"""REPL effect with HUD-style overlay for interactive pipeline control."""
|
||||
|
||||
name = "repl"
|
||||
config = EffectConfig(
|
||||
enabled=True,
|
||||
intensity=1.0,
|
||||
params={
|
||||
"display_height": 8, # Height of REPL area in lines
|
||||
"show_hud": True, # Show HUD header lines
|
||||
},
|
||||
)
|
||||
supports_partial_updates = True
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.state = REPLState()
|
||||
self._last_metrics: dict | None = None
|
||||
|
||||
def process_partial(
|
||||
self, buf: list[str], ctx: EffectContext, partial: PartialUpdate
|
||||
) -> list[str]:
|
||||
"""Handle partial updates efficiently."""
|
||||
if partial.full_buffer:
|
||||
return self.process(buf, ctx)
|
||||
# Always process REPL since it needs to stay visible
|
||||
return self.process(buf, ctx)
|
||||
|
||||
def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
|
||||
"""Render buffer with REPL overlay."""
|
||||
# Get display dimensions from context
|
||||
height = ctx.terminal_height if hasattr(ctx, "terminal_height") else len(buf)
|
||||
width = ctx.terminal_width if hasattr(ctx, "terminal_width") else 80
|
||||
|
||||
# Calculate areas
|
||||
repl_height = self.config.params.get("display_height", 8)
|
||||
show_hud = self.config.params.get("show_hud", True)
|
||||
|
||||
# Reserve space for REPL at bottom
|
||||
# HUD uses top 3 lines if enabled
|
||||
content_height = max(1, height - repl_height)
|
||||
|
||||
# Build output
|
||||
output = []
|
||||
|
||||
# Add content (truncated or padded)
|
||||
for i in range(content_height):
|
||||
if i < len(buf):
|
||||
output.append(buf[i][:width])
|
||||
else:
|
||||
output.append(" " * width)
|
||||
|
||||
# Add HUD lines if enabled
|
||||
if show_hud:
|
||||
hud_output = self._render_hud(width, ctx)
|
||||
# Overlay HUD on first lines of content
|
||||
for i, line in enumerate(hud_output):
|
||||
if i < len(output):
|
||||
output[i] = line[:width]
|
||||
|
||||
# Add separator
|
||||
output.append("─" * width)
|
||||
|
||||
# Add REPL area
|
||||
repl_lines = self._render_repl(width, repl_height - 1)
|
||||
output.extend(repl_lines)
|
||||
|
||||
# Ensure correct height
|
||||
while len(output) < height:
|
||||
output.append(" " * width)
|
||||
output = output[:height]
|
||||
|
||||
return output
|
||||
|
||||
def _render_hud(self, width: int, ctx: EffectContext) -> list[str]:
|
||||
"""Render HUD-style header with metrics."""
|
||||
lines = []
|
||||
|
||||
# Get metrics
|
||||
metrics = self._get_metrics(ctx)
|
||||
fps = metrics.get("fps", 0.0)
|
||||
frame_time = metrics.get("frame_time", 0.0)
|
||||
|
||||
# Line 1: Title + FPS + Frame time
|
||||
fps_str = f"FPS: {fps:.1f}" if fps > 0 else "FPS: --"
|
||||
time_str = f"{frame_time:.1f}ms" if frame_time > 0 else "--ms"
|
||||
|
||||
# Calculate scroll percentage (like vim)
|
||||
scroll_pct = 0
|
||||
if len(self.state.output_buffer) > 1:
|
||||
max_scroll = len(self.state.output_buffer) - 1
|
||||
scroll_pct = (
|
||||
int((self.state.scroll_offset / max_scroll) * 100)
|
||||
if max_scroll > 0
|
||||
else 0
|
||||
)
|
||||
|
||||
scroll_str = f"{scroll_pct}%"
|
||||
line1 = (
|
||||
f"\033[38;5;46mMAINLINE REPL\033[0m "
|
||||
f"\033[38;5;245m|\033[0m \033[38;5;39m{fps_str}\033[0m "
|
||||
f"\033[38;5;245m|\033[0m \033[38;5;208m{time_str}\033[0m "
|
||||
f"\033[38;5;245m|\033[0m \033[38;5;220m{scroll_str}\033[0m"
|
||||
)
|
||||
lines.append(line1[:width])
|
||||
|
||||
# Line 2: Command count + History index
|
||||
cmd_count = len(self.state.command_history)
|
||||
hist_idx = (
|
||||
f"[{self.state.history_index + 1}/{cmd_count}]" if cmd_count > 0 else ""
|
||||
)
|
||||
line2 = (
|
||||
f"\033[38;5;45mCOMMANDS:\033[0m "
|
||||
f"\033[1;38;5;227m{cmd_count}\033[0m "
|
||||
f"\033[38;5;245m|\033[0m \033[38;5;219m{hist_idx}\033[0m"
|
||||
)
|
||||
lines.append(line2[:width])
|
||||
|
||||
# Line 3: Output buffer count with scroll indicator
|
||||
out_count = len(self.state.output_buffer)
|
||||
scroll_pos = f"({self.state.scroll_offset}/{out_count})"
|
||||
line3 = (
|
||||
f"\033[38;5;44mOUTPUT:\033[0m "
|
||||
f"\033[1;38;5;227m{out_count}\033[0m lines "
|
||||
f"\033[38;5;245m{scroll_pos}\033[0m"
|
||||
)
|
||||
lines.append(line3[:width])
|
||||
|
||||
return lines
|
||||
|
||||
def _render_repl(self, width: int, height: int) -> list[str]:
|
||||
"""Render REPL interface."""
|
||||
lines = []
|
||||
|
||||
# Calculate how many output lines to show
|
||||
# Reserve 1 line for input prompt
|
||||
output_height = height - 1
|
||||
|
||||
# Manual scroll: scroll_offset=0 means show bottom of buffer
|
||||
# scroll_offset increases as you scroll up through history
|
||||
buffer_len = len(self.state.output_buffer)
|
||||
output_start = max(0, buffer_len - output_height - self.state.scroll_offset)
|
||||
|
||||
# Render output buffer
|
||||
for i in range(output_height):
|
||||
idx = output_start + i
|
||||
if idx < buffer_len:
|
||||
line = self.state.output_buffer[idx][:width]
|
||||
lines.append(line)
|
||||
else:
|
||||
lines.append(" " * width)
|
||||
|
||||
# Render input prompt
|
||||
prompt = "> "
|
||||
input_line = f"{prompt}{self.state.current_command}"
|
||||
# Add cursor indicator
|
||||
cursor = "█" if len(self.state.current_command) % 2 == 0 else " "
|
||||
input_line += cursor
|
||||
lines.append(input_line[:width])
|
||||
|
||||
return lines
|
||||
|
||||
def scroll_output(self, delta: int) -> None:
|
||||
"""Scroll the output buffer by delta lines.
|
||||
|
||||
Args:
|
||||
delta: Positive to scroll up (back in time), negative to scroll down
|
||||
"""
|
||||
if not self.state.output_buffer:
|
||||
return
|
||||
|
||||
# Calculate max scroll (can't scroll past top of buffer)
|
||||
max_scroll = max(0, len(self.state.output_buffer) - 1)
|
||||
|
||||
# Update scroll offset
|
||||
self.state.scroll_offset = max(
|
||||
0, min(max_scroll, self.state.scroll_offset + delta)
|
||||
)
|
||||
|
||||
# Reset scroll when new output arrives (handled in process_command)
|
||||
|
||||
def _get_metrics(self, ctx: EffectContext) -> dict:
|
||||
"""Get pipeline metrics from context."""
|
||||
metrics = ctx.get_state("metrics")
|
||||
if metrics:
|
||||
self._last_metrics = metrics
|
||||
|
||||
if self._last_metrics:
|
||||
# Extract FPS and frame time
|
||||
fps = 0.0
|
||||
frame_time = 0.0
|
||||
|
||||
if "pipeline" in self._last_metrics:
|
||||
avg_ms = self._last_metrics["pipeline"].get("avg_ms", 0.0)
|
||||
frame_count = self._last_metrics.get("frame_count", 0)
|
||||
if frame_count > 0 and avg_ms > 0:
|
||||
fps = 1000.0 / avg_ms
|
||||
frame_time = avg_ms
|
||||
|
||||
return {"fps": fps, "frame_time": frame_time}
|
||||
|
||||
return {"fps": 0.0, "frame_time": 0.0}
|
||||
|
||||
def process_command(self, command: str, ctx: EffectContext | None = None) -> None:
|
||||
"""Process a REPL command."""
|
||||
cmd = command.strip()
|
||||
if not cmd:
|
||||
return
|
||||
|
||||
# Add to history
|
||||
self.state.command_history.append(cmd)
|
||||
if len(self.state.command_history) > self.state.max_history:
|
||||
self.state.command_history.pop(0)
|
||||
|
||||
self.state.history_index = len(self.state.command_history)
|
||||
self.state.current_command = ""
|
||||
|
||||
# Add to output buffer
|
||||
self.state.output_buffer.append(f"> {cmd}")
|
||||
|
||||
# Reset scroll offset when new output arrives (scroll to bottom)
|
||||
self.state.scroll_offset = 0
|
||||
|
||||
# Parse command
|
||||
parts = cmd.split()
|
||||
cmd_name = parts[0].lower()
|
||||
cmd_args = parts[1:] if len(parts) > 1 else []
|
||||
|
||||
# Execute command
|
||||
try:
|
||||
if cmd_name == "help":
|
||||
self._cmd_help()
|
||||
elif cmd_name == "status":
|
||||
self._cmd_status(ctx)
|
||||
elif cmd_name == "effects":
|
||||
self._cmd_effects(ctx)
|
||||
elif cmd_name == "effect":
|
||||
self._cmd_effect(cmd_args, ctx)
|
||||
elif cmd_name == "param":
|
||||
self._cmd_param(cmd_args, ctx)
|
||||
elif cmd_name == "pipeline":
|
||||
self._cmd_pipeline(ctx)
|
||||
elif cmd_name == "available":
|
||||
self._cmd_available(ctx)
|
||||
elif cmd_name == "add_stage":
|
||||
self._cmd_add_stage(cmd_args)
|
||||
elif cmd_name == "remove_stage":
|
||||
self._cmd_remove_stage(cmd_args)
|
||||
elif cmd_name == "swap_stages":
|
||||
self._cmd_swap_stages(cmd_args)
|
||||
elif cmd_name == "move_stage":
|
||||
self._cmd_move_stage(cmd_args)
|
||||
elif cmd_name == "clear":
|
||||
self.state.output_buffer.clear()
|
||||
elif cmd_name == "quit" or cmd_name == "exit":
|
||||
self.state.output_buffer.append("Use Ctrl+C to exit")
|
||||
else:
|
||||
self.state.output_buffer.append(f"Unknown command: {cmd_name}")
|
||||
self.state.output_buffer.append("Type 'help' for available commands")
|
||||
|
||||
except Exception as e:
|
||||
self.state.output_buffer.append(f"Error: {e}")
|
||||
|
||||
def _cmd_help(self):
|
||||
"""Show help message."""
|
||||
self.state.output_buffer.append("Available commands:")
|
||||
self.state.output_buffer.append(" help - Show this help")
|
||||
self.state.output_buffer.append(" status - Show pipeline status")
|
||||
self.state.output_buffer.append(" effects - List effects in current pipeline")
|
||||
self.state.output_buffer.append(" available - List all available effect types")
|
||||
self.state.output_buffer.append(" effect <name> <on|off> - Toggle effect")
|
||||
self.state.output_buffer.append(
|
||||
" param <effect> <param> <value> - Set parameter"
|
||||
)
|
||||
self.state.output_buffer.append(" pipeline - Show current pipeline order")
|
||||
self.state.output_buffer.append(" add_stage <name> <type> - Add new stage")
|
||||
self.state.output_buffer.append(" remove_stage <name> - Remove stage")
|
||||
self.state.output_buffer.append(" swap_stages <name1> <name2> - Swap stages")
|
||||
self.state.output_buffer.append(
|
||||
" move_stage <name> [after <stage>] [before <stage>] - Move stage"
|
||||
)
|
||||
self.state.output_buffer.append(" clear - Clear output buffer")
|
||||
self.state.output_buffer.append(" quit - Show exit message")
|
||||
|
||||
def _cmd_status(self, ctx: EffectContext | None):
|
||||
"""Show pipeline status."""
|
||||
if ctx:
|
||||
metrics = self._get_metrics(ctx)
|
||||
self.state.output_buffer.append(f"FPS: {metrics['fps']:.1f}")
|
||||
self.state.output_buffer.append(
|
||||
f"Frame time: {metrics['frame_time']:.1f}ms"
|
||||
)
|
||||
|
||||
self.state.output_buffer.append(
|
||||
f"Output lines: {len(self.state.output_buffer)}"
|
||||
)
|
||||
self.state.output_buffer.append(
|
||||
f"History: {len(self.state.command_history)} commands"
|
||||
)
|
||||
|
||||
def _cmd_effects(self, ctx: EffectContext | None):
|
||||
"""List all effects."""
|
||||
if ctx:
|
||||
# Try to get effect list from context
|
||||
effects = ctx.get_state("pipeline_order")
|
||||
if effects:
|
||||
self.state.output_buffer.append("Pipeline effects:")
|
||||
for i, name in enumerate(effects):
|
||||
self.state.output_buffer.append(f" {i + 1}. {name}")
|
||||
else:
|
||||
self.state.output_buffer.append("No pipeline information available")
|
||||
else:
|
||||
self.state.output_buffer.append("No context available")
|
||||
|
||||
def _cmd_available(self, ctx: EffectContext | None):
|
||||
"""List all available effect types and stage categories."""
|
||||
try:
|
||||
from engine.effects import get_registry
|
||||
from engine.effects.plugins import discover_plugins
|
||||
from engine.pipeline.registry import StageRegistry, discover_stages
|
||||
|
||||
# Discover plugins and stages if not already done
|
||||
discover_plugins()
|
||||
discover_stages()
|
||||
|
||||
# List effect types from registry
|
||||
registry = get_registry()
|
||||
all_effects = registry.list_all()
|
||||
|
||||
if all_effects:
|
||||
self.state.output_buffer.append("Available effect types:")
|
||||
for name in sorted(all_effects.keys()):
|
||||
self.state.output_buffer.append(f" - {name}")
|
||||
else:
|
||||
self.state.output_buffer.append("No effects registered")
|
||||
|
||||
# List stage categories and their types
|
||||
categories = StageRegistry.list_categories()
|
||||
if categories:
|
||||
self.state.output_buffer.append("")
|
||||
self.state.output_buffer.append("Stage categories:")
|
||||
for category in sorted(categories):
|
||||
stages = StageRegistry.list(category)
|
||||
if stages:
|
||||
self.state.output_buffer.append(f" {category}:")
|
||||
for stage_name in sorted(stages):
|
||||
self.state.output_buffer.append(f" - {stage_name}")
|
||||
except Exception as e:
|
||||
self.state.output_buffer.append(f"Error listing available types: {e}")
|
||||
|
||||
def _cmd_effect(self, args: list[str], ctx: EffectContext | None):
|
||||
"""Toggle effect on/off."""
|
||||
if len(args) < 2:
|
||||
self.state.output_buffer.append("Usage: effect <name> <on|off>")
|
||||
return
|
||||
|
||||
effect_name = args[0]
|
||||
state = args[1].lower()
|
||||
|
||||
if state not in ("on", "off"):
|
||||
self.state.output_buffer.append("State must be 'on' or 'off'")
|
||||
return
|
||||
|
||||
# Emit event to toggle effect
|
||||
enabled = state == "on"
|
||||
self.state.output_buffer.append(f"Effect '{effect_name}' set to {state}")
|
||||
|
||||
# Store command for external handling
|
||||
self._pending_command = {
|
||||
"action": "enable_stage" if enabled else "disable_stage",
|
||||
"stage": effect_name,
|
||||
}
|
||||
|
||||
def _cmd_param(self, args: list[str], ctx: EffectContext | None):
|
||||
"""Set effect parameter."""
|
||||
if len(args) < 3:
|
||||
self.state.output_buffer.append("Usage: param <effect> <param> <value>")
|
||||
return
|
||||
|
||||
effect_name = args[0]
|
||||
param_name = args[1]
|
||||
try:
|
||||
param_value = float(args[2])
|
||||
except ValueError:
|
||||
self.state.output_buffer.append("Value must be a number")
|
||||
return
|
||||
|
||||
self.state.output_buffer.append(
|
||||
f"Setting {effect_name}.{param_name} = {param_value}"
|
||||
)
|
||||
|
||||
# Store command for external handling
|
||||
self._pending_command = {
|
||||
"action": "adjust_param",
|
||||
"stage": effect_name,
|
||||
"param": param_name,
|
||||
"delta": param_value, # Note: This sets absolute value, need adjustment
|
||||
}
|
||||
|
||||
def _cmd_pipeline(self, ctx: EffectContext | None):
|
||||
"""Show current pipeline order."""
|
||||
if ctx:
|
||||
pipeline_order = ctx.get_state("pipeline_order")
|
||||
if pipeline_order:
|
||||
self.state.output_buffer.append(
|
||||
"Pipeline: " + " → ".join(pipeline_order)
|
||||
)
|
||||
else:
|
||||
self.state.output_buffer.append("Pipeline information not available")
|
||||
else:
|
||||
self.state.output_buffer.append("No context available")
|
||||
|
||||
def _cmd_add_stage(self, args: list[str]):
|
||||
"""Add a new stage to the pipeline."""
|
||||
if len(args) < 2:
|
||||
self.state.output_buffer.append("Usage: add_stage <name> <type>")
|
||||
return
|
||||
|
||||
stage_name = args[0]
|
||||
stage_type = args[1]
|
||||
self.state.output_buffer.append(
|
||||
f"Adding stage '{stage_name}' of type '{stage_type}'"
|
||||
)
|
||||
|
||||
# Store command for external handling
|
||||
self._pending_command = {
|
||||
"action": "add_stage",
|
||||
"stage": stage_name,
|
||||
"stage_type": stage_type,
|
||||
}
|
||||
|
||||
def _cmd_remove_stage(self, args: list[str]):
|
||||
"""Remove a stage from the pipeline."""
|
||||
if len(args) < 1:
|
||||
self.state.output_buffer.append("Usage: remove_stage <name>")
|
||||
return
|
||||
|
||||
stage_name = args[0]
|
||||
self.state.output_buffer.append(f"Removing stage '{stage_name}'")
|
||||
|
||||
# Store command for external handling
|
||||
self._pending_command = {
|
||||
"action": "remove_stage",
|
||||
"stage": stage_name,
|
||||
}
|
||||
|
||||
def _cmd_swap_stages(self, args: list[str]):
|
||||
"""Swap two stages in the pipeline."""
|
||||
if len(args) < 2:
|
||||
self.state.output_buffer.append("Usage: swap_stages <name1> <name2>")
|
||||
return
|
||||
|
||||
stage1 = args[0]
|
||||
stage2 = args[1]
|
||||
self.state.output_buffer.append(f"Swapping stages '{stage1}' and '{stage2}'")
|
||||
|
||||
# Store command for external handling
|
||||
self._pending_command = {
|
||||
"action": "swap_stages",
|
||||
"stage1": stage1,
|
||||
"stage2": stage2,
|
||||
}
|
||||
|
||||
def _cmd_move_stage(self, args: list[str]):
|
||||
"""Move a stage in the pipeline."""
|
||||
if len(args) < 1:
|
||||
self.state.output_buffer.append(
|
||||
"Usage: move_stage <name> [after <stage>] [before <stage>]"
|
||||
)
|
||||
return
|
||||
|
||||
stage_name = args[0]
|
||||
after = None
|
||||
before = None
|
||||
|
||||
# Parse optional after/before arguments
|
||||
i = 1
|
||||
while i < len(args):
|
||||
if args[i] == "after" and i + 1 < len(args):
|
||||
after = args[i + 1]
|
||||
i += 2
|
||||
elif args[i] == "before" and i + 1 < len(args):
|
||||
before = args[i + 1]
|
||||
i += 2
|
||||
else:
|
||||
i += 1
|
||||
|
||||
if after:
|
||||
self.state.output_buffer.append(
|
||||
f"Moving stage '{stage_name}' after '{after}'"
|
||||
)
|
||||
elif before:
|
||||
self.state.output_buffer.append(
|
||||
f"Moving stage '{stage_name}' before '{before}'"
|
||||
)
|
||||
else:
|
||||
self.state.output_buffer.append(
|
||||
"Usage: move_stage <name> [after <stage>] [before <stage>]"
|
||||
)
|
||||
return
|
||||
|
||||
# Store command for external handling
|
||||
self._pending_command = {
|
||||
"action": "move_stage",
|
||||
"stage": stage_name,
|
||||
"after": after,
|
||||
"before": before,
|
||||
}
|
||||
|
||||
def get_pending_command(self) -> dict | None:
|
||||
"""Get and clear pending command for external handling."""
|
||||
cmd = getattr(self, "_pending_command", None)
|
||||
if cmd:
|
||||
self._pending_command = None
|
||||
return cmd
|
||||
|
||||
def navigate_history(self, direction: int) -> None:
|
||||
"""Navigate command history (up/down)."""
|
||||
if not self.state.command_history:
|
||||
return
|
||||
|
||||
if direction > 0: # Down
|
||||
self.state.history_index = min(
|
||||
len(self.state.command_history), self.state.history_index + 1
|
||||
)
|
||||
else: # Up
|
||||
self.state.history_index = max(0, self.state.history_index - 1)
|
||||
|
||||
if self.state.history_index < len(self.state.command_history):
|
||||
self.state.current_command = self.state.command_history[
|
||||
self.state.history_index
|
||||
]
|
||||
else:
|
||||
self.state.current_command = ""
|
||||
|
||||
def append_to_command(self, char: str) -> None:
|
||||
"""Append character to current command."""
|
||||
if len(char) == 1: # Single character
|
||||
self.state.current_command += char
|
||||
|
||||
def backspace(self) -> None:
|
||||
"""Remove last character from command."""
|
||||
self.state.current_command = self.state.current_command[:-1]
|
||||
|
||||
def clear_command(self) -> None:
|
||||
"""Clear current command."""
|
||||
self.state.current_command = ""
|
||||
|
||||
def configure(self, config: EffectConfig) -> None:
|
||||
"""Configure the effect."""
|
||||
self.config = config
|
||||
engine/effects/plugins/tint.py (new file, 99 lines)
@@ -0,0 +1,99 @@
from engine.effects.types import EffectConfig, EffectContext, EffectPlugin


class TintEffect(EffectPlugin):
    """Tint effect that applies an RGB color overlay to the buffer.

    Uses ANSI escape codes to tint text with the specified RGB values.
    Supports transparency (0-100%) for blending.

    Inlets:
        - r: Red component (0-255)
        - g: Green component (0-255)
        - b: Blue component (0-255)
        - a: Alpha/transparency (0.0-1.0, where 0.0 = fully transparent)
    """

    name = "tint"
    config = EffectConfig(enabled=True, intensity=1.0)

    # Define inlet types for PureData-style typing
    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        if not buf:
            return buf

        # Get tint values from effect params or sensors
        r = self.config.params.get("r", 255)
        g = self.config.params.get("g", 255)
        b = self.config.params.get("b", 255)
        a = self.config.params.get("a", 0.3)  # Default 30% tint

        # Clamp values
        r = max(0, min(255, int(r)))
        g = max(0, min(255, int(g)))
        b = max(0, min(255, int(b)))
        a = max(0.0, min(1.0, float(a)))

        if a <= 0:
            return buf

        # Convert RGB to ANSI 256 color
        ansi_color = self._rgb_to_ansi256(r, g, b)

        # Apply tint with transparency effect
        result = []
        for line in buf:
            if not line.strip():
                result.append(line)
                continue

            # Check if line already has ANSI codes
            if "\033[" in line:
                # For lines with existing colors, wrap the whole line
                result.append(f"\033[38;5;{ansi_color}m{line}\033[0m")
            else:
                # Apply tint to plain text lines
                result.append(f"\033[38;5;{ansi_color}m{line}\033[0m")

        return result

    def _rgb_to_ansi256(self, r: int, g: int, b: int) -> int:
        """Convert RGB (0-255 each) to an ANSI 256 color code."""
        if r == g == b == 0:
            return 16
        if r == g == b == 255:
            return 231

        # Calculate grayscale
        gray = int((0.299 * r + 0.587 * g + 0.114 * b) / 255 * 24) + 232

        # Calculate color cube
        ri = int(r / 51)
        gi = int(g / 51)
        bi = int(b / 51)
        color = 16 + 36 * ri + 6 * gi + bi

        # Use whichever is closer - gray or color
        gray_dist = abs(r - gray)
        color_dist = (
            (r - ri * 51) ** 2 + (g - gi * 51) ** 2 + (b - bi * 51) ** 2
        ) ** 0.5

        if gray_dist < color_dist:
            return gray
        return color

    def configure(self, config: EffectConfig) -> None:
        self.config = config
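The cube branch of `_rgb_to_ansi256` above follows the standard xterm 256-color layout: codes 16-231 form a 6×6×6 color cube, 232-255 a 24-step grayscale ramp. A minimal standalone sketch of just the cube mapping (the function name here is illustrative, not part of the module):

```python
# Standalone sketch of the RGB -> ANSI-256 cube mapping used by TintEffect.
# Codes 16-231 form a 6x6x6 color cube: index = 16 + 36*r + 6*g + b,
# where each channel is quantized from 0-255 down to 0-5.

def rgb_to_cube(r: int, g: int, b: int) -> int:
    """Map 0-255 RGB onto the 6x6x6 ANSI color cube (codes 16-231)."""
    ri, gi, bi = (min(c // 51, 5) for c in (r, g, b))
    return 16 + 36 * ri + 6 * gi + bi

print(rgb_to_cube(0, 0, 0))        # 16  (black corner of the cube)
print(rgb_to_cube(255, 255, 255))  # 231 (white corner)
print(rgb_to_cube(255, 0, 0))      # 196 (pure red)
```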
@@ -23,8 +23,32 @@ from dataclasses import dataclass, field
from typing import Any


@dataclass
class PartialUpdate:
    """Represents a partial buffer update for optimized rendering.

    Instead of processing the full buffer every frame, effects that support
    partial updates can process only the changed regions.

    Attributes:
        rows: Row range that changed (None = all rows)
        cols: Column range that changed (None = full width)
        dirty: Set of dirty row indices
    """

    rows: tuple[int, int] | None = None  # (start, end) inclusive
    cols: tuple[int, int] | None = None  # (start, end) inclusive
    dirty: set[int] | None = None  # Set of dirty row indices
    full_buffer: bool = True  # If True, process entire buffer


@dataclass
class EffectContext:
    """Context passed to effect plugins during processing.

    Contains terminal dimensions, camera state, frame info, and real-time
    sensor values.
    """

    terminal_width: int
    terminal_height: int
    scroll_cam: int
@@ -35,12 +59,58 @@ class EffectContext:
    frame_number: int = 0
    has_message: bool = False
    items: list = field(default_factory=list)
    _state: dict[str, Any] = field(default_factory=dict, repr=False)

    def compute_entropy(self, effect_name: str, data: Any) -> float:
        """Compute an entropy score for an effect based on its output.

        Args:
            effect_name: Name of the effect
            data: Processed buffer or effect-specific data

        Returns:
            Entropy score 0.0-1.0 representing visual chaos.
        """
        # Default implementation: use the effect name as a seed for
        # deterministic randomness. Better implementations can analyze
        # the actual buffer content.
        import hashlib

        data_str = str(data)[:100] if data else ""
        hash_val = hashlib.md5(f"{effect_name}:{data_str}".encode()).hexdigest()
        # Convert hash to float 0.0-1.0
        entropy = int(hash_val[:8], 16) / 0xFFFFFFFF
        return min(max(entropy, 0.0), 1.0)

    def get_sensor_value(self, sensor_name: str) -> float | None:
        """Get a sensor value from context state.

        Args:
            sensor_name: Name of the sensor (e.g., "mic", "camera")

        Returns:
            Sensor value as a float, or None if not available.
        """
        return self._state.get(f"sensor.{sensor_name}")

    def set_state(self, key: str, value: Any) -> None:
        """Set a state value in the context."""
        self._state[key] = value

    def get_state(self, key: str, default: Any = None) -> Any:
        """Get a state value from the context."""
        return self._state.get(key, default)

    @property
    def state(self) -> dict[str, Any]:
        """Get the state dictionary for direct access by effects."""
        return self._state


@dataclass
class EffectConfig:
    enabled: bool = True
    intensity: float = 1.0
    entropy: float = 0.0  # Visual chaos metric (0.0 = calm, 1.0 = chaotic)
    params: dict[str, Any] = field(default_factory=dict)


@@ -51,6 +121,14 @@ class EffectPlugin(ABC):
    - name: str - unique identifier for the effect
    - config: EffectConfig - current configuration

    Optional class attribute:
    - param_bindings: dict - Declarative sensor-to-param bindings
      Example:
          param_bindings = {
              "intensity": {"sensor": "mic", "transform": "linear"},
              "rate": {"sensor": "mic", "transform": "exponential"},
          }

    And implement:
    - process(buf, ctx) -> list[str]
    - configure(config) -> None
@@ -63,10 +141,17 @@ class EffectPlugin(ABC):

    Effects should handle missing or zero context values gracefully by
    returning the input buffer unchanged rather than raising errors.

    The param_bindings system enables PureData-style signal routing:
    - Sensors emit values that effects can bind to
    - Transform functions map sensor values to param ranges
    - Effects read bound values from context.state["sensor.{name}"]
    """

    name: str
    config: EffectConfig
    param_bindings: dict[str, dict[str, str | float]] = {}
    supports_partial_updates: bool = False  # Override in subclasses for optimization

    @abstractmethod
    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
@@ -81,6 +166,25 @@ class EffectPlugin(ABC):
        """
        ...

    def process_partial(
        self, buf: list[str], ctx: EffectContext, partial: PartialUpdate
    ) -> list[str]:
        """Process a partial buffer for optimized rendering.

        Override this in subclasses that support partial updates for
        performance. The default implementation falls back to full buffer
        processing.

        Args:
            buf: List of lines to process
            ctx: Effect context with terminal state
            partial: PartialUpdate indicating which regions changed

        Returns:
            Processed buffer (may be the same object or a new list).
        """
        # Default: fall back to full processing
        return self.process(buf, ctx)

    @abstractmethod
    def configure(self, config: EffectConfig) -> None:
        """Configure the effect with new settings.

@@ -120,3 +224,58 @@ def create_effect_context(
class PipelineConfig:
    order: list[str] = field(default_factory=list)
    effects: dict[str, EffectConfig] = field(default_factory=dict)


def apply_param_bindings(
    effect: "EffectPlugin",
    ctx: EffectContext,
) -> EffectConfig:
    """Apply sensor bindings to an effect config.

    This resolves param_bindings declarations by reading sensor values
    from the context and applying transform functions.

    Args:
        effect: The effect with param_bindings to apply
        ctx: EffectContext containing sensor values

    Returns:
        Modified EffectConfig with sensor-driven values applied.
    """
    import copy

    if not effect.param_bindings:
        return effect.config

    config = copy.deepcopy(effect.config)

    for param_name, binding in effect.param_bindings.items():
        sensor_name: str = binding.get("sensor", "")
        transform: str = binding.get("transform", "linear")

        if not sensor_name:
            continue

        sensor_value = ctx.get_sensor_value(sensor_name)
        if sensor_value is None:
            continue

        if transform == "linear":
            applied_value: float = sensor_value
        elif transform == "exponential":
            applied_value = sensor_value**2
        elif transform == "threshold":
            threshold = float(binding.get("threshold", 0.5))
            applied_value = 1.0 if sensor_value > threshold else 0.0
        elif transform == "inverse":
            applied_value = 1.0 - sensor_value
        else:
            applied_value = sensor_value

        config.params[f"{param_name}_sensor"] = applied_value

        if param_name == "intensity":
            base_intensity = effect.config.intensity
            config.intensity = base_intensity * (0.5 + applied_value * 0.5)

    return config
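The transform chain in `apply_param_bindings` can be exercised in isolation; a self-contained sketch of the four transforms (the helper name is illustrative, unknown transforms pass through unchanged, as in the function above):

```python
# Minimal re-statement of the transforms resolved by apply_param_bindings.
def apply_transform(value: float, transform: str, threshold: float = 0.5) -> float:
    if transform == "linear":
        return value
    if transform == "exponential":
        return value ** 2  # emphasizes peaks, attenuates quiet sensor values
    if transform == "threshold":
        return 1.0 if value > threshold else 0.0  # gate to binary on/off
    if transform == "inverse":
        return 1.0 - value
    return value  # unknown transforms pass through unchanged

print(apply_transform(0.5, "exponential"))  # 0.25
print(apply_transform(0.6, "threshold"))    # 1.0
```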
@@ -1,25 +0,0 @@
"""
Event emitter protocols - abstract interfaces for event-producing components.
"""

from collections.abc import Callable
from typing import Any, Protocol


class EventEmitter(Protocol):
    """Protocol for components that emit events."""

    def subscribe(self, callback: Callable[[Any], None]) -> None: ...
    def unsubscribe(self, callback: Callable[[Any], None]) -> None: ...


class Startable(Protocol):
    """Protocol for components that can be started."""

    def start(self) -> Any: ...


class Stoppable(Protocol):
    """Protocol for components that can be stopped."""

    def stop(self) -> None: ...

engine/fetch.py (157 lines)
@@ -7,6 +7,7 @@ import json
import pathlib
import re
import urllib.request
from concurrent.futures import ThreadPoolExecutor, as_completed
from datetime import datetime
from typing import Any

@@ -17,54 +18,98 @@ from engine.filter import skip, strip_tags
from engine.sources import FEEDS, POETRY_SOURCES
from engine.terminal import boot_ln

# Type alias for headline items
HeadlineTuple = tuple[str, str, str]

DEFAULT_MAX_WORKERS = 10
FAST_START_SOURCES = 5
FAST_START_TIMEOUT = 3

# ─── SINGLE FEED ──────────────────────────────────────────
def fetch_feed(url: str) -> Any | None:
    """Fetch and parse a single RSS feed URL."""

def fetch_feed(url: str) -> tuple[str, Any] | tuple[None, None]:
    """Fetch and parse a single RSS feed URL. Returns a (url, feed) tuple."""
    try:
        req = urllib.request.Request(url, headers={"User-Agent": "mainline/0.1"})
        resp = urllib.request.urlopen(req, timeout=config.FEED_TIMEOUT)
        return feedparser.parse(resp.read())
        timeout = FAST_START_TIMEOUT if url in _fast_start_urls else config.FEED_TIMEOUT
        resp = urllib.request.urlopen(req, timeout=timeout)
        return (url, feedparser.parse(resp.read()))
    except Exception:
        return None
        return (url, None)


def _parse_feed(feed: Any, src: str) -> list[HeadlineTuple]:
    """Parse a feed and return a list of headline tuples."""
    items = []
    if feed is None or (feed.bozo and not feed.entries):
        return items

    for e in feed.entries:
        t = strip_tags(e.get("title", ""))
        if not t or skip(t):
            continue
        pub = e.get("published_parsed") or e.get("updated_parsed")
        try:
            ts = datetime(*pub[:6]).strftime("%H:%M") if pub else "——:——"
        except Exception:
            ts = "——:——"
        items.append((t, src, ts))
    return items


def fetch_all_fast() -> list[HeadlineTuple]:
    """Fetch only the first N sources for fast startup."""
    global _fast_start_urls
    _fast_start_urls = set(list(FEEDS.values())[:FAST_START_SOURCES])

    items: list[HeadlineTuple] = []
    with ThreadPoolExecutor(max_workers=FAST_START_SOURCES) as executor:
        futures = {
            executor.submit(fetch_feed, url): src
            for src, url in list(FEEDS.items())[:FAST_START_SOURCES]
        }
        for future in as_completed(futures):
            src = futures[future]
            url, feed = future.result()
            if feed is None or (feed.bozo and not feed.entries):
                boot_ln(src, "DARK", False)
                continue
            parsed = _parse_feed(feed, src)
            if parsed:
                items.extend(parsed)
                boot_ln(src, f"LINKED [{len(parsed)}]", True)
            else:
                boot_ln(src, "EMPTY", False)
    return items


# ─── ALL RSS FEEDS ────────────────────────────────────────
def fetch_all() -> tuple[list[HeadlineTuple], int, int]:
    """Fetch all RSS feeds and return items, linked count, failed count."""
    """Fetch all RSS feeds concurrently and return items, linked count, failed count."""
    global _fast_start_urls
    _fast_start_urls = set()

    items: list[HeadlineTuple] = []
    linked = failed = 0
    for src, url in FEEDS.items():
        feed = fetch_feed(url)
        if feed is None or (feed.bozo and not feed.entries):
            boot_ln(src, "DARK", False)
            failed += 1
            continue
        n = 0
        for e in feed.entries:
            t = strip_tags(e.get("title", ""))
            if not t or skip(t):

    with ThreadPoolExecutor(max_workers=DEFAULT_MAX_WORKERS) as executor:
        futures = {executor.submit(fetch_feed, url): src for src, url in FEEDS.items()}
        for future in as_completed(futures):
            src = futures[future]
            url, feed = future.result()
            if feed is None or (feed.bozo and not feed.entries):
                boot_ln(src, "DARK", False)
                failed += 1
                continue
            pub = e.get("published_parsed") or e.get("updated_parsed")
            try:
                ts = datetime(*pub[:6]).strftime("%H:%M") if pub else "——:——"
            except Exception:
                ts = "——:——"
            items.append((t, src, ts))
            n += 1
        if n:
            boot_ln(src, f"LINKED [{n}]", True)
            linked += 1
        else:
            boot_ln(src, "EMPTY", False)
            failed += 1
            parsed = _parse_feed(feed, src)
            if parsed:
                items.extend(parsed)
                boot_ln(src, f"LINKED [{len(parsed)}]", True)
                linked += 1
            else:
                boot_ln(src, "EMPTY", False)
                failed += 1

    return items, linked, failed


# ─── PROJECT GUTENBERG ────────────────────────────────────
def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
    """Download and parse stanzas/passages from a Project Gutenberg text."""
    try:
@@ -76,23 +121,21 @@ def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
        .replace("\r\n", "\n")
        .replace("\r", "\n")
    )
    # Strip PG boilerplate
    m = re.search(r"\*\*\*\s*START OF[^\n]*\n", text)
    if m:
        text = text[m.end() :]
    m = re.search(r"\*\*\*\s*END OF", text)
    if m:
        text = text[: m.start()]
    # Split on blank lines into stanzas/passages
    blocks = re.split(r"\n{2,}", text.strip())
    items = []
    for blk in blocks:
        blk = " ".join(blk.split())  # flatten to one line
        blk = " ".join(blk.split())
        if len(blk) < 20 or len(blk) > 280:
            continue
        if blk.isupper():  # skip all-caps headers
        if blk.isupper():
            continue
        if re.match(r"^[IVXLCDM]+\.?\s*$", blk):  # roman numerals
        if re.match(r"^[IVXLCDM]+\.?\s*$", blk):
            continue
        items.append((blk, label, ""))
    return items
@@ -100,28 +143,35 @@ def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
    return []


def fetch_poetry():
    """Fetch all poetry/literature sources."""
def fetch_poetry() -> tuple[list[HeadlineTuple], int, int]:
    """Fetch all poetry/literature sources concurrently."""
    items = []
    linked = failed = 0
    for label, url in POETRY_SOURCES.items():
        stanzas = _fetch_gutenberg(url, label)
        if stanzas:
            boot_ln(label, f"LOADED [{len(stanzas)}]", True)
            items.extend(stanzas)
            linked += 1
        else:
            boot_ln(label, "DARK", False)
            failed += 1

    with ThreadPoolExecutor(max_workers=DEFAULT_MAX_WORKERS) as executor:
        futures = {
            executor.submit(_fetch_gutenberg, url, label): label
            for label, url in POETRY_SOURCES.items()
        }
        for future in as_completed(futures):
            label = futures[future]
            stanzas = future.result()
            if stanzas:
                boot_ln(label, f"LOADED [{len(stanzas)}]", True)
                items.extend(stanzas)
                linked += 1
            else:
                boot_ln(label, "DARK", False)
                failed += 1

    return items, linked, failed


# ─── CACHE ────────────────────────────────────────────────
_CACHE_DIR = pathlib.Path(__file__).resolve().parent.parent
_cache_dir = pathlib.Path(__file__).resolve().parent / "fixtures"


def _cache_path():
    return _CACHE_DIR / f".mainline_cache_{config.MODE}.json"
    return _cache_dir / "headlines.json"


def load_cache():
@@ -143,3 +193,6 @@ def save_cache(items):
        _cache_path().write_text(json.dumps({"items": items}))
    except Exception:
        pass


_fast_start_urls: set = set()

engine/figment_render.py (new file, 90 lines)
@@ -0,0 +1,90 @@
"""
SVG to half-block terminal art rasterization.

Pipeline: SVG -> cairosvg -> PIL -> greyscale threshold -> half-block encode.
Follows the same pixel-pair approach as engine/render.py for OTF fonts.
"""

from __future__ import annotations

import os
import sys
from io import BytesIO

# cairocffi (used by cairosvg) calls dlopen() to find the Cairo C library.
# On macOS with Homebrew, Cairo lives in /opt/homebrew/lib (Apple Silicon) or
# /usr/local/lib (Intel), which are not in dyld's default search path.
# Setting DYLD_LIBRARY_PATH before the import directs dlopen() to those paths.
if sys.platform == "darwin" and not os.environ.get("DYLD_LIBRARY_PATH"):
    for _brew_lib in ("/opt/homebrew/lib", "/usr/local/lib"):
        if os.path.exists(os.path.join(_brew_lib, "libcairo.2.dylib")):
            os.environ["DYLD_LIBRARY_PATH"] = _brew_lib
            break

import cairosvg
from PIL import Image

_cache: dict[tuple[str, int, int], list[str]] = {}


def rasterize_svg(svg_path: str, width: int, height: int) -> list[str]:
    """Convert an SVG file to a list of half-block terminal rows (uncolored).

    Args:
        svg_path: Path to SVG file.
        width: Target terminal width in columns.
        height: Target terminal height in rows.

    Returns:
        List of strings, one per terminal row, containing block characters.
    """
    cache_key = (svg_path, width, height)
    if cache_key in _cache:
        return _cache[cache_key]

    # SVG -> PNG in memory
    png_bytes = cairosvg.svg2png(
        url=svg_path,
        output_width=width,
        output_height=height * 2,  # 2 pixel rows per terminal row
    )

    # PNG -> greyscale PIL image.
    # Composite RGBA onto a white background so transparent areas become
    # white (255) and drawn pixels retain their luminance values.
    img_rgba = Image.open(BytesIO(png_bytes)).convert("RGBA")
    img_rgba = img_rgba.resize((width, height * 2), Image.Resampling.LANCZOS)
    background = Image.new("RGBA", img_rgba.size, (255, 255, 255, 255))
    background.paste(img_rgba, mask=img_rgba.split()[3])
    img = background.convert("L")

    data = img.tobytes()
    pix_w = width
    pix_h = height * 2
    # White (255) = empty space, dark (< threshold) = filled pixel
    threshold = 128

    # Half-block encode: walk pixel pairs
    rows: list[str] = []
    for y in range(0, pix_h, 2):
        row: list[str] = []
        for x in range(pix_w):
            top = data[y * pix_w + x] < threshold
            bot = data[(y + 1) * pix_w + x] < threshold if y + 1 < pix_h else False
            if top and bot:
                row.append("█")
            elif top:
                row.append("▀")
            elif bot:
                row.append("▄")
            else:
                row.append(" ")
        rows.append("".join(row))

    _cache[cache_key] = rows
    return rows
||||
|
||||
|
||||
def clear_cache() -> None:
|
||||
"""Clear the rasterization cache (e.g., on terminal resize)."""
|
||||
_cache.clear()
|
||||
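The pixel-pair encoding above is self-contained enough to demo in isolation. A minimal sketch (hypothetical `halfblock_rows` helper, taking boolean pixels instead of a thresholded greyscale buffer):

```python
# Minimal half-block encoder: pack two boolean pixel rows into one terminal row.
def halfblock_rows(pixels: list[list[bool]]) -> list[str]:
    glyphs = {
        (True, True): "█",   # both halves filled
        (True, False): "▀",  # top half only
        (False, True): "▄",  # bottom half only
        (False, False): " ",
    }
    rows = []
    for y in range(0, len(pixels), 2):
        top = pixels[y]
        # Odd pixel height: treat the missing bottom row as empty.
        bot = pixels[y + 1] if y + 1 < len(pixels) else [False] * len(top)
        rows.append("".join(glyphs[(t, b)] for t, b in zip(top, bot)))
    return rows

# A 4x4 checkerboard collapses to 2 terminal rows of alternating half blocks.
print(halfblock_rows([
    [True, False, True, False],
    [False, True, False, True],
    [True, False, True, False],
    [False, True, False, True],
]))
```

This is why `rasterize_svg` renders at `height * 2` pixel rows: each terminal cell carries two vertical samples.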
36
engine/figment_trigger.py
Normal file
@@ -0,0 +1,36 @@
"""
Figment trigger protocol and command types.

Defines the extensible input abstraction for triggering figment displays
from any control surface (ntfy, MQTT, serial, etc.).
"""

from __future__ import annotations

from dataclasses import dataclass
from enum import Enum
from typing import Protocol


class FigmentAction(Enum):
    TRIGGER = "trigger"
    SET_INTENSITY = "set_intensity"
    SET_INTERVAL = "set_interval"
    SET_COLOR = "set_color"
    STOP = "stop"


@dataclass
class FigmentCommand:
    action: FigmentAction
    value: float | str | None = None


class FigmentTrigger(Protocol):
    """Protocol for figment trigger sources.

    Any input source (ntfy, MQTT, serial) can implement this
    to trigger and control figment displays.
    """

    def poll(self) -> FigmentCommand | None: ...
1
engine/fixtures/headlines.json
Normal file
File diff suppressed because one or more lines are too long
73
engine/interfaces/__init__.py
Normal file
@@ -0,0 +1,73 @@
"""
Core interfaces for the mainline pipeline architecture.

This module provides all abstract base classes and protocols that define
the contracts between pipeline components:

- Stage: Base class for pipeline components (imported from pipeline.core)
- DataSource: Abstract data providers (imported from data_sources.sources)
- EffectPlugin: Visual effects interface (imported from effects.types)
- Sensor: Real-time input interface (imported from sensors)
- Display: Output backend protocol (imported from display)

It serves as a single, centralized import location for all interfaces.
"""

from engine.data_sources.sources import (
    DataSource,
    ImageItem,
    SourceItem,
)
from engine.display import Display
from engine.effects.types import (
    EffectConfig,
    EffectContext,
    EffectPlugin,
    PartialUpdate,
    PipelineConfig,
    apply_param_bindings,
    create_effect_context,
)
from engine.pipeline.core import (
    DataType,
    Stage,
    StageConfig,
    StageError,
    StageResult,
    create_stage_error,
)
from engine.sensors import (
    Sensor,
    SensorStage,
    SensorValue,
    create_sensor_stage,
)

__all__ = [
    # Stage interfaces
    "DataType",
    "Stage",
    "StageConfig",
    "StageError",
    "StageResult",
    "create_stage_error",
    # Data source interfaces
    "DataSource",
    "ImageItem",
    "SourceItem",
    # Effect interfaces
    "EffectConfig",
    "EffectContext",
    "EffectPlugin",
    "PartialUpdate",
    "PipelineConfig",
    "apply_param_bindings",
    "create_effect_context",
    # Sensor interfaces
    "Sensor",
    "SensorStage",
    "SensorValue",
    "create_sensor_stage",
    # Display protocol
    "Display",
]
267
engine/layers.py
@@ -1,267 +0,0 @@
"""
Layer compositing — message overlay, ticker zone, firehose, noise.
Depends on: config, render, effects.
"""

import random
import re
import time
from datetime import datetime

from engine import config
from engine.effects import (
    EffectChain,
    EffectContext,
    fade_line,
    firehose_line,
    glitch_bar,
    noise,
    vis_offset,
    vis_trunc,
)
from engine.render import big_wrap, lr_gradient, lr_gradient_opposite
from engine.terminal import RST, W_COOL

MSG_META = "\033[38;5;245m"
MSG_BORDER = "\033[2;38;5;37m"


def render_message_overlay(
    msg: tuple[str, str, float] | None,
    w: int,
    h: int,
    msg_cache: tuple,
) -> tuple[list[str], tuple]:
    """Render ntfy message overlay.

    Args:
        msg: (title, body, timestamp) or None
        w: terminal width
        h: terminal height
        msg_cache: (cache_key, rendered_rows) for caching

    Returns:
        (list of ANSI strings, updated cache)
    """
    overlay = []
    if msg is None:
        return overlay, msg_cache

    m_title, m_body, m_ts = msg
    display_text = m_body or m_title or "(empty)"
    display_text = re.sub(r"\s+", " ", display_text.upper())

    cache_key = (display_text, w)
    if msg_cache[0] != cache_key:
        msg_rows = big_wrap(display_text, w - 4)
        msg_cache = (cache_key, msg_rows)
    else:
        msg_rows = msg_cache[1]

    msg_rows = lr_gradient_opposite(
        msg_rows, (time.monotonic() * config.GRAD_SPEED) % 1.0
    )

    elapsed_s = int(time.monotonic() - m_ts)
    remaining = max(0, config.MESSAGE_DISPLAY_SECS - elapsed_s)
    ts_str = datetime.now().strftime("%H:%M:%S")
    panel_h = len(msg_rows) + 2
    panel_top = max(0, (h - panel_h) // 2)

    row_idx = 0
    for mr in msg_rows:
        ln = vis_trunc(mr, w)
        overlay.append(f"\033[{panel_top + row_idx + 1};1H {ln}\033[0m\033[K")
        row_idx += 1

    meta_parts = []
    if m_title and m_title != m_body:
        meta_parts.append(m_title)
    meta_parts.append(f"ntfy \u00b7 {ts_str} \u00b7 {remaining}s")
    meta = (
        " " + " \u00b7 ".join(meta_parts)
        if len(meta_parts) > 1
        else " " + meta_parts[0]
    )
    overlay.append(f"\033[{panel_top + row_idx + 1};1H{MSG_META}{meta}\033[0m\033[K")
    row_idx += 1

    bar = "\u2500" * (w - 4)
    overlay.append(f"\033[{panel_top + row_idx + 1};1H {MSG_BORDER}{bar}\033[0m\033[K")

    return overlay, msg_cache


def render_ticker_zone(
    active: list,
    scroll_cam: int,
    camera_x: int = 0,
    ticker_h: int = 0,
    w: int = 80,
    noise_cache: dict | None = None,
    grad_offset: float = 0.0,
) -> tuple[list[str], dict]:
    """Render the ticker scroll zone.

    Args:
        active: list of (content_rows, color, canvas_y, meta_idx)
        scroll_cam: camera position (viewport top)
        camera_x: horizontal camera offset
        ticker_h: height of ticker zone
        w: terminal width
        noise_cache: dict of cy -> noise string
        grad_offset: gradient animation offset

    Returns:
        (list of ANSI strings, updated noise_cache)
    """
    if noise_cache is None:
        noise_cache = {}
    buf = []
    top_zone = max(1, int(ticker_h * 0.25))
    bot_zone = max(1, int(ticker_h * 0.10))

    def noise_at(cy):
        if cy not in noise_cache:
            noise_cache[cy] = noise(w) if random.random() < 0.15 else None
        return noise_cache[cy]

    for r in range(ticker_h):
        scr_row = r + 1
        cy = scroll_cam + r
        top_f = min(1.0, r / top_zone) if top_zone > 0 else 1.0
        bot_f = min(1.0, (ticker_h - 1 - r) / bot_zone) if bot_zone > 0 else 1.0
        row_fade = min(top_f, bot_f)
        drawn = False

        for content, hc, by, midx in active:
            cr = cy - by
            if 0 <= cr < len(content):
                raw = content[cr]
                if cr != midx:
                    colored = lr_gradient([raw], grad_offset)[0]
                else:
                    colored = raw
                ln = vis_trunc(vis_offset(colored, camera_x), w)
                if row_fade < 1.0:
                    ln = fade_line(ln, row_fade)

                if cr == midx:
                    buf.append(f"\033[{scr_row};1H{W_COOL}{ln}{RST}\033[K")
                elif ln.strip():
                    buf.append(f"\033[{scr_row};1H{ln}{RST}\033[K")
                else:
                    buf.append(f"\033[{scr_row};1H\033[K")
                drawn = True
                break

        if not drawn:
            n = noise_at(cy)
            if row_fade < 1.0 and n:
                n = fade_line(n, row_fade)
            if n:
                buf.append(f"\033[{scr_row};1H{n}")
            else:
                buf.append(f"\033[{scr_row};1H\033[K")

    return buf, noise_cache


def apply_glitch(
    buf: list[str],
    ticker_buf_start: int,
    mic_excess: float,
    w: int,
) -> list[str]:
    """Apply glitch effect to ticker buffer.

    Args:
        buf: current buffer
        ticker_buf_start: index where ticker starts in buffer
        mic_excess: mic level above threshold
        w: terminal width

    Returns:
        Updated buffer with glitches applied
    """
    glitch_prob = 0.32 + min(0.9, mic_excess * 0.16)
    n_hits = 4 + int(mic_excess / 2)
    ticker_buf_len = len(buf) - ticker_buf_start

    if random.random() < glitch_prob and ticker_buf_len > 0:
        for _ in range(min(n_hits, ticker_buf_len)):
            gi = random.randint(0, ticker_buf_len - 1)
            scr_row = gi + 1
            buf[ticker_buf_start + gi] = f"\033[{scr_row};1H{glitch_bar(w)}"

    return buf


def render_firehose(items: list, w: int, fh: int, h: int) -> list[str]:
    """Render firehose strip at bottom of screen."""
    buf = []
    if fh > 0:
        for fr in range(fh):
            scr_row = h - fh + fr + 1
            fline = firehose_line(items, w)
            buf.append(f"\033[{scr_row};1H{fline}\033[K")
    return buf


_effect_chain = None


def init_effects() -> None:
    """Initialize effect plugins and chain."""
    global _effect_chain
    from engine.effects import EffectChain, get_registry

    registry = get_registry()

    import effects_plugins

    effects_plugins.discover_plugins()

    chain = EffectChain(registry)
    chain.set_order(["noise", "fade", "glitch", "firehose"])
    _effect_chain = chain


def process_effects(
    buf: list[str],
    w: int,
    h: int,
    scroll_cam: int,
    ticker_h: int,
    camera_x: int = 0,
    mic_excess: float = 0.0,
    grad_offset: float = 0.0,
    frame_number: int = 0,
    has_message: bool = False,
    items: list | None = None,
) -> list[str]:
    """Process buffer through effect chain."""
    if _effect_chain is None:
        init_effects()

    ctx = EffectContext(
        terminal_width=w,
        terminal_height=h,
        scroll_cam=scroll_cam,
        camera_x=camera_x,
        ticker_height=ticker_h,
        mic_excess=mic_excess,
        grad_offset=grad_offset,
        frame_number=frame_number,
        has_message=has_message,
        items=items or [],
    )
    return _effect_chain.process(buf, ctx)


def get_effect_chain() -> EffectChain | None:
    """Get the effect chain instance."""
    global _effect_chain
    if _effect_chain is None:
        init_effects()
    return _effect_chain
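The fade zones in `render_ticker_zone` compute, per row, how far the row sits into the top 25% and bottom 10% bands of the ticker; the row's fade factor is the smaller of the two fractions. A standalone sketch of that math (hypothetical `row_fade` helper):

```python
def row_fade(r: int, ticker_h: int) -> float:
    """Fade factor for row r: 1.0 in the middle, ramping to 0 at the edges."""
    top_zone = max(1, int(ticker_h * 0.25))  # top 25% of rows fade in
    bot_zone = max(1, int(ticker_h * 0.10))  # bottom 10% of rows fade out
    top_f = min(1.0, r / top_zone)
    bot_f = min(1.0, (ticker_h - 1 - r) / bot_zone)
    return min(top_f, bot_f)

# For a 20-row ticker: rows 0..4 ramp in over the top band,
# and the last couple of rows ramp out over the bottom band.
fades = [row_fade(r, 20) for r in range(20)]
```

The asymmetric zones give content a long fade-in as it scrolls into view and a short fade-out at the bottom edge.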
@@ -1,96 +0,0 @@
"""
Microphone input monitor — standalone, no internal dependencies.
Gracefully degrades if sounddevice/numpy are unavailable.
"""

import atexit
from collections.abc import Callable
from datetime import datetime

try:
    import numpy as _np
    import sounddevice as _sd

    _HAS_MIC = True
except Exception:
    _HAS_MIC = False


from engine.events import MicLevelEvent


class MicMonitor:
    """Background mic stream that exposes current RMS dB level."""

    def __init__(self, threshold_db=50):
        self.threshold_db = threshold_db
        self._db = -99.0
        self._stream = None
        self._subscribers: list[Callable[[MicLevelEvent], None]] = []

    @property
    def available(self):
        """True if sounddevice is importable."""
        return _HAS_MIC

    @property
    def db(self):
        """Current RMS dB level."""
        return self._db

    @property
    def excess(self):
        """dB above threshold (clamped to 0)."""
        return max(0.0, self._db - self.threshold_db)

    def subscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
        """Register a callback to be called when mic level changes."""
        self._subscribers.append(callback)

    def unsubscribe(self, callback: Callable[[MicLevelEvent], None]) -> None:
        """Remove a registered callback."""
        if callback in self._subscribers:
            self._subscribers.remove(callback)

    def _emit(self, event: MicLevelEvent) -> None:
        """Emit an event to all subscribers."""
        for cb in self._subscribers:
            try:
                cb(event)
            except Exception:
                pass

    def start(self):
        """Start background mic stream. Returns True on success, False/None otherwise."""
        if not _HAS_MIC:
            return None

        def _cb(indata, frames, t, status):
            rms = float(_np.sqrt(_np.mean(indata**2)))
            self._db = 20 * _np.log10(rms) if rms > 0 else -99.0
            if self._subscribers:
                event = MicLevelEvent(
                    db_level=self._db,
                    excess_above_threshold=max(0.0, self._db - self.threshold_db),
                    timestamp=datetime.now(),
                )
                self._emit(event)

        try:
            self._stream = _sd.InputStream(
                callback=_cb, channels=1, samplerate=44100, blocksize=2048
            )
            self._stream.start()
            atexit.register(self.stop)
            return True
        except Exception:
            return False

    def stop(self):
        """Stop the mic stream if running."""
        if self._stream:
            try:
                self._stream.stop()
            except Exception:
                pass
            self._stream = None
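The RMS-to-dB conversion in the stream callback is plain math; a standalone sketch with no audio dependencies (hypothetical `rms_db` helper mirroring the `-99.0` silence floor used above):

```python
import math


def rms_db(samples: list[float]) -> float:
    """20*log10 of the root-mean-square level; -99.0 floor for silence."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else -99.0


# A full-scale square wave sits at 0 dB; half amplitude is about -6 dB.
print(rms_db([1.0, -1.0, 1.0, -1.0]))
print(rms_db([0.5, -0.5]))
```

With float samples normalized to [-1.0, 1.0] (as sounddevice delivers by default), the result is dBFS, so `threshold_db=50` only makes sense relative to whatever offset the caller applies.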
@@ -1,611 +0,0 @@
"""
Pipeline introspection - generates self-documenting diagrams of the render pipeline.

Pipeline Architecture:
- Sources: Data providers (RSS, Poetry, Ntfy, Mic) - static or dynamic
- Fetch: Retrieve data from sources
- Prepare: Transform raw data (make_block, strip_tags, translate)
- Scroll: Camera-based viewport rendering (ticker zone, message overlay)
- Effects: Post-processing chain (noise, fade, glitch, firehose, hud)
- Render: Final line rendering and layout
- Display: Output backends (terminal, pygame, websocket, sixel, kitty)

Key abstractions:
- DataSource: Sources can be static (cached) or dynamic (idempotent fetch)
- Camera: Viewport controller (vertical, horizontal, omni, floating, trace)
- EffectChain: Ordered effect processing pipeline
- Display: Pluggable output backends
- SourceRegistry: Source discovery and management
- AnimationController: Time-based parameter animation
- Preset: Package of initial params + animation for demo modes
"""

from __future__ import annotations

from dataclasses import dataclass


@dataclass
class PipelineNode:
    """Represents a node in the pipeline."""

    name: str
    module: str
    class_name: str | None = None
    func_name: str | None = None
    description: str = ""
    inputs: list[str] | None = None
    outputs: list[str] | None = None
    metrics: dict | None = None  # Performance metrics (avg_ms, min_ms, max_ms)


class PipelineIntrospector:
    """Introspects the render pipeline and generates documentation."""

    def __init__(self):
        self.nodes: list[PipelineNode] = []

    def add_node(self, node: PipelineNode) -> None:
        self.nodes.append(node)

    def generate_mermaid_flowchart(self) -> str:
        """Generate a Mermaid flowchart of the pipeline."""
        lines = ["```mermaid", "flowchart TD"]

        subgraph_groups = {
            "Sources": [],
            "Fetch": [],
            "Prepare": [],
            "Scroll": [],
            "Effects": [],
            "Display": [],
            "Async": [],
            "Animation": [],
            "Viz": [],
        }

        other_nodes = []

        for node in self.nodes:
            node_id = node.name.replace("-", "_").replace(" ", "_").replace(":", "_")
            label = node.name
            if node.class_name:
                label = f"{node.name}\\n({node.class_name})"
            elif node.func_name:
                label = f"{node.name}\\n({node.func_name})"

            if node.description:
                label += f"\\n{node.description}"

            if node.metrics:
                avg = node.metrics.get("avg_ms", 0)
                if avg > 0:
                    label += f"\\n⏱ {avg:.1f}ms"
                impact = node.metrics.get("impact_pct", 0)
                if impact > 0:
                    label += f" ({impact:.0f}%)"

            node_entry = f' {node_id}["{label}"]'

            if "DataSource" in node.name or "SourceRegistry" in node.name:
                subgraph_groups["Sources"].append(node_entry)
            elif "fetch" in node.name.lower():
                subgraph_groups["Fetch"].append(node_entry)
            elif (
                "make_block" in node.name
                or "strip_tags" in node.name
                or "translate" in node.name
            ):
                subgraph_groups["Prepare"].append(node_entry)
            elif (
                "StreamController" in node.name
                or "render_ticker" in node.name
                or "render_message" in node.name
                or "Camera" in node.name
            ):
                subgraph_groups["Scroll"].append(node_entry)
            elif "Effect" in node.name or "effect" in node.module:
                subgraph_groups["Effects"].append(node_entry)
            elif "Display:" in node.name:
                subgraph_groups["Display"].append(node_entry)
            elif "Ntfy" in node.name or "Mic" in node.name:
                subgraph_groups["Async"].append(node_entry)
            elif "Animation" in node.name or "Preset" in node.name:
                subgraph_groups["Animation"].append(node_entry)
            elif "pipeline_viz" in node.module or "CameraLarge" in node.name:
                subgraph_groups["Viz"].append(node_entry)
            else:
                other_nodes.append(node_entry)

        for group_name, nodes in subgraph_groups.items():
            if nodes:
                lines.append(f" subgraph {group_name}")
                for node in nodes:
                    lines.append(node)
                lines.append(" end")

        for node in other_nodes:
            lines.append(node)

        lines.append("")

        for node in self.nodes:
            node_id = node.name.replace("-", "_").replace(" ", "_").replace(":", "_")
            if node.inputs:
                for inp in node.inputs:
                    inp_id = inp.replace("-", "_").replace(" ", "_").replace(":", "_")
                    lines.append(f" {inp_id} --> {node_id}")

        lines.append("```")
        return "\n".join(lines)

    def generate_mermaid_sequence(self) -> str:
        """Generate a Mermaid sequence diagram of message flow."""
        lines = ["```mermaid", "sequenceDiagram"]

        lines.append(" participant Sources")
        lines.append(" participant Fetch")
        lines.append(" participant Scroll")
        lines.append(" participant Effects")
        lines.append(" participant Display")

        lines.append(" Sources->>Fetch: headlines")
        lines.append(" Fetch->>Scroll: content blocks")
        lines.append(" Scroll->>Effects: buffer")
        lines.append(" Effects->>Effects: process chain")
        lines.append(" Effects->>Display: rendered buffer")

        lines.append("```")
        return "\n".join(lines)

    def generate_mermaid_state(self) -> str:
        """Generate a Mermaid state diagram of camera modes."""
        lines = ["```mermaid", "stateDiagram-v2"]

        lines.append(" [*] --> Vertical")
        lines.append(" Vertical --> Horizontal: set_mode()")
        lines.append(" Horizontal --> Omni: set_mode()")
        lines.append(" Omni --> Floating: set_mode()")
        lines.append(" Floating --> Trace: set_mode()")
        lines.append(" Trace --> Vertical: set_mode()")

        lines.append(" state Vertical {")
        lines.append(" [*] --> ScrollUp")
        lines.append(" ScrollUp --> ScrollUp: +y each frame")
        lines.append(" }")

        lines.append(" state Horizontal {")
        lines.append(" [*] --> ScrollLeft")
        lines.append(" ScrollLeft --> ScrollLeft: +x each frame")
        lines.append(" }")

        lines.append(" state Omni {")
        lines.append(" [*] --> Diagonal")
        lines.append(" Diagonal --> Diagonal: +x, +y")
        lines.append(" }")

        lines.append(" state Floating {")
        lines.append(" [*] --> Bobbing")
        lines.append(" Bobbing --> Bobbing: sin(time)")
        lines.append(" }")

        lines.append(" state Trace {")
        lines.append(" [*] --> FollowPath")
        lines.append(" FollowPath --> FollowPath: node by node")
        lines.append(" }")

        lines.append("```")
        return "\n".join(lines)

    def generate_full_diagram(self) -> str:
        """Generate full pipeline documentation."""
        lines = [
            "# Render Pipeline",
            "",
            "## Data Flow",
            "",
            self.generate_mermaid_flowchart(),
            "",
            "## Message Sequence",
            "",
            self.generate_mermaid_sequence(),
            "",
            "## Camera States",
            "",
            self.generate_mermaid_state(),
        ]
        return "\n".join(lines)

    def introspect_sources(self) -> None:
        """Introspect data sources."""
        from engine import sources

        for name in dir(sources):
            obj = getattr(sources, name)
            if isinstance(obj, dict):
                self.add_node(
                    PipelineNode(
                        name=f"Data Source: {name}",
                        module="engine.sources",
                        description=f"{len(obj)} feeds configured",
                    )
                )

    def introspect_sources_v2(self) -> None:
        """Introspect data sources v2 (new abstraction)."""
        from engine.sources_v2 import SourceRegistry, init_default_sources

        init_default_sources()
        SourceRegistry()

        self.add_node(
            PipelineNode(
                name="SourceRegistry",
                module="engine.sources_v2",
                class_name="SourceRegistry",
                description="Source discovery and management",
            )
        )

        for name, desc in [
            ("HeadlinesDataSource", "RSS feed headlines"),
            ("PoetryDataSource", "Poetry DB"),
            ("PipelineDataSource", "Pipeline viz (dynamic)"),
        ]:
            self.add_node(
                PipelineNode(
                    name=f"DataSource: {name}",
                    module="engine.sources_v2",
                    class_name=name,
                    description=f"{desc}",
                )
            )

    def introspect_prepare(self) -> None:
        """Introspect prepare layer (transformation)."""
        self.add_node(
            PipelineNode(
                name="make_block",
                module="engine.render",
                func_name="make_block",
                description="Transform headline into display block",
                inputs=["title", "source", "timestamp", "width"],
                outputs=["block"],
            )
        )

        self.add_node(
            PipelineNode(
                name="strip_tags",
                module="engine.filter",
                func_name="strip_tags",
                description="Remove HTML tags from content",
                inputs=["html"],
                outputs=["plain_text"],
            )
        )

        self.add_node(
            PipelineNode(
                name="translate_headline",
                module="engine.translate",
                func_name="translate_headline",
                description="Translate headline to target language",
                inputs=["title", "target_lang"],
                outputs=["translated_title"],
            )
        )

    def introspect_fetch(self) -> None:
        """Introspect fetch layer."""
        self.add_node(
            PipelineNode(
                name="fetch_all",
                module="engine.fetch",
                func_name="fetch_all",
                description="Fetch RSS feeds",
                outputs=["items"],
            )
        )

        self.add_node(
            PipelineNode(
                name="fetch_poetry",
                module="engine.fetch",
                func_name="fetch_poetry",
                description="Fetch Poetry DB",
                outputs=["items"],
            )
        )

    def introspect_scroll(self) -> None:
        """Introspect scroll engine."""
        self.add_node(
            PipelineNode(
                name="StreamController",
                module="engine.controller",
                class_name="StreamController",
                description="Main render loop orchestrator",
                inputs=["items", "ntfy_poller", "mic_monitor", "display"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="render_ticker_zone",
                module="engine.layers",
                func_name="render_ticker_zone",
                description="Render scrolling ticker content",
                inputs=["active", "camera"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="render_message_overlay",
                module="engine.layers",
                func_name="render_message_overlay",
                description="Render ntfy message overlay",
                inputs=["msg", "width", "height"],
                outputs=["overlay", "cache"],
            )
        )

    def introspect_render(self) -> None:
        """Introspect render layer."""
        self.add_node(
            PipelineNode(
                name="big_wrap",
                module="engine.render",
                func_name="big_wrap",
                description="Word-wrap text to width",
                inputs=["text", "width"],
                outputs=["lines"],
            )
        )

        self.add_node(
            PipelineNode(
                name="lr_gradient",
                module="engine.render",
                func_name="lr_gradient",
                description="Apply left-right gradient to lines",
                inputs=["lines", "position"],
                outputs=["styled_lines"],
            )
        )

    def introspect_async_sources(self) -> None:
        """Introspect async data sources (ntfy, mic)."""
        self.add_node(
            PipelineNode(
                name="NtfyPoller",
                module="engine.ntfy",
                class_name="NtfyPoller",
                description="Poll ntfy for messages (async)",
                inputs=["topic"],
                outputs=["message"],
            )
        )

        self.add_node(
            PipelineNode(
                name="MicMonitor",
                module="engine.mic",
                class_name="MicMonitor",
                description="Monitor microphone input (async)",
                outputs=["audio_level"],
            )
        )

    def introspect_eventbus(self) -> None:
        """Introspect event bus for decoupled communication."""
        self.add_node(
            PipelineNode(
                name="EventBus",
                module="engine.eventbus",
                class_name="EventBus",
                description="Thread-safe event publishing",
                inputs=["event"],
                outputs=["subscribers"],
            )
        )

    def introspect_animation(self) -> None:
        """Introspect animation system."""
        self.add_node(
            PipelineNode(
                name="AnimationController",
                module="engine.animation",
                class_name="AnimationController",
                description="Time-based parameter animation",
                inputs=["dt"],
                outputs=["params"],
            )
        )

        self.add_node(
            PipelineNode(
                name="Preset",
                module="engine.animation",
                class_name="Preset",
                description="Package of initial params + animation",
            )
        )

    def introspect_pipeline_viz(self) -> None:
        """Introspect pipeline visualization."""
        self.add_node(
            PipelineNode(
                name="generate_large_network_viewport",
                module="engine.pipeline_viz",
                func_name="generate_large_network_viewport",
                description="Large animated network visualization",
                inputs=["viewport_w", "viewport_h", "frame"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="CameraLarge",
                module="engine.pipeline_viz",
                class_name="CameraLarge",
                description="Large grid camera (trace mode)",
            )
        )

    def introspect_camera(self) -> None:
        """Introspect camera system."""
        self.add_node(
            PipelineNode(
                name="Camera",
                module="engine.camera",
                class_name="Camera",
                description="Viewport position controller",
                inputs=["dt"],
                outputs=["x", "y"],
            )
        )

    def introspect_effects(self) -> None:
        """Introspect effect system."""
        self.add_node(
            PipelineNode(
                name="EffectChain",
                module="engine.effects",
                class_name="EffectChain",
                description="Process effects in sequence",
                inputs=["buffer", "context"],
                outputs=["buffer"],
            )
        )

        self.add_node(
            PipelineNode(
                name="EffectRegistry",
                module="engine.effects",
                class_name="EffectRegistry",
                description="Manage effect plugins",
            )
        )

    def introspect_display(self) -> None:
        """Introspect display backends."""
        from engine.display import DisplayRegistry

        DisplayRegistry.initialize()
        backends = DisplayRegistry.list_backends()

        for backend in backends:
            self.add_node(
                PipelineNode(
                    name=f"Display: {backend}",
                    module="engine.display.backends",
                    class_name=f"{backend.title()}Display",
                    description=f"Render to {backend}",
                    inputs=["buffer"],
                )
            )

    def introspect_new_pipeline(self, pipeline=None) -> None:
        """Introspect new unified pipeline stages with metrics.

        Args:
            pipeline: Optional Pipeline instance to collect metrics from
        """

        stages_info = [
            (
                "ItemsSource",
                "engine.pipeline.adapters",
                "ItemsStage",
                "Provides pre-fetched items",
            ),
            (
                "Render",
                "engine.pipeline.adapters",
                "RenderStage",
                "Renders items to buffer",
            ),
            (
                "Effect",
                "engine.pipeline.adapters",
                "EffectPluginStage",
                "Applies effect",
            ),
            (
                "Display",
                "engine.pipeline.adapters",
                "DisplayStage",
                "Outputs to display",
            ),
        ]

        metrics = None
        if pipeline and hasattr(pipeline, "get_metrics_summary"):
            metrics = pipeline.get_metrics_summary()
            if "error" in metrics:
                metrics = None

        total_avg = metrics.get("pipeline", {}).get("avg_ms", 0) if metrics else 0

        for stage_name, module, class_name, desc in stages_info:
            node_metrics = None
            if metrics and "stages" in metrics:
                for name, stats in metrics["stages"].items():
                    if stage_name.lower() in name.lower():
                        impact_pct = (
                            (stats.get("avg_ms", 0) / total_avg * 100)
                            if total_avg > 0
                            else 0
                        )
                        node_metrics = {
                            "avg_ms": stats.get("avg_ms", 0),
                            "min_ms": stats.get("min_ms", 0),
                            "max_ms": stats.get("max_ms", 0),
                            "impact_pct": impact_pct,
                        }
                        break

            self.add_node(
                PipelineNode(
                    name=f"Pipeline: {stage_name}",
                    module=module,
                    class_name=class_name,
                    description=desc,
                    inputs=["data"],
                    outputs=["data"],
                    metrics=node_metrics,
                )
            )

def run(self) -> str:
|
||||
"""Run full introspection."""
|
||||
self.introspect_sources()
|
||||
self.introspect_sources_v2()
|
||||
self.introspect_fetch()
|
||||
self.introspect_prepare()
|
||||
self.introspect_scroll()
|
||||
self.introspect_render()
|
||||
self.introspect_camera()
|
||||
self.introspect_effects()
|
||||
self.introspect_display()
|
||||
self.introspect_async_sources()
|
||||
self.introspect_eventbus()
|
||||
self.introspect_animation()
|
||||
self.introspect_pipeline_viz()
|
||||
|
||||
return self.generate_full_diagram()
|
||||
|
||||
|
||||
def generate_pipeline_diagram() -> str:
|
||||
"""Generate a self-documenting pipeline diagram."""
|
||||
introspector = PipelineIntrospector()
|
||||
return introspector.run()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print(generate_pipeline_diagram())
|
||||
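The per-stage `impact_pct` computed in `introspect_new_pipeline` is simply each stage's average time as a share of the pipeline-wide average. A minimal self-contained sketch of that aggregation (the `sample` metrics dict below is illustrative, not the engine's real `get_metrics_summary()` output):

```python
def stage_impact(metrics: dict) -> dict[str, float]:
    """Compute each stage's share of total pipeline time, as a percentage."""
    total_avg = metrics.get("pipeline", {}).get("avg_ms", 0)
    impact = {}
    for name, stats in metrics.get("stages", {}).items():
        avg = stats.get("avg_ms", 0)
        # Guard against division by zero, as the introspector does
        impact[name] = (avg / total_avg * 100) if total_avg > 0 else 0
    return impact


sample = {
    "pipeline": {"avg_ms": 10.0},
    "stages": {"render": {"avg_ms": 6.0}, "display": {"avg_ms": 4.0}},
}
print(stage_impact(sample))  # render carries 60% of frame time, display 40%
```

Note that the shares can sum to less than 100% when the pipeline average includes overhead not attributed to any stage.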
@@ -50,8 +50,7 @@ from engine.pipeline.presets import (
    FIREHOSE_PRESET,
    PIPELINE_VIZ_PRESET,
    POETRY_PRESET,
    PRESETS,
    SIXEL_PRESET,
    UI_PRESET,
    WEBSOCKET_PRESET,
    PipelinePreset,
    create_preset_from_params,
@@ -92,8 +91,8 @@ __all__ = [
    "POETRY_PRESET",
    "PIPELINE_VIZ_PRESET",
    "WEBSOCKET_PRESET",
    "SIXEL_PRESET",
    "FIREHOSE_PRESET",
    "UI_PRESET",
    "get_preset",
    "list_presets",
    "create_preset_from_params",
@@ -3,297 +3,48 @@ Stage adapters - Bridge existing components to the Stage interface.

This module provides adapters that wrap existing components
(EffectPlugin, Display, DataSource, Camera) as Stage implementations.

DEPRECATED: This file is now a compatibility wrapper.
Use `engine.pipeline.adapters` package instead.
"""

import random
from typing import Any

from engine.pipeline.core import PipelineContext, Stage


class RenderStage(Stage):
    """Stage that renders items to a text buffer for display.

    This mimics the old demo's render pipeline:
    - Selects headlines and renders them to blocks
    - Applies camera scroll position
    - Adds firehose layer if enabled
    """

    def __init__(
        self,
        items: list,
        width: int = 80,
        height: int = 24,
        camera_speed: float = 1.0,
        camera_mode: str = "vertical",
        firehose_enabled: bool = False,
        name: str = "render",
    ):
        self.name = name
        self.category = "render"
        self.optional = False
        self._items = items
        self._width = width
        self._height = height
        self._camera_speed = camera_speed
        self._camera_mode = camera_mode
        self._firehose_enabled = firehose_enabled

        self._camera_y = 0.0
        self._camera_x = 0
        self._scroll_accum = 0.0
        self._ticker_next_y = 0
        self._active: list = []
        self._seen: set = set()
        self._pool: list = list(items)
        self._noise_cache: dict = {}
        self._frame_count = 0

    @property
    def capabilities(self) -> set[str]:
        return {"render.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"source.items"}

    def init(self, ctx: PipelineContext) -> bool:
        random.shuffle(self._pool)
        return True

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Render items to a text buffer."""
        from engine.effects import next_headline
        from engine.layers import render_firehose, render_ticker_zone
        from engine.render import make_block

        items = data or self._items
        w = ctx.params.viewport_width if ctx.params else self._width
        h = ctx.params.viewport_height if ctx.params else self._height
        camera_speed = ctx.params.camera_speed if ctx.params else self._camera_speed
        firehose = ctx.params.firehose_enabled if ctx.params else self._firehose_enabled

        scroll_step = 0.5 / (camera_speed * 10)
        self._scroll_accum += scroll_step

        GAP = 3

        while self._scroll_accum >= scroll_step:
            self._scroll_accum -= scroll_step
            self._camera_y += 1.0

        while (
            self._ticker_next_y < int(self._camera_y) + h + 10
            and len(self._active) < 50
        ):
            t, src, ts = next_headline(self._pool, items, self._seen)
            ticker_content, hc, midx = make_block(t, src, ts, w)
            self._active.append((ticker_content, hc, self._ticker_next_y, midx))
            self._ticker_next_y += len(ticker_content) + GAP

        self._active = [
            (c, hc, by, mi)
            for c, hc, by, mi in self._active
            if by + len(c) > int(self._camera_y)
        ]
        for k in list(self._noise_cache):
            if k < int(self._camera_y):
                del self._noise_cache[k]

        grad_offset = (self._frame_count * 0.01) % 1.0

        buf, self._noise_cache = render_ticker_zone(
            self._active,
            scroll_cam=int(self._camera_y),
            camera_x=self._camera_x,
            ticker_h=h,
            w=w,
            noise_cache=self._noise_cache,
            grad_offset=grad_offset,
        )

        if firehose:
            firehose_buf = render_firehose(items, w, 0, h)
            buf.extend(firehose_buf)

        self._frame_count += 1
        return buf


class EffectPluginStage(Stage):
    """Adapter wrapping EffectPlugin as a Stage."""

    def __init__(self, effect_plugin, name: str = "effect"):
        self._effect = effect_plugin
        self.name = name
        self.category = "effect"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {f"effect.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Process data through the effect."""
        if data is None:
            return None
        from engine.effects import EffectContext

        w = ctx.params.viewport_width if ctx.params else 80
        h = ctx.params.viewport_height if ctx.params else 24
        frame = ctx.params.frame_number if ctx.params else 0

        effect_ctx = EffectContext(
            terminal_width=w,
            terminal_height=h,
            scroll_cam=0,
            ticker_height=h,
            camera_x=0,
            mic_excess=0.0,
            grad_offset=(frame * 0.01) % 1.0,
            frame_number=frame,
            has_message=False,
            items=ctx.get("items", []),
        )
        return self._effect.process(data, effect_ctx)


class DisplayStage(Stage):
    """Adapter wrapping Display as a Stage."""

    def __init__(self, display, name: str = "terminal"):
        self._display = display
        self.name = name
        self.category = "display"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {"display.output"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    def init(self, ctx: PipelineContext) -> bool:
        w = ctx.params.viewport_width if ctx.params else 80
        h = ctx.params.viewport_height if ctx.params else 24
        result = self._display.init(w, h, reuse=False)
        return result is not False

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Output data to display."""
        if data is not None:
            self._display.show(data)
        return data

    def cleanup(self) -> None:
        self._display.cleanup()


class DataSourceStage(Stage):
    """Adapter wrapping DataSource as a Stage."""

    def __init__(self, data_source, name: str = "headlines"):
        self._source = data_source
        self.name = name
        self.category = "source"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {f"source.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Fetch data from source."""
        if hasattr(self._source, "get_items"):
            return self._source.get_items()
        return data


class ItemsStage(Stage):
    """Stage that holds pre-fetched items and provides them to the pipeline."""

    def __init__(self, items, name: str = "headlines"):
        self._items = items
        self.name = name
        self.category = "source"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {f"source.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Return the pre-fetched items."""
        return self._items


class CameraStage(Stage):
    """Adapter wrapping Camera as a Stage."""

    def __init__(self, camera, name: str = "vertical"):
        self._camera = camera
        self.name = name
        self.category = "camera"
        self.optional = True

    @property
    def capabilities(self) -> set[str]:
        return {"camera"}

    @property
    def dependencies(self) -> set[str]:
        return {"source.items"}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Apply camera transformation to data."""
        if data is None:
            return None
        if hasattr(self._camera, "apply"):
            return self._camera.apply(
                data, ctx.params.viewport_width if ctx.params else 80
            )
        return data

    def cleanup(self) -> None:
        if hasattr(self._camera, "reset"):
            self._camera.reset()


def create_stage_from_display(display, name: str = "terminal") -> DisplayStage:
    """Create a Stage from a Display instance."""
    return DisplayStage(display, name)


def create_stage_from_effect(effect_plugin, name: str) -> EffectPluginStage:
    """Create a Stage from an EffectPlugin."""
    return EffectPluginStage(effect_plugin, name)


def create_stage_from_source(data_source, name: str = "headlines") -> DataSourceStage:
    """Create a Stage from a DataSource."""
    return DataSourceStage(data_source, name)


def create_stage_from_camera(camera, name: str = "vertical") -> CameraStage:
    """Create a Stage from a Camera."""
    return CameraStage(camera, name)


def create_items_stage(items, name: str = "headlines") -> ItemsStage:
    """Create a Stage that holds pre-fetched items."""
    return ItemsStage(items, name)


# Re-export from the new package structure for backward compatibility
from engine.pipeline.adapters import (
    # Adapter classes
    CameraStage,
    CanvasStage,
    DataSourceStage,
    DisplayStage,
    EffectPluginStage,
    FontStage,
    ImageToTextStage,
    PassthroughStage,
    SourceItemsToBufferStage,
    ViewportFilterStage,
    # Factory functions
    create_stage_from_camera,
    create_stage_from_display,
    create_stage_from_effect,
    create_stage_from_font,
    create_stage_from_source,
)

__all__ = [
    # Adapter classes
    "EffectPluginStage",
    "DisplayStage",
    "DataSourceStage",
    "PassthroughStage",
    "SourceItemsToBufferStage",
    "CameraStage",
    "ViewportFilterStage",
    "FontStage",
    "ImageToTextStage",
    "CanvasStage",
    # Factory functions
    "create_stage_from_display",
    "create_stage_from_effect",
    "create_stage_from_source",
    "create_stage_from_camera",
    "create_stage_from_font",
]
engine/pipeline/adapters/__init__.py (new file, 55 lines)
@@ -0,0 +1,55 @@
"""Stage adapters - Bridge existing components to the Stage interface.

This module provides adapters that wrap existing components
(EffectPlugin, Display, DataSource, Camera) as Stage implementations.
"""

from .camera import CameraClockStage, CameraStage
from .data_source import DataSourceStage, PassthroughStage, SourceItemsToBufferStage
from .display import DisplayStage
from .effect_plugin import EffectPluginStage
from .factory import (
    create_stage_from_camera,
    create_stage_from_display,
    create_stage_from_effect,
    create_stage_from_font,
    create_stage_from_source,
)
from .message_overlay import MessageOverlayConfig, MessageOverlayStage
from .positioning import (
    PositioningMode,
    PositionStage,
    create_position_stage,
)
from .transform import (
    CanvasStage,
    FontStage,
    ImageToTextStage,
    ViewportFilterStage,
)

__all__ = [
    # Adapter classes
    "EffectPluginStage",
    "DisplayStage",
    "DataSourceStage",
    "PassthroughStage",
    "SourceItemsToBufferStage",
    "CameraStage",
    "CameraClockStage",
    "ViewportFilterStage",
    "FontStage",
    "ImageToTextStage",
    "CanvasStage",
    "MessageOverlayStage",
    "MessageOverlayConfig",
    "PositionStage",
    "PositioningMode",
    # Factory functions
    "create_stage_from_display",
    "create_stage_from_effect",
    "create_stage_from_source",
    "create_stage_from_camera",
    "create_stage_from_font",
    "create_position_stage",
]
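All of the adapters exported above share one minimal contract: `process(data, ctx)` returns the input for the next stage. A toy, self-contained sketch of that chaining (the class and function names here are illustrative stand-ins, not the engine's real `Stage` or `Pipeline`):

```python
from typing import Any


class ToyStage:
    """Minimal stand-in for the Stage interface."""

    def process(self, data: Any, ctx: dict) -> Any:
        return data


class Upper(ToyStage):
    def process(self, data, ctx):
        # Transform stage: uppercase every buffer line
        return [line.upper() for line in data]


class Frame(ToyStage):
    def process(self, data, ctx):
        # Display-prep stage: pad/clip lines to the viewport width
        width = ctx.get("width", 10)
        return [line.ljust(width)[:width] for line in data]


def run_pipeline(stages, data, ctx):
    # Each stage's output feeds the next stage's input
    for stage in stages:
        data = stage.process(data, ctx)
    return data


out = run_pipeline([Upper(), Frame()], ["hello"], {"width": 8})
# → ["HELLO   "]
```

The real pipeline adds capability/dependency resolution and typed inlets/outlets on top of this same fold-over-stages core.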
engine/pipeline/adapters/camera.py (new file, 219 lines)
@@ -0,0 +1,219 @@
"""Adapter for camera stage."""

import time
from typing import Any

from engine.pipeline.core import DataType, PipelineContext, Stage


class CameraClockStage(Stage):
    """Per-frame clock stage that updates camera state.

    This stage runs once per frame and updates the camera's internal state
    (position, time). It makes camera_y/camera_x available to subsequent
    stages via the pipeline context.

    Unlike other stages, this is a pure clock stage and doesn't process
    data - it just updates camera state and passes data through unchanged.
    """

    def __init__(self, camera, name: str = "camera-clock"):
        self._camera = camera
        self.name = name
        self.category = "camera"
        self.optional = False
        self._last_frame_time: float | None = None

    @property
    def stage_type(self) -> str:
        return "camera"

    @property
    def capabilities(self) -> set[str]:
        # Provides camera state info only.
        # NOTE: Do NOT provide "source" as it conflicts with viewport_filter's "source.filtered"
        return {"camera.state"}

    @property
    def dependencies(self) -> set[str]:
        # Clock stage - no dependencies (updates every frame regardless of data flow)
        return set()

    @property
    def inlet_types(self) -> set:
        # Accept any data type - this is a pass-through stage
        return {DataType.ANY}

    @property
    def outlet_types(self) -> set:
        # Pass through whatever was received
        return {DataType.ANY}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Update camera state and pass data through.

        This stage updates the camera's internal state (position, time) and
        makes the updated camera_y/camera_x available to subsequent stages
        via the pipeline context.

        The data is passed through unchanged - this stage only updates
        camera state, it doesn't transform the data.
        """
        if data is None:
            return data

        # Update camera speed from params if explicitly set (for dynamic modulation).
        # Only update if camera_speed in params differs from the default (1.0);
        # this preserves camera speed set during construction.
        if (
            ctx.params
            and hasattr(ctx.params, "camera_speed")
            and ctx.params.camera_speed != 1.0
        ):
            self._camera.set_speed(ctx.params.camera_speed)

        current_time = time.perf_counter()
        dt = 0.0
        if self._last_frame_time is not None:
            dt = current_time - self._last_frame_time
        self._camera.update(dt)
        self._last_frame_time = current_time

        # Update context with current camera position
        ctx.set_state("camera_y", self._camera.y)
        ctx.set_state("camera_x", self._camera.x)

        # Pass data through unchanged
        return data


class CameraStage(Stage):
    """Adapter wrapping Camera as a Stage.

    This stage applies camera viewport transformation to the rendered buffer.
    Camera state updates are handled by CameraClockStage.
    """

    def __init__(self, camera, name: str = "vertical"):
        self._camera = camera
        self.name = name
        self.category = "camera"
        self.optional = True
        self._last_frame_time: float | None = None

    def save_state(self) -> dict[str, Any]:
        """Save camera state for restoration after pipeline rebuild.

        Returns:
            Dictionary containing camera state that can be restored
        """
        state = {
            "x": self._camera.x,
            "y": self._camera.y,
            "mode": self._camera.mode.value
            if hasattr(self._camera.mode, "value")
            else self._camera.mode,
            "speed": self._camera.speed,
            "zoom": self._camera.zoom,
            "canvas_width": self._camera.canvas_width,
            "canvas_height": self._camera.canvas_height,
            "_x_float": getattr(self._camera, "_x_float", 0.0),
            "_y_float": getattr(self._camera, "_y_float", 0.0),
            "_time": getattr(self._camera, "_time", 0.0),
        }
        # Save radial camera state if present
        if hasattr(self._camera, "_r_float"):
            state["_r_float"] = self._camera._r_float
        if hasattr(self._camera, "_theta_float"):
            state["_theta_float"] = self._camera._theta_float
        if hasattr(self._camera, "_radial_input"):
            state["_radial_input"] = self._camera._radial_input
        return state

    def restore_state(self, state: dict[str, Any]) -> None:
        """Restore camera state from saved state.

        Args:
            state: Dictionary containing camera state from save_state()
        """
        from engine.camera import CameraMode

        self._camera.x = state.get("x", 0)
        self._camera.y = state.get("y", 0)

        # Restore mode - handle both enum value and direct enum
        mode_value = state.get("mode", 0)
        if isinstance(mode_value, int):
            self._camera.mode = CameraMode(mode_value)
        else:
            self._camera.mode = mode_value

        self._camera.speed = state.get("speed", 1.0)
        self._camera.zoom = state.get("zoom", 1.0)
        self._camera.canvas_width = state.get("canvas_width", 200)
        self._camera.canvas_height = state.get("canvas_height", 200)

        # Restore internal state
        if hasattr(self._camera, "_x_float"):
            self._camera._x_float = state.get("_x_float", 0.0)
        if hasattr(self._camera, "_y_float"):
            self._camera._y_float = state.get("_y_float", 0.0)
        if hasattr(self._camera, "_time"):
            self._camera._time = state.get("_time", 0.0)

        # Restore radial camera state if present
        if hasattr(self._camera, "_r_float"):
            self._camera._r_float = state.get("_r_float", 0.0)
        if hasattr(self._camera, "_theta_float"):
            self._camera._theta_float = state.get("_theta_float", 0.0)
        if hasattr(self._camera, "_radial_input"):
            self._camera._radial_input = state.get("_radial_input", 0.0)

    @property
    def stage_type(self) -> str:
        return "camera"

    @property
    def capabilities(self) -> set[str]:
        return {"camera"}

    @property
    def dependencies(self) -> set[str]:
        return {"render.output", "camera.state"}

    @property
    def inlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    @property
    def outlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Apply camera transformation to items."""
        if data is None:
            return data

        # Camera state is updated by CameraClockStage;
        # we only apply the viewport transformation here.
        if hasattr(self._camera, "apply"):
            viewport_width = ctx.params.viewport_width if ctx.params else 80
            viewport_height = ctx.params.viewport_height if ctx.params else 24

            # Use filtered camera position if available (from ViewportFilterStage).
            # This handles the case where the buffer has been filtered and starts at row 0.
            filtered_camera_y = ctx.get("camera_y", self._camera.y)

            # Temporarily adjust camera position for filtering
            original_y = self._camera.y
            self._camera.y = filtered_camera_y

            try:
                result = self._camera.apply(data, viewport_width, viewport_height)
            finally:
                # Restore original camera position
                self._camera.y = original_y

            return result
        return data
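The `save_state`/`restore_state` pair above survives a pipeline rebuild by snapshotting plain attributes into a dict and reading them back with safe defaults. A self-contained sketch of that round trip, using a stand-in object rather than the engine's real `Camera`:

```python
from typing import Any


class ToyCamera:
    """Illustrative stand-in for the engine's Camera."""

    def __init__(self):
        self.x = 0
        self.y = 0
        self.speed = 1.0


def save_state(cam) -> dict[str, Any]:
    # Snapshot plain attributes into a serializable dict
    return {"x": cam.x, "y": cam.y, "speed": cam.speed}


def restore_state(cam, state: dict[str, Any]) -> None:
    # Missing keys fall back to safe defaults, as in CameraStage.restore_state
    cam.x = state.get("x", 0)
    cam.y = state.get("y", 0)
    cam.speed = state.get("speed", 1.0)


old = ToyCamera()
old.x, old.y, old.speed = 12, 340, 2.5
snapshot = save_state(old)

new = ToyCamera()  # fresh instance after a rebuild
restore_state(new, snapshot)
# → new now has x=12, y=340, speed=2.5
```

The `.get(key, default)` pattern is what keeps restoration tolerant of snapshots taken by older versions of the stage.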
engine/pipeline/adapters/data_source.py (new file, 143 lines)
@@ -0,0 +1,143 @@
"""
Stage adapters - Bridge existing components to the Stage interface.

This module provides adapters that wrap existing components
(DataSource) as Stage implementations.
"""

from typing import Any

from engine.data_sources import SourceItem
from engine.pipeline.core import DataType, PipelineContext, Stage


class DataSourceStage(Stage):
    """Adapter wrapping DataSource as a Stage."""

    def __init__(self, data_source, name: str = "headlines"):
        self._source = data_source
        self.name = name
        self.category = "source"
        self.optional = False

    @property
    def capabilities(self) -> set[str]:
        return {f"source.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return set()

    @property
    def inlet_types(self) -> set:
        return {DataType.NONE}  # Sources don't take input

    @property
    def outlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Fetch data from source."""
        if hasattr(self._source, "get_items"):
            return self._source.get_items()
        return data


class PassthroughStage(Stage):
    """Simple stage that passes data through unchanged.

    Used for sources that already provide the data in the correct format
    (e.g., pipeline introspection that outputs text directly).
    """

    def __init__(self, name: str = "passthrough"):
        self.name = name
        self.category = "render"
        self.optional = True

    @property
    def stage_type(self) -> str:
        return "render"

    @property
    def capabilities(self) -> set[str]:
        return {"render.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"source"}

    @property
    def inlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    @property
    def outlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Pass data through unchanged."""
        return data


class SourceItemsToBufferStage(Stage):
    """Convert SourceItem objects to a text buffer.

    Takes a list of SourceItem objects and extracts their content,
    splitting on newlines to create a proper text buffer for display.
    """

    def __init__(self, name: str = "items-to-buffer"):
        self.name = name
        self.category = "render"
        self.optional = True

    @property
    def stage_type(self) -> str:
        return "render"

    @property
    def capabilities(self) -> set[str]:
        return {"render.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"source"}

    @property
    def inlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    @property
    def outlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Convert a SourceItem list to a text buffer."""
        if data is None:
            return []

        # If already a list of strings, return as-is
        if isinstance(data, list) and data and isinstance(data[0], str):
            return data

        # If it's a list of SourceItem, extract content
        if isinstance(data, list):
            result = []
            for item in data:
                if isinstance(item, SourceItem):
                    # Split content by newline to get individual lines
                    lines = item.content.split("\n")
                    result.extend(lines)
                elif hasattr(item, "content"):  # Has content attribute
                    lines = str(item.content).split("\n")
                    result.extend(lines)
                else:
                    result.append(str(item))
            return result

        # Single item
        if isinstance(data, SourceItem):
            return data.content.split("\n")

        return [str(data)]
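The conversion in `SourceItemsToBufferStage.process` is essentially "flatten each item's multi-line content into buffer rows". A self-contained sketch of the same normalization, using a plain dataclass in place of the engine's `SourceItem`:

```python
from dataclasses import dataclass


@dataclass
class Item:
    content: str  # stand-in for the engine's SourceItem


def items_to_buffer(data):
    """Normalize None / strings / content-bearing items into a list of lines."""
    if data is None:
        return []
    # Already a text buffer: pass through
    if isinstance(data, list) and data and isinstance(data[0], str):
        return data
    items = data if isinstance(data, list) else [data]
    buffer = []
    for item in items:
        text = item.content if hasattr(item, "content") else str(item)
        buffer.extend(str(text).split("\n"))  # one buffer row per line
    return buffer


buf = items_to_buffer([Item("headline\nsubtitle"), Item("second")])
# → ["headline", "subtitle", "second"]
```

Accepting strings, single items, and lists in one place keeps every upstream source free to emit whichever shape is natural for it.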
engine/pipeline/adapters/display.py (new file, 108 lines)
@@ -0,0 +1,108 @@
"""Adapter wrapping Display as a Stage."""

from typing import Any

from engine.pipeline.core import PipelineContext, Stage


class DisplayStage(Stage):
    """Adapter wrapping Display as a Stage."""

    def __init__(self, display, name: str = "terminal", positioning: str = "mixed"):
        self._display = display
        self.name = name
        self.category = "display"
        self.optional = False
        self._initialized = False
        self._init_width = 80
        self._init_height = 24
        self._positioning = positioning

    def save_state(self) -> dict[str, Any]:
        """Save display state for restoration after pipeline rebuild.

        Returns:
            Dictionary containing display state that can be restored
        """
        return {
            "initialized": self._initialized,
            "init_width": self._init_width,
            "init_height": self._init_height,
            "width": getattr(self._display, "width", 80),
            "height": getattr(self._display, "height", 24),
        }

    def restore_state(self, state: dict[str, Any]) -> None:
        """Restore display state from saved state.

        Args:
            state: Dictionary containing display state from save_state()
        """
        self._initialized = state.get("initialized", False)
        self._init_width = state.get("init_width", 80)
        self._init_height = state.get("init_height", 24)

        # Restore display dimensions if the display supports it
        if hasattr(self._display, "width"):
            self._display.width = state.get("width", 80)
        if hasattr(self._display, "height"):
            self._display.height = state.get("height", 24)

    @property
    def capabilities(self) -> set[str]:
        return {"display.output"}

    @property
    def dependencies(self) -> set[str]:
        # Display needs rendered content and camera transformation
        return {"render.output", "camera"}

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}  # Display consumes rendered text

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.NONE}  # Display is a terminal stage (no output)

    def init(self, ctx: PipelineContext) -> bool:
        w = ctx.params.viewport_width if ctx.params else 80
        h = ctx.params.viewport_height if ctx.params else 24

        # Try to reuse the display if already initialized
        reuse = self._initialized
        result = self._display.init(w, h, reuse=reuse)

        # Update initialization state
        if result is not False:
            self._initialized = True
            self._init_width = w
            self._init_height = h

        return result is not False

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Output data to display."""
        if data is not None:
            # Check if a positioning mode is specified in context params
            positioning = self._positioning
            if ctx and ctx.params and hasattr(ctx.params, "positioning"):
                positioning = ctx.params.positioning

            # Pass positioning to the display if supported
            if (
                hasattr(self._display, "show")
                and "positioning" in self._display.show.__code__.co_varnames
            ):
                self._display.show(data, positioning=positioning)
            else:
                # Fallback for displays that don't support the positioning parameter
                self._display.show(data)
        return data

    def cleanup(self) -> None:
        self._display.cleanup()
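`DisplayStage.process` feature-detects the `positioning` keyword by looking at `show.__code__.co_varnames` before passing it. A self-contained sketch of that pattern with toy display classes (illustrative names, not the engine's real backends); note that `inspect.signature` is a more robust alternative when `show` might be decorated or wrapped:

```python
class NewDisplay:
    def show(self, data, positioning="mixed"):
        return f"{positioning}:{data}"


class OldDisplay:
    def show(self, data):  # no positioning support
        return f"plain:{data}"


def show_with_optional_positioning(display, data, positioning="absolute"):
    # Pass the kwarg only if the method's code object names it;
    # __code__.co_varnames lists the function's parameters and locals.
    if "positioning" in display.show.__code__.co_varnames:
        return display.show(data, positioning=positioning)
    return display.show(data)


a = show_with_optional_positioning(NewDisplay(), "frame")  # "absolute:frame"
b = show_with_optional_positioning(OldDisplay(), "frame")  # "plain:frame"
```

This keeps older backend implementations working unmodified while newer ones can opt in to the extra parameter.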
engine/pipeline/adapters/effect_plugin.py (new file, 124 lines)
@@ -0,0 +1,124 @@
"""Adapter wrapping EffectPlugin as a Stage."""

from typing import Any

from engine.pipeline.core import PipelineContext, Stage


class EffectPluginStage(Stage):
    """Adapter wrapping EffectPlugin as a Stage.

    Supports capability-based dependencies through the dependencies parameter.
    """

    def __init__(
        self,
        effect_plugin,
        name: str = "effect",
        dependencies: set[str] | None = None,
    ):
        self._effect = effect_plugin
        self.name = name
        self.category = "effect"
        self.optional = False
        self._dependencies = dependencies or set()

    @property
    def stage_type(self) -> str:
        """Return stage_type based on effect name.

        Overlay effects have stage_type "overlay".
        """
        if self.is_overlay:
            return "overlay"
        return self.category

    @property
    def render_order(self) -> int:
        """Return render_order based on effect type.

        Overlay effects have a high render_order so they appear on top.
        """
        if self.is_overlay:
            return 100  # High order for overlays
        return 0

    @property
    def is_overlay(self) -> bool:
        """Return True for overlay effects.

        Overlay effects compose on top of the buffer
        rather than transforming it for the next stage.
        """
        # Only honour an is_overlay attribute that is explicitly True
        # (not just any truthy value from a mock object)
        if hasattr(self._effect, "is_overlay"):
            if self._effect.is_overlay is True:
                return True
        return self.name == "hud"

    @property
    def capabilities(self) -> set[str]:
        return {f"effect.{self.name}"}

    @property
    def dependencies(self) -> set[str]:
        return self._dependencies

    @property
    def inlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    @property
    def outlet_types(self) -> set:
        from engine.pipeline.core import DataType

        return {DataType.TEXT_BUFFER}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Process data through the effect."""
        if data is None:
            return None
        from engine.effects.types import EffectContext, apply_param_bindings

        w = ctx.params.viewport_width if ctx.params else 80
        h = ctx.params.viewport_height if ctx.params else 24
        frame = ctx.params.frame_number if ctx.params else 0

        effect_ctx = EffectContext(
            terminal_width=w,
            terminal_height=h,
            scroll_cam=0,
            ticker_height=h,
            camera_x=0,
            mic_excess=0.0,
            grad_offset=(frame * 0.01) % 1.0,
            frame_number=frame,
            has_message=False,
            items=ctx.get("items", []),
        )

        # Copy sensor state from PipelineContext to EffectContext
        for key, value in ctx.state.items():
            if key.startswith("sensor."):
                effect_ctx.set_state(key, value)

        # Copy metrics from PipelineContext to EffectContext
        if "metrics" in ctx.state:
            effect_ctx.set_state("metrics", ctx.state["metrics"])

        # Copy pipeline_order from PipelineContext services to EffectContext state
        pipeline_order = ctx.get("pipeline_order")
        if pipeline_order:
            effect_ctx.set_state("pipeline_order", pipeline_order)

        # Apply sensor param bindings if the effect has them
        if hasattr(self._effect, "param_bindings") and self._effect.param_bindings:
            bound_config = apply_param_bindings(self._effect, effect_ctx)
            self._effect.configure(bound_config)

        return self._effect.process(data, effect_ctx)
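The `is_overlay` / `render_order` pair drives stage ordering: overlays (order 100) sort after regular effects (order 0). The toy classes below (`MiniStage` is a hypothetical stand-in, not the engine's `Stage`) sketch how a stable sort on `render_order` keeps overlays last:

```python
class MiniStage:
    """Hypothetical stand-in mirroring EffectPluginStage's overlay logic."""

    def __init__(self, name, is_overlay=False):
        self.name = name
        self._overlay = is_overlay

    @property
    def is_overlay(self):
        # Explicit flag, or the "hud" name fallback used by the adapter
        return self._overlay or self.name == "hud"

    @property
    def render_order(self):
        return 100 if self.is_overlay else 0


stages = [MiniStage("hud"), MiniStage("matrix"), MiniStage("glow")]
# sorted() is stable: equal-order effects keep insertion order, overlays go last
ordered = sorted(stages, key=lambda s: s.render_order)
```

Because Python's sort is stable, non-overlay effects retain their configured relative order while `"hud"` always composites on top.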
38
engine/pipeline/adapters/factory.py
Normal file
@@ -0,0 +1,38 @@
"""Factory functions for creating stage instances."""

from engine.pipeline.adapters.camera import CameraStage
from engine.pipeline.adapters.data_source import DataSourceStage
from engine.pipeline.adapters.display import DisplayStage
from engine.pipeline.adapters.effect_plugin import EffectPluginStage
from engine.pipeline.adapters.transform import FontStage


def create_stage_from_display(display, name: str = "terminal") -> DisplayStage:
    """Create a DisplayStage from a display instance."""
    return DisplayStage(display, name=name)


def create_stage_from_effect(effect_plugin, name: str) -> EffectPluginStage:
    """Create an EffectPluginStage from an effect plugin."""
    return EffectPluginStage(effect_plugin, name=name)


def create_stage_from_source(data_source, name: str = "headlines") -> DataSourceStage:
    """Create a DataSourceStage from a data source."""
    return DataSourceStage(data_source, name=name)


def create_stage_from_camera(camera, name: str = "vertical") -> CameraStage:
    """Create a CameraStage from a camera instance."""
    return CameraStage(camera, name=name)


def create_stage_from_font(
    font_path: str | None = None,
    font_size: int | None = None,
    font_ref: str | None = "default",
    name: str = "font",
) -> FontStage:
    """Create a FontStage with the specified font configuration."""
    # FontStage currently doesn't use these parameters but keeps them for compatibility
    return FontStage(name=name)
165
engine/pipeline/adapters/frame_capture.py
Normal file
@@ -0,0 +1,165 @@
"""
Frame Capture Stage Adapter

Wraps pipeline stages to capture frames for animation report generation.
"""

from typing import Any

from engine.display.backends.animation_report import AnimationReportDisplay
from engine.pipeline.core import PipelineContext, Stage


class FrameCaptureStage(Stage):
    """
    Wrapper stage that captures frames before and after a wrapped stage.

    This allows generating animation reports showing how each stage
    transforms the data.
    """

    def __init__(
        self,
        wrapped_stage: Stage,
        display: AnimationReportDisplay,
        name: str | None = None,
    ):
        """
        Initialize frame capture stage.

        Args:
            wrapped_stage: The stage to wrap and capture frames from
            display: The animation report display to send frames to
            name: Optional name for this capture stage
        """
        self._wrapped_stage = wrapped_stage
        self._display = display
        self.name = name or f"capture_{wrapped_stage.name}"
        self.category = wrapped_stage.category
        self.optional = wrapped_stage.optional

        # Capture state
        self._captured_input = False
        self._captured_output = False

    @property
    def stage_type(self) -> str:
        return self._wrapped_stage.stage_type

    @property
    def capabilities(self) -> set[str]:
        return self._wrapped_stage.capabilities

    @property
    def dependencies(self) -> set[str]:
        return self._wrapped_stage.dependencies

    @property
    def inlet_types(self) -> set:
        return self._wrapped_stage.inlet_types

    @property
    def outlet_types(self) -> set:
        return self._wrapped_stage.outlet_types

    def init(self, ctx: PipelineContext) -> bool:
        """Initialize the wrapped stage."""
        return self._wrapped_stage.init(ctx)

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """
        Process data through the wrapped stage and capture frames.

        Args:
            data: Input data (typically a text buffer)
            ctx: Pipeline context

        Returns:
            Output data from the wrapped stage
        """
        # Capture input frame (before stage processing)
        if isinstance(data, list) and all(isinstance(line, str) for line in data):
            self._display.start_stage(f"{self._wrapped_stage.name}_input")
            self._display.show(data)
            self._captured_input = True

        # Process through the wrapped stage
        result = self._wrapped_stage.process(data, ctx)

        # Capture output frame (after stage processing)
        if isinstance(result, list) and all(isinstance(line, str) for line in result):
            self._display.start_stage(f"{self._wrapped_stage.name}_output")
            self._display.show(result)
            self._captured_output = True

        return result

    def cleanup(self) -> None:
        """Cleanup the wrapped stage."""
        self._wrapped_stage.cleanup()


class FrameCaptureController:
    """
    Controller for managing frame capture across the pipeline.

    This class provides an easy way to enable frame capture for
    specific stages or the entire pipeline.
    """

    def __init__(self, display: AnimationReportDisplay):
        """
        Initialize frame capture controller.

        Args:
            display: The animation report display to use for capture
        """
        self._display = display
        self._captured_stages: list[FrameCaptureStage] = []

    def wrap_stage(self, stage: Stage, name: str | None = None) -> FrameCaptureStage:
        """
        Wrap a stage with frame capture.

        Args:
            stage: The stage to wrap
            name: Optional name for the capture stage

        Returns:
            Wrapped stage that captures frames
        """
        capture_stage = FrameCaptureStage(stage, self._display, name)
        self._captured_stages.append(capture_stage)
        return capture_stage

    def wrap_stages(self, stages: dict[str, Stage]) -> dict[str, Stage]:
        """
        Wrap multiple stages with frame capture.

        Args:
            stages: Dictionary of stage names to stages

        Returns:
            Dictionary of stage names to wrapped stages
        """
        wrapped = {}
        for name, stage in stages.items():
            wrapped[name] = self.wrap_stage(stage, name)
        return wrapped

    def get_captured_stages(self) -> list[FrameCaptureStage]:
        """Get list of all captured stages."""
        return self._captured_stages

    def generate_report(self, title: str = "Pipeline Animation Report") -> str:
        """
        Generate the animation report.

        Args:
            title: Title for the report

        Returns:
            Path to the generated HTML file
        """
        report_path = self._display.generate_report(title)
        return str(report_path)
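The capture pattern is plain delegation: record the buffer, call the wrapped stage, record the result. The self-contained sketch below (with hypothetical `RecordingDisplay` and `UpperStage` stand-ins for the real `AnimationReportDisplay` and `Stage`) shows the before/after frame sequence the wrapper produces:

```python
class RecordingDisplay:
    """Hypothetical stand-in for AnimationReportDisplay."""

    def __init__(self):
        self.frames = []

    def start_stage(self, label):
        self.frames.append(("stage", label))

    def show(self, buf):
        self.frames.append(("frame", list(buf)))


class UpperStage:
    """Hypothetical stage that upper-cases every buffer line."""

    name = "upper"

    def process(self, data, ctx=None):
        return [line.upper() for line in data]


def capture(stage, display, data):
    # Record the input frame, run the stage, record the output frame
    display.start_stage(f"{stage.name}_input")
    display.show(data)
    result = stage.process(data)
    display.start_stage(f"{stage.name}_output")
    display.show(result)
    return result


disp = RecordingDisplay()
out = capture(UpperStage(), disp, ["ab", "cd"])
```

The recorded sequence alternates stage markers and frames, which is exactly the structure a report generator can replay stage by stage.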
185
engine/pipeline/adapters/message_overlay.py
Normal file
@@ -0,0 +1,185 @@
"""
Message overlay stage - Renders ntfy messages as an overlay on the buffer.

This stage provides message overlay capability for displaying ntfy.sh messages
as a centered panel with a pink/magenta gradient, matching upstream/main aesthetics.
"""

import re
import time
from dataclasses import dataclass
from datetime import datetime

from engine import config
from engine.effects.legacy import vis_trunc
from engine.pipeline.core import DataType, PipelineContext, Stage
from engine.render.blocks import big_wrap
from engine.render.gradient import msg_gradient


@dataclass
class MessageOverlayConfig:
    """Configuration for MessageOverlayStage."""

    enabled: bool = True
    display_secs: int = 30  # How long to display messages
    topic_url: str | None = None  # Ntfy topic URL (None = use config default)


class MessageOverlayStage(Stage):
    """Stage that renders the ntfy message overlay on the buffer.

    Provides:
    - message.overlay capability (optional)
    - Renders a centered panel with a pink/magenta gradient
    - Shows title, body, timestamp, and remaining time
    """

    name = "message_overlay"
    category = "overlay"

    def __init__(
        self, config: MessageOverlayConfig | None = None, name: str = "message_overlay"
    ):
        self.config = config or MessageOverlayConfig()
        self.name = name  # honour the name parameter (previously ignored)
        self._ntfy_poller = None
        self._msg_cache = (None, None)  # (cache_key, rendered_rows)

    @property
    def capabilities(self) -> set[str]:
        """Provides message overlay capability."""
        return {"message.overlay"} if self.config.enabled else set()

    @property
    def dependencies(self) -> set[str]:
        """Needs rendered buffer and camera transformation to overlay onto."""
        return {"render.output", "camera"}

    @property
    def inlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    @property
    def outlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    def init(self, ctx: PipelineContext) -> bool:
        """Initialize the ntfy poller if a topic URL is configured."""
        if not self.config.enabled:
            return True

        # Get or create ntfy poller
        topic_url = self.config.topic_url or config.NTFY_TOPIC
        if topic_url:
            from engine.ntfy import NtfyPoller

            self._ntfy_poller = NtfyPoller(
                topic_url=topic_url,
                reconnect_delay=getattr(config, "NTFY_RECONNECT_DELAY", 5),
                display_secs=self.config.display_secs,
            )
            self._ntfy_poller.start()
            ctx.set("ntfy_poller", self._ntfy_poller)

        return True

    def process(self, data: list[str], ctx: PipelineContext) -> list[str]:
        """Render the message overlay on the buffer."""
        if not self.config.enabled or not data:
            return data

        # Get the active message from the poller
        msg = None
        if self._ntfy_poller:
            msg = self._ntfy_poller.get_active_message()

        if msg is None:
            return data

        # Render overlay
        w = ctx.terminal_width if hasattr(ctx, "terminal_width") else 80
        h = ctx.terminal_height if hasattr(ctx, "terminal_height") else 24

        overlay, self._msg_cache = self._render_message_overlay(
            msg, w, h, self._msg_cache
        )

        # Composite overlay onto buffer
        result = list(data)
        for line in overlay:
            # Overlay lines carry their own ANSI cursor positioning, so just append
            result.append(line)

        return result

    def _render_message_overlay(
        self,
        msg: tuple[str, str, float] | None,
        w: int,
        h: int,
        msg_cache: tuple,
    ) -> tuple[list[str], tuple]:
        """Render the ntfy message overlay.

        Args:
            msg: (title, body, timestamp) or None
            w: terminal width
            h: terminal height
            msg_cache: (cache_key, rendered_rows) for caching

        Returns:
            (list of ANSI strings, updated cache)
        """
        overlay = []
        if msg is None:
            return overlay, msg_cache

        m_title, m_body, m_ts = msg
        display_text = m_body or m_title or "(empty)"
        display_text = re.sub(r"\s+", " ", display_text.upper())

        cache_key = (display_text, w)
        if msg_cache[0] != cache_key:
            msg_rows = big_wrap(display_text, w - 4)
            msg_cache = (cache_key, msg_rows)
        else:
            msg_rows = msg_cache[1]

        msg_rows = msg_gradient(msg_rows, (time.monotonic() * config.GRAD_SPEED) % 1.0)

        elapsed_s = int(time.monotonic() - m_ts)
        remaining = max(0, self.config.display_secs - elapsed_s)
        ts_str = datetime.now().strftime("%H:%M:%S")
        panel_h = len(msg_rows) + 2
        panel_top = max(0, (h - panel_h) // 2)

        row_idx = 0
        for mr in msg_rows:
            ln = vis_trunc(mr, w)
            overlay.append(f"\033[{panel_top + row_idx + 1};1H {ln}\033[0m\033[K")
            row_idx += 1

        meta_parts = []
        if m_title and m_title != m_body:
            meta_parts.append(m_title)
        meta_parts.append(f"ntfy \u00b7 {ts_str} \u00b7 {remaining}s")
        meta = (
            " " + " \u00b7 ".join(meta_parts)
            if len(meta_parts) > 1
            else " " + meta_parts[0]
        )
        overlay.append(
            f"\033[{panel_top + row_idx + 1};1H\033[38;5;245m{meta}\033[0m\033[K"
        )
        row_idx += 1

        bar = "\u2500" * (w - 4)
        overlay.append(
            f"\033[{panel_top + row_idx + 1};1H \033[2;38;5;37m{bar}\033[0m\033[K"
        )

        return overlay, msg_cache

    def cleanup(self) -> None:
        """Cleanup resources."""
        pass
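The `_msg_cache` tuple keys the expensive `big_wrap` call on `(display_text, w)`, so re-wrapping only happens when the text or terminal width changes. The self-contained sketch below demonstrates the same pattern with a toy character-based wrap (the real stage uses `big_wrap`):

```python
def wrap_cached(text, width, cache):
    """Re-wrap only when the (text, width) cache key changes.

    cache is (cache_key, rendered_rows), mirroring _msg_cache.
    """
    key = (text, width)
    if cache[0] != key:
        # Toy wrap: slice into fixed-width rows (stand-in for big_wrap)
        rows = [text[i:i + width] for i in range(0, len(text), width)]
        cache = (key, rows)
    return cache[1], cache


cache = (None, None)
rows1, cache = wrap_cached("HELLO WORLD", 5, cache)
rows2, cache = wrap_cached("HELLO WORLD", 5, cache)  # cache hit: same rows object
rows3, cache = wrap_cached("HELLO WORLD", 4, cache)  # width change invalidates
```

A cache hit returns the identical rows object, so the per-frame gradient pass in `process` is the only repeated work while a message stays on screen.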
185
engine/pipeline/adapters/positioning.py
Normal file
@@ -0,0 +1,185 @@
"""PositionStage - Configurable positioning mode for terminal rendering.

This module provides positioning stages that allow choosing between
different ANSI positioning approaches:
- ABSOLUTE: Use cursor positioning codes (\\033[row;colH) for all lines
- RELATIVE: Use newlines for all lines (the PositionStage constructor default)
- MIXED: Base content uses newlines, effects use cursor positioning
"""

from enum import Enum
from typing import Any

from engine.pipeline.core import DataType, PipelineContext, Stage


class PositioningMode(Enum):
    """Positioning mode for terminal rendering."""

    ABSOLUTE = "absolute"  # All lines have cursor positioning codes
    RELATIVE = "relative"  # Lines use newlines (no cursor codes)
    MIXED = "mixed"  # Newlines for base content, cursor codes for overlays


class PositionStage(Stage):
    """Applies a positioning mode to the buffer before display.

    This stage allows configuring how lines are positioned in the terminal:
    - ABSOLUTE: Each line gets a \\033[row;colH prefix (precise control)
    - RELATIVE: Lines are joined with \\n (natural flow)
    - MIXED: Leaves the buffer as-is (effects add their own positioning)
    """

    def __init__(
        self, mode: PositioningMode = PositioningMode.RELATIVE, name: str = "position"
    ):
        self.mode = mode
        self.name = name
        self.category = "position"
        self._mode_str = mode.value

    def save_state(self) -> dict[str, Any]:
        """Save positioning mode for restoration."""
        return {"mode": self.mode.value}

    def restore_state(self, state: dict[str, Any]) -> None:
        """Restore positioning mode from saved state."""
        mode_value = state.get("mode", "relative")
        self.mode = PositioningMode(mode_value)

    @property
    def capabilities(self) -> set[str]:
        return {"position.output"}

    @property
    def dependencies(self) -> set[str]:
        # Position stage typically runs after render but before effects;
        # effects may add their own positioning codes
        return {"render.output"}

    @property
    def inlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    @property
    def outlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    def init(self, ctx: PipelineContext) -> bool:
        """Initialize the positioning stage."""
        return True

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Apply the positioning mode to the buffer.

        Args:
            data: List of strings (buffer lines)
            ctx: Pipeline context

        Returns:
            Buffer with the positioning mode applied
        """
        if data is None:
            return data

        if not isinstance(data, list):
            return data

        if self.mode == PositioningMode.ABSOLUTE:
            return self._to_absolute(data, ctx)
        elif self.mode == PositioningMode.RELATIVE:
            return self._to_relative(data, ctx)
        else:  # MIXED
            return data  # No transformation

    def _to_absolute(self, data: list[str], ctx: PipelineContext) -> list[str]:
        """Convert buffer to absolute positioning (all lines have cursor codes).

        This mode prefixes each line with \\033[row;colH to move the cursor
        to the exact position before writing the line.

        Args:
            data: List of buffer lines
            ctx: Pipeline context (provides terminal dimensions)

        Returns:
            Buffer with cursor positioning codes for each line
        """
        result = []
        viewport_height = ctx.params.viewport_height if ctx.params else 24

        for i, line in enumerate(data):
            if i >= viewport_height:
                break  # Don't exceed the viewport

            # Check if the line already has cursor positioning
            if "\033[" in line and "H" in line:
                # Already positioned - leave as-is
                result.append(line)
            else:
                # Add cursor positioning for this line (rows are 1-indexed)
                result.append(f"\033[{i + 1};1H{line}")

        return result

    def _to_relative(self, data: list[str], ctx: PipelineContext) -> list[str]:
        """Convert buffer to relative positioning (use newlines).

        Note: Effects like HUD add their own cursor positioning codes,
        so we can't simply strip all of them. We rely on the terminal
        display to join lines with newlines.

        Args:
            data: List of buffer lines
            ctx: Pipeline context (unused)

        Returns:
            Buffer unchanged (cursor positioning is left to the overlays)
        """
        # In relative mode the buffer passes through unchanged: the terminal
        # display joins lines with newlines, and effects that need absolute
        # positioning add their own cursor codes. Stripping inline cursor
        # codes here would break those overlays.
        return list(data)

    def cleanup(self) -> None:
        """Clean up positioning stage."""
        pass


# Convenience function to create a positioning stage
def create_position_stage(
    mode: str = "relative", name: str = "position"
) -> PositionStage:
    """Create a positioning stage with the specified mode.

    Args:
        mode: Positioning mode ("absolute", "relative", or "mixed")
        name: Name for the stage

    Returns:
        PositionStage instance
    """
    try:
        positioning_mode = PositioningMode(mode)
    except ValueError:
        positioning_mode = PositioningMode.RELATIVE

    return PositionStage(mode=positioning_mode, name=name)
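The absolute-mode transform can be illustrated standalone. The function below replicates `_to_absolute`'s line-by-line logic (without the `Stage`/`PipelineContext` machinery): prefix each line with a 1-indexed ANSI cursor move, skip lines that already carry positioning, and clip to the viewport:

```python
def to_absolute(lines, viewport_height=24):
    """Mirror of PositionStage._to_absolute's per-line logic (illustrative)."""
    out = []
    for i, line in enumerate(lines):
        if i >= viewport_height:
            break  # don't exceed the viewport
        if "\033[" in line and "H" in line:
            out.append(line)  # already positioned, e.g. an overlay line
        else:
            out.append(f"\033[{i + 1};1H{line}")  # rows are 1-indexed
    return out


buf = ["hello", "\033[5;1Hoverlay", "world"]
abs_buf = to_absolute(buf)
```

Note that the pre-positioned overlay line keeps its own row (5) while plain lines get rows matching their buffer index.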
293
engine/pipeline/adapters/transform.py
Normal file
@@ -0,0 +1,293 @@
"""Adapters for transform stages (viewport, font, image, canvas)."""

from typing import Any

import engine.render
from engine.data_sources import SourceItem
from engine.pipeline.core import DataType, PipelineContext, Stage


def estimate_simple_height(text: str, width: int) -> int:
    """Estimate height in terminal rows using simple word wrap.

    Uses conservative estimation suitable for headlines.
    Each wrapped line is approximately 6 terminal rows (big block rendering).
    """
    words = text.split()
    if not words:
        return 6

    lines = 1
    current_len = 0
    for word in words:
        word_len = len(word)
        if current_len + word_len + 1 > width - 4:  # -4 for margins
            lines += 1
            current_len = word_len
        else:
            current_len += word_len + 1

    return lines * 6  # 6 rows per line for big block rendering


class ViewportFilterStage(Stage):
    """Filter items to viewport height based on rendered height."""

    def __init__(self, name: str = "viewport-filter"):
        self.name = name
        self.category = "render"
        self.optional = True
        self._layout: list[int] = []

    @property
    def stage_type(self) -> str:
        return "render"

    @property
    def capabilities(self) -> set[str]:
        return {"source.filtered"}

    @property
    def dependencies(self) -> set[str]:
        # Always requires camera.state for viewport filtering;
        # CameraUpdateStage provides this (auto-injected if missing)
        return {"source", "camera.state"}

    @property
    def inlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    @property
    def outlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Filter items to viewport height based on rendered height."""
        if data is None:
            return data

        if not isinstance(data, list):
            return data

        if not data:
            return []

        # Get viewport parameters from context
        viewport_height = ctx.params.viewport_height if ctx.params else 24
        viewport_width = ctx.params.viewport_width if ctx.params else 80
        camera_y = ctx.get("camera_y", 0)

        # Estimate height for each item and cache the layout
        self._layout = []
        cumulative_heights = []
        current_height = 0

        for item in data:
            title = item.content if isinstance(item, SourceItem) else str(item)
            # Use simple height estimation (not PIL-based)
            estimated_height = estimate_simple_height(title, viewport_width)
            self._layout.append(estimated_height)
            current_height += estimated_height
            cumulative_heights.append(current_height)

        # Find the visible range based on camera_y and viewport_height;
        # camera_y is the scroll offset (how many rows are scrolled up)
        start_y = camera_y
        end_y = camera_y + viewport_height

        # Find start index (first item that intersects the visible range)
        start_idx = 0
        start_item_y = 0  # Y position where the first visible item starts
        for i, total_h in enumerate(cumulative_heights):
            if total_h > start_y:
                start_idx = i
                # Calculate the Y position of the start of this item
                if i > 0:
                    start_item_y = cumulative_heights[i - 1]
                break

        # Find end index (first item that extends beyond the visible range)
        end_idx = len(data)
        for i, total_h in enumerate(cumulative_heights):
            if total_h >= end_y:
                end_idx = i + 1
                break

        # Adjust camera_y for the filtered buffer: the filtered buffer starts
        # at row 0, but the camera position needs to be relative to where
        # the first visible item starts
        filtered_camera_y = camera_y - start_item_y

        # Update context with the filtered camera position so that
        # CameraStage can correctly slice the filtered buffer
        ctx.set_state("camera_y", filtered_camera_y)
        ctx.set_state("camera_x", ctx.get("camera_x", 0))  # Keep camera_x unchanged

        # Return visible items
        return data[start_idx:end_idx]
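The visible-window index math above can be checked in isolation. The helper below replicates `ViewportFilterStage`'s cumulative-height scan on a plain list of item heights and returns `(start_idx, end_idx, filtered_camera_y)`:

```python
def visible_window(heights, camera_y, viewport_height):
    """Mirror of ViewportFilterStage's index math (illustrative)."""
    cumulative, total = [], 0
    for h in heights:
        total += h
        cumulative.append(total)

    start_y, end_y = camera_y, camera_y + viewport_height

    # First item that intersects the visible range
    start_idx, start_item_y = 0, 0
    for i, t in enumerate(cumulative):
        if t > start_y:
            start_idx = i
            start_item_y = cumulative[i - 1] if i > 0 else 0
            break

    # First item that extends beyond the visible range
    end_idx = len(heights)
    for i, t in enumerate(cumulative):
        if t >= end_y:
            end_idx = i + 1
            break

    # Camera offset relative to the start of the first visible item
    return start_idx, end_idx, camera_y - start_item_y
```

For item heights `[6, 12, 6]` with `camera_y=7` and a 10-row viewport, only the middle item is visible and the camera sits 1 row into it.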
|
||||
|
||||
class FontStage(Stage):
|
||||
"""Render items using font."""
|
||||
|
||||
def __init__(self, name: str = "font"):
|
||||
self.name = name
|
||||
self.category = "render"
|
||||
self.optional = False
|
||||
|
||||
@property
|
||||
def stage_type(self) -> str:
|
||||
return "render"
|
||||
|
||||
@property
|
||||
def capabilities(self) -> set[str]:
|
||||
return {"render.output"}
|
||||
|
||||
@property
|
||||
def stage_dependencies(self) -> set[str]:
|
||||
# Must connect to viewport_filter stage to get filtered source
|
||||
return {"viewport_filter"}
|
||||
|
||||
@property
|
||||
def dependencies(self) -> set[str]:
|
||||
# Depend on source.filtered (provided by viewport_filter)
|
||||
# This ensures we get the filtered/processed source, not raw source
|
||||
return {"source.filtered"}
|
||||
|
||||
@property
|
||||
def inlet_types(self) -> set:
|
||||
return {DataType.SOURCE_ITEMS}
|
||||
|
||||
@property
|
||||
def outlet_types(self) -> set:
|
||||
return {DataType.TEXT_BUFFER}
|
||||
|
||||
def process(self, data: Any, ctx: PipelineContext) -> Any:
|
||||
"""Render items to text buffer using font."""
|
||||
if data is None:
|
||||
return []
|
||||
|
||||
if not isinstance(data, list):
|
||||
return [str(data)]
|
||||
|
||||
import os
|
||||
|
||||
if os.environ.get("DEBUG_CAMERA"):
|
||||
print(f"FontStage: input items={len(data)}")
|
||||
|
||||
viewport_width = ctx.params.viewport_width if ctx.params else 80
|
||||
|
||||
result = []
|
||||
for item in data:
|
||||
if isinstance(item, SourceItem):
|
||||
title = item.content
|
||||
src = item.source
|
||||
ts = item.timestamp
|
||||
content_lines, _, _ = engine.render.make_block(
|
||||
title, src, ts, viewport_width
|
||||
)
|
||||
result.extend(content_lines)
|
||||
elif hasattr(item, "content"):
|
||||
title = str(item.content)
|
||||
content_lines, _, _ = engine.render.make_block(
|
||||
title, "", "", viewport_width
|
||||
)
|
||||
result.extend(content_lines)
|
||||
else:
|
||||
result.append(str(item))
|
||||
return result
|
||||
|
||||
|
||||
class ImageToTextStage(Stage):
|
||||
"""Convert image items to text."""
|
||||
|
||||
def __init__(self, name: str = "image-to-text"):
|
||||
self.name = name
|
||||
self.category = "render"
|
||||
self.optional = True
|
||||
|
||||
@property
|
||||
def stage_type(self) -> str:
|
||||
return "render"
|
||||
|
||||
@property
|
||||
def capabilities(self) -> set[str]:
|
||||
return {"render.output"}
|
||||
|
||||
@property
|
||||
def dependencies(self) -> set[str]:
|
||||
return {"source"}
|
||||
|
||||
@property
|
||||
def inlet_types(self) -> set:
|
||||
return {DataType.SOURCE_ITEMS}
|
||||
|
||||
@property
|
||||
def outlet_types(self) -> set:
|
||||
return {DataType.TEXT_BUFFER}
|
||||
|
||||
def process(self, data: Any, ctx: PipelineContext) -> Any:
|
||||
"""Convert image items to text representation."""
|
||||
if data is None:
|
||||
return []
|
||||
|
||||
if not isinstance(data, list):
|
||||
return [str(data)]
|
||||
|
||||
result = []
|
||||
for item in data:
|
||||
# Check if item is an image
|
||||
if hasattr(item, "image_path") or hasattr(item, "image_data"):
|
||||
# Placeholder: would normally render image to ASCII art
|
||||
result.append(f"[Image: {getattr(item, 'image_path', 'data')}]")
|
||||
elif isinstance(item, SourceItem):
|
||||
result.extend(item.content.split("\n"))
|
||||
else:
|
||||
result.append(str(item))
|
||||
return result
|
||||
|
||||
|
||||
class CanvasStage(Stage):
    """Render items to canvas."""

    def __init__(self, name: str = "canvas"):
        self.name = name
        self.category = "render"
        self.optional = False

    @property
    def stage_type(self) -> str:
        return "render"

    @property
    def capabilities(self) -> set[str]:
        return {"render.output"}

    @property
    def dependencies(self) -> set[str]:
        return {"source"}

    @property
    def inlet_types(self) -> set:
        return {DataType.SOURCE_ITEMS}

    @property
    def outlet_types(self) -> set:
        return {DataType.TEXT_BUFFER}

    def process(self, data: Any, ctx: PipelineContext) -> Any:
        """Render items to canvas."""
        if data is None:
            return []

        if not isinstance(data, list):
            return [str(data)]

        # Simple canvas rendering
        result = []
        for item in data:
            if isinstance(item, SourceItem):
                result.extend(item.content.split("\n"))
            else:
                result.append(str(item))
        return result
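Both render stages above follow the same SOURCE_ITEMS-in, TEXT_BUFFER-out contract: normalize any input into a flat list of text lines. A minimal, self-contained sketch of that contract, using a stand-in `SourceItem` dataclass defined here only for illustration (the engine's real class has more fields):

```python
from dataclasses import dataclass


@dataclass
class SourceItem:
    """Illustrative stand-in for the engine's SourceItem."""
    content: str


def render_to_lines(data):
    """Mimic the process() contract: normalize any input to a list of lines."""
    if data is None:
        return []
    if not isinstance(data, list):
        return [str(data)]
    result = []
    for item in data:
        if isinstance(item, SourceItem):
            # Multi-line content becomes one buffer entry per line
            result.extend(item.content.split("\n"))
        else:
            result.append(str(item))
    return result


print(render_to_lines([SourceItem("a\nb"), 42]))  # ['a', 'b', '42']
```

Because every stage emits plain `list[str]`, downstream stages (camera, display) can treat the buffer uniformly regardless of which render stage produced it.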
@@ -49,6 +49,8 @@ class Pipeline:

    Manages the execution of all stages in dependency order,
    handling initialization, processing, and cleanup.

    Supports dynamic mutation at runtime via the mutation API.
    """

    def __init__(
@@ -61,34 +63,514 @@ class Pipeline:
        self._stages: dict[str, Stage] = {}
        self._execution_order: list[str] = []
        self._initialized = False
        self._capability_map: dict[str, list[str]] = {}

        self._metrics_enabled = self.config.enable_metrics
        self._frame_metrics: list[FrameMetrics] = []
        self._max_metrics_frames = 60

        # Minimum capabilities required for pipeline to function
        # NOTE: Research later - allow presets to override these defaults
        self._minimum_capabilities: set[str] = {
            "source",
            "render.output",
            "display.output",
            "camera.state",  # Always required for viewport filtering
        }
        self._current_frame_number = 0
    def add_stage(self, name: str, stage: Stage) -> "Pipeline":
        """Add a stage to the pipeline."""
    def add_stage(self, name: str, stage: Stage, initialize: bool = True) -> "Pipeline":
        """Add a stage to the pipeline.

        Args:
            name: Unique name for the stage
            stage: Stage instance to add
            initialize: If True, initialize the stage immediately

        Returns:
            Self for method chaining
        """
        self._stages[name] = stage
        if self._initialized and initialize:
            stage.init(self.context)
        return self

    def remove_stage(self, name: str) -> None:
        """Remove a stage from the pipeline."""
        if name in self._stages:
            del self._stages[name]
    def remove_stage(self, name: str, cleanup: bool = True) -> Stage | None:
        """Remove a stage from the pipeline.

        Args:
            name: Name of the stage to remove
            cleanup: If True, call cleanup() on the removed stage

        Returns:
            The removed stage, or None if not found
        """
        stage = self._stages.pop(name, None)
        if stage and cleanup:
            try:
                stage.cleanup()
            except Exception:
                pass

        # Rebuild execution order and capability map if stage was removed
        if stage and self._initialized:
            self._rebuild()

        return stage
    def remove_stage_safe(self, name: str, cleanup: bool = True) -> Stage | None:
        """Remove a stage and rebuild execution order safely.

        This is an alias for remove_stage() that explicitly rebuilds
        the execution order after removal.

        Args:
            name: Name of the stage to remove
            cleanup: If True, call cleanup() on the removed stage

        Returns:
            The removed stage, or None if not found
        """
        return self.remove_stage(name, cleanup)

    def cleanup_stage(self, name: str) -> None:
        """Clean up a specific stage without removing it.

        This is useful for stages that need to release resources
        (like display connections) without being removed from the pipeline.

        Args:
            name: Name of the stage to clean up
        """
        stage = self._stages.get(name)
        if stage:
            try:
                stage.cleanup()
            except Exception:
                pass
    def can_hot_swap(self, name: str) -> bool:
        """Check if a stage can be safely hot-swapped.

        A stage can be hot-swapped if:
        1. It exists in the pipeline
        2. It's not required for basic pipeline function
        3. It doesn't have strict dependencies that can't be re-resolved

        Args:
            name: Name of the stage to check

        Returns:
            True if the stage can be hot-swapped, False otherwise
        """
        # Check if stage exists
        if name not in self._stages:
            return False

        # Check if stage is a minimum capability provider
        stage = self._stages[name]
        stage_caps = stage.capabilities if hasattr(stage, "capabilities") else set()
        minimum_caps = self._minimum_capabilities

        # If stage provides a minimum capability, it's more critical
        # but still potentially swappable if another stage provides the same capability
        for cap in stage_caps:
            if cap in minimum_caps:
                # Check if another stage provides this capability
                providers = self._capability_map.get(cap, [])
                # This stage is the sole provider - might be critical
                # but still allow hot-swap if pipeline is not initialized
                if len(providers) <= 1 and self._initialized:
                    return False

        return True
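The sole-provider rule above can be isolated into a small sketch. The function and argument names here are illustrative, not the engine's API; the logic mirrors the loop in `can_hot_swap`:

```python
def can_hot_swap(stage_caps, minimum_caps, capability_map, initialized):
    """A stage providing a minimum capability blocks hot-swap only when it is
    the sole provider of that capability in an already-initialized pipeline."""
    for cap in stage_caps:
        if cap in minimum_caps:
            providers = capability_map.get(cap, [])
            if len(providers) <= 1 and initialized:
                return False
    return True


cap_map = {"render.output": ["render"], "display.output": ["display", "backup"]}
print(can_hot_swap({"render.output"}, {"render.output"}, cap_map, True))    # False
print(can_hot_swap({"display.output"}, {"display.output"}, cap_map, True))  # True
```

The second call succeeds because a backup provider exists, so removing one display stage cannot strand the `display.output` capability.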
    def replace_stage(
        self, name: str, new_stage: Stage, preserve_state: bool = True
    ) -> Stage | None:
        """Replace a stage in the pipeline with a new one.

        Args:
            name: Name of the stage to replace
            new_stage: New stage instance
            preserve_state: If True, copy relevant state from old stage

        Returns:
            The old stage, or None if not found
        """
        old_stage = self._stages.get(name)
        if not old_stage:
            return None

        if preserve_state:
            self._copy_stage_state(old_stage, new_stage)

        old_stage.cleanup()
        self._stages[name] = new_stage
        new_stage.init(self.context)

        if self._initialized:
            self._rebuild()

        return old_stage
    def swap_stages(self, name1: str, name2: str) -> bool:
        """Swap two stages in the pipeline.

        Args:
            name1: First stage name
            name2: Second stage name

        Returns:
            True if successful, False if either stage not found
        """
        stage1 = self._stages.get(name1)
        stage2 = self._stages.get(name2)

        if not stage1 or not stage2:
            return False

        self._stages[name1] = stage2
        self._stages[name2] = stage1

        if self._initialized:
            self._rebuild()

        return True
    def move_stage(
        self, name: str, after: str | None = None, before: str | None = None
    ) -> bool:
        """Move a stage's position in execution order.

        Args:
            name: Stage to move
            after: Place this stage after this stage name
            before: Place this stage before this stage name

        Returns:
            True if successful, False if stage not found
        """
        if name not in self._stages:
            return False

        if not self._initialized:
            return False

        current_order = list(self._execution_order)
        if name not in current_order:
            return False

        current_order.remove(name)

        if after and after in current_order:
            idx = current_order.index(after) + 1
            current_order.insert(idx, name)
        elif before and before in current_order:
            idx = current_order.index(before)
            current_order.insert(idx, name)
        else:
            current_order.append(name)

        self._execution_order = current_order
        return True
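The list surgery in `move_stage` (remove, then reinsert relative to an anchor, falling back to the end) can be exercised on its own. A standalone sketch with an illustrative name:

```python
def move_item(order, name, after=None, before=None):
    """Mirror move_stage's reordering: remove the item, then reinsert it
    after/before the anchor, or append if no anchor matches."""
    order = list(order)  # work on a copy
    order.remove(name)
    if after and after in order:
        order.insert(order.index(after) + 1, name)
    elif before and before in order:
        order.insert(order.index(before), name)
    else:
        order.append(name)
    return order


print(move_item(["source", "render", "display"], "render", after="display"))
# ['source', 'display', 'render']
print(move_item(["source", "render", "display"], "display", before="render"))
# ['source', 'display', 'render']
```

Note that removing the item first means `after`/`before` indices are computed against the shortened list, which is why the anchor lookup happens only after the `remove`.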
    def _copy_stage_state(self, old_stage: Stage, new_stage: Stage) -> None:
        """Copy relevant state from old stage to new stage during replacement.

        Args:
            old_stage: The old stage being replaced
            new_stage: The new stage
        """
        if hasattr(old_stage, "_enabled"):
            new_stage._enabled = old_stage._enabled

        # Preserve camera state
        if hasattr(old_stage, "save_state") and hasattr(new_stage, "restore_state"):
            try:
                state = old_stage.save_state()
                new_stage.restore_state(state)
            except Exception:
                # If state preservation fails, continue without it
                pass
    def _rebuild(self) -> None:
        """Rebuild execution order after mutation or auto-injection."""
        was_initialized = self._initialized
        self._initialized = False

        self._capability_map = self._build_capability_map()
        self._execution_order = self._resolve_dependencies()

        # Note: We intentionally DO NOT validate dependencies here.
        # Mutation operations (remove/swap/move) might leave the pipeline
        # temporarily invalid (e.g., removing a stage that others depend on).
        # Validation is performed explicitly in build() or can be checked
        # manually via validate_minimum_capabilities().
        # try:
        #     self._validate_dependencies()
        #     self._validate_types()
        # except StageError:
        #     pass

        # Restore initialized state
        self._initialized = was_initialized
    def get_stage(self, name: str) -> Stage | None:
        """Get a stage by name."""
        return self._stages.get(name)

    def build(self) -> "Pipeline":
        """Build execution order based on dependencies."""
    def enable_stage(self, name: str) -> bool:
        """Enable a stage in the pipeline.

        Args:
            name: Stage name to enable

        Returns:
            True if successful, False if stage not found
        """
        stage = self._stages.get(name)
        if stage:
            stage.set_enabled(True)
            return True
        return False

    def disable_stage(self, name: str) -> bool:
        """Disable a stage in the pipeline.

        Args:
            name: Stage name to disable

        Returns:
            True if successful, False if stage not found
        """
        stage = self._stages.get(name)
        if stage:
            stage.set_enabled(False)
            return True
        return False

    def get_stage_info(self, name: str) -> dict | None:
        """Get detailed information about a stage.

        Args:
            name: Stage name

        Returns:
            Dictionary with stage information, or None if not found
        """
        stage = self._stages.get(name)
        if not stage:
            return None

        return {
            "name": name,
            "category": stage.category,
            "stage_type": stage.stage_type,
            "enabled": stage.is_enabled(),
            "optional": stage.optional,
            "capabilities": list(stage.capabilities),
            "dependencies": list(stage.dependencies),
            "inlet_types": [dt.name for dt in stage.inlet_types],
            "outlet_types": [dt.name for dt in stage.outlet_types],
            "render_order": stage.render_order,
            "is_overlay": stage.is_overlay,
        }

    def get_pipeline_info(self) -> dict:
        """Get comprehensive information about the pipeline.

        Returns:
            Dictionary with pipeline state
        """
        return {
            "stages": {name: self.get_stage_info(name) for name in self._stages},
            "execution_order": self._execution_order.copy(),
            "initialized": self._initialized,
            "stage_count": len(self._stages),
        }
    @property
    def minimum_capabilities(self) -> set[str]:
        """Get minimum capabilities required for pipeline to function."""
        return self._minimum_capabilities

    @minimum_capabilities.setter
    def minimum_capabilities(self, value: set[str]):
        """Set minimum required capabilities.

        NOTE: Research later - allow presets to override these defaults
        """
        self._minimum_capabilities = value

    def validate_minimum_capabilities(self) -> tuple[bool, list[str]]:
        """Validate that all minimum capabilities are provided.

        Returns:
            Tuple of (is_valid, missing_capabilities)
        """
        missing = []
        for cap in self._minimum_capabilities:
            if not self._find_stage_with_capability(cap):
                missing.append(cap)
        return len(missing) == 0, missing
    def ensure_minimum_capabilities(self) -> list[str]:
        """Automatically inject MVP stages if minimum capabilities are missing.

        Auto-injection is always on, but defaults are trivial to override.

        Returns:
            List of stages that were injected
        """
        from engine.camera import Camera
        from engine.data_sources.sources import EmptyDataSource
        from engine.display import DisplayRegistry
        from engine.pipeline.adapters import (
            CameraClockStage,
            CameraStage,
            DataSourceStage,
            DisplayStage,
            SourceItemsToBufferStage,
        )

        injected = []

        # Check for source capability
        if (
            not self._find_stage_with_capability("source")
            and "source" not in self._stages
        ):
            empty_source = EmptyDataSource(width=80, height=24)
            self.add_stage("source", DataSourceStage(empty_source, name="empty"))
            injected.append("source")

        # Check for camera.state capability (must be BEFORE render to accept SOURCE_ITEMS)
        camera = None
        if not self._find_stage_with_capability("camera.state"):
            # Inject static camera (trivial, no movement)
            camera = Camera.scroll(speed=0.0)
            camera.set_canvas_size(200, 200)
            if "camera_update" not in self._stages:
                self.add_stage(
                    "camera_update", CameraClockStage(camera, name="camera-clock")
                )
                injected.append("camera_update")

        # Check for render capability
        if (
            not self._find_stage_with_capability("render.output")
            and "render" not in self._stages
        ):
            self.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))
            injected.append("render")

        # Check for camera stage (must be AFTER render to accept TEXT_BUFFER)
        if camera and "camera" not in self._stages:
            self.add_stage("camera", CameraStage(camera, name="static"))
            injected.append("camera")

        # Check for display capability
        if (
            not self._find_stage_with_capability("display.output")
            and "display" not in self._stages
        ):
            display_name = self.config.display or "terminal"
            display = DisplayRegistry.create(display_name)
            if display:
                self.add_stage("display", DisplayStage(display, name=display_name))
                injected.append("display")

        # Rebuild pipeline if stages were injected
        if injected:
            self._rebuild()

        return injected
    def build(self, auto_inject: bool = True) -> "Pipeline":
        """Build execution order based on dependencies.

        Args:
            auto_inject: If True, automatically inject MVP stages for missing capabilities
        """
        self._capability_map = self._build_capability_map()
        self._execution_order = self._resolve_dependencies()

        # Validate minimum capabilities and auto-inject if needed
        if auto_inject:
            is_valid, missing = self.validate_minimum_capabilities()
            if not is_valid:
                injected = self.ensure_minimum_capabilities()
                if injected:
                    print(
                        f"  \033[38;5;226mAuto-injected stages for missing capabilities: {injected}\033[0m"
                    )
                # Rebuild after auto-injection
                self._capability_map = self._build_capability_map()
                self._execution_order = self._resolve_dependencies()

            # Re-validate after the injection attempt: even if nothing was
            # injected, or injection ran but failed to fix the problem, we
            # must confirm the minimum capabilities are now satisfied.
            is_valid, missing = self.validate_minimum_capabilities()
            if not is_valid:
                raise StageError(
                    "build",
                    f"Auto-injection failed to provide minimum capabilities: {missing}",
                )

        self._validate_dependencies()
        self._validate_types()
        self._initialized = True
        return self
    def _build_capability_map(self) -> dict[str, list[str]]:
        """Build a map of capabilities to stage names.

        Returns:
            Dict mapping capability -> list of stage names that provide it
        """
        capability_map: dict[str, list[str]] = {}
        for name, stage in self._stages.items():
            for cap in stage.capabilities:
                if cap not in capability_map:
                    capability_map[cap] = []
                capability_map[cap].append(name)
        return capability_map
    def _find_stage_with_capability(self, capability: str) -> str | None:
        """Find a stage that provides the given capability.

        Supports wildcard matching:
        - "source" matches "source.headlines" (prefix match)
        - "source.*" matches "source.headlines"
        - "source.headlines" matches exactly

        Args:
            capability: The capability to find

        Returns:
            Stage name that provides the capability, or None if not found
        """
        # Exact match
        if capability in self._capability_map:
            return self._capability_map[capability][0]

        # Prefix match (e.g., "source" -> "source.headlines")
        for cap, stages in self._capability_map.items():
            if cap.startswith(capability + "."):
                return stages[0]

        # Wildcard match (e.g., "source.*" -> "source.headlines")
        if ".*" in capability:
            prefix = capability[:-2]  # Remove ".*"
            for cap in self._capability_map:
                if cap.startswith(prefix + "."):
                    return self._capability_map[cap][0]

        return None
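The three-step lookup above (exact, then prefix, then `x.*` wildcard) works on a plain dict and can be sketched standalone. This is an illustrative reimplementation, not the engine's code; one small assumption: it checks the wildcard with `endswith(".*")` rather than the original's substring test:

```python
def find_provider(capability, capability_map):
    """Resolve a capability to a providing stage: exact match first,
    then prefix match, then 'x.*' wildcard match."""
    if capability in capability_map:
        return capability_map[capability][0]
    for cap, stages in capability_map.items():
        if cap.startswith(capability + "."):
            return stages[0]
    if capability.endswith(".*"):
        prefix = capability[:-2]  # strip ".*"
        for cap, stages in capability_map.items():
            if cap.startswith(prefix + "."):
                return stages[0]
    return None


cmap = {"source.headlines": ["headlines"]}
print(find_provider("source.headlines", cmap))  # 'headlines' (exact)
print(find_provider("source", cmap))            # 'headlines' (prefix)
print(find_provider("source.*", cmap))          # 'headlines' (wildcard)
print(find_provider("camera.state", cmap))      # None
```

Ordering matters: exact matches win over prefix matches, so a stage registered under the bare name `"source"` would shadow any `"source.xyz"` providers.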
    def _resolve_dependencies(self) -> list[str]:
        """Resolve stage execution order using topological sort."""
        """Resolve stage execution order using topological sort with capability matching."""
        ordered = []
        visited = set()
        temp_mark = set()
@@ -102,10 +584,23 @@ class Pipeline:
            temp_mark.add(name)
            stage = self._stages.get(name)
            if stage:
                # Handle capability-based dependencies
                for dep in stage.dependencies:
                    dep_stage = self._stages.get(dep)
                    if dep_stage:
                        visit(dep)
                    # Find a stage that provides this capability
                    dep_stage_name = self._find_stage_with_capability(dep)
                    if dep_stage_name:
                        visit(dep_stage_name)

                # Handle direct stage dependencies
                for stage_dep in stage.stage_dependencies:
                    if stage_dep in self._stages:
                        visit(stage_dep)
                    else:
                        # Stage dependency not found - this is an error
                        raise StageError(
                            name,
                            f"Missing stage dependency: '{stage_dep}' not found in pipeline",
                        )

            temp_mark.remove(name)
            visited.add(name)
@@ -117,6 +612,79 @@ class Pipeline:

        return ordered
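The `visit()` helper in the hunk above is a depth-first topological sort: a `temp_mark` set catches cycles, `visited` skips finished nodes, and nodes are appended after their dependencies. A self-contained sketch of the same idea, without the capability indirection:

```python
def topo_sort(deps):
    """deps: name -> set of names that must run first. Returns a valid order.
    Raises ValueError on a dependency cycle."""
    ordered, visited, temp = [], set(), set()

    def visit(name):
        if name in visited:
            return
        if name in temp:
            # Re-entering a node still on the stack means a cycle
            raise ValueError(f"Cycle involving {name!r}")
        temp.add(name)
        for dep in deps.get(name, ()):
            visit(dep)
        temp.remove(name)
        visited.add(name)
        ordered.append(name)  # post-order: dependencies land first

    for name in deps:
        visit(name)
    return ordered


order = topo_sort({"display": {"render"}, "render": {"source"}, "source": set()})
print(order)  # ['source', 'render', 'display']
```

The stdlib's `graphlib.TopologicalSorter` offers the same ordering with built-in cycle detection, which could replace a hand-rolled visit loop.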
    def _validate_dependencies(self) -> None:
        """Validate that all dependencies can be satisfied.

        Raises StageError if any dependency cannot be resolved.
        """
        missing: list[tuple[str, str]] = []  # (stage_name, capability)

        for name, stage in self._stages.items():
            for dep in stage.dependencies:
                if not self._find_stage_with_capability(dep):
                    missing.append((name, dep))

        if missing:
            msgs = [f"  - {stage} needs {cap}" for stage, cap in missing]
            raise StageError(
                "validation",
                "Missing capabilities:\n" + "\n".join(msgs),
            )
    def _validate_types(self) -> None:
        """Validate inlet/outlet types between connected stages.

        PureData-style type validation. Each stage declares its inlet_types
        (what it accepts) and outlet_types (what it produces). This method
        validates that connected stages have compatible types.

        Raises StageError if a type mismatch is detected.
        """
        from engine.pipeline.core import DataType

        errors: list[str] = []

        for i, name in enumerate(self._execution_order):
            stage = self._stages.get(name)
            if not stage:
                continue

            inlet_types = stage.inlet_types

            # Check against previous stage's outlet types
            if i > 0:
                prev_name = self._execution_order[i - 1]
                prev_stage = self._stages.get(prev_name)
                if prev_stage:
                    prev_outlets = prev_stage.outlet_types

                    # Check if any outlet type is accepted by this inlet
                    compatible = (
                        DataType.ANY in inlet_types
                        or DataType.ANY in prev_outlets
                        or bool(prev_outlets & inlet_types)
                    )

                    if not compatible:
                        errors.append(
                            f"  - {name} (inlet: {inlet_types}) "
                            f"← {prev_name} (outlet: {prev_outlets})"
                        )

            # Check display/sink stages (should accept TEXT_BUFFER)
            if (
                stage.category == "display"
                and DataType.TEXT_BUFFER not in inlet_types
                and DataType.ANY not in inlet_types
            ):
                errors.append(f"  - {name} is display but doesn't accept TEXT_BUFFER")

        if errors:
            raise StageError(
                "type_validation",
                "Type mismatch in pipeline connections:\n" + "\n".join(errors),
            )
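The compatibility rule reduces to set logic over an enum: a connection is valid if either side declares ANY or the two declared sets overlap. A runnable sketch with a trimmed-down `DataType` defined locally for illustration:

```python
from enum import Enum, auto


class DataType(Enum):
    SOURCE_ITEMS = auto()
    TEXT_BUFFER = auto()
    ANY = auto()


def compatible(prev_outlets, inlet_types):
    """A connection is valid if either side accepts ANY, or the outlet and
    inlet sets intersect."""
    return (
        DataType.ANY in inlet_types
        or DataType.ANY in prev_outlets
        or bool(prev_outlets & inlet_types)
    )


print(compatible({DataType.SOURCE_ITEMS}, {DataType.SOURCE_ITEMS}))  # True
print(compatible({DataType.TEXT_BUFFER}, {DataType.SOURCE_ITEMS}))   # False
print(compatible({DataType.TEXT_BUFFER}, {DataType.ANY}))            # True
```

Because Python sets support `&` directly, the whole check stays O(min(len(outlets), len(inlets))) per connection.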
    def initialize(self) -> bool:
        """Initialize all stages in execution order."""
        for name in self._execution_order:
@@ -126,7 +694,24 @@ class Pipeline:
        return True

    def execute(self, data: Any | None = None) -> StageResult:
        """Execute the pipeline with the given input data."""
        """Execute the pipeline with the given input data.

        Pipeline execution:
        1. Execute all non-overlay stages in dependency order
        2. Apply overlay stages on top (sorted by render_order)
        """
        import os
        import sys

        debug = os.environ.get("MAINLINE_DEBUG_DATAFLOW") == "1"

        if debug:
            print(
                f"[PIPELINE.execute] Starting with data type: {type(data).__name__ if data else 'None'}",
                file=sys.stderr,
                flush=True,
            )

        if not self._initialized:
            self.build()
@@ -141,16 +726,70 @@ class Pipeline:
        frame_start = time.perf_counter() if self._metrics_enabled else 0
        stage_timings: list[StageMetrics] = []

        # Separate overlay stages and display stage from regular stages
        overlay_stages: list[tuple[int, Stage]] = []
        display_stage: Stage | None = None
        regular_stages: list[str] = []

        for name in self._execution_order:
            stage = self._stages.get(name)
            if not stage or not stage.is_enabled():
                continue

            # Check if this is the display stage - execute last
            if stage.category == "display":
                display_stage = stage
                continue

            # Safely check is_overlay - handle MagicMock and other non-bool returns
            try:
                is_overlay = bool(getattr(stage, "is_overlay", False))
            except Exception:
                is_overlay = False

            if is_overlay:
                # Safely get render_order
                try:
                    render_order = int(getattr(stage, "render_order", 0))
                except Exception:
                    render_order = 0
                overlay_stages.append((render_order, stage))
            else:
                regular_stages.append(name)

        # Execute regular stages in dependency order (excluding display)
        for name in regular_stages:
            stage = self._stages.get(name)
            if not stage or not stage.is_enabled():
                continue

            stage_start = time.perf_counter() if self._metrics_enabled else 0

            try:
                if debug:
                    data_info = type(current_data).__name__
                    if isinstance(current_data, list):
                        data_info += f"[{len(current_data)}]"
                    print(
                        f"[STAGE.{name}] Starting with: {data_info}",
                        file=sys.stderr,
                        flush=True,
                    )

                current_data = stage.process(current_data, self.context)

                if debug:
                    data_info = type(current_data).__name__
                    if isinstance(current_data, list):
                        data_info += f"[{len(current_data)}]"
                    print(
                        f"[STAGE.{name}] Completed, output: {data_info}",
                        file=sys.stderr,
                        flush=True,
                    )
            except Exception as e:
                if debug:
                    print(f"[STAGE.{name}] ERROR: {e}", file=sys.stderr, flush=True)
                if not stage.optional:
                    return StageResult(
                        success=False,
@@ -173,6 +812,71 @@ class Pipeline:
                )
            )
        # Apply overlay stages (sorted by render_order)
        overlay_stages.sort(key=lambda x: x[0])
        for render_order, stage in overlay_stages:
            stage_start = time.perf_counter() if self._metrics_enabled else 0
            stage_name = f"[overlay]{stage.name}"

            try:
                # Overlays receive current_data but don't pass their output to next stage
                # Instead, their output is composited on top
                overlay_output = stage.process(current_data, self.context)
                # For now, we just let the overlay output pass through
                # In a more sophisticated implementation, we'd composite it
                if overlay_output is not None:
                    current_data = overlay_output
            except Exception as e:
                if not stage.optional:
                    return StageResult(
                        success=False,
                        data=current_data,
                        error=str(e),
                        stage_name=stage_name,
                    )

            if self._metrics_enabled:
                stage_duration = (time.perf_counter() - stage_start) * 1000
                chars_in = len(str(data)) if data else 0
                chars_out = len(str(current_data)) if current_data else 0
                stage_timings.append(
                    StageMetrics(
                        name=stage_name,
                        duration_ms=stage_duration,
                        chars_in=chars_in,
                        chars_out=chars_out,
                    )
                )
        # Execute display stage LAST (after overlay stages)
        # This ensures overlay effects like HUD are visible in the final output
        if display_stage:
            stage_start = time.perf_counter() if self._metrics_enabled else 0

            try:
                current_data = display_stage.process(current_data, self.context)
            except Exception as e:
                if not display_stage.optional:
                    return StageResult(
                        success=False,
                        data=current_data,
                        error=str(e),
                        stage_name=display_stage.name,
                    )

            if self._metrics_enabled:
                stage_duration = (time.perf_counter() - stage_start) * 1000
                chars_in = len(str(data)) if data else 0
                chars_out = len(str(current_data)) if current_data else 0
                stage_timings.append(
                    StageMetrics(
                        name=display_stage.name,
                        duration_ms=stage_duration,
                        chars_in=chars_in,
                        chars_out=chars_out,
                    )
                )
        if self._metrics_enabled:
            total_duration = (time.perf_counter() - frame_start) * 1000
            self._frame_metrics.append(
@@ -182,6 +886,12 @@ class Pipeline:
                    stages=stage_timings,
                )
            )

            # Store metrics in context for other stages (like HUD)
            # This makes metrics a first-class pipeline citizen
            if self.context:
                self.context.state["metrics"] = self.get_metrics_summary()

            if len(self._frame_metrics) > self._max_metrics_frames:
                self._frame_metrics.pop(0)
        self._current_frame_number += 1
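The `pop(0)` above keeps `_frame_metrics` bounded at `_max_metrics_frames`, but popping from the front of a list is O(n). A sketch of the same bounded window using a `deque` with `maxlen`, which evicts old frames in O(1) automatically (illustrative only, not the engine's code):

```python
from collections import deque

# maxlen gives the same 60-frame sliding window without manual pop(0):
# once full, each append silently drops the oldest entry on the left.
frame_metrics = deque(maxlen=60)
for frame_ms in range(100):  # simulate 100 frames of timings
    frame_metrics.append(float(frame_ms))

print(len(frame_metrics))  # 60
print(frame_metrics[0])    # 40.0 (frames 0-39 were evicted)
```

The trade-off is that `deque` indexing is O(n) in the middle, but for append-and-iterate metrics buffers that rarely matters.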
@@ -214,6 +924,22 @@ class Pipeline:
        """Get list of stage names."""
        return list(self._stages.keys())

    def get_overlay_stages(self) -> list[Stage]:
        """Get all overlay stages sorted by render_order."""
        overlays = [stage for stage in self._stages.values() if stage.is_overlay]
        overlays.sort(key=lambda s: s.render_order)
        return overlays

    def get_stage_type(self, name: str) -> str:
        """Get the stage_type for a stage."""
        stage = self._stages.get(name)
        return stage.stage_type if stage else ""

    def get_render_order(self, name: str) -> int:
        """Get the render_order for a stage."""
        stage = self._stages.get(name)
        return stage.render_order if stage else 0
    def get_metrics_summary(self) -> dict:
        """Get summary of collected metrics."""
        if not self._frame_metrics:
@@ -254,6 +980,39 @@ class Pipeline:
        self._frame_metrics.clear()
        self._current_frame_number = 0

    def get_frame_times(self) -> list[float]:
        """Get historical frame times for sparklines/charts."""
        return [f.total_ms for f in self._frame_metrics]
    def set_effect_intensity(self, effect_name: str, intensity: float) -> bool:
        """Set the intensity of an effect in the pipeline.

        Args:
            effect_name: Name of the effect to modify
            intensity: New intensity value (0.0 to 1.0)

        Returns:
            True if successful, False if effect not found or not an effect stage
        """
        if not 0.0 <= intensity <= 1.0:
            return False

        stage = self._stages.get(effect_name)
        if not stage:
            return False

        # Check if this is an EffectPluginStage
        from engine.pipeline.adapters.effect_plugin import EffectPluginStage

        if isinstance(stage, EffectPluginStage):
            # Access the underlying effect plugin
            effect = stage._effect
            if hasattr(effect, "config"):
                effect.config.intensity = intensity
                return True

        return False


class PipelineRunner:
    """High-level pipeline runner with animation support."""
@@ -303,8 +1062,11 @@ def create_pipeline_from_params(params: PipelineParams) -> Pipeline:

def create_default_pipeline() -> Pipeline:
    """Create a default pipeline with all standard components."""
    from engine.pipeline.adapters import DataSourceStage
    from engine.sources_v2 import HeadlinesDataSource
    from engine.data_sources.sources import HeadlinesDataSource
    from engine.pipeline.adapters import (
        DataSourceStage,
        SourceItemsToBufferStage,
    )

    pipeline = Pipeline()

@@ -312,6 +1074,9 @@ def create_default_pipeline() -> Pipeline:
    source = HeadlinesDataSource()
    pipeline.add_stage("source", DataSourceStage(source, name="headlines"))

    # Add render stage to convert items to text buffer
    pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))

    # Add display stage
    display = StageRegistry.create("display", "terminal")
    if display:
@@ -5,17 +5,42 @@ This module provides the foundation for a clean, dependency-managed pipeline:
- Stage: Base class for all pipeline components (sources, effects, displays, cameras)
- PipelineContext: Dependency injection context for runtime data exchange
- Capability system: Explicit capability declarations with duck-typing support
- DataType: PureData-style inlet/outlet typing for validation
"""

from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    from engine.pipeline.params import PipelineParams


class DataType(Enum):
    """PureData-style data types for inlet/outlet validation.

    Each type represents a specific data format that flows through the pipeline.
    This enables compile-time-like validation of connections.

    Examples:
        SOURCE_ITEMS: List[SourceItem] - raw items from sources
        ITEM_TUPLES: List[tuple] - (title, source, timestamp) tuples
        TEXT_BUFFER: List[str] - rendered ANSI buffer for display
        RAW_TEXT: str - raw text strings
        PIL_IMAGE: PIL Image object
    """

    SOURCE_ITEMS = auto()  # List[SourceItem] - from DataSource
    ITEM_TUPLES = auto()  # List[tuple] - (title, source, ts)
    TEXT_BUFFER = auto()  # List[str] - ANSI buffer
    RAW_TEXT = auto()  # str - raw text
    PIL_IMAGE = auto()  # PIL Image object
    ANY = auto()  # Accepts any type
    NONE = auto()  # No data (terminator)
@dataclass
|
||||
class StageConfig:
|
||||
"""Configuration for a single stage."""
|
||||
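The `DataType` enum above is what enables PureData-style connection checking between stages. How the pipeline actually enforces it is not part of this hunk, but a minimal sketch of the idea (the `connection_ok` helper is hypothetical, not the project's API) could look like:

```python
from enum import Enum, auto


class DataType(Enum):
    # Mirrors the enum in the diff above.
    SOURCE_ITEMS = auto()
    ITEM_TUPLES = auto()
    TEXT_BUFFER = auto()
    RAW_TEXT = auto()
    PIL_IMAGE = auto()
    ANY = auto()
    NONE = auto()


def connection_ok(outlets: set, inlets: set) -> bool:
    """Hypothetical check: a connection is valid when the upstream outlet
    set and the downstream inlet set share a type, or either side is ANY."""
    if DataType.ANY in outlets or DataType.ANY in inlets:
        return True
    return bool(outlets & inlets)


# A source producing SOURCE_ITEMS feeding a display that expects TEXT_BUFFER fails:
assert not connection_ok({DataType.SOURCE_ITEMS}, {DataType.TEXT_BUFFER})
# A render stage producing TEXT_BUFFER feeding that display succeeds:
assert connection_ok({DataType.TEXT_BUFFER}, {DataType.TEXT_BUFFER})
```

This is why the base-class defaults of `{DataType.ANY}` (seen further down) keep untyped stages connectable to anything.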
@@ -35,18 +60,78 @@ class Stage(ABC):
     - Effects: Post-processors (noise, fade, glitch, hud)
     - Displays: Output backends (terminal, pygame, websocket)
     - Cameras: Viewport controllers (vertical, horizontal, omni)
+    - Overlays: UI elements that compose on top (HUD)
 
     Stages declare:
     - capabilities: What they provide to other stages
     - dependencies: What they need from other stages
+    - stage_type: Category of stage (source, effect, overlay, display)
+    - render_order: Execution order within category
+    - is_overlay: If True, output is composited on top, not passed downstream
 
     Duck-typing is supported: any class with the required methods can act as a Stage.
     """
 
     name: str
-    category: str  # "source", "effect", "display", "camera"
+    category: str  # "source", "effect", "overlay", "display", "camera"
     optional: bool = False  # If True, pipeline continues even if stage fails
 
+    @property
+    def stage_type(self) -> str:
+        """Category of stage for ordering.
+
+        Valid values: "source", "effect", "overlay", "display", "camera"
+        Defaults to category for backwards compatibility.
+        """
+        return self.category
+
+    @property
+    def render_order(self) -> int:
+        """Execution order within stage_type group.
+
+        Higher values execute later. Useful for ordering overlays
+        or effects that need specific execution order.
+        """
+        return 0
+
+    @property
+    def is_overlay(self) -> bool:
+        """If True, this stage's output is composited on top of the buffer.
+
+        Overlay stages don't pass their output to the next stage.
+        Instead, their output is layered on top of the final buffer.
+        Use this for HUD, status displays, and similar UI elements.
+        """
+        return False
+
+    @property
+    def inlet_types(self) -> set[DataType]:
+        """Return set of data types this stage accepts.
+
+        PureData-style inlet typing. If the connected upstream stage's
+        outlet_type is not in this set, the pipeline will raise an error.
+
+        Examples:
+            - Source stages: {DataType.NONE} (no input needed)
+            - Transform stages: {DataType.ITEM_TUPLES, DataType.TEXT_BUFFER}
+            - Display stages: {DataType.TEXT_BUFFER}
+        """
+        return {DataType.ANY}
+
+    @property
+    def outlet_types(self) -> set[DataType]:
+        """Return set of data types this stage produces.
+
+        PureData-style outlet typing. Downstream stages must accept
+        this type in their inlet_types.
+
+        Examples:
+            - Source stages: {DataType.SOURCE_ITEMS}
+            - Transform stages: {DataType.TEXT_BUFFER}
+            - Display stages: {DataType.NONE} (consumes data)
+        """
+        return {DataType.ANY}
+
     @property
     def capabilities(self) -> set[str]:
         """Return set of capabilities this stage provides.
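Taken together, `stage_type` and `render_order` imply a composite sort key for execution. The concrete ordering of categories below is an assumption for illustration (the diff documents the properties, not the scheduler), sketched with plain dicts standing in for stages:

```python
# Assumed category execution order; the diff does not pin this down.
TYPE_ORDER = ["source", "effect", "overlay", "display"]

# Dicts standing in for Stage objects, using the documented properties.
stages = [
    {"name": "hud", "stage_type": "overlay", "render_order": 10},
    {"name": "noise", "stage_type": "effect", "render_order": 0},
    {"name": "terminal", "stage_type": "display", "render_order": 0},
    {"name": "headlines", "stage_type": "source", "render_order": 0},
    {"name": "status", "stage_type": "overlay", "render_order": 0},
]

# Group by stage_type, then sort within the group: higher render_order runs later.
ordered = sorted(
    stages,
    key=lambda s: (TYPE_ORDER.index(s["stage_type"]), s["render_order"]),
)
print([s["name"] for s in ordered])
# ['headlines', 'noise', 'status', 'hud', 'terminal']
```

Note how `render_order=10` pushes the `hud` overlay after `status` within the overlay group, which is exactly the use case the docstring names.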
@@ -70,6 +155,21 @@ class Stage(ABC):
         """
         return set()
 
+    @property
+    def stage_dependencies(self) -> set[str]:
+        """Return set of stage names this stage must connect to directly.
+
+        This allows explicit stage-to-stage dependencies, useful for enforcing
+        pipeline structure when capability matching alone is insufficient.
+
+        Examples:
+            - {"viewport_filter"}  # Must connect to viewport_filter stage
+            - {"camera_update"}    # Must connect to camera_update stage
+
+        NOTE: These are stage names (as added to pipeline), not capabilities.
+        """
+        return set()
+
     def init(self, ctx: "PipelineContext") -> bool:
         """Initialize stage with pipeline context.
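How the pipeline consumes `stage_dependencies` is outside this hunk. A minimal sketch of the kind of validation it enables (the `check_stage_dependencies` checker is hypothetical; only the property itself appears in the diff):

```python
def check_stage_dependencies(pipeline_stages: dict) -> list:
    """Hypothetical validator: every name listed in a stage's
    stage_dependencies must exist as a stage added to the pipeline."""
    errors = []
    for name, deps in pipeline_stages.items():
        for dep in sorted(deps):  # sorted for deterministic error order
            if dep not in pipeline_stages:
                errors.append(f"{name!r} requires missing stage {dep!r}")
    return errors


# Map of stage name -> its stage_dependencies set.
# viewport_filter is present in the pipeline; camera_update is not:
stages = {
    "viewport_filter": set(),
    "camera": {"viewport_filter", "camera_update"},
}
print(check_stage_dependencies(stages))
# ["'camera' requires missing stage 'camera_update'"]
```

Because these are stage names rather than capabilities (per the NOTE above), the lookup is a direct membership test against the pipeline's stage table, with no capability matching involved.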
Some files were not shown because too many files have changed in this diff.