# Compare commits

**101 commits** — `integratio...` → `cd5034ce78`
### .gitignore (vendored, 1 change)

````diff
@@ -12,4 +12,3 @@ htmlcov/
 coverage.xml
 *.dot
 *.png
-test-reports/
````
````diff
@@ -29,28 +29,17 @@ class Stage(ABC):
         return set()
 
     @property
-    def dependencies(self) -> set[str]:
-        """What this stage needs (e.g., {'source'})"""
-        return set()
+    def dependencies(self) -> list[str]:
+        """What this stage needs (e.g., ['source'])"""
+        return []
 ```
 
 ### Capability-Based Dependencies
 
 The Pipeline resolves dependencies using **prefix matching**:
 - `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
-- `"camera.state"` matches the camera state capability
 - This allows flexible composition without hardcoding specific stage names
 
-### Minimum Capabilities
-
-The pipeline requires these minimum capabilities to function:
-- `"source"` - Data source capability
-- `"render.output"` - Rendered content capability
-- `"display.output"` - Display output capability
-- `"camera.state"` - Camera state for viewport filtering
-
-These are automatically injected if missing (auto-injection).
-
 ### DataType Enum
 
 PureData-style data types for inlet/outlet validation:
@@ -87,11 +76,3 @@ Canvas tracks dirty regions automatically when content is written via `put_regio
 - Use adapters (engine/pipeline/adapters.py) to wrap existing components as stages
 - Set `optional=True` for stages that can fail gracefully
 - Use `stage_type` and `render_order` for execution ordering
-- Clock stages update state independently of data flow
-
-## Sources
-
-- engine/pipeline/core.py - Stage base class
-- engine/pipeline/controller.py - Pipeline implementation
-- engine/pipeline/adapters/ - Stage adapters
-- docs/PIPELINE.md - Pipeline documentation
````
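The prefix-matching rule in the hunk above can be sketched in a few lines. Everything here is illustrative, not the project's actual API: `provides` and the sample capability list are stand-ins.

```python
def provides(capability: str, dependency: str) -> bool:
    """Prefix match: a dependency on "source" is satisfied by "source.headlines".

    The trailing dot guard prevents false positives such as "sourcery" for "source".
    """
    return capability == dependency or capability.startswith(dependency + ".")

# Hypothetical capability map for a running pipeline
capabilities = ["source.headlines", "render.output", "display.output", "camera.state"]
print(any(provides(c, "source") for c in capabilities))        # True
print(any(provides(c, "camera.state") for c in capabilities))  # True (exact match)
print(any(provides(c, "effect") for c in capabilities))        # False
```

The exact-match clause covers dotted dependencies like `"camera.state"`; the prefix clause lets a bare `"source"` dependency match any concrete source stage.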
````diff
@@ -96,68 +96,3 @@ python mainline.py --display terminal # default
 python mainline.py --display websocket
 python mainline.py --display moderngl # GPU-accelerated (requires moderngl)
 ```
-
-## Common Bugs and Patterns
-
-### BorderMode.OFF Enum Bug
-
-**Problem**: `BorderMode.OFF` has enum value `1` (not `0`), and Python enums are always truthy.
-
-**Incorrect Code**:
-```python
-if border:
-    buffer = render_border(buffer, width, height, fps, frame_time)
-```
-
-**Correct Code**:
-```python
-from engine.display import BorderMode
-
-if border and border != BorderMode.OFF:
-    buffer = render_border(buffer, width, height, fps, frame_time)
-```
-
-**Why**: Checking `if border:` evaluates to `True` even when `border == BorderMode.OFF` because enum members are always truthy in Python.
-
-### Context Type Mismatch
-
-**Problem**: `PipelineContext` and `EffectContext` have different APIs for storing data.
-
-- `PipelineContext`: Uses `set()`/`get()` for services
-- `EffectContext`: Uses `set_state()`/`get_state()` for state
-
-**Pattern for Passing Data**:
-```python
-# In pipeline setup (uses PipelineContext)
-ctx.set("pipeline_order", pipeline.execution_order)
-
-# In EffectPluginStage (must copy to EffectContext)
-effect_ctx.set_state("pipeline_order", ctx.get("pipeline_order"))
-```
-
-### Terminal Display ANSI Patterns
-
-**Screen Clearing**:
-```python
-output = "\033[H\033[J" + "".join(buffer)
-```
-
-**Cursor Positioning** (used by HUD effect):
-- `\033[row;colH` - Move cursor to row, column
-- Example: `\033[1;1H` - Move to row 1, column 1
-
-**Key Insight**: Terminal display joins buffer lines WITHOUT newlines, relying on ANSI cursor positioning codes to move the cursor to the correct location for each line.
-
-### EffectPluginStage Context Copying
-
-**Problem**: When effects need access to pipeline services (like `pipeline_order`), they must be copied from `PipelineContext` to `EffectContext`.
-
-**Pattern**:
-```python
-# In EffectPluginStage.process()
-# Copy pipeline_order from PipelineContext services to EffectContext state
-pipeline_order = ctx.get("pipeline_order")
-if pipeline_order:
-    effect_ctx.set_state("pipeline_order", pipeline_order)
-```
-
-This ensures effects can access `ctx.get_state("pipeline_order")` in their process method.
````
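The enum-truthiness pitfall behind the `BorderMode.OFF` bug is easy to reproduce in isolation. This sketch uses a local stand-in for `engine.display.BorderMode`, not the real class:

```python
from enum import Enum

class BorderMode(Enum):
    # Stand-in for engine.display.BorderMode; note OFF is 1, not 0
    OFF = 1
    SIMPLE = 2
    FANCY = 3

border = BorderMode.OFF
print(bool(border))                               # True: enum members are always truthy
print(bool(border and border != BorderMode.OFF))  # False: the explicit comparison is required
```

Even defining `OFF = 0` would not make `if border:` safe by default, since `Enum` members do not define `__bool__` in terms of their value; the explicit `!= BorderMode.OFF` check (or an `is not` identity check) is the reliable guard.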
### AGENTS.md (89 changes)
````diff
@@ -267,45 +267,15 @@ The new Stage-based pipeline architecture provides capability-based dependency r
 
 - **Stage** (`engine/pipeline/core.py`): Base class for pipeline stages
 - **Pipeline** (`engine/pipeline/controller.py`): Executes stages with capability-based dependency resolution
-- **PipelineConfig** (`engine/pipeline/controller.py`): Configuration for pipeline instance
 - **StageRegistry** (`engine/pipeline/registry.py`): Discovers and registers stages
 - **Stage Adapters** (`engine/pipeline/adapters.py`): Wraps existing components as stages
 
-#### Pipeline Configuration
-
-The `PipelineConfig` dataclass configures pipeline behavior:
-
-```python
-@dataclass
-class PipelineConfig:
-    source: str = "headlines"    # Data source identifier
-    display: str = "terminal"    # Display backend identifier
-    camera: str = "vertical"     # Camera mode identifier
-    effects: list[str] = field(default_factory=list)  # List of effect names
-    enable_metrics: bool = True  # Enable performance metrics
-```
-
-**Available sources**: `headlines`, `poetry`, `empty`, `list`, `image`, `metrics`, `cached`, `transform`, `composite`, `pipeline-inspect`
-**Available displays**: `terminal`, `null`, `replay`, `websocket`, `pygame`, `moderngl`, `multi`
-**Available camera modes**: `FEED`, `SCROLL`, `HORIZONTAL`, `OMNI`, `FLOATING`, `BOUNCE`, `RADIAL`
-
 #### Capability-Based Dependencies
 
 Stages declare capabilities (what they provide) and dependencies (what they need). The Pipeline resolves dependencies using prefix matching:
 - `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
-- `"camera.state"` matches the camera state capability
 - This allows flexible composition without hardcoding specific stage names
 
-#### Minimum Capabilities
-
-The pipeline requires these minimum capabilities to function:
-- `"source"` - Data source capability
-- `"render.output"` - Rendered content capability
-- `"display.output"` - Display output capability
-- `"camera.state"` - Camera state for viewport filtering
-
-These are automatically injected if missing by the `ensure_minimum_capabilities()` method.
-
 #### Sensor Framework
 
 - **Sensor** (`engine/sensors/__init__.py`): Base class for real-time input sensors
@@ -392,43 +362,6 @@ The rendering pipeline is documented in `docs/PIPELINE.md` using Mermaid diagram
 2. If adding new SVG diagrams, render them manually using an external tool (e.g., Mermaid Live Editor)
 3. Commit both the markdown and any new diagram files
 
-### Pipeline Mutation API
-
-The Pipeline class supports dynamic mutation during runtime via the mutation API:
-
-**Core Methods:**
-- `add_stage(name, stage, initialize=True)` - Add a stage to the pipeline
-- `remove_stage(name, cleanup=True)` - Remove a stage and rebuild execution order
-- `replace_stage(name, new_stage, preserve_state=True)` - Replace a stage with another
-- `swap_stages(name1, name2)` - Swap two stages
-- `move_stage(name, after=None, before=None)` - Move a stage in execution order
-- `enable_stage(name)` - Enable a stage
-- `disable_stage(name)` - Disable a stage
-
-**New Methods (Issue #35):**
-- `cleanup_stage(name)` - Clean up specific stage without removing it
-- `remove_stage_safe(name, cleanup=True)` - Alias for remove_stage that explicitly rebuilds
-- `can_hot_swap(name)` - Check if a stage can be safely hot-swapped
-  - Returns False for stages that provide minimum capabilities as sole provider
-  - Returns True for swappable stages
-
-**WebSocket Commands:**
-Commands can be sent via WebSocket to mutate the pipeline at runtime:
-```json
-{"action": "remove_stage", "stage": "stage_name"}
-{"action": "swap_stages", "stage1": "name1", "stage2": "name2"}
-{"action": "enable_stage", "stage": "stage_name"}
-{"action": "disable_stage", "stage": "stage_name"}
-{"action": "cleanup_stage", "stage": "stage_name"}
-{"action": "can_hot_swap", "stage": "stage_name"}
-```
-
-**Implementation Files:**
-- `engine/pipeline/controller.py` - Pipeline class with mutation methods
-- `engine/app/pipeline_runner.py` - `_handle_pipeline_mutation()` function
-- `engine/pipeline/ui.py` - execute_command() with docstrings
-- `tests/test_pipeline_mutation_commands.py` - Integration tests
-
 ## Skills Library
 
 A skills library MCP server (`skills`) is available for capturing and tracking learned knowledge. Skills are stored in `~/.skills/`.
@@ -436,23 +369,23 @@ A skills library MCP server (`skills`) is available for capturing and tracking l
 ### Workflow
 
 **Before starting work:**
-1. Run `local_skills_list_skills` to see available skills
-2. Use `local_skills_peek_skill({name: "skill-name"})` to preview relevant skills
-3. Use `local_skills_skill_slice({name: "skill-name", query: "your question"})` to get relevant sections
+1. Run `skills_list_skills` to see available skills
+2. Use `skills_peek_skill({name: "skill-name"})` to preview relevant skills
+3. Use `skills_skill_slice({name: "skill-name", query: "your question"})` to get relevant sections
 
 **While working:**
-- If a skill was wrong or incomplete: `local_skills_update_skill` → `local_skills_record_assessment` → `local_skills_report_outcome({quality: 1})`
-- If a skill worked correctly: `local_skills_report_outcome({quality: 4})` (normal) or `quality: 5` (perfect)
+- If a skill was wrong or incomplete: `skills_update_skill` → `skills_record_assessment` → `skills_report_outcome({quality: 1})`
+- If a skill worked correctly: `skills_report_outcome({quality: 4})` (normal) or `quality: 5` (perfect)
 
 **End of session:**
-- Run `local_skills_reflect_on_session({context_summary: "what you did"})` to identify new skills to capture
-- Use `local_skills_create_skill` to add new skills
-- Use `local_skills_record_assessment` to score them
+- Run `skills_reflect_on_session({context_summary: "what you did"})` to identify new skills to capture
+- Use `skills_create_skill` to add new skills
+- Use `skills_record_assessment` to score them
 
 ### Useful Tools
-- `local_skills_review_stale_skills()` - Skills due for review (negative days_until_due)
-- `local_skills_skills_report()` - Overview of entire collection
-- `local_skills_validate_skill({name: "skill-name"})` - Load skill for review with sources
+- `skills_review_stale_skills()` - Skills due for review (negative days_until_due)
+- `skills_skills_report()` - Overview of entire collection
+- `skills_validate_skill({name: "skill-name"})` - Load skill for review with sources
 
 ### Agent Skills
````
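The WebSocket mutation commands removed in the hunk above map JSON messages onto Pipeline methods. A minimal sketch of such a dispatcher, assuming nothing about the project's real `_handle_pipeline_mutation()`: `StubPipeline` and `handle_mutation` here are illustrative stand-ins.

```python
import json

class StubPipeline:
    """Stand-in for the real Pipeline; records mutation calls for demonstration."""
    def __init__(self):
        self.calls = []
    def remove_stage(self, name, cleanup=True):
        self.calls.append(("remove_stage", name))
    def swap_stages(self, a, b):
        self.calls.append(("swap_stages", a, b))
    def enable_stage(self, name):
        self.calls.append(("enable_stage", name))
    def disable_stage(self, name):
        self.calls.append(("disable_stage", name))

def handle_mutation(pipeline, message: str) -> None:
    """Dispatch one JSON WebSocket command to the matching mutation method."""
    cmd = json.loads(message)
    action = cmd["action"]
    if action == "remove_stage":
        pipeline.remove_stage(cmd["stage"])
    elif action == "swap_stages":
        pipeline.swap_stages(cmd["stage1"], cmd["stage2"])
    elif action in ("enable_stage", "disable_stage"):
        # Both commands take a single "stage" argument
        getattr(pipeline, action)(cmd["stage"])

pipeline = StubPipeline()
handle_mutation(pipeline, '{"action": "swap_stages", "stage1": "noise", "stage2": "fade"}')
print(pipeline.calls)  # [('swap_stages', 'noise', 'fade')]
```

A production dispatcher would additionally validate the action name, consult `can_hot_swap()` before destructive operations, and report errors back over the socket.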
### docs/PIPELINE.md (234 changes)
````diff
@@ -2,160 +2,136 @@
 
 ## Architecture Overview
 
-The Mainline pipeline uses a **Stage-based architecture** with **capability-based dependency resolution**. Stages declare capabilities (what they provide) and dependencies (what they need), and the Pipeline resolves dependencies using prefix matching.
-
 ```
-Source Stage → Render Stage → Effect Stages → Display Stage
+Sources (static/dynamic) → Fetch → Prepare → Scroll → Effects → Render → Display
          ↓
-Camera Stage (provides camera.state capability)
+NtfyPoller ← MicMonitor (async)
 ```
 
-### Capability-Based Dependency Resolution
+### Data Source Abstraction (sources_v2.py)
 
-Stages declare capabilities and dependencies:
-- **Capabilities**: What the stage provides (e.g., `source`, `render.output`, `display.output`, `camera.state`)
-- **Dependencies**: What the stage needs (e.g., `source`, `render.output`, `camera.state`)
+- **Static sources**: Data fetched once and cached (HeadlinesDataSource, PoetryDataSource)
+- **Dynamic sources**: Idempotent fetch for runtime updates (PipelineDataSource)
+- **SourceRegistry**: Discovery and management of data sources
 
-The Pipeline resolves dependencies using **prefix matching**:
-- `"source"` matches `"source.headlines"`, `"source.poetry"`, etc.
-- `"camera.state"` matches the camera state capability provided by `CameraClockStage`
-- This allows flexible composition without hardcoding specific stage names
+### Camera Modes
 
-### Minimum Capabilities
+- **Vertical**: Scroll up (default)
+- **Horizontal**: Scroll left
+- **Omni**: Diagonal scroll
+- **Floating**: Sinusoidal bobbing
+- **Trace**: Follow network path node-by-node (for pipeline viz)
 
-The pipeline requires these minimum capabilities to function:
-- `"source"` - Data source capability (provides raw items)
-- `"render.output"` - Rendered content capability
-- `"display.output"` - Display output capability
-- `"camera.state"` - Camera state for viewport filtering
-
-These are automatically injected if missing by the `ensure_minimum_capabilities()` method.
-
-### Stage Registry
-
-The `StageRegistry` discovers and registers stages automatically:
-- Scans `engine/stages/` for stage implementations
-- Registers stages by their declared capabilities
-- Enables runtime stage discovery and composition
-
-## Stage-Based Pipeline Flow
+## Content to Display Rendering Pipeline
 
 ```mermaid
 flowchart TD
-    subgraph Stages["Stage Pipeline"]
-        subgraph SourceStage["Source Stage (provides: source.*)"]
-            Headlines[HeadlinesSource]
-            Poetry[PoetrySource]
-            Pipeline[PipelineSource]
-        end
+    subgraph Sources["Data Sources (v2)"]
+        Headlines[HeadlinesDataSource]
+        Poetry[PoetryDataSource]
+        Pipeline[PipelineDataSource]
+        Registry[SourceRegistry]
+    end
 
-        subgraph RenderStage["Render Stage (provides: render.*)"]
-            Render[RenderStage]
-            Canvas[Canvas]
-            Camera[Camera]
-        end
+    subgraph SourcesLegacy["Data Sources (legacy)"]
+        RSS[("RSS Feeds")]
+        PoetryFeed[("Poetry Feed")]
+        Ntfy[("Ntfy Messages")]
+        Mic[("Microphone")]
+    end
 
-        subgraph EffectStages["Effect Stages (provides: effect.*)"]
-            Noise[NoiseEffect]
-            Fade[FadeEffect]
-            Glitch[GlitchEffect]
-            Firehose[FirehoseEffect]
-            Hud[HudEffect]
-        end
+    subgraph Fetch["Fetch Layer"]
+        FC[fetch_all]
+        FP[fetch_poetry]
+        Cache[(Cache)]
+    end
 
-        subgraph DisplayStage["Display Stage (provides: display.*)"]
-            Terminal[TerminalDisplay]
-            Pygame[PygameDisplay]
-            WebSocket[WebSocketDisplay]
-            Null[NullDisplay]
-        end
-    end
+    subgraph Prepare["Prepare Layer"]
+        MB[make_block]
+        Strip[strip_tags]
+        Trans[translate]
+    end
 
-    subgraph Capabilities["Capability Map"]
-        SourceCaps["source.headlines<br/>source.poetry<br/>source.pipeline"]
-        RenderCaps["render.output<br/>render.canvas"]
-        EffectCaps["effect.noise<br/>effect.fade<br/>effect.glitch"]
-        DisplayCaps["display.output<br/>display.terminal"]
-    end
+    subgraph Scroll["Scroll Engine"]
+        SC[StreamController]
+        CAM[Camera]
+        RTZ[render_ticker_zone]
+        Msg[render_message_overlay]
+        Grad[lr_gradient]
+        VT[vis_trunc / vis_offset]
+    end
 
-    SourceStage --> RenderStage
-    RenderStage --> EffectStages
-    EffectStages --> DisplayStage
+    subgraph Effects["Effect Pipeline"]
+        subgraph EffectsPlugins["Effect Plugins"]
+            Noise[NoiseEffect]
+            Fade[FadeEffect]
+            Glitch[GlitchEffect]
+            Firehose[FirehoseEffect]
+            Hud[HudEffect]
+        end
+        EC[EffectChain]
+        ER[EffectRegistry]
+    end
 
-    SourceStage --> SourceCaps
-    RenderStage --> RenderCaps
-    EffectStages --> EffectCaps
-    DisplayStage --> DisplayCaps
+    subgraph Render["Render Layer"]
+        BW[big_wrap]
+        RL[render_line]
+    end
 
-    style SourceStage fill:#f9f,stroke:#333
-    style RenderStage fill:#bbf,stroke:#333
-    style EffectStages fill:#fbf,stroke:#333
-    style DisplayStage fill:#bfb,stroke:#333
+    subgraph Display["Display Backends"]
+        TD[TerminalDisplay]
+        PD[PygameDisplay]
+        SD[SixelDisplay]
+        KD[KittyDisplay]
+        WSD[WebSocketDisplay]
+        ND[NullDisplay]
+    end
+
+    subgraph Async["Async Sources"]
+        NTFY[NtfyPoller]
+        MIC[MicMonitor]
+    end
+
+    subgraph Animation["Animation System"]
+        AC[AnimationController]
+        PR[Preset]
+    end
+
+    Sources --> Fetch
+    RSS --> FC
+    PoetryFeed --> FP
+    FC --> Cache
+    FP --> Cache
+    Cache --> MB
+    Strip --> MB
+    Trans --> MB
+    MB --> SC
+    NTFY --> SC
+    SC --> RTZ
+    CAM --> RTZ
+    Grad --> RTZ
+    VT --> RTZ
+    RTZ --> EC
+    EC --> ER
+    ER --> EffectsPlugins
+    EffectsPlugins --> BW
+    BW --> RL
+    RL --> Display
+    Ntfy --> RL
+    Mic --> RL
+    MIC --> RL
+
+    style Sources fill:#f9f,stroke:#333
+    style Fetch fill:#bbf,stroke:#333
+    style Prepare fill:#bff,stroke:#333
+    style Scroll fill:#bfb,stroke:#333
+    style Effects fill:#fbf,stroke:#333
+    style Render fill:#ffb,stroke:#333
+    style Display fill:#bbf,stroke:#333
+    style Async fill:#fbb,stroke:#333
+    style Animation fill:#bfb,stroke:#333
 ```
 
-## Stage Adapters
-
-Existing components are wrapped as Stages via adapters:
-
-### Source Stage Adapter
-- Wraps `HeadlinesDataSource`, `PoetryDataSource`, etc.
-- Provides `source.*` capabilities
-- Fetches data and outputs to pipeline buffer
-
-### Render Stage Adapter
-- Wraps `StreamController`, `Camera`, `render_ticker_zone`
-- Provides `render.output` capability
-- Processes content and renders to canvas
-
-### Effect Stage Adapter
-- Wraps `EffectChain` and individual effect plugins
-- Provides `effect.*` capabilities
-- Applies visual effects to rendered content
-
-### Display Stage Adapter
-- Wraps `TerminalDisplay`, `PygameDisplay`, etc.
-- Provides `display.*` capabilities
-- Outputs final buffer to display backend
-
-## Pipeline Mutation API
-
-The Pipeline supports dynamic mutation during runtime:
-
-### Core Methods
-- `add_stage(name, stage, initialize=True)` - Add a stage
-- `remove_stage(name, cleanup=True)` - Remove a stage and rebuild execution order
-- `replace_stage(name, new_stage, preserve_state=True)` - Replace a stage
-- `swap_stages(name1, name2)` - Swap two stages
-- `move_stage(name, after=None, before=None)` - Move a stage in execution order
-- `enable_stage(name)` / `disable_stage(name)` - Enable/disable stages
-
-### Safety Checks
-- `can_hot_swap(name)` - Check if a stage can be safely hot-swapped
-- `cleanup_stage(name)` - Clean up specific stage without removing it
-
-### WebSocket Commands
-The mutation API is accessible via WebSocket for remote control:
-```json
-{"action": "remove_stage", "stage": "stage_name"}
-{"action": "swap_stages", "stage1": "name1", "stage2": "name2"}
-{"action": "enable_stage", "stage": "stage_name"}
-{"action": "cleanup_stage", "stage": "stage_name"}
-```
-
-## Camera Modes
-
-The Camera supports the following modes:
-
-- **FEED**: Single item view (static or rapid cycling)
-- **SCROLL**: Smooth vertical scrolling (movie credits style)
-- **HORIZONTAL**: Left/right movement
-- **OMNI**: Combination of vertical and horizontal
-- **FLOATING**: Sinusoidal/bobbing motion
-- **BOUNCE**: DVD-style bouncing off edges
-- **RADIAL**: Polar coordinate scanning (radar sweep)
-
-Note: Camera state is provided by `CameraClockStage` (capability: `camera.state`) which updates independently of data flow. The `CameraStage` applies viewport transformations (capability: `camera`).
-
 ## Animation & Presets
 
 ```mermaid
@@ -185,7 +161,7 @@ flowchart LR
     Triggers --> Events
 ```
 
-## Camera Modes State Diagram
+## Camera Modes
 
 ```mermaid
 stateDiagram-v2
````
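The "Floating: Sinusoidal bobbing" camera mode mentioned above can be sketched as a pure function of time. Everything here is illustrative — `floating_offset` and its parameters are not the project's actual camera code:

```python
import math

def floating_offset(t: float, amplitude: float = 3.0, period: float = 4.0) -> int:
    """Hypothetical FLOATING-mode viewport offset (in rows) at time t seconds.

    The camera bobs sinusoidally around its base position, completing one
    cycle every `period` seconds with a peak displacement of `amplitude` rows.
    """
    return round(amplitude * math.sin(2 * math.pi * t / period))

# One full bobbing cycle sampled every half second
print([floating_offset(0.5 * i) for i in range(8)])  # [0, 2, 3, 2, 0, -2, -3, -2]
```

Because the offset depends only on wall-clock time, the motion stays smooth regardless of how often the data-flow side of the pipeline ticks.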
|||||||
@@ -1,217 +0,0 @@
|
|||||||
# ADR: Preset Scripting Language for Mainline
|
|
||||||
|
|
||||||
## Status: Draft
|
|
||||||
|
|
||||||
## Context
|
|
||||||
|
|
||||||
We need to evaluate whether to add a scripting language for authoring presets in Mainline, replacing or augmenting the current TOML-based preset system. The goals are:
|
|
||||||
|
|
||||||
1. **Expressiveness**: More powerful than TOML for describing dynamic, procedural, or dataflow-based presets
|
|
||||||
2. **Live coding**: Support hot-reloading of presets during runtime (like TidalCycles or Sonic Pi)
|
|
||||||
3. **Testing**: Include assertion language to package tests alongside presets
|
|
||||||
4. **Toolchain**: Consider packaging and build processes
|
|
||||||
|
|
||||||
### Current State
|
|
||||||
|
|
||||||
The current preset system uses TOML files (`presets.toml`) with a simple structure:
|
|
||||||
|
|
||||||
```toml
|
|
||||||
[presets.demo-base]
|
|
||||||
description = "Demo: Base preset for effect hot-swapping"
|
|
||||||
source = "headlines"
|
|
||||||
display = "terminal"
|
|
||||||
camera = "feed"
|
|
||||||
effects = [] # Demo script will add/remove effects dynamically
|
|
||||||
camera_speed = 0.1
|
|
||||||
viewport_width = 80
|
|
||||||
viewport_height = 24
|
|
||||||
```
|
|
||||||
|
|
||||||
This is declarative and static. It cannot express:
|
|
||||||
- Conditional logic based on runtime state
|
|
||||||
- Dataflow between pipeline stages
|
|
||||||
- Procedural generation of stage configurations
|
|
||||||
- Assertions or validation of preset behavior
|
|
||||||
|
|
||||||
### Problems with TOML
|
|
||||||
|
|
||||||
- No way to express dependencies between effects or stages
|
|
||||||
- Cannot describe temporal/animated behavior
|
|
||||||
- No support for sensor bindings or parametric animations
|
|
||||||
- Static configuration cannot adapt to runtime conditions
|
|
||||||
- No built-in testing/assertion mechanism
|
|
||||||
|
|
||||||
## Approaches
|
|
||||||
|
|
||||||
### 1. Visual Dataflow Language (PureData-style)
|
|
||||||
|
|
||||||
Inspired by Pure Data (Pd), Max/MSP, and TouchDesigner:
|
|
||||||
|
|
||||||
**Pros:**
|
|
||||||
- Intuitive for creative coding and live performance
|
|
||||||
- Strong model for real-time parameter modulation
|
|
||||||
- Matches the "patcher" paradigm already seen in pipeline architecture
|
|
||||||
- Rich ecosystem of visual programming tools
|
|
||||||
|
|
||||||
**Cons:**
|
|
||||||
- Complex to implement from scratch
|
|
||||||
- Requires dedicated GUI editor
|
|
||||||
- Harder to version control (binary/graph formats)
|
|
||||||
- Mermaid diagrams alone aren't sufficient for this
|
|
||||||
|
|
||||||
**Tools to explore:**
|
|
||||||
- libpd (Pure Data bindings for other languages)
|
|
||||||
- Node-based frameworks (node-red, various DSP tools)
|
|
||||||
- TouchDesigner-like approaches
|
|
||||||
|
|
||||||
### 2. Textual DSL (TidalCycles-style)
|
|
||||||
|
|
||||||
Domain-specific language focused on pattern transformation:
|
|
||||||
|
|
||||||
**Pros:**
|
|
||||||
- Lightweight, fast iteration
|
|
||||||
- Easy to version control (text files)
|
|
||||||
- Can express complex patterns with minimal syntax
|
|
||||||
- Proven in livecoding community
|
|
||||||
|
|
||||||
**Cons:**
|
|
||||||
- Learning curve for non-programmers
|
|
||||||
- Less visual than PureData approach
|
|
||||||
|
|
||||||
**Example (hypothetical):**
|
|
||||||
```
|
|
||||||
preset my-show {
|
|
||||||
source: headlines
|
|
||||||
|
|
||||||
every 8s {
|
|
||||||
effect noise: intensity = (0.5 <-> 1.0)
|
|
||||||
}
|
|
||||||
|
|
||||||
on mic.level > 0.7 {
|
|
||||||
effect glitch: intensity += 0.2
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3. Embed Existing Language
|
|
||||||
|
|
||||||
Embed Lua, Python, or JavaScript:
|
|
||||||
|
|
||||||
**Pros:**
|
|
||||||
- Full power of general-purpose language
|
|
||||||
- Existing tooling, testing frameworks
|
|
||||||
- Easy to integrate (many embeddable interpreters)
|
|
||||||
|
|
||||||
**Cons:**
|
|
||||||
- Security concerns with running user code
|
|
||||||
- May be overkill for simple presets
|
|
||||||
- Testing/assertion system must be built on top
|
|
||||||
|
|
||||||
**Tools:**
|
|
||||||
- Lua (lightweight, fast)
|
|
||||||
- Python (rich ecosystem, but heavier)
|
|
||||||
- QuickJS (small, embeddable JS)
|
|
||||||
|
|
||||||
### 4. Hybrid Approach
|
|
||||||
|
|
||||||
Visual editor generates textual DSL that compiles to Python:
|
|
||||||
|
|
||||||
**Pros:**
|
|
||||||
- Best of both worlds
|
|
||||||
- Can start with simple DSL and add editor later
|
|
||||||
|
|
||||||
**Cons:**
|
|
||||||
- More complex initial implementation
|
|
||||||
|
|
||||||
## Requirements Analysis

### Must Have
- [ ] Express pipeline stage configurations (source, effects, camera, display)
- [ ] Support parameter bindings to sensors
- [ ] Hot-reloading at runtime
- [ ] Integration with the existing Pipeline architecture

### Should Have
- [ ] Basic assertion language for testing
- [ ] Ability to define custom abstractions/modules
- [ ] Version-control friendly (text-based)

### Could Have
- [ ] Visual node-based editor
- [ ] Real-time visualization of dataflow
- [ ] MIDI/OSC support for external controllers

## User Stories (Proposed)

### Spike Stories (Investigation)

**Story 1: Evaluate DSL Parsing Tools**
> As a developer, I want to understand the available Python DSL parsing libraries (Lark, parsy, pyparsing) so that I can choose the right tool for implementing a preset DSL.
>
> **Acceptance**: Document pros/cons of 3+ parsing libraries with small proof-of-concept experiments

**Story 2: Research Livecoding Languages**
> As a developer, I want to understand how TidalCycles, Sonic Pi, and PureData handle hot-reloading and pattern generation so that I can apply similar techniques to Mainline.
>
> **Acceptance**: Document key architectural patterns from 2+ livecoding systems

**Story 3: Prototype Textual DSL**
> As a preset author, I want to write presets in a simple textual DSL that supports basic conditionals and sensor bindings.
>
> **Acceptance**: Create a prototype DSL that can parse a sample preset and convert it to a PipelineConfig

**Story 4: Investigate Assertion/Testing Approaches**
> As a quality engineer, I want to include assertions with presets so that preset behavior can be validated automatically.
>
> **Acceptance**: Survey testing patterns in livecoding systems and propose an assertion syntax

### Implementation Stories (Future)

**Story 5: Implement Core DSL Parser**
> As a preset author, I want to write presets in a textual DSL that supports sensors, conditionals, and parameter bindings.
>
> **Acceptance**: DSL parser handles the core syntax and produces a valid PipelineConfig

**Story 6: Hot-Reload System**
> As a performer, I want to edit preset files and see changes reflected in real time without restarting.
>
> **Acceptance**: File watcher + pipeline mutation API integration works

**Story 7: Assertion Language**
> As a preset author, I want to include assertions that validate sensor values or pipeline state.
>
> **Acceptance**: Assertions can run as part of preset execution and report pass/fail

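One way Story 7 could bottom out: reduce each assertion line to a comparison against a dotted path into a state snapshot. The `expect` syntax and the `check` helper below are hypothetical, sketched only to show how little machinery a first pass needs:

```python
import operator
import re

# Hypothetical assertion syntax: "expect <dotted.path> <op> <number>"
OPS = {"<": operator.lt, "<=": operator.le, ">": operator.gt,
       ">=": operator.ge, "==": operator.eq}


def check(assertion: str, state: dict) -> bool:
    """Evaluate one assertion line against a nested state snapshot."""
    m = re.match(r"expect\s+([\w.]+)\s*(<=|>=|==|<|>)\s*([\d.]+)", assertion)
    if not m:
        raise ValueError(f"bad assertion: {assertion!r}")
    path, op, threshold = m.groups()
    value = state
    for key in path.split("."):  # walk the dotted path into the snapshot
        value = value[key]
    return OPS[op](value, float(threshold))


state = {"mic": {"level": 0.9}, "camera": {"speed": 1.0}}
print(check("expect mic.level > 0.7", state))       # True for this snapshot
print(check("expect camera.speed == 2.0", state))   # False for this snapshot
```

Running a list of these as part of preset execution, and reporting pass/fail per line, would satisfy the acceptance criterion without a second parser.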
**Story 8: Toolchain/Packaging**
> As a preset distributor, I want to package presets with their dependencies for easy sharing.
>
> **Acceptance**: Can create, build, and install a preset package

## Decision

**Recommendation: start with the textual DSL approach (Option 2, evolving toward Option 4)**

Rationale:
- Lowest barrier to entry (text files, version control)
- Can evolve into the hybrid approach later if a visual editor proves worthwhile
- Strong precedents in the livecoding community (TidalCycles, Sonic Pi)
- Enables hot-reloading naturally
- An assertion language can be part of the DSL syntax

**Not recommending Mermaid**: Mermaid is excellent for documentation and visualization, but it is a diagramming tool, not a programming language. It cannot express the logic, conditionals, and sensor bindings we need.

## Next Steps

1. Execute Spike Stories 1-4 to reduce uncertainty
2. Create a minimal viable DSL syntax
3. Prototype hot-reloading with the existing preset system
4. Evaluate whether a visual editor adds enough value to warrant the complexity

## References

- Pure Data: https://puredata.info/
- TidalCycles: https://tidalcycles.org/
- Sonic Pi: https://sonic-pi.net/
- Lark parser: https://lark-parser.readthedocs.io/
- Mainline Pipeline Architecture: `engine/pipeline/`
- Current Presets: `presets.toml`
@@ -1,10 +1 @@
 # engine — modular internals for mainline
-
-# Import submodules to make them accessible via engine.<name>
-# This is required for unittest.mock.patch to work with "engine.<module>.<function>"
-# strings and for direct attribute access on the engine package.
-import engine.config  # noqa: F401
-import engine.fetch  # noqa: F401
-import engine.filter  # noqa: F401
-import engine.sources  # noqa: F401
-import engine.terminal  # noqa: F401
@@ -8,7 +8,7 @@ import time
 from engine import config
 from engine.display import BorderMode, DisplayRegistry
 from engine.effects import get_registry
-from engine.fetch import fetch_all, fetch_all_fast, fetch_poetry, load_cache, save_cache
+from engine.fetch import fetch_all, fetch_poetry, load_cache
 from engine.pipeline import (
     Pipeline,
     PipelineConfig,
@@ -84,7 +84,6 @@ def run_pipeline_mode_direct():
         --pipeline-ui: Enable UI panel (BorderMode.UI)
         --pipeline-border <mode>: off, simple, ui
     """
-    import engine.effects.plugins as effects_plugins
     from engine.camera import Camera
     from engine.data_sources.pipeline_introspection import PipelineIntrospectionSource
     from engine.data_sources.sources import EmptyDataSource, ListDataSource
@@ -93,9 +92,6 @@ def run_pipeline_mode_direct():
         ViewportFilterStage,
     )
 
-    # Discover and register all effect plugins
-    effects_plugins.discover_plugins()
-
     # Parse CLI arguments
     source_name = None
     effect_names = []
@@ -212,18 +208,7 @@ def run_pipeline_mode_direct():
         if cached:
             source_items = cached
         else:
-            source_items = fetch_all_fast()
-            if source_items:
-                import threading
-
-                def background_fetch():
-                    full_items, _, _ = fetch_all()
-                    save_cache(full_items)
-
-                background_thread = threading.Thread(
-                    target=background_fetch, daemon=True
-                )
-                background_thread.start()
+            source_items, _, _ = fetch_all()
     elif source_name == "fixture":
         source_items = load_cache()
         if not source_items:
@@ -289,11 +274,6 @@ def run_pipeline_mode_direct():
             "viewport_filter", ViewportFilterStage(name="viewport-filter")
         )
         pipeline.add_stage("font", FontStage(name="font"))
-    else:
-        # Fallback to simple conversion for other sources
-        from engine.pipeline.adapters import SourceItemsToBufferStage
-
-        pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))
 
     # Add camera
     speed = getattr(params, "camera_speed", 1.0)
@@ -403,8 +383,7 @@ def run_pipeline_mode_direct():
 
         result = pipeline.execute(source_items)
         if not result.success:
-            error_msg = f" ({result.error})" if result.error else ""
-            print(f"  \033[38;5;196mPipeline execution failed{error_msg}\033[0m")
+            print("  \033[38;5;196mPipeline execution failed\033[0m")
             break
 
         # Render with UI panel
@@ -8,7 +8,7 @@ from typing import Any
 
 from engine.display import BorderMode, DisplayRegistry
 from engine.effects import get_registry
-from engine.fetch import fetch_all, fetch_all_fast, fetch_poetry, load_cache, save_cache
+from engine.fetch import fetch_all, fetch_poetry, load_cache
 from engine.pipeline import Pipeline, PipelineConfig, PipelineContext, get_preset
 from engine.pipeline.adapters import (
     EffectPluginStage,
@@ -24,85 +24,6 @@ except ImportError:
     WebSocketDisplay = None
 
 
-def _handle_pipeline_mutation(pipeline: Pipeline, command: dict) -> bool:
-    """Handle pipeline mutation commands from WebSocket or other external control.
-
-    Args:
-        pipeline: The pipeline to mutate
-        command: Command dictionary with 'action' and other parameters
-
-    Returns:
-        True if command was successfully handled, False otherwise
-    """
-    action = command.get("action")
-
-    if action == "add_stage":
-        # For now, this just returns True to acknowledge the command
-        # In a full implementation, we'd need to create the appropriate stage
-        print(f"  [Pipeline] add_stage command received: {command}")
-        return True
-
-    elif action == "remove_stage":
-        stage_name = command.get("stage")
-        if stage_name:
-            result = pipeline.remove_stage(stage_name)
-            print(f"  [Pipeline] Removed stage '{stage_name}': {result is not None}")
-            return result is not None
-
-    elif action == "replace_stage":
-        stage_name = command.get("stage")
-        # For now, this just returns True to acknowledge the command
-        print(f"  [Pipeline] replace_stage command received: {command}")
-        return True
-
-    elif action == "swap_stages":
-        stage1 = command.get("stage1")
-        stage2 = command.get("stage2")
-        if stage1 and stage2:
-            result = pipeline.swap_stages(stage1, stage2)
-            print(f"  [Pipeline] Swapped stages '{stage1}' and '{stage2}': {result}")
-            return result
-
-    elif action == "move_stage":
-        stage_name = command.get("stage")
-        after = command.get("after")
-        before = command.get("before")
-        if stage_name:
-            result = pipeline.move_stage(stage_name, after, before)
-            print(f"  [Pipeline] Moved stage '{stage_name}': {result}")
-            return result
-
-    elif action == "enable_stage":
-        stage_name = command.get("stage")
-        if stage_name:
-            result = pipeline.enable_stage(stage_name)
-            print(f"  [Pipeline] Enabled stage '{stage_name}': {result}")
-            return result
-
-    elif action == "disable_stage":
-        stage_name = command.get("stage")
-        if stage_name:
-            result = pipeline.disable_stage(stage_name)
-            print(f"  [Pipeline] Disabled stage '{stage_name}': {result}")
-            return result
-
-    elif action == "cleanup_stage":
-        stage_name = command.get("stage")
-        if stage_name:
-            pipeline.cleanup_stage(stage_name)
-            print(f"  [Pipeline] Cleaned up stage '{stage_name}'")
-            return True
-
-    elif action == "can_hot_swap":
-        stage_name = command.get("stage")
-        if stage_name:
-            can_swap = pipeline.can_hot_swap(stage_name)
-            print(f"  [Pipeline] Can hot-swap '{stage_name}': {can_swap}")
-            return True
-
-    return False
-
-
 def run_pipeline_mode(preset_name: str = "demo"):
     """Run using the new unified pipeline architecture."""
     import engine.effects.plugins as effects_plugins
@@ -138,7 +59,14 @@ def run_pipeline_mode(preset_name: str = "demo"):
         print("Error: Invalid viewport format. Use WxH (e.g., 40x15)")
         sys.exit(1)
 
-    pipeline = Pipeline(config=preset.to_config())
+    pipeline = Pipeline(
+        config=PipelineConfig(
+            source=preset.source,
+            display=preset.display,
+            camera=preset.camera,
+            effects=preset.effects,
+        )
+    )
 
     print("  \033[38;5;245mFetching content...\033[0m")
@@ -160,24 +88,10 @@ def run_pipeline_mode(preset_name: str = "demo"):
         cached = load_cache()
         if cached:
             items = cached
-            print(f"  \033[38;5;82mLoaded {len(items)} items from cache\033[0m")
     elif preset.source == "poetry":
         items, _, _ = fetch_poetry()
     else:
-        items = fetch_all_fast()
-        if items:
-            print(
-                f"  \033[38;5;82mFast start: {len(items)} items from first 5 sources\033[0m"
-            )
-
-            import threading
-
-            def background_fetch():
-                full_items, _, _ = fetch_all()
-                save_cache(full_items)
-
-            background_thread = threading.Thread(target=background_fetch, daemon=True)
-            background_thread.start()
+        items, _, _ = fetch_all()
 
     if not items:
         print("  \033[38;5;196mNo content available\033[0m")
@@ -436,28 +350,6 @@ def run_pipeline_mode(preset_name: str = "demo"):
 
     def handle_websocket_command(command: dict) -> None:
         """Handle commands from WebSocket clients."""
-        action = command.get("action")
-
-        # Handle pipeline mutation commands directly
-        if action in (
-            "add_stage",
-            "remove_stage",
-            "replace_stage",
-            "swap_stages",
-            "move_stage",
-            "enable_stage",
-            "disable_stage",
-            "cleanup_stage",
-            "can_hot_swap",
-        ):
-            result = _handle_pipeline_mutation(pipeline, command)
-            if result:
-                state = display._get_state_snapshot()
-                if state:
-                    display.broadcast_state(state)
-            return
-
-        # Handle UI panel commands
         if ui_panel.execute_command(command):
             # Broadcast updated state after command execution
             state = display._get_state_snapshot()
@@ -72,17 +72,6 @@ class Camera:
         """Shorthand for viewport_width."""
         return self.viewport_width
 
-    def set_speed(self, speed: float) -> None:
-        """Set the camera scroll speed dynamically.
-
-        This allows camera speed to be modulated during runtime
-        via PipelineParams or directly.
-
-        Args:
-            speed: New speed value (0.0 = stopped, >0 = movement)
-        """
-        self.speed = max(0.0, speed)
-
     @property
     def h(self) -> int:
         """Shorthand for viewport_height."""
@@ -384,11 +373,10 @@ class Camera:
             truncated_line = vis_trunc(offset_line, viewport_width)
 
             # Pad line to full viewport width to prevent ghosting when panning
-            # Skip padding for empty lines to preserve intentional blank lines
             import re
 
             visible_len = len(re.sub(r"\x1b\[[0-9;]*m", "", truncated_line))
-            if visible_len < viewport_width and visible_len > 0:
+            if visible_len < viewport_width:
                 truncated_line += " " * (viewport_width - visible_len)
 
             horizontal_slice.append(truncated_line)
@@ -99,6 +99,7 @@ class PygameDisplay:
         self.width = width
         self.height = height
 
+
         try:
             import pygame
         except ImportError:
@@ -104,9 +104,7 @@ class TerminalDisplay:
         frame_time = avg_ms
 
         # Apply border if requested
-        from engine.display import BorderMode
-
-        if border and border != BorderMode.OFF:
+        if border:
             buffer = render_border(buffer, self.width, self.height, fps, frame_time)
 
         # Write buffer with cursor home + erase down to avoid flicker
@@ -92,7 +92,7 @@ class HudEffect(EffectPlugin):
 
         for i, line in enumerate(hud_lines):
             if i < len(result):
-                result[i] = line
+                result[i] = line + result[i][len(line) :]
             else:
                 result.append(line)
 
engine/fetch.py
@@ -7,7 +7,6 @@ import json
 import pathlib
 import re
 import urllib.request
-from concurrent.futures import ThreadPoolExecutor, as_completed
 from datetime import datetime
 from typing import Any
 
@@ -18,98 +17,54 @@ from engine.filter import skip, strip_tags
 from engine.sources import FEEDS, POETRY_SOURCES
 from engine.terminal import boot_ln
 
+# Type alias for headline items
 HeadlineTuple = tuple[str, str, str]
 
-DEFAULT_MAX_WORKERS = 10
-FAST_START_SOURCES = 5
-FAST_START_TIMEOUT = 3
-
-def fetch_feed(url: str) -> tuple[str, Any] | tuple[None, None]:
-    """Fetch and parse a single RSS feed URL. Returns (url, feed) tuple."""
+
+# ─── SINGLE FEED ──────────────────────────────────────────
+def fetch_feed(url: str) -> Any | None:
+    """Fetch and parse a single RSS feed URL."""
     try:
         req = urllib.request.Request(url, headers={"User-Agent": "mainline/0.1"})
-        timeout = FAST_START_TIMEOUT if url in _fast_start_urls else config.FEED_TIMEOUT
-        resp = urllib.request.urlopen(req, timeout=timeout)
-        return (url, feedparser.parse(resp.read()))
+        resp = urllib.request.urlopen(req, timeout=config.FEED_TIMEOUT)
+        return feedparser.parse(resp.read())
     except Exception:
-        return (url, None)
-
-
-def _parse_feed(feed: Any, src: str) -> list[HeadlineTuple]:
-    """Parse a feed and return list of headline tuples."""
-    items = []
-    if feed is None or (feed.bozo and not feed.entries):
-        return items
-
-    for e in feed.entries:
-        t = strip_tags(e.get("title", ""))
-        if not t or skip(t):
-            continue
-        pub = e.get("published_parsed") or e.get("updated_parsed")
-        try:
-            ts = datetime(*pub[:6]).strftime("%H:%M") if pub else "——:——"
-        except Exception:
-            ts = "——:——"
-        items.append((t, src, ts))
-    return items
-
-
-def fetch_all_fast() -> list[HeadlineTuple]:
-    """Fetch only the first N sources for fast startup."""
-    global _fast_start_urls
-    _fast_start_urls = set(list(FEEDS.values())[:FAST_START_SOURCES])
-
-    items: list[HeadlineTuple] = []
-    with ThreadPoolExecutor(max_workers=FAST_START_SOURCES) as executor:
-        futures = {
-            executor.submit(fetch_feed, url): src
-            for src, url in list(FEEDS.items())[:FAST_START_SOURCES]
-        }
-        for future in as_completed(futures):
-            src = futures[future]
-            url, feed = future.result()
-            if feed is None or (feed.bozo and not feed.entries):
-                boot_ln(src, "DARK", False)
-                continue
-            parsed = _parse_feed(feed, src)
-            if parsed:
-                items.extend(parsed)
-                boot_ln(src, f"LINKED [{len(parsed)}]", True)
-            else:
-                boot_ln(src, "EMPTY", False)
-    return items
-
-
+        return None
+
+
+# ─── ALL RSS FEEDS ────────────────────────────────────────
 def fetch_all() -> tuple[list[HeadlineTuple], int, int]:
-    """Fetch all RSS feeds concurrently and return items, linked count, failed count."""
-    global _fast_start_urls
-    _fast_start_urls = set()
-
+    """Fetch all RSS feeds and return items, linked count, failed count."""
     items: list[HeadlineTuple] = []
     linked = failed = 0
-    with ThreadPoolExecutor(max_workers=DEFAULT_MAX_WORKERS) as executor:
-        futures = {executor.submit(fetch_feed, url): src for src, url in FEEDS.items()}
-        for future in as_completed(futures):
-            src = futures[future]
-            url, feed = future.result()
-            if feed is None or (feed.bozo and not feed.entries):
-                boot_ln(src, "DARK", False)
-                failed += 1
-                continue
-            parsed = _parse_feed(feed, src)
-            if parsed:
-                items.extend(parsed)
-                boot_ln(src, f"LINKED [{len(parsed)}]", True)
-                linked += 1
-            else:
-                boot_ln(src, "EMPTY", False)
-                failed += 1
+    for src, url in FEEDS.items():
+        feed = fetch_feed(url)
+        if feed is None or (feed.bozo and not feed.entries):
+            boot_ln(src, "DARK", False)
+            failed += 1
+            continue
+        n = 0
+        for e in feed.entries:
+            t = strip_tags(e.get("title", ""))
+            if not t or skip(t):
+                continue
+            pub = e.get("published_parsed") or e.get("updated_parsed")
+            try:
+                ts = datetime(*pub[:6]).strftime("%H:%M") if pub else "——:——"
+            except Exception:
+                ts = "——:——"
+            items.append((t, src, ts))
+            n += 1
+        if n:
+            boot_ln(src, f"LINKED [{n}]", True)
+            linked += 1
+        else:
+            boot_ln(src, "EMPTY", False)
+            failed += 1
     return items, linked, failed
 
 
+# ─── PROJECT GUTENBERG ────────────────────────────────────
 def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
     """Download and parse stanzas/passages from a Project Gutenberg text."""
     try:
@@ -121,21 +76,23 @@ def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
             .replace("\r\n", "\n")
             .replace("\r", "\n")
         )
+        # Strip PG boilerplate
         m = re.search(r"\*\*\*\s*START OF[^\n]*\n", text)
         if m:
             text = text[m.end() :]
         m = re.search(r"\*\*\*\s*END OF", text)
        if m:
             text = text[: m.start()]
+        # Split on blank lines into stanzas/passages
         blocks = re.split(r"\n{2,}", text.strip())
         items = []
         for blk in blocks:
-            blk = " ".join(blk.split())
+            blk = " ".join(blk.split())  # flatten to one line
             if len(blk) < 20 or len(blk) > 280:
                 continue
-            if blk.isupper():
+            if blk.isupper():  # skip all-caps headers
                 continue
-            if re.match(r"^[IVXLCDM]+\.?\s*$", blk):
+            if re.match(r"^[IVXLCDM]+\.?\s*$", blk):  # roman numerals
                 continue
             items.append((blk, label, ""))
         return items
@@ -143,35 +100,29 @@ def _fetch_gutenberg(url: str, label: str) -> list[HeadlineTuple]:
         return []
 
 
-def fetch_poetry() -> tuple[list[HeadlineTuple], int, int]:
-    """Fetch all poetry/literature sources concurrently."""
+def fetch_poetry():
+    """Fetch all poetry/literature sources."""
     items = []
     linked = failed = 0
-    with ThreadPoolExecutor(max_workers=DEFAULT_MAX_WORKERS) as executor:
-        futures = {
-            executor.submit(_fetch_gutenberg, url, label): label
-            for label, url in POETRY_SOURCES.items()
-        }
-        for future in as_completed(futures):
-            label = futures[future]
-            stanzas = future.result()
-            if stanzas:
-                boot_ln(label, f"LOADED [{len(stanzas)}]", True)
-                items.extend(stanzas)
-                linked += 1
-            else:
-                boot_ln(label, "DARK", False)
-                failed += 1
+    for label, url in POETRY_SOURCES.items():
+        stanzas = _fetch_gutenberg(url, label)
+        if stanzas:
+            boot_ln(label, f"LOADED [{len(stanzas)}]", True)
+            items.extend(stanzas)
+            linked += 1
+        else:
+            boot_ln(label, "DARK", False)
+            failed += 1
     return items, linked, failed
 
 
-_cache_dir = pathlib.Path(__file__).resolve().parent / "fixtures"
+# ─── CACHE ────────────────────────────────────────────────
+# Cache moved to engine/fixtures/headlines.json
+_CACHE_DIR = pathlib.Path(__file__).resolve().parent / "fixtures"
 
 
 def _cache_path():
-    return _cache_dir / "headlines.json"
+    return _CACHE_DIR / "headlines.json"
 
 
 def load_cache():
@@ -193,6 +144,3 @@ def save_cache(items):
         _cache_path().write_text(json.dumps({"items": items}))
     except Exception:
         pass
-
-
-_fast_start_urls: set = set()
@@ -1 +1,19 @@
-{"items": []}
+{
+  "items": [
+    ["Breaking: AI systems achieve breakthrough in natural language understanding", "TechDaily", "14:32"],
+    ["Scientists discover new exoplanet in habitable zone", "ScienceNews", "13:15"],
+    ["Global markets rally as inflation shows signs of cooling", "FinanceWire", "12:48"],
+    ["New study reveals benefits of Mediterranean diet for cognitive health", "HealthJournal", "11:22"],
+    ["Tech giants announce collaboration on AI safety standards", "TechDaily", "10:55"],
+    ["Archaeologists uncover 3000-year-old city in desert", "HistoryNow", "09:30"],
+    ["Renewable energy capacity surpasses fossil fuels for first time", "GreenWorld", "08:15"],
+    ["Space agency prepares for next Mars mission launch window", "SpaceNews", "07:42"],
+    ["New film breaks box office records on opening weekend", "EntertainmentHub", "06:18"],
+    ["Local community raises funds for new library project", "CommunityPost", "05:30"],
+    ["Quantum computing breakthrough could revolutionize cryptography", "TechWeekly", "15:20"],
+    ["New species of deep-sea creature discovered in Pacific trench", "NatureToday", "14:05"],
+    ["Electric vehicle sales surpass traditional cars in Europe", "AutoNews", "12:33"],
+    ["Renowned artist unveils interactive AI-generated exhibition", "ArtsMonthly", "11:10"],
+    ["Climate summit reaches historic agreement on emissions", "WorldNews", "09:55"]
+  ]
+}
@@ -62,16 +62,6 @@ class CameraClockStage(Stage):
         if data is None:
             return data
 
-        # Update camera speed from params if explicitly set (for dynamic modulation)
-        # Only update if camera_speed in params differs from the default (1.0)
-        # This preserves camera speed set during construction
-        if (
-            ctx.params
-            and hasattr(ctx.params, "camera_speed")
-            and ctx.params.camera_speed != 1.0
-        ):
-            self._camera.set_speed(ctx.params.camera_speed)
-
         current_time = time.perf_counter()
         dt = 0.0
         if self._last_frame_time is not None:
@@ -104,11 +104,6 @@ class EffectPluginStage(Stage):
         if "metrics" in ctx.state:
             effect_ctx.set_state("metrics", ctx.state["metrics"])
 
-        # Copy pipeline_order from PipelineContext services to EffectContext state
-        pipeline_order = ctx.get("pipeline_order")
-        if pipeline_order:
-            effect_ctx.set_state("pipeline_order", pipeline_order)
-
         # Apply sensor param bindings if effect has them
         if hasattr(self._effect, "param_bindings") and self._effect.param_bindings:
             bound_config = apply_param_bindings(self._effect, effect_ctx)
@@ -111,80 +111,8 @@ class Pipeline:
                 stage.cleanup()
             except Exception:
                 pass
 
-        # Rebuild execution order and capability map if stage was removed
-        if stage and self._initialized:
-            self._rebuild()
-
         return stage
 
-    def remove_stage_safe(self, name: str, cleanup: bool = True) -> Stage | None:
-        """Remove a stage and rebuild execution order safely.
-
-        This is an alias for remove_stage() that explicitly rebuilds
-        the execution order after removal.
-
-        Args:
-            name: Name of the stage to remove
-            cleanup: If True, call cleanup() on the removed stage
-
-        Returns:
-            The removed stage, or None if not found
-        """
-        return self.remove_stage(name, cleanup)
-
-    def cleanup_stage(self, name: str) -> None:
-        """Clean up a specific stage without removing it.
-
-        This is useful for stages that need to release resources
-        (like display connections) without being removed from the pipeline.
-
-        Args:
-            name: Name of the stage to clean up
-        """
-        stage = self._stages.get(name)
-        if stage:
-            try:
-                stage.cleanup()
-            except Exception:
-                pass
-
-    def can_hot_swap(self, name: str) -> bool:
-        """Check if a stage can be safely hot-swapped.
-
-        A stage can be hot-swapped if:
-        1. It exists in the pipeline
-        2. It's not required for basic pipeline function
-        3. It doesn't have strict dependencies that can't be re-resolved
-
-        Args:
-            name: Name of the stage to check
-
-        Returns:
-            True if the stage can be hot-swapped, False otherwise
-        """
-        # Check if stage exists
-        if name not in self._stages:
-            return False
-
-        # Check if stage is a minimum capability provider
-        stage = self._stages[name]
-        stage_caps = stage.capabilities if hasattr(stage, "capabilities") else set()
-        minimum_caps = self._minimum_capabilities
-
-        # If stage provides a minimum capability, it's more critical
-        # but still potentially swappable if another stage provides the same capability
-        for cap in stage_caps:
-            if cap in minimum_caps:
-                # Check if another stage provides this capability
-                providers = self._capability_map.get(cap, [])
-                # This stage is the sole provider - might be critical
-                # but still allow hot-swap if pipeline is not initialized
-                if len(providers) <= 1 and self._initialized:
-                    return False
-
-        return True
-
     def replace_stage(
         self, name: str, new_stage: Stage, preserve_state: bool = True
     ) -> Stage | None:
@@ -302,16 +230,11 @@ class Pipeline:
         self._capability_map = self._build_capability_map()
         self._execution_order = self._resolve_dependencies()
 
-        # Note: We intentionally DO NOT validate dependencies here.
-        # Mutation operations (remove/swap/move) might leave the pipeline
-        # temporarily invalid (e.g., removing a stage that others depend on).
-        # Validation is performed explicitly in build() or can be checked
-        # manually via validate_minimum_capabilities().
-        # try:
-        #     self._validate_dependencies()
-        #     self._validate_types()
-        # except StageError:
-        #     pass
+        try:
+            self._validate_dependencies()
+            self._validate_types()
+        except StageError:
+            pass
 
         # Restore initialized state
         self._initialized = was_initialized
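The hunk above replaces commented-out validation with a live try/except: after a mutation the pipeline may be transiently invalid, so `_rebuild()` swallows `StageError`, while `build()` stays strict. A minimal sketch of that pattern (`MiniPipeline` and its dependency rule are hypothetical stand-ins, not the project's `Pipeline` class):

```python
class StageError(Exception):
    pass


class MiniPipeline:
    """Toy pipeline illustrating lenient rebuild vs. strict build."""

    def __init__(self):
        self._stages: dict[str, object] = {}

    def _validate_dependencies(self) -> None:
        # Hypothetical rule: a "display" stage requires a "source" stage.
        if "display" in self._stages and "source" not in self._stages:
            raise StageError("display depends on source")

    def _rebuild(self) -> None:
        # Mutation path: tolerate a temporarily invalid pipeline.
        try:
            self._validate_dependencies()
        except StageError:
            pass

    def build(self) -> None:
        # Construction path: validate strictly.
        self._validate_dependencies()


p = MiniPipeline()
p._stages = {"display": object()}
p._rebuild()  # no exception: invalid state is tolerated mid-mutation
try:
    p.build()
    strict_failed = False
except StageError:
    strict_failed = True
```

The asymmetry is the point: mutations never raise, while `build()` surfaces the same `StageError` immediately.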
@@ -507,16 +430,6 @@ class Pipeline:
         self._capability_map = self._build_capability_map()
         self._execution_order = self._resolve_dependencies()
 
-        # Re-validate after injection attempt (whether anything was injected or not)
-        # If injection didn't run (injected empty), we still need to check if we're valid
-        # If injection ran but failed to fix (injected empty), we need to check
-        is_valid, missing = self.validate_minimum_capabilities()
-        if not is_valid:
-            raise StageError(
-                "build",
-                f"Auto-injection failed to provide minimum capabilities: {missing}",
-            )
-
         self._validate_dependencies()
         self._validate_types()
         self._initialized = True
@@ -725,9 +638,8 @@ class Pipeline:
         frame_start = time.perf_counter() if self._metrics_enabled else 0
         stage_timings: list[StageMetrics] = []
 
-        # Separate overlay stages and display stage from regular stages
+        # Separate overlay stages from regular stages
         overlay_stages: list[tuple[int, Stage]] = []
-        display_stage: Stage | None = None
         regular_stages: list[str] = []
 
         for name in self._execution_order:
@@ -735,11 +647,6 @@ class Pipeline:
             if not stage or not stage.is_enabled():
                 continue
 
-            # Check if this is the display stage - execute last
-            if stage.category == "display":
-                display_stage = stage
-                continue
-
             # Safely check is_overlay - handle MagicMock and other non-bool returns
             try:
                 is_overlay = bool(getattr(stage, "is_overlay", False))
@@ -756,7 +663,7 @@ class Pipeline:
             else:
                 regular_stages.append(name)
 
-        # Execute regular stages in dependency order (excluding display)
+        # Execute regular stages in dependency order
         for name in regular_stages:
             stage = self._stages.get(name)
             if not stage or not stage.is_enabled():
@@ -847,35 +754,6 @@ class Pipeline:
                 )
             )
 
-        # Execute display stage LAST (after overlay stages)
-        # This ensures overlay effects like HUD are visible in the final output
-        if display_stage:
-            stage_start = time.perf_counter() if self._metrics_enabled else 0
-
-            try:
-                current_data = display_stage.process(current_data, self.context)
-            except Exception as e:
-                if not display_stage.optional:
-                    return StageResult(
-                        success=False,
-                        data=current_data,
-                        error=str(e),
-                        stage_name=display_stage.name,
-                    )
-
-            if self._metrics_enabled:
-                stage_duration = (time.perf_counter() - stage_start) * 1000
-                chars_in = len(str(data)) if data else 0
-                chars_out = len(str(current_data)) if current_data else 0
-                stage_timings.append(
-                    StageMetrics(
-                        name=display_stage.name,
-                        duration_ms=stage_duration,
-                        chars_in=chars_in,
-                        chars_out=chars_out,
-                    )
-                )
-
         if self._metrics_enabled:
             total_duration = (time.perf_counter() - frame_start) * 1000
             self._frame_metrics.append(
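Taken together, the hunks above delete the display-stage special case: stages are now only split into regular and overlay groups, and overlays run last so effects like a HUD land on top of the final frame. A simplified sketch of that partition (the `Stage` class here is a toy stand-in, not the project's):

```python
class Stage:
    """Minimal stand-in: a named stage that appends its name to the frame."""

    def __init__(self, name: str, is_overlay: bool = False):
        self.name = name
        self.is_overlay = is_overlay

    def process(self, data: list[str]) -> list[str]:
        return data + [self.name]


execution_order = [
    Stage("source"),
    Stage("hud", is_overlay=True),
    Stage("camera"),
]

overlay_stages: list[Stage] = []
regular_stages: list[Stage] = []
for stage in execution_order:
    # bool() guards against non-bool is_overlay values (e.g. mocks)
    if bool(getattr(stage, "is_overlay", False)):
        overlay_stages.append(stage)
    else:
        regular_stages.append(stage)

frame: list[str] = []
for stage in regular_stages + overlay_stages:
    frame = stage.process(frame)

# frame == ["source", "camera", "hud"]: the overlay ran last
```

With only two groups, a display stage is ordered by its declared dependencies like any other stage instead of being hoisted to the end by hand.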
@@ -32,7 +32,7 @@ class PipelineParams:
 
     # Camera config
     camera_mode: str = "vertical"
-    camera_speed: float = 1.0  # Default speed
+    camera_speed: float = 1.0
     camera_x: int = 0  # For horizontal scrolling
 
     # Effect config
@@ -11,14 +11,11 @@ Loading order:
 """
 
 from dataclasses import dataclass, field
-from typing import TYPE_CHECKING, Any
+from typing import Any
 
 from engine.display import BorderMode
 from engine.pipeline.params import PipelineParams
 
-if TYPE_CHECKING:
-    from engine.pipeline.controller import PipelineConfig
-
 
 def _load_toml_presets() -> dict[str, Any]:
     """Load presets from TOML file."""
@@ -58,10 +55,9 @@ class PipelinePreset:
     viewport_width: int = 80  # Viewport width in columns
     viewport_height: int = 24  # Viewport height in rows
     source_items: list[dict[str, Any]] | None = None  # For ListDataSource
-    enable_metrics: bool = True  # Enable performance metrics collection
 
     def to_params(self) -> PipelineParams:
-        """Convert to PipelineParams (runtime configuration)."""
+        """Convert to PipelineParams."""
        from engine.display import BorderMode
 
         params = PipelineParams()
@@ -76,27 +72,10 @@ class PipelinePreset:
         )
         params.camera_mode = self.camera
         params.effect_order = self.effects.copy()
-        params.camera_speed = self.camera_speed
-        # Note: viewport_width/height are read from PipelinePreset directly
-        # in pipeline_runner.py, not from PipelineParams
+        # Note: camera_speed, viewport_width/height are not stored in PipelineParams
+        # They are used directly from the preset object in pipeline_runner.py
         return params
 
-    def to_config(self) -> "PipelineConfig":
-        """Convert to PipelineConfig (static pipeline construction config).
-
-        PipelineConfig is used once at pipeline initialization and contains
-        the core settings that don't change during execution.
-        """
-        from engine.pipeline.controller import PipelineConfig
-
-        return PipelineConfig(
-            source=self.source,
-            display=self.display,
-            camera=self.camera,
-            effects=self.effects.copy(),
-            enable_metrics=self.enable_metrics,
-        )
-
     @classmethod
     def from_yaml(cls, name: str, data: dict[str, Any]) -> "PipelinePreset":
         """Create a PipelinePreset from YAML data."""
@@ -112,7 +91,6 @@ class PipelinePreset:
             viewport_width=data.get("viewport_width", 80),
             viewport_height=data.get("viewport_height", 24),
             source_items=data.get("source_items"),
-            enable_metrics=data.get("enable_metrics", True),
         )
 
 
@@ -370,24 +370,13 @@ class UIPanel:
     def execute_command(self, command: dict) -> bool:
         """Execute a command from external control (e.g., WebSocket).
 
-        Supported UI commands:
+        Supported commands:
         - {"action": "toggle_stage", "stage": "stage_name"}
         - {"action": "select_stage", "stage": "stage_name"}
         - {"action": "adjust_param", "stage": "stage_name", "param": "param_name", "delta": 0.1}
         - {"action": "change_preset", "preset": "preset_name"}
         - {"action": "cycle_preset", "direction": 1}
 
-        Pipeline Mutation commands are handled by the WebSocket/runner handler:
-        - {"action": "add_stage", "stage": "stage_name", "type": "source|display|camera|effect"}
-        - {"action": "remove_stage", "stage": "stage_name"}
-        - {"action": "replace_stage", "stage": "old_stage_name", "with": "new_stage_type"}
-        - {"action": "swap_stages", "stage1": "name1", "stage2": "name2"}
-        - {"action": "move_stage", "stage": "stage_name", "after": "other_stage"|"before": "other_stage"}
-        - {"action": "enable_stage", "stage": "stage_name"}
-        - {"action": "disable_stage", "stage": "stage_name"}
-        - {"action": "cleanup_stage", "stage": "stage_name"}
-        - {"action": "can_hot_swap", "stage": "stage_name"}
-
         Returns:
             True if command was handled, False if not
         """
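The docstring above enumerates command dicts keyed by `"action"`. A minimal dispatcher of that shape (the handler bodies are placeholders, not `UIPanel`'s real logic) looks like:

```python
def execute_command(command: dict) -> bool:
    """Handle a UI command dict; return True if handled, False otherwise."""
    action = command.get("action")
    if action == "toggle_stage":
        stage = command["stage"]
        # ... flip the enabled flag for `stage` here ...
        return True
    if action == "adjust_param":
        stage = command["stage"]
        param = command["param"]
        delta = command["delta"]
        # ... apply `delta` to the named stage parameter here ...
        return True
    # Unhandled actions (e.g. pipeline mutations) fall through so
    # another handler, such as the WebSocket/runner layer, can take them.
    return False


handled = execute_command({"action": "toggle_stage", "stage": "hud"})
unhandled = execute_command({"action": "swap_stages", "stage1": "a", "stage2": "b"})
```

Returning `False` rather than raising is what lets several handlers share one command stream, as the split between UI commands and mutation commands suggests.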
@@ -1,473 +0,0 @@
-"""
-HTML Acceptance Test Report Generator
-
-Generates HTML reports showing frame buffers from acceptance tests.
-Uses NullDisplay to capture frames and renders them with monospace font.
-"""
-
-import html
-from datetime import datetime
-from pathlib import Path
-from typing import Any
-
-ANSI_256_TO_RGB = {
-    0: (0, 0, 0),
-    1: (128, 0, 0),
-    2: (0, 128, 0),
-    3: (128, 128, 0),
-    4: (0, 0, 128),
-    5: (128, 0, 128),
-    6: (0, 128, 128),
-    7: (192, 192, 192),
-    8: (128, 128, 128),
-    9: (255, 0, 0),
-    10: (0, 255, 0),
-    11: (255, 255, 0),
-    12: (0, 0, 255),
-    13: (255, 0, 255),
-    14: (0, 255, 255),
-    15: (255, 255, 255),
-}
-
-
-def ansi_to_rgb(color_code: int) -> tuple[int, int, int]:
-    """Convert ANSI 256-color code to RGB tuple."""
-    if 0 <= color_code <= 15:
-        return ANSI_256_TO_RGB.get(color_code, (255, 255, 255))
-    elif 16 <= color_code <= 231:
-        color_code -= 16
-        r = (color_code // 36) * 51
-        g = ((color_code % 36) // 6) * 51
-        b = (color_code % 6) * 51
-        return (r, g, b)
-    elif 232 <= color_code <= 255:
-        gray = (color_code - 232) * 10 + 8
-        return (gray, gray, gray)
-    return (255, 255, 255)
-
-
-def parse_ansi_line(line: str) -> list[dict[str, Any]]:
-    """Parse a single line with ANSI escape codes into styled segments.
-
-    Returns list of dicts with 'text', 'fg', 'bg', 'bold' keys.
-    """
-    import re
-
-    segments = []
-    current_fg = None
-    current_bg = None
-    current_bold = False
-    pos = 0
-
-    # Find all ANSI escape sequences
-    escape_pattern = re.compile(r"\x1b\[([0-9;]*)m")
-
-    while pos < len(line):
-        match = escape_pattern.search(line, pos)
-        if not match:
-            # Remaining text with current styling
-            if pos < len(line):
-                text = line[pos:]
-                if text:
-                    segments.append(
-                        {
-                            "text": text,
-                            "fg": current_fg,
-                            "bg": current_bg,
-                            "bold": current_bold,
-                        }
-                    )
-            break
-
-        # Add text before escape sequence
-        if match.start() > pos:
-            text = line[pos : match.start()]
-            if text:
-                segments.append(
-                    {
-                        "text": text,
-                        "fg": current_fg,
-                        "bg": current_bg,
-                        "bold": current_bold,
-                    }
-                )
-
-        # Parse escape sequence
-        codes = match.group(1).split(";") if match.group(1) else ["0"]
-        for code in codes:
-            code = code.strip()
-            if not code or code == "0":
-                current_fg = None
-                current_bg = None
-                current_bold = False
-            elif code == "1":
-                current_bold = True
-            elif code.isdigit():
-                code_int = int(code)
-                if 30 <= code_int <= 37:
-                    current_fg = ansi_to_rgb(code_int - 30 + 8)
-                elif 90 <= code_int <= 97:
-                    current_fg = ansi_to_rgb(code_int - 90)
-                elif code_int == 38:
-                    current_fg = (255, 255, 255)
-                elif code_int == 39:
-                    current_fg = None
-
-        pos = match.end()
-
-    return segments
-
-
-def render_line_to_html(line: str) -> str:
-    """Render a single terminal line to HTML with styling."""
-    import re
-
-    result = ""
-    pos = 0
-    current_fg = None
-    current_bg = None
-    current_bold = False
-
-    escape_pattern = re.compile(r"(\x1b\[[0-9;]*m)|(\x1b\[([0-9]+);([0-9]+)H)")
-
-    while pos < len(line):
-        match = escape_pattern.search(line, pos)
-        if not match:
-            # Remaining text
-            if pos < len(line):
-                text = html.escape(line[pos:])
-                if text:
-                    style = _build_style(current_fg, current_bg, current_bold)
-                    result += f"<span{style}>{text}</span>"
-            break
-
-        # Handle cursor positioning - just skip it for rendering
-        if match.group(2):  # Cursor positioning \x1b[row;colH
-            pos = match.end()
-            continue
-
-        # Handle style codes
-        if match.group(1):
-            codes = match.group(1)[2:-1].split(";") if match.group(1) else ["0"]
-            for code in codes:
-                code = code.strip()
-                if not code or code == "0":
-                    current_fg = None
-                    current_bg = None
-                    current_bold = False
-                elif code == "1":
-                    current_bold = True
-                elif code.isdigit():
-                    code_int = int(code)
-                    if 30 <= code_int <= 37:
-                        current_fg = ansi_to_rgb(code_int - 30 + 8)
-                    elif 90 <= code_int <= 97:
-                        current_fg = ansi_to_rgb(code_int - 90)
-
-            pos = match.end()
-            continue
-
-        pos = match.end()
-
-    # Handle remaining text without escape codes
-    if pos < len(line):
-        text = html.escape(line[pos:])
-        if text:
-            style = _build_style(current_fg, current_bg, current_bold)
-            result += f"<span{style}>{text}</span>"
-
-    return result or html.escape(line)
-
-
-def _build_style(
-    fg: tuple[int, int, int] | None, bg: tuple[int, int, int] | None, bold: bool
-) -> str:
-    """Build CSS style string from color values."""
-    styles = []
-    if fg:
-        styles.append(f"color: rgb({fg[0]},{fg[1]},{fg[2]})")
-    if bg:
-        styles.append(f"background-color: rgb({bg[0]},{bg[1]},{bg[2]})")
-    if bold:
-        styles.append("font-weight: bold")
-    if not styles:
-        return ""
-    return f' style="{"; ".join(styles)}"'
-
-
-def render_frame_to_html(frame: list[str], frame_number: int = 0) -> str:
-    """Render a complete frame (list of lines) to HTML."""
-    html_lines = []
-    for i, line in enumerate(frame):
-        # Strip ANSI cursor positioning but preserve colors
-        clean_line = (
-            line.replace("\x1b[1;1H", "")
-            .replace("\x1b[2;1H", "")
-            .replace("\x1b[3;1H", "")
-        )
-        rendered = render_line_to_html(clean_line)
-        html_lines.append(f'<div class="frame-line" data-line="{i}">{rendered}</div>')
-
-    return f"""<div class="frame" id="frame-{frame_number}">
-<div class="frame-header">Frame {frame_number} ({len(frame)} lines)</div>
-<div class="frame-content">
-{"".join(html_lines)}
-</div>
-</div>"""
-
-
-def generate_test_report(
-    test_name: str,
-    frames: list[list[str]],
-    status: str = "PASS",
-    duration_ms: float = 0.0,
-    metadata: dict[str, Any] | None = None,
-) -> str:
-    """Generate HTML report for a single test."""
-    frames_html = ""
-    for i, frame in enumerate(frames):
-        frames_html += render_frame_to_html(frame, i)
-
-    metadata_html = ""
-    if metadata:
-        metadata_html = '<div class="metadata">'
-        for key, value in metadata.items():
-            metadata_html += f'<div class="meta-row"><span class="meta-key">{key}:</span> <span class="meta-value">{value}</span></div>'
-        metadata_html += "</div>"
-
-    status_class = "pass" if status == "PASS" else "fail"
-
-    return f"""<!DOCTYPE html>
-<html>
-<head>
-<meta charset="UTF-8">
-<title>{test_name} - Acceptance Test Report</title>
-<style>
-body {{
-    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
-    background: #1a1a2e;
-    color: #eee;
-    margin: 0;
-    padding: 20px;
-}}
-.test-report {{
-    max-width: 1200px;
-    margin: 0 auto;
-}}
-.test-header {{
-    background: #16213e;
-    padding: 20px;
-    border-radius: 8px;
-    margin-bottom: 20px;
-    display: flex;
-    justify-content: space-between;
-    align-items: center;
-}}
-.test-name {{
-    font-size: 24px;
-    font-weight: bold;
-    color: #fff;
-}}
-.status {{
-    padding: 8px 16px;
-    border-radius: 4px;
-    font-weight: bold;
-}}
-.status.pass {{
-    background: #28a745;
-    color: white;
-}}
-.status.fail {{
-    background: #dc3545;
-    color: white;
-}}
-.frame {{
-    background: #0f0f1a;
-    border: 1px solid #333;
-    border-radius: 4px;
-    margin-bottom: 20px;
-    overflow: hidden;
-}}
-.frame-header {{
-    background: #16213e;
-    padding: 10px 15px;
-    font-size: 14px;
-    color: #888;
-    border-bottom: 1px solid #333;
-}}
-.frame-content {{
-    padding: 15px;
-    font-family: 'Fira Code', 'Consolas', 'Monaco', monospace;
-    font-size: 13px;
-    line-height: 1.4;
-    white-space: pre;
-    overflow-x: auto;
-}}
-.frame-line {{
-    min-height: 1.4em;
-}}
-.metadata {{
-    background: #16213e;
-    padding: 15px;
-    border-radius: 4px;
-    margin-bottom: 20px;
-}}
-.meta-row {{
-    display: flex;
-    gap: 20px;
-    font-size: 14px;
-}}
-.meta-key {{
-    color: #888;
-}}
-.meta-value {{
-    color: #fff;
-}}
-.footer {{
-    text-align: center;
-    color: #666;
-    font-size: 12px;
-    margin-top: 40px;
-}}
-</style>
-</head>
-<body>
-<div class="test-report">
-<div class="test-header">
-<div class="test-name">{test_name}</div>
-<div class="status {status_class}">{status}</div>
-</div>
-{metadata_html}
-{frames_html}
-<div class="footer">
-Generated: {datetime.now().isoformat()}
-</div>
-</div>
-</body>
-</html>"""
-
-
-def save_report(
-    test_name: str,
-    frames: list[list[str]],
-    output_dir: str = "test-reports",
-    status: str = "PASS",
-    duration_ms: float = 0.0,
-    metadata: dict[str, Any] | None = None,
-) -> str:
-    """Save HTML report to disk and return the file path."""
-    output_path = Path(output_dir)
-    output_path.mkdir(parents=True, exist_ok=True)
-
-    # Sanitize test name for filename
-    safe_name = "".join(c if c.isalnum() or c in "-_" else "_" for c in test_name)
-    filename = f"{safe_name}.html"
-    filepath = output_path / filename
-
-    html_content = generate_test_report(
-        test_name, frames, status, duration_ms, metadata
-    )
-    filepath.write_text(html_content)
-
-    return str(filepath)
-
-
-def save_index_report(
-    reports: list[dict[str, Any]],
-    output_dir: str = "test-reports",
-) -> str:
-    """Generate an index HTML page linking to all test reports."""
-    output_path = Path(output_dir)
-    output_path.mkdir(parents=True, exist_ok=True)
-
-    rows = ""
-    for report in reports:
-        safe_name = "".join(
-            c if c.isalnum() or c in "-_" else "_" for c in report["test_name"]
-        )
-        filename = f"{safe_name}.html"
-        status_class = "pass" if report["status"] == "PASS" else "fail"
-        rows += f"""
-<tr>
-<td><a href="{filename}">{report["test_name"]}</a></td>
-<td class="status {status_class}">{report["status"]}</td>
-<td>{report.get("duration_ms", 0):.1f}ms</td>
-<td>{report.get("frame_count", 0)}</td>
-</tr>
-"""
-
-    html = f"""<!DOCTYPE html>
-<html>
-<head>
-<meta charset="UTF-8">
-<title>Acceptance Test Reports</title>
-<style>
-body {{
-    font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
-    background: #1a1a2e;
-    color: #eee;
-    margin: 0;
-    padding: 40px;
-}}
-h1 {{
-    color: #fff;
-    margin-bottom: 30px;
-}}
-table {{
-    width: 100%;
-    border-collapse: collapse;
-}}
-th, td {{
-    padding: 12px;
-    text-align: left;
-    border-bottom: 1px solid #333;
-}}
-th {{
-    background: #16213e;
-    color: #888;
-    font-weight: normal;
-}}
-a {{
-    color: #4dabf7;
-    text-decoration: none;
-}}
-a:hover {{
-    text-decoration: underline;
-}}
-.status {{
-    padding: 4px 8px;
-    border-radius: 4px;
-    font-size: 12px;
-    font-weight: bold;
-}}
-.status.pass {{
-    background: #28a745;
-    color: white;
-}}
-.status.fail {{
-    background: #dc3545;
-    color: white;
-}}
-</style>
-</head>
-<body>
-<h1>Acceptance Test Reports</h1>
-<table>
-<thead>
-<tr>
-<th>Test</th>
-<th>Status</th>
-<th>Duration</th>
-<th>Frames</th>
-</tr>
-</thead>
-<tbody>
-{rows}
-</tbody>
-</table>
-</body>
-</html>"""
-
-    index_path = output_path / "index.html"
-    index_path.write_text(html)
-    return str(index_path)
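For reference, the `ansi_to_rgb()` helper deleted above can be exercised standalone. This copy keeps the file's own arithmetic (including its 51-step cube scaling), but abbreviates the 16-color table to three entries for brevity, so other base codes fall back to white:

```python
# Abbreviated 16-color table (the original file lists all 16 entries).
ANSI_16 = {0: (0, 0, 0), 9: (255, 0, 0), 15: (255, 255, 255)}


def ansi_to_rgb(color_code: int) -> tuple[int, int, int]:
    """Map an ANSI 256-color code to an RGB tuple."""
    if 0 <= color_code <= 15:
        return ANSI_16.get(color_code, (255, 255, 255))
    elif 16 <= color_code <= 231:  # 6x6x6 color cube
        color_code -= 16
        r = (color_code // 36) * 51
        g = ((color_code % 36) // 6) * 51
        b = (color_code % 6) * 51
        return (r, g, b)
    elif 232 <= color_code <= 255:  # 24-step grayscale ramp
        gray = (color_code - 232) * 10 + 8
        return (gray, gray, gray)
    return (255, 255, 255)


assert ansi_to_rgb(196) == (255, 0, 0)      # cube: pure red
assert ansi_to_rgb(244) == (128, 128, 128)  # grayscale ramp: mid gray
```

Note the uniform `* 51` scaling is an approximation of the xterm palette, whose cube levels are actually 0, 95, 135, 175, 215, 255; the sketch reproduces the deleted file's behavior, not the exact terminal colors.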
@@ -1,290 +0,0 @@
|
|||||||
"""
|
|
||||||
Acceptance tests for HUD visibility and positioning.
|
|
||||||
|
|
||||||
These tests verify that HUD appears in the final output frame.
|
|
||||||
Frames are captured and saved as HTML reports for visual verification.
|
|
||||||
"""
|
|
||||||
|
|
||||||
import queue
|
|
||||||
|
|
||||||
from engine.data_sources.sources import ListDataSource, SourceItem
|
|
||||||
from engine.effects.plugins.hud import HudEffect
|
|
||||||
from engine.pipeline import Pipeline, PipelineConfig
|
|
||||||
from engine.pipeline.adapters import (
|
|
||||||
DataSourceStage,
|
|
||||||
DisplayStage,
|
|
||||||
EffectPluginStage,
|
|
||||||
SourceItemsToBufferStage,
|
|
||||||
)
|
|
||||||
from engine.pipeline.core import PipelineContext
|
|
||||||
from engine.pipeline.params import PipelineParams
|
|
||||||
from tests.acceptance_report import save_report
|
|
||||||
|
|
||||||
|
|
||||||
class FrameCaptureDisplay:
|
|
||||||
"""Display that captures frames for HTML report generation."""
|
|
||||||
|
|
||||||
def __init__(self):
|
|
||||||
self.frames: queue.Queue[list[str]] = queue.Queue()
|
|
||||||
self.width = 80
|
|
||||||
self.height = 24
|
|
||||||
self._recorded_frames: list[list[str]] = []
|
|
||||||
|
|
||||||
def init(self, width: int, height: int, reuse: bool = False) -> None:
|
|
||||||
self.width = width
|
|
||||||
self.height = height
|
|
||||||
|
|
||||||
def show(self, buffer: list[str], border: bool = False) -> None:
|
|
||||||
self._recorded_frames.append(list(buffer))
|
|
||||||
self.frames.put(list(buffer))
|
|
||||||
|
|
||||||
def clear(self) -> None:
|
|
||||||
pass
|
|
||||||
|
|
||||||
def cleanup(self) -> None:
|
|
||||||
pass
|
|
||||||
|
|
||||||
def get_dimensions(self) -> tuple[int, int]:
|
|
||||||
return (self.width, self.height)
|
|
||||||
|
|
||||||
def get_recorded_frames(self) -> list[list[str]]:
|
|
||||||
return self._recorded_frames
|
|
||||||
|
|
||||||
|
|
||||||
def _build_pipeline_with_hud(
    items: list[SourceItem],
) -> tuple[Pipeline, FrameCaptureDisplay, PipelineContext]:
    """Build a pipeline with HUD effect."""
    display = FrameCaptureDisplay()

    ctx = PipelineContext()
    params = PipelineParams()
    params.viewport_width = display.width
    params.viewport_height = display.height
    params.frame_number = 0
    params.effect_order = ["noise", "hud"]
    params.effect_enabled = {"noise": False}
    ctx.params = params

    pipeline = Pipeline(
        config=PipelineConfig(
            source="list",
            display="terminal",
            effects=["hud"],
            enable_metrics=True,
        ),
        context=ctx,
    )

    source = ListDataSource(items, name="test-source")
    pipeline.add_stage("source", DataSourceStage(source, name="test-source"))
    pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))

    hud_effect = HudEffect()
    pipeline.add_stage("hud", EffectPluginStage(hud_effect, name="hud"))

    pipeline.add_stage("display", DisplayStage(display, name="terminal"))

    pipeline.build()
    pipeline.initialize()

    return pipeline, display, ctx

class TestHUDAcceptance:
    """Acceptance tests for HUD visibility."""

    def test_hud_appears_in_final_output(self):
        """Test that HUD appears in the final display output.

        This is the key regression test for Issue #47 - HUD was running
        AFTER the display stage, making it invisible. Now it should appear
        in the frame captured by the display.
        """
        items = [SourceItem(content="Test content line", source="test", timestamp="0")]
        pipeline, display, ctx = _build_pipeline_with_hud(items)

        result = pipeline.execute(items)
        assert result.success, f"Pipeline execution failed: {result.error}"

        frame = display.frames.get(timeout=1)
        frame_text = "\n".join(frame)

        assert "MAINLINE" in frame_text, "HUD header not found in final output"
        assert "EFFECT:" in frame_text, "EFFECT line not found in final output"
        assert "PIPELINE:" in frame_text, "PIPELINE line not found in final output"

        save_report(
            test_name="test_hud_appears_in_final_output",
            frames=display.get_recorded_frames(),
            status="PASS",
            metadata={
                "description": "Verifies HUD appears in final display output (Issue #47 fix)",
                "frame_lines": len(frame),
                "has_mainline": "MAINLINE" in frame_text,
                "has_effect": "EFFECT:" in frame_text,
                "has_pipeline": "PIPELINE:" in frame_text,
            },
        )

    def test_hud_cursor_positioning(self):
        """Test that HUD uses correct cursor positioning."""
        items = [SourceItem(content="Sample content", source="test", timestamp="0")]
        pipeline, display, ctx = _build_pipeline_with_hud(items)

        result = pipeline.execute(items)
        assert result.success

        frame = display.frames.get(timeout=1)
        has_cursor_pos = any("\x1b[" in line and "H" in line for line in frame)

        save_report(
            test_name="test_hud_cursor_positioning",
            frames=display.get_recorded_frames(),
            status="PASS",
            metadata={
                "description": "Verifies HUD uses cursor positioning",
                "has_cursor_positioning": has_cursor_pos,
            },
        )

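The cursor-positioning check above looks for the ANSI CSI "cursor position" sequence, `ESC [ row ; col H` (1-based coordinates), which is how a HUD overlay addresses a fixed screen location. A small sketch of building and detecting such sequences; the helper names are illustrative:

```python
import re

# CSI "cursor position": ESC [ row ; col H  (1-based row and column).
CURSOR_POS = re.compile(r"\x1b\[(\d+);(\d+)H")


def move_to(row: int, col: int) -> str:
    """Build the escape sequence that places the cursor at (row, col)."""
    return f"\x1b[{row};{col}H"


line = move_to(1, 1) + "MAINLINE"
match = CURSOR_POS.search(line)
assert match is not None
assert match.groups() == ("1", "1")
```

The test's looser check (`"\x1b[" in line and "H" in line`) accepts any CSI sequence ending in `H`; a regex like the one above would pin down the exact row and column.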
class TestCameraSpeedAcceptance:
    """Acceptance tests for camera speed modulation."""

    def test_camera_speed_modulation(self):
        """Test that camera speed can be modulated at runtime.

        This verifies the camera speed modulation feature added in Phase 1.
        """
        from engine.camera import Camera
        from engine.pipeline.adapters import CameraClockStage, CameraStage

        display = FrameCaptureDisplay()
        items = [
            SourceItem(content=f"Line {i}", source="test", timestamp=str(i))
            for i in range(50)
        ]

        ctx = PipelineContext()
        params = PipelineParams()
        params.viewport_width = display.width
        params.viewport_height = display.height
        params.frame_number = 0
        params.camera_speed = 1.0
        ctx.params = params

        pipeline = Pipeline(
            config=PipelineConfig(
                source="list",
                display="terminal",
                camera="scroll",
                enable_metrics=False,
            ),
            context=ctx,
        )

        source = ListDataSource(items, name="test")
        pipeline.add_stage("source", DataSourceStage(source, name="test"))
        pipeline.add_stage("render", SourceItemsToBufferStage(name="render"))

        camera = Camera.scroll(speed=0.5)
        pipeline.add_stage(
            "camera_update", CameraClockStage(camera, name="camera-clock")
        )
        pipeline.add_stage("camera", CameraStage(camera, name="camera"))
        pipeline.add_stage("display", DisplayStage(display, name="terminal"))

        pipeline.build()
        pipeline.initialize()

        initial_camera_speed = camera.speed

        for _ in range(3):
            pipeline.execute(items)

        speed_after_first_run = camera.speed

        params.camera_speed = 5.0
        ctx.params = params

        for _ in range(3):
            pipeline.execute(items)

        speed_after_increase = camera.speed

        assert speed_after_increase == 5.0, (
            f"Camera speed should be modulated to 5.0, got {speed_after_increase}"
        )

        params.camera_speed = 0.0
        ctx.params = params

        for _ in range(3):
            pipeline.execute(items)

        speed_after_stop = camera.speed
        assert speed_after_stop == 0.0, (
            f"Camera speed should be 0.0, got {speed_after_stop}"
        )

        save_report(
            test_name="test_camera_speed_modulation",
            frames=display.get_recorded_frames()[:5],
            status="PASS",
            metadata={
                "description": "Verifies camera speed can be modulated at runtime",
                "initial_camera_speed": initial_camera_speed,
                "speed_after_first_run": speed_after_first_run,
                "speed_after_increase": speed_after_increase,
                "speed_after_stop": speed_after_stop,
            },
        )

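The modulation test drives a simple contract: a clock-style stage copies `params.camera_speed` onto the camera on every pipeline execution, so writes to the shared params object take effect on the very next run. A toy sketch of that contract; these classes are illustrative stand-ins, not the engine's `Camera` or `CameraClockStage`:

```python
from dataclasses import dataclass


@dataclass
class Params:
    camera_speed: float = 1.0


class ScrollCamera:
    def __init__(self, speed: float) -> None:
        self.speed = speed
        self.offset = 0.0

    def tick(self, params: Params) -> None:
        # Re-read the shared parameter every frame so runtime changes take
        # effect immediately; setting it to 0.0 freezes the scroll.
        self.speed = params.camera_speed
        self.offset += self.speed


params = Params(camera_speed=1.0)
cam = ScrollCamera(speed=0.5)
for _ in range(3):
    cam.tick(params)        # scrolls at 1.0 per frame
params.camera_speed = 5.0
for _ in range(3):
    cam.tick(params)        # scrolls at 5.0 per frame
assert cam.speed == 5.0
```

Note the construction-time speed (0.5) is overwritten on the first tick, which mirrors why the test records `initial_camera_speed` separately from `speed_after_first_run`.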
class TestEmptyLinesAcceptance:
    """Acceptance tests for empty line handling."""

    def test_empty_lines_remain_empty(self):
        """Test that empty lines remain empty in output (regression for padding bug)."""
        items = [
            SourceItem(content="Line1\n\nLine3\n\nLine5", source="test", timestamp="0")
        ]

        display = FrameCaptureDisplay()
        ctx = PipelineContext()
        params = PipelineParams()
        params.viewport_width = display.width
        params.viewport_height = display.height
        ctx.params = params

        pipeline = Pipeline(
            config=PipelineConfig(enable_metrics=False),
            context=ctx,
        )

        source = ListDataSource(items, name="test")
        pipeline.add_stage("source", DataSourceStage(source, name="test"))
        pipeline.add_stage("render", SourceItemsToBufferStage(name="render"))
        pipeline.add_stage("display", DisplayStage(display, name="terminal"))

        pipeline.build()
        pipeline.initialize()

        result = pipeline.execute(items)
        assert result.success

        frame = display.frames.get(timeout=1)
        has_truly_empty = any(not line for line in frame)

        save_report(
            test_name="test_empty_lines_remain_empty",
            frames=display.get_recorded_frames(),
            status="PASS",
            metadata={
                "description": "Verifies empty lines remain empty (not padded)",
                "has_truly_empty_lines": has_truly_empty,
            },
        )

        assert has_truly_empty, f"Expected at least one empty line, got: {frame[1]!r}"
@@ -69,11 +69,9 @@ class TestRunPipelineMode:
         """run_pipeline_mode() exits if no content can be fetched."""
         with (
             patch("engine.app.pipeline_runner.load_cache", return_value=None),
-            patch("engine.app.pipeline_runner.fetch_all_fast", return_value=[]),
             patch(
                 "engine.app.pipeline_runner.fetch_all", return_value=([], None, None)
-            ),  # Mock background thread
-            patch("engine.app.pipeline_runner.save_cache"),  # Prevent disk I/O
+            ),
             patch("engine.effects.plugins.discover_plugins"),
             pytest.raises(SystemExit) as exc_info,
         ):
@@ -88,7 +86,6 @@ class TestRunPipelineMode:
                 "engine.app.pipeline_runner.load_cache", return_value=cached
             ) as mock_load,
             patch("engine.app.pipeline_runner.fetch_all") as mock_fetch,
-            patch("engine.app.pipeline_runner.fetch_all_fast"),
             patch("engine.app.pipeline_runner.DisplayRegistry.create") as mock_create,
         ):
             mock_display = Mock()
@@ -112,8 +109,7 @@ class TestRunPipelineMode:
     def test_run_pipeline_mode_creates_display(self):
         """run_pipeline_mode() creates a display backend."""
         with (
-            patch("engine.app.pipeline_runner.load_cache", return_value=["item"]),
-            patch("engine.app.pipeline_runner.fetch_all_fast", return_value=[]),
+            patch("engine.app.load_cache", return_value=["item"]),
             patch("engine.app.DisplayRegistry.create") as mock_create,
         ):
             mock_display = Mock()
@@ -138,8 +134,7 @@ class TestRunPipelineMode:
         sys.argv = ["mainline.py", "--display", "websocket"]

         with (
-            patch("engine.app.pipeline_runner.load_cache", return_value=["item"]),
-            patch("engine.app.pipeline_runner.fetch_all_fast", return_value=[]),
+            patch("engine.app.load_cache", return_value=["item"]),
             patch("engine.app.DisplayRegistry.create") as mock_create,
         ):
             mock_display = Mock()
@@ -168,7 +163,6 @@ class TestRunPipelineMode:
                 return_value=(["poem"], None, None),
             ) as mock_fetch_poetry,
             patch("engine.app.pipeline_runner.fetch_all") as mock_fetch_all,
-            patch("engine.app.pipeline_runner.fetch_all_fast", return_value=[]),
             patch("engine.app.pipeline_runner.DisplayRegistry.create") as mock_create,
         ):
             mock_display = Mock()
@@ -193,7 +187,6 @@ class TestRunPipelineMode:
         """run_pipeline_mode() discovers available effect plugins."""
         with (
             patch("engine.app.pipeline_runner.load_cache", return_value=["item"]),
-            patch("engine.app.pipeline_runner.fetch_all_fast", return_value=[]),
             patch("engine.effects.plugins.discover_plugins") as mock_discover,
             patch("engine.app.pipeline_runner.DisplayRegistry.create") as mock_create,
         ):
@@ -31,12 +31,12 @@ class TestFetchFeed:

     @patch("engine.fetch.urllib.request.urlopen")
     def test_fetch_network_error(self, mock_urlopen):
-        """Network error returns tuple with None feed."""
+        """Network error returns None."""
         mock_urlopen.side_effect = Exception("Network error")

-        url, feed = fetch_feed("http://example.com/feed")
+        result = fetch_feed("http://example.com/feed")

-        assert feed is None
+        assert result is None


 class TestFetchAll:
@@ -54,7 +54,7 @@ class TestFetchAll:
             {"title": "Headline 1", "published_parsed": (2024, 1, 1, 12, 0, 0)},
             {"title": "Headline 2", "updated_parsed": (2024, 1, 2, 12, 0, 0)},
         ]
-        mock_fetch_feed.return_value = ("http://example.com", mock_feed)
+        mock_fetch_feed.return_value = mock_feed
         mock_skip.return_value = False
         mock_strip.side_effect = lambda x: x

@@ -67,7 +67,7 @@ class TestFetchAll:
     @patch("engine.fetch.boot_ln")
     def test_fetch_all_feed_error(self, mock_boot, mock_fetch_feed):
         """Feed error increments failed count."""
-        mock_fetch_feed.return_value = ("http://example.com", None)
+        mock_fetch_feed.return_value = None

         items, linked, failed = fetch_all()

@@ -87,7 +87,7 @@ class TestFetchAll:
             {"title": "Sports scores"},
             {"title": "Valid headline"},
         ]
-        mock_fetch_feed.return_value = ("http://example.com", mock_feed)
+        mock_fetch_feed.return_value = mock_feed
         mock_skip.side_effect = lambda x: x == "Sports scores"
         mock_strip.side_effect = lambda x: x

@@ -1772,73 +1772,3 @@ class TestPipelineMutation:
         result = pipeline.execute(None)
         assert result.success
         assert call_log == ["source", "display"]
-
-
-class TestAutoInjection:
-    """Tests for auto-injection of minimum capabilities."""
-
-    def setup_method(self):
-        """Reset registry before each test."""
-        StageRegistry._discovered = False
-        StageRegistry._categories.clear()
-        StageRegistry._instances.clear()
-        discover_stages()
-
-    def test_auto_injection_provides_minimum_capabilities(self):
-        """Pipeline with no stages gets minimum capabilities auto-injected."""
-        pipeline = Pipeline()
-        # Don't add any stages
-        pipeline.build(auto_inject=True)
-
-        # Should have stages for source, render, camera, display
-        assert len(pipeline.stages) > 0
-        assert "source" in pipeline.stages
-        assert "display" in pipeline.stages
-
-    def test_auto_injection_rebuilds_execution_order(self):
-        """Auto-injection rebuilds execution order correctly."""
-        pipeline = Pipeline()
-        pipeline.build(auto_inject=True)
-
-        # Execution order should be valid
-        assert len(pipeline.execution_order) > 0
-        # Source should come before display
-        source_idx = pipeline.execution_order.index("source")
-        display_idx = pipeline.execution_order.index("display")
-        assert source_idx < display_idx
-
-    def test_validation_error_after_auto_injection(self):
-        """Pipeline raises error if auto-injection fails to provide capabilities."""
-        from unittest.mock import patch
-
-        pipeline = Pipeline()
-
-        # Mock ensure_minimum_capabilities to return empty list (injection failed)
-        with (
-            patch.object(pipeline, "ensure_minimum_capabilities", return_value=[]),
-            patch.object(
-                pipeline,
-                "validate_minimum_capabilities",
-                return_value=(False, ["source"]),
-            ),
-        ):
-            # Even though injection "ran", it didn't provide the capability
-            # build() should raise StageError
-            with pytest.raises(StageError) as exc_info:
-                pipeline.build(auto_inject=True)
-
-            assert "Auto-injection failed" in str(exc_info.value)
-
-    def test_minimum_capability_removal_recovery(self):
-        """Pipeline re-injects minimum capability if removed."""
-        pipeline = Pipeline()
-        pipeline.build(auto_inject=True)
-
-        # Remove the display stage
-        pipeline.remove_stage("display", cleanup=True)
-
-        # Rebuild with auto-injection
-        pipeline.build(auto_inject=True)
-
-        # Display should be back
-        assert "display" in pipeline.stages
@@ -218,10 +218,9 @@ class TestPipelineE2EHappyPath:

         assert result.success
         frame = display.frames.get(timeout=1)
-        # Camera stage pads lines to viewport width, so check for substring match
-        assert any("Line A" in line for line in frame)
-        assert any("Line B" in line for line in frame)
-        assert any("Line C" in line for line in frame)
+        assert "Line A" in frame
+        assert "Line B" in frame
+        assert "Line C" in frame

     def test_empty_source_produces_empty_buffer(self):
         """An empty source should produce an empty (or blank) frame."""
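The hunk above trades between two membership styles. Because a captured frame is a `list[str]`, `"Line A" in frame` is an exact element comparison, which stops matching as soon as a camera stage pads each line to the viewport width; the `any(...)` form does a per-line substring search and still matches. A minimal illustration of the difference:

```python
# A captured frame is a list of lines; padding each line to the viewport
# width breaks exact element membership but not substring search.
frame = ["Line A".ljust(20), "Line B".ljust(20)]

assert "Line A" not in frame                     # exact match: padded line != "Line A"
assert any("Line A" in line for line in frame)   # substring match still finds it
```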
@@ -264,10 +263,7 @@ class TestPipelineE2EEffects:

         assert result.success
         frame = display.frames.get(timeout=1)
-        # Camera stage pads lines to viewport width, so check for substring match
-        assert any("[FX1]" in line for line in frame), (
-            f"Marker not found in frame: {frame}"
-        )
+        assert "[FX1]" in frame, f"Marker not found in frame: {frame}"
         assert "Original" in "\n".join(frame)

     def test_effect_chain_ordering(self):
@@ -391,7 +387,7 @@ class TestPipelineE2EStageOrder:
         # All regular (non-overlay) stages should have metrics
         assert "source" in stage_names
         assert "render" in stage_names
-        assert "queue" in stage_names  # Display stage is named "queue" in the test
+        assert "display" in stage_names
         assert "effect_m" in stage_names


@@ -1,259 +0,0 @@
-"""
-Integration tests for pipeline mutation commands via WebSocket/UI panel.
-
-Tests the mutation API through the command interface.
-"""
-
-from unittest.mock import Mock
-
-from engine.app.pipeline_runner import _handle_pipeline_mutation
-from engine.pipeline import Pipeline
-from engine.pipeline.ui import UIConfig, UIPanel
-
-
-class TestPipelineMutationCommands:
-    """Test pipeline mutation commands through the mutation API."""
-
-    def test_can_hot_swap_existing_stage(self):
-        """Test can_hot_swap returns True for existing, non-critical stage."""
-        pipeline = Pipeline()
-
-        # Add a test stage
-        mock_stage = Mock()
-        mock_stage.capabilities = {"test_capability"}
-        pipeline.add_stage("test_stage", mock_stage)
-        pipeline._capability_map = {"test_capability": ["test_stage"]}
-
-        # Test that we can check hot-swap capability
-        result = pipeline.can_hot_swap("test_stage")
-        assert result is True
-
-    def test_can_hot_swap_nonexistent_stage(self):
-        """Test can_hot_swap returns False for non-existent stage."""
-        pipeline = Pipeline()
-        result = pipeline.can_hot_swap("nonexistent_stage")
-        assert result is False
-
-    def test_can_hot_swap_minimum_capability(self):
-        """Test can_hot_swap with minimum capability stage."""
-        pipeline = Pipeline()
-
-        # Add a source stage (minimum capability)
-        mock_stage = Mock()
-        mock_stage.capabilities = {"source"}
-        pipeline.add_stage("source", mock_stage)
-        pipeline._capability_map = {"source": ["source"]}
-
-        # Initialize pipeline to trigger capability validation
-        pipeline._initialized = True
-
-        # Source is the only provider of minimum capability
-        result = pipeline.can_hot_swap("source")
-        # Should be False because it's the sole provider of a minimum capability
-        assert result is False
-
-    def test_cleanup_stage(self):
-        """Test cleanup_stage calls cleanup on specific stage."""
-        pipeline = Pipeline()
-
-        # Add a stage with a mock cleanup method
-        mock_stage = Mock()
-        pipeline.add_stage("test_stage", mock_stage)
-
-        # Cleanup the specific stage
-        pipeline.cleanup_stage("test_stage")
-
-        # Verify cleanup was called
-        mock_stage.cleanup.assert_called_once()
-
-    def test_cleanup_stage_nonexistent(self):
-        """Test cleanup_stage on non-existent stage doesn't crash."""
-        pipeline = Pipeline()
-        pipeline.cleanup_stage("nonexistent_stage")
-        # Should not raise an exception
-
-    def test_remove_stage_rebuilds_execution_order(self):
-        """Test that remove_stage rebuilds execution order."""
-        pipeline = Pipeline()
-
-        # Add two independent stages
-        stage1 = Mock()
-        stage1.capabilities = {"source"}
-        stage1.dependencies = set()
-        stage1.stage_dependencies = []  # Add empty list for stage dependencies
-
-        stage2 = Mock()
-        stage2.capabilities = {"render.output"}
-        stage2.dependencies = set()  # No dependencies
-        stage2.stage_dependencies = []  # No stage dependencies
-
-        pipeline.add_stage("stage1", stage1)
-        pipeline.add_stage("stage2", stage2)
-
-        # Build pipeline to establish execution order
-        pipeline._initialized = True
-        pipeline._capability_map = {"source": ["stage1"], "render.output": ["stage2"]}
-        pipeline._execution_order = ["stage1", "stage2"]
-
-        # Remove stage1
-        pipeline.remove_stage("stage1")
-
-        # Verify execution order was rebuilt
-        assert "stage1" not in pipeline._execution_order
-        assert "stage2" in pipeline._execution_order
-
-    def test_handle_pipeline_mutation_remove_stage(self):
-        """Test _handle_pipeline_mutation with remove_stage command."""
-        pipeline = Pipeline()
-
-        # Add a mock stage
-        mock_stage = Mock()
-        pipeline.add_stage("test_stage", mock_stage)
-
-        # Create remove command
-        command = {"action": "remove_stage", "stage": "test_stage"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled and stage was removed
-        assert result is True
-        assert "test_stage" not in pipeline._stages
-
-    def test_handle_pipeline_mutation_swap_stages(self):
-        """Test _handle_pipeline_mutation with swap_stages command."""
-        pipeline = Pipeline()
-
-        # Add two mock stages
-        stage1 = Mock()
-        stage2 = Mock()
-        pipeline.add_stage("stage1", stage1)
-        pipeline.add_stage("stage2", stage2)
-
-        # Create swap command
-        command = {"action": "swap_stages", "stage1": "stage1", "stage2": "stage2"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled
-        assert result is True
-
-    def test_handle_pipeline_mutation_enable_stage(self):
-        """Test _handle_pipeline_mutation with enable_stage command."""
-        pipeline = Pipeline()
-
-        # Add a mock stage with set_enabled method
-        mock_stage = Mock()
-        mock_stage.set_enabled = Mock()
-        pipeline.add_stage("test_stage", mock_stage)
-
-        # Create enable command
-        command = {"action": "enable_stage", "stage": "test_stage"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled
-        assert result is True
-        mock_stage.set_enabled.assert_called_once_with(True)
-
-    def test_handle_pipeline_mutation_disable_stage(self):
-        """Test _handle_pipeline_mutation with disable_stage command."""
-        pipeline = Pipeline()
-
-        # Add a mock stage with set_enabled method
-        mock_stage = Mock()
-        mock_stage.set_enabled = Mock()
-        pipeline.add_stage("test_stage", mock_stage)
-
-        # Create disable command
-        command = {"action": "disable_stage", "stage": "test_stage"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled
-        assert result is True
-        mock_stage.set_enabled.assert_called_once_with(False)
-
-    def test_handle_pipeline_mutation_cleanup_stage(self):
-        """Test _handle_pipeline_mutation with cleanup_stage command."""
-        pipeline = Pipeline()
-
-        # Add a mock stage
-        mock_stage = Mock()
-        pipeline.add_stage("test_stage", mock_stage)
-
-        # Create cleanup command
-        command = {"action": "cleanup_stage", "stage": "test_stage"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled and cleanup was called
-        assert result is True
-        mock_stage.cleanup.assert_called_once()
-
-    def test_handle_pipeline_mutation_can_hot_swap(self):
-        """Test _handle_pipeline_mutation with can_hot_swap command."""
-        pipeline = Pipeline()
-
-        # Add a mock stage
-        mock_stage = Mock()
-        mock_stage.capabilities = {"test"}
-        pipeline.add_stage("test_stage", mock_stage)
-        pipeline._capability_map = {"test": ["test_stage"]}
-
-        # Create can_hot_swap command
-        command = {"action": "can_hot_swap", "stage": "test_stage"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled
-        assert result is True
-
-    def test_handle_pipeline_mutation_move_stage(self):
-        """Test _handle_pipeline_mutation with move_stage command."""
-        pipeline = Pipeline()
-
-        # Add two mock stages
-        stage1 = Mock()
-        stage2 = Mock()
-        pipeline.add_stage("stage1", stage1)
-        pipeline.add_stage("stage2", stage2)
-
-        # Initialize execution order
-        pipeline._execution_order = ["stage1", "stage2"]
-
-        # Create move command to move stage1 after stage2
-        command = {"action": "move_stage", "stage": "stage1", "after": "stage2"}
-
-        # Handle the mutation
-        result = _handle_pipeline_mutation(pipeline, command)
-
-        # Verify it was handled (result might be True or False depending on validation)
-        # The key is that the command was processed
-        assert result in (True, False)
-
-    def test_ui_panel_execute_command_mutation_actions(self):
-        """Test UI panel execute_command with mutation actions."""
-        ui_panel = UIPanel(UIConfig())
-
-        # Test that mutation actions return False (not handled by UI panel)
-        # These should be handled by the WebSocket command handler instead
-        mutation_actions = [
-            {"action": "remove_stage", "stage": "test"},
-            {"action": "swap_stages", "stage1": "a", "stage2": "b"},
-            {"action": "enable_stage", "stage": "test"},
-            {"action": "disable_stage", "stage": "test"},
-            {"action": "cleanup_stage", "stage": "test"},
-            {"action": "can_hot_swap", "stage": "test"},
-        ]
-
-        for command in mutation_actions:
-            result = ui_panel.execute_command(command)
-            assert result is False, (
-                f"Mutation action {command['action']} should not be handled by UI panel"
-            )
@@ -2,6 +2,7 @@
 Tests for streaming protocol utilities.
 """

+
 from engine.display.streaming import (
     FrameDiff,
     MessageType,
@@ -1,3 +1,4 @@
+
 from engine.effects.legacy import vis_offset, vis_trunc
