Mainline/AGENTS.md
David Gwilliam ff08b1d6f5 feat: Complete Pipeline Mutation API implementation
- Add can_hot_swap() function to Pipeline class
- Add cleanup_stage() method to Pipeline class
- Fix remove_stage() to rebuild execution order after removal
- Extend ui_panel.execute_command() with docstrings for mutation commands
- Update WebSocket handler to support pipeline mutation commands
- Add _handle_pipeline_mutation() function for command routing
- Add comprehensive integration tests in test_pipeline_mutation_commands.py
- Update AGENTS.md with mutation API documentation

Issue: #35 (Pipeline Mutation API)
Acceptance criteria met:
- can_hot_swap() checker for stage compatibility
- cleanup_stage() cleans up specific stages
- remove_stage_safe() rebuilds execution order (via remove_stage)
- Unit tests for all operations
- Integration with WebSocket commands
- Documentation in AGENTS.md
2026-03-19 04:33:00 -07:00


Agent Development Guide

Development Environment

This project uses:

  • mise (mise.jdx.dev) - tool version manager and task runner
  • uv - fast Python package installer
  • ruff - linter and formatter (line-length 88, target Python 3.10)
  • pytest - test runner with strict marker enforcement

Setup

mise run install          # Install dependencies
# Or: uv sync --all-extras   # includes mic, websocket support

Available Commands

# Testing
mise run test       # Run all tests
mise run test-cov   # Run tests with coverage report
pytest tests/test_foo.py::TestClass::test_method  # Run single test

# Linting & Formatting
mise run lint       # Run ruff linter
mise run lint-fix   # Run ruff with auto-fix
mise run format     # Run ruff formatter

# CI
mise run ci         # Full CI pipeline (topics-init + lint + test-cov)

Running a Single Test

# Run a specific test function
pytest tests/test_eventbus.py::TestEventBusInit::test_init_creates_empty_subscribers

# Run all tests in a file
pytest tests/test_eventbus.py

# Run tests matching a pattern
pytest -k "test_subscribe"

Git Hooks

Install hooks at the start of each session:

ls -la .git/hooks/pre-commit  # Verify installed
hk init --mise                # Install if missing
mise run pre-commit           # Run manually

Code Style Guidelines

Imports (three sections, alphabetical within each)

# 1. Standard library
import os
import threading
from abc import ABC, abstractmethod
from collections import defaultdict
from collections.abc import Callable
from dataclasses import dataclass, field
from typing import Any

# 2. Third-party (optional extras, e.g. websockets)
import websockets

# 3. Local project
from engine.events import EventType

Type Hints

  • Use type hints for all function signatures (parameters and return)
  • Use | for unions (Python 3.10+): EventType | None
  • Use dict[K, V], list[V] (generic syntax): dict[str, list[int]]
  • Use Callable[[ArgType], ReturnType] for callbacks

def subscribe(self, event_type: EventType, callback: Callable[[Any], None]) -> None:
    ...

def get_sensor_value(self, sensor_name: str) -> float | None:
    return self._state.get(f"sensor.{sensor_name}")

Naming Conventions

  • Classes: PascalCase (e.g., EventBus, EffectPlugin)
  • Functions/methods: snake_case (e.g., get_event_bus, process_partial)
  • Constants: SCREAMING_SNAKE_CASE (e.g., CURSOR_OFF)
  • Private methods: _snake_case prefix (e.g., _initialize)
  • Type variables: PascalCase (e.g., T, EffectT)

Dataclasses

Use @dataclass for simple data containers:

@dataclass
class EffectContext:
    terminal_width: int
    terminal_height: int
    scroll_cam: int
    ticker_height: int = 0
    _state: dict[str, Any] = field(default_factory=dict, repr=False)

Abstract Base Classes

Use ABC for interface enforcement:

class EffectPlugin(ABC):
    name: str
    config: EffectConfig
    
    @abstractmethod
    def process(self, buf: list[str], ctx: EffectContext) -> list[str]:
        ...
    
    @abstractmethod
    def configure(self, config: EffectConfig) -> None:
        ...

Error Handling

  • Catch specific exceptions, not bare Exception
  • Use try/except with fallbacks for optional features
  • Silent pass in event callbacks to prevent one handler from breaking others

# Good: specific exception
try:
    term_size = os.get_terminal_size()
except OSError:
    term_width = 80

# Good: silent pass in callbacks
for callback in callbacks:
    try:
        callback(event)
    except Exception:
        pass

Thread Safety

Use locks for shared state:

class EventBus:
    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers: dict[EventType, list[Callable[[Any], None]]] = defaultdict(list)

    def publish(self, event_type: EventType, event: Any = None) -> None:
        with self._lock:
            callbacks = list(self._subscribers.get(event_type, []))
        for callback in callbacks:  # invoke outside the lock to avoid deadlocks
            callback(event)

Comments

  • DO NOT ADD comments unless explicitly required
  • Let code be self-documenting with good naming
  • Use docstrings only for public APIs or complex logic

Testing Patterns

Follow pytest conventions:

class TestEventBusSubscribe:
    """Tests for EventBus.subscribe method."""
    
    def test_subscribe_adds_callback(self):
        """subscribe() adds a callback for an event type."""
        bus = EventBus()
        def callback(e):
            return None
        bus.subscribe(EventType.NTFY_MESSAGE, callback)
        assert bus.subscriber_count(EventType.NTFY_MESSAGE) == 1

  • Use classes to group related tests (Test<ClassName>, Test<method_name>)
  • Test docstrings follow the "<method>() <action>" pattern
  • Rely on pytest's assertion introspection for descriptive failure messages

Workflow Rules

Before Committing

  1. Run tests: mise run test
  2. Run linter: mise run lint
  3. Review changes: git diff

On Failing Tests

  • Out-of-date test: Update test to match new expected behavior
  • Correctly failing test: Fix implementation, not the test

Never modify a test to make it pass without understanding why it failed.

Testing

Tests live in tests/ and follow the pattern test_*.py.

Run all tests:

mise run test

Run with coverage:

mise run test-cov

The project uses pytest with strict marker enforcement. Test configuration is in pyproject.toml under [tool.pytest.ini_options].

Test Coverage Strategy

Current coverage: 56% (463 tests)

Key areas with lower coverage (acceptable for now):

  • app.py (8%): Main entry point - integration heavy, requires terminal
  • scroll.py (10%): Terminal-dependent rendering logic (unused)

Key areas with good coverage:

  • display/backends/null.py (95%): Easy to test headlessly
  • display/backends/terminal.py (96%): Uses mocking
  • display/backends/multi.py (100%): Simple forwarding logic
  • effects/performance.py (99%): Pure Python logic
  • eventbus.py (96%): Simple event system
  • effects/controller.py (95%): Effects command handling

Areas needing more tests:

  • websocket.py (48%): Network I/O, hard to test in CI
  • ntfy.py (50%): Network I/O, hard to test in CI
  • mic.py (61%): Audio I/O, hard to test in CI

Note: Terminal-dependent modules (scroll, layers render) are harder to test in CI. Performance regression tests are in tests/test_benchmark.py with @pytest.mark.benchmark.

Architecture Notes

  • ntfy.py - standalone notification poller with zero internal dependencies
  • sensors/ - Sensor framework (MicSensor, OscillatorSensor) for real-time input
  • eventbus.py provides thread-safe event publishing for decoupled communication
  • effects/ - plugin architecture with performance monitoring
  • The new pipeline architecture: source → render → effects → display

Canvas & Camera

  • Canvas (engine/canvas.py): 2D rendering surface with dirty region tracking
  • Camera (engine/camera.py): Viewport controller for scrolling content

The Canvas tracks dirty regions automatically when content is written (via put_region, put_text, fill), enabling partial buffer updates for optimized effect processing.
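
As a rough illustration of the idea (a simplified stand-in, not the actual Canvas API; the class and method names below are invented):

```python
from dataclasses import dataclass, field


@dataclass
class DirtyCanvas:
    """Sketch of dirty-region tracking: writes mark rows for partial redraw."""
    width: int
    height: int
    _rows: list[list[str]] = field(init=False, repr=False)
    _dirty: set[int] = field(default_factory=set, repr=False)

    def __post_init__(self):
        self._rows = [[" "] * self.width for _ in range(self.height)]

    def put_text(self, x: int, y: int, text: str) -> None:
        for i, ch in enumerate(text[: self.width - x]):
            self._rows[y][x + i] = ch
        self._dirty.add(y)  # record the row as changed

    def flush_dirty(self) -> list[int]:
        """Return and clear the rows that changed since the last flush."""
        rows, self._dirty = sorted(self._dirty), set()
        return rows
```

Downstream consumers only reprocess the rows returned by the flush, which is what makes the partial effect processing cheap.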

Pipeline Architecture

The new Stage-based pipeline architecture provides capability-based dependency resolution:

  • Stage (engine/pipeline/core.py): Base class for pipeline stages
  • Pipeline (engine/pipeline/controller.py): Executes stages with capability-based dependency resolution
  • StageRegistry (engine/pipeline/registry.py): Discovers and registers stages
  • Stage Adapters (engine/pipeline/adapters.py): Wrap existing components as stages

Capability-Based Dependencies

Stages declare capabilities (what they provide) and dependencies (what they need). The Pipeline resolves dependencies using prefix matching:

  • "source" matches "source.headlines", "source.poetry", etc.
  • This allows flexible composition without hardcoding specific stage names

Sensor Framework

  • Sensor (engine/sensors/__init__.py): Base class for real-time input sensors
  • SensorRegistry: Discovers available sensors
  • SensorStage: Pipeline adapter that provides sensor values to effects
  • MicSensor (engine/sensors/mic.py): Self-contained microphone input
  • OscillatorSensor (engine/sensors/oscillator.py): Test sensor for development
  • PipelineMetricsSensor (engine/sensors/pipeline_metrics.py): Exposes pipeline metrics as sensor values

Sensors support param bindings to drive effect parameters in real-time.
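
Conceptually, a param binding maps a live sensor reading onto an effect parameter each tick. A hypothetical sketch (the names and binding shape here are illustrative, not the project's binding schema):

```python
import math
import time


class OscillatorSensor:
    """Stand-in test sensor: a sine wave normalized to [0, 1]."""

    def __init__(self, period_s: float = 2.0):
        self.period_s = period_s

    def read(self) -> float:
        phase = (time.monotonic() % self.period_s) / self.period_s
        return (math.sin(2 * math.pi * phase) + 1) / 2


def apply_binding(sensor, effect_params: dict, param: str, scale: float = 1.0) -> None:
    """Drive one effect parameter from the current sensor value."""
    effect_params[param] = sensor.read() * scale
```

Calling apply_binding once per frame keeps the bound parameter tracking the sensor in real time.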

Pipeline Introspection

  • PipelineIntrospectionSource (engine/data_sources/pipeline_introspection.py): Renders live ASCII visualization of pipeline DAG with metrics
  • PipelineIntrospectionDemo (engine/pipeline/pipeline_introspection_demo.py): 3-phase demo controller for effect animation

Preset: pipeline-inspect - Live pipeline introspection with DAG and performance metrics

Partial Update Support

Effect plugins can opt-in to partial buffer updates for performance optimization:

  • Set supports_partial_updates = True on the effect class
  • Implement process_partial(buf, ctx, partial) method
  • The PartialUpdate dataclass indicates which regions changed
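
The opt-in shape might look like this (a sketch: only supports_partial_updates and process_partial(buf, ctx, partial) are named in this document; the PartialUpdate field dirty_rows is an assumption):

```python
from dataclasses import dataclass


@dataclass
class PartialUpdate:
    """Hypothetical stand-in: row indices that changed since the last frame."""
    dirty_rows: list[int]


class HighlightEffect:
    supports_partial_updates = True  # opt in to partial processing

    def process(self, buf: list[str], ctx: object) -> list[str]:
        # Full-buffer path, used when no partial information is available.
        return [line.upper() for line in buf]

    def process_partial(self, buf: list[str], ctx: object, partial: PartialUpdate) -> list[str]:
        # Only reprocess the rows the pipeline reported as dirty.
        for y in partial.dirty_rows:
            buf[y] = buf[y].upper()
        return buf
```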

Preset System

Presets use TOML format (no external dependencies):

  • Built-in: engine/presets.toml
  • User config: ~/.config/mainline/presets.toml
  • Local override: ./presets.toml
  • Preset loader (engine/pipeline/preset_loader.py): Loads and validates presets
  • PipelinePreset (engine/pipeline/presets.py): Dataclass for preset configuration

Functions:

  • validate_preset() - Validate preset structure
  • validate_signal_path() - Detect circular dependencies
  • generate_preset_toml() - Generate skeleton preset
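
Circular-dependency detection of the kind validate_signal_path performs can be sketched as a depth-first search over the stage graph (a generic illustration, not the project's implementation):

```python
def has_cycle(graph: dict[str, list[str]]) -> bool:
    """Return True if the dependency graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / finished
    color = {node: WHITE for node in graph}

    def visit(node: str) -> bool:
        color[node] = GRAY
        for dep in graph.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                return True  # back edge: dep is already on the current path
            if color.get(dep, WHITE) == WHITE and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)
```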

Display System

  • Display abstraction (engine/display/): swap display backends via the Display protocol

    • display/backends/terminal.py - ANSI terminal output
    • display/backends/websocket.py - broadcasts to web clients via WebSocket
    • display/backends/null.py - headless display for testing
    • display/backends/multi.py - forwards to multiple displays simultaneously
    • display/backends/moderngl.py - GPU-accelerated OpenGL rendering (optional)
    • display/__init__.py - DisplayRegistry for backend discovery
  • WebSocket display (engine/display/backends/websocket.py): real-time frame broadcasting to web browsers

    • WebSocket server on port 8765
    • HTTP server on port 8766 (serves HTML client)
    • Client at client/index.html with ANSI color parsing and fullscreen support
  • Display modes (--display flag):

    • terminal - Default ANSI terminal output
    • websocket - Web browser display (requires websockets package)
    • moderngl - GPU-accelerated rendering (requires moderngl package)

Effect Plugin System

  • EffectPlugin ABC (engine/effects/types.py): abstract base class for effects

    • All effects must inherit from EffectPlugin and implement process() and configure()
    • Runtime discovery via effects_plugins/__init__.py using issubclass() checks
  • EffectRegistry (engine/effects/registry.py): manages registered effects

  • EffectChain (engine/effects/chain.py): chains effects in pipeline order
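
Putting the ABC from the Code Style section together, a new effect might look like this (a sketch with stand-in base types so the snippet is self-contained; the real EffectPlugin and EffectConfig live in engine/effects/types.py):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class EffectConfig:  # stand-in for engine.effects.types.EffectConfig
    intensity: float = 1.0


class EffectPlugin(ABC):  # stand-in for engine.effects.types.EffectPlugin
    name: str
    config: EffectConfig

    @abstractmethod
    def process(self, buf: list[str], ctx: object) -> list[str]: ...

    @abstractmethod
    def configure(self, config: EffectConfig) -> None: ...


class ReverseEffect(EffectPlugin):
    """Reverses each line; picked up by issubclass() checks at discovery time."""
    name = "reverse"

    def configure(self, config: EffectConfig) -> None:
        self.config = config

    def process(self, buf: list[str], ctx: object) -> list[str]:
        if self.config.intensity <= 0:
            return buf
        return [line[::-1] for line in buf]
```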

Command & Control

  • C&C uses separate ntfy topics for commands and responses
  • NTFY_CC_CMD_TOPIC - commands from cmdline.py
  • NTFY_CC_RESP_TOPIC - responses back to cmdline.py
  • Effects controller handles /effects commands (list, on/off, intensity, reorder, stats)

Pipeline Documentation

The rendering pipeline is documented in docs/PIPELINE.md using Mermaid diagrams.

IMPORTANT: When making significant architectural changes to the rendering pipeline (new layers, effects, display backends), update docs/PIPELINE.md to reflect the changes:

  1. Edit docs/PIPELINE.md with the new architecture
  2. If adding new SVG diagrams, render them manually using an external tool (e.g., Mermaid Live Editor)
  3. Commit both the markdown and any new diagram files

Pipeline Mutation API

The Pipeline class supports dynamic mutation during runtime via the mutation API:

Core Methods:

  • add_stage(name, stage, initialize=True) - Add a stage to the pipeline
  • remove_stage(name, cleanup=True) - Remove a stage and rebuild execution order
  • replace_stage(name, new_stage, preserve_state=True) - Replace a stage with another
  • swap_stages(name1, name2) - Swap two stages
  • move_stage(name, after=None, before=None) - Move a stage in execution order
  • enable_stage(name) - Enable a stage
  • disable_stage(name) - Disable a stage

New Methods (Issue #35):

  • cleanup_stage(name) - Clean up specific stage without removing it
  • remove_stage_safe(name, cleanup=True) - Alias for remove_stage that explicitly rebuilds
  • can_hot_swap(name) - Check if a stage can be safely hot-swapped
    • Returns False for stages that are the sole provider of a minimum capability
    • Returns True for swappable stages

WebSocket Commands: Commands can be sent via WebSocket to mutate the pipeline at runtime:

{"action": "remove_stage", "stage": "stage_name"}
{"action": "swap_stages", "stage1": "name1", "stage2": "name2"}
{"action": "enable_stage", "stage": "stage_name"}
{"action": "disable_stage", "stage": "stage_name"}
{"action": "cleanup_stage", "stage": "stage_name"}
{"action": "can_hot_swap", "stage": "stage_name"}

Implementation Files:

  • engine/pipeline/controller.py - Pipeline class with mutation methods
  • engine/app/pipeline_runner.py - _handle_pipeline_mutation() function
  • engine/pipeline/ui.py - execute_command() with docstrings
  • tests/test_pipeline_mutation_commands.py - Integration tests

Skills Library

A skills library MCP server (skills) is available for capturing and tracking learned knowledge. Skills are stored in ~/.skills/.

Workflow

Before starting work:

  1. Run skills_list_skills to see available skills
  2. Use skills_peek_skill({name: "skill-name"}) to preview relevant skills
  3. Use skills_skill_slice({name: "skill-name", query: "your question"}) to get relevant sections

While working:

  • If a skill was wrong or incomplete: skills_update_skill → skills_record_assessment → skills_report_outcome({quality: 1})
  • If a skill worked correctly: skills_report_outcome({quality: 4}) (normal) or quality: 5 (perfect)

End of session:

  • Run skills_reflect_on_session({context_summary: "what you did"}) to identify new skills to capture
  • Use skills_create_skill to add new skills
  • Use skills_record_assessment to score them

Useful Tools

  • skills_review_stale_skills() - Skills due for review (negative days_until_due)
  • skills_skills_report() - Overview of entire collection
  • skills_validate_skill({name: "skill-name"}) - Load skill for review with sources

Agent Skills

This project also has Agent Skills (SKILL.md files) in .opencode/skills/. Use the skill tool to load them:

  • skill({name: "mainline-architecture"}) - Pipeline stages, capability resolution
  • skill({name: "mainline-effects"}) - How to add new effect plugins
  • skill({name: "mainline-display"}) - Display backend implementation
  • skill({name: "mainline-sources"}) - Adding new RSS feeds
  • skill({name: "mainline-presets"}) - Creating pipeline presets
  • skill({name: "mainline-sensors"}) - Sensor framework usage