forked from genewildish/Mainline
Compare commits
18 Commits
7c26150408
...
f136bd75f1
| Author | SHA1 | Date | |
|---|---|---|---|
| f136bd75f1 | |||
| 860bab6550 | |||
| f568cc1a73 | |||
| 7d4623b009 | |||
| c999a9a724 | |||
| 6c06f12c5a | |||
| b058160e9d | |||
| b28cd154c7 | |||
| 66f4957c24 | |||
| afee03f693 | |||
| a747f67f63 | |||
| 018778dd11 | |||
| 4acd7b3344 | |||
| 2976839f7b | |||
| ead4cc3d5a | |||
| 1010f5868e | |||
| fff87382f6 | |||
| b3ac72884d |
2
.gitignore
vendored
2
.gitignore
vendored
@@ -13,3 +13,5 @@ coverage.xml
|
||||
*.dot
|
||||
*.png
|
||||
test-reports/
|
||||
.opencode/
|
||||
tests/comparison_output/
|
||||
|
||||
158
analysis/visual_output_comparison.md
Normal file
158
analysis/visual_output_comparison.md
Normal file
@@ -0,0 +1,158 @@
|
||||
# Visual Output Comparison: Upstream/Main vs Sideline
|
||||
|
||||
## Summary
|
||||
|
||||
A comprehensive comparison of visual output between `upstream/main` and the sideline branch (`feature/capability-based-deps`) reveals fundamental architectural differences in how content is rendered and displayed.
|
||||
|
||||
## Captured Outputs
|
||||
|
||||
### Sideline (Pipeline Architecture)
|
||||
- **File**: `output/sideline_demo.json`
|
||||
- **Format**: Plain text lines without ANSI cursor positioning
|
||||
- **Content**: Readable headlines with gradient colors applied
|
||||
|
||||
### Upstream/Main (Monolithic Architecture)
|
||||
- **File**: `output/upstream_demo.json`
|
||||
- **Format**: Lines with explicit ANSI cursor positioning codes
|
||||
- **Content**: Cursor positioning codes + block characters + ANSI colors
|
||||
|
||||
## Key Architectural Differences
|
||||
|
||||
### 1. Buffer Content Structure
|
||||
|
||||
**Sideline Pipeline:**
|
||||
```python
|
||||
# Each line is plain text with ANSI colors
|
||||
buffer = [
|
||||
"The Download: OpenAI is building...",
|
||||
"OpenAI is throwing everything...",
|
||||
# ... more lines
|
||||
]
|
||||
```
|
||||
|
||||
**Upstream Monolithic:**
|
||||
```python
|
||||
# Each line includes cursor positioning
|
||||
buffer = [
|
||||
"\033[10;1H \033[2;38;5;238mユ\033[0m \033[2;38;5;37mモ\033[0m ...",
|
||||
"\033[11;1H\033[K", # Clear line 11
|
||||
# ... more lines with positioning
|
||||
]
|
||||
```
|
||||
|
||||
### 2. Rendering Approach
|
||||
|
||||
**Sideline (Pipeline Architecture):**
|
||||
- Stages produce plain text buffers
|
||||
- Display backend handles cursor positioning
|
||||
- `TerminalDisplay.show()` prepends `\033[H\033[J` (home + clear)
|
||||
- Lines are appended sequentially
|
||||
|
||||
**Upstream (Monolithic Architecture):**
|
||||
- `render_ticker_zone()` produces buffers with explicit positioning
|
||||
- Each line includes `\033[{row};1H` to position cursor
|
||||
- Display backend writes buffer directly to stdout
|
||||
- Lines are positioned explicitly in the buffer
|
||||
|
||||
### 3. Content Rendering
|
||||
|
||||
**Sideline:**
|
||||
- Headlines rendered as plain text
|
||||
- Gradient colors applied via ANSI codes
|
||||
- Ticker effect via camera/viewport filtering
|
||||
|
||||
**Upstream:**
|
||||
- Headlines rendered as block characters (▀, ▄, █, etc.)
|
||||
- Japanese katakana glyphs used for glitch effect
|
||||
- Explicit row positioning for each line
|
||||
|
||||
## Visual Output Analysis
|
||||
|
||||
### Sideline Frame 0 (First 5 lines):
|
||||
```
|
||||
Line 0: 'The Download: OpenAI is building a fully automated researcher...'
|
||||
Line 1: 'OpenAI is throwing everything into building a fully automated...'
|
||||
Line 2: 'Mind-altering substances are (still) falling short in clinical...'
|
||||
Line 3: 'The Download: Quantum computing for health...'
|
||||
Line 4: 'Can quantum computers now solve health care problems...'
|
||||
```
|
||||
|
||||
### Upstream Frame 0 (First 5 lines):
|
||||
```
|
||||
Line 0: ''
|
||||
Line 1: '\x1b[2;1H\x1b[K'
|
||||
Line 2: '\x1b[3;1H\x1b[K'
|
||||
Line 3: '\x1b[4;1H\x1b[2;38;5;238m \x1b[0m \x1b[2;38;5;238mリ\x1b[0m ...'
|
||||
Line 4: '\x1b[5;1H\x1b[K'
|
||||
```
|
||||
|
||||
## Implications for Visual Comparison
|
||||
|
||||
### Challenges with Direct Comparison
|
||||
1. **Different buffer formats**: Plain text vs. positioned ANSI codes
|
||||
2. **Different rendering pipelines**: Pipeline stages vs. monolithic functions
|
||||
3. **Different content generation**: Headlines vs. block characters
|
||||
|
||||
### Approaches for Visual Verification
|
||||
|
||||
#### Option 1: Render and Compare Terminal Output
|
||||
- Run both branches with `TerminalDisplay`
|
||||
- Capture terminal output (not buffer)
|
||||
- Compare visual rendering
|
||||
- **Challenge**: Requires actual terminal rendering
|
||||
|
||||
#### Option 2: Normalize Buffers for Comparison
|
||||
- Convert upstream positioned buffers to plain text
|
||||
- Strip ANSI cursor positioning codes
|
||||
- Compare normalized content
|
||||
- **Challenge**: Loses positioning information
|
||||
|
||||
#### Option 3: Functional Equivalence Testing
|
||||
- Verify features work the same way
|
||||
- Test message overlay rendering
|
||||
- Test effect application
|
||||
- **Challenge**: Doesn't verify exact visual match
|
||||
|
||||
## Recommendations
|
||||
|
||||
### For Exact Visual Match
|
||||
1. **Update sideline to match upstream architecture**:
|
||||
- Change `MessageOverlayStage` to return positioned buffers
|
||||
- Update terminal display to handle positioned buffers
|
||||
- This requires significant refactoring
|
||||
|
||||
2. **Accept architectural differences**:
|
||||
- The sideline pipeline architecture is fundamentally different
|
||||
- Visual differences are expected and acceptable
|
||||
- Focus on functional equivalence
|
||||
|
||||
### For Functional Verification
|
||||
1. **Test message overlay rendering**:
|
||||
- Verify message appears in correct position
|
||||
- Verify gradient colors are applied
|
||||
- Verify metadata bar is displayed
|
||||
|
||||
2. **Test effect rendering**:
|
||||
- Verify glitch effect applies block characters
|
||||
- Verify firehose effect renders correctly
|
||||
- Verify figment effect integrates properly
|
||||
|
||||
3. **Test pipeline execution**:
|
||||
- Verify stage execution order
|
||||
- Verify capability resolution
|
||||
- Verify dependency injection
|
||||
|
||||
## Conclusion
|
||||
|
||||
The visual output comparison reveals that `sideline` and `upstream/main` use fundamentally different rendering architectures:
|
||||
|
||||
- **Upstream**: Explicit cursor positioning in buffer, monolithic rendering
|
||||
- **Sideline**: Plain text buffer, display handles positioning, pipeline rendering
|
||||
|
||||
These differences are **architectural**, not bugs. The sideline branch has successfully adapted the upstream features to a new pipeline architecture.
|
||||
|
||||
### Next Steps
|
||||
1. ✅ Document architectural differences (this file)
|
||||
2. ⏳ Create functional tests for visual verification
|
||||
3. ⏳ Update Gitea issue #50 with findings
|
||||
4. ⏳ Consider whether to adapt sideline to match upstream rendering style
|
||||
99
completion/mainline-completion.bash
Normal file
99
completion/mainline-completion.bash
Normal file
@@ -0,0 +1,99 @@
|
||||
# Mainline bash completion script
|
||||
#
|
||||
# To install:
|
||||
# source /path/to/completion/mainline-completion.bash
|
||||
#
|
||||
# Or add to ~/.bashrc:
|
||||
# source /path/to/completion/mainline-completion.bash
|
||||
|
||||
_mainline_completion() {
|
||||
local cur prev words cword
|
||||
_init_completion || return
|
||||
|
||||
# Get current word and previous word
|
||||
cur="${COMP_WORDS[COMP_CWORD]}"
|
||||
prev="${COMP_WORDS[COMP_CWORD-1]}"
|
||||
|
||||
# Completion options based on previous word
|
||||
case "${prev}" in
|
||||
--display)
|
||||
# Display backends
|
||||
COMPREPLY=($(compgen -W "terminal null replay websocket pygame moderngl" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--pipeline-source)
|
||||
# Available sources
|
||||
COMPREPLY=($(compgen -W "headlines poetry empty fixture pipeline-inspect" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--pipeline-effects)
|
||||
# Available effects (comma-separated)
|
||||
local effects="afterimage border crop fade firehose glitch hud motionblur noise tint"
|
||||
COMPREPLY=($(compgen -W "${effects}" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--pipeline-camera)
|
||||
# Camera modes
|
||||
COMPREPLY=($(compgen -W "feed scroll horizontal omni floating bounce radial" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--pipeline-border)
|
||||
# Border modes
|
||||
COMPREPLY=($(compgen -W "off simple ui" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--pipeline-display)
|
||||
# Display backends (same as --display)
|
||||
COMPREPLY=($(compgen -W "terminal null replay websocket pygame moderngl" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--theme)
|
||||
# Theme colors
|
||||
COMPREPLY=($(compgen -W "green orange purple blue red" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--viewport)
|
||||
# Viewport size suggestions
|
||||
COMPREPLY=($(compgen -W "80x24 100x30 120x40 60x20" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
|
||||
--preset)
|
||||
# Presets (would need to query available presets)
|
||||
COMPREPLY=($(compgen -W "demo demo-base demo-pygame demo-camera-showcase poetry headlines empty test-basic test-border test-scroll-camera" -- "${cur}"))
|
||||
return
|
||||
;;
|
||||
esac
|
||||
|
||||
# Flag completion (start with --)
|
||||
if [[ "${cur}" == -* ]]; then
|
||||
COMPREPLY=($(compgen -W "
|
||||
--display
|
||||
--pipeline-source
|
||||
--pipeline-effects
|
||||
--pipeline-camera
|
||||
--pipeline-display
|
||||
--pipeline-ui
|
||||
--pipeline-border
|
||||
--viewport
|
||||
--preset
|
||||
--theme
|
||||
--websocket
|
||||
--websocket-port
|
||||
--allow-unsafe
|
||||
--help
|
||||
" -- "${cur}"))
|
||||
return
|
||||
fi
|
||||
}
|
||||
|
||||
complete -F _mainline_completion mainline.py
|
||||
complete -F _mainline_completion python\ -m\ engine.app
|
||||
complete -F _mainline_completion python\ -m\ mainline
|
||||
81
completion/mainline-completion.fish
Normal file
81
completion/mainline-completion.fish
Normal file
@@ -0,0 +1,81 @@
|
||||
# Fish completion script for Mainline
|
||||
#
|
||||
# To install:
|
||||
# source /path/to/completion/mainline-completion.fish
|
||||
#
|
||||
# Or copy to ~/.config/fish/completions/mainline.fish
|
||||
|
||||
# Define display backends
|
||||
set -l display_backends terminal null replay websocket pygame moderngl
|
||||
|
||||
# Define sources
|
||||
set -l sources headlines poetry empty fixture pipeline-inspect
|
||||
|
||||
# Define effects
|
||||
set -l effects afterimage border crop fade firehose glitch hud motionblur noise tint
|
||||
|
||||
# Define camera modes
|
||||
set -l cameras feed scroll horizontal omni floating bounce radial
|
||||
|
||||
# Define border modes
|
||||
set -l borders off simple ui
|
||||
|
||||
# Define themes
|
||||
set -l themes green orange purple blue red
|
||||
|
||||
# Define presets
|
||||
set -l presets demo demo-base demo-pygame demo-camera-showcase poetry headlines empty test-basic test-border test-scroll-camera test-figment test-message-overlay
|
||||
|
||||
# Main completion function
|
||||
function __mainline_complete
|
||||
set -l cmd (commandline -po)
|
||||
set -l token (commandline -t)
|
||||
|
||||
# Complete display backends
|
||||
complete -c mainline.py -n '__fish_seen_argument --display' -a "$display_backends" -d 'Display backend'
|
||||
|
||||
# Complete sources
|
||||
complete -c mainline.py -n '__fish_seen_argument --pipeline-source' -a "$sources" -d 'Data source'
|
||||
|
||||
# Complete effects
|
||||
complete -c mainline.py -n '__fish_seen_argument --pipeline-effects' -a "$effects" -d 'Effect plugin'
|
||||
|
||||
# Complete camera modes
|
||||
complete -c mainline.py -n '__fish_seen_argument --pipeline-camera' -a "$cameras" -d 'Camera mode'
|
||||
|
||||
# Complete display backends (pipeline)
|
||||
complete -c mainline.py -n '__fish_seen_argument --pipeline-display' -a "$display_backends" -d 'Display backend'
|
||||
|
||||
# Complete border modes
|
||||
complete -c mainline.py -n '__fish_seen_argument --pipeline-border' -a "$borders" -d 'Border mode'
|
||||
|
||||
# Complete themes
|
||||
complete -c mainline.py -n '__fish_seen_argument --theme' -a "$themes" -d 'Color theme'
|
||||
|
||||
# Complete presets
|
||||
complete -c mainline.py -n '__fish_seen_argument --preset' -a "$presets" -d 'Preset name'
|
||||
|
||||
# Complete viewport sizes
|
||||
complete -c mainline.py -n '__fish_seen_argument --viewport' -a '80x24 100x30 120x40 60x20' -d 'Viewport size (WxH)'
|
||||
|
||||
# Complete flag options
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --display' -l display -d 'Display backend' -a "$display_backends"
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --preset' -l preset -d 'Preset to use' -a "$presets"
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --viewport' -l viewport -d 'Viewport size (WxH)' -a '80x24 100x30 120x40 60x20'
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --theme' -l theme -d 'Color theme' -a "$themes"
|
||||
complete -c mainline.py -l websocket -d 'Enable WebSocket server'
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --websocket-port' -l websocket-port -d 'WebSocket port' -a '8765'
|
||||
complete -c mainline.py -l allow-unsafe -d 'Allow unsafe pipeline configuration'
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --help' -l help -d 'Show help'
|
||||
|
||||
# Pipeline-specific flags
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --pipeline-source' -l pipeline-source -d 'Data source' -a "$sources"
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --pipeline-effects' -l pipeline-effects -d 'Effect plugins (comma-separated)' -a "$effects"
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --pipeline-camera' -l pipeline-camera -d 'Camera mode' -a "$cameras"
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --pipeline-display' -l pipeline-display -d 'Display backend' -a "$display_backends"
|
||||
complete -c mainline.py -l pipeline-ui -d 'Enable UI panel'
|
||||
complete -c mainline.py -n 'not __fish_seen_argument --pipeline-border' -l pipeline-border -d 'Border mode' -a "$borders"
|
||||
end
|
||||
|
||||
# Register the completion function
|
||||
__mainline_complete
|
||||
48
completion/mainline-completion.zsh
Normal file
48
completion/mainline-completion.zsh
Normal file
@@ -0,0 +1,48 @@
|
||||
#compdef mainline.py
|
||||
|
||||
# Mainline zsh completion script
|
||||
#
|
||||
# To install:
|
||||
# source /path/to/completion/mainline-completion.zsh
|
||||
#
|
||||
# Or add to ~/.zshrc:
|
||||
# source /path/to/completion/mainline-completion.zsh
|
||||
|
||||
# Define completion function
|
||||
_mainline() {
|
||||
local -a commands
|
||||
local curcontext="$curcontext" state line
|
||||
typeset -A opt_args
|
||||
|
||||
_arguments -C \
|
||||
'(-h --help)'{-h,--help}'[Show help]' \
|
||||
'--display=[Display backend]:backend:(terminal null replay websocket pygame moderngl)' \
|
||||
'--preset=[Preset to use]:preset:(demo demo-base demo-pygame demo-camera-showcase poetry headlines empty test-basic test-border test-scroll-camera test-figment test-message-overlay)' \
|
||||
'--viewport=[Viewport size]:size:(80x24 100x30 120x40 60x20)' \
|
||||
'--theme=[Color theme]:theme:(green orange purple blue red)' \
|
||||
'--websocket[Enable WebSocket server]' \
|
||||
'--websocket-port=[WebSocket port]:port:' \
|
||||
'--allow-unsafe[Allow unsafe pipeline configuration]' \
|
||||
'(-)*: :{_files}' \
|
||||
&& ret=0
|
||||
|
||||
# Handle --pipeline-* arguments
|
||||
if [[ -n ${words[*]} ]]; then
|
||||
_arguments -C \
|
||||
'--pipeline-source=[Data source]:source:(headlines poetry empty fixture pipeline-inspect)' \
|
||||
'--pipeline-effects=[Effect plugins]:effects:(afterimage border crop fade firehose glitch hud motionblur noise tint)' \
|
||||
'--pipeline-camera=[Camera mode]:camera:(feed scroll horizontal omni floating bounce radial)' \
|
||||
'--pipeline-display=[Display backend]:backend:(terminal null replay websocket pygame moderngl)' \
|
||||
'--pipeline-ui[Enable UI panel]' \
|
||||
'--pipeline-border=[Border mode]:mode:(off simple ui)' \
|
||||
'--viewport=[Viewport size]:size:(80x24 100x30 120x40 60x20)' \
|
||||
&& ret=0
|
||||
fi
|
||||
|
||||
return ret
|
||||
}
|
||||
|
||||
# Register completion function
|
||||
compdef _mainline mainline.py
|
||||
compdef _mainline "python -m engine.app"
|
||||
compdef _mainline "python -m mainline"
|
||||
@@ -254,6 +254,16 @@ def run_pipeline_mode_direct():
|
||||
|
||||
# Create display using validated display name
|
||||
display_name = result.config.display or "terminal" # Default to terminal if empty
|
||||
|
||||
# Warn if display was auto-selected (not explicitly specified)
|
||||
if not display_name:
|
||||
print(
|
||||
" \033[38;5;226mWarning: No --pipeline-display specified, using default: terminal\033[0m"
|
||||
)
|
||||
print(
|
||||
" \033[38;5;245mTip: Use --pipeline-display null for headless mode (useful for testing)\033[0m"
|
||||
)
|
||||
|
||||
display = DisplayRegistry.create(display_name)
|
||||
if not display:
|
||||
print(f" \033[38;5;196mFailed to create display: {display_name}\033[0m")
|
||||
|
||||
@@ -12,6 +12,7 @@ from engine.fetch import fetch_all, fetch_all_fast, fetch_poetry, load_cache, sa
|
||||
from engine.pipeline import Pipeline, PipelineConfig, PipelineContext, get_preset
|
||||
from engine.pipeline.adapters import (
|
||||
EffectPluginStage,
|
||||
MessageOverlayStage,
|
||||
SourceItemsToBufferStage,
|
||||
create_stage_from_display,
|
||||
create_stage_from_effect,
|
||||
@@ -188,10 +189,19 @@ def run_pipeline_mode(preset_name: str = "demo"):
|
||||
# CLI --display flag takes priority over preset
|
||||
# Check if --display was explicitly provided
|
||||
display_name = preset.display
|
||||
if "--display" in sys.argv:
|
||||
display_explicitly_specified = "--display" in sys.argv
|
||||
if display_explicitly_specified:
|
||||
idx = sys.argv.index("--display")
|
||||
if idx + 1 < len(sys.argv):
|
||||
display_name = sys.argv[idx + 1]
|
||||
else:
|
||||
# Warn user that display is falling back to preset default
|
||||
print(
|
||||
f" \033[38;5;226mWarning: No --display specified, using preset default: {display_name}\033[0m"
|
||||
)
|
||||
print(
|
||||
" \033[38;5;245mTip: Use --display null for headless mode (useful for testing/capture)\033[0m"
|
||||
)
|
||||
|
||||
display = DisplayRegistry.create(display_name)
|
||||
if not display and not display_name.startswith("multi"):
|
||||
@@ -311,6 +321,24 @@ def run_pipeline_mode(preset_name: str = "demo"):
|
||||
f"effect_{effect_name}", create_stage_from_effect(effect, effect_name)
|
||||
)
|
||||
|
||||
# Add message overlay stage if enabled
|
||||
if getattr(preset, "enable_message_overlay", False):
|
||||
from engine import config as engine_config
|
||||
from engine.pipeline.adapters import MessageOverlayConfig
|
||||
|
||||
overlay_config = MessageOverlayConfig(
|
||||
enabled=True,
|
||||
display_secs=engine_config.MESSAGE_DISPLAY_SECS
|
||||
if hasattr(engine_config, "MESSAGE_DISPLAY_SECS")
|
||||
else 30,
|
||||
topic_url=engine_config.NTFY_TOPIC
|
||||
if hasattr(engine_config, "NTFY_TOPIC")
|
||||
else None,
|
||||
)
|
||||
pipeline.add_stage(
|
||||
"message_overlay", MessageOverlayStage(config=overlay_config)
|
||||
)
|
||||
|
||||
pipeline.add_stage("display", create_stage_from_display(display, display_name))
|
||||
|
||||
pipeline.build()
|
||||
@@ -625,6 +653,24 @@ def run_pipeline_mode(preset_name: str = "demo"):
|
||||
create_stage_from_effect(effect, effect_name),
|
||||
)
|
||||
|
||||
# Add message overlay stage if enabled
|
||||
if getattr(new_preset, "enable_message_overlay", False):
|
||||
from engine import config as engine_config
|
||||
from engine.pipeline.adapters import MessageOverlayConfig
|
||||
|
||||
overlay_config = MessageOverlayConfig(
|
||||
enabled=True,
|
||||
display_secs=engine_config.MESSAGE_DISPLAY_SECS
|
||||
if hasattr(engine_config, "MESSAGE_DISPLAY_SECS")
|
||||
else 30,
|
||||
topic_url=engine_config.NTFY_TOPIC
|
||||
if hasattr(engine_config, "NTFY_TOPIC")
|
||||
else None,
|
||||
)
|
||||
pipeline.add_stage(
|
||||
"message_overlay", MessageOverlayStage(config=overlay_config)
|
||||
)
|
||||
|
||||
# Add display (respect CLI override)
|
||||
display_name = new_preset.display
|
||||
if "--display" in sys.argv:
|
||||
|
||||
@@ -132,6 +132,7 @@ class Config:
|
||||
display: str = "pygame"
|
||||
websocket: bool = False
|
||||
websocket_port: int = 8765
|
||||
theme: str = "green"
|
||||
|
||||
@classmethod
|
||||
def from_args(cls, argv: list[str] | None = None) -> "Config":
|
||||
@@ -175,6 +176,7 @@ class Config:
|
||||
display=_arg_value("--display", argv) or "terminal",
|
||||
websocket="--websocket" in argv,
|
||||
websocket_port=_arg_int("--websocket-port", 8765, argv),
|
||||
theme=_arg_value("--theme", argv) or "green",
|
||||
)
|
||||
|
||||
|
||||
@@ -246,6 +248,40 @@ DEMO = "--demo" in sys.argv
|
||||
DEMO_EFFECT_DURATION = 5.0 # seconds per effect
|
||||
PIPELINE_DEMO = "--pipeline-demo" in sys.argv
|
||||
|
||||
# ─── THEME MANAGEMENT ─────────────────────────────────────────
|
||||
ACTIVE_THEME = None
|
||||
|
||||
|
||||
def set_active_theme(theme_id: str = "green"):
|
||||
"""Set the active theme by ID.
|
||||
|
||||
Args:
|
||||
theme_id: Theme identifier from theme registry (e.g., "green", "orange", "purple")
|
||||
|
||||
Raises:
|
||||
KeyError: If theme_id is not in the theme registry
|
||||
|
||||
Side Effects:
|
||||
Sets the ACTIVE_THEME global variable
|
||||
"""
|
||||
global ACTIVE_THEME
|
||||
from engine import themes
|
||||
|
||||
ACTIVE_THEME = themes.get_theme(theme_id)
|
||||
|
||||
|
||||
# Initialize theme on module load (lazy to avoid circular dependency)
|
||||
def _init_theme():
|
||||
theme_id = _arg_value("--theme", sys.argv) or "green"
|
||||
try:
|
||||
set_active_theme(theme_id)
|
||||
except KeyError:
|
||||
pass # Theme not found, keep None
|
||||
|
||||
|
||||
_init_theme()
|
||||
|
||||
|
||||
# ─── PIPELINE MODE (new unified architecture) ─────────────
|
||||
PIPELINE_MODE = "--pipeline" in sys.argv
|
||||
PIPELINE_PRESET = _arg_value("--pipeline-preset", sys.argv) or "demo"
|
||||
@@ -256,6 +292,9 @@ PRESET = _arg_value("--preset", sys.argv)
|
||||
# ─── PIPELINE DIAGRAM ────────────────────────────────────
|
||||
PIPELINE_DIAGRAM = "--pipeline-diagram" in sys.argv
|
||||
|
||||
# ─── THEME ──────────────────────────────────────────────────
|
||||
THEME = _arg_value("--theme", sys.argv) or "green"
|
||||
|
||||
|
||||
def set_font_selection(font_path=None, font_index=None):
|
||||
"""Set runtime primary font selection."""
|
||||
|
||||
@@ -99,7 +99,6 @@ class PygameDisplay:
|
||||
self.width = width
|
||||
self.height = height
|
||||
|
||||
|
||||
try:
|
||||
import pygame
|
||||
except ImportError:
|
||||
|
||||
@@ -15,6 +15,7 @@ from .factory import (
|
||||
create_stage_from_font,
|
||||
create_stage_from_source,
|
||||
)
|
||||
from .message_overlay import MessageOverlayConfig, MessageOverlayStage
|
||||
from .transform import (
|
||||
CanvasStage,
|
||||
FontStage,
|
||||
@@ -35,6 +36,8 @@ __all__ = [
|
||||
"FontStage",
|
||||
"ImageToTextStage",
|
||||
"CanvasStage",
|
||||
"MessageOverlayStage",
|
||||
"MessageOverlayConfig",
|
||||
# Factory functions
|
||||
"create_stage_from_display",
|
||||
"create_stage_from_effect",
|
||||
|
||||
@@ -179,7 +179,7 @@ class CameraStage(Stage):
|
||||
|
||||
@property
|
||||
def dependencies(self) -> set[str]:
|
||||
return {"render.output"}
|
||||
return {"render.output", "camera.state"}
|
||||
|
||||
@property
|
||||
def inlet_types(self) -> set:
|
||||
|
||||
@@ -53,7 +53,8 @@ class DisplayStage(Stage):
|
||||
|
||||
@property
|
||||
def dependencies(self) -> set[str]:
|
||||
return {"render.output"} # Display needs rendered content
|
||||
# Display needs rendered content and camera transformation
|
||||
return {"render.output", "camera"}
|
||||
|
||||
@property
|
||||
def inlet_types(self) -> set:
|
||||
|
||||
185
engine/pipeline/adapters/message_overlay.py
Normal file
185
engine/pipeline/adapters/message_overlay.py
Normal file
@@ -0,0 +1,185 @@
|
||||
"""
|
||||
Message overlay stage - Renders ntfy messages as an overlay on the buffer.
|
||||
|
||||
This stage provides message overlay capability for displaying ntfy.sh messages
|
||||
as a centered panel with pink/magenta gradient, matching upstream/main aesthetics.
|
||||
"""
|
||||
|
||||
import re
|
||||
import time
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
|
||||
from engine import config
|
||||
from engine.effects.legacy import vis_trunc
|
||||
from engine.pipeline.core import DataType, PipelineContext, Stage
|
||||
from engine.render.blocks import big_wrap
|
||||
from engine.render.gradient import msg_gradient
|
||||
|
||||
|
||||
@dataclass
|
||||
class MessageOverlayConfig:
|
||||
"""Configuration for MessageOverlayStage."""
|
||||
|
||||
enabled: bool = True
|
||||
display_secs: int = 30 # How long to display messages
|
||||
topic_url: str | None = None # Ntfy topic URL (None = use config default)
|
||||
|
||||
|
||||
class MessageOverlayStage(Stage):
|
||||
"""Stage that renders ntfy message overlay on the buffer.
|
||||
|
||||
Provides:
|
||||
- message.overlay capability (optional)
|
||||
- Renders centered panel with pink/magenta gradient
|
||||
- Shows title, body, timestamp, and remaining time
|
||||
"""
|
||||
|
||||
name = "message_overlay"
|
||||
category = "overlay"
|
||||
|
||||
def __init__(
|
||||
self, config: MessageOverlayConfig | None = None, name: str = "message_overlay"
|
||||
):
|
||||
self.config = config or MessageOverlayConfig()
|
||||
self._ntfy_poller = None
|
||||
self._msg_cache = (None, None) # (cache_key, rendered_rows)
|
||||
|
||||
@property
|
||||
def capabilities(self) -> set[str]:
|
||||
"""Provides message overlay capability."""
|
||||
return {"message.overlay"} if self.config.enabled else set()
|
||||
|
||||
@property
|
||||
def dependencies(self) -> set[str]:
|
||||
"""Needs rendered buffer and camera transformation to overlay onto."""
|
||||
return {"render.output", "camera"}
|
||||
|
||||
@property
|
||||
def inlet_types(self) -> set:
|
||||
return {DataType.TEXT_BUFFER}
|
||||
|
||||
@property
|
||||
def outlet_types(self) -> set:
|
||||
return {DataType.TEXT_BUFFER}
|
||||
|
||||
def init(self, ctx: PipelineContext) -> bool:
|
||||
"""Initialize ntfy poller if topic URL is configured."""
|
||||
if not self.config.enabled:
|
||||
return True
|
||||
|
||||
# Get or create ntfy poller
|
||||
topic_url = self.config.topic_url or config.NTFY_TOPIC
|
||||
if topic_url:
|
||||
from engine.ntfy import NtfyPoller
|
||||
|
||||
self._ntfy_poller = NtfyPoller(
|
||||
topic_url=topic_url,
|
||||
reconnect_delay=getattr(config, "NTFY_RECONNECT_DELAY", 5),
|
||||
display_secs=self.config.display_secs,
|
||||
)
|
||||
self._ntfy_poller.start()
|
||||
ctx.set("ntfy_poller", self._ntfy_poller)
|
||||
|
||||
return True
|
||||
|
||||
def process(self, data: list[str], ctx: PipelineContext) -> list[str]:
|
||||
"""Render message overlay on the buffer."""
|
||||
if not self.config.enabled or not data:
|
||||
return data
|
||||
|
||||
# Get active message from poller
|
||||
msg = None
|
||||
if self._ntfy_poller:
|
||||
msg = self._ntfy_poller.get_active_message()
|
||||
|
||||
if msg is None:
|
||||
return data
|
||||
|
||||
# Render overlay
|
||||
w = ctx.terminal_width if hasattr(ctx, "terminal_width") else 80
|
||||
h = ctx.terminal_height if hasattr(ctx, "terminal_height") else 24
|
||||
|
||||
overlay, self._msg_cache = self._render_message_overlay(
|
||||
msg, w, h, self._msg_cache
|
||||
)
|
||||
|
||||
# Composite overlay onto buffer
|
||||
result = list(data)
|
||||
for line in overlay:
|
||||
# Overlay uses ANSI cursor positioning, just append
|
||||
result.append(line)
|
||||
|
||||
return result
|
||||
|
||||
def _render_message_overlay(
|
||||
self,
|
||||
msg: tuple[str, str, float] | None,
|
||||
w: int,
|
||||
h: int,
|
||||
msg_cache: tuple,
|
||||
) -> tuple[list[str], tuple]:
|
||||
"""Render ntfy message overlay.
|
||||
|
||||
Args:
|
||||
msg: (title, body, timestamp) or None
|
||||
w: terminal width
|
||||
h: terminal height
|
||||
msg_cache: (cache_key, rendered_rows) for caching
|
||||
|
||||
Returns:
|
||||
(list of ANSI strings, updated cache)
|
||||
"""
|
||||
overlay = []
|
||||
if msg is None:
|
||||
return overlay, msg_cache
|
||||
|
||||
m_title, m_body, m_ts = msg
|
||||
display_text = m_body or m_title or "(empty)"
|
||||
display_text = re.sub(r"\s+", " ", display_text.upper())
|
||||
|
||||
cache_key = (display_text, w)
|
||||
if msg_cache[0] != cache_key:
|
||||
msg_rows = big_wrap(display_text, w - 4)
|
||||
msg_cache = (cache_key, msg_rows)
|
||||
else:
|
||||
msg_rows = msg_cache[1]
|
||||
|
||||
msg_rows = msg_gradient(msg_rows, (time.monotonic() * config.GRAD_SPEED) % 1.0)
|
||||
|
||||
elapsed_s = int(time.monotonic() - m_ts)
|
||||
remaining = max(0, self.config.display_secs - elapsed_s)
|
||||
ts_str = datetime.now().strftime("%H:%M:%S")
|
||||
panel_h = len(msg_rows) + 2
|
||||
panel_top = max(0, (h - panel_h) // 2)
|
||||
|
||||
row_idx = 0
|
||||
for mr in msg_rows:
|
||||
ln = vis_trunc(mr, w)
|
||||
overlay.append(f"\033[{panel_top + row_idx + 1};1H {ln}\033[0m\033[K")
|
||||
row_idx += 1
|
||||
|
||||
meta_parts = []
|
||||
if m_title and m_title != m_body:
|
||||
meta_parts.append(m_title)
|
||||
meta_parts.append(f"ntfy \u00b7 {ts_str} \u00b7 {remaining}s")
|
||||
meta = (
|
||||
" " + " \u00b7 ".join(meta_parts)
|
||||
if len(meta_parts) > 1
|
||||
else " " + meta_parts[0]
|
||||
)
|
||||
overlay.append(
|
||||
f"\033[{panel_top + row_idx + 1};1H\033[38;5;245m{meta}\033[0m\033[K"
|
||||
)
|
||||
row_idx += 1
|
||||
|
||||
bar = "\u2500" * (w - 4)
|
||||
overlay.append(
|
||||
f"\033[{panel_top + row_idx + 1};1H \033[2;38;5;37m{bar}\033[0m\033[K"
|
||||
)
|
||||
|
||||
return overlay, msg_cache
|
||||
|
||||
def cleanup(self) -> None:
|
||||
"""Cleanup resources."""
|
||||
pass
|
||||
@@ -474,9 +474,10 @@ class Pipeline:
|
||||
not self._find_stage_with_capability("display.output")
|
||||
and "display" not in self._stages
|
||||
):
|
||||
display = DisplayRegistry.create("terminal")
|
||||
display_name = self.config.display or "terminal"
|
||||
display = DisplayRegistry.create(display_name)
|
||||
if display:
|
||||
self.add_stage("display", DisplayStage(display, name="terminal"))
|
||||
self.add_stage("display", DisplayStage(display, name=display_name))
|
||||
injected.append("display")
|
||||
|
||||
# Rebuild pipeline if stages were injected
|
||||
|
||||
@@ -59,6 +59,7 @@ class PipelinePreset:
|
||||
viewport_height: int = 24 # Viewport height in rows
|
||||
source_items: list[dict[str, Any]] | None = None # For ListDataSource
|
||||
enable_metrics: bool = True # Enable performance metrics collection
|
||||
enable_message_overlay: bool = False # Enable ntfy message overlay
|
||||
|
||||
def to_params(self) -> PipelineParams:
|
||||
"""Convert to PipelineParams (runtime configuration)."""
|
||||
@@ -113,6 +114,7 @@ class PipelinePreset:
|
||||
viewport_height=data.get("viewport_height", 24),
|
||||
source_items=data.get("source_items"),
|
||||
enable_metrics=data.get("enable_metrics", True),
|
||||
enable_message_overlay=data.get("enable_message_overlay", False),
|
||||
)
|
||||
|
||||
|
||||
@@ -124,6 +126,7 @@ DEMO_PRESET = PipelinePreset(
|
||||
display="pygame",
|
||||
camera="scroll",
|
||||
effects=["noise", "fade", "glitch", "firehose"],
|
||||
enable_message_overlay=True,
|
||||
)
|
||||
|
||||
UI_PRESET = PipelinePreset(
|
||||
@@ -134,6 +137,7 @@ UI_PRESET = PipelinePreset(
|
||||
camera="scroll",
|
||||
effects=["noise", "fade", "glitch"],
|
||||
border=BorderMode.UI,
|
||||
enable_message_overlay=True,
|
||||
)
|
||||
|
||||
POETRY_PRESET = PipelinePreset(
|
||||
@@ -170,6 +174,7 @@ FIREHOSE_PRESET = PipelinePreset(
|
||||
display="pygame",
|
||||
camera="scroll",
|
||||
effects=["noise", "fade", "glitch", "firehose"],
|
||||
enable_message_overlay=True,
|
||||
)
|
||||
|
||||
FIXTURE_PRESET = PipelinePreset(
|
||||
|
||||
@@ -80,3 +80,57 @@ def lr_gradient_opposite(rows, offset=0.0):
|
||||
List of lines with complementary gradient coloring applied
|
||||
"""
|
||||
return lr_gradient(rows, offset, MSG_GRAD_COLS)
|
||||
|
||||
|
||||
def msg_gradient(rows, offset):
|
||||
"""Apply message (ntfy) gradient using theme complementary colors.
|
||||
|
||||
Returns colored rows using ACTIVE_THEME.message_gradient if available,
|
||||
falling back to default magenta if no theme is set.
|
||||
|
||||
Args:
|
||||
rows: List of text strings to colorize
|
||||
offset: Gradient offset (0.0-1.0) for animation
|
||||
|
||||
Returns:
|
||||
List of rows with ANSI color codes applied
|
||||
"""
|
||||
from engine import config
|
||||
|
||||
# Check if theme is set and use it
|
||||
if config.ACTIVE_THEME:
|
||||
cols = _color_codes_to_ansi(config.ACTIVE_THEME.message_gradient)
|
||||
else:
|
||||
# Fallback to default magenta gradient
|
||||
cols = MSG_GRAD_COLS
|
||||
|
||||
return lr_gradient(rows, offset, cols)
|
||||
|
||||
|
||||
def _color_codes_to_ansi(color_codes):
|
||||
"""Convert a list of 256-color codes to ANSI escape code strings.
|
||||
|
||||
Pattern: first 2 are bold, middle 8 are normal, last 2 are dim.
|
||||
|
||||
Args:
|
||||
color_codes: List of 12 integers (256-color palette codes)
|
||||
|
||||
Returns:
|
||||
List of ANSI escape code strings
|
||||
"""
|
||||
if not color_codes or len(color_codes) != 12:
|
||||
# Fallback to default green if invalid
|
||||
return GRAD_COLS
|
||||
|
||||
result = []
|
||||
for i, code in enumerate(color_codes):
|
||||
if i < 2:
|
||||
# Bold for first 2 (bright leading edge)
|
||||
result.append(f"\033[1;38;5;{code}m")
|
||||
elif i < 10:
|
||||
# Normal for middle 8
|
||||
result.append(f"\033[38;5;{code}m")
|
||||
else:
|
||||
# Dim for last 2 (dark trailing edge)
|
||||
result.append(f"\033[2;38;5;{code}m")
|
||||
return result
|
||||
|
||||
@@ -19,7 +19,8 @@ format = "uv run ruff format engine/ mainline.py"
|
||||
# Run
|
||||
# =====================
|
||||
|
||||
run = "uv run mainline.py"
|
||||
mainline = "uv run mainline.py"
|
||||
run = { run = "uv run mainline.py", depends = ["sync-all"] }
|
||||
run-pygame = { run = "uv run mainline.py --display pygame", depends = ["sync-all"] }
|
||||
run-terminal = { run = "uv run mainline.py --display terminal", depends = ["sync-all"] }
|
||||
|
||||
|
||||
1870
output/sideline_demo.json
Normal file
1870
output/sideline_demo.json
Normal file
File diff suppressed because it is too large
Load Diff
1870
output/upstream_demo.json
Normal file
1870
output/upstream_demo.json
Normal file
File diff suppressed because it is too large
Load Diff
14
presets.toml
14
presets.toml
@@ -62,6 +62,7 @@ effects = [] # Demo script will add/remove effects dynamically
|
||||
camera_speed = 0.1
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
|
||||
[presets.demo-pygame]
|
||||
description = "Demo: Pygame display version"
|
||||
@@ -72,6 +73,7 @@ effects = [] # Demo script will add/remove effects dynamically
|
||||
camera_speed = 0.1
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
|
||||
[presets.demo-camera-showcase]
|
||||
description = "Demo: Camera mode showcase"
|
||||
@@ -82,6 +84,18 @@ effects = [] # Demo script will cycle through camera modes
|
||||
camera_speed = 0.5
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
|
||||
[presets.test-message-overlay]
|
||||
description = "Test: Message overlay with ntfy integration"
|
||||
source = "headlines"
|
||||
display = "terminal"
|
||||
camera = "feed"
|
||||
effects = ["hud"]
|
||||
camera_speed = 0.1
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
|
||||
# ============================================
|
||||
# SENSOR CONFIGURATION
|
||||
|
||||
@@ -65,6 +65,7 @@ dev = [
|
||||
"pytest-cov>=4.1.0",
|
||||
"pytest-mock>=3.12.0",
|
||||
"ruff>=0.1.0",
|
||||
"tomli>=2.0.0",
|
||||
]
|
||||
|
||||
[tool.pytest.ini_options]
|
||||
|
||||
201
scripts/capture_output.py
Normal file
201
scripts/capture_output.py
Normal file
@@ -0,0 +1,201 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Capture output utility for Mainline.
|
||||
|
||||
This script captures the output of a Mainline pipeline using NullDisplay
|
||||
and saves it to a JSON file for comparison with other branches.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import time
|
||||
from pathlib import Path
|
||||
|
||||
from engine.display import DisplayRegistry
|
||||
from engine.pipeline import Pipeline, PipelineConfig, PipelineContext
|
||||
from engine.pipeline.adapters import create_stage_from_display
|
||||
from engine.pipeline.presets import get_preset
|
||||
|
||||
|
||||
def capture_pipeline_output(
|
||||
preset_name: str,
|
||||
output_file: str,
|
||||
frames: int = 60,
|
||||
width: int = 80,
|
||||
height: int = 24,
|
||||
):
|
||||
"""Capture pipeline output for a given preset.
|
||||
|
||||
Args:
|
||||
preset_name: Name of preset to use
|
||||
output_file: Path to save captured output
|
||||
frames: Number of frames to capture
|
||||
width: Terminal width
|
||||
height: Terminal height
|
||||
"""
|
||||
print(f"Capturing output for preset '{preset_name}'...")
|
||||
|
||||
# Get preset
|
||||
preset = get_preset(preset_name)
|
||||
if not preset:
|
||||
print(f"Error: Preset '{preset_name}' not found")
|
||||
return False
|
||||
|
||||
# Create NullDisplay with recording
|
||||
display = DisplayRegistry.create("null")
|
||||
display.init(width, height)
|
||||
display.start_recording()
|
||||
|
||||
# Build pipeline
|
||||
config = PipelineConfig(
|
||||
source=preset.source,
|
||||
display="null", # Use null display
|
||||
camera=preset.camera,
|
||||
effects=preset.effects,
|
||||
enable_metrics=False,
|
||||
)
|
||||
|
||||
# Create pipeline context with params
|
||||
from engine.pipeline.params import PipelineParams
|
||||
|
||||
params = PipelineParams(
|
||||
source=preset.source,
|
||||
display="null",
|
||||
camera_mode=preset.camera,
|
||||
effect_order=preset.effects,
|
||||
viewport_width=preset.viewport_width,
|
||||
viewport_height=preset.viewport_height,
|
||||
camera_speed=preset.camera_speed,
|
||||
)
|
||||
|
||||
ctx = PipelineContext()
|
||||
ctx.params = params
|
||||
|
||||
pipeline = Pipeline(config=config, context=ctx)
|
||||
|
||||
# Add stages based on preset
|
||||
from engine.data_sources.sources import HeadlinesDataSource
|
||||
from engine.pipeline.adapters import DataSourceStage
|
||||
|
||||
# Add source stage
|
||||
source = HeadlinesDataSource()
|
||||
pipeline.add_stage("source", DataSourceStage(source, name="headlines"))
|
||||
|
||||
# Add message overlay if enabled
|
||||
if getattr(preset, "enable_message_overlay", False):
|
||||
from engine import config as engine_config
|
||||
from engine.pipeline.adapters import MessageOverlayConfig, MessageOverlayStage
|
||||
|
||||
overlay_config = MessageOverlayConfig(
|
||||
enabled=True,
|
||||
display_secs=getattr(engine_config, "MESSAGE_DISPLAY_SECS", 30),
|
||||
topic_url=getattr(engine_config, "NTFY_TOPIC", None),
|
||||
)
|
||||
pipeline.add_stage(
|
||||
"message_overlay", MessageOverlayStage(config=overlay_config)
|
||||
)
|
||||
|
||||
# Add display stage
|
||||
pipeline.add_stage("display", create_stage_from_display(display, "null"))
|
||||
|
||||
# Build and initialize
|
||||
pipeline.build()
|
||||
if not pipeline.initialize():
|
||||
print("Error: Failed to initialize pipeline")
|
||||
return False
|
||||
|
||||
# Capture frames
|
||||
print(f"Capturing {frames} frames...")
|
||||
start_time = time.time()
|
||||
|
||||
for frame in range(frames):
|
||||
try:
|
||||
pipeline.execute([])
|
||||
if frame % 10 == 0:
|
||||
print(f" Frame {frame}/{frames}")
|
||||
except Exception as e:
|
||||
print(f"Error on frame {frame}: {e}")
|
||||
break
|
||||
|
||||
elapsed = time.time() - start_time
|
||||
print(f"Captured {frame + 1} frames in {elapsed:.2f}s")
|
||||
|
||||
# Get captured frames
|
||||
captured_frames = display.get_frames()
|
||||
print(f"Retrieved {len(captured_frames)} frames from display")
|
||||
|
||||
# Save to JSON
|
||||
output_path = Path(output_file)
|
||||
output_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
recording_data = {
|
||||
"version": 1,
|
||||
"preset": preset_name,
|
||||
"display": "null",
|
||||
"width": width,
|
||||
"height": height,
|
||||
"frame_count": len(captured_frames),
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": i,
|
||||
"buffer": frame,
|
||||
"width": width,
|
||||
"height": height,
|
||||
}
|
||||
for i, frame in enumerate(captured_frames)
|
||||
],
|
||||
}
|
||||
|
||||
with open(output_path, "w") as f:
|
||||
json.dump(recording_data, f, indent=2)
|
||||
|
||||
print(f"Saved recording to {output_path}")
|
||||
return True
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Capture Mainline pipeline output")
|
||||
parser.add_argument(
|
||||
"--preset",
|
||||
default="demo",
|
||||
help="Preset name to use (default: demo)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
default="output/capture.json",
|
||||
help="Output file path (default: output/capture.json)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--frames",
|
||||
type=int,
|
||||
default=60,
|
||||
help="Number of frames to capture (default: 60)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--width",
|
||||
type=int,
|
||||
default=80,
|
||||
help="Terminal width (default: 80)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--height",
|
||||
type=int,
|
||||
default=24,
|
||||
help="Terminal height (default: 24)",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
success = capture_pipeline_output(
|
||||
preset_name=args.preset,
|
||||
output_file=args.output,
|
||||
frames=args.frames,
|
||||
width=args.width,
|
||||
height=args.height,
|
||||
)
|
||||
|
||||
return 0 if success else 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
exit(main())
|
||||
186
scripts/capture_upstream.py
Normal file
186
scripts/capture_upstream.py
Normal file
@@ -0,0 +1,186 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Capture output from upstream/main branch.
|
||||
|
||||
This script captures the output of upstream/main Mainline using NullDisplay
|
||||
and saves it to a JSON file for comparison with sideline branch.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
# Add upstream/main to path
|
||||
sys.path.insert(0, "/tmp/upstream_mainline")
|
||||
|
||||
|
||||
def capture_upstream_output(
|
||||
output_file: str,
|
||||
frames: int = 60,
|
||||
width: int = 80,
|
||||
height: int = 24,
|
||||
):
|
||||
"""Capture upstream/main output.
|
||||
|
||||
Args:
|
||||
output_file: Path to save captured output
|
||||
frames: Number of frames to capture
|
||||
width: Terminal width
|
||||
height: Terminal height
|
||||
"""
|
||||
print(f"Capturing upstream/main output...")
|
||||
|
||||
try:
|
||||
# Import upstream modules
|
||||
from engine import config, themes
|
||||
from engine.display import NullDisplay
|
||||
from engine.fetch import fetch_all, load_cache
|
||||
from engine.scroll import stream
|
||||
from engine.ntfy import NtfyPoller
|
||||
from engine.mic import MicMonitor
|
||||
except ImportError as e:
|
||||
print(f"Error importing upstream modules: {e}")
|
||||
print("Make sure upstream/main is in the Python path")
|
||||
return False
|
||||
|
||||
# Create a custom NullDisplay that captures frames
|
||||
class CapturingNullDisplay:
|
||||
def __init__(self, width, height, max_frames):
|
||||
self.width = width
|
||||
self.height = height
|
||||
self.max_frames = max_frames
|
||||
self.frame_count = 0
|
||||
self.frames = []
|
||||
|
||||
def init(self, width: int, height: int) -> None:
|
||||
self.width = width
|
||||
self.height = height
|
||||
|
||||
def show(self, buffer: list[str], border: bool = False) -> None:
|
||||
if self.frame_count < self.max_frames:
|
||||
self.frames.append(list(buffer))
|
||||
self.frame_count += 1
|
||||
if self.frame_count >= self.max_frames:
|
||||
raise StopIteration("Frame limit reached")
|
||||
|
||||
def clear(self) -> None:
|
||||
pass
|
||||
|
||||
def cleanup(self) -> None:
|
||||
pass
|
||||
|
||||
def get_frames(self):
|
||||
return self.frames
|
||||
|
||||
display = CapturingNullDisplay(width, height, frames)
|
||||
|
||||
# Load items (use cached headlines)
|
||||
items = load_cache()
|
||||
if not items:
|
||||
print("No cached items found, fetching...")
|
||||
result = fetch_all()
|
||||
if isinstance(result, tuple):
|
||||
items, linked, failed = result
|
||||
else:
|
||||
items = result
|
||||
if not items:
|
||||
print("Error: No items available")
|
||||
return False
|
||||
|
||||
print(f"Loaded {len(items)} items")
|
||||
|
||||
# Create ntfy poller and mic monitor (upstream uses these)
|
||||
ntfy_poller = NtfyPoller(config.NTFY_TOPIC, reconnect_delay=5, display_secs=30)
|
||||
mic_monitor = MicMonitor()
|
||||
|
||||
# Run stream for specified number of frames
|
||||
print(f"Capturing {frames} frames...")
|
||||
|
||||
try:
|
||||
# Run the stream
|
||||
stream(
|
||||
items=items,
|
||||
ntfy_poller=ntfy_poller,
|
||||
mic_monitor=mic_monitor,
|
||||
display=display,
|
||||
)
|
||||
except StopIteration:
|
||||
print("Frame limit reached")
|
||||
except Exception as e:
|
||||
print(f"Error during capture: {e}")
|
||||
# Continue to save what we have
|
||||
|
||||
# Get captured frames
|
||||
captured_frames = display.get_frames()
|
||||
print(f"Retrieved {len(captured_frames)} frames from display")
|
||||
|
||||
# Save to JSON
|
||||
output_path = Path(output_file)
|
||||
output_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
recording_data = {
|
||||
"version": 1,
|
||||
"preset": "upstream_demo",
|
||||
"display": "null",
|
||||
"width": width,
|
||||
"height": height,
|
||||
"frame_count": len(captured_frames),
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": i,
|
||||
"buffer": frame,
|
||||
"width": width,
|
||||
"height": height,
|
||||
}
|
||||
for i, frame in enumerate(captured_frames)
|
||||
],
|
||||
}
|
||||
|
||||
with open(output_path, "w") as f:
|
||||
json.dump(recording_data, f, indent=2)
|
||||
|
||||
print(f"Saved recording to {output_path}")
|
||||
return True
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Capture upstream/main output")
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
default="output/upstream_demo.json",
|
||||
help="Output file path (default: output/upstream_demo.json)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--frames",
|
||||
type=int,
|
||||
default=60,
|
||||
help="Number of frames to capture (default: 60)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--width",
|
||||
type=int,
|
||||
default=80,
|
||||
help="Terminal width (default: 80)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--height",
|
||||
type=int,
|
||||
default=24,
|
||||
help="Terminal height (default: 24)",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
success = capture_upstream_output(
|
||||
output_file=args.output,
|
||||
frames=args.frames,
|
||||
width=args.width,
|
||||
height=args.height,
|
||||
)
|
||||
|
||||
return 0 if success else 1
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
exit(main())
|
||||
144
scripts/capture_upstream_comparison.py
Normal file
144
scripts/capture_upstream_comparison.py
Normal file
@@ -0,0 +1,144 @@
|
||||
"""Capture frames from upstream Mainline for comparison testing.
|
||||
|
||||
This script should be run on the upstream/main branch to capture frames
|
||||
that will later be compared with sideline branch output.
|
||||
|
||||
Usage:
|
||||
# On upstream/main branch
|
||||
python scripts/capture_upstream_comparison.py --preset demo
|
||||
|
||||
# This will create tests/comparison_output/demo_upstream.json
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
# Add project root to path
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
|
||||
def load_preset(preset_name: str) -> dict:
|
||||
"""Load a preset from presets.toml."""
|
||||
import tomli
|
||||
|
||||
# Try user presets first
|
||||
user_presets = Path.home() / ".config" / "mainline" / "presets.toml"
|
||||
local_presets = Path("presets.toml")
|
||||
built_in_presets = Path(__file__).parent.parent / "presets.toml"
|
||||
|
||||
for preset_file in [user_presets, local_presets, built_in_presets]:
|
||||
if preset_file.exists():
|
||||
with open(preset_file, "rb") as f:
|
||||
config = tomli.load(f)
|
||||
if "presets" in config and preset_name in config["presets"]:
|
||||
return config["presets"][preset_name]
|
||||
|
||||
raise ValueError(f"Preset '{preset_name}' not found")
|
||||
|
||||
|
||||
def capture_upstream_frames(
|
||||
preset_name: str,
|
||||
frame_count: int = 30,
|
||||
output_dir: Path = Path("tests/comparison_output"),
|
||||
) -> Path:
|
||||
"""Capture frames from upstream pipeline.
|
||||
|
||||
Note: This is a simplified version that mimics upstream behavior.
|
||||
For actual upstream comparison, you may need to:
|
||||
1. Checkout upstream/main branch
|
||||
2. Run this script
|
||||
3. Copy the output file
|
||||
4. Checkout your branch
|
||||
5. Run comparison
|
||||
"""
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Load preset
|
||||
preset = load_preset(preset_name)
|
||||
|
||||
# For upstream, we need to use the old monolithic rendering approach
|
||||
# This is a simplified placeholder - actual implementation depends on
|
||||
# the specific upstream architecture
|
||||
|
||||
print(f"Capturing {frame_count} frames from upstream preset '{preset_name}'")
|
||||
print("Note: This script should be run on upstream/main branch")
|
||||
print(f" for accurate comparison with sideline branch")
|
||||
|
||||
# Placeholder: In a real implementation, this would:
|
||||
# 1. Import upstream-specific modules
|
||||
# 2. Create pipeline using upstream architecture
|
||||
# 3. Capture frames
|
||||
# 4. Save to JSON
|
||||
|
||||
# For now, create a placeholder file with instructions
|
||||
placeholder_data = {
|
||||
"preset": preset_name,
|
||||
"config": preset,
|
||||
"note": "This is a placeholder file.",
|
||||
"instructions": [
|
||||
"1. Checkout upstream/main branch: git checkout main",
|
||||
"2. Run frame capture: python scripts/capture_upstream_comparison.py --preset <name>",
|
||||
"3. Copy output file to sideline branch",
|
||||
"4. Checkout sideline branch: git checkout feature/capability-based-deps",
|
||||
"5. Run comparison: python tests/run_comparison.py --preset <name>",
|
||||
],
|
||||
"frames": [], # Empty until properly captured
|
||||
}
|
||||
|
||||
output_file = output_dir / f"{preset_name}_upstream.json"
|
||||
with open(output_file, "w") as f:
|
||||
json.dump(placeholder_data, f, indent=2)
|
||||
|
||||
print(f"\nPlaceholder file created: {output_file}")
|
||||
print("\nTo capture actual upstream frames:")
|
||||
print("1. Ensure you are on upstream/main branch")
|
||||
print("2. This script needs to be adapted to use upstream-specific rendering")
|
||||
print("3. The captured frames will be used for comparison with sideline")
|
||||
|
||||
return output_file
|
||||
|
||||
|
||||
def main():
|
||||
"""Main entry point."""
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Capture frames from upstream Mainline for comparison"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--preset",
|
||||
"-p",
|
||||
required=True,
|
||||
help="Preset name to capture",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--frames",
|
||||
"-f",
|
||||
type=int,
|
||||
default=30,
|
||||
help="Number of frames to capture",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output-dir",
|
||||
"-o",
|
||||
type=Path,
|
||||
default=Path("tests/comparison_output"),
|
||||
help="Output directory",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
try:
|
||||
output_file = capture_upstream_frames(
|
||||
preset_name=args.preset,
|
||||
frame_count=args.frames,
|
||||
output_dir=args.output_dir,
|
||||
)
|
||||
print(f"\nCapture complete: {output_file}")
|
||||
except Exception as e:
|
||||
print(f"Error: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
220
scripts/compare_outputs.py
Normal file
220
scripts/compare_outputs.py
Normal file
@@ -0,0 +1,220 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Compare captured outputs from different branches or configurations.
|
||||
|
||||
This script loads two captured recordings and compares them frame-by-frame,
|
||||
reporting any differences found.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import difflib
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
def load_recording(file_path: str) -> dict:
|
||||
"""Load a recording from a JSON file."""
|
||||
with open(file_path, "r") as f:
|
||||
return json.load(f)
|
||||
|
||||
|
||||
def compare_frame_buffers(buf1: list[str], buf2: list[str]) -> tuple[int, list[str]]:
|
||||
"""Compare two frame buffers and return differences.
|
||||
|
||||
Returns:
|
||||
tuple: (difference_count, list of difference descriptions)
|
||||
"""
|
||||
differences = []
|
||||
|
||||
# Check dimensions
|
||||
if len(buf1) != len(buf2):
|
||||
differences.append(f"Height mismatch: {len(buf1)} vs {len(buf2)}")
|
||||
|
||||
# Check each line
|
||||
max_lines = max(len(buf1), len(buf2))
|
||||
for i in range(max_lines):
|
||||
if i >= len(buf1):
|
||||
differences.append(f"Line {i}: Missing in first buffer")
|
||||
continue
|
||||
if i >= len(buf2):
|
||||
differences.append(f"Line {i}: Missing in second buffer")
|
||||
continue
|
||||
|
||||
line1 = buf1[i]
|
||||
line2 = buf2[i]
|
||||
|
||||
if line1 != line2:
|
||||
# Find the specific differences in the line
|
||||
if len(line1) != len(line2):
|
||||
differences.append(
|
||||
f"Line {i}: Length mismatch ({len(line1)} vs {len(line2)})"
|
||||
)
|
||||
|
||||
# Show a snippet of the difference
|
||||
max_len = max(len(line1), len(line2))
|
||||
snippet1 = line1[:50] + "..." if len(line1) > 50 else line1
|
||||
snippet2 = line2[:50] + "..." if len(line2) > 50 else line2
|
||||
differences.append(f"Line {i}: '{snippet1}' != '{snippet2}'")
|
||||
|
||||
return len(differences), differences
|
||||
|
||||
|
||||
def compare_recordings(
|
||||
recording1: dict, recording2: dict, max_frames: int = None
|
||||
) -> dict:
|
||||
"""Compare two recordings frame-by-frame.
|
||||
|
||||
Returns:
|
||||
dict: Comparison results with summary and detailed differences
|
||||
"""
|
||||
results = {
|
||||
"summary": {},
|
||||
"frames": [],
|
||||
"total_differences": 0,
|
||||
"frames_with_differences": 0,
|
||||
}
|
||||
|
||||
# Compare metadata
|
||||
results["summary"]["recording1"] = {
|
||||
"preset": recording1.get("preset", "unknown"),
|
||||
"frame_count": recording1.get("frame_count", 0),
|
||||
"width": recording1.get("width", 0),
|
||||
"height": recording1.get("height", 0),
|
||||
}
|
||||
results["summary"]["recording2"] = {
|
||||
"preset": recording2.get("preset", "unknown"),
|
||||
"frame_count": recording2.get("frame_count", 0),
|
||||
"width": recording2.get("width", 0),
|
||||
"height": recording2.get("height", 0),
|
||||
}
|
||||
|
||||
# Compare frames
|
||||
frames1 = recording1.get("frames", [])
|
||||
frames2 = recording2.get("frames", [])
|
||||
|
||||
num_frames = min(len(frames1), len(frames2))
|
||||
if max_frames:
|
||||
num_frames = min(num_frames, max_frames)
|
||||
|
||||
print(f"Comparing {num_frames} frames...")
|
||||
|
||||
for frame_idx in range(num_frames):
|
||||
frame1 = frames1[frame_idx]
|
||||
frame2 = frames2[frame_idx]
|
||||
|
||||
buf1 = frame1.get("buffer", [])
|
||||
buf2 = frame2.get("buffer", [])
|
||||
|
||||
diff_count, differences = compare_frame_buffers(buf1, buf2)
|
||||
|
||||
if diff_count > 0:
|
||||
results["total_differences"] += diff_count
|
||||
results["frames_with_differences"] += 1
|
||||
results["frames"].append(
|
||||
{
|
||||
"frame_number": frame_idx,
|
||||
"differences": differences,
|
||||
"diff_count": diff_count,
|
||||
}
|
||||
)
|
||||
|
||||
if frame_idx < 5: # Only print first 5 frames with differences
|
||||
print(f"\nFrame {frame_idx} ({diff_count} differences):")
|
||||
for diff in differences[:5]: # Limit to 5 differences per frame
|
||||
print(f" - {diff}")
|
||||
|
||||
# Summary
|
||||
results["summary"]["total_frames_compared"] = num_frames
|
||||
results["summary"]["frames_with_differences"] = results["frames_with_differences"]
|
||||
results["summary"]["total_differences"] = results["total_differences"]
|
||||
results["summary"]["match_percentage"] = (
|
||||
(1 - results["frames_with_differences"] / num_frames) * 100
|
||||
if num_frames > 0
|
||||
else 0
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
def print_comparison_summary(results: dict):
|
||||
"""Print a summary of the comparison results."""
|
||||
print("\n" + "=" * 80)
|
||||
print("COMPARISON SUMMARY")
|
||||
print("=" * 80)
|
||||
|
||||
r1 = results["summary"]["recording1"]
|
||||
r2 = results["summary"]["recording2"]
|
||||
|
||||
print(f"\nRecording 1: {r1['preset']}")
|
||||
print(
|
||||
f" Frames: {r1['frame_count']}, Width: {r1['width']}, Height: {r1['height']}"
|
||||
)
|
||||
|
||||
print(f"\nRecording 2: {r2['preset']}")
|
||||
print(
|
||||
f" Frames: {r2['frame_count']}, Width: {r2['width']}, Height: {r2['height']}"
|
||||
)
|
||||
|
||||
print(f"\nComparison:")
|
||||
print(f" Frames compared: {results['summary']['total_frames_compared']}")
|
||||
print(f" Frames with differences: {results['summary']['frames_with_differences']}")
|
||||
print(f" Total differences: {results['summary']['total_differences']}")
|
||||
print(f" Match percentage: {results['summary']['match_percentage']:.2f}%")
|
||||
|
||||
if results["summary"]["match_percentage"] == 100:
|
||||
print("\n✓ Recordings match perfectly!")
|
||||
else:
|
||||
print("\n⚠ Recordings have differences.")
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Compare captured outputs from different branches"
|
||||
)
|
||||
parser.add_argument(
|
||||
"recording1",
|
||||
help="First recording file (JSON)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"recording2",
|
||||
help="Second recording file (JSON)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max-frames",
|
||||
type=int,
|
||||
help="Maximum number of frames to compare",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output",
|
||||
"-o",
|
||||
help="Output file for detailed comparison results (JSON)",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Load recordings
|
||||
print(f"Loading {args.recording1}...")
|
||||
recording1 = load_recording(args.recording1)
|
||||
|
||||
print(f"Loading {args.recording2}...")
|
||||
recording2 = load_recording(args.recording2)
|
||||
|
||||
# Compare
|
||||
results = compare_recordings(recording1, recording2, args.max_frames)
|
||||
|
||||
# Print summary
|
||||
print_comparison_summary(results)
|
||||
|
||||
# Save detailed results if requested
|
||||
if args.output:
|
||||
output_path = Path(args.output)
|
||||
output_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with open(output_path, "w") as f:
|
||||
json.dump(results, f, indent=2)
|
||||
print(f"\nDetailed results saved to {args.output}")
|
||||
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
exit(main())
|
||||
489
tests/comparison_capture.py
Normal file
489
tests/comparison_capture.py
Normal file
@@ -0,0 +1,489 @@
|
||||
"""Frame capture utilities for upstream vs sideline comparison.
|
||||
|
||||
This module provides functions to capture frames from both upstream and sideline
|
||||
implementations for visual comparison and performance analysis.
|
||||
"""
|
||||
|
||||
import json
|
||||
import time
|
||||
from pathlib import Path
|
||||
from typing import Any, Dict, List, Tuple
|
||||
|
||||
import tomli
|
||||
|
||||
from engine.pipeline import Pipeline, PipelineConfig, PipelineContext
|
||||
from engine.pipeline.params import PipelineParams
|
||||
|
||||
|
||||
def load_comparison_preset(preset_name: str) -> Any:
|
||||
"""Load a comparison preset from comparison_presets.toml.
|
||||
|
||||
Args:
|
||||
preset_name: Name of the preset to load
|
||||
|
||||
Returns:
|
||||
Preset configuration dictionary
|
||||
"""
|
||||
presets_file = Path("tests/comparison_presets.toml")
|
||||
if not presets_file.exists():
|
||||
raise FileNotFoundError(f"Comparison presets file not found: {presets_file}")
|
||||
|
||||
with open(presets_file, "rb") as f:
|
||||
config = tomli.load(f)
|
||||
|
||||
presets = config.get("presets", {})
|
||||
full_name = (
|
||||
f"presets.{preset_name}"
|
||||
if not preset_name.startswith("presets.")
|
||||
else preset_name
|
||||
)
|
||||
simple_name = (
|
||||
preset_name.replace("presets.", "")
|
||||
if preset_name.startswith("presets.")
|
||||
else preset_name
|
||||
)
|
||||
|
||||
if full_name in presets:
|
||||
return presets[full_name]
|
||||
elif simple_name in presets:
|
||||
return presets[simple_name]
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Preset '{preset_name}' not found in {presets_file}. Available: {list(presets.keys())}"
|
||||
)
|
||||
|
||||
|
||||
def capture_frames(
|
||||
preset_name: str,
|
||||
frame_count: int = 30,
|
||||
output_dir: Path = Path("tests/comparison_output"),
|
||||
) -> Dict[str, Any]:
|
||||
"""Capture frames from sideline pipeline using a preset.
|
||||
|
||||
Args:
|
||||
preset_name: Name of preset to use
|
||||
frame_count: Number of frames to capture
|
||||
output_dir: Directory to save captured frames
|
||||
|
||||
Returns:
|
||||
Dictionary with captured frames and metadata
|
||||
"""
|
||||
from engine.pipeline.presets import get_preset
|
||||
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Load preset - try comparison presets first, then built-in presets
|
||||
try:
|
||||
preset = load_comparison_preset(preset_name)
|
||||
# Convert dict to object-like access
|
||||
from types import SimpleNamespace
|
||||
|
||||
preset = SimpleNamespace(**preset)
|
||||
except (FileNotFoundError, ValueError):
|
||||
# Fall back to built-in presets
|
||||
preset = get_preset(preset_name)
|
||||
if not preset:
|
||||
raise ValueError(
|
||||
f"Preset '{preset_name}' not found in comparison or built-in presets"
|
||||
)
|
||||
|
||||
# Create pipeline config from preset
|
||||
config = PipelineConfig(
|
||||
source=preset.source,
|
||||
display="null", # Always use null display for capture
|
||||
camera=preset.camera,
|
||||
effects=preset.effects,
|
||||
)
|
||||
|
||||
# Create pipeline
|
||||
ctx = PipelineContext()
|
||||
ctx.terminal_width = preset.viewport_width
|
||||
ctx.terminal_height = preset.viewport_height
|
||||
pipeline = Pipeline(config=config, context=ctx)
|
||||
|
||||
# Create params
|
||||
params = PipelineParams(
|
||||
viewport_width=preset.viewport_width,
|
||||
viewport_height=preset.viewport_height,
|
||||
)
|
||||
ctx.params = params
|
||||
|
||||
# Add stages based on source type (similar to pipeline_runner)
|
||||
from engine.display import DisplayRegistry
|
||||
from engine.pipeline.adapters import create_stage_from_display
|
||||
from engine.data_sources.sources import EmptyDataSource
|
||||
from engine.pipeline.adapters import DataSourceStage
|
||||
|
||||
# Add source stage
|
||||
if preset.source == "empty":
|
||||
source_stage = DataSourceStage(
|
||||
EmptyDataSource(width=preset.viewport_width, height=preset.viewport_height),
|
||||
name="empty",
|
||||
)
|
||||
else:
|
||||
# For headlines/poetry, use the actual source
|
||||
from engine.data_sources.sources import HeadlinesDataSource, PoetryDataSource
|
||||
|
||||
if preset.source == "headlines":
|
||||
source_stage = DataSourceStage(HeadlinesDataSource(), name="headlines")
|
||||
elif preset.source == "poetry":
|
||||
source_stage = DataSourceStage(PoetryDataSource(), name="poetry")
|
||||
else:
|
||||
# Fallback to empty
|
||||
source_stage = DataSourceStage(
|
||||
EmptyDataSource(
|
||||
width=preset.viewport_width, height=preset.viewport_height
|
||||
),
|
||||
name="empty",
|
||||
)
|
||||
pipeline.add_stage("source", source_stage)
|
||||
|
||||
# Add font stage for headlines/poetry (with viewport filter)
|
||||
if preset.source in ["headlines", "poetry"]:
|
||||
from engine.pipeline.adapters import FontStage, ViewportFilterStage
|
||||
|
||||
# Add viewport filter to prevent rendering all items
|
||||
pipeline.add_stage(
|
||||
"viewport_filter", ViewportFilterStage(name="viewport-filter")
|
||||
)
|
||||
# Add font stage for block character rendering
|
||||
pipeline.add_stage("font", FontStage(name="font"))
|
||||
else:
|
||||
# Fallback to simple conversion for empty/other sources
|
||||
from engine.pipeline.adapters import SourceItemsToBufferStage
|
||||
|
||||
pipeline.add_stage("render", SourceItemsToBufferStage(name="items-to-buffer"))
|
||||
|
||||
# Add camera stage
|
||||
from engine.camera import Camera
|
||||
from engine.pipeline.adapters import CameraStage, CameraClockStage
|
||||
|
||||
# Create camera based on preset
|
||||
if preset.camera == "feed":
|
||||
camera = Camera.feed()
|
||||
elif preset.camera == "scroll":
|
||||
camera = Camera.scroll(speed=0.1)
|
||||
elif preset.camera == "horizontal":
|
||||
camera = Camera.horizontal(speed=0.1)
|
||||
else:
|
||||
camera = Camera.feed()
|
||||
|
||||
camera.set_canvas_size(preset.viewport_width, preset.viewport_height * 2)
|
||||
|
||||
# Add camera update (for animation)
|
||||
pipeline.add_stage("camera_update", CameraClockStage(camera, name="camera-clock"))
|
||||
# Add camera stage
|
||||
pipeline.add_stage("camera", CameraStage(camera, name=preset.camera))
|
||||
|
||||
# Add effects
|
||||
if preset.effects:
|
||||
from engine.effects.registry import EffectRegistry
|
||||
from engine.pipeline.adapters import create_stage_from_effect
|
||||
|
||||
effect_registry = EffectRegistry()
|
||||
for effect_name in preset.effects:
|
||||
effect = effect_registry.get(effect_name)
|
||||
if effect:
|
||||
pipeline.add_stage(
|
||||
f"effect_{effect_name}",
|
||||
create_stage_from_effect(effect, effect_name),
|
||||
)
|
||||
|
||||
# Add message overlay stage if enabled (BEFORE display)
|
||||
if getattr(preset, "enable_message_overlay", False):
|
||||
from engine.pipeline.adapters import MessageOverlayConfig, MessageOverlayStage
|
||||
|
||||
overlay_config = MessageOverlayConfig(
|
||||
enabled=True,
|
||||
display_secs=30,
|
||||
)
|
||||
pipeline.add_stage(
|
||||
"message_overlay", MessageOverlayStage(config=overlay_config)
|
||||
)
|
||||
|
||||
# Add null display stage (LAST)
|
||||
null_display = DisplayRegistry.create("null")
|
||||
if null_display:
|
||||
pipeline.add_stage("display", create_stage_from_display(null_display, "null"))
|
||||
|
||||
# Build pipeline
|
||||
pipeline.build()
|
||||
|
||||
# Enable recording on null display if available
|
||||
display_stage = pipeline._stages.get("display")
|
||||
if display_stage and hasattr(display_stage, "_display"):
|
||||
backend = display_stage._display
|
||||
if hasattr(backend, "start_recording"):
|
||||
backend.start_recording()
|
||||
|
||||
# Capture frames
|
||||
frames = []
|
||||
start_time = time.time()
|
||||
|
||||
for i in range(frame_count):
|
||||
frame_start = time.time()
|
||||
stage_result = pipeline.execute()
|
||||
frame_time = time.time() - frame_start
|
||||
|
||||
# Get frames from display recording
|
||||
display_stage = pipeline._stages.get("display")
|
||||
if display_stage and hasattr(display_stage, "_display"):
|
||||
backend = display_stage._display
|
||||
if hasattr(backend, "get_recorded_data"):
|
||||
recorded_frames = backend.get_recorded_data()
|
||||
# Add render_time_ms to each frame
|
||||
for frame in recorded_frames:
|
||||
frame["render_time_ms"] = frame_time * 1000
|
||||
frames = recorded_frames
|
||||
|
||||
# Fallback: create empty frames if no recording
|
||||
if not frames:
|
||||
for i in range(frame_count):
|
||||
frames.append(
|
||||
{
|
||||
"frame_number": i,
|
||||
"buffer": [],
|
||||
"width": preset.viewport_width,
|
||||
"height": preset.viewport_height,
|
||||
"render_time_ms": frame_time * 1000,
|
||||
}
|
||||
)
|
||||
|
||||
# Stop recording on null display
|
||||
display_stage = pipeline._stages.get("display")
|
||||
if display_stage and hasattr(display_stage, "_display"):
|
||||
backend = display_stage._display
|
||||
if hasattr(backend, "stop_recording"):
|
||||
backend.stop_recording()
|
||||
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# Save captured data
|
||||
output_file = output_dir / f"{preset_name}_sideline.json"
|
||||
captured_data = {
|
||||
"preset": preset_name,
|
||||
"config": {
|
||||
"source": preset.source,
|
||||
"camera": preset.camera,
|
||||
"effects": preset.effects,
|
||||
"viewport_width": preset.viewport_width,
|
||||
"viewport_height": preset.viewport_height,
|
||||
"enable_message_overlay": getattr(preset, "enable_message_overlay", False),
|
||||
},
|
||||
"capture_stats": {
|
||||
"frame_count": frame_count,
|
||||
"total_time_ms": total_time * 1000,
|
||||
"avg_frame_time_ms": (total_time * 1000) / frame_count,
|
||||
"fps": frame_count / total_time if total_time > 0 else 0,
|
||||
},
|
||||
"frames": frames,
|
||||
}
|
||||
|
||||
with open(output_file, "w") as f:
|
||||
json.dump(captured_data, f, indent=2)
|
||||
|
||||
return captured_data
|
||||
|
||||
|
||||
def compare_captured_outputs(
|
||||
sideline_file: Path,
|
||||
upstream_file: Path,
|
||||
output_dir: Path = Path("tests/comparison_output"),
|
||||
) -> Dict[str, Any]:
|
||||
"""Compare captured outputs from sideline and upstream.
|
||||
|
||||
Args:
|
||||
sideline_file: Path to sideline captured output
|
||||
upstream_file: Path to upstream captured output
|
||||
output_dir: Directory to save comparison results
|
||||
|
||||
Returns:
|
||||
Dictionary with comparison results
|
||||
"""
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Load captured data
|
||||
with open(sideline_file) as f:
|
||||
sideline_data = json.load(f)
|
||||
|
||||
with open(upstream_file) as f:
|
||||
upstream_data = json.load(f)
|
||||
|
||||
# Compare configurations
|
||||
config_diff = {}
|
||||
for key in [
|
||||
"source",
|
||||
"camera",
|
||||
"effects",
|
||||
"viewport_width",
|
||||
"viewport_height",
|
||||
"enable_message_overlay",
|
||||
]:
|
||||
sideline_val = sideline_data["config"].get(key)
|
||||
upstream_val = upstream_data["config"].get(key)
|
||||
if sideline_val != upstream_val:
|
||||
config_diff[key] = {"sideline": sideline_val, "upstream": upstream_val}
|
||||
|
||||
# Compare frame counts
|
||||
sideline_frames = len(sideline_data["frames"])
|
||||
upstream_frames = len(upstream_data["frames"])
|
||||
frame_count_match = sideline_frames == upstream_frames
|
||||
|
||||
# Compare individual frames
|
||||
frame_comparisons = []
|
||||
total_diff = 0
|
||||
max_diff = 0
|
||||
identical_frames = 0
|
||||
|
||||
min_frames = min(sideline_frames, upstream_frames)
|
||||
for i in range(min_frames):
|
||||
sideline_frame = sideline_data["frames"][i]
|
||||
upstream_frame = upstream_data["frames"][i]
|
||||
|
||||
sideline_buffer = sideline_frame["buffer"]
|
||||
upstream_buffer = upstream_frame["buffer"]
|
||||
|
||||
# Compare buffers line by line
|
||||
line_diffs = []
|
||||
frame_diff = 0
|
||||
max_lines = max(len(sideline_buffer), len(upstream_buffer))
|
||||
|
||||
for line_idx in range(max_lines):
|
||||
sideline_line = (
|
||||
sideline_buffer[line_idx] if line_idx < len(sideline_buffer) else ""
|
||||
)
|
||||
upstream_line = (
|
||||
upstream_buffer[line_idx] if line_idx < len(upstream_buffer) else ""
|
||||
)
|
||||
|
||||
if sideline_line != upstream_line:
|
||||
line_diffs.append(
|
||||
{
|
||||
"line": line_idx,
|
||||
"sideline": sideline_line,
|
||||
"upstream": upstream_line,
|
||||
}
|
||||
)
|
||||
frame_diff += 1
|
||||
|
||||
if frame_diff == 0:
|
||||
identical_frames += 1
|
||||
|
||||
total_diff += frame_diff
|
||||
max_diff = max(max_diff, frame_diff)
|
||||
|
||||
frame_comparisons.append(
|
||||
{
|
||||
"frame_number": i,
|
||||
"differences": frame_diff,
|
||||
"line_diffs": line_diffs[
|
||||
:5
|
||||
], # Only store first 5 differences per frame
|
||||
"render_time_diff_ms": sideline_frame.get("render_time_ms", 0)
|
||||
- upstream_frame.get("render_time_ms", 0),
|
||||
}
|
||||
)
|
||||
|
||||
# Calculate statistics
|
||||
stats = {
|
||||
"total_frames_compared": min_frames,
|
||||
"identical_frames": identical_frames,
|
||||
"frames_with_differences": min_frames - identical_frames,
|
||||
"total_differences": total_diff,
|
||||
"max_differences_per_frame": max_diff,
|
||||
"avg_differences_per_frame": total_diff / min_frames if min_frames > 0 else 0,
|
||||
"match_percentage": (identical_frames / min_frames * 100)
|
||||
if min_frames > 0
|
||||
else 0,
|
||||
}
|
||||
|
||||
# Compare performance stats
|
||||
sideline_stats = sideline_data.get("capture_stats", {})
|
||||
upstream_stats = upstream_data.get("capture_stats", {})
|
||||
performance_comparison = {
|
||||
"sideline": {
|
||||
"total_time_ms": sideline_stats.get("total_time_ms", 0),
|
||||
"avg_frame_time_ms": sideline_stats.get("avg_frame_time_ms", 0),
|
||||
"fps": sideline_stats.get("fps", 0),
|
||||
},
|
||||
"upstream": {
|
||||
"total_time_ms": upstream_stats.get("total_time_ms", 0),
|
||||
"avg_frame_time_ms": upstream_stats.get("avg_frame_time_ms", 0),
|
||||
"fps": upstream_stats.get("fps", 0),
|
||||
},
|
||||
"diff": {
|
||||
"total_time_ms": sideline_stats.get("total_time_ms", 0)
|
||||
- upstream_stats.get("total_time_ms", 0),
|
||||
"avg_frame_time_ms": sideline_stats.get("avg_frame_time_ms", 0)
|
||||
- upstream_stats.get("avg_frame_time_ms", 0),
|
||||
"fps": sideline_stats.get("fps", 0) - upstream_stats.get("fps", 0),
|
||||
},
|
||||
}
|
||||
|
||||
# Build comparison result
|
||||
result = {
|
||||
"preset": sideline_data["preset"],
|
||||
"config_diff": config_diff,
|
||||
"frame_count_match": frame_count_match,
|
||||
"stats": stats,
|
||||
"performance_comparison": performance_comparison,
|
||||
"frame_comparisons": frame_comparisons,
|
||||
"sideline_file": str(sideline_file),
|
||||
"upstream_file": str(upstream_file),
|
||||
}
|
||||
|
||||
# Save comparison result
|
||||
output_file = output_dir / f"{sideline_data['preset']}_comparison.json"
|
||||
with open(output_file, "w") as f:
|
||||
json.dump(result, f, indent=2)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def generate_html_report(
|
||||
comparison_results: List[Dict[str, Any]],
|
||||
output_dir: Path = Path("tests/comparison_output"),
|
||||
) -> Path:
|
||||
"""Generate HTML report from comparison results using acceptance_report.py.
|
||||
|
||||
Args:
|
||||
comparison_results: List of comparison results
|
||||
output_dir: Directory to save HTML report
|
||||
|
||||
Returns:
|
||||
Path to generated HTML report
|
||||
"""
|
||||
from tests.acceptance_report import save_index_report
|
||||
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Generate index report with links to all comparison results
|
||||
reports = []
|
||||
for result in comparison_results:
|
||||
reports.append(
|
||||
{
|
||||
"test_name": f"comparison-{result['preset']}",
|
||||
"status": "PASS" if result.get("status") == "success" else "FAIL",
|
||||
"frame_count": result["stats"]["total_frames_compared"],
|
||||
"duration_ms": result["performance_comparison"]["sideline"][
|
||||
"total_time_ms"
|
||||
],
|
||||
}
|
||||
)
|
||||
|
||||
# Save index report
|
||||
index_file = save_index_report(reports, str(output_dir))
|
||||
|
||||
# Also save a summary JSON file for programmatic access
|
||||
summary_file = output_dir / "comparison_summary.json"
|
||||
with open(summary_file, "w") as f:
|
||||
json.dump(
|
||||
{
|
||||
"timestamp": __import__("datetime").datetime.now().isoformat(),
|
||||
"results": comparison_results,
|
||||
},
|
||||
f,
|
||||
indent=2,
|
||||
)
|
||||
|
||||
return Path(index_file)
|
||||
253
tests/comparison_presets.toml
Normal file
253
tests/comparison_presets.toml
Normal file
@@ -0,0 +1,253 @@
|
||||
# Comparison Presets for Upstream vs Sideline Testing
|
||||
# These presets are designed to test various pipeline configurations
|
||||
# to ensure visual equivalence and performance parity
|
||||
|
||||
# ============================================
|
||||
# CORE PIPELINE TESTS (Basic functionality)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-basic]
|
||||
description = "Comparison: Basic pipeline, no effects"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-with-message-overlay]
|
||||
description = "Comparison: Basic pipeline with message overlay"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
frame_count = 30
|
||||
|
||||
# ============================================
|
||||
# EFFECT TESTS (Various effect combinations)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-single-effect]
|
||||
description = "Comparison: Single effect (border)"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = ["border"]
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-multiple-effects]
|
||||
description = "Comparison: Multiple effects chain"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = ["border", "tint", "hud"]
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-all-effects]
|
||||
description = "Comparison: All available effects"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = ["border", "tint", "hud", "fade", "noise", "glitch"]
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
# ============================================
|
||||
# CAMERA MODE TESTS (Different viewport behaviors)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-camera-feed]
|
||||
description = "Comparison: Feed camera mode"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-camera-scroll]
|
||||
description = "Comparison: Scroll camera mode"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "scroll"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
camera_speed = 0.5
|
||||
|
||||
[presets.comparison-camera-horizontal]
|
||||
description = "Comparison: Horizontal camera mode"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "horizontal"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
# ============================================
|
||||
# SOURCE TESTS (Different data sources)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-source-headlines]
|
||||
description = "Comparison: Headlines source"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-source-poetry]
|
||||
description = "Comparison: Poetry source"
|
||||
source = "poetry"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-source-empty]
|
||||
description = "Comparison: Empty source (blank canvas)"
|
||||
source = "empty"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
# ============================================
|
||||
# DIMENSION TESTS (Different viewport sizes)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-small-viewport]
|
||||
description = "Comparison: Small viewport"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 60
|
||||
viewport_height = 20
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-large-viewport]
|
||||
description = "Comparison: Large viewport"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 120
|
||||
viewport_height = 40
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-wide-viewport]
|
||||
description = "Comparison: Wide viewport"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 160
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
# ============================================
|
||||
# COMPREHENSIVE TESTS (Combined scenarios)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-comprehensive-1]
|
||||
description = "Comparison: Headlines + Effects + Message Overlay"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = ["border", "tint"]
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-comprehensive-2]
|
||||
description = "Comparison: Poetry + Camera Scroll + Effects"
|
||||
source = "poetry"
|
||||
display = "null"
|
||||
camera = "scroll"
|
||||
effects = ["fade", "noise"]
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
camera_speed = 0.3
|
||||
|
||||
[presets.comparison-comprehensive-3]
|
||||
description = "Comparison: Headlines + Horizontal Camera + All Effects"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "horizontal"
|
||||
effects = ["border", "tint", "hud", "fade"]
|
||||
viewport_width = 100
|
||||
viewport_height = 30
|
||||
enable_message_overlay = true
|
||||
frame_count = 30
|
||||
|
||||
# ============================================
|
||||
# REGRESSION TESTS (Specific edge cases)
|
||||
# ============================================
|
||||
|
||||
[presets.comparison-regression-empty-message]
|
||||
description = "Regression: Empty message overlay"
|
||||
source = "empty"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 24
|
||||
enable_message_overlay = true
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-regression-narrow-viewport]
|
||||
description = "Regression: Very narrow viewport with long text"
|
||||
source = "headlines"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 40
|
||||
viewport_height = 24
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
|
||||
[presets.comparison-regression-tall-viewport]
|
||||
description = "Regression: Tall viewport with few items"
|
||||
source = "empty"
|
||||
display = "null"
|
||||
camera = "feed"
|
||||
effects = []
|
||||
viewport_width = 80
|
||||
viewport_height = 60
|
||||
enable_message_overlay = false
|
||||
frame_count = 30
|
||||
243
tests/run_comparison.py
Normal file
243
tests/run_comparison.py
Normal file
@@ -0,0 +1,243 @@
|
||||
"""Main comparison runner for upstream vs sideline testing.
|
||||
|
||||
This script runs comparisons between upstream and sideline implementations
|
||||
using multiple presets and generates HTML reports.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
from tests.comparison_capture import (
|
||||
capture_frames,
|
||||
compare_captured_outputs,
|
||||
generate_html_report,
|
||||
)
|
||||
|
||||
|
||||
def load_comparison_presets() -> list[str]:
|
||||
"""Load list of comparison presets from config file.
|
||||
|
||||
Returns:
|
||||
List of preset names
|
||||
"""
|
||||
import tomli
|
||||
|
||||
config_file = Path("tests/comparison_presets.toml")
|
||||
if not config_file.exists():
|
||||
raise FileNotFoundError(f"Comparison presets not found: {config_file}")
|
||||
|
||||
with open(config_file, "rb") as f:
|
||||
config = tomli.load(f)
|
||||
|
||||
presets = list(config.get("presets", {}).keys())
|
||||
# Strip "presets." prefix if present
|
||||
return [p.replace("presets.", "") for p in presets]
|
||||
|
||||
|
||||
def run_comparison_for_preset(
|
||||
preset_name: str,
|
||||
sideline_only: bool = False,
|
||||
upstream_file: Path | None = None,
|
||||
) -> dict:
|
||||
"""Run comparison for a single preset.
|
||||
|
||||
Args:
|
||||
preset_name: Name of preset to test
|
||||
sideline_only: If True, only capture sideline frames
|
||||
upstream_file: Path to upstream captured output (if not None, use this instead of capturing)
|
||||
|
||||
Returns:
|
||||
Comparison result dict
|
||||
"""
|
||||
print(f" Running preset: {preset_name}")
|
||||
|
||||
# Capture sideline frames
|
||||
sideline_data = capture_frames(preset_name, frame_count=30)
|
||||
sideline_file = Path(f"tests/comparison_output/{preset_name}_sideline.json")
|
||||
|
||||
if sideline_only:
|
||||
return {
|
||||
"preset": preset_name,
|
||||
"status": "sideline_only",
|
||||
"sideline_file": str(sideline_file),
|
||||
}
|
||||
|
||||
# Use provided upstream file or look for it
|
||||
if upstream_file:
|
||||
upstream_path = upstream_file
|
||||
else:
|
||||
upstream_path = Path(f"tests/comparison_output/{preset_name}_upstream.json")
|
||||
|
||||
if not upstream_path.exists():
|
||||
print(f" Warning: Upstream file not found: {upstream_path}")
|
||||
return {
|
||||
"preset": preset_name,
|
||||
"status": "missing_upstream",
|
||||
"sideline_file": str(sideline_file),
|
||||
"upstream_file": str(upstream_path),
|
||||
}
|
||||
|
||||
# Compare outputs
|
||||
try:
|
||||
comparison_result = compare_captured_outputs(
|
||||
sideline_file=sideline_file,
|
||||
upstream_file=upstream_path,
|
||||
)
|
||||
comparison_result["status"] = "success"
|
||||
return comparison_result
|
||||
except Exception as e:
|
||||
print(f" Error comparing outputs: {e}")
|
||||
return {
|
||||
"preset": preset_name,
|
||||
"status": "error",
|
||||
"error": str(e),
|
||||
"sideline_file": str(sideline_file),
|
||||
"upstream_file": str(upstream_path),
|
||||
}
|
||||
|
||||
|
||||
def main():
|
||||
"""Main entry point for comparison runner."""
|
||||
parser = argparse.ArgumentParser(
|
||||
description="Run comparison tests between upstream and sideline implementations"
|
||||
)
|
||||
parser.add_argument(
|
||||
"--preset",
|
||||
"-p",
|
||||
help="Run specific preset (can be specified multiple times)",
|
||||
action="append",
|
||||
dest="presets",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--all",
|
||||
"-a",
|
||||
help="Run all comparison presets",
|
||||
action="store_true",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--sideline-only",
|
||||
"-s",
|
||||
help="Only capture sideline frames (no comparison)",
|
||||
action="store_true",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--upstream-file",
|
||||
"-u",
|
||||
help="Path to upstream captured output file",
|
||||
type=Path,
|
||||
)
|
||||
parser.add_argument(
|
||||
"--output-dir",
|
||||
"-o",
|
||||
help="Output directory for captured frames and reports",
|
||||
type=Path,
|
||||
default=Path("tests/comparison_output"),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--no-report",
|
||||
help="Skip HTML report generation",
|
||||
action="store_true",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
# Determine which presets to run
|
||||
if args.presets:
|
||||
presets_to_run = args.presets
|
||||
elif args.all:
|
||||
presets_to_run = load_comparison_presets()
|
||||
else:
|
||||
print("Error: Either --preset or --all must be specified")
|
||||
print(f"Available presets: {', '.join(load_comparison_presets())}")
|
||||
sys.exit(1)
|
||||
|
||||
print(f"Running comparison for {len(presets_to_run)} preset(s)")
|
||||
print(f"Output directory: {args.output_dir}")
|
||||
print()
|
||||
|
||||
# Run comparisons
|
||||
results = []
|
||||
for preset_name in presets_to_run:
|
||||
try:
|
||||
result = run_comparison_for_preset(
|
||||
preset_name,
|
||||
sideline_only=args.sideline_only,
|
||||
upstream_file=args.upstream_file,
|
||||
)
|
||||
results.append(result)
|
||||
|
||||
if result["status"] == "success":
|
||||
match_pct = result["stats"]["match_percentage"]
|
||||
print(f" ✓ Match: {match_pct:.1f}%")
|
||||
elif result["status"] == "missing_upstream":
|
||||
print(f" ⚠ Missing upstream file")
|
||||
elif result["status"] == "error":
|
||||
print(f" ✗ Error: {result['error']}")
|
||||
else:
|
||||
print(f" ✓ Captured sideline only")
|
||||
|
||||
except Exception as e:
|
||||
print(f" ✗ Failed: {e}")
|
||||
results.append(
|
||||
{
|
||||
"preset": preset_name,
|
||||
"status": "failed",
|
||||
"error": str(e),
|
||||
}
|
||||
)
|
||||
|
||||
# Generate HTML report
|
||||
if not args.no_report and not args.sideline_only:
|
||||
successful_results = [r for r in results if r.get("status") == "success"]
|
||||
if successful_results:
|
||||
print(f"\nGenerating HTML report...")
|
||||
report_file = generate_html_report(successful_results, args.output_dir)
|
||||
print(f" Report saved to: {report_file}")
|
||||
|
||||
# Also save summary JSON
|
||||
summary_file = args.output_dir / "comparison_summary.json"
|
||||
with open(summary_file, "w") as f:
|
||||
json.dump(
|
||||
{
|
||||
"timestamp": __import__("datetime").datetime.now().isoformat(),
|
||||
"presets_tested": [r["preset"] for r in results],
|
||||
"results": results,
|
||||
},
|
||||
f,
|
||||
indent=2,
|
||||
)
|
||||
print(f" Summary saved to: {summary_file}")
|
||||
else:
|
||||
print(f"\nNote: No successful comparisons to report.")
|
||||
print(f" Capture files saved in {args.output_dir}")
|
||||
print(f" Run comparison when upstream files are available.")
|
||||
|
||||
# Print summary
|
||||
print("\n" + "=" * 60)
|
||||
print("SUMMARY")
|
||||
print("=" * 60)
|
||||
|
||||
status_counts = {}
|
||||
for result in results:
|
||||
status = result.get("status", "unknown")
|
||||
status_counts[status] = status_counts.get(status, 0) + 1
|
||||
|
||||
for status, count in sorted(status_counts.items()):
|
||||
print(f" {status}: {count}")
|
||||
|
||||
if "success" in status_counts:
|
||||
successful_results = [r for r in results if r.get("status") == "success"]
|
||||
avg_match = sum(
|
||||
r["stats"]["match_percentage"] for r in successful_results
|
||||
) / len(successful_results)
|
||||
print(f"\n Average match rate: {avg_match:.1f}%")
|
||||
|
||||
# Exit with error code if any failures
|
||||
if any(r.get("status") in ["error", "failed"] for r in results):
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
341
tests/test_comparison_framework.py
Normal file
341
tests/test_comparison_framework.py
Normal file
@@ -0,0 +1,341 @@
|
||||
"""Comparison framework tests for upstream vs sideline pipeline.
|
||||
|
||||
These tests verify that the comparison framework works correctly
|
||||
and can be used for regression testing.
|
||||
"""
|
||||
|
||||
import json
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
from tests.comparison_capture import capture_frames, compare_captured_outputs
|
||||
|
||||
|
||||
class TestComparisonCapture:
|
||||
"""Tests for frame capture functionality."""
|
||||
|
||||
def test_capture_basic_preset(self):
|
||||
"""Test capturing frames from a basic preset."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
output_dir = Path(tmpdir)
|
||||
|
||||
# Capture frames
|
||||
result = capture_frames(
|
||||
preset_name="comparison-basic",
|
||||
frame_count=10,
|
||||
output_dir=output_dir,
|
||||
)
|
||||
|
||||
# Verify result structure
|
||||
assert "preset" in result
|
||||
assert "config" in result
|
||||
assert "frames" in result
|
||||
assert "capture_stats" in result
|
||||
|
||||
# Verify frame count
|
||||
assert len(result["frames"]) == 10
|
||||
|
||||
# Verify frame structure
|
||||
frame = result["frames"][0]
|
||||
assert "frame_number" in frame
|
||||
assert "buffer" in frame
|
||||
assert "width" in frame
|
||||
assert "height" in frame
|
||||
|
||||
def test_capture_with_message_overlay(self):
|
||||
"""Test capturing frames with message overlay enabled."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
output_dir = Path(tmpdir)
|
||||
|
||||
result = capture_frames(
|
||||
preset_name="comparison-with-message-overlay",
|
||||
frame_count=5,
|
||||
output_dir=output_dir,
|
||||
)
|
||||
|
||||
# Verify message overlay is enabled in config
|
||||
assert result["config"]["enable_message_overlay"] is True
|
||||
|
||||
def test_capture_multiple_presets(self):
|
||||
"""Test capturing frames from multiple presets."""
|
||||
presets = ["comparison-basic", "comparison-single-effect"]
|
||||
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
output_dir = Path(tmpdir)
|
||||
|
||||
for preset in presets:
|
||||
result = capture_frames(
|
||||
preset_name=preset,
|
||||
frame_count=5,
|
||||
output_dir=output_dir,
|
||||
)
|
||||
assert result["preset"] == preset
|
||||
|
||||
|
||||
class TestComparisonAnalysis:
|
||||
"""Tests for comparison analysis functionality."""
|
||||
|
||||
def test_compare_identical_outputs(self):
|
||||
"""Test comparing identical outputs shows 100% match."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
output_dir = Path(tmpdir)
|
||||
|
||||
# Create two identical captured outputs
|
||||
sideline_file = output_dir / "test_sideline.json"
|
||||
upstream_file = output_dir / "test_upstream.json"
|
||||
|
||||
test_data = {
|
||||
"preset": "test",
|
||||
"config": {"viewport_width": 80, "viewport_height": 24},
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": 0,
|
||||
"buffer": ["Line 1", "Line 2", "Line 3"],
|
||||
"width": 80,
|
||||
"height": 24,
|
||||
"render_time_ms": 10.0,
|
||||
}
|
||||
],
|
||||
"capture_stats": {
|
||||
"frame_count": 1,
|
||||
"total_time_ms": 10.0,
|
||||
"avg_frame_time_ms": 10.0,
|
||||
"fps": 100.0,
|
||||
},
|
||||
}
|
||||
|
||||
with open(sideline_file, "w") as f:
|
||||
json.dump(test_data, f)
|
||||
|
||||
with open(upstream_file, "w") as f:
|
||||
json.dump(test_data, f)
|
||||
|
||||
# Compare
|
||||
result = compare_captured_outputs(
|
||||
sideline_file=sideline_file,
|
||||
upstream_file=upstream_file,
|
||||
)
|
||||
|
||||
# Should have 100% match
|
||||
assert result["stats"]["match_percentage"] == 100.0
|
||||
assert result["stats"]["identical_frames"] == 1
|
||||
assert result["stats"]["total_differences"] == 0
|
||||
|
||||
def test_compare_different_outputs(self):
|
||||
"""Test comparing different outputs detects differences."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
output_dir = Path(tmpdir)
|
||||
|
||||
sideline_file = output_dir / "test_sideline.json"
|
||||
upstream_file = output_dir / "test_upstream.json"
|
||||
|
||||
# Create different outputs
|
||||
sideline_data = {
|
||||
"preset": "test",
|
||||
"config": {"viewport_width": 80, "viewport_height": 24},
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": 0,
|
||||
"buffer": ["Sideline Line 1", "Line 2"],
|
||||
"width": 80,
|
||||
"height": 24,
|
||||
"render_time_ms": 10.0,
|
||||
}
|
||||
],
|
||||
"capture_stats": {
|
||||
"frame_count": 1,
|
||||
"total_time_ms": 10.0,
|
||||
"avg_frame_time_ms": 10.0,
|
||||
"fps": 100.0,
|
||||
},
|
||||
}
|
||||
|
||||
upstream_data = {
|
||||
"preset": "test",
|
||||
"config": {"viewport_width": 80, "viewport_height": 24},
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": 0,
|
||||
"buffer": ["Upstream Line 1", "Line 2"],
|
||||
"width": 80,
|
||||
"height": 24,
|
||||
"render_time_ms": 12.0,
|
||||
}
|
||||
],
|
||||
"capture_stats": {
|
||||
"frame_count": 1,
|
||||
"total_time_ms": 12.0,
|
||||
"avg_frame_time_ms": 12.0,
|
||||
"fps": 83.33,
|
||||
},
|
||||
}
|
||||
|
||||
with open(sideline_file, "w") as f:
|
||||
json.dump(sideline_data, f)
|
||||
|
||||
with open(upstream_file, "w") as f:
|
||||
json.dump(upstream_data, f)
|
||||
|
||||
# Compare
|
||||
result = compare_captured_outputs(
|
||||
sideline_file=sideline_file,
|
||||
upstream_file=upstream_file,
|
||||
)
|
||||
|
||||
# Should detect differences
|
||||
assert result["stats"]["match_percentage"] < 100.0
|
||||
assert result["stats"]["total_differences"] > 0
|
||||
assert len(result["frame_comparisons"][0]["line_diffs"]) > 0
|
||||
|
||||
def test_performance_comparison(self):
|
||||
"""Test that performance metrics are compared correctly."""
|
||||
with tempfile.TemporaryDirectory() as tmpdir:
|
||||
output_dir = Path(tmpdir)
|
||||
|
||||
sideline_file = output_dir / "test_sideline.json"
|
||||
upstream_file = output_dir / "test_upstream.json"
|
||||
|
||||
sideline_data = {
|
||||
"preset": "test",
|
||||
"config": {"viewport_width": 80, "viewport_height": 24},
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": 0,
|
||||
"buffer": [],
|
||||
"width": 80,
|
||||
"height": 24,
|
||||
"render_time_ms": 10.0,
|
||||
}
|
||||
],
|
||||
"capture_stats": {
|
||||
"frame_count": 1,
|
||||
"total_time_ms": 10.0,
|
||||
"avg_frame_time_ms": 10.0,
|
||||
"fps": 100.0,
|
||||
},
|
||||
}
|
||||
|
||||
upstream_data = {
|
||||
"preset": "test",
|
||||
"config": {"viewport_width": 80, "viewport_height": 24},
|
||||
"frames": [
|
||||
{
|
||||
"frame_number": 0,
|
||||
"buffer": [],
|
||||
"width": 80,
|
||||
"height": 24,
|
||||
"render_time_ms": 12.0,
|
||||
}
|
||||
],
|
||||
"capture_stats": {
|
||||
"frame_count": 1,
|
||||
"total_time_ms": 12.0,
|
||||
"avg_frame_time_ms": 12.0,
|
||||
"fps": 83.33,
|
||||
},
|
||||
}
|
||||
|
||||
with open(sideline_file, "w") as f:
|
||||
json.dump(sideline_data, f)
|
||||
|
||||
with open(upstream_file, "w") as f:
|
||||
json.dump(upstream_data, f)
|
||||
|
||||
result = compare_captured_outputs(
|
||||
sideline_file=sideline_file,
|
||||
upstream_file=upstream_file,
|
||||
)
|
||||
|
||||
# Verify performance comparison
|
||||
perf = result["performance_comparison"]
|
||||
assert "sideline" in perf
|
||||
assert "upstream" in perf
|
||||
assert "diff" in perf
|
||||
assert (
|
||||
perf["sideline"]["fps"] > perf["upstream"]["fps"]
|
||||
) # Sideline is faster in this example
|
||||
|
||||
|
||||
class TestComparisonPresets:
|
||||
"""Tests for comparison preset configuration."""
|
||||
|
||||
def test_comparison_presets_exist(self):
|
||||
"""Test that comparison presets file exists and is valid."""
|
||||
presets_file = Path("tests/comparison_presets.toml")
|
||||
assert presets_file.exists(), "Comparison presets file should exist"
|
||||
|
||||
def test_preset_structure(self):
|
||||
"""Test that presets have required fields."""
|
||||
import tomli
|
||||
|
||||
with open("tests/comparison_presets.toml", "rb") as f:
|
||||
config = tomli.load(f)
|
||||
|
||||
presets = config.get("presets", {})
|
||||
assert len(presets) > 0, "Should have at least one preset"
|
||||
|
||||
for preset_name, preset_config in presets.items():
|
||||
# Each preset should have required fields
|
||||
assert "source" in preset_config, f"{preset_name} should have 'source'"
|
||||
assert "display" in preset_config, f"{preset_name} should have 'display'"
|
||||
assert "camera" in preset_config, f"{preset_name} should have 'camera'"
|
||||
assert "viewport_width" in preset_config, (
|
||||
f"{preset_name} should have 'viewport_width'"
|
||||
)
|
||||
assert "viewport_height" in preset_config, (
|
||||
f"{preset_name} should have 'viewport_height'"
|
||||
)
|
||||
assert "frame_count" in preset_config, (
|
||||
f"{preset_name} should have 'frame_count'"
|
||||
)
|
||||
|
||||
def test_preset_variety(self):
|
||||
"""Test that presets cover different scenarios."""
|
||||
import tomli
|
||||
|
||||
with open("tests/comparison_presets.toml", "rb") as f:
|
||||
config = tomli.load(f)
|
||||
|
||||
presets = config.get("presets", {})
|
||||
|
||||
# Should have presets for different categories
|
||||
categories = {
|
||||
"basic": 0,
|
||||
"effect": 0,
|
||||
"camera": 0,
|
||||
"source": 0,
|
||||
"viewport": 0,
|
||||
"comprehensive": 0,
|
||||
"regression": 0,
|
||||
}
|
||||
|
||||
for preset_name in presets.keys():
|
||||
name_lower = preset_name.lower()
|
||||
if "basic" in name_lower:
|
||||
categories["basic"] += 1
|
||||
elif (
|
||||
"effect" in name_lower or "border" in name_lower or "tint" in name_lower
|
||||
):
|
||||
categories["effect"] += 1
|
||||
elif "camera" in name_lower:
|
||||
categories["camera"] += 1
|
||||
elif "source" in name_lower:
|
||||
categories["source"] += 1
|
||||
elif (
|
||||
"viewport" in name_lower
|
||||
or "small" in name_lower
|
||||
or "large" in name_lower
|
||||
):
|
||||
categories["viewport"] += 1
|
||||
elif "comprehensive" in name_lower:
|
||||
categories["comprehensive"] += 1
|
||||
elif "regression" in name_lower:
|
||||
categories["regression"] += 1
|
||||
|
||||
# Verify we have variety
|
||||
assert categories["basic"] > 0, "Should have at least one basic preset"
|
||||
assert categories["effect"] > 0, "Should have at least one effect preset"
|
||||
assert categories["camera"] > 0, "Should have at least one camera preset"
|
||||
assert categories["source"] > 0, "Should have at least one source preset"
|
||||
234
tests/test_visual_verification.py
Normal file
234
tests/test_visual_verification.py
Normal file
@@ -0,0 +1,234 @@
|
||||
"""
|
||||
Visual verification tests for message overlay and effect rendering.
|
||||
|
||||
These tests verify that the sideline pipeline produces visual output
|
||||
that matches the expected behavior of upstream/main, even if the
|
||||
buffer format differs due to architectural differences.
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
from engine.display import DisplayRegistry
|
||||
from engine.pipeline import Pipeline, PipelineConfig, PipelineContext
|
||||
from engine.pipeline.adapters import create_stage_from_display
|
||||
from engine.pipeline.params import PipelineParams
|
||||
from engine.pipeline.presets import get_preset
|
||||
|
||||
|
||||
class TestMessageOverlayVisuals:
|
||||
"""Test message overlay visual rendering."""
|
||||
|
||||
def test_message_overlay_produces_output(self):
|
||||
"""Verify message overlay stage produces output when ntfy message is present."""
|
||||
# This test verifies the message overlay stage is working
|
||||
# It doesn't compare with upstream, just verifies functionality
|
||||
|
||||
from engine.pipeline.adapters.message_overlay import MessageOverlayStage
|
||||
from engine.pipeline.adapters import MessageOverlayConfig
|
||||
|
||||
# Test the rendering function directly
|
||||
stage = MessageOverlayStage(
|
||||
config=MessageOverlayConfig(enabled=True, display_secs=30)
|
||||
)
|
||||
|
||||
# Test with a mock message
|
||||
msg = ("Test Title", "Test Message Body", 0.0)
|
||||
w, h = 80, 24
|
||||
|
||||
# Render overlay
|
||||
overlay, _ = stage._render_message_overlay(msg, w, h, (None, None))
|
||||
|
||||
# Verify overlay has content
|
||||
assert len(overlay) > 0, "Overlay should have content when message is present"
|
||||
|
||||
# Verify overlay contains expected content
|
||||
overlay_text = "".join(overlay)
|
||||
# Note: Message body is rendered as block characters, not text
|
||||
# The title appears in the metadata line
|
||||
assert "Test Title" in overlay_text, "Overlay should contain message title"
|
||||
assert "ntfy" in overlay_text, "Overlay should contain ntfy metadata"
|
||||
assert "\033[" in overlay_text, "Overlay should contain ANSI codes"
|
||||
|
||||
def test_message_overlay_appears_in_correct_position(self):
|
||||
"""Verify message overlay appears in centered position."""
|
||||
# This test verifies the message overlay positioning logic
|
||||
# It checks that the overlay coordinates are calculated correctly
|
||||
|
||||
from engine.pipeline.adapters.message_overlay import MessageOverlayStage
|
||||
from engine.pipeline.adapters import MessageOverlayConfig
|
||||
|
||||
stage = MessageOverlayStage(config=MessageOverlayConfig())
|
||||
|
||||
# Test positioning calculation
|
||||
msg = ("Test Title", "Test Body", 0.0)
|
||||
w, h = 80, 24
|
||||
|
||||
# Render overlay
|
||||
overlay, _ = stage._render_message_overlay(msg, w, h, (None, None))
|
||||
|
||||
# Verify overlay has content
|
||||
assert len(overlay) > 0, "Overlay should have content"
|
||||
|
||||
# Verify overlay contains cursor positioning codes
|
||||
overlay_text = "".join(overlay)
|
||||
assert "\033[" in overlay_text, "Overlay should contain ANSI codes"
|
||||
assert "H" in overlay_text, "Overlay should contain cursor positioning"
|
||||
|
||||
# Verify panel is centered (check first line's position)
|
||||
# Panel height is len(msg_rows) + 2 (content + meta + border)
|
||||
# panel_top = max(0, (h - panel_h) // 2)
|
||||
# First content line should be at panel_top + 1
|
||||
first_line = overlay[0]
|
||||
assert "\033[" in first_line, "First line should have cursor positioning"
|
||||
assert ";1H" in first_line, "First line should position at column 1"
|
||||
|
||||
def test_theme_system_integration(self):
|
||||
"""Verify theme system is integrated with message overlay."""
|
||||
from engine import config as engine_config
|
||||
from engine.themes import THEME_REGISTRY
|
||||
|
||||
# Verify theme registry has expected themes
|
||||
assert "green" in THEME_REGISTRY, "Green theme should exist"
|
||||
assert "orange" in THEME_REGISTRY, "Orange theme should exist"
|
||||
assert "purple" in THEME_REGISTRY, "Purple theme should exist"
|
||||
|
||||
# Verify active theme is set
|
||||
assert engine_config.ACTIVE_THEME is not None, "Active theme should be set"
|
||||
assert engine_config.ACTIVE_THEME.name in THEME_REGISTRY, (
|
||||
"Active theme should be in registry"
|
||||
)
|
||||
|
||||
# Verify theme has gradient colors
|
||||
assert len(engine_config.ACTIVE_THEME.main_gradient) == 12, (
|
||||
"Main gradient should have 12 colors"
|
||||
)
|
||||
assert len(engine_config.ACTIVE_THEME.message_gradient) == 12, (
|
||||
"Message gradient should have 12 colors"
|
||||
)
|
||||
|
||||
|
||||
class TestPipelineExecutionOrder:
|
||||
"""Test pipeline execution order for visual consistency."""
|
||||
|
||||
def test_message_overlay_after_camera(self):
|
||||
"""Verify message overlay is applied after camera transformation."""
|
||||
from engine.pipeline import Pipeline, PipelineConfig, PipelineContext
|
||||
from engine.pipeline.adapters import (
|
||||
create_stage_from_display,
|
||||
MessageOverlayStage,
|
||||
MessageOverlayConfig,
|
||||
)
|
||||
from engine.display import DisplayRegistry
|
||||
|
||||
# Create pipeline
|
||||
config = PipelineConfig(
|
||||
source="empty",
|
||||
display="null",
|
||||
camera="feed",
|
||||
effects=[],
|
||||
)
|
||||
|
||||
ctx = PipelineContext()
|
||||
pipeline = Pipeline(config=config, context=ctx)
|
||||
|
||||
# Add stages
|
||||
from engine.data_sources.sources import EmptyDataSource
|
||||
from engine.pipeline.adapters import DataSourceStage
|
||||
|
||||
pipeline.add_stage(
|
||||
"source",
|
||||
DataSourceStage(EmptyDataSource(width=80, height=24), name="empty"),
|
||||
)
|
||||
pipeline.add_stage(
|
||||
"message_overlay", MessageOverlayStage(config=MessageOverlayConfig())
|
||||
)
|
||||
pipeline.add_stage(
|
||||
"display", create_stage_from_display(DisplayRegistry.create("null"), "null")
|
||||
)
|
||||
|
||||
# Build and check order
|
||||
pipeline.build()
|
||||
execution_order = pipeline.execution_order
|
||||
|
||||
# Verify message_overlay comes after camera stages
|
||||
camera_idx = next(
|
||||
(i for i, name in enumerate(execution_order) if "camera" in name), -1
|
||||
)
|
||||
msg_idx = next(
|
||||
(i for i, name in enumerate(execution_order) if "message_overlay" in name),
|
||||
-1,
|
||||
)
|
||||
|
||||
if camera_idx >= 0 and msg_idx >= 0:
|
||||
assert msg_idx > camera_idx, "Message overlay should be after camera stage"
|
||||
|
||||
|
||||
class TestCapturedOutputAnalysis:
|
||||
"""Test analysis of captured output files."""
|
||||
|
||||
def test_captured_files_exist(self):
|
||||
"""Verify captured output files exist."""
|
||||
sideline_path = Path("output/sideline_demo.json")
|
||||
upstream_path = Path("output/upstream_demo.json")
|
||||
|
||||
assert sideline_path.exists(), "Sideline capture file should exist"
|
||||
assert upstream_path.exists(), "Upstream capture file should exist"
|
||||
|
||||
def test_captured_files_valid(self):
|
||||
"""Verify captured output files are valid JSON."""
|
||||
sideline_path = Path("output/sideline_demo.json")
|
||||
upstream_path = Path("output/upstream_demo.json")
|
||||
|
||||
with open(sideline_path) as f:
|
||||
sideline = json.load(f)
|
||||
with open(upstream_path) as f:
|
||||
upstream = json.load(f)
|
||||
|
||||
# Verify structure
|
||||
assert "frames" in sideline, "Sideline should have frames"
|
||||
assert "frames" in upstream, "Upstream should have frames"
|
||||
assert len(sideline["frames"]) > 0, "Sideline should have at least one frame"
|
||||
assert len(upstream["frames"]) > 0, "Upstream should have at least one frame"
|
||||
|
||||
def test_sideline_buffer_format(self):
|
||||
"""Verify sideline buffer format is plain text."""
|
||||
sideline_path = Path("output/sideline_demo.json")
|
||||
|
||||
with open(sideline_path) as f:
|
||||
sideline = json.load(f)
|
||||
|
||||
# Check first frame
|
||||
frame0 = sideline["frames"][0]["buffer"]
|
||||
|
||||
# Sideline should have plain text lines (no cursor positioning)
|
||||
# Check first few lines
|
||||
for i, line in enumerate(frame0[:5]):
|
||||
# Should not start with cursor positioning
|
||||
if line.strip():
|
||||
assert not line.startswith("\033["), (
|
||||
f"Line {i} should not start with cursor positioning"
|
||||
)
|
||||
# Should have actual content
|
||||
assert len(line.strip()) > 0, f"Line {i} should have content"
|
||||
|
||||
def test_upstream_buffer_format(self):
|
||||
"""Verify upstream buffer format includes cursor positioning."""
|
||||
upstream_path = Path("output/upstream_demo.json")
|
||||
|
||||
with open(upstream_path) as f:
|
||||
upstream = json.load(f)
|
||||
|
||||
# Check first frame
|
||||
frame0 = upstream["frames"][0]["buffer"]
|
||||
|
||||
# Upstream should have cursor positioning codes
|
||||
overlay_text = "".join(frame0[:10])
|
||||
assert "\033[" in overlay_text, "Upstream buffer should contain ANSI codes"
|
||||
assert "H" in overlay_text, "Upstream buffer should contain cursor positioning"
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
pytest.main([__file__, "-v"])
|
||||
Reference in New Issue
Block a user