feat(integration): Complete feature rewrite with pipeline architecture, effects system, and display improvements

Major changes:
- Pipeline architecture with capability-based dependency resolution (see the sketch after this list)
- Effects plugin system with performance monitoring
- Display abstraction with multiple backends (terminal, null, websocket)
- Camera system for viewport scrolling
- Sensor framework for real-time input
- Command-and-control system via ntfy
- WebSocket display backend for browser clients
- Comprehensive test suite and documentation
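
The capability-based dependency resolution in the pipeline (first item above) can be pictured roughly as follows. This is a minimal illustrative sketch, not the engine's actual API: Stage, resolve(), and the capability names are assumptions made for the example.

# Sketch of capability-based dependency resolution (illustrative names only):
# each stage declares what it requires and provides, and the resolver orders
# stages so that every requirement is satisfied by an earlier stage.
from dataclasses import dataclass, field


@dataclass
class Stage:
    name: str
    requires: set = field(default_factory=set)
    provides: set = field(default_factory=set)


def resolve(stages):
    """Return stages ordered so each one's requirements are already provided."""
    ordered, available, pending = [], set(), list(stages)
    while pending:
        ready = [s for s in pending if s.requires <= available]
        if not ready:
            missing = {c for s in pending for c in s.requires} - available
            raise ValueError(f"Unsatisfiable capabilities: {missing}")
        for stage in ready:
            ordered.append(stage)
            available |= stage.provides
            pending.remove(stage)
    return ordered


# Example: the display needs the camera's viewport, the camera needs a frame.
stages = [
    Stage("display", requires={"viewport"}),
    Stage("camera", requires={"frame"}, provides={"viewport"}),
    Stage("renderer", provides={"frame"}),
]
print([s.name for s in resolve(stages)])  # ['renderer', 'camera', 'display']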

Issue #48: ADR for preset scripting language included

This commit consolidates 110 individual commits into a single
feature integration that can be reviewed and tested before
further refinement.
2026-03-20 04:41:23 -07:00
parent 42aa6f16cc
commit ef98add0c5
179 changed files with 27649 additions and 6552 deletions

engine/benchmark.py (new file, 73 lines)

@@ -0,0 +1,73 @@
"""
Benchmark module for performance testing.
Usage:
python -m engine.benchmark # Run all benchmarks
python -m engine.benchmark --hook # Run benchmarks in hook mode (for CI)
python -m engine.benchmark --displays null --iterations 20
"""
import argparse
import sys
def main():
parser = argparse.ArgumentParser(description="Run performance benchmarks")
parser.add_argument(
"--hook",
action="store_true",
help="Run in hook mode (fail on regression)",
)
parser.add_argument(
"--displays",
default="null",
help="Comma-separated list of displays to benchmark",
)
parser.add_argument(
"--iterations",
type=int,
default=100,
help="Number of iterations per benchmark",
)
args = parser.parse_args()
# Run pytest with benchmark markers
pytest_args = [
"-v",
"-m",
"benchmark",
]
if args.hook:
# Hook mode: stricter settings
pytest_args.extend(
[
"--benchmark-only",
"--benchmark-compare",
"--benchmark-compare-fail=min:5%", # Fail if >5% slower
]
)
# Add display filter if specified
if args.displays:
pytest_args.extend(["-k", args.displays])
# Add iterations
if args.iterations:
# Set environment variable for benchmark tests
import os
os.environ["BENCHMARK_ITERATIONS"] = str(args.iterations)
# Run pytest
import subprocess
result = subprocess.run(
[sys.executable, "-m", "pytest", "tests/test_benchmark.py"] + pytest_args,
cwd=None, # Current directory
)
sys.exit(result.returncode)
if __name__ == "__main__":
main()
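
For context, a test in tests/test_benchmark.py that this runner targets could look roughly like the sketch below. It assumes the pytest-benchmark plugin (which provides the benchmark fixture and its pedantic() method); the test name and the frame-rendering stand-in are hypothetical, not taken from the actual suite.

# Hypothetical benchmark test picked up by `-m benchmark` and the `-k` display
# filter; the rendering body is a stand-in, not the real display API.
import os

import pytest


@pytest.mark.benchmark
def test_null_display_render(benchmark):
    # Round count comes from the environment variable set by engine.benchmark
    rounds = int(os.environ.get("BENCHMARK_ITERATIONS", "100"))

    def render_one_frame():
        return sum(range(1000))  # stand-in for rendering a frame on the null display

    benchmark.pedantic(render_one_frame, rounds=rounds)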