Siftarr

Siftarr is a distributed media file optimization system. It monitors media libraries, analyzes files, and distributes optimization tasks (transcoding, cleanup) to worker nodes using user-defined Lua scripts.

Project Roadmap & Checklist

Phase 1: Core Infrastructure & Communication

  • Database Setup
    • Design SQLite schema (nodes, libraries, files, jobs, scripts); a schema sketch follows this list.
    • Implement database initialization and migration logic.
  • Server - Node Communication (WebSocket)
    • Enhance handshake to include Node Capabilities (CPU/GPU, supported codecs) during registration.
    • Implement Heartbeat mechanism (POST /api/v1/nodes/{id}/heartbeat or WS ping/pong); a ping/pong sketch follows this list.
    • Implement Job Dispatching via WebSocket or polling (GET /api/v1/jobs/next).
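
As a starting point for the database work, here is a minimal sketch of schema initialization. It assumes the mattn/go-sqlite3 driver and uses the table names listed above; the column names shown are illustrative placeholders, and the real columns and migration strategy are still to be designed.

```go
// Package db: schema initialization sketch (assumed driver: mattn/go-sqlite3).
package db

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3" // assumed driver choice
)

// schema holds one CREATE statement per table; columns are placeholders.
var schema = []string{
	`CREATE TABLE IF NOT EXISTS nodes (
		id           INTEGER PRIMARY KEY,
		name         TEXT NOT NULL,
		capabilities TEXT,      -- JSON blob: CPU/GPU, supported codecs
		last_seen    TIMESTAMP
	)`,
	`CREATE TABLE IF NOT EXISTS libraries (
		id   INTEGER PRIMARY KEY,
		path TEXT NOT NULL UNIQUE
	)`,
	`CREATE TABLE IF NOT EXISTS files (
		id         INTEGER PRIMARY KEY,
		library_id INTEGER REFERENCES libraries(id),
		path       TEXT NOT NULL,
		metadata   TEXT,        -- JSON from ffprobe
		status     TEXT DEFAULT 'new'
	)`,
	`CREATE TABLE IF NOT EXISTS scripts (
		id   INTEGER PRIMARY KEY,
		name TEXT NOT NULL,
		body TEXT NOT NULL      -- Lua source
	)`,
	`CREATE TABLE IF NOT EXISTS jobs (
		id        INTEGER PRIMARY KEY,
		file_id   INTEGER REFERENCES files(id),
		script_id INTEGER REFERENCES scripts(id),
		node_id   INTEGER REFERENCES nodes(id),
		status    TEXT DEFAULT 'pending'
	)`,
}

// Open opens (or creates) the SQLite database and applies the schema.
func Open(path string) (*sql.DB, error) {
	d, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	for _, stmt := range schema {
		if _, err := d.Exec(stmt); err != nil {
			d.Close()
			return nil, err
		}
	}
	return d, nil
}
```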
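
For the heartbeat, a minimal sketch of the WS ping/pong option is below. It assumes gorilla/websocket; the intervals, the markOffline callback, and the choice of ping/pong over the POST /api/v1/nodes/{id}/heartbeat endpoint are placeholders.

```go
// Package server: WebSocket heartbeat sketch (assumed library: gorilla/websocket).
package server

import (
	"time"

	"github.com/gorilla/websocket"
)

const (
	pingInterval = 15 * time.Second
	pongWait     = 30 * time.Second
)

// keepAlive pings the node periodically and treats a missing pong as a
// dead connection so the node can be marked offline.
func keepAlive(conn *websocket.Conn, markOffline func()) {
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		// Every pong extends the read deadline.
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})

	ticker := time.NewTicker(pingInterval)
	defer ticker.Stop()
	for range ticker.C {
		deadline := time.Now().Add(5 * time.Second)
		if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
			markOffline()
			return
		}
	}
}
```

Note that pongs are only delivered while a read loop (conn.ReadMessage) is running elsewhere on the same connection.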

Phase 2: Library Monitoring & Analysis

  • Library Watcher (Server)
    • Implement file system watcher (recursive) for configured library paths; a watcher sketch follows this list.
    • Handle events: Create, Delete, Move/Rename.
  • Analysis Engine
    • Implement detectMediaType logic.
    • Implement file probing (using ffprobe or similar) to extract metadata (codec, container, bitrate); a probing sketch follows this list.
    • Store file metadata in DB.
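
A minimal sketch of the recursive library watcher, assuming fsnotify (github.com/fsnotify/fsnotify). Since fsnotify is not recursive, every existing subdirectory is registered up front and new directories are added as they appear; the onEvent callback is a placeholder for the Create/Delete/Move handling above.

```go
// Package watcher: recursive file system watcher sketch (assumed library: fsnotify).
package watcher

import (
	"io/fs"
	"log"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// Watch watches root and all of its subdirectories, calling onEvent for
// every create/remove/rename/write event.
func Watch(root string, onEvent func(fsnotify.Event)) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()

	// Register the root and every existing subdirectory.
	err = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			return w.Add(path)
		}
		return nil
	})
	if err != nil {
		return err
	}

	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return nil
			}
			// Newly created directories must be watched too.
			if ev.Op&fsnotify.Create != 0 {
				if info, err := os.Stat(ev.Name); err == nil && info.IsDir() {
					_ = w.Add(ev.Name)
				}
			}
			onEvent(ev)
		case err, ok := <-w.Errors:
			if !ok {
				return nil
			}
			log.Println("watch error:", err)
		}
	}
}
```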
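
And a minimal sketch of probing with ffprobe. The fields decoded here are a small subset of ffprobe's -show_format/-show_streams JSON output, and the ProbeResult shape is an assumption rather than a final schema.

```go
// Package analyze: ffprobe metadata extraction sketch.
package analyze

import (
	"context"
	"encoding/json"
	"os/exec"
)

// ProbeResult captures only the fields the roadmap mentions (codec,
// container, bitrate); ffprobe reports many more.
type ProbeResult struct {
	Format struct {
		FormatName string `json:"format_name"` // container
		BitRate    string `json:"bit_rate"`
	} `json:"format"`
	Streams []struct {
		CodecType string `json:"codec_type"` // "video" / "audio"
		CodecName string `json:"codec_name"`
	} `json:"streams"`
}

// Probe runs ffprobe and decodes the JSON it prints to stdout.
func Probe(ctx context.Context, path string) (*ProbeResult, error) {
	out, err := exec.CommandContext(ctx, "ffprobe",
		"-v", "quiet",
		"-print_format", "json",
		"-show_format",
		"-show_streams",
		path,
	).Output()
	if err != nil {
		return nil, err
	}
	var r ProbeResult
	if err := json.Unmarshal(out, &r); err != nil {
		return nil, err
	}
	return &r, nil
}
```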

Phase 3: Scripting Engine (Lua)

This replaces the original plan to use Anko; we will use go-lua instead.

  • Lua Environment Setup
    • Integrate github.com/Shopify/go-lua into the Node.
  • Host API Bindings (Lua -> Go); a binding sketch follows this list.
    • detectMediaType(path)
    • readMeta(path)
    • probeMedia(path)
    • ffmpegRun(args...) - Wraps exec.Command for FFmpeg.
    • logEntry(map)
  • Script Management API
    • CRUD endpoints for Scripts (/api/v1/scripts); a handler sketch follows this list.
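
Here is a minimal sketch of one host binding (probeMedia) registered into a go-lua state, assuming the github.com/Shopify/go-lua API (NewState/OpenLibraries/Register). The NewScriptState helper, the probe callback, and the convention of returning nil plus an error message to Lua are assumptions, not a finished design.

```go
// Package node: go-lua host binding sketch (assumed library: Shopify/go-lua).
package node

import (
	lua "github.com/Shopify/go-lua"
)

// NewScriptState builds a Lua state with part of the host API installed.
func NewScriptState(probe func(path string) (codec string, err error)) *lua.State {
	l := lua.NewState()
	lua.OpenLibraries(l)

	// probeMedia(path) -> codec string, or nil plus an error message.
	l.Register("probeMedia", func(l *lua.State) int {
		path := lua.CheckString(l, 1)
		codec, err := probe(path)
		if err != nil {
			l.PushNil()
			l.PushString(err.Error())
			return 2
		}
		l.PushString(codec)
		return 1
	})

	return l
}
```

The other bindings (detectMediaType, readMeta, ffmpegRun, logEntry) would follow the same Register pattern; a downloaded script would then be run with lua.DoString against this state.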
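
For the Script Management API, a minimal net/http sketch of the /api/v1/scripts route is below. The Script shape and the Store interface are assumptions standing in for the real database layer, and only list and create are shown.

```go
// Package api: /api/v1/scripts handler sketch (list + create only).
package api

import (
	"encoding/json"
	"net/http"
)

type Script struct {
	ID   int64  `json:"id"`
	Name string `json:"name"`
	Body string `json:"body"` // Lua source
}

// Store is a placeholder for the persistence layer.
type Store interface {
	ListScripts() ([]Script, error)
	CreateScript(s Script) (Script, error)
}

func RegisterScriptRoutes(mux *http.ServeMux, store Store) {
	mux.HandleFunc("/api/v1/scripts", func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet:
			scripts, err := store.ListScripts()
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			json.NewEncoder(w).Encode(scripts)
		case http.MethodPost:
			var s Script
			if err := json.NewDecoder(r.Body).Decode(&s); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			created, err := store.CreateScript(s)
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			w.WriteHeader(http.StatusCreated)
			json.NewEncoder(w).Encode(created)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	})
}
```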

Phase 4: Node Execution Logic

  • Job Processing (a job-loop sketch follows this list)
    • Node fetches "Next Job".
    • Node downloads the required Lua script.
    • Node executes the Lua script against the target file.
  • FFmpeg Integration
    • Ensure ffmpeg and ffprobe are available in the Node's path.
    • Handle stdout/stderr parsing from FFmpeg for progress reporting; a parsing sketch follows this list.
  • Job Completion
    • Report status (Success/Fail) and metrics (size saved) back to Server.
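
A minimal sketch of the node's job loop. GET /api/v1/jobs/next comes from the roadmap above; the Job and Result shapes, the script body being embedded in the job, a 204 response meaning "no job available", and the completion endpoint (POST /api/v1/jobs/{id}/complete) are all assumptions that still need to be pinned down.

```go
// Package node: polling job loop sketch (endpoints partly hypothetical).
package node

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type Job struct {
	ID     int64  `json:"id"`
	File   string `json:"file"`
	Script string `json:"script"` // Lua source, assumed to be embedded in the job
}

type Result struct {
	Success    bool  `json:"success"`
	BytesSaved int64 `json:"bytes_saved"`
}

// Run polls for work, executes each job, and reports the outcome.
func Run(server string, execute func(Job) Result) error {
	for {
		resp, err := http.Get(server + "/api/v1/jobs/next")
		if err != nil {
			return err
		}
		if resp.StatusCode == http.StatusNoContent {
			resp.Body.Close()
			time.Sleep(10 * time.Second) // nothing to do yet
			continue
		}
		var job Job
		err = json.NewDecoder(resp.Body).Decode(&job)
		resp.Body.Close()
		if err != nil {
			return err
		}

		result := execute(job) // runs the Lua script against job.File

		body, _ := json.Marshal(result)
		url := fmt.Sprintf("%s/api/v1/jobs/%d/complete", server, job.ID)
		if _, err := http.Post(url, "application/json", bytes.NewReader(body)); err != nil {
			return err
		}
	}
}
```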
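
For progress reporting, a minimal sketch is below. It assumes FFmpeg is invoked with "-progress pipe:1 -nostats" so it emits key=value lines (out_time=..., progress=continue/end) on stdout; wiring this to the actual command is left to the FFmpeg integration step.

```go
// Package node: FFmpeg -progress output parsing sketch.
package node

import (
	"bufio"
	"io"
	"strings"
)

// ParseProgress reads FFmpeg's -progress output and calls report with the
// latest out_time value at the end of every progress block.
func ParseProgress(r io.Reader, report func(outTime string)) error {
	scanner := bufio.NewScanner(r)
	var outTime string
	for scanner.Scan() {
		key, value, ok := strings.Cut(scanner.Text(), "=")
		if !ok {
			continue
		}
		switch key {
		case "out_time":
			outTime = value
		case "progress":
			// "continue" closes a block; "end" marks completion.
			report(outTime)
		}
	}
	return scanner.Err()
}
```

In practice the reader would be the pipe returned by cmd.StdoutPipe() on the FFmpeg exec.Cmd.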

Phase 5: Web UI & Dashboard

  • Dashboard
    • Stats: Storage saved, Files processed, Active nodes.
  • Node Management
    • List connected nodes and their status/capabilities.
  • Script Editor
    • Code editor for Lua scripts.
    • (Optional) Block-based editor that compiles to Lua (probably easier to build this custom than to use Blockly).
  • Library Browser
    • View monitored files and their statuses.

Phase 6: Polish & Deployment

  • Containerization
    • Dockerfile for Server.
    • Dockerfile for Node (including FFmpeg dependencies).
  • Nix Flakes (note: the Node and Server flakes could possibly be combined into one).
    • Nix flake for Node.
    • Nix flake for Server.
  • Testing
    • Unit tests for Lua bindings; a test sketch follows this list.
    • Integration tests for Server-Node communication.
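
As an example of testing a binding in isolation, here is a minimal sketch of a unit test for probeMedia. It reuses the hypothetical NewScriptState helper sketched under Phase 3 and go-lua's DoString, and stubs the prober so the test never shells out to ffprobe.

```go
// Package node: unit test sketch for the probeMedia Lua binding.
package node

import (
	"testing"

	lua "github.com/Shopify/go-lua"
)

func TestProbeMediaBinding(t *testing.T) {
	// Stubbed prober: fails the test if called with an unexpected path.
	state := NewScriptState(func(path string) (string, error) {
		if path != "/library/movie.mkv" {
			t.Fatalf("unexpected path %q", path)
		}
		return "h264", nil
	})

	script := `
		local codec = probeMedia("/library/movie.mkv")
		assert(codec == "h264", "expected h264, got " .. tostring(codec))
	`
	if err := lua.DoString(state, script); err != nil {
		t.Fatalf("script failed: %v", err)
	}
}
```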