
Code Quality System

OPS

17 subsystems of real-time code quality enforcement. Deterministic rules, not prompt suggestions. The system that makes AI write code like a senior engineer.

OPS runs as a background process alongside your language server. It watches every file change, evaluates a comprehensive rule set against your entire project, and produces diagnostics that both the developer and the AI agent see simultaneously. Code that violates a rule does not pass. Not because you asked — because the system won't let it.

Real-time · Deterministic · Zero-config · AI-aware
Orbit Protocol Server

The problem

System prompts are suggestions. Models treat them as guidelines, not laws. The bigger the task, the more the model drifts from instructions.

claude.md says "don't add comments" → AI ignores it sometimes

claude.md says "use barrel exports" → AI forgets in file 4 of 10

claude.md says "max 200 lines per file" → AI writes 500 lines anyway

claude.md says "always handle errors" → AI skips try/catch when lazy

claude.md says "colocate tests" → AI puts tests in /tests folder

claude.md says "no nested ternaries" → AI nests three levels deep

OPS turns every one of these into a programmatic check that runs on every file change. The agent reads the diagnostic and must fix it before proceeding.

The enforcement loop

The loop runs on every file save. Developer and agent see the same violations, at the same time, from the same source. One system, two interfaces.

01 · Write: Agent writes code and saves the file.

02 · Detect: Gravity detects the change instantly.

03 · Evaluate: Perihelion evaluates all rules in parallel (< 100ms).

04 · Diagnose: Diagnostics appear in both the editor UI and the agent tool output.

05 · Fix: Burn applies auto-fixes, or the agent fixes manually.

06 · Repeat: Clean? Next file. Violations remain? Fix again (max 3 attempts, then escalate to a human).

17 subsystems

Each subsystem handles a specific responsibility. They work together in real-time, share data, and produce a unified view of your project's health.

Core Engine

OL · Perihelion · Linter

Rule evaluation engine. Evaluates all rules against code in under 100ms, parallelized. Everything else feeds data to Perihelion or acts on its output.

OX · Burn · Auto-Fix

Takes diagnostics and applies programmatic corrections — barrel exports, try/catch wrapping, import reordering, env var extraction.

OR · Ring · Rules

Loads .orbit/rules.json, manages framework presets, handles custom rules. Controls what Perihelion checks and at what severity.

OW · Gravity · Watcher

File system watcher. Detects changes, triggers re-evaluation, debounces rapid saves. The fundamental force that holds everything together.
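The debounce Gravity needs is easy to illustrate. A minimal sketch, assuming a fixed wait interval; the real watcher presumably layers this over file-system events:

```typescript
// Coalesce rapid saves into one re-evaluation: each new call cancels the
// pending one, so only the last save within the window actually fires.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // drop the earlier pending call
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Without this, an agent that saves a file ten times in two seconds would trigger ten full rule evaluations instead of one.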

Intelligence

OF · Nebula · Fingerprint

Scans your codebase on first open, detects existing patterns, and generates rules from what it finds. Zero configuration.

OA · Corona · Architecture

Builds the dependency graph. Detects circular dependencies, dead code, and layer violations across your entire project.
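Circular-dependency detection over such a graph is classic depth-first search with gray/black coloring. A sketch under assumed names, not Corona's actual implementation:

```typescript
// module -> modules it imports
type Graph = Record<string, string[]>;

function hasCycle(graph: Graph): boolean {
  const state = new Map<string, "visiting" | "done">();
  const visit = (node: string): boolean => {
    if (state.get(node) === "visiting") return true; // back edge: cycle found
    if (state.get(node) === "done") return false;    // already cleared
    state.set(node, "visiting");
    for (const dep of graph[node] ?? []) if (visit(dep)) return true;
    state.set(node, "done");
    return false;
  };
  return Object.keys(graph).some((node) => visit(node));
}
```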

OI · Trajectory · Impact

Given changed files, traces the dependency graph to show every file that will break and every test that needs running.
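The tracing step amounts to a breadth-first walk of the reverse dependency graph. A sketch with illustrative names, not the real Trajectory API:

```typescript
// file -> files it imports
type Imports = Record<string, string[]>;

function impactedBy(imports: Imports, changed: string[]): Set<string> {
  // Invert the graph: file -> files that import it.
  const importers = new Map<string, string[]>();
  for (const [file, deps] of Object.entries(imports))
    for (const dep of deps)
      importers.set(dep, [...(importers.get(dep) ?? []), file]);

  // Walk outward from the changed files, collecting transitive importers.
  const impacted = new Set<string>(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const file = queue.pop()!;
    for (const parent of importers.get(file) ?? [])
      if (!impacted.has(parent)) { impacted.add(parent); queue.push(parent); }
  }
  return impacted;
}
```

Everything in the returned set is a file that may break, and its colocated tests are the ones worth running.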

OD · Flux · Semantic Diff

Shows meaningful changes — new function, modified type, API shape change — instead of just lines added/removed.

Safety

OG · Eclipse · Regression Guard

Hashes code regions where bugs were fixed. Flags if any future change modifies a known fix. Nothing passes through the shadow of a previous fix.
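The hashing idea can be sketched in a few lines. This simplifies a "region" to a line range; the real guard presumably tracks regions more robustly across edits:

```typescript
import { createHash } from "node:crypto";

// Hash the code region where a bug was fixed (1-indexed, inclusive lines).
function regionHash(source: string, startLine: number, endLine: number): string {
  const region = source.split("\n").slice(startLine - 1, endLine).join("\n");
  return createHash("sha256").update(region).digest("hex");
}

// A later change touches a guarded fix if the region's hash no longer matches.
function touchesFix(stored: string, source: string, start: number, end: number): boolean {
  return regionHash(source, start, end) !== stored;
}
```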

ON · Reentry · Rollback

Creates file snapshots before agent tasks. One command restores to a clean state if something goes wrong.

OK · Shield · Security

Detects hardcoded secrets, eval usage, XSS patterns, SQL injection, CORS misconfiguration, and dependency vulnerabilities.

OE · Atmosphere · Environment

Checks that all referenced env vars exist in .env files. Detects config drift between environments. Validates types.
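The existence check is the easy half to sketch. Assumptions: references look like `process.env.NAME`, and .env parsing is reduced to `KEY=value` lines; the real validator is surely more thorough:

```typescript
// Collect process.env references from source text (regex-level, not AST-level).
function referencedEnvVars(source: string): Set<string> {
  const names = new Set<string>();
  for (const m of source.matchAll(/process\.env\.([A-Z0-9_]+)/g)) names.add(m[1]);
  return names;
}

// Report every referenced variable that the .env file does not define.
function missingEnvVars(source: string, envFile: string): string[] {
  const defined = new Set(
    envFile
      .split("\n")
      .map((line) => line.split("=")[0].trim())
      .filter((key) => key.length > 0 && !key.startsWith("#")),
  );
  return [...referencedEnvVars(source)].filter((name) => !defined.has(name));
}
```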

Verification

OS · Apogee · Score

Calculates a confidence score (0-100) after every agent task. Teams set auto-merge thresholds — above 90 merges automatically.
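One plausible shape for such a score: a weighted blend of pass/fail signals mapped to 0-100. The signal names and weights below are assumptions for illustration, not the real Apogee formula:

```typescript
// Each signal is a fraction in [0, 1] (e.g. share of tests passing).
type Signals = { testsPassed: number; lintClean: number; ciPredicted: number };

function confidenceScore(s: Signals): number {
  const blended = 0.5 * s.testsPassed + 0.3 * s.lintClean + 0.2 * s.ciPredicted;
  return Math.round(blended * 100);
}

// Teams gate auto-merge on a threshold; the doc's example threshold is 90.
function shouldAutoMerge(score: number, threshold = 90): boolean {
  return score >= threshold;
}
```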

OP · Horizon · CI Predict

Runs typecheck, lint, test, and build locally. Predicts CI pass/fail before you push. Green or red badge in your status bar.

OT · Probe · Tests

Enforces test discipline. No empty tests, no weakened assertions, no deleted tests. Optional TDD enforcement.

OB · Debris · Dependencies

Detects outdated, unused, duplicate, and vulnerable packages. Tracks bundle size impact of every change.

Learning

OM · Drift · Memory

Watches how developers correct agent output. After repeated corrections of the same pattern, suggests new rules. OPS gets smarter over time.
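The "repeated corrections" trigger can be sketched as simple frequency counting. The threshold and the idea of representing corrections as pattern keys are assumptions:

```typescript
// Suggest a new rule once the same correction pattern recurs enough times.
function suggestRules(corrections: string[], threshold = 3): string[] {
  const counts = new Map<string, number>();
  for (const pattern of corrections)
    counts.set(pattern, (counts.get(pattern) ?? 0) + 1);
  return [...counts.entries()]
    .filter(([, n]) => n >= threshold)
    .map(([pattern]) => pattern);
}
```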

OPS vs ESLint

ESLint checks code syntax and style within individual files. OPS checks everything else across your entire project. They're complementary.

ESLint covers:

Unused variables
Missing semicolons
Inconsistent quotes
Undefined references
Code style preferences
Some complexity rules

OPS covers:

File structure and organization
Export patterns (barrel enforcement)
Test existence per module
CI pipeline prediction
Security pattern detection
Dependency health
Cross-file relationship rules
Git workflow enforcement
Environment variable validation
Agent-specific anti-patterns
Architecture analysis
Change impact prediction

Built for AI discipline

These rules exist because AI models have specific, predictable bad habits that human developers don't typically share. ESLint was designed for humans. OPS is the first system designed to discipline AI code output.

no_function_dump: AI puts 10 functions in one file. Force separation.

no_god_file: AI creates utils.ts with 500 lines. Force splitting.

no_any_type: AI defaults to 'any' when lazy. Force proper typing.

no_obvious_comments: AI writes '// increment counter' above 'counter++'. Flagged.

no_nested_ternary: AI nests ternaries three levels deep. Force if/else.

error_handling_required: AI skips try/catch to save tokens. Enforce it.

no_hardcoded_values: AI hardcodes URLs and keys. Extract to env vars.

no_duplicate_logic: AI rewrites utilities instead of importing existing ones. Detect duplication.

barrel_exports_required: AI does direct imports from deep paths. Force barrel pattern.

test_required_for_exports: AI writes code but forgets tests. Flag missing tests immediately.
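The simplest of these, no_god_file, shows what "deterministic rule, not prompt suggestion" means in practice: a pure function over file contents. A sketch with an assumed threshold; real rules would be AST-based:

```typescript
type Violation = { rule: string; message: string };

// Deterministic check: a file over the line budget always produces a
// diagnostic, regardless of what any system prompt said.
function checkGodFile(path: string, source: string, maxLines = 300): Violation[] {
  const lines = source.split("\n").length;
  return lines > maxLines
    ? [{ rule: "no_god_file", message: `${path} has ${lines} lines (max ${maxLines})` }]
    : [];
}
```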

Zero configuration to start

Nebula scans your codebase on first open and generates rules from what it finds. No manual rule writing. Customize afterward if you want.

$ ops nebula

Scanning 234 files...

Detected patterns:
  Export style:       barrel (87% of directories have index.ts)
  Test location:      colocated (92% of tests next to source)
  File naming:        kebab-case (95% match)
  Component naming:   PascalCase (100% match)
  Function naming:    camelCase (98% match)
  Import order:       external → internal → relative (78% match)
  Avg file length:    180 lines
  Error handling:     try/catch in 60% of async functions
  Framework:          Next.js 15

Generated: .orbit/rules.json (41 rules, 38 auto-detected, 3 defaults)
Confidence: high — codebase is consistent

CLI

Every subsystem has three invocation styles: orbital name, descriptive name, and short code. Use whichever feels natural.

$ ops peri             # Run the linter (or: ops lint, ops ol)
$ ops burn --all       # Auto-fix all violations (or: ops fix, ops ox)
$ ops nebula           # Fingerprint codebase (or: ops fingerprint, ops of)
$ ops corona           # Architecture analysis (or: ops analyze, ops oa)
$ ops horizon          # CI pass/fail prediction (or: ops predict, ops op)
$ ops shield           # Security scan (or: ops lock, ops ok)
$ ops gravity --watch  # Start file watcher daemon (or: ops watch, ops ow)

Configuration

Nebula generates .orbit/rules.json automatically. Every rule has severity levels, exceptions, and auto-fix options.

.orbit/rules.json

{
  "extends": "nextjs",
  "structure": {
    "barrel_exports": { "enabled": true, "severity": "error", "auto_fix": true },
    "max_file_lines": { "enabled": true, "severity": "warning", "value": 300 },
    "one_component_per_file": { "enabled": true, "severity": "error" },
    "test_colocated": { "enabled": true, "severity": "warning" }
  },
  "quality": {
    "no_any": { "enabled": true, "severity": "error" },
    "error_handling": { "enabled": true, "severity": "error", "auto_fix": true },
    "no_hardcoded_values": { "enabled": true, "severity": "error" },
    "no_nested_ternary": { "enabled": true, "severity": "error", "auto_fix": true }
  },
  "security": {
    "no_secrets_in_code": { "enabled": true, "severity": "error" },
    "no_eval": { "enabled": true, "severity": "error" }
  },
  "ci_prediction": {
    "enabled": true,
    "checks": ["typecheck", "lint", "test", "build"],
    "run_on": "pre_commit"
  }
}
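A TypeScript shape for this file can be inferred from the example above. The types and the shallow validation below are a sketch based only on that sample; the real schema may differ:

```typescript
type Severity = "error" | "warning";

interface RuleConfig {
  enabled: boolean;
  severity: Severity;
  auto_fix?: boolean; // rules Burn can correct automatically
  value?: number;     // numeric rules, e.g. max_file_lines
}

interface OrbitRules {
  extends?: string; // framework preset name, e.g. "nextjs"
  structure?: Record<string, RuleConfig>;
  quality?: Record<string, RuleConfig>;
  security?: Record<string, RuleConfig>;
  ci_prediction?: { enabled: boolean; checks: string[]; run_on: string };
}

// Minimal loader: parse and do one shallow sanity check.
function parseRules(json: string): OrbitRules {
  const rules = JSON.parse(json) as OrbitRules;
  if (rules.extends !== undefined && typeof rules.extends !== "string")
    throw new Error("'extends' must be a preset name");
  return rules;
}
```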

Roadmap

Perihelion rule evaluation engine

Burn auto-fix engine

Ring rule configuration and presets

Gravity file watcher with debounce

Nebula codebase fingerprinting

Corona architecture analysis

Shield security scanning

Atmosphere environment validation

Probe test verification and TDD

AI discipline rules (10 anti-patterns)

LSP protocol integration

Framework presets (Next.js, React, Node)

Trajectory change impact prediction

Flux semantic diff analysis

Eclipse regression guard

Reentry rollback snapshots

Apogee confidence scoring

Horizon CI pass/fail prediction

Debris dependency health

Drift learning from corrections

Orbit IDE integration

VS Code extension

17 subsystems · 100+ rules · <100ms evaluation time · 0 configuration required

Coming soon

OPS is in active development. 17 subsystems, 100+ rules, zero-config setup. The system that makes AI write code like a senior engineer, not a junior intern.