Selected Work

I do architecture-heavy systems work where the constraints are real: adversarial environments, latency budgets, partial observability, secure deployment, or incentive misalignment. The public record spans privacy-preserving protocols, distributed ML in air-gapped settings, edge inference, and knowledge infrastructure built for actual use.

  • Sub-50ms operational inference
  • Distributed ML in secure environments
  • Multi-million-dollar protocol operations
  • Public capability platform architecture
  • MILP-optimized financial construction
  • Agent-coordinated audited research
  • Enterprise evidence-graph retrieval

Operating Areas

The work clusters around a few recurring operating domains, each shaped by non-ideal conditions rather than clean-room assumptions.

Protocol & Mechanism Design

Threat modeling, incentive design, proofs, privacy-preserving architecture, and systems that remain coherent under strategic behavior.

Distributed ML & Production Systems

Training orchestration, multi-GPU environments, deployment workflows, fault tolerance, and reproducible infrastructure in constrained settings.

Real-Time Detection & Edge Inference

Latency-sensitive systems, signal processing, sensor fusion, embedded deployment, and measurable performance on operational hardware.

Quantitative Systems & Mathematical Optimization

Constrained optimization, payoff engineering, combinatorial construction problems, deterministic reproducibility, and two-language architectures bridging quant engines with production web stacks.

Systems Architecture & Technical Leadership

Cross-domain integration, technical translation, architecture review, research-to-production handoff, and turning ambitious work into operational capability.

Collective Intelligence, Agent Coordination & Agentic Retrieval

Multi-agent research platforms, event-sourced context graphs, agentic retrieval planners, structured claim graphs, lease-based task orchestration, and knowledge systems where relevance is defined by causality and organizational state rather than vector similarity alone.

Selected Case Studies

These cases show the kinds of problems I am most drawn to: high consequences, real constraints, and very little room for conceptual sloppiness.

Infrastructure / Protocol

Adversarial Storage & Incentives

Role Protocol architect, technical leadership

Problem Design a ciphertext-only storage network where storage is an ongoing proof, not a one-time claim.

Built and led the production deployment of a ciphertext-only storage network where storage is proved over time rather than claimed once. The core challenge was not only cryptography, but incentive design: challenge generation, anti-precompute mechanics, long-horizon reputation, and tier mobility that rewards sustained performance instead of easy gaming.
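
A minimal sketch of the challenge-response shape this relies on (illustrative only; the actual protocol, commitment scheme, and parameters differ). The verifier issues a fresh nonce with each challenge, so a prover cannot precompute responses and discard the ciphertext:

```python
import hashlib
import os

def make_challenge(num_chunks: int) -> tuple[int, bytes]:
    """Verifier picks a random chunk index and a fresh nonce.
    The nonce makes the expected response unpredictable, which is
    the anti-precompute property: answers cannot be cached ahead."""
    return int.from_bytes(os.urandom(4), "big") % num_chunks, os.urandom(16)

def prove(chunks: list[bytes], index: int, nonce: bytes) -> bytes:
    """Prover must read the actual ciphertext chunk to respond."""
    return hashlib.sha256(nonce + chunks[index]).digest()

def verify(expected_chunk: bytes, nonce: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected digest and compares."""
    return hashlib.sha256(nonce + expected_chunk).digest() == response

# Toy run: here the verifier holds the chunks directly.
chunks = [os.urandom(64) for _ in range(8)]
idx, nonce = make_challenge(len(chunks))
assert verify(chunks[idx], nonce, prove(chunks, idx, nonce))
```

In a real deployment the verifier would hold compact commitments (e.g. Merkle roots) rather than the chunks themselves, and challenges repeat over time, which is what turns a one-time claim into ongoing proof.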

Constraints
Ciphertext-only, ongoing proof-of-spacetime, adversarial counterparties, incentive alignment over long horizons
Outcomes
~$7M avg MRR (~$60M revenue); anti-tamper verification and incentive system in production.
Defense / ML

Real-Time Underwater Detection

Role ML lead, architecture, edge deployment

Problem Build latency-sensitive underwater threat detection that works under operational conditions, not lab conditions.

Led hybrid DSP + ML architecture for underwater threat detection on Jetson edge hardware, optimized around operational detection metrics and latency rather than generic benchmark scores. The load-bearing innovation was not just model design, but synthetic-data and domain-adaptation workflows that made transfer from simulation to tactical deployment possible.
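
The hybrid DSP + ML shape can be sketched as a two-stage cascade (a simplified stand-in, not the deployed system): a cheap energy detector gates the expensive model so quiet frames never spend the latency budget, and every frame is checked against the deadline:

```python
import time

def energy_detect(frame: list[float], threshold: float = 0.1) -> bool:
    """Cheap DSP front end: mean-square energy over the frame."""
    return sum(x * x for x in frame) / len(frame) >= threshold

def classify(frame: list[float]) -> str:
    """Stand-in for the expensive ML model; returns a label."""
    return "contact" if max(frame) > 0.5 else "clutter"

def pipeline(frame: list[float], deadline_ms: float = 50.0) -> tuple[str, bool]:
    """Run the cascade and report whether the latency budget held."""
    start = time.perf_counter()
    label = classify(frame) if energy_detect(frame) else "quiet"
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return label, elapsed_ms <= deadline_ms

print(pipeline([0.0] * 256))  # quiet frame: the gate skips the model entirely
print(pipeline([0.9] * 256))  # energetic frame: the model runs
```

The design point is that the budget is enforced per frame, not averaged, which is what "sub-50ms operational" means in practice.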

Constraints
Sub-50ms latency on Jetson, operational environment noise, simulation-to-real domain gap
Outcomes
Sub-50ms inference on Jetson; helped secure ~$5M multi-year Navy follow-on; operational detection pipeline deployed.
Defense / Infrastructure

Secure Distributed ML Infrastructure

Role ML infrastructure architect, secure deployment

Problem Enable full-capability LLM fine-tuning and serving inside air-gapped, multi-GPU environments with strict approval requirements.

Built full distributed LLM fine-tuning capability inside air-gapped, multi-GPU environments with strict approval and handling requirements. In these contexts, software supply chain discipline matters as much as model training: every dependency, artifact, and operator workflow has to survive without the conveniences of public infrastructure.
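
The artifact discipline reduces to a simple invariant, sketched below with hypothetical artifact names: nothing crosses the boundary unless its digest matches a pre-approved manifest, so an unlisted or modified file is rejected regardless of how it arrived:

```python
import hashlib

# Hypothetical approval manifest: artifact name -> approved SHA-256 digest.
APPROVED = {
    "model-adapter.bin": hashlib.sha256(b"adapter-v1").hexdigest(),
    "tokenizer.json": hashlib.sha256(b"tokenizer-v3").hexdigest(),
}

def admit(name: str, payload: bytes) -> bool:
    """Admit an artifact only if its digest matches the manifest entry.
    Anything unlisted, or listed but altered, is refused at the boundary."""
    expected = APPROVED.get(name)
    return expected is not None and hashlib.sha256(payload).hexdigest() == expected

assert admit("model-adapter.bin", b"adapter-v1")
assert not admit("model-adapter.bin", b"adapter-v1-tampered")
assert not admit("unvetted-tool.whl", b"anything")
```

The real workflow layers approval signatures and operator procedures on top, but the hash-against-manifest check is the load-bearing step when public package indexes are unavailable.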

Constraints
TS/SCI SCIF environment, air-gapped, no external dependencies, strict artifact approval
Outcomes
128-GPU distributed fine-tuning operational in classified environment; 65% cost reduction via quantized adapters.
Defense / ML

RF Classification & Streaming ML

Role ML engineer, model development, production deployment

Problem Improve RF modulation classification accuracy and deploy into streaming production.

Improved RF modulation classification and helped deploy models into streaming production environments spanning R, Python, and C++. The work combined model development with the less glamorous but decisive layers of interfaces, APIs, and operational compatibility.

Constraints
Multi-language production stack (R, Python, C++), streaming latency requirements, spectral domain
Outcomes
+12pp accuracy improvement; models deployed to production streaming pipeline.
Quantitative Finance / Optimization

Structure Lab (GEX)

Role Architect, full-stack, quant engine design

Problem Options structure construction is a combinatorially explosive problem. Traders navigate it by heuristic — choosing familiar patterns rather than systematically optimizing for cost, robustness, and simplicity.

Built a payoff-engineering platform that replaces heuristic structure selection with mixed-integer linear programming (MILP). The user describes a target payoff shape — floor, cap, horizon, symbol — and the system constructs the optimal multi-leg option structures via branch-and-bound optimization over the listed chain. Three solver passes with different objective weight profiles explore the Pareto frontier, a multi-dimensional scorer evaluates each solution across cost, simplicity, robustness, and liquidity, and a selector returns exactly three distinct candidates: cheapest, simplest, most robust.
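
The production engine hands this to OR-Tools CBC; the sketch below substitutes exhaustive search over small integer quantities to show the same shape of problem. All legs, prices, and the target payoff are hypothetical, and the grid is shrunk to five points:

```python
from itertools import product

# Hypothetical legs: (cost per contract, payoff at each of 5 grid points).
LEGS = [
    (1.0, [0, 0, 1, 2, 3]),       # long call-like payoff
    (0.6, [3, 2, 1, 0, 0]),       # long put-like payoff
    (-0.8, [0, -1, -2, -3, -4]),  # short call-like payoff (credit)
]
TARGET = [3, 2, 2, 2, 3]          # desired payoff shape across the grid

def best_structure(max_qty: int = 3, tol: float = 1.0):
    """Search integer leg quantities (the CBC solver's job in production)
    for the cheapest structure whose payoff stays within `tol` of the
    target at every grid point."""
    best = None
    for qtys in product(range(max_qty + 1), repeat=len(LEGS)):
        payoff = [sum(q * leg[1][i] for q, leg in zip(qtys, LEGS))
                  for i in range(len(TARGET))]
        if all(abs(p - t) <= tol for p, t in zip(payoff, TARGET)):
            cost = sum(q * leg[0] for q, leg in zip(qtys, LEGS))
            if best is None or cost < best[0]:
                best = (cost, qtys)
    return best

print(best_structure())  # → (1.6, (1, 1, 0))
```

Brute force collapses at real chain sizes, which is why the actual system uses branch-and-bound over a MILP formulation; the multiple solver passes and the cost/simplicity/robustness scorer then sit on top of this feasibility core.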

Constraints
Integer contract quantities (MILP), 10-second solve budget per pass, deterministic reproducibility, two-language split (TypeScript web stack + Python quant engine), piecewise-linear payoff matching across 9 grid points
Stack
Next.js 15 / React 19, Express, FastAPI, OR-Tools CBC solver, PostgreSQL, Docker Compose, Turborepo monorepo
Outcomes
3–15s full pipeline (chain generation → optimization → scoring → selection); 35 quant engine tests; 2 templates live (Phase 1), 8 planned; deterministic version-stamped outputs
Agent Infrastructure / Research Platform

SwarmOS

Role Architect, platform design, full-stack

Problem Modern research is bottlenecked by fragmented evidence, combinatorial search spaces, reproducibility breakdowns, and human bandwidth limits. Existing AI tools amplify noise without enforcing provenance or verification.

Built a collective intelligence platform that coordinates specialized AI agents to perform continuous, audited scientific research. The core design bet is substrate quality over agent quantity: every artifact is content-addressed with lineage tracking, every claim lives in a structured evidence graph with counterevidence and calibrated confidence, and every computational run produces a deterministic reproduction bundle. Agents operate within explicit compute/cost/risk budgets, acquire tasks via exclusive leases with heartbeating, and must continuously outperform baselines to remain active. A separation-of-duties protocol ensures proposers cannot verify their own claims — independent critics, replicators, and arbiters enforce epistemic discipline.
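
The lease mechanics can be reduced to a small state machine (a toy sketch; the platform does this over PostgreSQL with real clocks and fault handling): a task is exclusively held while its lease is fresh, heartbeats extend the lease, and a missed heartbeat returns the task to the pool:

```python
class LeaseQueue:
    """Toy lease-based task queue: exclusive holds with heartbeat renewal."""

    def __init__(self, lease_seconds: float):
        self.lease_seconds = lease_seconds
        self.tasks: dict[str, tuple[str, float] | None] = {}

    def add(self, task_id: str) -> None:
        self.tasks[task_id] = None  # unheld

    def acquire(self, task_id: str, agent: str, now: float) -> bool:
        held = self.tasks.get(task_id)
        if held is not None and held[1] > now:
            return False  # another agent holds a live lease
        self.tasks[task_id] = (agent, now + self.lease_seconds)
        return True

    def heartbeat(self, task_id: str, agent: str, now: float) -> bool:
        held = self.tasks.get(task_id)
        if held is None or held[0] != agent or held[1] <= now:
            return False  # lease lost; the agent must re-acquire
        self.tasks[task_id] = (agent, now + self.lease_seconds)
        return True

q = LeaseQueue(lease_seconds=5.0)
q.add("replicate-claim-17")
assert q.acquire("replicate-claim-17", "agent-a", now=0.0)
assert not q.acquire("replicate-claim-17", "agent-b", now=2.0)  # lease live
assert q.heartbeat("replicate-claim-17", "agent-a", now=4.0)    # extends to 9.0
assert q.acquire("replicate-claim-17", "agent-b", now=10.0)     # lease expired
```

Expiry-based handoff is what gives the system fault tolerance without a coordinator: a crashed agent simply stops heartbeating and its work becomes claimable again.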

Constraints
Content-addressed deduplication, structured claim graphs with verification grades, separation of duties (proposer ≠ verifier ≠ integrator), budgeted autonomy with human approval gates, lease-based task assignment with fault tolerance
Stack
TypeScript monorepo (pnpm + Turborepo), Fastify, Drizzle ORM, PostgreSQL (pgvector), Redis Streams (CloudEvents), MinIO/S3, JWT/RBAC/ABAC, 13 packages, 30+ API endpoints
Outcomes
13-package service-oriented architecture; 30+ authenticated API endpoints; full artifact lifecycle with multipart upload; claim graph with evidence/counterevidence/verification; task state machine with retry and approval gates; agent registry with lease management

Active development — see the dedicated SwarmOS page.

Enterprise Infrastructure / Retrieval Engine

Agentic Data

Role Architect, platform design, full-stack

Problem Enterprise knowledge is scattered across dozens of systems. The connections between artifacts — this PR caused that incident, which led to this RCA, which changed this decision — live only in people's heads and disappear when they leave.

Built an enterprise context graph and agentic retrieval engine that captures institutional knowledge as typed, timestamped, permissioned Context Objects linked by 35 edge types spanning causal, decisional, lifecycle, and structural relationships. Instead of top-k cosine similarity, the retrieval planner runs an iterative control loop: interpret intent, select strategy (causal chain, decision rationale, impact analysis), search with hybrid lexical + dense RRF fusion, expand through the graph via multi-hop typed edges, rerank by five-component sufficiency scoring (coverage, recency, authority, diversity, completeness), and stop when the evidence threshold is met or the budget is exhausted. A self-correcting canonical memory layer maintains truth through supersession chains, contradiction detection, and reconsolidation loops.
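
The lexical + dense fusion step is standard reciprocal rank fusion; a minimal sketch with hypothetical document IDs shows the mechanic the planner builds on before graph expansion and sufficiency scoring:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: each input ranking contributes
    1 / (k + rank) per document, so documents ranked well by either
    the lexical or the dense retriever rise to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results: lexical (BM25-style) vs dense (vector) rankings.
lexical = ["rca-42", "pr-913", "incident-7"]
dense = ["incident-7", "rca-42", "decision-3"]
print(rrf_fuse([lexical, dense]))
# → ['rca-42', 'incident-7', 'pr-913', 'decision-3']
```

Fusion by rank rather than by raw score sidesteps the incomparable score scales of lexical and dense retrievers, which is why RRF is the usual choice for hybrid search.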

Constraints
Four-store architecture (each for what it does best), 35 typed edge taxonomy with confidence scoring, budget-aware retrieval with sufficiency verification, self-correcting canonical memory with truth maintenance, ACL-filtered graph traversals, multi-resolution memory (raw → episodic → canonical)
Stack
Python / FastAPI / SQLAlchemy 2 (async), PostgreSQL (21 tables), OpenSearch (KNN + lexical), Neo4j (Cypher graph traversals), S3/MinIO, JWT/RBAC, Docker Compose, 315 tests
Outcomes
27 API endpoints; 21-table relational schema; 35 typed edge taxonomy; iterative retrieval planner with 5 intent-driven strategies; hybrid search with RRF fusion; multi-hop graph expansion; truth maintenance with supersession and reconsolidation; 315 passing tests; GitHub/Jira/PagerDuty connectors

Active development — see the dedicated Agentic Data page.

Public Initiative / Platform

Capability Commons

Role Founder, architect, full-stack platform build

Problem Practical knowledge is trapped behind institutions, jargon, and credential barriers.

Designed and built a Postgres-first knowledge platform for practical public capability. The hard problem is not storing information. It is structuring it so a beginner can move from immediate need to reproducible skill, in context, without already knowing the jargon.
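
The typed-edge structure can be sketched in a few lines (object names, edge kinds, and provenance strings here are hypothetical, and the real system stores these as versioned rows in Postgres):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A typed, provenance-carrying link between knowledge objects."""
    src: str
    dst: str
    kind: str    # e.g. "prerequisite_of", "variant_of"
    source: str  # where the connection was asserted (provenance)

edges = [
    Edge("hone-a-blade", "sharpen-a-knife", "prerequisite_of", "editor:review-12"),
    Edge("sharpen-a-knife", "sharpen-a-chisel", "variant_of", "import:field-guide"),
]

def neighbors(obj: str, kind: str) -> list[str]:
    """Follow only edges of one type: the query a beginner-facing UI
    can use to answer 'what should I learn before this'."""
    return [e.src for e in edges if e.dst == obj and e.kind == kind]

print(neighbors("sharpen-a-knife", "prerequisite_of"))  # → ['hone-a-blade']
```

Typing the edges is what lets retrieval answer in terms a beginner actually has ("what comes first", "what is this like") instead of returning a flat pile of related pages.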

Constraints
Open by default, Postgres-first, versioned knowledge objects, typed edges with provenance, barrier-lowering by design
Outcomes
49 knowledge objects, 175 typed edges, 167+ context facets, 7 domains, working API with CRUD/graph/search/retrieval.

Active development — see the dedicated Capability Commons page.

How I Work

I am best suited to problems where architecture matters more than headcount and where clarity matters more than performative velocity. That usually means protocol and threat-model analysis, ML systems architecture, knowledge-platform design, technical synthesis, and selective leadership where a team needs a sharper map of the problem before it needs more code.

Engagement Fit

Get in touch

If the problem is technically real, constraint-heavy, and worth thinking through properly, reach out with the problem, the constraints, and the desired outcome.
