@josephwecker
Independent researcher on agent architecture and AI safety. 30+ years startups & ML engineering; experience from: CTO Angel Studios, founding-era Twitch, founder Samaritan Technologies.
https://v2.io
I've been a full-time independent researcher on AI safety and alignment since August 2025, with no institutional affiliation and no commercial AI interest; I come to the field from a startup and ML-engineering background rather than academia. Before that I was CTO at Angel Studios through a Series-A growth-and-litigation period. Earlier in my career I founded Exsig, where I wrote the original ML algorithms for real-time foreign-exchange trading agents (online evolutionary steady-state random forests and robust EMD innovations) running on an Erlang/Elixir/C HPC real-time production stack. Before Exsig I was one of two senior engineers on the founding team that branched from Justin.tv to start Twitch.tv, where I designed the channel-subscription and revenue-share model now standard across major streaming platforms. Finally, the first ~12 years of my career were spent founding and running Samaritan Technologies, a volunteer-coordination platform now used globally by disaster-response organizations, hospitals, and NGOs.
Right now I'm building, and hope to continue full-time on, the Agentic Systems Framework (https://doi.org/10.5281/zenodo.19986312 | https://github.com/v2-io/agentic-systems): a formal architectural theory of agents under uncertainty, written in answer to the IBM Research ICML 2025 position paper "Agentic AI Needs a System Theory." It integrates control theory (Lyapunov stability, contraction analysis), causal inference (Pearl's hierarchy, identifiability), and information theory under a common formalism.
The findings most directly relevant to AI safety so far include a Lyapunov-survival exploration drive that resolves the active-inference dark-room problem, a Pearl-hierarchy structural ceiling on what pre-deployment sandbox evaluation can verify, and a derivation showing that modular safety architectures fail under goal divergence. Each theory segment carries explicit epistemic-status tags, and the public FINDINGS catalog (https://github.com/v2-io/agentic-systems/blob/main/msc/FINDINGS-RANKED-DRAFT.md) currently lists 14 Tier-1 findings.
Currently under review: a TACL submission, an Anthropic Fellowship application, and three ASF-derived NeurIPS 2026 papers.