Rapid-response research on long-tail bottlenecks.
Ashiba Research is a rapid-response research lab focused on long-tail bottlenecks.
Ashiba is currently focused on the canonical legal-and-technical infrastructure for AI in physical-world deep tech — the contracts that license worker data, the ontologies that type industrial failures, and especially the verification protocols that catch silicon errors.
The bet: in 2026-2030, this infrastructure is the rate-limit on AI's deployment into the physical economy. Compute scales. Models scale. Algorithms generalize. What does not scale by itself is the legal and ontological substrate that lets AI safely train on, deploy into, and integrate with the long tail of physical work without lawyers getting involved.
The long tail of real-world technical work is not a single ontology. It is thousands of small ones — silicon failure modes, industrial-interop primitives, license terms, evaluation rubrics, vertical buyer maps, standards-body vocabularies. Each rebuilt privately and badly across every company that touches the work. Compressed, named, and standardized, they become substrate AI can train on, integrate with, and ship inside. Each Ashiba program is one piece of that substrate.
The verification and attribution layer for ML kernels across heterogeneous silicon. Eight-part contract object across twelve failure classes. Open-source reference verifier (ashiba-verify) implementing Freivalds' 1979 algorithm at sub-1% overhead on NVIDIA H100 and AMD MI300X. Engagements designed for 48-hour turnaround on production numerical incidents.
The compressible abstraction: the contract object. Every kernel ships with an implicit contract about what it computes — numerical tolerance, determinism, shape limits, composition semantics. Until now, those contracts have never been written down. Kernel Contracts is the language that closes that gap, and the certification artifact silicon vendors and frontier training operators need in order to share evidence with each other.
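ashiba-verify's actual interface is not shown in this text; the following is a minimal NumPy sketch of the Freivalds check the program is built on, verifying a matrix product without recomputing it. The function and parameter names are illustrative, with `rtol`/`atol` standing in for the contract object's numerical-tolerance field.

```python
import numpy as np

def freivalds_check(A, B, C, trials=16, rtol=1e-5, atol=1e-6):
    """Probabilistically verify C ≈ A @ B without recomputing the product.

    Each trial multiplies by a random 0/1 vector x and compares A @ (B @ x)
    against C @ x: O(n^2) work per trial versus O(n^3) for the full matmul,
    which is why the overhead can be kept tiny. Over exact arithmetic, a
    wrong C escapes detection with probability at most 2**-trials.
    """
    n = B.shape[1]
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x = rng.integers(0, 2, size=(n, 1)).astype(A.dtype)
        # Associate right-to-left: two matrix-vector products, never A @ B.
        if not np.allclose(A @ (B @ x), C @ x, rtol=rtol, atol=atol):
            return False
    return True
```

The tolerance arguments matter: on real accelerators the reference and the kernel under test accumulate in different orders, so the check must accept the contract's stated tolerance rather than demand bitwise equality.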
The standard clearance contract for licensing operational AI data assets. It treats raw workflow data as contaminated until cleared, separates the seven rights buyers usually try to bundle (access, evaluation, training, retention, resale, updates, exclusivity), and produces a buyer-readable Passport for every cleared asset.
The compressible abstraction: the LADDER Passport. A single legible record that turns thousands of bespoke license negotiations — each one a lawyer project — into a flow buyers and contributors can both read in five minutes. License what you can show. Reserve what you know.
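The Passport's published format is not reproduced in this text; as a sketch of the unbundling idea, the seven rights named above can be typed so that every right must be explicitly granted or reserved — nothing is implied, matching the "contaminated until cleared" default. Class and field names here are illustrative, not LADDER's.

```python
from dataclasses import dataclass
from enum import Enum

class Right(Enum):
    """The seven rights LADDER separates instead of letting buyers bundle."""
    ACCESS = "access"
    EVALUATION = "evaluation"
    TRAINING = "training"
    RETENTION = "retention"
    RESALE = "resale"
    UPDATES = "updates"
    EXCLUSIVITY = "exclusivity"

@dataclass(frozen=True)
class Passport:
    """A buyer-readable record for one cleared asset (hypothetical schema)."""
    asset_id: str
    granted: frozenset   # rights the contributor licenses out
    reserved: frozenset  # rights the contributor keeps

    def __post_init__(self):
        # A Passport is only legible if the two sets partition all seven
        # rights: no overlap, and no right left silently unaddressed.
        overlap = self.granted & self.reserved
        if overlap:
            raise ValueError(f"rights both granted and reserved: {overlap}")
        missing = set(Right) - self.granted - self.reserved
        if missing:
            raise ValueError(f"unaddressed rights: {missing}")

    def grants(self, right: Right) -> bool:
        return right in self.granted
```

Forcing the partition is the point: a buyer can read the `granted` set in seconds, and a contributor cannot accidentally leave a right in limbo.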
The operator-facing program. Live at deeptechtools.com: a free library of buyer maps, funder maps, and operator primers, plus The Guide to AI for Deep Tech Operators (June 2026). Sibling artifact: ProbSpec — the typed failure ontology for brownfield industrial interop, twelve named failure classes, supplier-reliability schema, recovery playbooks.
The compressible abstraction: the operator playbook. Rights-clean workflow data, ontologized failure modes, scoreable judgment under constraints, standards-body engagement maps. Most deep-tech founders treat AI as something the engineering team builds; the companies that win run AI as the substrate every function depends on.
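ProbSpec's twelve failure classes and its schema are not enumerated in this text, so the following is only a toy sketch of the shape such a typed ontology enables: once failures are typed records rather than prose tickets, a supplier-reliability score falls out of a few lines. All names, including the example failure classes, are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureRecord:
    """One typed interop failure. Field names are illustrative, not ProbSpec's."""
    failure_class: str  # one of the twelve named classes (not listed in this text)
    supplier: str
    site: str
    recovered: bool     # did the matching recovery playbook resolve it?

def supplier_reliability(records):
    """Per-supplier recovery rate — a toy stand-in for a reliability schema."""
    totals, recovered = Counter(), Counter()
    for r in records:
        totals[r.supplier] += 1
        recovered[r.supplier] += r.recovered
    return {s: recovered[s] / totals[s] for s in totals}
```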
High-alpha GTM data tools and methodology for cracking long-tail deep-tech markets. The thesis: in a market that is a thousand small verticals, each with its own elites and standards and counter-positions, the team that wins is the one whose forward-deployed researcher already lived inside the vertical six months before anyone could have known to point them there. Sales as research. Researchers as sales. Always Be Collecting Dots so you can Always Be Connecting Dots.
The compressible abstraction: the forward-deployed researcher. The lab's brain in the field, bringing back the gradient that decides which environments the lab should optimize against. Whoever wins the right envs wins.
Ashiba is small on purpose. The work is calibrated to questions that have a clock, where every marginal hour spent on framing-by-committee is an hour the failure mode keeps recurring in production. Engagements ship in hours, not quarters. Papers are published when they are useful, not when the cycle permits.
We are neither a foundation lab nor a consulting shop. The position in between is intentional: independent applied research, with the publication record of a lab and the response time of an operator or an elite newsroom.