Open, inspectable AI infrastructure for safety-science workflows.
The public systems show the implementation style: MCP-native tool boundaries, evidence bundles, replayable runs, and review gates instead of opaque outputs.
Built in public. Governed by default.
Our open-source work puts this approach into practice: typed interfaces, explicit boundaries, provenance-rich outputs, and review gates around safety-science workflows.
ToxMCP Suite
PUBLIC INFRASTRUCTURE
A suite of guardrailed MCP servers across computational toxicology, exposure science, mechanistic reasoning, QSAR workflows, ADMET utilities, and kinetic modeling. The point is not a single black-box answer, but modular evidence surfaces that can be inspected and orchestrated.
O-QT — OECD QSAR Toolbox AI Assistant
PUBLISHED SYSTEM
A multi-agent AI system that connects to a local OECD QSAR Toolbox installation to support chemical evidence retrieval, QSAR Toolbox interaction, read-across workflow preparation, and auditable report drafting.
Open by Design. Governed by Default.
Many AI safety platforms ship as closed SaaS products. That can be useful for simple tasks, but evidence-heavy teams often need infrastructure they can inspect, adapt, and govern.
Open Protocol
Built on the Model Context Protocol (MCP) — an open standard. Not a walled garden. Your tools connect to any MCP-compatible agent framework.
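As a rough illustration, an MCP-native tool boundary can be a few lines of Python with the official MCP SDK. Everything below is hypothetical: the server name, the tool, and its stubbed lookup are for illustration only, not part of the published ToxMCP servers.

```python
# Minimal sketch of an MCP tool server using the official Python MCP SDK
# (pip install mcp). Server name, tool, and data are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tox-tools")  # hypothetical server name

@mcp.tool()
def resolve_cas(chemical_name: str) -> str:
    """Resolve a chemical name to a CAS number (stubbed for illustration)."""
    # A real tool would query a curated source and attach provenance.
    known = {"formaldehyde": "50-00-0"}
    return known.get(chemical_name.lower(), "not found")

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio; any MCP-compatible client can connect
```

Because the interface is the open protocol rather than a proprietary API, the same server works with any MCP-compatible client.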
Self-Hosted by Default
Design deployments around your cloud, on-premise, or controlled research environment so sensitive workflows do not have to move into a generic SaaS box.
Inspect Everything
Capture tool calls, parameters, source references, assumptions, and review outcomes so a run can be inspected instead of merely trusted.
No Vendor Lock-in
MCP keeps interfaces portable, and public building blocks use permissive licenses where appropriate so teams can inspect, adapt, and integrate.
Open-Science Context
Informed by public open-science work and community interactions, including contributions made during VHP4Safety, at hackathons, and in toxicology community discussions.
Governed, Not Gated
Maturity labels, human review gates, and scientific bounds on every module. When evidence is weak, the system says so — honestly.
While others offer closed platforms with opaque models, we provide auditable infrastructure you control. Our work is public on GitHub. Our protocol is open. Our commitment is to transparency.
Inspectable Infrastructure, Not Opaque Outputs
The difference is control: keep your tools, preserve your review process, and make each run traceable enough for scientific scrutiny.
Typical Closed Platforms
Fast to demo, harder to govern
- Data leaves your controlled environment by default
- Outputs arrive without enough provenance to inspect
- Workflows depend on proprietary formats and APIs
- Hard to adapt around internal SOPs and scientific tools
- Report generation without review gates or replay
in4r.ai — Inspectable Infrastructure
Built around your operating model
- Deployment choice: your cloud, on-prem, or controlled environment
- Inspectable runs: tool calls, parameters, sources, and reviews captured
- Open protocol: MCP-native interfaces instead of proprietary glue
- Open building blocks: public Apache-2.0 infrastructure where appropriate
- Evidence bundles: source links, assumptions, and review status travel with the output
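To make "evidence bundles" concrete, here is a hedged sketch of the kind of record that can travel with an output. The field names and values are illustrative, not a published schema.

```python
# Hedged sketch of an evidence bundle; all field names are illustrative.
from dataclasses import dataclass

@dataclass
class EvidenceBundle:
    claim: str                      # the statement the evidence supports
    sources: list[str]              # links or identifiers for each source
    assumptions: list[str]          # assumptions made along the way
    confidence: str                 # e.g. "high" / "medium" / "low"
    review_status: str = "pending"  # pending -> approved or escalated

bundle = EvidenceBundle(
    claim="Substance X is a skin sensitizer (read-across)",
    sources=["https://example.org/study/123"],  # placeholder source link
    assumptions=["analogue Y is structurally representative"],
    confidence="medium",
)
```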
From Question to Evidence — Every Step Inspectable
Intake
Questions and scenarios enter the workflow with structured context capture.
Context
Domain logic, SOPs, and prior evidence frame the problem space.
Orchestrate
Tools, models, and data sources execute in a defined, logged sequence.
Evidence
Outputs are bundled with sources, confidence markers, and reasoning trails.
Review
Human experts inspect, approve, or escalate before any decision is finalized.
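Read as code, the flow above is a straight pipeline with a human gate at the end. The sketch below is a hypothetical skeleton; none of these function names come from the shipped orchestration.

```python
# Hypothetical skeleton of the five-stage flow; every name is a stand-in.
def intake(question: str) -> dict:
    return {"question": question}                          # structured capture

def add_context(case: dict, sops: list[str]) -> dict:
    return {**case, "sops": sops}                          # SOPs, prior evidence

def orchestrate(case: dict) -> dict:
    case["tool_calls"] = [{"tool": "stub", "params": {}}]  # defined, logged sequence
    return case

def bundle_evidence(case: dict) -> dict:
    case["evidence"] = {"sources": [], "confidence": "low"}
    return case

def review(case: dict) -> dict:
    case["review_status"] = "pending"                      # a human approves or escalates
    return case

result = review(bundle_evidence(orchestrate(add_context(intake("Is X genotoxic?"), ["SOP-12"]))))
```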
Every Tool Call Is Logged
Our MCP servers emit structured audit logs for every tool invocation: query parameters, source APIs, confidence scores, conflict flags, and human review gate status. Every output is traceable to its source. Every workflow step and generated claim is reviewable.
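For a sense of shape, a single audit record might look like the Python dict below. The keys mirror the fields named above; the values are invented.

```python
# Illustrative audit record for one tool invocation; all values invented.
audit_record = {
    "timestamp": "2025-01-01T12:00:00Z",
    "tool": "resolve_cas",                        # which tool was invoked
    "params": {"chemical_name": "formaldehyde"},  # query parameters
    "source_api": "https://example.org/chemdb",   # placeholder source API
    "confidence": 0.92,                           # confidence score
    "conflict_flags": [],                         # disagreements across sources
    "review_gate": "pending",                     # human review gate status
}
```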
Powerful tools need explicit boundaries.
MCP can expose valuable scientific tools and data sources. in4r treats that power as something to govern: private deployment, least-privilege access, allowlisted tools, audit logs, and review gates before action.
This is a security posture for pilot and implementation work, not a claim of formal certification.
How are MCP workflows bounded?
Tools are allowlisted, scoped to the pilot workflow, and wrapped with schema validation so agents cannot freely call arbitrary systems.
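A minimal sketch of that guard, assuming a simple type-based schema; the tool names, schema format, and registry are illustrative, not the deployed validation layer.

```python
# Sketch of an allowlist plus schema check in front of tool dispatch.
ALLOWLIST = {"resolve_cas", "predict_logp"}        # illustrative tool names
SCHEMAS = {"resolve_cas": {"chemical_name": str}}  # expected parameter types

def guarded_call(tool: str, params: dict, registry: dict):
    if tool not in ALLOWLIST:
        raise PermissionError(f"{tool!r} is not allowlisted for this pilot")
    for key, expected_type in SCHEMAS.get(tool, {}).items():
        if not isinstance(params.get(key), expected_type):
            raise ValueError(f"{key!r} failed schema validation for {tool!r}")
    return registry[tool](**params)  # reached only after both checks pass

# Usage with a stub registry:
# guarded_call("resolve_cas", {"chemical_name": "formaldehyde"},
#              {"resolve_cas": lambda chemical_name: "50-00-0"})
```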
Where does sensitive work run?
Deployment is designed around the client environment: private cloud, on-premise, or controlled research infrastructure where appropriate.
How does expert control stay visible?
Human review gates, tool-call logs, source trails, assumptions, and review outcomes are captured so the workflow can be inspected and replayed.
What is reviewed before scaling?
Each pilot includes a client-specific security and governance review covering credentials, data boundaries, failure modes, and escalation rules.
Start with one case-study pilot or advisory retainer.
Bring one safety-science or research case study. We will define the pilot boundary, consulting scope, evidence sources, review gates, and deployment path before scaling anything. We are opening pilot and design-partner conversations for teams operationalizing AI in review-heavy scientific workflows.