Guided learning paths
Orient new teams with short, opinionated paths: first governed run, first promoted claim, first publish bundle—each with checkpoints that match how R&D actually reviews science.
For humans
External agents may execute the work, but people set objectives, interpret claims, and sign the compliance narrative. These guides help researchers, reviewers, and operators collaborate without losing the thread from raw data to external-grade artifacts.
Understand how longitudinal programs map to campaigns and runs so program managers, lab leads, and platform owners share one vocabulary for scheduling, budgets, and completion.
Learn to explain Truth Dial tiers, negative controls, and ledger hashes to partners who will never read your notebooks—but who must trust your evidence chain.
Task-focused recipes for common scenarios: importing instrument data, comparing discovery modes, exporting replay recipes, and attaching claims to downstream systems.
Your first interaction with ARDA typically follows a predictable arc: create a project, attach a dataset, run a governed discovery pass, and review the resulting claims. This section walks through each step so you can orient quickly before branching into more specialized workflows.
A project is the top-level container for your research. It defines the scope, the team membership, and the governance configuration that applies to all campaigns and runs within it. When you create a project, you also select the default autonomy policies that will govern how automated systems interact with your data—these can be refined per-campaign later.
Once a project exists, you attach one or more datasets. ARDA supports structured tabular data, time-series streams, and semi-structured formats. During attachment, the platform runs an initial profiling pass that characterizes the data's shape, coverage, and potential quality issues. This profile informs which discovery modes are viable and what constraints the symbolic layer can enforce.
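To make the profiling pass concrete, here is a minimal sketch of what such a pass might compute for tabular data: per-column coverage (fraction of non-missing values) and cardinality. The `ColumnProfile` type and `profile` function are illustrative assumptions, not ARDA's actual API; a real profiler would also infer types, ranges, and temporal coverage.

```python
from dataclasses import dataclass

@dataclass
class ColumnProfile:
    name: str
    coverage: float   # fraction of non-missing values
    distinct: int     # cardinality, a rough shape signal

def profile(rows: list[dict]) -> dict[str, ColumnProfile]:
    """Characterize per-column coverage over a list of records.

    Missing values are represented as None. This is a hypothetical
    sketch of the kind of quality assessment described above.
    """
    columns = {key for row in rows for key in row}
    profiles = {}
    for col in sorted(columns):
        values = [row.get(col) for row in rows]
        present = [v for v in values if v is not None]
        profiles[col] = ColumnProfile(
            name=col,
            coverage=len(present) / len(values) if values else 0.0,
            distinct=len(set(present)),
        )
    return profiles

# A coverage gap in "assay_result" would surface here, before any discovery run.
report = profile([
    {"compound_id": "C-1", "assay_result": 0.42},
    {"compound_id": "C-2", "assay_result": None},
    {"compound_id": "C-3", "assay_result": 0.77},
])
```

A profile like this is what lets the platform judge which discovery modes are viable: a column with low coverage cannot anchor a symbolic constraint, and sparse time coverage rules out narrow causal windows.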

A run is a single execution of a discovery mode against a dataset within the context of a campaign. Governed runs differ from ad-hoc analysis because every step is recorded in the Evidence Ledger and every resulting claim enters the Truth Dial at the Explore tier.
Select the mode that matches your research question. If you have strong domain priors and structured data, Symbolic discovery lets you encode those priors as constraints. If the data is high-dimensional and the answer structure is unknown, Neural discovery is a better starting point. Neuro-Symbolic combines both approaches, and Causal mode (powered by CDE) is appropriate when you need directional causal claims rather than correlational patterns.
Each mode accepts parameters specific to its methodology. Symbolic runs accept constraint sets and ontology references. Neural runs accept embedding configurations and similarity thresholds. Causal runs accept temporal windows and intervention specifications. The platform validates parameters before launch and warns if the configuration is unlikely to produce meaningful results given the data profile.
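The pre-launch validation described above can be sketched as a check of mode parameters against the data profile. The function name, parameter keys, and checks below are hypothetical, assumed for illustration only; they show the pattern of warning before launch rather than failing after.

```python
def validate_causal_params(params: dict, profile: dict) -> list[str]:
    """Return warnings if a causal-mode configuration is unlikely to
    produce meaningful results given the data profile.

    Hypothetical checks: a causal run needs a temporal window the
    dataset's observed time span can actually cover, and an explicit
    intervention specification.
    """
    warnings = []
    window = params.get("temporal_window_days")
    span = profile.get("time_span_days", 0)
    if window is None:
        warnings.append("causal mode requires a temporal window")
    elif window > span:
        warnings.append(
            f"temporal window ({window}d) exceeds data span ({span}d)"
        )
    if not params.get("intervention"):
        warnings.append("no intervention specified; results will be observational")
    return warnings

# A 90-day window over 30 days of data triggers a warning, not a crash.
issues = validate_causal_params(
    {"temporal_window_days": 90, "intervention": "dose_change"},
    {"time_span_days": 30},
)
```

Validating before launch keeps budget from being spent on runs whose configuration cannot answer the research question.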
After a run completes, its claims appear in the Explore tier of the Truth Dial. Each claim includes the evidence that supports it, the scope boundaries within which it holds, and references back to the specific data subsets and parameters that produced it. Reviewing claims is not just about reading conclusions—it is about evaluating the evidence chain that connects the claim to the raw data.
Claims that survive scrutiny can be promoted to the Validate tier, which signals that they have passed initial review and warrant further investigation—perhaps through negative controls, alternative mode runs, or cross-dataset validation. Claims that do not hold up are flagged but never deleted; the Evidence Ledger preserves the full history so future researchers can understand what was tried and why it was set aside.
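The claim lifecycle above has two invariants worth making explicit: tiers only move forward, and set-aside claims are flagged rather than deleted. A minimal model, with all class and method names assumed for illustration:

```python
from enum import Enum

class Tier(Enum):
    EXPLORE = 1
    VALIDATE = 2
    PUBLISH = 3

class Claim:
    """Minimal claim model: promotion is one-way, and set-aside claims
    keep their full history, mirroring the ledger's append-only record."""

    def __init__(self, statement: str):
        self.statement = statement
        self.tier = Tier.EXPLORE
        self.flagged = False
        self.history: list[str] = ["created at EXPLORE"]

    def promote(self, rationale: str) -> None:
        if self.flagged:
            raise ValueError("flagged claims cannot be promoted")
        if self.tier is Tier.PUBLISH:
            raise ValueError("already at the top tier")
        self.tier = Tier(self.tier.value + 1)
        self.history.append(f"promoted to {self.tier.name}: {rationale}")

    def set_aside(self, reason: str) -> None:
        self.flagged = True   # flagged, never deleted
        self.history.append(f"set aside: {reason}")

claim = Claim("feature X predicts outcome Y within cohort A")
claim.promote("survived permutation controls")
```

Requiring a rationale string at promotion time is what turns a tier change into an auditable decision rather than a silent state flip.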

Campaigns are the organizational unit between projects and runs. A campaign groups related runs that share a research hypothesis or operational objective. For example, a pharmaceutical team might create a campaign for each screening round, with individual runs for different compound libraries or target configurations.
Campaigns carry their own budget allocations, completion criteria, and governance overrides. A campaign can tighten the default autonomy policy of its parent project—for instance, requiring human approval for any claim promotion during a sensitive validation phase—or it can expand permissions if the project configuration allows it.
Runs within a campaign are ordered chronologically and linked through the Evidence Ledger. When a researcher opens a campaign view, they see the full run history: which modes were used, what claims emerged, which claims were promoted or set aside, and how the overall evidence landscape evolved over time. This longitudinal view is what makes campaigns useful for programs that span weeks or months, not just individual analysis sessions.
The governance stack exists to make research trustworthy. But trust also requires communication—explaining to non-technical stakeholders why a claim is credible and what the platform did to earn that credibility. This section covers the narrative patterns that work.
When presenting to external partners, frame the Truth Dial as a maturity model rather than a confidence score. Explore means "we found something interesting and recorded how we found it." Validate means "we tested it against alternatives and negative controls, and it held up." Publish means "we are confident enough to attach our institutional reputation to this claim, and here is the complete evidence trail." This framing avoids false precision and focuses on the process.
Negative controls are the strongest governance signal because they represent attempts to disprove the claim. When presenting results, lead with what the negative controls tested and their outcomes. A claim that survived permutation tests, alternative-explanation checks, and subset holdout validation tells a stronger story than one that was only confirmed positively. If controls were skipped, the ledger records the reason—present this transparently.
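A permutation test, one of the controls named above, can be sketched in a few lines: shuffle the group labels many times and ask how often chance alone reproduces the observed effect. The effect measure here (difference of group means) is a simplifying assumption; real controls would use the claim's own statistic.

```python
import random

def permutation_control(xs: list[float], ys: list[float],
                        observed_effect: float,
                        trials: int = 1000, seed: int = 0) -> float:
    """Negative control via label permutation.

    Returns the fraction of shuffled trials whose effect size meets or
    exceeds the observed one (an empirical p-value). A large fraction
    means the claimed effect is easy to reproduce by chance.
    """
    rng = random.Random(seed)

    def effect(a: list[float], b: list[float]) -> float:
        # simple effect size: absolute difference of group means
        return abs(sum(a) / len(a) - sum(b) / len(b))

    combined = xs + ys
    hits = 0
    for _ in range(trials):
        rng.shuffle(combined)
        a, b = combined[:len(xs)], combined[len(xs):]
        if effect(a, b) >= observed_effect:
            hits += 1
    return hits / trials

treated = [2.1, 2.4, 2.2, 2.6]
control = [1.0, 1.2, 0.9, 1.1]
obs = abs(sum(treated) / 4 - sum(control) / 4)
p = permutation_control(treated, control, obs)
```

Presenting the control's outcome ("only a small fraction of label shuffles matched the effect") is exactly the lead-with-controls narrative recommended above.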
The Evidence Ledger provides content-addressed entries that trace every step from data ingestion to published claim. When compliance or regulatory partners ask "how did you arrive at this conclusion," the ledger answers that question with cryptographic integrity. Each entry links to its predecessor, forming a chain that cannot be modified without detection. Present this as an audit trail that exists independently of any individual researcher's notes.
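The tamper-evidence property of a content-addressed chain is small enough to show directly. This sketch is not ARDA's ledger format, only an assumed illustration of the mechanism: each entry's hash covers both its payload and the previous entry's hash, so editing any entry breaks every hash after it.

```python
import hashlib
import json

def entry_hash(payload: dict, prev_hash: str) -> str:
    """Content-address an entry over its payload and its predecessor."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append(ledger: list[dict], payload: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    ledger.append({"payload": payload, "prev": prev,
                   "hash": entry_hash(payload, prev)})

def verify(ledger: list[dict]) -> bool:
    """Walk the chain from genesis; any break means modification."""
    prev = "genesis"
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != entry_hash(entry["payload"], prev):
            return False
        prev = entry["hash"]
    return True

ledger: list[dict] = []
append(ledger, {"step": "ingest", "dataset": "assay_2024"})
append(ledger, {"step": "run", "mode": "causal"})
assert verify(ledger)

ledger[0]["payload"]["dataset"] = "tampered"  # any edit is detectable
assert not verify(ledger)
```

Because verification needs only the entries themselves, the audit trail stands independently of any individual researcher's notes, which is the point to emphasize with compliance partners.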
These task-focused patterns cover the most frequent operations that research teams perform in ARDA. Each workflow is self-contained and notes the governance implications of its steps.
Attach raw instrument output to a project, run the profiling step, review the data quality assessment, and resolve any coverage gaps before launching discovery. The profiling step creates the first ledger entries for the dataset, establishing its provenance baseline.
Run the same dataset through multiple discovery modes within a single campaign to compare their outputs. The platform does not prescribe which mode is "better"—comparison is domain-specific. The campaign view lets you see all claims side by side, with provenance links back to the mode and parameters that produced each one.
Select claims from the Explore tier, run negative controls against them, review the control outcomes, and promote surviving claims to Validate. Record the rationale for each promotion decision. Claims that reach Validate can later be bundled into a publish package with frozen evidence context for external distribution.
Publish bundles freeze the evidence context for a set of claims at the Publish tier. The bundle includes the claims, their full provenance chains, the negative control results, and any attached artifacts. Once sealed, a bundle cannot be modified—it serves as the permanent external-grade record of what was found and how it was validated.
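The seal-then-refuse-mutation behavior of a publish bundle can be modeled in a few lines. The class and its content-hash seal are assumptions for illustration, not the platform's bundle format:

```python
import hashlib
import json

class PublishBundle:
    """Freeze a set of claims and their evidence into an immutable record.

    Sealing computes a content hash over the canonical serialization;
    any later mutation attempt is refused rather than silently applied.
    """

    def __init__(self, claims: list[dict]):
        self._claims = claims
        self._seal: str | None = None

    def seal(self) -> str:
        body = json.dumps(self._claims, sort_keys=True)
        self._seal = hashlib.sha256(body.encode()).hexdigest()
        return self._seal

    def add_claim(self, claim: dict) -> None:
        if self._seal is not None:
            raise RuntimeError("bundle is sealed; create a new bundle instead")
        self._claims.append(claim)

bundle = PublishBundle([
    {"id": "claim-7", "tier": "publish", "controls": ["permutation"]},
])
digest = bundle.seal()
```

The seal digest is what external recipients can store: recomputing it over the bundle contents confirms the record is the one that was originally distributed.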
Pair these guides with the API and SDK references when you are ready to automate—human docs explain intent; machine docs explain contracts.