Outputs Overview
The example bundle presents the output package as a layered decision system. Files are grouped by function rather than solely by audience.
Output families in the example bundle
Run identity and planning
run_manifest.json
plans/*_module_plan.json
These files explain what the run was, which category it resolved to, and which modules were expected.
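A quick way to orient yourself is to read the manifest and enumerate the planned modules. The sketch below is a minimal example; the keys `run_id` and `resolved_category` are assumptions about the manifest schema, not a documented contract, so adjust them to the field names your bundle actually uses.

```python
import json
from pathlib import Path

def describe_run(bundle_root: Path) -> dict:
    """Summarize run identity from run_manifest.json and plans/.

    Assumes the manifest carries 'run_id' and 'resolved_category'
    keys (hypothetical names); missing keys come back as None.
    """
    manifest = json.loads((bundle_root / "run_manifest.json").read_text())
    planned = sorted(p.name for p in (bundle_root / "plans").glob("*_module_plan.json"))
    return {
        "run_id": manifest.get("run_id"),
        "category": manifest.get("resolved_category"),
        "planned_modules": planned,
    }
```

Comparing `planned_modules` against what actually landed under raw/ is a cheap first consistency check before deeper inspection.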
Raw module outputs
raw/*_raw.json
raw/sar_explorer/*
These are the module-native artifacts and should be treated as the closest layer to the execution source.
Results
results/summary.json
results/phase_5_fastfail_summary.json
results/phase_5_fastfail_summary.html
results/phase_5_computational_safety.html
results/phase_5_pipeline_analytics_snapshot.*
results/context_of_use.json
results/evidence_state_machine.json
results/failure_mode_ontology_output_v1.json
results/risk_channels_map.json
These files are the main outputs used for interpretation.
Threshold and category artifacts
thresholds/category_scores.json
thresholds/measurement_profile.json
thresholds/risk_summary.json
These artifacts explain how raw and aggregated signals are turned into risk bands and evidence statuses.
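The mapping from aggregated scores to risk bands can be sketched as a simple edge lookup. The band edges below are invented for illustration; the real edges would live in thresholds/measurement_profile.json, and the band labels are assumptions, not names taken from the bundle.

```python
# Hypothetical band edges (score floor, label); the real profile in
# thresholds/measurement_profile.json defines the authoritative values.
RISK_BANDS = [(0.0, "low"), (0.4, "medium"), (0.7, "high")]

def risk_band(score: float) -> str:
    """Return the label of the highest band whose floor the score meets."""
    label = RISK_BANDS[0][1]
    for floor, name in RISK_BANDS:
        if score >= floor:
            label = name
    return label
```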
Integrity and replay artifacts
inputs/input_hashes.json
seal/preseal.json
repro_pack/
These files support hashing, pre-seal provenance, rebuild recipes, and verification of reproduced outputs.
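Verifying recorded input hashes against files on disk can look like the sketch below. It assumes inputs/input_hashes.json decodes to a flat `{relative_path: sha256_hex}` mapping and that the digests are SHA-256; both are assumptions about the format, so check the actual file before relying on this.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_input_hashes(recorded: dict, root: Path) -> dict:
    """Compare each recorded digest against the file on disk.

    Returns {relative_path: bool}; False also covers missing files.
    The {path: digest} shape is an assumption about the manifest.
    """
    return {
        rel: (root / rel).is_file() and sha256_of(root / rel) == expected
        for rel, expected in recorded.items()
    }
```

Any `False` entry means the reproduced or received input no longer matches what was sealed, which is exactly the condition the integrity artifacts exist to surface.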
How to read the package
Start with summary.json, then move to the fast-fail summaries, then the coverage and evidence-state artifacts, and finally the reproducibility materials if you need audit depth.
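The reading order above can be expressed as a small presence check over the bundle. Only paths named elsewhere in this document are used; mapping "coverage and evidence-state artifacts" to context_of_use.json and evidence_state_machine.json is an interpretation on our part, and the function makes no assumptions about any file's internal schema.

```python
from pathlib import Path

# Recommended reading order from the text above. The choice of which
# results/ files represent the coverage and evidence-state layer is
# an assumption; adjust to taste.
READING_ORDER = [
    "results/summary.json",
    "results/phase_5_fastfail_summary.json",
    "results/context_of_use.json",
    "results/evidence_state_machine.json",
    "repro_pack",
]

def reading_plan(bundle_root: Path) -> list:
    """Return (path, exists) pairs in the recommended reading order."""
    return [(rel, (bundle_root / rel).exists()) for rel in READING_ORDER]
```

Running this against a bundle tells you immediately which layers are present before you commit to a deeper review.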