Disasters, demographics, economics, climate, and risk normalized to the same geographic schema. Source-documented, versioned, and exportable. From query to Parquet without building the pipeline first.
Cross-domain
Earthquakes, UN development indicators, FEMA risk scores, economic data. All normalized to
the same loc_id schema. Ask across domains without writing custom join logic.
Provenance
Coverage scope, QA state, source URL, and update timestamps travel with every pack. The data trail you need for citations and reproducibility is part of the release, not an afterthought.
Depth
1M+ earthquake events back to 2150 BC. 13,000+ storms since 1842. 45,000+ landslide events since 1760. Longitudinal depth for serious research.
The most expensive part of geographic research is usually not the analysis. It is collecting datasets from separate portals, reconciling different geographic identifiers, normalizing conflicting schemas, and QA-testing joins before any actual analysis can begin.
DaedalMap's maintained packs represent that work already completed. Each pack is normalized
to a shared geographic schema, QA-gated before release, versioned, and documented.
The loc_id key that identifies a US census tract in the FEMA National Risk
Index is the same key used in disaster event data, economic indicators, and climate projections.
Cross-domain queries do not require a custom join.
Published packs cover disasters, demographics, economics, climate, and risk.
All packs share the same loc_id geography schema. A query combining
earthquake frequency with economic indicators or social vulnerability requires no schema
translation.
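As an illustration (the table names, columns, and values below are hypothetical; only the shared loc_id key reflects the schema described above), a cross-domain question reduces to a plain equi-join:

```python
import sqlite3

# Hypothetical tables standing in for two pack exports; only the
# shared loc_id key reflects the schema described above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quakes (loc_id TEXT, quake_count INTEGER)")
con.execute("CREATE TABLE econ (loc_id TEXT, median_income INTEGER)")
con.executemany("INSERT INTO quakes VALUES (?, ?)",
                [("06037", 412), ("06075", 187)])
con.executemany("INSERT INTO econ VALUES (?, ?)",
                [("06037", 76367), ("06075", 121826)])

# Because every table carries the same loc_id key, combining domains
# needs no translation layer or custom join logic.
rows = con.execute(
    "SELECT q.loc_id, q.quake_count, e.median_income "
    "FROM quakes q JOIN econ e USING (loc_id) "
    "ORDER BY q.loc_id"
).fetchall()
print(rows)
```

The same shape holds for any pair of domains: one join key, no reconciliation step.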
Research mode is a bounded analytical workspace. You build a named corpus from published packs in your account, then load it in the app to ask comparison, synthesis, and trend questions against that specific data.
The corpus boundary is explicit: Research mode answers only from what is in the corpus. If something is missing, it says so. That keeps the reasoning honest and the results auditable.
How it works
The pack schema is public. If you have your own datasets (survey results, field
measurements, custom raster aggregations, institutional data), they can be normalized
to the same loc_id schema and used alongside maintained packs in the same
Research workspace.
A climate researcher who brings land surface temperature rasters for a county study can cross-reference them against FEMA risk scores, NLCD impervious surface, and building footprint data in the same query session. The local data travels with the same geographic key as the maintained packs.
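In outline (the site codes, measurements, and the site-to-loc_id crosswalk below are all hypothetical; in practice the mapping comes from your own geocoding or spatial-join step), normalizing local data is a re-keying exercise:

```python
# Hypothetical local measurements keyed by a site code, plus a
# hypothetical crosswalk from site codes to loc_id values.
field = [
    {"site": "A1", "lst_celsius": 31.2},
    {"site": "B2", "lst_celsius": 28.7},
]
crosswalk = {"A1": "06037", "B2": "06075"}

# Re-keying on loc_id is the only normalization step; after this,
# the rows join maintained packs exactly like any published pack.
normalized = [
    {"loc_id": crosswalk[row["site"]], "lst_celsius": row["lst_celsius"]}
    for row in field
]
print(normalized)
```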
Read the pack schema guide

Query results export to CSV and Parquet. Both formats are compatible with R, Python, Stata, and QGIS. Pack metadata, source attribution, and coverage scope export alongside the data.
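One practical note on the CSV path (the filename and columns below are hypothetical): loc_id values are zero-padded strings, so read them as text rather than numbers to keep the leading zeros intact.

```python
import csv
import io

# Stand-in for a hypothetical CSV export; loc_id is a zero-padded string.
csv_export = io.StringIO("loc_id,risk_score\n06037,28.4\n06075,17.9\n")

# csv.DictReader keeps every field as a string, so the loc_id keeps
# its leading zero (a common loss point when spreadsheets coerce it
# to a number).
rows = list(csv.DictReader(csv_export))
print(rows[0]["loc_id"])
```

Parquet avoids the issue entirely, since column types travel with the file.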
For research involving sensitive or proprietary data that cannot leave your environment, the same engine runs locally: install the packs you need, add your own private sources, and run queries with no cloud dependency. In benchmarks it runs 2x to 6x faster than the hosted app, and an optional local AI model removes the last external dependency.
Compare hosted and local
Local install
The local version runs entirely on your machine: no hosted latency, no cloud dependency, no per-query cost. Install the packs you need, add your own private data sources, and work offline. 2x to 6x faster than the hosted app. Add an optional local AI model and remove the last external dependency.
The right path for sensitive workflows, restricted environments, and any research that requires the data to stay on your machine.