[engineering] · Dec 2025 · 9 min read

Building Digital Twins at City Scale

Building a digital twin of a single structure is hard. Building one that covers an entire city — and stays accurate over time — is a fundamentally different problem.

Scale Changes Everything

At city scale, you're dealing with:

  • Terabytes of input data from dozens of sensor types
  • Coordinate system complexity — projections, datums, and the curvature of the Earth actually matter (see the reprojection sketch after this list)
  • Temporal inconsistency — different parts of the city were captured at different times
  • Varying quality — some areas have LiDAR, others only have satellite imagery
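To make the coordinate point concrete: before any fusion can happen, every source has to land in a shared metric frame. Here is a minimal sketch using pyproj — the library and EPSG codes are illustrative assumptions, not a description of our production pipeline:

```python
# Sketch: reprojecting survey coordinates into a metric CRS before fusion.
# pyproj and the EPSG codes here are illustrative, not our actual stack.
from pyproj import Transformer

# WGS84 lat/lon -> UTM zone 10N (covers San Francisco), units in meters
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32610", always_xy=True)

lon, lat = -122.4194, 37.7749  # downtown San Francisco
easting, northing = transformer.transform(lon, lat)
print(f"UTM 10N: {easting:.2f} m E, {northing:.2f} m N")
```

At 50 km extents, distortion from the map projection itself becomes measurable, which is why the choice of zone — or a custom local frame — matters.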

The Persistent Twin

A digital twin isn't a snapshot — it's a living model that updates continuously. New drone surveys, updated satellite imagery, and even street-level phone captures all need to be fused into the existing model.

```python
# Incremental update — fuse new data into existing twin
import percept

twin = percept.load("city_twin_sf")
twin.update(
    source="new_drone_survey_2026_q1.mp4",
    merge_strategy="weighted_fusion",
)
print(f"Updated regions: {twin.changed_areas}")
print(f"New objects: {twin.new_object_count}")
```

This requires solving the registration problem at scale: how do you align new data with the existing model when the existing model covers 50 square kilometers?
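One common pattern — sketched here as an assumption, not a description of our internals — is coarse-to-fine alignment: use GPS/IMU metadata to select the relevant tile of the twin, then refine the pose with point-to-plane ICP. A minimal version with Open3D (file paths and voxel size are hypothetical):

```python
# Sketch: registering a new scan against one tile of the existing twin.
# Paths and the 0.05 m voxel size are illustrative.
import numpy as np
import open3d as o3d

tile = o3d.io.read_point_cloud("twin_tile_sf_04_07.ply")     # existing model tile
scan = o3d.io.read_point_cloud("drone_survey_fragment.ply")  # new data

# Downsample and estimate normals for point-to-plane ICP
voxel = 0.05
tile_ds = tile.voxel_down_sample(voxel)
scan_ds = scan.voxel_down_sample(voxel)
for pc in (tile_ds, scan_ds):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 4, max_nn=30))

# A coarse pose from GPS/IMU metadata would seed this; identity as placeholder
init = np.eye(4)
result = o3d.pipelines.registration.registration_icp(
    scan_ds, tile_ds, max_correspondence_distance=voxel * 2, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("Fitness:", result.fitness, "RMSE:", result.inlier_rmse)
scan.transform(result.transformation)  # align new scan into twin coordinates
```

Tiling keeps the problem tractable: each new capture only ever registers against the few square kilometers it can actually see.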

Compound Accuracy

The key insight is that accuracy compounds over time. Each new data source doesn't just add coverage — it improves the accuracy of existing regions through cross-validation.

After six months of continuous updates, our San Francisco pilot improved overall accuracy from ±4 cm to ±1.8 cm without any targeted re-survey.
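As a back-of-the-envelope sanity check — not the production fusion math, which handles correlated errors and much more — treating the original model and the accumulated independent observations as Gaussian estimates, inverse-variance weighting predicts exactly this kind of tightening:

```python
# Sketch: inverse-variance fusion of independent Gaussian estimates.
# Numbers are illustrative; real fusion must account for correlated errors.
import math

def fuse(sigmas_cm):
    """Combined standard deviation of independent estimates."""
    return math.sqrt(1.0 / sum(1.0 / s**2 for s in sigmas_cm))

# Original model at ±4 cm plus accumulated evidence equivalent to ±2 cm
print(f"{fuse([4.0, 2.0]):.1f} cm")  # -> 1.8 cm
```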

This is the power of a persistent, updateable digital twin versus periodic one-shot reconstructions.