Giving machines the eyes to navigate the built world.

At Inform we are building spatial AI to understand built environments. Using machine learning and computer vision, we are creating an intelligent tool that digitizes the built world.

We're creating a spatial data infrastructure for the built world

Vision

We are automating the way buildings are measured and understood. By developing an engine capable of autonomous navigation and semantic data gathering, we provide the spatial data infrastructure for built environments. We're not just mapping spaces; we're creating an intelligence that empowers humans to master the built environment.

Technology

We use Computer Vision and Machine Learning to process raw Point Clouds into structured spatial data. By integrating LLMs for semantic classification, our system moves beyond simple geometry to recognize and categorize the components of a building. This allows us to automate the transition from a physical scan to a functional, digital understanding of any space.
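As an illustration of the very first step in such a pipeline — pulling structure out of raw point clouds — here is a minimal RANSAC plane-segmentation sketch in plain NumPy. It is not our production code: real pipelines use optimized libraries (e.g. Open3D's `segment_plane`) and handle many more degeneracies.

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane to an (N, 3) point cloud with a simple RANSAC loop.

    Returns the best (normal, d) model with normal . p + d = 0, plus a
    boolean inlier mask. Illustrative only.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 points and derive the plane they span.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (near-collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points within `threshold` of the candidate plane.
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

On a scan, the dominant plane found this way is typically a floor, ceiling, or wall — the seed for higher-level semantic labeling.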

The team

We are a stealth-mode company backed by serial founders and long-term investors. Our team of 12 spans 7 nationalities and works from our office in Helsinki, Finland. We are focused on sustainable, data-driven company building, combining diverse global expertise to solve the challenge of spatial data infrastructure.

Why point clouds aren't enough: the shift to semantic spatial data
Senior Spatial AI / 3D Computer Vision Engineer — PhD-level · Remote

About the role

As a Senior Spatial AI Engineer, you will be the architectural lead for our core 3D perception engine. You are responsible for transforming raw, "noisy" mobile sensor data into the ground-truth geometric foundation of our platform. Your work is the first and most critical step in solving the "data void"—ensuring that the built environment intelligence we provide is rooted in physical accuracy and spatial consistency.

What you'll do

  • Lead the design and delivery of core 3D perception and geometry intelligence modules: semantic understanding, 3D reconstruction refinement, and spatial intelligence.
  • Build robust geometry extraction pipelines from real scan data: denoising, meshing/fusion, segmentation, plane/structure detection, and structured output generation.
  • Define and implement quality / coverage / confidence signals that answer questions like "is the scan complete?" and "is the geometry trustworthy?", with metrics and automated checks.
  • Establish datasets, evaluation protocols, failure taxonomies, and regression tests so system reliability improves measurably week over week.
  • Collaborate closely with mobile, backend, and product to ensure end-to-end performance, scalable compute, and operational reliability.
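One deliberately simplified example of a quality/coverage signal of the kind described above: the fraction of voxels inside a scan's bounding box that contain at least one point. The function and the default voxel size below are illustrative, not the signals we actually ship — production signals would weight by expected surface area, sensor trajectory, and per-voxel confidence.

```python
import numpy as np

def voxel_coverage(points, bounds_min, bounds_max, voxel=0.25):
    """Fraction of voxels in the bounding box containing >= 1 scan point.

    A crude answer to "is the scan complete?": values near 1.0 suggest
    good coverage; low values flag unscanned regions.
    """
    bmin = np.asarray(bounds_min, dtype=float)
    bmax = np.asarray(bounds_max, dtype=float)
    dims = np.maximum(np.ceil((bmax - bmin) / voxel).astype(int), 1)
    # Map each point to its voxel index, clamping to the grid.
    idx = np.clip(np.floor((points - bmin) / voxel).astype(int), 0, dims - 1)
    occupied = np.zeros(dims, dtype=bool)
    occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return float(occupied.mean())
```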

What we're looking for (must-have)

  • Deep experience in 3D computer vision: point clouds, meshing, segmentation, pose estimation, and 3D reconstruction.
  • Strong ML fundamentals and hands-on experience with PyTorch (or equivalent).
  • Track record of shipping end-to-end systems (data in → poses/map/structured outputs/quality signals out), with clear trade-offs and reproducible results.
  • Excellent engineering fundamentals (clean code, experiment hygiene, documentation, CI/CD).

Nice to have

  • Modern deep learning for 3D (sparse convs, 3D transformers, implicit methods).
  • Strong research implementation ability; open-source repos/papers/projects are a plus.
  • MLOps / cloud experience (GCP/AWS), deploying batch/stream pipelines and managing model/data versioning.
  • Experience building uncertainty/confidence systems, auto-QA, anomaly detection, or reliability scoring for perception pipelines.

What success looks like

  • You ship improvements that reduce manual intervention, increase reliability on messy real-world scans, and create dependable "health signals" for the entire pipeline.

How to apply: send your resume to anna-lyydia@millhillgarage.com

Generative AI Engineer — PhD-level · Remote

About the role

As a Generative AI Engineer, you will be at the heart of our mission to solve the "data void" of the built environment. You will design and deploy generative models that bridge the gap between messy, real-world mobile scans and clean, structured, "on-demand" property intelligence. Your work will focus on using state-of-the-art Generative AI to automate the creation of reliable property geometry and structured data.

What you'll do

  • Model Development: Design, develop, and implement generative AI models (such as GANs, VAEs, or diffusion-based architectures) to synthesize, complete, or refine 3D spatial data.
  • Pipeline Integration: Build and maintain end-to-end AI pipelines, translating raw mobile-captured scans into high-fidelity, structured property outputs.
  • Optimization & Scaling: Optimize generative models for performance and cost-effectiveness, ensuring our built environment intelligence is delivered in real time.
  • R&D: Stay at the forefront of Generative AI research—specifically in 3D reconstruction and spatial reasoning—and rapidly prototype new solutions for geometry cleanup and structure extraction.
  • Collaboration: Work closely with the Spatial AI and 3D Computer Vision teams to turn human review signals into automated, high-confidence training loops.
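To make the diffusion case concrete, the sketch below implements only the fixed forward (noising) half of a diffusion process on a 3D point set, using a cosine noise schedule. The learned network that reverses the noising — the part the role centers on — is omitted; this is an illustration of the technique, not our model.

```python
import numpy as np

def cosine_alpha_bar(t, T, s=0.008):
    # Cumulative signal fraction under a cosine schedule (illustrative).
    f = lambda u: np.cos((u / T + s) / (1 + s) * np.pi / 2) ** 2
    return f(t) / f(0)

def diffuse_points(x0, t, T=1000, rng=None):
    """Forward diffusion q(x_t | x_0) applied to an (N, 3) point set.

    x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps, eps ~ N(0, I).
    At t = 0 the points are untouched; at t = T they are pure noise.
    Returns (x_t, eps); a trained model would predict eps to denoise.
    """
    rng = np.random.default_rng(rng)
    a_bar = cosine_alpha_bar(t, T)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * eps, eps
```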

What we're looking for (must-have)

  • Deep AI Expertise: Proven experience developing and deploying generative models (GANs, VAEs, LLMs, or similar) with a focus on image or 3D data.
  • Technical Proficiency: Strong Python skills and deep hands-on experience with PyTorch or TensorFlow.
  • Spatial Literacy: Solid understanding of 3D data structures (point clouds, meshes) and geometry processing.
  • Engineering Rigor: Excellence in clean code, experiment hygiene, and CI/CD for ML (MLOps).

Nice to have

  • Advanced Research: A PhD or strong publication/open-source record in Generative AI or Computer Vision.
  • Specialized Knowledge: Experience with 3D Transformers, implicit neural representations (NeRFs), or multi-modal models.
  • Cloud Native: Experience deploying and managing large-scale AI models on GCP or AWS.
  • Real-World Impact: Experience working with ARKit, mobile scanning data, or architectural geometry.

What success looks like

  • You deliver generative systems that significantly reduce the need for manual intervention, moving the platform toward full autonomy while maintaining high-confidence accuracy in real time.

How to apply: send your resume to anna-lyydia@millhillgarage.com

Junior Applied Spatial AI / ML Engineer — Master's-level or Master's Thesis Worker · Remote

About the role

You'll work hands-on on the applied 3D vision and ML pieces that turn scan data into consistent property geometry and structured spatial outputs. You'll ship improvements quickly, learn from real-world data, and help harden the system with metrics and automated checks. We welcome applicants looking for a full-time junior position as well as those interested in completing their Master's Thesis with us. For thesis workers, we will collaborate to define a research topic that aligns with our core spatial intelligence challenges.

What you'll do

  • Implement and improve pipeline modules: preprocessing, segmentation, semantic labeling, structure inference, and geometry post-processing.
  • Create and maintain datasets and evaluation harnesses: metrics, visual debugging outputs, regression tests, and clear experiment tracking.
  • Turn human review/corrections into training signals and iterative improvements in models and heuristics.
  • Collaborate across the team to ensure reproducibility, reliability, and scalability (clean data handling, automation, and documentation).

What we're looking for

  • MSc (or equivalent experience) in CS/EE/Robotics/ML or similar.
  • Strong Python skills; practical experience with PyTorch.
  • Comfortable working with 3D data (point clouds/meshes) and common tooling (Open3D, OpenCV, etc.).
  • Ability to communicate trade-offs clearly and produce clean, reproducible outputs.

Nice to have

  • Coursework/projects in 3D vision, robotics, geometry processing, or reconstruction.
  • Fundamentals in machine learning and deep learning (e.g. completed university courses).
  • Cloud/deployment basics (Docker, CI/CD, batch processing).
  • Interest in quality systems: confidence scoring, anomaly detection, automated pipeline checks.

What success looks like

  • You deliver measurable improvements to accuracy and robustness, and you leave behind code + metrics that make future iteration faster.

How to apply: send your resume to anna-lyydia@millhillgarage.com

Summer Trainee: 3D Vision / Spatial Intelligence — Master's student · Remote

About the internship

This is a hands-on summer role where you'll ship a scoped project that improves our tech stack's ability to produce accurate property geometry and structured spatial data from mobile-captured scans. You'll work closely with the team, get weekly feedback, and deliver something measurable and usable.

Example project directions (choose 1–2)

  • Quality / coverage signals (highly product-relevant): completeness metrics, coverage heatmaps, and automated checks that help decide whether scans are sufficient and trustworthy.
  • Geometry cleanup & structure extraction: denoising, plane/structure detection, occupancy-grid style representations, and post-processing that improves stability.
  • Multi-level understanding: splitting scans into levels/areas and producing consistent per-level geometry.
  • Evaluation harness: build a clean benchmark set, failure taxonomy, and a repeatable evaluation script that outputs metrics + visualizations.
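As a taste of what the evaluation-harness direction involves, here is a minimal sketch: one geometry-quality metric (RMS point-to-plane distance) plus a regression check that fails when the metric degrades beyond a tolerance. All names and thresholds are illustrative.

```python
import numpy as np

def plane_rms_error(points, normal, d):
    """RMS point-to-plane distance for an (N, 3) point set and a plane
    normal . p + d = 0 -- one simple geometry-quality metric."""
    normal = np.asarray(normal, dtype=float)
    normal = normal / np.linalg.norm(normal)
    return float(np.sqrt(np.mean((points @ normal + d) ** 2)))

def regression_check(metric, baseline, tolerance=0.05):
    """True if `metric` has not regressed past `baseline` by more than
    `tolerance` (relative). The kind of automated gate an evaluation
    harness would run on every change (lower metric = better here)."""
    return metric <= baseline * (1 + tolerance)
```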

What you'll do

  • Implement prototypes in Python (and PyTorch where relevant) and run experiments on real data.
  • Deliver a shippable result: working code, clear outputs (visualizations + metrics), and a short write-up explaining approach, results, trade-offs, and next steps.
  • Present progress weekly and iterate quickly based on feedback.

What we're looking for

  • Master's student in CS/Robotics/ML/CV (or similar).
  • Strong Python fundamentals; some exposure to ML and/or 3D data.
  • Practical mindset: you can build something end-to-end, test it, and explain your choices.

Nice to have

  • Experience with ARKit / mobile scanning data, Open3D/OpenCV, or geometry processing.
  • Prior research or projects in SLAM / reconstruction / 3D perception.

What success looks like

  • By the end of the summer, you've delivered a component (or mini-pipeline) that can be integrated and that makes the system more robust or more measurable.

How to apply: send your resume to anna-lyydia@millhillgarage.com