{ "@context": "https://schema.org", "@graph": [ /* ---------- Brand & site-wide objects ---------- */ { "@type": "Organization", "@id": "https://www.multiset.ai/#organization", "name": "MultiSet AI", "url": "https://www.multiset.ai/", "logo": "https://www.multiset.ai/assets/logo.svg", "sameAs": [ "https://www.linkedin.com/company/multiset-ai", "https://github.com/multiset-ai", "https://twitter.com/multiset_ai" ], "contactPoint": [ { "@type": "ContactPoint", "contactType": "sales", "email": "mailto:contact@multiset.ai", "areaServed": "GLOBAL", "availableLanguage": ["English"] }, { "@type": "ContactPoint", "contactType": "customer support", "email": "mailto:support@multiset.ai", "availableLanguage": ["English"] } ] }, { "@type": "WebSite", "@id": "https://www.multiset.ai/#website", "url": "https://www.multiset.ai/", "name": "MultiSet AI – Scan-Agnostic Visual Positioning System", "potentialAction": { "@type": "SearchAction", "target": "https://www.multiset.ai/search?q={search_term_string}", "query-input": "required name=search_term_string" }, "publisher": { "@id": "https://www.multiset.ai/#organization" } }, /* ---------- Home page ---------- */ { "@type": "WebPage", "@id": "https://www.multiset.ai/#home", "url": "https://www.multiset.ai/", "name": "Enterprise Visual Positioning System | MultiSet AI", "description": "Sub-10 cm accurate, scan-agnostic Visual Positioning System (VPS) built for enterprise AR, robotics and digital twins. 
Deploy on-prem, in private cloud or fully offline, and localize any device — iOS, Android, Unity, WebXR, ROS.", "isPartOf": { "@id": "https://www.multiset.ai/#website" }, "breadcrumb": { "@type": "BreadcrumbList", "itemListElement": [ { "@type": "ListItem", "position": 1, "name": "Home" } ] }, "primaryImageOfPage": { "@type": "ImageObject", "url": "https://www.multiset.ai/assets/og/home-hero.png" }, "inLanguage": "en", "datePublished": "2026-02-01", "dateModified": "2026-02-12", "keywords": [ "visual positioning system", "VPS for enterprise", "scan-agnostic mapping", "6DoF localization", "indoor localization without GPS", "on-prem VPS", "private cloud AR", "object tracking SDK", "digital twin mapping", "centimeter-level positioning" ], "about": [ { "@type": "Thing", "name": "Visual Positioning System" }, { "@type": "Thing", "name": "3D mapping" }, { "@type": "Thing", "name": "Object tracking SDK" } ] }, /* ---------- Core product as SoftwareApplication ---------- */ { "@type": "SoftwareApplication", "@id": "https://www.multiset.ai/#multiset-vps", "name": "MultiSet VPS SDK", "applicationCategory": "AugmentedRealityApplication", "operatingSystem": "iOS, Android, Unity, WebXR, ROS", "softwareVersion": "v1.6.5", "offers": { "@type": "Offer", "price": "0.00", "priceCurrency": "USD", "description": "Free tier – 10 maps & 10 000 localization calls per month" }, "featureList": [ "Sub-10 cm indoor accuracy", "Low-drift 6-DoF tracking", "Scan-agnostic MapSet ingestion", "Offline & on-prem deployment", "Multi-floor stitching" ], "publisher": { "@id": "https://www.multiset.ai/#organization" } }, /* ---------- High-intent FAQ for zero-click wins ---------- */ { "@type": "FAQPage", "@id": "https://www.multiset.ai/#faq", "mainEntity": [ { "@type": "Question", "name": "What makes MultiSet different from other VPS solutions?", "acceptedAnswer": { "@type": "Answer", "text": "Unlike map-locked platforms, MultiSet ingests any LiDAR or photogrammetry scan (E57, Matterport, NavVis, 
Leica), runs on-prem or air-gapped, and localizes devices in <6 cm median accuracy across dynamic indoor and outdoor scenes." } }, { "@type": "Question", "name": "Can I deploy MultiSet fully offline?", "acceptedAnswer": { "@type": "Answer", "text": "Yes. The VPS backend and MapSets can run on-device or in a private cluster with no external network calls, meeting stringent data-sovereignty requirements." } }, { "@type": "Question", "name": "Which devices and engines are supported?", "acceptedAnswer": { "@type": "Answer", "text": "MultiSet provides native iOS/Android libraries, a Unity package, a WebXR SDK and a ROS 2 node, enabling rapid integration into mobile apps, head-mounted displays, browsers and robots." } } ] } ] }
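JSON-LD payloads like the one above are easy to break silently: JSON allows neither `/* ... */` comments nor raw newlines inside string literals, and an invalid block is simply ignored by search engines. A quick pre-publish sanity check is to parse the payload and confirm the expected graph nodes are present (a minimal sketch with a trimmed-down payload):

```python
import json

# Minimal JSON-LD sanity check: parse the structured-data payload and
# confirm the expected top-level node types are present. json.loads raises
# on comments, trailing commas, or raw control characters in strings.
payload = """
{
  "@context": "https://schema.org",
  "@graph": [
    {"@type": "Organization", "@id": "https://www.multiset.ai/#organization"},
    {"@type": "WebSite", "@id": "https://www.multiset.ai/#website"}
  ]
}
"""

data = json.loads(payload)  # raises ValueError if the JSON is malformed
types = [node["@type"] for node in data["@graph"]]
print(types)  # ['Organization', 'WebSite']
```

The same check can be run against the full production payload before it is embedded in a `<script type="application/ld+json">` tag.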

Enterprise Visual Positioning That Just Works

Deploy on any device, use your existing 3D scans, and get centimeter-level accuracy - no markers, beacons, or new infrastructure required

The Complete Visual Positioning Stack

A unified workflow to map any facility, stitch multi-floor environments, and deploy with sub-5 cm precision - indoors, outdoors, even in low light.

[Report cover: "Visual Positioning Systems for Enterprise AR Applications" - an AREA (Augmented Reality for Enterprise Alliance) research report by ETHAR, INC.]

Most Robust VPS

Augmented Reality for Enterprise Alliance (AREA) - 15th AREA Research Report, September 2025
| Feature | MultiSet VPS | Others |
| --- | --- | --- |
| Robustness | ≤5 cm accuracy, low drift; instant 6-DoF relocalization in tough light, weather, and motion | Drops or drifts under glare, low light, or occlusion; slower re-locks |
| Scan-Agnostic | Use what you have - LiDAR, E57/point clouds, textured meshes, Gaussian splats | Pre-mapping lock-in to a vendor app or limited file formats |
| Multi-Floor Persistence | Plant-wide continuity - auto-stitches floors/buildings; safe re-maps keep anchors/semantics | Manual linking; updates can break anchors and content |
| Dynamic Environments | Change-tolerant - stays stable as equipment or layouts move; minimal re-scans | Static-scene bias; frequent re-mapping to maintain accuracy |
| Indoor ↔ Outdoor Transitions | Seamless handover - top indoor performance with clean transitions to GNSS/RTK, IoT, UWB | Outdoor-first stacks struggle indoors and at thresholds |
| Deployment Flexibility | Your cloud or ours - public, private, on-prem, or on-device/offline; full data ownership | Vendor-cloud default; limited offline/on-prem and data control |
| Support | 24/7 engineer-led SLAs; SDKs for Unity, iOS, Android, WebXR, Quest | Support varies; forum-first or business-hours only; narrower SDKs |
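Accuracy figures like "≤5 cm median" are conventionally measured as the median Euclidean distance between estimated and ground-truth positions over a test trajectory. A minimal sketch of that computation, using made-up sample data (not MultiSet benchmark output):

```python
import math
from statistics import median

# Hypothetical sample: estimated vs. ground-truth positions, in meters.
estimated    = [(1.02, 2.00, 0.31), (4.51, 2.98, 0.29), (7.95, 3.04, 0.33)]
ground_truth = [(1.00, 2.00, 0.30), (4.50, 3.00, 0.30), (8.00, 3.00, 0.30)]

# Per-sample 3-D Euclidean error, then the median across the trajectory.
errors = [math.dist(e, g) for e, g in zip(estimated, ground_truth)]
median_error_cm = median(errors) * 100
print(f"median error: {median_error_cm:.1f} cm")
```

Median (rather than mean) error is the usual headline metric because it is robust to the occasional relocalization outlier.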
Read the Report
Companies We Work With

Unify your spatial intelligence

Any Input

One pipeline for any reality capture source

LiDAR, E57, Matterport, textured meshes, iPhone scans - it all works. No vendor lock-in, no rescanning.
Infinite Scale

Stitch maps across floors, buildings, and campuses

Multi-floor support, map stitching, no drift as you scale. Handles massive environments without remapping.
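Under the hood, stitching maps means expressing every scan in one shared coordinate frame by applying a rigid transform (rotation plus translation) to each map's points. A simplified 2-D sketch with a hypothetical offset between two scan frames (not MultiSet's actual stitching pipeline):

```python
import math

def rigid_transform(point, theta, tx, ty):
    """Rotate a 2-D point by theta (radians), then translate by (tx, ty)."""
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta) + tx,
            x * math.sin(theta) + y * math.cos(theta) + ty)

# Hypothetical example: floor B's local frame is rotated 90 degrees and
# offset 10 m along x relative to floor A's frame. Stitching maps floor B's
# points into floor A's coordinates.
floor_b_local = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
theta = math.pi / 2
stitched = [rigid_transform(p, theta, 10.0, 0.0) for p in floor_b_local]
print(stitched)
```

Real stitching pipelines estimate the transform automatically from overlapping features between scans; the transform itself is the same rigid-body math shown here, extended to 3-D.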
Any Output

One SDK for every device

Unity, Web, iOS, Android, Quest, ROS. Deploy to phones, robots, headsets, browsers - all from the same build.

Complete Cloud Choice


MultiSet Managed Cloud


Enterprise Private Cloud


On Device (Offline)

Use Cases

Frequently Asked Questions
What is MultiSet AI?

MultiSet AI is an enterprise spatial computing platform that gives cameras, headsets, mobile apps, and robots precise 6-DoF localization to align AR content and operations to the real world. It powers reliable navigation, inspection, training, and digital-twin overlays across complex indoor/outdoor, multi-floor environments.
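A 6-DoF pose is a 3-D position plus an orientation, and its practical use is transforming content from device or anchor space into world coordinates. A minimal sketch using a yaw-only rotation for readability (production SDKs use full quaternion orientations; this is not MultiSet API code):

```python
import math

def anchor_to_world(pose, local_point):
    """Transform a point from device space into world space.

    pose is (x, y, z, yaw): world position in meters plus heading in
    radians. Only yaw is modeled here; a full 6-DoF pose also carries
    pitch and roll.
    """
    px, py, pz, yaw = pose
    x, y, z = local_point
    wx = px + x * math.cos(yaw) - y * math.sin(yaw)
    wy = py + x * math.sin(yaw) + y * math.cos(yaw)
    return (wx, wy, pz + z)

# Device localized at (5, 3, 1.5) m, heading rotated 90 degrees from the
# map's x-axis; an AR label placed 2 m in front of the device lands at:
pose = (5.0, 3.0, 1.5, math.pi / 2)
world = anchor_to_world(pose, (2.0, 0.0, 0.0))
print(world)
```

This is exactly why localization accuracy matters: any error in the pose propagates directly into where every overlay, waypoint, or robot goal lands in the real world.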

How is MultiSet different from other VPS or AR toolkits?

Unlike marker/beacon systems or single-vendor stacks, MultiSet is scan-agnostic and cross-platform: import E57/point clouds/meshes from your reality-capture tools, run on iOS/Android/headsets/robots, and build with Unity, WebXR, native SDKs, or ROS. The result is low-drift, low-latency localization that scales from a single bay to an entire campus, without hardware lock-in.

Will MultiSet fit our existing stack and security requirements?

Yes. Bring your current scanners and devices; choose public cloud, private cloud/VPC, self-hosted, or on-device deployments. Data is encrypted in transit/at rest with enterprise identity and audit controls—designed for regulated and air-gapped environments.

How fast can we go from pilot to campus-wide coverage?

Teams typically stand up a first localized scene in minutes using samples, then expand by stitching scans into map sets for seamless indoor↔outdoor and multi-building continuity. This lets you scale predictably while keeping performance consistent across sites.

What outcomes should we expect in the first 90 days?

Common wins include guided workflows & wayfinding, faster operator training, AR-assisted inspection/QA, context-aware IoT overlays, and more consistent robotics navigation—measured as reduced task time, error rates, and downtime, plus higher adoption of digital procedures.

How do we buy and get started?

Start on the free tier to prototype; production pricing aligns to usage (maps, API calls, map area) with optional SLAs for private-cloud or on-device deployments. If you’re migrating from retired/legacy spatial anchors, we provide guided migration with minimal app changes. Book a demo, import or capture a pilot scan, and ship your first experience.