Mixed Reality (MR): What It Is, How It Works, and Where It’s Going (2021 Update)

Mixed Reality blends the physical world with digital content that’s spatially aware—anchored, occluded, and interactive. Here’s the 2021 state of play, from devices to real-world use cases.

Mixed reality (MR) blends the physical world with digital content that’s anchored to (and interacts with) your real environment. Thanks to advances in computer vision (SLAM), depth sensing, hand/eye tracking, and modern GPUs, MR lets holograms “stick” to your room, respond to surfaces, and even hide behind real objects (occlusion). It sits on the same spectrum as AR and VR but adds spatial understanding and bidirectional interaction—the key difference from simple overlays.

Quick take: If AR is “adding labels on top of reality” and VR is “fully virtual,” MR is “digital objects that behave like they truly exist in your space.”

How Mixed Reality Works

  • Environment mapping (SLAM): Head-mounted sensors build a live 3D mesh of your room (walls, tables, floors). Digital objects can then collide with, rest on, or hide behind real geometry.
  • Spatial anchors: Coordinates tied to the real world keep holograms in place across sessions/devices. See Azure Spatial Anchors.
  • Natural input: Hand, eye, and voice input—pinch to grab, gaze to target, voice to command—replace or complement controllers.
  • Rendering + occlusion: Real-time graphics and depth data give virtual objects correct scale and lighting, and let real objects hide virtual ones behind them (see the sketch after this list).
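
To make the list above concrete, here's a minimal, hypothetical sketch of switching those capabilities on with Apple's ARKit and RealityKit (one of the mobile MR stacks covered below); HoloLens developers would do the equivalent in Unity with MRTK. The function name `startMixedRealitySession` is illustrative, not from any official sample.

```swift
import ARKit
import RealityKit

// Illustrative sketch (not an official sample): enable the spatial-
// understanding features described above. Assumes an iOS device with
// a LiDAR sensor for scene reconstruction.
func startMixedRealitySession(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()

    // Environment mapping: build a live 3D mesh of the surroundings.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }

    // Detect horizontal/vertical surfaces so content can rest on them.
    config.planeDetection = [.horizontal, .vertical]

    // Occlusion: let real-world geometry and people hide virtual objects.
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }

    arView.session.run(config)
}
```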

Want the quick sizzle? Microsoft’s page shows the core concepts well: HoloLens (official).

MR vs. AR vs. VR

  • VR: Fully virtual environment (max immersion).
  • AR: Overlays on top of a camera view or HUD (limited world understanding).
  • MR: Digital content that understands and interacts with your space (anchoring, physics, occlusion, hand/eye tracking).

Note: “Windows Mixed Reality” is a platform name; many headsets in that PC lineup are primarily VR. True MR requires spatial mapping and real-world interaction.

Devices You’ll Actually See in 2021

  • Microsoft HoloLens 2 (optical see-through): enterprise-grade, eye/hand tracking, depth sensing; strong for field service and training. Official site.
  • Magic Leap One (Creator → Enterprise): comfortable wear, hand tracking; medical visualization and AEC pilots. Official site.
  • VR headsets with pass-through MR (e.g., Quest 2 in 2021): black-and-white room view + anchors enable “passthrough MR” for utility apps.
  • Mobile MR via ARKit/ARCore: phone-based spatial anchors, depth APIs, and occlusion for lighter MR (see the placement sketch after this list).
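
As a taste of what those phone APIs feel like, here's a hedged ARKit/RealityKit sketch that raycasts from a screen tap onto real geometry and pins a virtual object there. The function name and the cube content are assumptions for illustration, not from a vendor sample; ARCore exposes the same hit-test-then-anchor pattern in Java/Kotlin.

```swift
import ARKit
import RealityKit

// Hypothetical tap handler: raycast from a screen point onto real
// geometry and pin a virtual object there with an anchor -- the basic
// "hologram sticks to your room" interaction described above.
func placeHologram(at screenPoint: CGPoint, in arView: ARView) {
    // Raycast against detected (or estimated) planes in the room.
    guard let result = arView.raycast(from: screenPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }

    // An ARAnchor ties a world-space transform to the real environment;
    // the session keeps it locked in place as tracking improves.
    let anchor = ARAnchor(name: "hologram", transform: result.worldTransform)
    arView.session.add(anchor: anchor)

    // Attach simple placeholder content (a 10 cm cube) to the anchor.
    let entity = ModelEntity(mesh: .generateBox(size: 0.1))
    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(entity)
    arView.scene.addAnchor(anchorEntity)
}
```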

What You Need to Build MR

  • Hardware: Head-worn MR (HoloLens 2 / Magic Leap) or supported VR with pass-through; or a modern phone.
  • Engine & SDK: Unity/Unreal with Microsoft’s Mixed Reality Toolkit (MRTK); platform SDKs for anchors/meshing.
  • Standards: OpenXR simplifies cross-device builds.

Real-World Uses

Healthcare

Pre-op planning, 3D anatomy training, and remote expert assist—MR shortens learning curves and improves spatial understanding.

Architecture, Engineering & Construction (AEC)

Full-scale BIM walk-throughs, clash detection in situ, client sign-off with life-size context (SketchUp/Trimble, Unity Reflect pipelines).

Manufacturing & Field Service

Hands-free step-by-step guides and remote annotations pinned to real equipment; reduced downtime and travel.

Aviation & Defense Training

Procedural drills and cockpit familiarization: repeatable scenarios without the logistics and risks of live-fire exercises.

Remote Collaboration

Shared spatial anchors let teams see and edit the same hologram at 1:1 scale from different locations—think whiteboards that live in your office forever. (Microsoft previewed cross-platform rooms and avatars with Mesh in 2021.)
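
For the curious, here's a rough sketch of one way to share an anchor space between nearby devices, using ARKit's collaborative sessions (Azure Spatial Anchors is the cloud-backed, cross-platform route mentioned above). The `send` hook is a hypothetical stand-in for your own networking layer, e.g. MultipeerConnectivity.

```swift
import ARKit

// Sketch, not a drop-in implementation: each peer broadcasts ARKit's
// collaboration packets; once merged, both devices resolve the same
// anchors in a shared coordinate space.
final class SharedAnchorSession: NSObject, ARSessionDelegate {
    let session = ARSession()
    var send: (Data) -> Void = { _ in }   // hypothetical network hook

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.isCollaborationEnabled = true   // stream anchor/feature data to peers
        session.delegate = self
        session.run(config)
    }

    // Outgoing: forward ARKit's collaboration packets to other devices.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        guard let encoded = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true) else { return }
        send(encoded)
    }

    // Incoming: merge a peer's packets so both devices share one anchor space.
    func receive(_ encoded: Data) {
        guard let data = try? NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARSession.CollaborationData.self, from: encoded) else { return }
        session.update(with: data)
    }
}
```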

A Short History of MR

  • 1960s: Sutherland/Sproull pioneer head-mounted displays.
  • 1992: Louis Rosenberg’s Virtual Fixtures demonstrates early MR assistive tech.
  • 1994: Milgram & Kishino define the “Reality–Virtuality Continuum,” naming Mixed Reality.
  • 2016–2019: HoloLens & HoloLens 2 make spatial MR practical for enterprise; Magic Leap debuts creator hardware.
  • 2020–2021: Depth APIs and remote-assist demand accelerate deployments; OpenXR rises.

Why MR Is More Than “Just AR Overlays”

MR ties content to your room with stable, persistent anchors, realistic lighting, and physics. That’s the leap from “a label floating on camera” to “a 3D valve hovering on the real machine that you can grab, disassemble, and reassemble step by step.”

What’s Next (from the 2021 Lens)

  • Better pass-through MR: Higher-res, low-latency camera pipelines on VR headsets blur the AR/VR hardware line.
  • Shared persistence: Multi-user anchors that stick around (days/months) across devices enable true “digital twins” of spaces.
  • Natural UX: Eye-gaze targeting, haptics, and hand-pose libraries become standard—less UI chrome, more direct manipulation.
  • Standards: Broad OpenXR adoption lowers friction.

Common Questions

Is MR “better” than VR? Different tools. VR maximizes immersion; MR maximizes context (training, design review, remote assist).

Do phones count as MR? ARKit/ARCore can deliver MR-like behaviors (anchors, occlusion), but head-worn MR wins on FOV, input, and comfort.

Do I need cloud services? For shared anchors, large meshes, or multi-user sessions—usually yes (e.g., Azure Spatial Anchors).

How We Update This Guide

Scope & sources: We verify definitions against primary documentation (Microsoft HoloLens, Azure Spatial Anchors, Khronos OpenXR) and hands-on testing in Unity/MRTK. We prioritize devices and workflows that shipped or were actively supported in 2020–2021.

Evaluation criteria: spatial mapping quality, input (hand/eye/voice), deployment friction (OpenXR/MRTK), and real-world traction (healthcare, AEC, field service).

Internal reading path: New to spatial computing? Start with What Is Virtual Reality? then compare with AR vs. VR before you plan an MR pilot.

Editor’s note (2021): This update corrects earlier definitions (MR ≠ “between VR and speech recognition”), adds current devices and workflows (MRTK, OpenXR, Spatial Anchors), and expands use-cases driven by remote work.