Smart playback in XR

SpatialFin is getting opinionated in the right places.

Audio language, subtitle fallback, series memory, voice parsing, AI chat, and diagnostics now work together so playback feels more adaptive and less manual.

Original Audio Preference · Ordered Spoken Languages · Per-Series Memory · Voice + AI Fallback
(Image: SpatialFin smart playback UI)
Less setup per episode: Make a correction once and SpatialFin can remember it for the whole series.

Smarter defaults: Prioritized spoken languages, subtitle fallback, and runtime diagnostics.
Languages

Built for multilingual libraries

Choose spoken languages in priority order, prefer original audio when appropriate, and let SpatialFin enable subtitles only when they are actually needed.

Memory

Manual fixes should compound

If SpatialFin guesses wrong for a series, your later audio or subtitle correction can be remembered across episodes instead of resetting every time.

Runtime

AI and voice are transparent

Voice telemetry, AICore visibility, and cloud fallback status make it obvious what path the headset is actually using.

How it works

Smart playback is a stack, not a checkbox.

SpatialFin now combines language preferences, track ranking, memory, and AI tooling so common playback decisions need fewer manual corrections.

Smart Audio And Subtitles

  • Ordered spoken languages: Add, search, and reorder the languages you understand, highest priority first.
  • OS-aware defaults: The first spoken language starts from the headset's OS language.
  • Original audio preference: SpatialFin can favor the likely original audio track instead of defaulting to the first dub.
  • Subtitle fallback: Subtitles are enabled automatically when the selected audio is not in one of your spoken languages.
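
The selection rules above can be sketched in a few lines. This is an illustrative sketch, not SpatialFin's actual code; the function names, the track dictionaries, and the `is_original` flag are all hypothetical.

```python
def pick_audio(tracks, spoken, prefer_original=True):
    """Pick an audio track.

    tracks: list of dicts with 'lang' and 'is_original' keys (hypothetical shape).
    spoken: languages the user understands, highest priority first.
    """
    original = next((t for t in tracks if t.get("is_original")), None)
    # Prefer the original audio, but only if the user understands it.
    if prefer_original and original and original["lang"] in spoken:
        return original
    # Otherwise walk the priority list and take the first matching track.
    for lang in spoken:
        for t in tracks:
            if t["lang"] == lang:
                return t
    # No spoken-language track available: fall back to the original (or first) audio.
    return original or tracks[0]

def needs_subtitles(audio, spoken):
    # Subtitle fallback: enable subtitles only when the chosen audio
    # is not in a language the user understands.
    return audio["lang"] not in spoken
```

With a Japanese original and an English dub, a viewer who lists only English gets the dub with no subtitles; a Japanese-only library for the same viewer gets the original track with subtitles enabled.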

Series-Level Memory

  • Audio memory: Switch one episode to a specific language and future episodes inherit it.
  • Subtitle memory: When you manually pick a better subtitle track, the exact variant is stored for the series.
  • Override priority: Manual user corrections always win and persist across the series.
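
The memory model behaves roughly like a per-series override table. A minimal sketch, assuming hypothetical names (`SeriesMemory`, `remember`, `resolve`); SpatialFin's actual persistence layer is not shown here.

```python
class SeriesMemory:
    """Per-series override store: manual corrections beat smart defaults."""

    def __init__(self):
        # series_id -> {"audio": language, "subtitle": variant}
        self._overrides = {}

    def remember(self, series_id, kind, value):
        # A manual user correction always wins: store it for the whole series.
        self._overrides.setdefault(series_id, {})[kind] = value

    def resolve(self, series_id, kind, default):
        # Prefer the remembered correction; otherwise use the smart default.
        return self._overrides.get(series_id, {}).get(kind, default)
```

Correcting episode 3 of a series to Japanese audio means episode 4 resolves to Japanese too, while other series keep their own defaults.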

Voice And AI

  • Deterministic first: Core playback commands are handled directly for reliability.
  • On-device path: Uses LiteRT Gemma or AICore / Gemini Nano where available for private, fast inference.
  • Cloud fallback: Optional Gemini API key support for devices where AICore is absent.
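
The order of those three bullets matters, and it can be sketched as a simple fallback chain. This is a hedged illustration, not SpatialFin's implementation: the command table, function names, and handler signatures are invented for the example.

```python
def handle_utterance(text, on_device=None, cloud=None):
    """Route a voice utterance: deterministic first, then on-device AI, then cloud.

    on_device / cloud are optional callables standing in for the LiteRT Gemma
    (or AICore) path and the Gemini API path; exact-match commands are a
    simplification of real command parsing.
    """
    COMMANDS = {"pause": "PAUSE", "play": "PLAY", "stop": "STOP"}
    word = text.strip().lower()
    # Core playback commands are handled directly for reliability.
    if word in COMMANDS:
        return ("deterministic", COMMANDS[word])
    # Anything else goes to the on-device model when one is available...
    if on_device is not None:
        return ("on-device", on_device(text))
    # ...and only then to the optional cloud fallback.
    if cloud is not None:
        return ("cloud", cloud(text))
    return ("unhandled", None)
```

The returned label is the kind of routing information the diagnostics described above would surface: which path actually handled the request.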

Current reality

Galaxy XR can use on-device Gemma or cloud fallback.

Current testing on Galaxy XR firmware shows that the system image does not expose com.google.android.aicore to third-party apps. To provide local AI features, SpatialFin downloads and runs the LiteRT Gemma model directly on-device. SpatialFin’s voice stack also supports deterministic handling and optional cloud fallback.
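
The runtime decision described above reduces to a short preference order. A minimal sketch under stated assumptions: the probe results are passed in as booleans rather than queried from Android, and all names are hypothetical.

```python
def choose_ai_runtime(aicore_available, litert_model_downloaded, gemini_key):
    """Pick the AI path for voice/chat features, best option first."""
    if aicore_available:
        # System AICore (Gemini Nano) is visible to the app.
        return "aicore"
    if litert_model_downloaded:
        # No AICore: run the downloaded LiteRT Gemma model on-device.
        return "litert-gemma"
    if gemini_key:
        # Optional cloud fallback via a user-supplied Gemini API key.
        return "cloud-gemini"
    # Deterministic voice commands still work without any model.
    return "deterministic-only"
```

On current Galaxy XR firmware, where `com.google.android.aicore` is not exposed, the first branch never fires and the app lands on the LiteRT path (or cloud, if configured).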