The goal of the 6D.ai SDK is to enhance the AR apps you build, helping them interact with the world in a natural and realistic way.
Current AR limits a user to a small area, where content is lost as soon as an app is closed.
Devices using the 6D platform will have a deeper understanding of the world around the user.
In short: create apps at room or house scale, with assets that go around and behind objects and that remember what a user did weeks or months ago.
How it works:
The 6D.ai API uses a standard built-in smartphone camera to build a real-time, three-dimensional, semantic, crowdsourced map of the world, all in the background. No depth camera is needed to capture the world in 3D.
Using the 3D mesh generated by the 6D.ai API, developers build world-scale apps where assets are persistent, responsive to occlusion, and synced between multiple users more efficiently.
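One way to picture how an app consumes a mesh that is built continuously in the background: the platform streams updated mesh "blocks", and the app keeps the latest version of each block so the world mesh grows incrementally. This is a minimal illustrative sketch; the class and method names are hypothetical and not part of the 6D.ai API.

```python
class MeshStore:
    """Accumulates streamed mesh updates into a growing world mesh.

    Illustrative only: the block/version scheme is an assumption,
    not the actual 6D.ai API.
    """

    def __init__(self):
        # block_id -> (version, vertices, faces)
        self.blocks = {}

    def apply_update(self, block_id, version, vertices, faces):
        """Keep only the newest version of each spatial block."""
        current = self.blocks.get(block_id)
        if current is None or version > current[0]:
            self.blocks[block_id] = (version, vertices, faces)

    def triangle_count(self):
        """Total triangles across all blocks of the world mesh."""
        return sum(len(faces) for _, _, faces in self.blocks.values())
```

Keying updates by block lets the mesh refine locally as the user moves, without re-sending the whole world every frame.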
Spatial computing apps that were only possible to build on expensive Head Mounted Displays are now possible on ARCore and ARKit smartphones.
3D spatial data provides the foundation for 6D.ai's neural networks to help developers' applications understand the world in 3D.
The 3D occlusion mesh is built 100% on-device (no cloud or offline processing) at 30 fps while the app is running, using only the iPhone camera. No depth cameras (such as a Structure Sensor) are involved, and there is no speed-up or post-processing of the video; this is a raw capture.
Our state-of-the-art relocalization technology ensures that AR content stays in place between sessions and, most importantly, is easy to find again when you start a new session nearby.
Thanks to our AR Cloud technologies, the location of that AR content is not limited to the host device, but available to any other device that has permission to access it.
This enables users to have an app session, close the app, and return to it weeks or months later with all the AR content just as they left it. It enables groups of users to asynchronously add digital content to a location over a long period of time. It can also dramatically decrease the pre-scan time for users who start an app session in a location that has already been visited by another user.
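The mechanics of persistence can be sketched with coordinate transforms: content is stored in the map's coordinate frame, and each session's relocalization yields a fresh map-to-world transform that puts the content back in place. The sketch below uses NumPy and hypothetical function names; it illustrates the idea, not the actual 6D.ai API.

```python
import numpy as np

def make_pose(yaw, tx, ty, tz):
    """4x4 rigid transform: rotation about the vertical axis plus translation."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]
    T[:3, 3] = [tx, ty, tz]
    return T

def save_anchor(p_world, T_world_from_map):
    """Store a content position in map coordinates so it survives across sessions."""
    p = np.append(p_world, 1.0)
    return (np.linalg.inv(T_world_from_map) @ p)[:3]

def restore_anchor(p_map, T_world_from_map):
    """After relocalization, recover the content's position in this session's frame."""
    return (T_world_from_map @ np.append(p_map, 1.0))[:3]
```

A second session gets a different world frame, but because the stored coordinates live in the shared map frame, applying that session's own relocalization transform places the content at the same physical spot.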
A real-time 3D mesh of the world means that AR content can be blocked from view, or occluded, by real-world objects. The effect is hard to quantify, but its impact is undeniable.
A digital kitten can hide behind a real-world box; virtual beads can be placed into a real-world bowl; an AR puppy can run under and around a real-world table.
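Conceptually, occlusion with a world mesh reduces to a per-pixel depth comparison: render the real-world mesh into a depth buffer, then show virtual content only where it is nearer to the camera than the mesh. A minimal sketch, with NumPy arrays standing in for GPU depth buffers (all names are illustrative, not part of the 6D.ai API):

```python
import numpy as np

def composite_with_occlusion(camera_rgb, virtual_rgb, mesh_depth, virtual_depth):
    """Draw a virtual pixel only where it is closer than the real-world mesh.

    camera_rgb, virtual_rgb: (H, W, 3) color images.
    mesh_depth, virtual_depth: (H, W) per-pixel distances from the camera.
    """
    visible = virtual_depth < mesh_depth  # True where virtual content is in front
    out = camera_rgb.copy()
    out[visible] = virtual_rgb[visible]   # occluded pixels keep the camera image
    return out
```

In a real renderer this comparison happens in the GPU depth test, with the occlusion mesh drawn depth-only, but the logic is the same.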
Sharing an AR experience with someone else can be great when it works, but getting it to work is usually difficult. Users have to carefully hold their device in the right spot and wait, hoping for SLAM synchronization to occur. Finding the same session often involves typing in room IDs and multiple connection attempts.
6D.ai solves this problem with almost instant joining from any position. Player 1 starts a session, automatically uploading a map of the location. Player 2 opens the app, and quickly joins as soon as background processes successfully relocalize. From the user’s point of view, it just works.
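The join flow described above can be sketched as a small state machine: the host publishes a map, and a guest joins the moment a background relocalization attempt against that map succeeds. The names below are hypothetical, chosen only to illustrate the flow; this is not the actual 6D.ai API.

```python
class SharedSession:
    """Illustrative model of map-based session joining (not the 6D.ai API)."""

    def __init__(self):
        self.map_published = False
        self.joined = set()

    def host_start(self, host_id):
        """Host starts a session; the map upload happens in the background."""
        self.map_published = True
        self.joined.add(host_id)

    def guest_try_join(self, guest_id, relocalized):
        """Called each frame; the guest joins once relocalization succeeds."""
        if self.map_published and relocalized:
            self.joined.add(guest_id)
        return guest_id in self.joined
```

Because joining is gated only on a background relocalization result rather than on manual alignment, the guest can start from any position, which is what makes it "just work" from the user's point of view.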
Any device will be able to see the same experience at the same time, to the best of its hardware capabilities.
We aim to make it easier for developers to ship their AR content to multiple smartphone platforms. This also provides an opportunity for developers that have made apps for AR HMDs or depth cameras to make their content available to a wider audience.