General overview


What is 6D.ai?

6D.ai is an AR platform for developers that rapidly captures depth information from the real world to build a dense 3D mesh. Using the 3D mesh captured by the software, developers can build easy-to-use AR apps where assets are persistent, responsive to occlusion, and synced efficiently between multiple users on different platforms. Most processing happens on-device in real time; this feature set is extended with cloud capabilities, allowing the device to handle world-scale data from the AR Cloud.

What is the SDK Capable of?

The SDK uses a standard built-in smartphone camera to build a real-time, three-dimensional, semantic, crowdsourced map of the world, all in the background. No depth camera is needed to capture the world in 3D.
As of now, this core technology enables several features in our beta SDK:

  • Content Persistence - objects stay in the world where you left them, across app sessions and devices

  • 'No-click' Multiplayer - devices recognize other devices without needing complicated calibration or syncing procedures

  • Occlusion Mesh - assets can be hidden by real-world objects, and peek out from behind them

  • Physics Mesh - assets can interact physically with the real world (bounce, collide, etc.). Because our model is persistent (not calculated per-frame using depth-from-CNN estimation), assets will correctly interact physically with objects the camera has previously seen but cannot see right now.
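The occlusion behavior above boils down to a per-pixel depth comparison: the real-world mesh is rendered into a depth buffer, and a virtual asset only shows where it is closer to the camera than the mesh. A minimal illustrative sketch (this is not 6D SDK code, just the underlying idea):

```python
# Illustrative per-pixel occlusion test (not actual SDK code). The real-world
# mesh is rendered into a depth buffer; a virtual asset's pixel is drawn only
# where the asset is closer to the camera than the mesh.
def occluded(asset_depth, mesh_depth):
    """Return True if the virtual asset is hidden at this pixel."""
    return asset_depth > mesh_depth

def composite(asset_depths, mesh_depths, asset_color, background):
    """Per-pixel composite: show the asset only where it passes the depth test."""
    return [background if occluded(a, m) else asset_color
            for a, m in zip(asset_depths, mesh_depths)]
```

A game engine does the same comparison on the GPU, which is why rendering the captured mesh into the depth buffer (without drawing its color) is enough to make real-world objects hide virtual ones.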

Essentially, spatial computing apps that were previously only possible to build on expensive head-mounted displays will now be possible on ARCore and ARKit smartphones. In the future, the 3D spatial data captured by 6D.ai will provide the foundation for neural networks that help developers’ applications understand the world in 3D. We will also be implementing cloud services that drastically increase capture speed and size.

What do I need to get started with the 6D SDK?

The first step is to sign up for the beta, which will get you squared away with login credentials for the 6D developer dashboard.
The dashboard is where you can download the latest SDK release, retrieve API keys, and find documentation.
You should also ensure that your development environment meets our requirements:

  • iOS 11.4+ (iOS 12 beta 3 ok)
  • Xcode 9.4.1+ with ARKit 1.5+ support (10.0 beta 3 ok)
  • Unity3D 2018.2+
  • iPhone 8, iPhone 8 Plus, or iPhone X (these work with ALL Sample App scenes)
    (iPhone 7 & 7 Plus will work with Basic Sample scene & Basic Drawing scene, but do not currently support meshing so will NOT work with the Meshing or Ball Pit scenes)

With the hardware and software listed above, you will be ready to build apps using the 6D SDK!

When will I be able to ship a product that uses this SDK?


I have feedback and questions for the 6D team. Where do I go?



Platform Support


What platforms does 6D support?

Our beta program starts with support for the iPhone X and iPhone 8. We will add support for leading ARCore devices shortly thereafter, and expand to support all ARKit and ARCore devices at general public launch. 6D.ai features are cross-platform across ARKit and ARCore.
HMD devices and other software ecosystems will be supported over time.

Will the SDK work for iPad?


What about some of the older ARKit devices, like the iPhone 6S or older iPads?


When can we expect Android support?


Will the SDK support AR HMDs like Hololens or Magic Leap?


Does the SDK work with other game engines, like Unreal?


When will you support other platforms like WebXR, Unreal, Microsoft, Magic Leap?

We already have customers and partners requesting all of the above (and more). We wish we could build everything in parallel, but we're a seed-stage company with limited resources and have to prioritize. Our beta program will support a C++ / C# API with a sample Unity app, for both iOS and Android. After that, it's up to you to tell us what's most exciting. There is a lot of interest in supporting WebXR before the other platforms, but we haven't made any investments in porting (yet).

6D SDK and Native Apps


Can I build a native iOS app using the 6D SDK?


Can I use ARKit specific methods simultaneously with the 6D SDK features?


Can I use the 6D SDK in combination with SceneKit?




What makes the 6D multiplayer feature different?

No-friction UX

I've noticed that virtual objects may not be in the EXACT same place for each user. What is up with that?

At this time, there will be some expected variance in object position between players. The amount of error in the position is normally related to the amount of area that has been scanned and saved. Relocalization improves as more feature points are captured. Depending on environmental factors, variance can range from 3cm to 30cm.
We have some updates on the way to address this issue. Updates to our neural networks are incoming, which will improve relocalization speed and accuracy. We also have some cloud features on the way to automatically save, load, and merge meshes in the background. These changes are expected to bring the variance down to 2cm - 10cm.
If you are seeing huge variance consistently, there may be a network issue. Currently, all devices need to be successfully connected to the internet on the same network.




What am I able to do with the scanned mesh?


Is there a way to adjust the density of the mesh?

Mesh density is not adjustable at the moment. The SDK runs a layer of triangle decimation before passing the mesh to the app, but decimation is performed per block, and our blocks are fairly small. We recognize that developers may have different needs within the apps they build, and will revisit this in the near future.

What is the size limit of the captured mesh?

A phone’s memory can store the three-dimensional geometry of around 100 square meters of walking distance, while on-device storage can hold city blocks of information. Each captured 3D block of space (a scene) is saved to disk or the cloud as you move around, and is loaded dynamically as required, similar to the way Google Maps handles tiled sections of a 2D map.
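The block-based loading described above can be sketched as a simple tile cache. The 10 m block size, the 2D keying, and the load/save callbacks below are illustrative assumptions, not the actual 6D.ai interface:

```python
# Sketch of tiled spatial storage: world positions map to fixed-size block
# keys, and only blocks near the user stay resident in memory. The 10 m block
# size and eviction policy are illustrative assumptions, not SDK values.
BLOCK_SIZE_M = 10.0

def block_key(x, z):
    """Map a world-space position (meters) to the key of its containing block."""
    return (int(x // BLOCK_SIZE_M), int(z // BLOCK_SIZE_M))

def blocks_to_load(x, z, radius_blocks=1):
    """Return the set of block keys within `radius_blocks` of the user."""
    cx, cz = block_key(x, z)
    return {(cx + dx, cz + dz)
            for dx in range(-radius_blocks, radius_blocks + 1)
            for dz in range(-radius_blocks, radius_blocks + 1)}

class BlockCache:
    """Keeps nearby mesh blocks in memory; evicts the rest to disk or cloud."""
    def __init__(self, load_fn, save_fn):
        self.resident = {}          # key -> mesh data currently in memory
        self.load_fn = load_fn      # fetch a block from disk or the cloud
        self.save_fn = save_fn      # persist a block before eviction

    def update(self, x, z):
        wanted = blocks_to_load(x, z)
        for key in list(self.resident):
            if key not in wanted:   # user moved away: persist and evict
                self.save_fn(key, self.resident.pop(key))
        for key in wanted - self.resident.keys():
            self.resident[key] = self.load_fn(key)
```

Calling `update()` as the user moves keeps memory bounded regardless of how large the overall map grows, which is the same property that lets 2D map apps tile the whole world.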

Does 6D just capture the raw mesh, or does it get textures too?

6D.ai captures a dense mesh model of the world based on the depth of every pixel the camera sees. We have successfully applied textures to the mesh; however, the use case we are supporting (AR) only needs a transparent mesh. We felt that the work required to build a system whose texturing quality was high enough to compete with dedicated scanning apps was a lower priority than supporting AR apps. We're always willing to admit we read the market wrong, and if this is an important feature for you, please educate us by reaching out with details.


Does the mesh need to be pre-scanned? 

We have technology that allows the mesh to build in real time, seamlessly syncing meshes across multiple devices (also in real time) while other applications run in the foreground. However, the load this activity places on the GPU and network makes it unsuitable for even today's fastest mobile devices. While we know our mesh will build in real time over very large areas on a current top-of-the-range phone, we are still working to tune the system to play nicely with rendering engines. We expect our initial release of meshing to require a dedicated pre-scan phase, and to eventually become a background task. We will of course give developers complete control over when meshing runs, and if you have a lightweight application, meshing may work just fine for you as a background task.


Is the mesh metric scale? 

Yes. We take scale from the underlying tracker, so it is as accurate as ARKit or ARCore.


Is network access required for 6D.AI to run?

No. The system will work fine (though at limited scale) without a network. All our processing is done locally, on-device. The cloud is only used for data storage, stitching data from multiple sessions, multiplayer, and cross-platform adaptations.


Do you have any example of a generated Mesh that I can download?

We do not have examples of 3D meshes captured via the API available for download at this time. Stay tuned; we want to provide developers with all the resources they need to design, prototype, and build their vision.


AR Cloud


What is the AR Cloud?

AR systems need an operating system that lives partially on-device and partially in the cloud. Network/cloud data services are as critical to AR apps as the network is to making mobile phone calls. Think back to before smartphones… your old Nokia mobile phone without the network could still be a calculator, and you could play Snake, but its usefulness was pretty limited. The network and AR Cloud are going to be just as essential to AR apps. We believe we will come to view today’s simple tabletop ARKit/ARCore apps as the equivalent of offline “Nokia Snake” versus a network-connected phone.

How does the cloud implementation work?

Each captured 3D block of space (a scene) is saved to disk or the cloud as you move around. This means that as more and more devices running 6D.ai's engine hit the streets, a web of phones will begin building up a cloud map of the world's ground-level three-dimensional data, in machine-readable form. As other users open apps in areas that have previously been mapped, 6D.ai will download that information and allow the new user to further refine the precision of the 3D model, while pushing 3D interactions further into the distance than their device can sense.

How long will spatial anchors/maps be saved in the cloud?


How big are the maps being uploaded to and downloaded from the cloud?

Roughly 100 KB for a 10 m x 10 m area, though that does not cover the whole area; actual coverage is more like 10 m x 2.5 m (a sidewalk plus a little wall and street).
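Working from those figures (order-of-magnitude numbers, not measured guarantees), a back-of-envelope estimate of map size per covered area:

```python
# Back-of-envelope map-size estimate from the figures above. These are
# assumed, order-of-magnitude numbers: ~100 KB covers roughly a 10 m x 2.5 m
# strip of the world.
MAP_BYTES = 100 * 1024        # ~100 KB per map upload
COVERED_AREA_M2 = 10 * 2.5    # ~25 square meters actually covered

BYTES_PER_M2 = MAP_BYTES / COVERED_AREA_M2  # ~4 KB per square meter

def estimate_map_kb(area_m2):
    """Estimate map size in KB for a given covered area in square meters."""
    return area_m2 * BYTES_PER_M2 / 1024

# At this density, a fully covered 10 m x 10 m room would be roughly 400 KB.
```

So even a fully scanned room stays well under a megabyte at this density, which is why tiled uploads and downloads are practical over a mobile connection.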




What is the licensing model?

We do not have plans to charge any licensing fees while the software is in beta. We will always offer a free tier for developers to experiment, and eventually intend to charge per transaction in a similar manner to Stripe, AWS or Twilio.

How much will it cost to use the 6D SDK?

We are not setting our pricing until we have enough usage data to understand the value our APIs actually bring to developers. You *need* to succeed for us to succeed. Our pricing model will be similar to Amazon Web Services: pay a small amount for each API call (per KB, CPU, etc.). We will provide a generous free tier, and bundled packages for various common use cases. While we learn, today's API set will remain free to developers, though we will implement a no-abuse/extreme-use clause so that our own hosting costs don't bankrupt us if there is a massive hit on our platform. For developers and partners who are confident of high uptake (say 1M+ users), contact us and we can work out a flat-rate unlimited-use model for the first year.

Can I pay 6D to host my data?


Other Stuff


What's Next for the 6D Platform?

After the beta program, we will be working on automatically segmenting meshes so that the system can identify different 3D real-world objects and pass that knowledge back to developers. This means roads and sidewalks can be distinguished outdoors, and chairs and walls recognized indoors.

Are you hiring?

Absolutely! Find out more about the team and our open positions.