Overview

 

What is 6D.ai?

6D.ai is an AR platform for developers that rapidly captures depth information from the real world to build a dense 3D mesh. Using the 3D mesh captured by the 6D.ai software, developers can build easy-to-use AR apps in which assets are persistent, responsive to occlusion, and efficiently synced between multiple users on different platforms. Most processing happens on-device in real time; this feature set is extended with cloud capabilities that let the device handle world-scale data from the AR Cloud.
 

How does it work?

The 6D.ai API uses a standard built-in smartphone camera to build a real-time, three-dimensional, semantic, crowd-sourced map of the world, all in the background. No depth camera is needed to capture the world in 3D.
Using the 3D mesh generated by the 6D.ai API, developers build world-scale apps with a powerful feature set:

  • Persistence - objects stay in the world where you left them

  • Multiplayer - devices recognize other devices without needing complicated calibration or syncing procedures

  • Occlusion - assets can be hidden by real-world objects, and peek out from behind them

  • Physics - assets can interact physically with the real world (bounce, collide, etc.). Because our model is persistent (not calculated per-frame from a depth-from-CNN estimate), assets will correctly interact physically with previously seen objects even when the camera can't see them right now (see the sketch after this list).

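As a rough illustration of the physics point above, here is a minimal Unity C# sketch. It assumes your integration already exposes the captured mesh as vertex and triangle arrays; the OnMeshUpdated callback below is a hypothetical placeholder rather than a 6D.ai API, and everything else is standard Unity physics.

    using UnityEngine;

    // Minimal sketch: wrap the captured environment mesh in a MeshCollider so
    // standard Unity physics objects can bounce off real-world geometry.
    // OnMeshUpdated is a hypothetical hook for however your integration
    // delivers mesh data; it is not a 6D.ai API.
    public class EnvironmentPhysics : MonoBehaviour
    {
        MeshCollider environmentCollider;

        void Start()
        {
            environmentCollider = gameObject.AddComponent<MeshCollider>();
        }

        // Call whenever the captured mesh is updated.
        public void OnMeshUpdated(Vector3[] vertices, int[] triangles)
        {
            var mesh = new Mesh();
            // For very large meshes, set mesh.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32.
            mesh.vertices = vertices;
            mesh.triangles = triangles;
            mesh.RecalculateNormals();
            environmentCollider.sharedMesh = mesh;   // physics now collides with the real world
        }

        // Example: drop a bouncing ball that lands on whatever surface the mesh describes.
        public void DropBall(Vector3 position)
        {
            var ball = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            ball.transform.position = position;
            ball.transform.localScale = Vector3.one * 0.1f;
            ball.AddComponent<Rigidbody>();          // gravity + collisions against the MeshCollider
        }
    }

Because the collider is built from the persistent mesh rather than a per-frame depth estimate, the ball will also come to rest on surfaces that were scanned earlier but are currently outside the camera's view.
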
Essentially, spatial computing apps that could previously only be built on expensive head-mounted displays will now be possible on ARCore and ARKit smartphones. In the future, the 3D spatial data captured by 6D.ai will provide the foundation for 6D.ai neural networks that help developers' applications understand the world in 3D. We will also be implementing cloud services that drastically increase capture speed and size.
 

What can I do with it?

The use cases of such a technology extend far beyond phone-based AR games. We're seeing interest from enterprises, artists, musicians, OEMs, tool builders, drone manufacturers, and robotics companies, to name a few. Having a 3D mesh at your disposal, compared to a few sparse anchor points and planes, makes a huge difference in realism, interaction, and usability.
In short, the goal is to ensure that anything a developer can do in a high-end AR HMD system can be done on a smartphone.
 

What platforms does 6D support?

Our SDK currently runs on iOS devices, with Android beta signup underway for true cross-platform multiplayer support. Note that all data collection, processing, and output is handled directly on the device, without requiring any external services or cloud computing! We only use the 6D AR Cloud to store your maps and return them later, to keep your coordinate system consistent for that location across user sessions and to expand maps for a richer 3D experience. HMD devices and other software ecosystems will be supported over time.

See a full list of supported iOS devices here.

Sign up for Android beta here.

When will you support WebXR, Unreal, Microsoft, Magic Leap?

We have customers and partners requesting all of the above (and more), and we are working with a few leading headset companies whose names we will share publicly when ready. In August 2019, we announced we are working on supporting Nreal headsets through our partnership with Qualcomm. Our beta program supports a C++ / C# API with a sample Unity app, for both iOS and Android. Please continue to share what is most exciting to you. There is a lot of interest in supporting WebXR before the other platforms, but we haven't made any investments in porting (yet).

 

What is your pricing model?

Any app that ships in 2019 will receive three years of free service. On January 1, 2020, we will begin billing using a transaction-based model that counts the number of times your app pulls down anonymized SLAM data from the 6D AR Cloud. We have built the model to allow for a large range of usage under the free tier and created a month-by-month billing cycle so that you only pay for what you use, when you use it.

Mesh Capture

 

What is the size limit of the captured mesh?

Your phone's memory can hold the three-dimensional geometry of roughly 100 square meters of walking area, while on-device storage can hold city blocks' worth of information. Each captured 3D block of space (a scene) is saved to disk or the cloud as you move around and is loaded dynamically as required, similar to the way Google Maps handles tiled sections of a 2D map.
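
For intuition, here is a hedged sketch of the tiling idea described above, not 6D.ai's actual implementation: mesh blocks are keyed by a grid cell, only the cells near the device are kept in memory, and everything else stays on disk until needed. The block size and the 3x3 neighborhood are arbitrary assumptions.

    using System.Collections.Generic;
    using UnityEngine;

    // Illustrative sketch of dynamic tile loading: keep only nearby mesh blocks
    // in memory, keyed by a grid cell, and load/unload them from disk as the
    // user moves around. All sizes and names here are assumptions.
    public class SceneBlockCache
    {
        const float BlockSize = 10f;                       // assumed block edge length in meters
        readonly Dictionary<Vector2Int, Mesh> loaded = new Dictionary<Vector2Int, Mesh>();

        static Vector2Int CellFor(Vector3 worldPosition)
        {
            return new Vector2Int(
                Mathf.FloorToInt(worldPosition.x / BlockSize),
                Mathf.FloorToInt(worldPosition.z / BlockSize));
        }

        // Keep the 3x3 neighborhood of blocks around the device in memory.
        public void Refresh(Vector3 devicePosition)
        {
            var center = CellFor(devicePosition);
            var wanted = new HashSet<Vector2Int>();
            for (int dx = -1; dx <= 1; dx++)
                for (int dz = -1; dz <= 1; dz++)
                    wanted.Add(new Vector2Int(center.x + dx, center.y + dz));

            foreach (var cell in wanted)
                if (!loaded.ContainsKey(cell))
                    loaded[cell] = LoadBlockFromDisk(cell);  // placeholder disk/cloud fetch

            var stale = new List<Vector2Int>();
            foreach (var cell in loaded.Keys)
                if (!wanted.Contains(cell))
                    stale.Add(cell);
            foreach (var cell in stale)
                loaded.Remove(cell);                         // free memory for far-away blocks
        }

        Mesh LoadBlockFromDisk(Vector2Int cell)
        {
            // Placeholder: deserialize the saved mesh block for this cell.
            return new Mesh();
        }
    }

The real system persists scenes rather than raw grid cells, but the memory-versus-storage split works the same way: only the geometry near the user needs to be resident.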
 

Does 6D.ai just capture the raw mesh, or does it get textures too?

6D.ai captures a dense mesh model of the world based on the depth of every pixel that the camera sees. We have successfully applied textures to the mesh for applications that would like to see the textures while meshing. You can test out the mesh and texturing by registering for 6D.ai and downloading our TestFlight app. You will find the instructions here.

 

Does the mesh need to be pre-scanned? 

We have technology that allows the mesh to build in real time, seamlessly syncing meshes across multiple devices, also in real time, while other applications run in the foreground. However, the load this activity places on the GPU and network makes it unsuitable for even today's fastest mobile devices. While we know our mesh will build in real time over very large areas on a current top-of-the-range phone, we are still working to tune the system to play nicely with rendering engines. We expect our initial release of meshing to require a dedicated pre-scan phase and eventually become a background task. We will of course give developers complete control over when meshing runs, and if you have a lightweight application, meshing may work just fine for you as a background task.
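
If your app does use a dedicated pre-scan phase, the flow can be as simple as the sketch below. StartMeshing, StopMeshing, and the coverage tracking are hypothetical placeholders for whatever controls the SDK ends up exposing; the point is only that meshing can be confined to a scan step and switched off before the main experience starts.

    using System.Collections;
    using UnityEngine;

    // Hedged sketch of a dedicated pre-scan phase. StartMeshing, StopMeshing,
    // and OnMeshGrew are hypothetical placeholders for the SDK's actual
    // controls and progress reporting.
    public class PreScanPhase : MonoBehaviour
    {
        public float requiredAreaSquareMeters = 25f;  // assumed coverage target
        float meshedAreaSquareMeters;                 // updated by a mesh callback in a real integration

        IEnumerator Start()
        {
            StartMeshing();                           // hypothetical: begin building the mesh
            while (meshedAreaSquareMeters < requiredAreaSquareMeters)
            {
                // Show scan-progress UI here while the user walks the space.
                yield return null;
            }
            StopMeshing();                            // hypothetical: stop meshing before gameplay
            BeginExperience();
        }

        // Called by the meshing integration as new geometry is captured (hypothetical).
        public void OnMeshGrew(float totalAreaSquareMeters)
        {
            meshedAreaSquareMeters = totalAreaSquareMeters;
        }

        void StartMeshing() { /* placeholder */ }
        void StopMeshing() { /* placeholder */ }
        void BeginExperience() { /* start the app's main AR content */ }
    }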

 

Is the mesh metric scale? 

Yes. We take scale from the underlying tracker, so it is as accurate as ARKit or ARCore.

 

Is network access required for 6D.ai to run?

No. The system will work fine (though at limited scale) without a network. All our processing is done locally on-device. The cloud is only used for data storage, stitching data from multiple sessions, multiplayer, and cross-platform adaptations.

 

Do you have any example of a generated Mesh that I can download?

You can register for 6D.ai and download the TestFlight app by following the instructions on our developer dashboard here.
 

Multiplayer

 

How is 6D.ai handling multiplayer?

What makes multiplayer AR gaming so difficult is that both phones generally need to see the world from the same vantage point in order to sync up, meaning you literally need to put a phone next to another user’s to sync your maps before joining a game or sharing an experience.


The additional dense 3D data and neural networks (AI) that the 6D.ai software applies make AR multiplayer much easier to handle. Currently, 6D.ai allows users to sync up easily over a wide range of relative positions between the players. Soon users won't have to deal with this repositioning at all and will be able to complete the process from any angle, at any distance.
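
Conceptually, once two devices have localized against the same map, each holds a transform from its own tracking frame to the shared map frame, and content is exchanged in map coordinates. The sketch below shows that frame conversion in Unity C#; the localToMapTransform matrix is a stand-in for whatever the SDK reports after relocalization, not a specific 6D.ai API.

    using UnityEngine;

    // Conceptual sketch of shared-frame alignment: poses are converted into map
    // coordinates before being shared, and back into the local tracking frame
    // when received. localToMapTransform is a hypothetical stand-in for the
    // transform a device obtains after relocalizing against the shared map.
    public static class SharedFrame
    {
        // Convert a pose in this device's tracking frame into map coordinates
        // before sending it to other players.
        public static Matrix4x4 LocalToMap(Matrix4x4 localToMapTransform, Vector3 position, Quaternion rotation)
        {
            return localToMapTransform * Matrix4x4.TRS(position, rotation, Vector3.one);
        }

        // Convert a pose received in map coordinates back into this device's
        // tracking frame so it renders at the right physical spot.
        public static void MapToLocal(Matrix4x4 localToMapTransform, Matrix4x4 poseInMap,
                                      out Vector3 position, out Quaternion rotation)
        {
            Matrix4x4 poseInLocal = localToMapTransform.inverse * poseInMap;
            position = poseInLocal.GetColumn(3);
            rotation = Quaternion.LookRotation(poseInLocal.GetColumn(2), poseInLocal.GetColumn(1));
        }
    }

In this framing, placing a shared object means storing its map-frame pose once; every localized device can then render it at the same physical spot.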

6D.ai is the first software platform to connect users across AR and VR devices. See our latest news on this topic at https://www.6d.ai/newsroom.
 

AR Cloud

 

What is the AR Cloud?

AR systems need an operating system that lives partly on the device and partly in the cloud. Network and cloud data services are as critical to AR apps as the network is to making mobile phone calls. Think back to before smartphones: your old Nokia mobile phone without the network could still be a calculator, and you could play Snake, but its usefulness was pretty limited. The network and AR Cloud are going to be just as essential to AR apps. We believe we will come to view today's simple tabletop ARKit/ARCore apps as the equivalent of offline "Nokia Snake" versus a network-connected phone.

How does the cloud implementation work?

Each captured 3D block of space (a scene) is saved to disk or the cloud as you move around. This means that as more and more devices running 6D.ai's engine hit the streets, a web of phones will begin building up a cloud map of the world's ground-level three-dimensional data, in machine-readable form. As other users open apps in areas that have previously been mapped, 6D.ai will download that information, allowing the new user to further refine the precision of the 3D model while pushing 3D interactions further into the distance than their device can sense.
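
Here is a hedged sketch of the client-side flow this describes, with made-up types (CloudClient, SceneBlock, LocationHint) standing in for the real service: at session start, scenes previously captured near the user's location are downloaded and merged into the local map; at session end, newly captured scenes are uploaded to extend the shared map.

    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Illustrative client-side sync flow. CloudClient, SceneBlock, and
    // LocationHint are placeholder types, not the 6D.ai service API.
    public class CloudSync
    {
        readonly CloudClient cloud;
        readonly List<SceneBlock> localScenes = new List<SceneBlock>();

        public CloudSync(CloudClient cloud) { this.cloud = cloud; }

        public async Task StartSessionAsync(LocationHint coarseLocation)
        {
            // Previously mapped scenes near this location seed the coordinate
            // system so content persists across sessions and users.
            IReadOnlyList<SceneBlock> existing = await cloud.DownloadScenesNearAsync(coarseLocation);
            localScenes.AddRange(existing);
        }

        public async Task EndSessionAsync()
        {
            // Newly captured scenes extend and refine the shared map for future sessions.
            foreach (SceneBlock scene in localScenes)
                if (scene.IsNewOrUpdated)
                    await cloud.UploadSceneAsync(scene);
        }
    }

    // Placeholder types so the sketch is self-contained.
    public class LocationHint { }
    public class SceneBlock { public bool IsNewOrUpdated; }
    public class CloudClient
    {
        public Task<IReadOnlyList<SceneBlock>> DownloadScenesNearAsync(LocationHint location)
            => Task.FromResult<IReadOnlyList<SceneBlock>>(new List<SceneBlock>());
        public Task UploadSceneAsync(SceneBlock scene) => Task.CompletedTask;
    }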
 

Other Stuff

 

What is the pricing model?

Please visit our pricing page to see the billing model that will kick off on January 1, 2020.

In August 2019, we announced our commercial availability and rolled out our transactional pricing model so that app developers can start publicly shipping applications immediately. The pricing model was built with two of the company's values in mind:

1) The first is giving developers the ability to experiment and ship apps that push the limits of what is possible in spatial computing, AR, robotics, and location-based applications. As part of this goal, we created a free tier to allow for innovation and chose to provide the meshing capabilities at no charge to help move the market forward. We also announced that any app that ships on our platform by December 31, 2019 will enjoy free use of the SDK for three years.

2) The second is our focus on data privacy. With this transaction-based pricing model, we are committed to charging for the value we directly provide to your application rather than building a business model on advertising.
 

What is your stance on data privacy?

We take privacy very seriously and constantly work to build our product and business model with data privacy in mind. We are putting a lot of thought into how data will be gathered, accessed, and stored through 6D.ai. We also expect to learn quite a bit from what developers build - certain use cases will almost certainly have unique approaches to privacy and security. Currently, the mesh captured by 6D is texture-less and not photo-realistic, providing some degree of obfuscation. We have a number of technical programs of work underway (some common sense, some quite innovative) to responsibly manage 3D data. Many of these social issues have been solved for regular 2D photography (e.g. Flickr, Google Photos), and we expect to listen and learn from our stakeholders to figure out the new aspects to address.
 

What's Next for the 6D.ai Platform?

We are working with more distributors like Qualcomm and cross-platform opportunities like Nreal, and we aim to build a robust platform for self-service developers to ship applications. We are also working on automatically segmenting meshes so that the system can identify different 3D real-world objects and pass that knowledge back to developers. This means roads and sidewalks can be distinguished outdoors, and chairs and walls recognized indoors.
 

Are you hiring?

Absolutely! Find out more about the team and our open positions.