ARCore Cloud Anchor Testing

Introduction

I recently decided to test drive Google’s ARCore Extensions for the Unity game engine to find out how Cloud Anchors work. Cloud Anchors allow AR (Augmented Reality) apps to create virtual objects within a physical space and persist them across multiple user sessions. This means that different users should be able to see and interact with the same virtual objects at the same time from an AR app.

Test Objective

If I could confirm that ARCore and Cloud Anchors meet my app idea’s technical requirements, I would be able to experiment with embedding the AR app (built with Unity) within a cross-platform Flutter app. See my previous post, where I set out my arguments for why I would want to use an embedded approach to AR design.

Background

It’s been 5 months since I worked on the first prototype for this idea. The original prototype was built in Unity and used Vuforia for the AR. So, I decided to open up that project and update Unity to a recent version. I quickly discovered that much has changed since my previous work on this project. Vuforia is no longer an integral part of Unity, and all the XR SDKs have changed to a new extensions format. This is all recent news, and it seems to have even caught the Vuforia team by surprise.

On the positive side, Google’s Cloud Anchor functionality, which enables an app to persist AR objects between user sessions, is now available to developers; this is a feature I’d been anticipating for a while. However, a slightly pessimistic point to mention is that these persistent anchors are limited to just 24 hours. After that time the AR objects will vanish. This is not great news if you’re hoping to sell virtual items in AR space. But… on a more optimistic note, Google are considering permanent cloud anchors sometime in the near future.

Anyway, I closed my old (5 months!) Unity project and decided to test drive ARCore with Cloud Anchors to see how well it works and to identify which, if any, further limitations might hinder my app design.

ARCore Cloud Anchor Setup

To get started, I followed a short ARCore Cloud Anchors course on Google Codelabs. This helped me to build a barebones AR cloud anchor app. To my pleasure, everything worked on the first build. The instructions provided within the Codelabs course were extremely clear and helpful.
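
For context, the Cloud Anchor feature itself is switched on through a single session configuration setting. The Codelab handles this for you (my test app does it through the Unity ARCore Extensions), but the equivalent call in ARCore’s Android (Java) API, documented in the Cloud Anchors overview linked in the references, looks roughly like this. It’s a minimal sketch only, assuming an ARCore Session has already been created and resumed elsewhere:

    import com.google.ar.core.Config;
    import com.google.ar.core.Session;

    // Minimal sketch: enable Cloud Anchors on an existing ARCore session.
    // The Session is assumed to have been created and resumed elsewhere.
    void enableCloudAnchors(Session session) {
        Config config = new Config(session);
        config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
        session.configure(config);
    }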

Once deployed to my phone (a Google Pixel 3 XL), I could create AR objects in the physical space of my room from within my test app. I could then close and reopen the app, which wiped my previously created objects, before calling Google’s Cloud Anchor API to reinstate them.

The process worked correctly; despite closing and restarting my app, objects would reappear in their initial positions. Well, actually, I found that the objects would move about quite a bit unless the device was positioned and pointed at the exact same place where the anchor was first created. In fact, with or without cloud anchors, I generally found my AR test experiences to be quite jittery.

It would be totally unfair to compare my barebones test app’s AR experience with the kind of equipment being developed for the medical and manufacturing industries. That said, a 2019 study concludes that while there is growing commercial interest in AR for high-precision manual tasks, attention should be paid to the limitations of the available technology.

ARCore Testing Feedback

I was pleasantly surprised by the ease of setup. The documentation and CodeLabs course were very clear and helpful. However, there are still serious limitations. For example, cloud anchors only last 24 hours before they are deleted. But, Google are working towards allowing permanent cloud anchors in the future.

First, before an object anchor can be persisted to the cloud, ARCore needs to gather information about the scene; this requires 30 seconds of data gathering. ARCore uses a technique similar to photogrammetry, in which large numbers of points of interest are identified within the scene. These points of interest are mapped across different perspectives as the user moves about the scene, and from them ARCore constructs a 3D model of the scene made up of horizontal and vertical planes.
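
To give a feel for what that mapping produces, ARCore exposes the detected surfaces as Plane trackables. Here is a rough sketch using the Android (Java) API (the Unity ARCore Extensions surface the same planes through AR Foundation); the helper name is just illustrative:

    import com.google.ar.core.Plane;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;

    // Sketch: list the horizontal and vertical planes ARCore has mapped so far.
    void logDetectedPlanes(Session session) {
        for (Plane plane : session.getAllTrackables(Plane.class)) {
            if (plane.getTrackingState() == TrackingState.TRACKING) {
                // Plane.Type distinguishes HORIZONTAL_UPWARD_FACING (floor, tabletop),
                // HORIZONTAL_DOWNWARD_FACING (ceiling) and VERTICAL (wall) surfaces.
                System.out.println("Plane " + plane.getType()
                        + ", extent " + plane.getExtentX() + " x " + plane.getExtentZ() + " m");
            }
        }
    }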

I found that this process did not produce a totally accurate model of my room. It could determine a large tabletop and the floor plane quite well, but it couldn’t identify smaller surfaces like a chair or coffee table with any reliable consistency. When the user places a 3D object in the scene, the object remains more or less in the same place, although the best word I can think of to describe the general experience is ‘jittery’. All that said, this is emerging tech, and jittery is probably acceptable at this point.

To save the 3D object’s position to the cloud, an API call is made that uploads the scene mapping data along with the object’s pose.
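
In the Android (Java) API, that call is Session.hostCloudAnchor, which returns a new anchor whose cloud state is then polled each frame until hosting finishes. A minimal sketch under those assumptions (error handling omitted; the helper names are illustrative only):

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Anchor.CloudAnchorState;
    import com.google.ar.core.Session;

    // Sketch: begin hosting a local anchor with the Cloud Anchor service.
    // The call uploads the local mapping data plus the anchor's pose and
    // returns immediately; hosting then continues asynchronously.
    Anchor beginHosting(Session session, Anchor localAnchor) {
        return session.hostCloudAnchor(localAnchor);
    }

    // Sketch: poll once per frame until hosting succeeds (or hits an ERROR_* state).
    String checkHosting(Anchor hostedAnchor) {
        CloudAnchorState state = hostedAnchor.getCloudAnchorState();
        if (state == CloudAnchorState.SUCCESS) {
            return hostedAnchor.getCloudAnchorId(); // the ID to save and share
        }
        return null; // still TASK_IN_PROGRESS, or failed
    }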

Next, the API returns an ID for the newly persisted cloud anchor. In my tests, this process took anywhere between 5 and 20 seconds. Once the ID is returned, it can be used to share the 3D object within the scene across different user sessions.

To test this, I copied the ID and closed my test app. I fired it back up again and my scene was now empty. I walked around the scene for 30 seconds to recreate the mapping, then sent a request to the Google Cloud Anchor API to fetch the anchor with my saved ID. This return trip was generally quite quick, usually just a second or two over my office Wi-Fi connection. The 3D object is then recreated in the scene! Wow! That’s cool! I would stress, though, that it only appeared in the correct position (or at all) when I was in the same position as where it was originally created. When I tried to call the API from the other side of the table, the cloud anchor resolution returned 0 matches.
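
Resolving is the mirror image of hosting: the saved ID is passed back to the session, and the returned anchor is polled until its cloud state reports success, at which point its pose is valid and the 3D object can be re-attached to it. Another rough sketch in the Android (Java) API, with illustrative helper names:

    import com.google.ar.core.Anchor;
    import com.google.ar.core.Anchor.CloudAnchorState;
    import com.google.ar.core.Session;

    // Sketch: ask the Cloud Anchor service to resolve a previously hosted anchor.
    Anchor beginResolving(Session session, String savedCloudAnchorId) {
        return session.resolveCloudAnchor(savedCloudAnchorId);
    }

    // Sketch: poll once per frame; once SUCCESS is reported, the anchor's pose is
    // valid and the virtual object can be recreated at that pose in the scene.
    boolean isResolved(Anchor resolvedAnchor) {
        return resolvedAnchor.getCloudAnchorState() == CloudAnchorState.SUCCESS;
    }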

Overall Conclusion

So, I’m impressed with the progress being made with cloud-based AR anchors. But, I have to concede that I feel reluctant to get too excited, just yet, about rolling out an app whose central premise is a shared, multi-scene AR experience. In my view, a shared-AR-experience app would need to be limited to a small number of AR objects in a well-defined physical space, where users are actively searching for objects that they know to be there and where there are limited vantage points from which the scene could be visualised. It wouldn’t be a trivial matter, for example, to reveal previously persisted 3D objects to a user as they walk down the street, because the scene would be continuously changing; but if you know in advance where to look (as well as from which position, etc.), it works pretty well.

Video Walkthrough

The video below presents my testing of ARCore Cloud Anchors.

References

Perceptual Limits of Optical See-Through Visors for Augmented Reality Guidance of Manual Tasks (2019)
https://ieeexplore.ieee.org/document/8707062

Shared AR Experiences With Cloud Anchors
https://developers.google.com/ar/develop/java/cloud-anchors/overview-android
