WWDC22 - Augmented Reality Digital Lounge
Questions and answers collected from the WWDC22 Augmented Reality Digital Lounge, which was held from 07 - 10 June 2022.
All
I am pretty new to Reality Composer. I would like to know how (if it is possible) to add textures to custom USD objects.
Reality Converter makes it easy to convert, view, and customize USDZ 3D objects on Mac. For more information, visit:
https://developer.apple.com/augmented-reality/tools/
The recent update of Xcode made scenes in Reality Composer look much darker. Is this caused by a change in RealityKit?
We are aware of a bug in macOS Ventura/iOS 16 that is causing the lighting to appear darker. Please feel free to file a bug report on Feedback Assistant about this.
I am trying to use the LiDAR scanner to create a 3D model by capturing an object, but couldn't find enough resources for that. Any references/resources, please?
For creating 3D models from images captured on device, this should be a helpful resource to get more inspiration and help:
Bring your world into augmented reality
Is there a way of exporting a Reality Composer scene to a .usdz, rather than a .reality or .rcproject? If not, what are your suggested ways of leveraging Reality Composer for building animations but sharing to other devices/platforms so they can see those animations baked into the 3D model?
Yes, on macOS, you can open [from the menu bar] Reality Composer ▸ Settings [or Preferences on older versions of macOS] and check Enable USDZ export.
Are there any plans (or is there any way?) to bring post-process effects and lights into Reality Composer? I'm making a short animated musical film in AR. I love how RC does so much automatically (spatial audio, object occlusion...). I just wish it was possible to amp up the cinematic-ness a little with effects.
Post processing effects are not supported in RC right now, only in a RealityKit app. However, feel free to file a feature request on Feedback Assistant about this.
The LookToCamera Action works differently in AR Quick Look than while testing in Reality Composer. Will this be fixed with Xcode 14?
Please file a bug report for this. Any additional information you can provide, such as a video reproducing the issue would be hugely helpful.
https://developer.apple.com/bug-reporting/
Are there guidelines or best practices for exporting a RealityKit scene to a USDZ? Is this possible? I've seen just a little about the ModelIO framework. Is this the tool we should be using?
I don’t think we have any guidelines about this, since exporting/saving a scene is not supported by the current APIs. ModelIO seems like a reasonable solution to me, but you might also want to file a feature request for this on Feedback Assistant.
Hello there, I am someone who is still fairly novice with Reality / AR Kit. And I want to ask what is the best way to implement Multiuser AR experiences. I've been thinking on creating an AR app that would use this feature to allow multiple users to view a single AR view (e.g., multiple users seeing the same rendered model from their own perspectives).
Multiuser AR experiences can be created using the SynchronizationComponent
This is a good tutorial (along with sample code) on building collaborative AR sessions between devices:
https://developer.apple.com/documentation/arkit/creatingacollaborative_session
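As a rough sketch of how those pieces fit together (assuming `mcSession` is an MCSession you have already set up and `arView` is your ARView; the function name is a placeholder):

import MultipeerConnectivity
import RealityKit
import ARKit

func enableMultiuser(arView: ARView, mcSession: MCSession) {
    // Entities that carry a SynchronizationComponent are kept in sync across peers.
    arView.scene.synchronizationService = try? MultipeerConnectivityService(session: mcSession)

    // Share ARKit collaboration data so all peers build a common map.
    let configuration = ARWorldTrackingConfiguration()
    configuration.isCollaborationEnabled = true
    arView.session.run(configuration)
}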
In the keynote, there's a mention of a Background API in Metal. Please share a documentation/resources link.
Are you referring to https://developer.apple.com/documentation/metal/resource_loading?
iOS 15.4 includes the builtInLiDARDepthCamera type in AVFoundation. Is there any advantage in implementing this camera type when doing Object Capture for better depth calculation, or does that not change the outcome of the rendered 3D model?
Capturing images with LiDAR devices will give you automatic scale estimation and gravity vector information on your final usdz output
What are some ideal non-product examples of good USDZs?
There are great USDZ examples on Quick Look Gallery
For example you have the Lunar Rover from For All Mankind
We've also added new documentation to help you generate better USD assets here:
Creating USD files for Apple devices
Is there a way to access System instances for a Scene or must System updates (e.g., change the culling distance of a System) always route through a Component?
Generally Systems are designed to operate on Entities (within Scenes) and their Components. Each System can be updated against multiple Scenes (and the Scene’s entities).
If you have state that you want to be represented with a System, one method to do that is to have a root entity that holds a “System component”.
There's more information on Systems in last year's WWDC session and on the developer documentation website:
- Dive into RealityKit 2
- https://developer.apple.com/documentation/realitykit/system/update(context:)-69f86
When do you think we will see new versions of Reality Composer and Reality Converter apps? I'm a college professor - Graduate Industrial Design, and use these as an intro to AR tools. Better, more capable versions might be nice? Thanks.
Unfortunately, we don’t discuss our future plans. However, we are aware that our tools haven’t been updated in a few years and could use some new features. Could you share what features you are looking for us to add?
We are seeing some memory leaks when adding ModelEntities to an anchor, pausing the ARSession and starting it again, and adding ModelEntities again... We see memory growing in the re::SyncObject section. Does anyone have experience troubleshooting memory leaks that have happened in a similar way?
I’d recommend this year’s WWDC Xcode session, What's new in Xcode, for what’s new in debugging. And there have been many other excellent sessions over the years on debugging.
That said if you believe it may be RealityKit or another system framework responsible for leaking the entities we’d ask you to file a Feedback Item on http://feedbackassistant.apple.com if you haven’t done so already.
Any plans for instant AR tracking on devices without LiDAR? This could be helpful for translation apps and other apps that overlay 2D text/images on 3D landmarks.
You might want to ask this to the ARKit team, but I’m not aware of any plans.
A feedback item would be good though!
Is there a way to localize against a scanned room from the Room Plan API (via ARKit) so that it could be used, for example, to set up a game in your room and share that with other people?
No, there is no re-localization in RoomPlan. But we expose the ARSession, so you could fall back to ARKit for re-localization.
Is there a suggested manner of writing ARKit/RealityKit experiences to a video file? I'm current using RealityKit 2's post-processing to convert the source `MTLTexture` to a `CVPixelBuffer`, and writing that to an `AVAssetWriter`, but this occasionally ends up leading to dropped frames or random flickers in the video.
We don't currently have a recommended method for doing this, and as such would love to see a feedback item explaining what you need and a use case explaining it. That would be wonderful.
That said, your method should in theory work, and we'd also love to see a feedback item describing the issues you're seeing.
Is there a way to use video textures in Reality Composer?
Video textures are currently not supported through Reality Composer UI. However, if your .rcproj is part of an Xcode project, you can use the RealityKit VideoMaterial api to change the material of your object in the scene at runtime.
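A minimal sketch of that runtime swap (assuming `modelEntity` is already in your scene and `videoURL` points to a local video file):

import AVFoundation
import RealityKit

let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)   // RealityKit material driven by the player
modelEntity.model?.materials = [videoMaterial]
player.play()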
Is there currently a built-in way or example of a way to transform a CapturedRoom from RoomPlan into a ModelEntity or other type of RealityKit entity? Instead of only the exported USDZ file?
I don’t believe there is a built in way, but loading a USDZ into a RealityKit scene as a ModelEntity is very simple
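For example, a minimal sketch (assuming `exportedRoomURL` is the USDZ that RoomPlan wrote to disk and `arView` is your ARView):

import RealityKit

let roomEntity = try Entity.loadModel(contentsOf: exportedRoomURL)   // returns a ModelEntity; throws on failure
let anchor = AnchorEntity(world: .zero)
anchor.addChild(roomEntity)
arView.scene.addAnchor(anchor)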
I noticed the new beta class "ImageRenderer" for SwiftUI, allowing SwiftUI views to be rendered into a static image and be used as a texture in ARKit. Will there be an interactive version of displaying SwiftUI views in ARKit?
We don't discuss future plans, but gathering developer feedback is important to us, so we'd ask you to post your request to Feedback Assistant.
In the State of the Union, there is a reference to `ScanKit` alongside the mention of `RoomPlan`. Is `ScanKit` an SDK, or is that the same thing as `RoomPlan`?
RoomPlan is the name of the SDK. You’ll want to refer to those APIs as RoomPlan instead of ScanKit.
At last year's WWDC 2021 RealityKit 2.0 got new changes to make programming with Entity Component System (ECS) easier and simpler! The current RealityKit ECS code seems too cumbersome and hard to program. Will ease of programming with ECS be a focus in the future?
While we don’t discuss specific future plans, we always want to make RealityKit as easy to use for everyone as we can.
We’d ask you to post your issues and/or suggestions to Bug Reporting
I’d love to find out more about what you find too cumbersome. Thanks!
Collaboration frameworks for AR are important. Is Apple considering features related to remote participation in AR experiences? Unity has this capability to some extent.
While we don't discuss future plans, we always hope to gather this sort of feedback during WWDC. Thanks for taking the time to share 🙏
We do support collaborative sessions over the same network, more details and sample code can be found here:
Creating a Collaborative Session
Is this what you were looking for?
This question may be better suited for tomorrow's #object-and-room-capture-lounge, but is the output `CapturedRoom` type able to be modified prior to export to USDZ? For example, could I remove all `[.objects]` types, and leave just walls/doors, or change the texture of a surface?
Yes, please ask this during the object capture lounge tomorrow. But you should be able to modify after export and re-render.
You would need to use the RoomCaptureSession API and subscribe to a delegate to get those updates which contain the Surfaces and Objects. You can then process that data and render it as per your liking.
I've noticed that when occlusion is enabled on LiDAR devices, far away objects are automatically being clipped after a certain distance like 10m or so (even if there is nothing physically occluding them). I've tried to adjust the far parameter of the PerspectiveCameraComponent – https://developer.apple.com/documentation/realitykit/perspectivecameracomponent/far But unfortunately that didn't help. Only disabling occlusion removes the clipping. Is there a workaround for this behavior?
This should be fixed in iOS 16.
I need SwiftUI Views in my RealityKit experience...please and ASAP.
You can host RealityKit content inside SwiftUI views with UIViewRepresentable
If you’re asking if you can use SwiftUI content within RealityKit - there is no direct support for that at present and we’d ask you to file a feedback item explaining your use-case for that feature.
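A minimal sketch of the UIViewRepresentable approach:

import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Configure the session and add anchors here.
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}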
I'd love to have SF Symbols renderable in AR! It actually works with RealityKit on macOS by copy and pasting the symbols, but not available in the system font on iOS.
You may want to check with the SF Symbols team to confirm this is not possible yet, and also file a feature request on feedback assistant.
A bit of a hack solution, but you may be able to get this to work by drawing the Symbol to a CGImage and passing that image in as a texture.
I have been really interested in RealityKit and ARKit for the past couple of years. Where can I learn more about it? I’m currently into Designing, Writing, Editing, and Management and would love to work on futuristic tech.
To learn more about RealityKit and ARKit, I would recommend starting with our documentation and videos. Here are a few links to help you get started:
You can also always ask questions on Developer Forums 💬
Many of you know that the .glb file format (Android's Scene Viewer) supports compression like Draco. Are there any plans to support compression for .usdz files?
I would suggest filing an enhancement request on feedback assistant for this
What is the recommended way to add live stream or capture capabilities with RealityKit? Do we need to build frame capture and video writers with AVFoundation? A higher level API would be a better fit for RealityKit.
I would recommend using ReplayKit or ScreenCaptureKit to record your app screen to stream / share
QuickLook is currently very dark (much darker than would be expected). Clients are complaining about washed-out colors, and we need to overcorrect via emission (not ideal, breaks dark rooms). (Easy to test: make a pure white material and display it in QuickLook in a bright room, it will never get white.) Are there plans to fix this?
Great question - and we have good news for you 🙂
We are releasing new lighting to AR Quick Look which is brighter with enhanced contrast and improved shape definition to make your assets look even better.
Please check tomorrow's session - Explore USD tools and rendering - with examples and how to implement it!
I have a model that is a .reality file, that opens in AR Quick Look. When the user taps the model, it shrinks to show the same light in a different size. However, it's not very clear to the user that this is a possibility. If they don't tap for a while, ARQL encourages them to "Tap the object to activate." Is there a way I can customize this message?
That is a standard message and unfortunately there’s currently no way to customize the text.
Alternatively, you can create your own banner within your asset.
For example, you can check out the first asset on the gallery page
Please file a feedback report, if you haven't already 🙏
It seems a bit weird that there are currently three different implementations of USD in use across iOS / Mac. Are there plans to consolidate those into one to make testing and verification of assets across platforms easier? The shared feature subset is pretty small, resulting in less-than-ideal products for clients.
There are different USD renderers across our platforms but each serve a different purpose.
Here is a developer document that explains these different USD renderers and what their feature sets are
Creating USD files for Apple devices
Is it possible to support more than one image anchor in a scene with AR Quick Look?
This is not supported at this point. The team is still investigating this possibility.
I'm creating pendant lights for viewing in AR Quick Look. Is it possible to anchor these to the ceiling of a room?
Yes this is something that is supported in AR Quick Look. You can place objects on the ceiling by dragging them there. This can be done using a regular horizontal (or vertical) anchor.
However, there are potential challenges to be aware of. The biggest is that ceilings usually lack a lot of feature points, which makes it difficult to detect a proper plane. Using a device with LiDAR can improve the results that you get.
Using AR Quick Look, how might I add a color picker to change between colors of a model? For example, the iMac ARQL on apple.com requires users to jump in and out of ARQL to try different colors. Is there a way to have color pickers in ARQL to try different materials or change different scenes in a .reality file?
You could use Reality Composer's interactions to make an interactive USD where you can tap on different colors to change the model
This would need to be done in 3D. There’s a previous session, Building AR Experiences with Reality Composer, that has some examples.
Any thoughts about making USD/USDZ files with particle effects? Things on fire/sparking etc?
This is not currently possible to do in a USD, but you should submit the idea to https://feedbackassistant.apple.com.
You can however do some particle effects in an app by using RealityKit's CustomShaders.
Depending on how complex your effect is, you can also bake your particle effects to a regular mesh + bones animation ✨
In many cases you can also create a pretty convincing effect just by scaling/rotating a few planes. Example link (no USDZ behind that right now, but you get the idea - this is just two simple meshes for the particles)
Is there a simple way to create a 3D object with a custom image as a texture? Reality Composer only allows a material and a color, and without that, I'll have to dip into a far more complex 3D app. I'd really, really like to use USDZ more in Motion, for pre-viz and prototyping, but without texture editing it's quite limited. Have I missed something? :)
There are various third-party DCCs with great USD support that let you create complex 3D object with textures and export as USD. You can then use Reality Converter to convert those to USDZ to import into Motion.
Another approach: three.js (web render engine) can actually create USDZs on the fly from 3D scenes. A colleague used that recently for USDZ AR files with changeable textures on https://webweb.jetzt/ar-gallery/ar-gallery.html
Also take a look at the Explore USD tools and rendering session tomorrow. You can now change material properties in Reality Converter!
Another thing that might help for making quick adjustments: the browser-based three.js editor at https://threejs.org/editor.
Reality Composer is great, but our team of 3D asset modelers has found it easier to sculpt characters in Zbrush. Do ARKit and RealityKit accept models created in Zbrush, or are there intermediate steps best for preparing a model for Apple platforms? (KeyShot, etc.)
Yes, if you can export your assets to FBX, glTF or OBJ, you can convert them to USDZ using Reality Converter, which is compatible with ARKit and RealityKit
Are there tools that can be used to rig skeletons for USD characters? I have not found anything that works?
Yes, there are various third-party Digital Content Creators (DCC) that let you create skeletons and Reality Converter lets you convert other file formats with skeletons to USD.
Several Digital Content Creation tools can help you create rigged skeletons for characters exported to USD.
Is Reality Composer appropriate for end-users on macOS? We'd like to export "raw"/unfinished USD from our app then have users use Reality Composer to put something together with multimedia.
You can assemble different USDZ assets together to build out a larger scene in Reality Composer and add triggers and actions to individual assets within the project
Is there a way to modify ModelEntities loaded from an .usdz file on a node basis? E.g. show/hide specific nodes?
Yes, if you load the USDZ with Entity.load(...) or Entity.loadAsync(...), you can traverse the hierarchy and modify the individual entities. You'd want to use Entity.isEnabled in this instance to hide/show a node.
Note that .loadModel will flatten the hierarchy, whereas .load will keep all entities.
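A short sketch of that flow ("MyModel" and "Lamp_Shade" are hypothetical asset and node names):

import RealityKit

let scene = try Entity.load(named: "MyModel")          // .load keeps the full hierarchy
if let shade = scene.findEntity(named: "Lamp_Shade") {
    shade.isEnabled = false                            // hides just this sub-entity
}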
Will there be an async await (concurrency) API to detect when entities are added to an ARView?
Hey there, we don’t discuss future releases of Apple products. But we’d love to hear your feedback and suggestions. Please file your feedback here to get it into our system.
What's the easiest way to add user interactions (pinch to scale, rotation, transform) to an Entity loaded from a local USDZ file in RealityKit?
You can use the installGestures function on ARView. Keep in mind that the entity will need to conform to HasCollision.
To do this you could create your own CollisionComponent with a custom mesh and add it to your entity, or you could simply call generateCollisionShapes(recursive: Bool) on your entity. Putting it all together, you can use .loadModel/.loadModelAsync, which will flatten the USDZ into a single entity. Then call generateCollisionShapes and pass that entity to the installGestures function. This will make your USDZ one single entity that you can interact with.
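Putting that together as a sketch (assuming "toy_robot" is a USDZ in your app bundle and `arView` is your ARView):

import RealityKit

let model = try Entity.loadModel(named: "toy_robot")            // flattened into a single ModelEntity
model.generateCollisionShapes(recursive: true)                  // required so gestures can hit-test the entity
arView.installGestures([.translation, .rotation, .scale], for: model)

let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(model)
arView.scene.addAnchor(anchor)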
Is there any update to Reality Composer this year?
No.
We don't discuss details about unreleased updates, but one of the things that’s most helpful to us as we continue to build out our suite of augmented reality developer tools is feedback
Please continue to submit ideas or suggestions in Feedback Assistant 🙂
Is there a way to have light sources in AR Quick Look files hosted on the web? For example, a client would like to have lamps in AR Quick Look. It would be awesome if we could use RC to turn off/on light sources. Is there any way to do this?
I don't think that it's possible. But you should submit the idea for supporting virtual lights on:
https://feedbackassistant.apple.com
Can I render a snapshot of only the virtual content in RealityKit? Something similar to the snapshot functionality in SceneKit?
Yes, you can use ARView.snapshot(...). If you want, you can also change the background of the ARView first.
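For example, a small sketch:

// Use a flat background so only the rendered virtual content stands out.
arView.environment.background = .color(.clear)
arView.snapshot(saveToHDR: false) { image in
    // `image` is an optional UIImage of the rendered view.
}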
Is it possible to instance meshes in RealityKit (similar to SceneKit's clone method)?
If you call .clone(...) on an Entity, the clone will re-use the same meshes.
Many aspects of USD are open source. Could Reality Composer also be open-sourced so that members of the community could work on features?
Hey there, we’d definitely be interested in hearing more about your idea.
I’d suggest submitting the suggestion at Bug Reporting
In SceneKit there were shader modifiers. Is there something similar in RealityKit? We need PBR shaders but have to discard certain fragments.
You can apply CustomMaterials & a CustomMaterial.SurfaceShader to achieve certain cool effects for entities! From the Metal side you can call discard_fragment().
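A minimal sketch of that combination (the shader name, the UV-based cutoff rule, and `existingMaterial` are assumptions, not an official recipe):

// Cutout.metal
#include <RealityKit/RealityKit.h>

[[visible]]
void cutoutSurface(realitykit::surface_parameters params)
{
    // Hypothetical rule: discard everything on one half of UV space.
    if (params.geometry().uv0().x > 0.5) {
        discard_fragment();
    }
    params.surface().set_base_color(half3(1.0, 1.0, 1.0));
}

// Swift side: wrap the shader in a CustomMaterial based on an existing PBR material.
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let surfaceShader = CustomMaterial.SurfaceShader(named: "cutoutSurface", in: library)
let material = try CustomMaterial(from: existingMaterial, surfaceShader: surfaceShader)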
In Reality Composer, objects on top of each other, such as a vase on a table, cast shadows only onto the ground plane and not onto one another. If baked AO textures aren't an option, since the vase may be moved by the user, what would you suggest in order to achieve an equally good result to the default grounding shadow, given that the quality of shadows is critical for an AR experience?
We don't have any materials you can apply to objects to make them participate in the same shadows as ground planes. However, you can enable shadow casting from directional and spot lights via DirectionalLightComponent.Shadows and SpotLightComponent.Shadows. This may alter the overall lighting of your scene though.
Alternatively, we do have CustomMaterial, which allows you to create custom materials via Metal, but for this use-case it may not be able to get you the desired effect.
We're always looking to improve RealityKit, so we would appreciate it if you submitted a request for this via https://feedbackassistant.apple.com/
Is it possible to take a snapshot of only the virtual content and a snapshot of only the real content like in SceneKit?
That’s a good question.
I think you can get some of the way there via the ARKit APIs to get the current frame.
You can also toggle the mode of an ARView to switch it to a .nonAR view, then use ARView.snapshot() to grab a snapshot of the virtual content, and then switch it back.
However, I don’t believe that would give you exactly what you want - I think the ARView snapshot would not necessarily have a transparent background (if that’s what you need). And even then the performance of this may not be great.
You could also try setting the Environment background color to something with 100% alpha.
I’d suggest filing a feature request for this with Bug Reporting
With USDZ content, what's the best way to link to an external website or take users to a product landing page?
If you have your USDZ content on the web you can check out the AR Quick Look functionality for such things at:
Adding an Apple Pay Button or a Custom Action in AR Quick Look
As far as I know there isn’t currently a way to do such a thing directly from a USDZ sent from iMessage, but I can pass that request along.
Can Reality Composer be made available as a macOS app in the App Store?
While Reality Composer is available only for iOS and iPadOS on the App Store, we'll pass this feedback along. Thanks 🙏
Reality Composer is available on macOS as part of Xcode as a Developer Tool, though.
Is there a way to capture video from an ARView the way there is a snapshot()? I see there is 4K video being hyped - will this include the ability to let users take video recordings?
There's no API in RealityKit to capture video. That said, there are system-level APIs to capture screen recordings, and I wonder if that would be useful for you.
I’d suggest filing a feature request with your use-case. Thanks!
Hello, for an artist/designer only experienced with Reality Composer and no code, are there any suggestions and resources on getting started with RealityKit to make more advanced AR experiences?
Hi! We have a number of WWDC sessions covering RealityKit and Reality Composer which is a great place to start.
There’s also a great guide on building a SwiftStrike game: SwiftStrike: Creating a Game with RealityKit
Is there a way to get access to more advanced materials rendering on RealityKit models? I want to "skin" a plane with a UIView, currently I need to fall back to ARKit and SceneKit in order to do this
RealityKit has a CustomMaterial API which allows you to create custom Metal-based materials. I'd recommend our Explore advanced rendering with RealityKit 2 WWDC talk to learn more.
There is also a great resource on Custom Shader API that gives more details on the APIs available in Metal.
Is there a means of exporting a USDZ file (either from Reality Composer, Cinema 4D, etc., or programmatically), with a video texture already applied?
There’s no support for that in Reality Composer currently. As always a feature request filed on Bug Reporting would be most appreciated.
There’s also no method to export USDZ from RealityKit and again feature requests appreciated. Thank you!
Is it possible to show or hide only a single child node from a model entity dynamically?
You can certainly load a model and preserve your hierarchy, then use the entity name or another attribute to find an entity, then hide/show it with Entity.isEnabled.
Look at EntityQuery for finding entities efficiently.
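A quick sketch of the query-based approach (the "hidden_" name prefix is a hypothetical convention):

import RealityKit

let query = EntityQuery(where: .has(ModelComponent.self))
arView.scene.performQuery(query).forEach { entity in
    if entity.name.hasPrefix("hidden_") {
        entity.isEnabled = false
    }
}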
Can I place a model on a target, such as the cover of a book or a QR code, so that it doesn't move from that position by just using USDZ? And how could I achieve this?
You can use Reality Composer to create a scene attached to an image anchor. You can then export the scene to a USDZ or a Reality File.
See Selecting an Anchor for a Reality Composer Scene
Is taking the output MTLTexture from RealityKit 2's `postProcessing` pipeline suitable for writing to an AVAssetWriter, streaming via RTMP, etc?
“Maybe” 🙂
So you can certainly take MTLTextures and convert them (if they're configured correctly) into CVPixelBuffers for AVFoundation to consume.
That said, it's really not the intended use case of RealityKit's post-processing functionality, and I wouldn't be surprised if either it doesn't work as you'd expect or if we break you in the future.
Sounds like a great feature request though - Bug Reporting
From an AR design perspective, what is best for knocking down objects? Say in a game where you knock down blocks, is it better to have the user run the device through the blocks, tap the blocks, or press a button to trigger something to hit the blocks?
It depends which approach is best — each have a set of pros and cons based on what you want out of the experience.
It can be compelling to run through AR blocks if you want to emphasize lots of user motion in an experience and the scale of the experience is quite large — good for apps that can take advantage of wide open spaces.
Tapping them is more immediate and indirect so if you wanted to destroy a tower quickly or something like that then that would be the way to go — and I could see that being very satisfying to trigger many physics objects to react at once.
I think the same would apply to a button press, it’s an indirect way to trigger it if the experience requires rapidly knocking them down.
Overall I think it’s up to what you want the experience to be, and maintaining internal consistency with other interactions within the app.
SwiftStrike and SwiftShot are great example apps that use similar techniques.
Is it possible to control audio media in USDZ (i.e. pause, skip, load new audio file) with a scene / behavior (using Reality Composer or other tool)?
Currently Reality Composer does not support this. This sounds like a great feature request and we would appreciate if you can file feedback through Feedback Assistant.
If you are willing to jump into code…
You can use the AudioPlaybackController returned from the playAudio API to play, pause, etc. You can also use AudioFileResource to add or replace audio on entities.
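A short sketch of that code path (assuming "ambience.mp3" ships in the app bundle and `entity` is already in the scene):

import RealityKit

let resource = try AudioFileResource.load(named: "ambience.mp3",
                                          inputMode: .spatial,
                                          loadingStrategy: .preload,
                                          shouldLoop: true)
let controller = entity.playAudio(resource)   // AudioPlaybackController
controller.pause()
// ...later
controller.play()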
Regarding optimizations: is there support for level of detail and instancing in RealityKit?
Instancing is mostly abstracted away behind the Entity.clone() method.
Level of detail is not currently exposed as API and we’d recommend filing a feature suggestion on Bug Reporting
That said you can implement Level of Detail yourself (probably using custom Systems and Components) although we understand that may not be ideal. Please file feature suggestions regardless!
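One possible shape for such a hand-rolled LOD setup, as a sketch only (it assumes you attach ViewerComponent to an AnchorEntity(.camera) so the system can read the viewer's position, and that you register the components and system at launch):

import RealityKit
import simd

struct LODComponent: Component {
    var maxVisibleDistance: Float
}

struct ViewerComponent: Component {}

class LODSystem: System {
    private static let lodQuery = EntityQuery(where: .has(LODComponent.self))
    private static let viewerQuery = EntityQuery(where: .has(ViewerComponent.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        // Find the entity that represents the viewer (e.g. an AnchorEntity(.camera)).
        guard let viewer = context.scene.performQuery(Self.viewerQuery).first(where: { _ in true }) else { return }
        let viewerPosition = viewer.position(relativeTo: nil)

        for entity in context.scene.performQuery(Self.lodQuery) {
            guard let lod = entity.components[LODComponent.self] as? LODComponent else { continue }
            let distance = simd_distance(entity.position(relativeTo: nil), viewerPosition)
            entity.isEnabled = distance <= lod.maxVisibleDistance
        }
    }
}

// At app start-up:
// LODComponent.registerComponent(); ViewerComponent.registerComponent(); LODSystem.registerSystem()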
Is there a plan to have custom render passes like in SceneKit with SCNTechnique in RealityKit?
While we do not currently support custom render passes, we have support for post process effects. Please file a feature request through Feedback Assistant if your use case requires more customization 🙏
Does RealityKit support light sources in objects – for example, if you wanted a light bulb. If so, is there documentation for this?
There are various sorts of lighting in RealityKit - you might want to start here perhaps?
(see the Cameras and Lighting section in the docs)
But it looks like we don't support lighting in Reality Composer, unfortunately, so I'd suggest filing a feature suggestion:
In Reality Composer, a force was applied to an object. Then I wanted to animate it into another scene, starting from the post force location. Is there a way to apply a new scene using its last known position? I hacked the position by guessing the ending location and starting the next scene close to that position but it results in a slight motion jitter.
This may be achievable if embedded in Xcode with some code.
I recommend signing up for a Reality Composer lab if you would like to explore that further.
But yes, being able to observe live parameters sounds like a great feature in Reality Composer. Please file a feature request using Feedback Assistant with your use case 🙂
Is there a way to add gestures to an entire Reality Composer scene? I can add it to an individual entity, but it would be cool to let users place the entire scene (otherwise I lose all the Reality Composer behaviors when I just target the entity)
A way to get the entity gestures working on an entire scene is to use visualBounds(…) and create a CollisionComponent on the root entity. You can then use CollisionGroup to make sure it doesn't interfere with any physics. If you're using ARView.installGestures(…), you'll need the entity to conform to HasCollision, which may require you to create a new entity type for the root. Quick example:
// New Entity type which conforms to `HasCollision`
class CollisionAnchorEntity: Entity, HasAnchoring, HasCollision { }

// Transfer scene contents
let collisionAnchor = CollisionAnchorEntity()
collisionAnchor.children.append(contentsOf: originalAnchor.children)
collisionAnchor.anchoring = originalAnchor.anchoring

// Create CollisionComponent for bounds of scene
let sceneBounds = collisionAnchor.visualBounds(recursive: true, relativeTo: collisionAnchor)
let collisionShape = ShapeResource
    .generateBox(size: sceneBounds.extents)
    .offsetBy(translation: sceneBounds.center)
collisionAnchor.collision = CollisionComponent(shapes: [collisionShape])

// Install gesture on new anchor
arView.installGestures(for: collisionAnchor)
Is VisionKit / Data Scanner available in AR?
Using data scanning via VisionKit is possible with ARKit. ARKit provides the captured image on the ARFrame. One can inject the ARFrame's captured image into the data scanner and obtain information about text.
However, the result will be two-dimensional. If the use case is to bring the detected text into the AR world in three dimensions, one needs to estimate a transform for the 2D text. ARKit does not support this natively but does support custom anchoring.
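Since the live DataScannerViewController drives its own camera, one hedged alternative is to run Vision's text recognition directly on the image ARKit already gives you; a sketch:

import ARKit
import Vision

func recognizeText(in frame: ARFrame) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            // 2D strings and bounding boxes; anchoring them in 3D is up to you.
            print(observation.topCandidates(1).first?.string ?? "")
        }
    }
    // .right matches the sensor orientation for a portrait device; adjust as needed.
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])
}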
Can we get the LiDAR camera position while doing a mesh in ARKit?
ARMeshAnchor transforms are already aligned with the wide camera, which is also what the camera transform is relative to.
Is the mesh from an ARSession available through the delegate methods?
Yes, once you turn on scene reconstruction by setting the sceneReconstruction property on ARWorldTrackingConfiguration, the meshes are available as ARMeshAnchors through ARKit's anchor delegate methods.
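A small sketch of that setup (assuming `session` is your ARSession and your class is its delegate):

import ARKit

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
session.run(configuration)

// ARSessionDelegate
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let meshAnchor as ARMeshAnchor in anchors {
        print("Mesh anchor with \(meshAnchor.geometry.faces.count) faces")
    }
}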
What's the difference between ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing and recommendedVideoFormatFor4KResolution?
recommendedVideoFormatForHighResolutionFrameCapturing is used for capturing high resolution still images while the session is running.
For 4K video, you should use recommendedVideoFormatFor4KResolution
Note that this feature is only supported on iPad with M1
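For example, a minimal sketch of opting into 4K capture (assuming `session` is your ARSession):

import ARKit

let configuration = ARWorldTrackingConfiguration()
if let format = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
    configuration.videoFormat = format
}
session.run(configuration)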
What are some tips/best practices to prevent AR objects from shifting? We're finding a bit of drift that's most noticeable with larger virtual objects.
We recommend adding an ARAnchor at the position where you want to place an object and then associating your node/entity with that anchor. This should help prevent drifting.
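A sketch of that pattern with RealityKit (assuming `result` is an ARRaycastResult at the placement point and `modelEntity` is the content to place):

import ARKit
import RealityKit

let anchor = ARAnchor(name: "placedObject", transform: result.worldTransform)
arView.session.add(anchor: anchor)

let anchorEntity = AnchorEntity(anchor: anchor)   // tied to the ARAnchor, so it benefits from drift correction
anchorEntity.addChild(modelEntity)
arView.scene.addAnchor(anchorEntity)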
I've had some experience with Reality Composer, but for coding, I only know SwiftUI. Is it possible to create an AR app with ARKit only with SwiftUI? If so, could you share some suggestions or links on getting started?
You can use ARKit inside a SwiftUI app. You can also use RealityKit to build ARKit apps in a declarative way.
Here are the links to resources and sample code to help you get started:
Does adding anchors everywhere force ARKit to keep a good understanding and reduce drift everywhere? If yes, will this affect the tracking quality?
ARKit offers functionality to add custom anchors which is the preferred and recommended way to place content.
See the add(anchor:) method.
Custom anchors are used internally for drift correction. We cannot guarantee absolutely no drift. However, using your own anchors will use the system's best knowledge to correct for any drift.
I am working on an app that uses ARKit to guide the user around an object while semi-automatically capturing images for later (server side) 3D reconstruction. I very much appreciate the ability to control the capture session and the ability to capture high resolution images that you added in iOS 16. I believe currently we do not have much control over the high resolution image capture? It would be great if we could configure the AVCapturePhotoSettings used for the capture. For photogrammetric reconstruction purposes it would be amazing if we could for example capture a Pro RAW image during the ARKit session.
We really appreciate the feedback and are glad that you are already starting to put these API changes to good use! At the moment, we do not expose the ability to pass in AVCapturePhotoSettings through our API, but this would be a great feature request to submit via Bug Reporting
We want to play with the depth map. Is it possible to get the LiDAR camera position with the depth map? We've tried using the wide camera position and it doesn't work, because the wide camera position is not the same as the depth map's camera position.
The depth map surfaced through the Scene Depth API does align with the wide angle camera and should correspond to the camera transform available through the ARFrame.
Here is a sample code that generates a colored point cloud by combining the wide angle camera image and depth map:
Displaying a Point Cloud Using Scene Depth
If you still see some issues, I recommend filing a bug through the feedback assistant at Bug Reporting
Does ARKit track which version of USDZ is in use? I'm interested in using tools from multiple providers in my pipeline and I want to verify the format is consistent through the workflow.
ARKit itself has no notion of rendered content. Content (USDZ) is commonly handled by the rendering engine on top of ARKit like RealityKit, SceneKit, Metal, etc.
In order to learn more about USDZ and how to efficiently use it we recommend this talk.
When I read the EXIF data that ARKit provides, what is SubjectArea? And why does LensSpecification repeat the 4.2 (the focal length, I think) and the 1.6 (the aperture, I think)?
"SubjectArea": ( 2013, 1511, 2116, 1270 )
"LensSpecification": ( 4.2, 4.2, 1.6, 1.6 )
The subject area is defined as a rectangle with a center coordinate and its dimensions. In this case, the center is at (2013, 1511) with rectangle dimensions 2116 x 1270.
LensSpecification lists the minimum and maximum focal length followed by the minimum f-number at each of those focal lengths; with a fixed (prime) lens the minimum and maximum are the same, so the values repeat.
For more details, you may refer to Exif standard tags.
Do modifications made to configurableCaptureDeviceForPrimaryCamera while an ARSession is running change the output of captureHighResolutionFrame? What about modifications before running a new ARConfiguration?
No, any modifications to an ARConfiguration object do not affect a running session. You need to call run(_:options:) after modifying the configuration for it to be used.
You can change capture device settings such as exposure, white balance, etc., and those will be reflected in the output of the ARSession. However, you cannot change the input/output configurations on the capture session.
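A brief sketch of that distinction (`someOtherFormat` is a placeholder):

import ARKit

// Configuration changes only take effect after run(_:options:) is called again.
configuration.videoFormat = someOtherFormat
session.run(configuration)

// Capture-device settings can be adjusted while the session is running.
if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
    do {
        try device.lockForConfiguration()
        device.setExposureTargetBias(-1.0, completionHandler: nil)   // example adjustment
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}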
Are there resources on how to generate a texture for the mesh generated by ARKit ?
We do not have any resources for this.
You should be able to use the wide angle camera and camera transform to generate texture maps for the meshes but unfortunately we do not have any resources or sample code showing that.
We do have this sample code showing how to generate colored point clouds using the scene depth API, hope it is of some help.
Displaying a Point Cloud Using Scene Depth
Any tips for getting started in AR development with 0 coding knowledge?
Regardless of your educational background, anyone can learn how to code if you put in the effort and are passionate about it. There are tons of resources online, many of which have been produced by Apple in the form of documentation, example projects, and WWDC videos, that can help you to learn a programming language, such as Swift.
I would suggest doing some tutorials, watching videos, maybe find a highly rated book on iOS programming, etc to learn how to begin building iOS apps.
Once you are comfortable with that, then you can start to dive into AR specifically. Finding a good book on linear algebra would be useful if you are going to get into AR and graphics programming, but start with the basics first!
For ARKit, we have all sorts of documentation and examples that you can take a look at:
https://developer.apple.com/documentation/arkit/
From a non-Apple developer:
Apple’s documentation is great.
I also found the site RayWenderlich to be super helpful. They even have a book specifically for AR:
Apple Augmented Reality by Tutorials
Do any of the AR frameworks have hand tracking, and the ability to register a pinch between the thumb and pointer finger?
ARKit does not have any hand tracking feature. The Vision framework offers functionality for hand gesture detection.
Detect Body and Hand Pose with Vision - WWDC20
You may find the camera's captured images on the ARFrame and can inject them into Vision. So by combining multiple frameworks you could achieve something close to the requested feature.
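A sketch of approximating a pinch that way (the 0.05 threshold is a made-up value in normalized image coordinates):

import ARKit
import Vision

func detectPinch(in frame: ARFrame) {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])

    guard let hand = request.results?.first,
          let thumb = try? hand.recognizedPoint(.thumbTip),
          let index = try? hand.recognizedPoint(.indexTip),
          thumb.confidence > 0.5, index.confidence > 0.5 else { return }

    let dx = thumb.location.x - index.location.x
    let dy = thumb.location.y - index.location.y
    if (dx * dx + dy * dy).squareRoot() < 0.05 {
        print("Pinch detected")
    }
}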
Does ARKit give any confidence score for each camera position it estimates during camera tracking? If any camera position is not estimated correctly do you suggest any option to improve it?
ARKit returns discrete tracking state for every frame update. You can read more about it here:
Managing Session Life Cycle and Tracking Quality
It is highly recommended to integrate standard coaching view in your app to guide users when tracking is limited. More details at:
Is it possible to do body tracking while being in an ARWorldTrackingConfiguration?
3D body tracking is only supported using the ARBodyTrackingConfiguration. However, we support the detection of 2D bodies on multiple configurations; the ARWorldTrackingConfiguration is one of them.
In order to check which configuration supports this, you may use the supportsFrameSemantics(_:) function.
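For example, a short sketch:

import ARKit

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.bodyDetection) {
    configuration.frameSemantics.insert(.bodyDetection)
}
session.run(configuration)
// Per frame, the 2D joints are then available via frame.detectedBody (an ARBody2D).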
Is there a way to select which rear camera to use for ARView? (Wide, ultrawide, panoramic)
ARKit only supports the wide camera as the primary camera. It is not possible to use other cameras for rendering.
ARFaceTrackingConfiguration, however, uses the front facing camera.
If you have a need to use a different camera, please file a feature request through the feedback assistant at Bug Reporting
Can I use the MapKit 3D model of a city, and anchor it as a child of an anchor using LiDAR geotracking? For long distance occlusion and collision purposes?
There is no integration of MapKit into the ARView. If you know the building footprint (i.e. the polygon in lat/lon coordinates) or even exact geometry anchored to a lat/lon coordinate, you can transform these coordinates by placing ARGeoAnchors at that location. If they are tracked in ARKit, you get the local coordinates and can build an occlusion/collision mesh.
What are the maximum dimensions RoomPlan supports?
The recommended maximum size of the room is 30 x 30 feet.
Yono's Note: This is about 9 x 9 meters.
When we have a RoomPlan scan, can we use it next time as an anchor so we can always Paint Model in same place?
RoomPlan is not an ARAnchor in the current design. Thanks for the suggestion. We will take it into consideration.
From a non-Apple developer:
I created a demo where custom ARAnchors are created for RoomPlan objects. The same could be done for surfaces and then saved to a world map:
https://github.com/jmousseau/RoomObjectReplicatorDemo
Does setting ARKit to use 4K resolution affect the battery longevity? Does it increase the risk to get the device too hot, even if the fps is limited at 30 fps instead of 60 fps? Is there a way to get 60 fps at 4K resolution?
Yes, using 4k resolution may result in more power being consumed. It may also result in thermal mitigation engaging to keep the device from getting too hot, which may impact performance. At the moment, we are only supporting 4k @ 30hz.
ARSession has the getGeoLocation(forPoint:…) method. Is there also a way to obtain the heading relative to north, given a directional vector within the scene or for the device (point of view)?
We are not exposing the heading directly.
You can create any ARGeoAnchor in your vicinity and then compare its transform with your camera's transform. Since ARGeoAnchors are always aligned to East-Up-South, you can derive any global camera orientation by comparing the camera's transform to the ARGeoAnchor's transform.
Might there be more example projects showcasing pure Metal with ARKit? SceneKit is cool, but admittedly, I'd love to see more low-level examples. :) Alternatively, is anyone working on some open source projects showcasing something like this? I think it would be a big win for Apple-platform development to build-up a lot more examples.
Thanks for the suggestion. Here are some existing sample code that uses Metal with ARKit:
- Displaying a Point Cloud Using Scene Depth
- Creating a Fog Effect Using Scene Depth
- Displaying an AR Experience with Metal
Do any of the AR frameworks accept location or hand position from Apple Watch?
No, ARKit runs standalone on iPhone and iPad devices only and does not take any external inputs.
We can capture session events (namely anchor add/remove) by implementing ARSessionDelegate (not RealityKit). Is it possible to get similar or part of these events with RealityKit? (To avoid converting from ARAnchor to AnchorEntity)
RealityKit exposes the ARSession through this API:
https://developer.apple.com/documentation/realitykit/arview/session
You can set the delegate on it to listen to ARKit delegate events.
Is there a talk that goes into any detail about using the new Spatial framework and how it works with ARKit, SceneKit, and/or RealityKit?
There is no dedicated talk about Spatial framework. It provides core functions that can be used with any 2D/3D primitive data.
When using the new 4K resolution in ARKit for a post-production (film/television) workflow, what is the suggested way to take the AR experience and output to a video file?
To capture and replay an ARKit session, see an example here:
Recording and Replaying AR Session Data
If you want to capture video in your app in order to do post processing later, you could use and configure an AVAssetWriter to capture a video.
We also provide a camera frame with every ARFrame; see ARFrame.capturedImage. It is just the 'clean slate': it doesn't contain any virtual content rendered on top of it. If you are doing your own rendering and your Metal textures are backed by IOSurfaces, then you can easily create CVPixelBuffers using the IOSurfaces and then pass those to AVFoundation for recording.
Is there a way to get notified when ARKit relocates itself after it finds out that it has drifted? From my experience, the tracking status does not change when this happens. Also is there a way to ask ARKit to not try to relocate itself after a drift?
We recommend adding ARAnchors and associating your virtual content with them. In case there is a drift, the session delegate's session(_:didUpdate:) would update the anchor such that the virtual content stays in the same location in the real world.
In recent years I have read about and partially experimented with the latest "graphics" frameworks - but somehow I lost track of a cohesive developer experience for when to use which framework (and how to integrate them into a good product). There are amazing "vertical" solutions in these frameworks, but I see only a few strong stories/apps/solutions around them. Does Apple have a "big picture" guide on when to use which framework and how they interact with each other?
We understand that the number of frameworks can be daunting sometimes. However as you alluded to, we try and offer "high level" frameworks to try and meet developers' needs out of the box, for example, being able to use RealityKit for rendering instead of the lower level Metal.
That said, Apple provides several tutorials and code samples to introduce developers into the various frameworks, e.g.:
Building an Immersive Experience with RealityKit
Another great resource are WWDC videos, which go back several years in order to build a solid understanding of a particular framework or technology.
Any guidance on how to build a bridge between ARKit and Spatial Audio? Say you're viewing an object and the audio evolves as you change the object's perspective...
We do not have a sample code that uses ARKit together with spatial audio (PHASE). However, this is a great question, can you please send us a request through Bug Reporting
With body motion tracking, can a dev specify the sample rate of the sample (every few ms) and write out that sample in a continuous manner, e.g. a basic motion recorder? Please ignore the question if this is the wrong place to ask.
Body tracking runs at 60 Hz at the same cadence as the camera. We cannot compute this any faster. However, by changing the ARVideoFormat you may change this to 30 Hz or other supported frame rates.
We do not offer functionality to write the motion capture data to a file. However, our data is compatible with a format called BVH. By following the topology of the skeleton given by the ARSkeletonDefinition and the data coming from an ARBodyAnchor, one could generate such a file output.
Have there been any changes to the light estimation APIs? For example, is directional light available with a world tracking config?
No, there haven’t been changes to light estimation in ARKit this year.
In "Discover ARKit 6" there's a cool demo of setting a point in AR where a picture was taken, and guiding the user there. Is sample code for this available somewhere?
Thanks for your excitement about that app idea. We do not have the sample code, but I recommend going to our Explore ARKit 4 session where we explain how to pick a coordinate in Maps and create an ARGeoAnchor based on it. For alignment with the real world, we have the example with the Ferry Building in SF. We followed that exact workflow with the focus square example.
Is adding custom anchors for drift correction pertinent on a LiDAR enabled device?
In general, we recommend using/adding anchors independent of the device you are using.
I noticed that the built-in Camera app can detect very small QR codes compared to 4K AR. Why is that? Is there a workaround?
We don’t have QR code detection in ARKit. However, you can use the Vision APIs to do QR code detection on the captured image. This VisionKit talk and article might be of interest to you:
Regarding the new ARKit 6 API that takes a 4K photo of the AR scene, is there a limit to how many times it can be called? Can I take say 30 photos within a second?
You can take the next photo right after the completion handler of your previous captureHighResolutionFrame call - or even from within the completion handler.
If you try taking a new photo before the previous call has completed, you will receive an ARError.Code.highResolutionFrameCaptureInProgress error in the completion handler.
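A small sketch of chaining captures that way:

import ARKit

func captureNext(from session: ARSession) {
    session.captureHighResolutionFrame { frame, error in
        if let error = error as? ARError,
           error.code == .highResolutionFrameCaptureInProgress {
            return   // a previous capture is still in flight
        }
        if let pixelBuffer = frame?.capturedImage {
            print("Captured a frame \(CVPixelBufferGetWidth(pixelBuffer)) px wide")
        }
        // It is safe to kick off the next capture from inside this completion handler.
    }
}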
We'd like to use an object both as a source for a USDZ based on the PhotogrammetrySession and as an ARReferenceObject, so that we can overlay information at the same position on both the real object and the created model. Is there any guidance on how to align these coordinate systems, e.g. by aligning the point clouds from the photogrammetry session and reference object? Or can we make assumptions on the origin of the resulting USDZ from the PhotogrammetrySession?
Creating a model for Object Detection and creating a textured mesh with Object Capture are two different use cases with separate workflows, we do not offer a tool to convert from one to another. That sounds like a great use case though, I encourage you to file a feature request.
Is there a maximum number of 2D bodies that can be tracked in an ARWorldTrackingConfiguration?
ARKit detects one body at a time. If multiple people are in the scene, the most prominent one is returned.
Is it now possible to do AR with the ultra wide angle 0.5 camera?
Unfortunately not. ARKit consumes the UW camera internally for certain processing tasks in a specific configuration.
Though I encourage you to file a feature request. Feedback Assistant
We're planning to integrate an AR distance measuring view into our app. Does ARKit now provide the necessary technology to achieve this, or is RealityKit a better match? Are there any useful docs to look at?
ARKit offers several ways to measure distances. You can either evaluate distances from the device to its environment or between ARAnchors.
Please see this documentation to get an overview:
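As a rough illustration (a hedged sketch, not an official recipe), a device-to-environment distance at the screen center could look like this:

import ARKit
import RealityKit
import simd

let screenCenter = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
if let result = arView.raycast(from: screenCenter, allowing: .estimatedPlane, alignment: .any).first,
   let frame = arView.session.currentFrame {
    let cameraColumn = frame.camera.transform.columns.3
    let hitColumn = result.worldTransform.columns.3
    let meters = simd_distance(SIMD3<Float>(cameraColumn.x, cameraColumn.y, cameraColumn.z),
                               SIMD3<Float>(hitColumn.x, hitColumn.y, hitColumn.z))
    print("Distance: \(meters) m")
}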
Are there any good resources on getting started with estimated object dimensions? Similar to the Measure app, but to do height and width.
I recommend checking out the documentation of our Scene Geometry API that we presented in ARKit 3.5. A good overview is given in this tech talk:
Advanced Scene Understanding in AR
After getting a geometry that is good enough, you still have to solve the task of isolating your object of choice and computing its volume. There are several ways of doing it, for example cutting everything off above ground level, or letting the user create a cube object and then intersecting it with the scene geometry. We do not have any code samples for these tasks, though.
Video feed is always overexposed using ARKit. Trying to enable HDR for ARSession doesn't seem to work. Setting videoHDRAllowed to true on ARWorldTrackingConfiguration does not change video rendering. Also when accessing the AVCaptureDevice with ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera, activeFormat.isVideoHDRSupported returns false (on iPhone 12 Pro Max) so I cannot set captureDevice.isVideoHDREnabled to true. Also when using setExposureModeCustom and setting iso to activeFormat.minISO, the image rendered by ARKit always has a way greater exposure than when running an AVCaptureSession. The use case is for using ARKit in a basketball stadium: the pitch always appears totally white with ARKit so we cannot see any player, while with AVCaptureSession (or just the iOS camera app) the pitch and players appear clearly thanks to HDR.
Setting videoHDRAllowed means that HDR will be enabled on the formats supporting it; however, this is not the case for all video formats.
In iOS 16, ARVideoFormat has a new property isVideoHDRSupported. You can filter the list of the configuration's supportedVideoFormats to find one where isVideoHDRSupported is true, and set this format as the configuration's videoFormat before running the session.
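A minimal sketch of that filtering (iOS 16 APIs):

import ARKit

let configuration = ARWorldTrackingConfiguration()
if let hdrFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .first(where: { $0.isVideoHDRSupported }) {
    configuration.videoFormat = hdrFormat
    configuration.videoHDRAllowed = true
}
session.run(configuration)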
Does ARKit or RealityKit support rigid body physics defined in a USD file?
ARKit doesn’t support physics but rather detects the surrounding scene to allow RealityKit to handle virtual objects. RealityKit does support rigid body physics and a good place to start looking is at the physics APIs here:
Preliminary_PhysicsRigidBodyAPI
I'd like to ask what the possible causes might be for the ARSessionDelegate retaining ARFrames console warning. I use the session(_:didUpdate:) delegate method to just check whether the AnchorEntity(plane:) I'm looking for is at a sufficient distance from the camera.
We have a limited pool of resources for our ARFrames, and in order to keep some available for ARKit to process, we recommend processing the frames as quickly as possible. If you need to perform longer computations, you can copy an ARFrame and release the original ARFrame from the delegate method.
I would like to know if it's possible to use SharePlay with a ARKit app? When I try there is no video on the FaceTime call if the back camera is started. Is it possible to have both cameras at the same time (front for FaceTime and back for my AR app)?
ARKit configures the cameras according to the selected configuration. Capturing from another camera while an ARKit session is running is not supported.
Is it possible to do perspective correction in ARKit using the captured depth map? Like on the continuity camera "desk view" for example
Glad you’re also a fan of the new desk view feature. There are potentially two solutions to this:
- Do a single perspective projection for the whole image
- Use a per-pixel correction like you suggested
Both come with their own benefits and drawbacks. Please check out our documentation for implementing the second approach:
Displaying a Point Cloud Using Scene Depth
Is there any plan to allow built-in hand and finger detection within ARKit to let the user interact with an object directly with their hands and not only through touch events on the device screen?
ARKit has no built-in hand or finger detection, but you can use Vision to track hands or detect hand poses. Here is a developer sample illustrating this:
Detecting Hand Poses with Vision
For ARKit feature requests, we encourage you to send us a report in Feedback Assistant
I want to build an AR game where buildings can occlude content. Should I build an occlusion mesh for every building I want occlusion/collision for? I am kind of new to ARKit, but I saw that I can create a Metal renderer to work with ARKit. Can I get depth information using a convolutional neural network from Metal?
Throwing machine learning at it sounds like a super fun project, but I would recommend starting a bit simpler.
As a small experiment, you can take the four corners of a building from the Maps app and then create four location anchors based on these coordinates. As soon as those are tracked, you can look at the local coordinates (in x, y, z) and build a polygon based on them, then extrude it towards the sky (y up) to get a nice collision/occlusion mesh.
Question pageUsing additional cameras in ARKit - are there any resources to show how this is setup?
ARKit allows streaming video from only one camera at a time. Which camera is used is determined by your configuration (e.g. ARFaceTrackingConfiguration will use the front-facing camera, ARWorldTrackingConfiguration will use the back wide camera).
You can, however, enable face anchors detected by the front camera in an ARWorldTrackingConfiguration with userFaceTrackingEnabled. Vice versa, you can enable isWorldTrackingEnabled in an ARFaceTrackingConfiguration to benefit from 6DOF world tracking.
Check out this developer sample:
Combining User Face-Tracking and World Tracking
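A minimal sketch of opting into face anchors from a world-tracking session (assuming `session` is an existing ARSession):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsUserFaceTracking {
    // Face anchors from the front camera are delivered alongside world tracking.
    configuration.userFaceTrackingEnabled = true
}
session.run(configuration)
```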
Question pageAm I able to capture frames off additional camera feeds at the same time (not necessarily exactly synchronous) in ARKit?
We introduced new API to capture frames in higher resolution than your configuration’s video format:
captureHighResolutionFrame(completion:)
Those frames are captured from the same camera, though.
Setting up additional cameras is not supported. We encourage you to file a feature request in Feedback Assistant
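A short sketch of requesting one of those frames (iOS 16+, `session` assumed to be a running ARSession):

```swift
import ARKit

session.captureHighResolutionFrame { frame, error in
    if let frame = frame {
        // frame.capturedImage is a CVPixelBuffer at the higher-resolution format.
        print("Captured frame \(CVPixelBufferGetWidth(frame.capturedImage)) pixels wide")
    } else if let error = error {
        print("High-resolution capture failed: \(error)")
    }
}
```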
Question pageHi! We're working on an AR experience that allows users to put AR objects in their surroundings and replay it later. We're saving the data in an ARWorldMap and archiving it on the filesystem to be retrieved later. Everything works great on smaller areas with small ARWorldMap file sizes. However, as the user adds more stuff, the ARWorldMap file gets bigger, and at some point it takes very long or is even impossible to relocalize using the big ARWorldMap files. I'm seeing slower relocalization on ARWorldMap files with >10 MB size. Question: Is there a known cap on how big ARWorldMap files can be to retain effectiveness of relocalization and the AR experience? What can impact performance for AR relocalization other than lighting conditions and the object textures that we're rendering (maybe area size? camera motion? features in the area?) since we're seeing frame drops on bigger ARWorldMap files.
ARWorldMaps are optimized for room-sized scenarios. If you exceed that limit, relocalization will stop working in certain areas, as the map is no longer big enough to cover the whole area.
The frame drops sound related to the amount of content being displayed though. For that, feel free to provide more details through Feedback Assistant
Question pageIs there a way to force ARWorldMap to relocalize on our position instead of inferring from the features around us? For example, since ARWorldMap has its own root anchor, can we do something like "load this ARWorldMap using my current position/transform in the real world as the root anchor"? From my understanding we can do this with a single/multiple ARObjects but haven't found any resources about collections of ARAnchors in an ARWorldMap
This is not supported out of the box. What you could do is compute the offset between your current location (before relocalization) and after relocalization and apply that accordingly.
Question page3D Model
I am trying to use LiDAR scanner to create a 3d model from capturing an object. But couldn't get enough resources for that. Any references/resources please?
For creating 3D models from images captured on device, this should be a helpful resource to get more inspiration and help:
Bring your world into augmented reality
Question pageiOS 15.4 includes the builtInLiDARDepthCamera type in AVFoundation. Is there any advantage in implementing this camera type when doing Object Capture for better depth calculation, or does that not change the outcome of the rendered 3D model?
Capturing images with LiDAR devices will give you automatic scale estimation and gravity vector information on your final usdz output
Question page4k
Is there a capture video for ARView the way there is a take snapshot()? I see there is 4k video being hyped - will this include the ability to let users take video recordings?
There’s no API in RealityKit to capture video. That said there are system level APIs to capture screen recordings and I wonder if that would be useful for you:
I’d suggest filing a feature request with your use-case. Thanks!
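For reference, a rough sketch of the system-level route using ReplayKit (which records the screen, including the ARView):

```swift
import ReplayKit

let recorder = RPScreenRecorder.shared()

// Start a screen recording of the app.
recorder.startRecording { error in
    if let error = error {
        print("Could not start recording: \(error)")
    }
}

// Later: stop and hand the user the system preview controller to save or share.
recorder.stopRecording { previewController, error in
    // Present previewController from a view controller if it is non-nil.
}
```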
Question pageWhat's the difference between ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing and recommendedVideoFormatFor4KResolution?
recommendedVideoFormatForHighResolutionFrameCapturing is used for capturing high resolution still images while the session is running.
For 4K video, you should use recommendedVideoFormatFor4KResolution
Note that this feature is only supported on iPad with M1
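A minimal sketch of opting into the 4K format where available (iOS 16+, `session` assumed to exist):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
if let format4K = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
    configuration.videoFormat = format4K
}
session.run(configuration)
```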
Question pageDoes setting ARKit to use 4K resolution affect the battery longevity? Does it increase the risk to get the device too hot, even if the fps is limited at 30 fps instead of 60 fps? Is there a way to get 60 fps at 4K resolution?
Yes, using 4K resolution may result in more power being consumed. It may also result in thermal mitigation engaging to keep the device from getting too hot, which may impact performance. At the moment, we are only supporting 4K at 30 Hz.
Question pageWhen using the new 4K resolution in ARKit for a post-production (film/television) workflow, what is the suggested way to take the AR experience and output to a video file?
To capture and replay an ARKit session, see an example here:
Recording and Replaying AR Session Data
If you want to capture video in your app in order to do post processing later, you could use and configure an AVAssetWriter to capture a video.
We also provide a camera frame with every ARFrame; see ARFrame.capturedImage. This is just the 'clean slate': it doesn't contain any virtual content rendered on top of it. If you are doing your own rendering and your Metal textures are backed by IOSurfaces, then you can easily create CVPixelBuffers using those IOSurfaces and pass them to AVFoundation for recording.
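A minimal sketch of the AVAssetWriter route using the camera-only image; the adaptor and start time are assumed to be configured elsewhere:

```swift
import ARKit
import AVFoundation

func record(_ frame: ARFrame,
            adaptor: AVAssetWriterInputPixelBufferAdaptor,
            sessionStartTime: TimeInterval) {
    guard adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    // frame.capturedImage is the camera-only image; composite your own rendering separately.
    let time = CMTime(seconds: frame.timestamp - sessionStartTime, preferredTimescale: 600)
    _ = adaptor.append(frame.capturedImage, withPresentationTime: time)
}
```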
I noticed that the built-in Camera app can detect very small QR codes compared to 4K AR. Why is that? Is there a workaround?
We don’t have QR code detection in ARKit. However, you can use the Vision APIs to do QR code detection on the captured image. This VisionKit talk and article might be of interest to you:
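A rough sketch of running barcode detection on an ARFrame's camera image with Vision (the image orientation may need adjusting for your device orientation):

```swift
import ARKit
import Vision

func detectQRCodes(in frame: ARFrame) -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
    try? handler.perform([request])
    // Each observation's payloadStringValue holds the decoded QR content, if any.
    return (request.results ?? []).compactMap { $0.payloadStringValue }
}
```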
Question pageRegarding the new ARKit 6 API that takes a 4k photo of the AR scene, is there a limit to how many times it can be called? Can I take say 30 photos within a second?
You can take the next photo right after the completion handler of your previous captureHighResolutionFrame call - or even from within the completion handler.
If you try taking a new photo before the previous call has completed, you will receive an ARError.Code.highResolutionFrameCaptureInProgress error in the completion handler.
Question pageAR Quick Look
The LookToCamera Action works different on AR Quick Look than while testing in Reality Composer. Will this be fixed with Xcode 14?
Please file a bug report for this. Any additional information you can provide, such as a video reproducing the issue would be hugely helpful.
https://developer.apple.com/bug-reporting/
Question pageQuickLook is currently very dark (much darker than would be expected). Clients are complaining about washed-out colors, and we need to overcorrect via emission (not ideal, breaks dark rooms). (Easy to test: make a pure white material and display it in QuickLook in a bright room, it will never get white.) Are there plans to fix this?
Great question - and we have good news for you 🙂
We are releasing new lighting to AR Quick Look which is brighter with enhanced contrast and improved shape definition to make your assets look even better.
Please check tomorrow's session - Explore USD tools and rendering - with examples and how to implement it!
Question pageI have a model that is a .reality file, that opens in AR Quick Look. When the user taps the model, it shrinks to show the same light in a different size. However, it's not very clear to the user that this is a possibility. If they don't tap for a while, ARQL encourages them to "Tap the object to activate" is there a way I can customize this message?
That is a standard message and unfortunately there’s currently no way to customize the text.
Alternatively, you can create your own banner within your asset.
For example, you can check out the first asset on the gallery page
Please file a feedback report, if you haven't already 🙏
Question pageIs it possible to support more than one image anchor in a scene with AR Quick Look?
This is not supported at this point. The team is still investigating this possibility.
Question pageI'm creating pendant lights for viewing in in AR Quick Look, is it possible to anchor these to the ceiling of a room?
Yes this is something that is supported in AR Quick Look. You can place objects on the ceiling by dragging them there. This can be done using a regular horizontal (or vertical) anchor.
However, there are potential challenges to be aware of; the biggest is that ceilings usually lack feature points, which makes it difficult to detect a proper plane. Using a device with LiDAR can improve the results that you get.
Question pageUsing AR Quick Look, how might I add a color picker to change between colors of a model? For example, the iMac ARQL on apple.com requires users to jump in and out of ARQL to try different colors. Is there a way to have color pickers in ARQL to try different materials or change different scenes in a .reality file?
You could use Reality Composer's interactions to make an interactive USD where you can tap on different colors to change the model
This would need to be done in 3D. There’s a previous session, Building AR Experiences with Reality Composer, that has some examples.
Question pageIs there a way to have light sources in AR Quick Look files hosted on the web? For example, a client would like to have lamps in AR Quick Look. It would be awesome if we could use RC to turn off/on light sources. Is there any way to do this?
I don't think that it's possible. But you should submit the idea for supporting virtual lights on:
https://feedbackassistant.apple.com
Question pageARKit
Hello there, I am someone who is still fairly novice with Reality / AR Kit. And I want to ask what is the best way to implement Multiuser AR experiences. I’ve been thinking on creating an AR app that would use this feature to allow multiple users to view a single AR view (e.g., multiple users seeing the same rendered model from their own perspectives).
Multiuser AR experiences can be created using the SynchronizationComponent
This is a good tutorial (along with sample code) on building collaborative AR sessions between devices:
https://developer.apple.com/documentation/arkit/creatingacollaborative_session
Question pageIs there a way to localize against a scanned room from the Room Plan API (via ARKit) so that it could be used for example to setup a game in your room and share that with other people?
No, there is no re-localization in RoomPlan. But we expose the ARSession, so you could fall back to ARKit for re-localization.
Is there a suggested manner of writing ARKit/RealityKit experiences to a video file? I'm currently using RealityKit 2's post-processing to convert the source `MTLTexture` to a `CVPixelBuffer`, and writing that to an `AVAssetWriter`, but this occasionally ends up leading to dropped frames or random flickers in the video.
We don't currently have a recommended method for doing this, and as such would love to see a feedback item explaining what you need and a use case explaining it. That would be wonderful.
That said, your method should in theory work, and we'd also love to see a feedback item describing the issues you're seeing.
Question pageI noticed the new beta class "ImageRenderer" for SwiftUI, allowing SwiftUI views to be rendered into a static image and be used as a texture in ARKit. Will there be an interactive version of displaying SwiftUI views in ARKit?
We don’t discuss future plans, but gathering developer feedback is important to us so we’d ask you to post your request to
Question pageCollaboration frameworks for AR are important. Is Apple considering features related to remote participation in AR experiences? Unity has this capability to some extent.
While we don't discuss future plans, we always hope to gather this sort of feedback during WWDC. Thanks for taking the time to share 🙏
We do support collaborative sessions over the same network, more details and sample code can be found here:
Creating a Collaborative Session
Is this what you were looking for?
Question pageI have been really interested in RealityKit and ARKit for the past couple of years. Where can I learn more about it? I’m currently into Designing, Writing, Editing, and Management and would love to work on futuristic tech.
Check out this page:
To learn more about RealityKit and ARKit, I would recommend starting with our documentation and videos. Here are a few links to help you get started:
You can also always ask questions on Developer Forums 💬
Suggestion from non-Apple developer:
Question pageReality Composer is great, but our team of 3D asset modelers has found it easier to sculpt characters in Zbrush. Do ARKit and RealityKit accept models created in Zbrush, or are there intermediate steps best for preparing a model for Apple platforms? (KeyShot, etc.)
Yes, if you can export your assets to FBX, glTF or OBJ, you can convert them to USDZ using Reality Converter, which is compatible with ARKit and RealityKit
Question pageWill there be an async await (concurrency) API to detect when entities are added to an ARView?
Hey there, we don’t discuss future releases of Apple products. But we’d love to hear your feedback and suggestions. Please file your feedback here to get it into our system.
Question pageIs it possible to take a snapshot of only the virtual content and a snapshot of only the real content like in SceneKit?
That’s a good question.
I think you can get some of the way there via the ARKit APIs to get the current frame.
You can also toggle the mode of an ARView to switch it to a .nonAR view, then use ARView.snapshot() to grab a snapshot of the virtual content, and then switch it back.
However, I don’t believe that would give you exactly what you want - I think the ARView snapshot would not necessarily have a transparent background (if that’s what you need). And even then the performance of this may not be great.
You could also try setting the Environment background color to something with 100% alpha.
I’d suggest filing a feature request for this with Bug Reporting
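For illustration, a rough, untested sketch of the mode-toggling approach described above (with the caveats about background and performance still applying):

```swift
import RealityKit
import UIKit

func snapshotVirtualContent(from arView: ARView, completion: @escaping (UIImage?) -> Void) {
    arView.cameraMode = .nonAR
    arView.environment.background = .color(.black) // or another solid color of your choice
    arView.snapshot(saveToHDR: false) { image in
        arView.cameraMode = .ar
        completion(image)
    }
}
```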
Question pageIs VisionKit / Data Scanner available in AR?
Using data scanner via VisionKit is possible with ARKit. ARKit provides the captured image on the ARFrame. One can inject the ARFrame's captured image into the data scanner and obtain information about text.
However, the result will be two-dimensional. If the use-case is to bring the detected text into the AR world in three dimensions one needs to estimate a transform for the 2D text. ARKit does not support this natively but does support custom anchoring.
Question pageCan we get the LiDAR camera position while doing a mesh in ARKit?
ARMeshAnchor transforms are already aligned with the wide camera, which is also what the camera transform is relative to.
Is the mesh from an ARSession available through the delegate methods?
Yes, once you turn on scene reconstruction by setting the sceneReconstruction property on ARWorldTrackingConfiguration. The meshes are available as ARMeshAnchors through ARKit's anchor delegate methods.
Reference APIs:
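A minimal sketch of turning on scene reconstruction and receiving mesh anchors (assuming `session` is an existing ARSession):

```swift
import ARKit

final class MeshReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for meshAnchor in anchors.compactMap({ $0 as? ARMeshAnchor }) {
            // meshAnchor.geometry holds the vertices, normals, and faces of this mesh chunk.
            print("Mesh anchor with \(meshAnchor.geometry.faces.count) faces")
        }
    }
}

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}
let meshReceiver = MeshReceiver() // keep a strong reference to the delegate
session.delegate = meshReceiver
session.run(configuration)
```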
Question pageWhat's the difference between ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing and recommendedVideoFormatFor4KResolution?
recommendedVideoFormatForHighResolutionFrameCapturing is used for capturing high resolution still images while the session is running.
For 4K video, you should use recommendedVideoFormatFor4KResolution
Note that this feature is only supported on iPad with M1
Question pageWhat are some tips/best practices to prevent AR objects from shifting? We're finding a bit of drift that's most noticeable with larger virtual objects.
We recommend adding an ARAnchor in the position where you want to place an object and then associating your node/entity with that anchor. This should help prevent drifting.
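A sketch of that pattern in RealityKit, assuming `tapPoint` comes from a tap gesture and `modelEntity` is already loaded:

```swift
import ARKit
import RealityKit

func place(_ modelEntity: ModelEntity, at tapPoint: CGPoint, in arView: ARView) {
    guard let result = arView.raycast(from: tapPoint,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }
    // Create an ARAnchor at the hit location and attach the entity to it.
    let anchor = ARAnchor(transform: result.worldTransform)
    arView.session.add(anchor: anchor)

    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(modelEntity)
    arView.scene.addAnchor(anchorEntity)
}
```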
I’ve had some experience with Reality Composer, but for coding, I only know SwiftUI. Is it possible to create an AR app with ARKit only with SwiftUI? If so, could you share some suggestions or links on getting started?
You can use ARKit inside a SwiftUI app. You can also use RealityKit to build ARKit apps in a declarative way.
Here are the links to resources and sample code to help you get started:
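A minimal sketch of hosting RealityKit's ARView inside SwiftUI via UIViewRepresentable:

```swift
import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Configure the session, add anchors, or load scenes here.
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        ARViewContainer()
            .ignoresSafeArea()
    }
}
```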
Question pageDoes adding anchors everywhere force ARKit to keep a good understanding and reduce drift everywhere? If yes, will this affect the tracking quality?
ARKit offers functionality to add custom anchors which is the preferred and recommended way to place content.
See the add(anchor:) method.
Custom anchors are used internally for drift correction. We cannot guarantee absolutely no drift. However, using your own anchors will use the system's best knowledge to correct for any drift.
Question pageI am working on an app that uses ARKit to guide the user around an object while semi-automatically capturing images for later (server side) 3D reconstruction. I very much appreciate the ability to control the capture session and the ability to capture high resolution images that you added in iOS 16. I believe currently we do not have much control over the high resolution image capture? It would be great if we could configure the AVCapturePhotoSettings used for the capture. For photogrammetric reconstruction purposes it would be amazing if we could for example capture a Pro RAW image during the ARKit session.
We really appreciate the feedback and are glad that you are already starting to put these API changes to good use! At the moment, we do not expose the ability to pass in AVCapturePhotoSettings through our API, but this would be a great feature request to submit via Bug Reporting
We want to play with the depth map. Is it possible to get the LiDAR camera position with the depth map? We've tried using the wide camera position and it doesn't work, because the wide camera position is not the same as the depth map's camera position.
The depth map surfaced through the Scene Depth API does align with the wide angle camera and should correspond to the camera transform available through the ARFrame.
Here is a sample code that generates a colored point cloud by combining the wide angle camera image and depth map:
Displaying a Point Cloud Using Scene Depth
If you still see some issues, I recommend filing a bug through the feedback assistant at Bug Reporting
Question pageDoes ARKit track which version of USDZ Is in use? I’m interested in using tools from multiple providers in my pipeline and I want to verify the format is consistent through workflow.
ARKit itself has no notion of rendered content. Content (USDZ) is commonly handled by the rendering engine on top of ARKit like RealityKit, SceneKit, Metal, etc.
In order to learn more about USDZ and how to efficiently use it we recommend this talk.
Question pageWhen I read the exif data that ARKit what is SubjectArea? Why does LensSpecification repeat the 4.2 (the focal I think) and the 1.6 (the aperture I think)?
"SubjectArea": ( 2013, 1511, 2116, 1270 )
"LensSpecification": ( 4.2, 4.2, 1.6, 1.6 )
The subject area is defined as a rectangle with a center coordinate and its dimensions. In this case, the center is at (2013, 1511) with rectangle dimensions 2116 x 1270.
For more details, you may refer to Exif standard tags.
Question pageDo modifications made to configurableCaptureDeviceForPrimaryCamera while an ARSession is running change the output of captureHighResolutionFrame? What about modifications before running a new ARConfiguration?
No, modifications to an ARConfiguration object do not affect a running session. You need to call run(_:options:) after modifying the configuration for it to be used.
You can change capture device settings such as exposure, white balance, etc., and those will be reflected in the output of the ARSession. However, you cannot change the input/output configurations on the capture session.
Are there resources on how to generate a texture for the mesh generated by ARKit ?
We do not have any resources for this.
You should be able to use the wide angle camera and camera transform to generate texture maps for the meshes but unfortunately we do not have any resources or sample code showing that.
We do have this sample code showing how to generate colored point clouds using the scene depth API, hope it is of some help.
Displaying a Point Cloud Using Scene Depth
Question pageDo any of the AR frameworks have hand tracking, and the ability to register a pinch between the thumb and pointer finger?
ARKit does not have any hand tracking feature. The Vision framework offers functionality for hand gesture detection.
Detect Body and Hand Pose with Vision - WWDC20
You may find the camera's captured images on the ARFrame and can inject these into Vision. So by combining multiple frameworks you could achieve something close to the requested feature.
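A rough sketch of approximating a pinch with Vision on an ARFrame's camera image; the 0.05 distance threshold and the .right orientation are assumptions you would tune for your app:

```swift
import ARKit
import Vision

func detectPinch(in frame: ARFrame) -> Bool {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])

    guard let observation = request.results?.first,
          let thumbTip = try? observation.recognizedPoint(.thumbTip),
          let indexTip = try? observation.recognizedPoint(.indexTip),
          thumbTip.confidence > 0.3, indexTip.confidence > 0.3 else { return false }

    // Points are normalized image coordinates; a small distance suggests a pinch.
    let distance = hypot(thumbTip.location.x - indexTip.location.x,
                         thumbTip.location.y - indexTip.location.y)
    return distance < 0.05
}
```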
Does ARKit give any confidence score for each camera position it estimates during camera tracking? If any camera position is not estimated correctly do you suggest any option to improve it?
ARKit returns discrete tracking state for every frame update. You can read more about it here:
Managing Session Life Cycle and Tracking Quality
It is highly recommended to integrate standard coaching view in your app to guide users when tracking is limited. More details at:
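A small sketch of observing that state through the session delegate:

```swift
import ARKit

final class TrackingObserver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
        switch camera.trackingState {
        case .normal:
            print("Tracking is normal")
        case .limited(let reason):
            print("Tracking limited: \(reason)") // e.g. .initializing, .excessiveMotion
        case .notAvailable:
            print("Tracking not available")
        }
    }
}
```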
Question pageIs it possible to do body tracking while being in an ARWorldTrackingConfiguration?
3D body tracking is only supported using the ARBodyTrackingConfiguration. However, we support the detection of 2D bodies on multiple configurations; the ARWorldTrackingConfiguration is one of them.
In order to check which configuration supports this you may use the supportsFrameSemantics(_:) function.
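A short sketch of opting into 2D body detection in a world-tracking session (assuming `session` exists):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsFrameSemantics(.bodyDetection) {
    // Detected 2D bodies are then surfaced via ARFrame.detectedBody.
    configuration.frameSemantics.insert(.bodyDetection)
}
session.run(configuration)
```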
Question pageIs there a way to select which rear camera to use for ARView? (Wide, ultrawide, panoramic)
ARKit only supports the wide camera as the primary camera. It is not possible to use other cameras for rendering.
ARFaceTrackingConfiguration, however, uses the front facing camera.
If you have a need to use a different camera, please file a feature request through the feedback assistant at Bug Reporting
Question pageCan I use the MapKit 3D model of a city, and anchor it as a child of an anchor using LiDAR geotracking? For long distance occlusion and collision purposes?
There is no integration of MapKit into the ARView. If you know the building footprint (i.e. the polygon in lat/lon coordinates) or even the exact geometry anchored to a lat/lon coordinate, you can transform these coordinates by placing ARGeoAnchors at that location. Once they are tracked in ARKit, you get the local coordinates and can build an occlusion/collision mesh.
When we have a RoomPlan scan, can we use it next time as an anchor so we can always Paint Model in same place?
RoomPlan is not an ARAnchor in the current design. Thanks for the suggestion; we will take it into consideration.
From a non-Apple developer:
I created a demo where custom ARAnchors are created for RoomPlan objects. The same could be done for surfaces and then saved to a world map:
https://github.com/jmousseau/RoomObjectReplicatorDemo
Question pageDoes setting ARKit to use 4K resolution affect the battery longevity? Does it increase the risk to get the device too hot, even if the fps is limited at 30 fps instead of 60 fps? Is there a way to get 60 fps at 4K resolution?
Yes, using 4K resolution may result in more power being consumed. It may also result in thermal mitigation engaging to keep the device from getting too hot, which may impact performance. At the moment, we are only supporting 4K at 30 Hz.
Question pageARSession has the getGeoLocation(forPoint: … method. Is there also a way to obtain the heading relative to north given a directional vector within the scene or for the device (point of view)?
We are not exposing the heading directly.
You can create any ARGeoAnchor in your vicinity and then compare its transform with your camera's transform. Since ARGeoAnchors are always aligned to East-Up-South, you can derive any global camera orientation by comparing the camera's transform to the ARGeoAnchor transform.
Might there be more example projects showcasing pure Metal with ARKit? SceneKit is cool, but admittedly, I'd love to see more low-level examples. :) Alternatively, is anyone working on some open source projects showcasing something like this? I think it would be a big win for Apple-platform development to build-up a lot more examples.
Thanks for the suggestion. Here are some existing sample code that uses Metal with ARKit:
- Displaying a Point Cloud Using Scene Depth
- Creating a Fog Effect Using Scene Depth
- Displaying an AR Experience with Metal
Do any of the AR frameworks accept location or hand position from Apple Watch?
No, ARKit runs standalone on iPhone and iPad devices only and does not take any external inputs.
Question pageWe can capture session events (namely anchors add/remove) by implementing ARSessionDelegate (not RealityKit), is it possible get similar or part of this events with RealityKit? (To avoid converting a from ARAnchor to AnchorEntity)
RealityKit exposes the ARSession through this API:
https://developer.apple.com/documentation/realitykit/arview/session
You can set the delegate on it to listen to ARKit delegate events.
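A minimal sketch of that setup; the observer must be retained somewhere for the callbacks to keep firing:

```swift
import ARKit
import RealityKit

final class SessionObserver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        print("Added \(anchors.count) ARAnchors")
    }

    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        print("Removed \(anchors.count) ARAnchors")
    }
}

// During setup:
// arView.session.delegate = sessionObserver
```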
Question pageWhen using the new 4K resolution in ARKit for a post-production (film/television) workflow, what is the suggested way to take the AR experience and output to a video file?
To capture and replay an ARKit session, see an example here:
Recording and Replaying AR Session Data
If you want to capture video in your app in order to do post processing later, you could use and configure an AVAssetWriter to capture a video.
We also provide a camera frame with every ARFrame; see ARFrame.capturedImage. This is just the 'clean slate': it doesn't contain any virtual content rendered on top of it. If you are doing your own rendering and your Metal textures are backed by IOSurfaces, then you can easily create CVPixelBuffers using those IOSurfaces and pass them to AVFoundation for recording.
Is there a way to get notified when ARKit relocates itself after it finds out that it has drifted? From my experience, the tracking status does not change when this happens. Also is there a way to ask ARKit to not try to relocate itself after a drift?
We recommend adding ARAnchors and associating your virtual content with them. In case there is a drift, the session delegate session(_:didUpdate:) would update the anchor such that the virtual content stays in the same location in the real world.
Any guidance on how to build a bridge between ARKit and Spatial Audio? Say you're viewing an object and the audio evolves as you change the object's perspective...
We do not have sample code that uses ARKit together with spatial audio (PHASE). However, this is a great question; please send us a request through Bug Reporting
Question pageWith body motion tracking, can a dev specify the sample rate of the sample (every few ms) and write out that sample in a continues manner. eg a basic motion recorder. Please ignore the question if this is the wrong place to ask
Body tracking runs at 60 Hz, at the same cadence as the camera. We cannot compute this faster than that. However, by changing the ARVideoFormat you may change this to 30 Hz or other supported frame rates.
We do not offer functionality to write the motion capture data to a file. However, our data is very compatible with a format called BVH. By following the topology of the skeleton given by the ARSkeletonDefinition and the data coming from an ARBodyAnchor, one could generate such a file output.
Have there been any changes to the light estimation APIs? For example, is directional light available with a world tracking config?
No, there haven’t been changes to light estimation in ARKit this year.
Question pageIn "Discover ARKit 6" there's a cool demo of setting a point in AR where a picture was taken, and guiding the user there. Is sample code for this available somewhere?
Thanks for your excitement about that app idea. We do not have the sample code, but I recommend going to our Explore ARKit 4 session where we explain how to pick a coordinate in Maps and create an ARGeoAnchor based on it. For alignment with the real world we have the example with the Ferry Building in SF. We followed that exact workflow with the focus square example.
Is adding custom anchors for drift correction pertinent on a LiDAR enabled device?
In general, we recommend using/adding anchors regardless of the device you are using.
Question pageI noticed that the built-in Camera app can detect very small QR codes compared to 4K AR. Why is that? Is there a workaround?
We don’t have QR code detection in ARKit. However, you can use the Vision APIs to do QR code detection on the captured image. This VisionKit talk and article might be of interest to you:
Question pageRegarding the new ARKit 6 API that takes a 4k photo of the AR scene, is there a limit to how many times it can be called? Can I take say 30 photos within a second?
You can take the next photo right after the completion handler of your previous captureHighResolutionFrame call - or even from within the completion handler.
If you try taking a new photo before the previous call has completed, you will receive an ARError.Code.highResolutionFrameCaptureInProgress error in the completion handler.
Question pageWe'd like to use an object both as a source for a USDZ based on the PhotogrammetrySession and as an ARReferenceObject, so that we can overlay information at the same position on both the real object and the created model.Is there any guidance on how to align these coordinate systems, e.g. by aligning the point clouds from the photogrammetry session and reference object? Or can we make assumptions on the origin of the resulting USDZ from the PhotogrammetrySession?
Creating a model for Object Detection and creating a textured mesh with Object Capture are two different use cases with separate workflows, we do not offer a tool to convert from one to another. That sounds like a great use case though, I encourage you to file a feature request.
Question pageIs there a maximum number of 2D bodies that can be tracked in an ARWorldTrackingConfiguration?
ARKit detects one body at a time. If multiple people are in the scene, the most prominent one is returned.
Question pageIs it now possible to do AR with the ultra wide angle 0.5 camera?
Unfortunately not. ARKit consumes the UW camera internally for certain processing tasks in a specific configuration.
Though I encourage you to file a feature request. Feedback Assistant
Question pageWe're planning to integrate an AR distance measuring view into our app. Does ARKit now provide the necessary technology to achieve this, or is RealityKit a better match? Are there any useful docs to look at?
ARKit offers several ways to measure distances. You can either evaluate distances from the device to its environment or between ARAnchors.
Please see this documentation to get an overview:
Question pageAre there any good resources on getting started with estimated object dimensions? Similar to the measurable app but to do height and width.
I recommend checking out our documentation of our SceneGeometry API that we presented in ARKit 3.5. A good overview is given in this tech talk:
Advanced Scene Understanding in AR
After getting a geometry that is good enough, you still have to solve the task of isolating your object of choice and computing its volume. There are several ways of doing this: for example, cutting everything off above the ground level, or letting the user create a cube object and then intersecting it with the scene geometry. We do not have any code sample for these tasks, though.
Question pageVideo feed is always overexposed using ARKit. Trying to enable HDR for ARSession doesn't seem to work. Setting videoHDRAllowed to true on ARWorldTrackingConfiguration does not change video rendering. Also when accessing the AVCaptureDevice with ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera, activeFormat.isVideoHDRSupported returns false (on iPhone 12 Pro Max) so I cannot set captureDevice.isVideoHDREnabled to true. Also when using setExposureModeCustom and setting iso to activeFormat.minISO, the image rendered by ARKit has always a way greater exposure than when running an AVCaptureSession. The use case is for using ARKit in a Basketball stadium: the pitch always appears totally white with ARKit so we cannot see any player while with AVCaptureSession (or just the iOS camera app) the pitch and players appear clearly thanks to HDR.
Setting videoHDRAllowed means that HDR will be enabled on the formats supporting it; however, this is not the case for all video formats.
In iOS 16, ARVideoFormat has a new property, isVideoHDRSupported. You can filter the list of the configuration's supportedVideoFormats to find one where isVideoHDRSupported is true, and set that format as the configuration's videoFormat before running the session.
before running the session.
Does ARKit or RealityKit support rigid body physics defined in a USD file?
ARKit doesn’t support physics but rather detects the surrounding scene to allow RealityKit to handle virtual objects. RealityKit does support rigid body physics and a good place to start looking is at the physics APIs here:
Preliminary_PhysicsRigidBodyAPI
Question pageI’d like to ask what might the possible causes be for the ARSessionDelegate retaining ARFrames console warning. I use the session:didUpdateframe: delegate method to just check whether the AnchorEntity(plane:) I’m looking for is in a sufficient distance from the camera.
We have a limited pool of resources for our ARFrames, and in order to keep some available for ARKit to process, we recommend processing the frames as quickly as possible. If you need to perform longer computations, you can copy an ARFrame and release the ARFrame from the delegate method.
I would like to know if it's possible to use SharePlay with a ARKit app? When I try there is no video on the FaceTime call if the back camera is started. Is it possible to have both cameras at the same time (front for FaceTime and back for my AR app)?
ARKit configures the cameras according to the selected configuration. Capturing from another camera while an ARKit session is running is not supported.
Question pageIs it possible to do perspective correction in ARKit using the captured depth map? Like on the continuity camera "desk view" for example
Glad you’re also a fan of the new desk view feature. There are potentially two solutions to this:
- Do a single perspective projection for the whole image
- Use a per-pixel correction like you suggested
Both come with their own benefits and drawbacks. Please check out our documentation for implementing the second approach:
Displaying a Point Cloud Using Scene Depth
Question pageIs there any plan to allow built-in hand and finger detection within ARKit to let the user interact with an object directly with his hands and not only though touch events on the device screen?
ARKit has no built-in hand or finger detection, but you can use Vision to track hands or detect hand poses. Here is a developer sample illustrating this:
Detecting Hand Poses with Vision
For ARKit feature requests, we encourage you to send us a report in Feedback Assistant
Question pageI want to build an AR game where buildings can occlude content. Should I build an occlusion mesh for every building I want occlusion/collision. I am kind of new to ARKit but I saw that I can create a metal renderer to work with ARKit. Can I get depth information using a convolutional neural network from metal?
Throwing machine learning at it sounds like a super fun project, but I would recommend starting a bit simpler.
As a small experiment, you can take the four corners of a building from the Maps app and then create four location anchors based on these coordinates. As soon as those are tracked, you can look at the local coordinates (in x, y, z) and build a polygon from them; extrude it towards the sky (y up) to get a nice collision/occlusion mesh.
Question pageUsing additional cameras in ARKit - are there any resources to show how this is setup?
ARKit allows streaming video from only one camera at a time. Which camera is used is determined by your configuration (e.g. ARFaceTrackingConfiguration will use the front-facing camera, ARWorldTrackingConfiguration will use the back wide camera).
You can, however, enable face anchors detected by the front camera in an ARWorldTrackingConfiguration with userFaceTrackingEnabled. Vice versa, you can enable isWorldTrackingEnabled in an ARFaceTrackingConfiguration to benefit from 6DOF world tracking.
Check out this developer sample:
Combining User Face-Tracking and World Tracking
Question pageAm I able to capture frames off additional camera feeds at the same time (not necessarily exactly synchronous) in ARKit?
We introduced new API to capture frames in higher resolution than your configuration’s video format:
captureHighResolutionFrame(completion:)
Those frames are captured from the same camera, though.
Setting up additional cameras is not supported. We encourage you to file a feature request in Feedback Assistant
Question pageHi! We're working on an AR experience that allows users to put AR objects in their surroundings and replay it later. We're saving the data in an ARWorldMap and archiving it on the filesystem to be retrieved later. Everything works great on smaller areas with small ARWorldMap file sizes. However, as the user adds more stuff, the ARWorldMap file gets bigger, and at some point it takes very long or is even impossible to relocalize using the big ARWorldMap files. I'm seeing slower relocalization on ARWorldMap files with >10 MB size. Question: Is there a known cap on how big ARWorldMap files can be to retain effectiveness of relocalization and the AR experience? What can impact performance for AR relocalization other than lighting conditions and the object textures that we're rendering (maybe area size? camera motion? features in the area?) since we're seeing frame drops on bigger ARWorldMap files.
ARWorldMaps are optimized for room-sized scenarios. If you exceed that limit, relocalization will stop working in certain areas, as the map is no longer big enough to cover the whole area.
The frame drops sound related to the amount of content being displayed though. For that, feel free to provide more details through Feedback Assistant
Question pageIs there a way to force ARWorldMap to relocalize on our position instead of inferring from the features around us? For example, since ARWorldMap has its own root anchor, can we do something like "load this ARWorldMap using my current position/transform in the real world as the root anchor"? From my understanding we can do this with a single/multiple ARObjects but haven't found any resources about collections of ARAnchors in an ARWorldMap
This is not supported out of the box. What you could do is compute the offset between your current location (before relocalization) and after relocalization and apply that accordingly.
Question pageBody Tracking
Is it possible to do body tracking while being in an ARWorldTrackingConfiguration?
3D body tracking is only supported using the ARBodyTrackingConfiguration. However, we support the detection of 2D bodies on multiple configurations; the ARWorldTrackingConfiguration is one of them.
In order to check which configuration supports this you may use the supportsFrameSemantics(_:) function.
Question pageWith body motion tracking, can a dev specify the sample rate of the sample (every few ms) and write out that sample in a continues manner. eg a basic motion recorder. Please ignore the question if this is the wrong place to ask
Body tracking runs at 60 Hz, at the same cadence as the camera. We cannot compute this faster than that. However, by changing the ARVideoFormat you may change this to 30 Hz or other supported frame rates.
We do not offer functionality to write the motion capture data to a file. However, our data is very compatible with a format called BVH. By following the topology of the skeleton given by the ARSkeletonDefinition and the data coming from an ARBodyAnchor, one could generate such a file output.
Is there a maximum number of 2D bodies that can be tracked in an ARWorldTrackingConfiguration?
ARKit detects one body at a time. If multiple people are in the scene, the most prominent one is returned.
Question pageCamera
iOS 15.4 includes the builtInLiDARDepthCamera type in AVFoundation. Is there any advantage in implementing this camera type when doing Object Capture for better depth calculation, or does that not change the outcome of the rendered 3D model?
Capturing images with LiDAR devices will give you automatic scale estimation and gravity vector information on your final usdz output
Question pageCan we get the LiDAR camera position while doing a mesh in ARKit?
ARMeshAnchor transforms are already aligned with the wide camera, which is also what the camera transform is relative to.
We want to play with the depth map. Is it possible to get the LiDAR camera position with the depth map? We've tried using the wide camera position and it doesn't work, because the wide camera position is not the same as the depth map's camera position.
The depth map surfaced through the Scene Depth API does align with the wide angle camera and should correspond to the camera transform available through the ARFrame.
Here is a sample code that generates a colored point cloud by combining the wide angle camera image and depth map:
Displaying a Point Cloud Using Scene Depth
If you still see some issues, I recommend filing a bug through the feedback assistant at Bug Reporting
Question pageDo modifications made to configurableCaptureDeviceForPrimaryCamera while an ARSession is running change the output of captureHighResolutionFrame? What about modifications before running a new ARConfiguration?
No, modifications to an ARConfiguration object do not affect a running session. You need to call run(_:options:) after modifying the configuration for it to be used.
You can change capture device settings such as exposure, white balance, etc., and those will be reflected in the output of the ARSession. However, you cannot change the input/output configurations on the capture session.
Are there resources on how to generate a texture for the mesh generated by ARKit ?
We do not have any resources for this.
You should be able to use the wide angle camera and camera transform to generate texture maps for the meshes but unfortunately we do not have any resources or sample code showing that.
We do have this sample code showing how to generate colored point clouds using the scene depth API, hope it is of some help.
Displaying a Point Cloud Using Scene Depth
Question pageDoes ARKit give any confidence score for each camera position it estimates during camera tracking? If any camera position is not estimated correctly do you suggest any option to improve it?
ARKit returns discrete tracking state for every frame update. You can read more about it here:
Managing Session Life Cycle and Tracking Quality
It is highly recommended to integrate standard coaching view in your app to guide users when tracking is limited. More details at:
Question pageIs there a way to select which rear camera to use for ARView? (Wide, ultrawide, panoramic)
ARKit only supports the wide camera as the primary camera. It is not possible to use other cameras for rendering.
ARFaceTrackingConfiguration, however, uses the front facing camera.
If you have a need to use a different camera, please file a feature request through the feedback assistant at Bug Reporting
Question pageARSession has the getGeoLocation(forPoint: … method. Is there also a way to obtain the heading relative to north given a directional vector within the scene or for the device (point of view)?
We are not exposing the heading directly.
You can create any ARGeoAnchor in your vicinity and then compare its transform with your camera's transform. Since ARGeoAnchors are always aligned to East-Up-South, you can derive any global camera orientation by comparing the camera's transform to the ARGeoAnchor transform.
When using the new 4K resolution in ARKit for a post-production (film/television) workflow, what is the suggested way to take the AR experience and output to a video file?
To capture and replay an ARKit session, see an example here:
Recording and Replaying AR Session Data
If you want to capture video in your app in order to do post processing later, you could use and configure an AVAssetWriter to capture a video.
We also provide a camera frame with every ARFrame; see ARFrame.capturedImage. This is just the 'clean slate': it doesn't contain any virtual content rendered on top of it. If you are doing your own rendering and your Metal textures are backed by IOSurfaces, then you can easily create CVPixelBuffers using those IOSurfaces and pass them to AVFoundation for recording.
With body motion tracking, can a dev specify the sample rate of the sample (every few ms) and write out that sample in a continues manner. eg a basic motion recorder. Please ignore the question if this is the wrong place to ask
Body tracking runs at 60 Hz, at the same cadence as the camera. We cannot compute this faster than that. However, by changing the ARVideoFormat you may change this to 30 Hz or other supported frame rates.
We do not offer functionality to write the motion capture data to a file. However, our data is very compatible with a format called BVH. By following the topology of the skeleton given by the ARSkeletonDefinition and the data coming from an ARBodyAnchor, one could generate such a file output.
Is it now possible to do AR with the ultra wide angle 0.5 camera?
Unfortunately not. ARKit consumes the UW camera internally for certain processing tasks in a specific configuration.
Though I encourage you to file a feature request. Feedback Assistant
Question pageI would like to know if it's possible to use SharePlay with a ARKit app? When I try there is no video on the FaceTime call if the back camera is started. Is it possible to have both cameras at the same time (front for FaceTime and back for my AR app)?
ARKit configures the cameras according to the selected configuration. Capturing from another camera while an ARKit session is running is not supported.
Question pageUsing additional cameras in ARKit - are there any resources to show how this is setup?
ARKit allows streaming video from only one camera at a time. Which camera is used is determined by your configuration (e.g. ARFaceTrackingConfiguration will use the front-facing camera, ARWorldTrackingConfiguration will use the back wide camera).
You can, however, enable face anchors detected by the front camera in an ARWorldTrackingConfiguration with userFaceTrackingEnabled. Vice versa, you can enable isWorldTrackingEnabled in an ARFaceTrackingConfiguration to benefit from 6DOF world tracking.
Check out this developer sample:
Combining User Face-Tracking and World Tracking
Question pageAm I able to capture frames off additional camera feeds at the same time (not necessarily exactly synchronous) in ARKit?
We introduced new API to capture frames in higher resolution than your configuration’s video format:
captureHighResolutionFrame(completion:)
Those frames are captured from the same camera, though.
Setting up additional cameras is not supported. We encourage you to file a feature request in Feedback Assistant
Question pageDebugging
We are seeing some memory leaks when adding ModelEntities to an anchor, pausing the ARSession and starting it again and adding ModelEntities again....We see memory growing in the re::SyncObject section. Does anyone have experience troubleshooting memory leaks that have happened in a similar way?
I’d recommend this year’s WWDC Xcode session, What's new in Xcode, for what’s new in debugging. And there have been many other excellent sessions over the years on debugging.
That said if you believe it may be RealityKit or another system framework responsible for leaking the entities we’d ask you to file a Feedback Item on http://feedbackassistant.apple.com if you haven’t done so already.
Question pageDesign
From an AR design perspective, what is best for knocking down objects? Say in a game where you knock down blocks, is it better to have the user run the device through the blocks, tap the blocks, or press a button to trigger something to hit the blocks?
It depends which approach is best — each have a set of pros and cons based on what you want out of the experience.
It can be compelling to run through AR blocks if you want to emphasize lots of user motion in an experience and the scale of the experience is quite large — good for apps that can take advantage of wide open spaces.
Tapping them is more immediate and indirect so if you wanted to destroy a tower quickly or something like that then that would be the way to go — and I could see that being very satisfying to trigger many physics objects to react at once.
I think the same would apply to a button press, it’s an indirect way to trigger it if the experience requires rapidly knocking them down.
Overall I think it’s up to what you want the experience to be, and maintaining internal consistency with other interactions within the app.
SwiftStrike and SwiftShot are great example apps that use similar techniques.
Question pageDrift
What are some tips/best practices to prevent AR objects from shifting? We're finding a bit of drift that's most noticeable with larger virtual objects.
We recommend adding an ARAnchor in the position where you want to place an object and then associating your node/entity with that anchor. This should help prevent drifting.
Does adding anchors everywhere force ARKit to keep a good understanding and reduce drift everywhere? If yes, will this affect the tracking quality?
ARKit offers functionality to add custom anchors which is the preferred and recommended way to place content.
See the add(anchor:) method.
Custom anchors are used internally for drift correction. We cannot guarantee absolutely no drift. However, using your own anchors will use the system's best knowledge to correct for any drift.
Question pageIs there a way to get notified when ARKit relocates itself after it finds out that it has drifted? From my experience, the tracking status does not change when this happens. Also is there a way to ask ARKit to not try to relocate itself after a drift?
We recommend adding ARAnchors and associating your virtual content with them. In case there is a drift, the session delegate session(_:didUpdate:) would update the anchor such that the virtual content stays in the same location in the real world.
Is adding custom anchors for drift correction pertinent on a LiDAR enabled device?
In general, we recommend using/adding anchors regardless of the device you are using.
Question pageFeedback
The recent update of Xcode made scenes in Reality Composer look much darker. Is this caused by a change in RealityKit?
We are aware of a bug in macOS Ventura/iOS 16 that is causing the lighting to appear darker. Please feel free to file a bug report on Feedback Assistant about this.
Question pageAre there any plans (or is there any way?) to bring post-process effects and lights into Reality Composer? I'm making a short animated musical film in AR. I love how RC does so much automatically (spatial audio, object occlusion...). I just wish it was possible to amp up the cinematic-ness a little with effects.
Post processing effects are not supported in RC right now, only in a RealityKit app. However, feel free to file a feature request on Feedback Assistant about this.
Question pageThe LookToCamera Action works different on AR Quick Look than while testing in Reality Composer. Will this be fixed with Xcode 14?
Please file a bug report for this. Any additional information you can provide, such as a video reproducing the issue would be hugely helpful.
https://developer.apple.com/bug-reporting/
Question pageAre there guidelines or best practices for exporting a RealityKit scene to a USDZ? Is this possible? I’ve seen just a little about the ModelIO framework. Is this the tool we should be using?
I don’t think we have any guidelines about this, since exporting/saving a scene is not supported by the current APIs. ModelIO seems like a reasonable solution to me, but you might also want to file a feature request for this on Feedback Assistant.
Question pageWhen do you think we will see new versions of Reality Composer and Reality Converter apps? I'm a college professor - Graduate Industrial Design, and use these as an intro to AR tools. Better, more capable versions might be nice? Thanks.
Unfortunately, we don’t discuss our future plans. However, we are aware that our tools haven’t been updated in a few years and could use some new features. Could you share what features you are looking for us to add?
Question pageWe are seeing some memory leaks when adding ModelEntities to an anchor, pausing the ARSession and starting it again and adding ModelEntities again....We see memory growing in the re::SyncObject section. Does anyone have experience troubleshooting memory leaks that have happened in a similar way?
I’d recommend this year’s WWDC Xcode session, What's new in Xcode, for what’s new in debugging. And there have been many other excellent sessions over the years on debugging.
That said if you believe it may be RealityKit or another system framework responsible for leaking the entities we’d ask you to file a Feedback Item on http://feedbackassistant.apple.com if you haven’t done so already.
Question pageAny plans for instant AR tracking on devices without LiDAR? This could be helpful for translation apps and other apps that overlay 2D text/images on 3D landmarks.
You might want to ask this to the ARKit team, but I’m not aware of any plans.
A feedback item would be good though!
Question pageIs there a suggested manner of writing ARKit/RealityKit experiences to a video file? I'm currently using RealityKit 2's post-processing to convert the source `MTLTexture` to a `CVPixelBuffer`, and writing that to an `AVAssetWriter`, but this occasionally ends up leading to dropped frames or random flickers in the video.
We don't currently have a recommended method for doing this, and as such would love to see a feedback item explaining what you need and a use case explaining it. That would be wonderful.
That said, your method should in theory work, and we'd also love to see a feedback item describing the issues you're seeing.
Question pageI noticed the new beta class "ImageRenderer" for SwiftUI, allowing SwiftUI views to be rendered into a static image and be used as a texture in ARKit. Will there be an interactive version of displaying SwiftUI views in ARKit?
We don’t discuss future plans, but gathering developer feedback is important to us so we’d ask you to post your request to
Question pageAt last year's WWDC 2021 RealityKit 2.0 got new changes to make programming with Entity Component System (ECS) easier and simpler! The current RealityKit ECS code seems too cumbersome and hard to program. Will ease of programming with ECS be a focus in the future?
While we don’t discuss specific future plans, we always want to make RealityKit as easy to use for everyone as we can.
We’d ask you to post your issues and/or suggestions to Bug Reporting
I’d love to find out more about what you find too cumbersome. Thanks!
Question pageCollaboration frameworks for AR are important. Is Apple considering features related to remote participation in AR experiences? Unity has this capability to some extent.
While we don't discuss future plans, we always hope to gather this sort of feedback during WWDC. Thanks for taking the time to share 🙏
We do support collaborative sessions over the same network, more details and sample code can be found here:
Creating a Collaborative Session
Is this what you were looking for?
Question pageI need SwiftUI Views in my RealityKit experience...please and ASAP.
You can host RealityKit content inside SwiftUI views with UIViewRepresentable
If you’re asking if you can use SwiftUI content within RealityKit - there is no direct support for that at present and we’d ask you to file a feedback item explaining your use-case for that feature.
Question pageI’d love to have SF Symbols renderable in AR! It actually works with RealityKit on macOS by copy and pasting the symbols, but not available in the system font on iOS.
You may want to check with the SF Symbols team to confirm this is not possible yet, and also file a feature request on feedback assistant.
A bit of a hack solution, but you may be able to get this to work by drawing the Symbol to a CGImage
and passing that image in as a texture.
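A hedged sketch of that workaround, rasterizing the symbol with UIGraphicsImageRenderer and showing it on an unlit plane; the size, symbol name, and material choice are placeholder assumptions.
import RealityKit
import UIKit

// Rasterize an SF Symbol into a CGImage, turn it into a TextureResource, and apply it to a plane.
func symbolPlane(symbolName: String) throws -> ModelEntity? {
    guard let symbol = UIImage(systemName: symbolName) else { return nil }
    let size = CGSize(width: 256, height: 256)
    let rendered = UIGraphicsImageRenderer(size: size).image { _ in
        symbol.draw(in: CGRect(origin: .zero, size: size))
    }
    guard let cgImage = rendered.cgImage else { return nil }
    let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    return ModelEntity(mesh: .generatePlane(width: 0.1, depth: 0.1), materials: [material])
}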
Many of you know the .glb file format (Android's Scene Viewer) supports compression such as Draco. Are there any plans to support compression for .usdz files?
I would suggest filing an enhancement request on feedback assistant for this
Question pageWill there be an async await (concurrency) API to detect when entities are added to an ARView?
Hey there, we don’t discuss future releases of Apple products. But we’d love to hear your feedback and suggestions. Please file your feedback here to get it into our system.
Question pageIs there any update to Reality Composer this year?
No.
We don't discuss details about unreleased updates, but one of the things that’s most helpful to us as we continue to build out our suite of augmented reality developer tools is feedback
Please continue to submit ideas or suggestions in Feedback Assistant 🙂
Question pageIs there a way to have light sources in AR Quick Look files hosted on the web? For example, a client would like to have lamps in AR Quick Look. It would be awesome if we could use RC to turn off/on light sources. Is there any way to do this?
I don't think that it's possible. But you should submit the idea for supporting virtual lights on:
https://feedbackassistant.apple.com
Question pageMany aspects of USD are open source. Could Reality Composer also be Open-Sourced so that members of the community could work on features?
Hey there, we’d definitely be interested in hearing more about your idea.
I’d suggest submitting the suggestion at Bug Reporting
Question pageWith USDZ content, what's the best way to link to an external website or take users to a product landing page?
If you have your USDZ content on the web you can check out the AR Quick Look functionality for such things at:
Adding an Apple Pay Button or a Custom Action in AR Quick Look
As far as I know there isn’t currently a way to do such a thing directly from a USDZ sent from iMessage, but I can pass that request along.
Question pageCan Reality Composer be made available as a macOS app in the App Store?
While Reality Composer is available only for iOS and iPadOS on the App Store, we'll pass this feedback along. Thanks 🙏
Reality Composer is available on macOS as part of Xcode as a Developer Tool, though.
Question pageIs there a means of exporting a USDZ file (either from Reality Composer, Cinema 4D, etc., or programmatically), with a video texture already applied?
There’s no support for that in Reality Composer currently. As always a feature request filed on Bug Reporting would be most appreciated.
There’s also no method to export USDZ from RealityKit and again feature requests appreciated. Thank you!
Question pageIs it possible to control audio media in USDZ (i.e. pause, skip, load new audio file) with a scene / behavior (using Reality Composer or other tool)?
Currently Reality Composer does not support this. This sounds like a great feature request and we would appreciate if you can file feedback through Feedback Assistant.
If you are willing to jump into code…
You can use the AudioPlaybackController
returned from the playAudio API to play, pause, etc. You can also use AudioFileResource
to add / replace audio on entities.
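A small sketch of that code path, assuming an audio file named "chime.mp3" bundled with the app and an existing entity.
import RealityKit

// Load an audio resource, attach it to an entity, and drive playback through the returned controller.
func startAudio(on entity: Entity) throws -> AudioPlaybackController {
    let resource = try AudioFileResource.load(named: "chime.mp3",
                                              inputMode: .spatial,
                                              loadingStrategy: .preload,
                                              shouldLoop: false)
    let controller = entity.prepareAudio(resource)
    controller.play()      // later: controller.pause(), controller.stop()
    return controller
}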
Regarding optimizations: is there support for level of detail and instancing in RealityKit?
Instancing is mostly abstracted away behind the Entity.clone()
method.
Level of detail is not currently exposed as API and we’d recommend filing a feature suggestion on Bug Reporting
That said you can implement Level of Detail yourself (probably using custom Systems and Components) although we understand that may not be ideal. Please file feature suggestions regardless!
Question pageIs there a plan to have custom render passes like in SceneKit with SCNTechnique in RealityKit?
While we do not currently support custom render passes, we have support for post process effects. Please file a feature request through Feedback Assistant if your use case requires more customization 🙏
Question pageIn Reality Composer, a force was applied to an object. Then I wanted to animate it into another scene, starting from the post force location. Is there a way to apply a new scene using its last known position? I hacked the position by guessing the ending location and starting the next scene close to that position but it results in a slight motion jitter.
This may be achievable if embedded in Xcode with some code.
I recommend signing up for a Reality Composer lab if you would like to explore that further.
But yes, being able to observe live parameters sounds like a great feature in Reality Composer. Please file a feature request using Feedback Assistant with your use case 🙂
Question pageI am working on an app that uses ARKit to guide the user around an object while semi-automatically capturing images for later (server side) 3D reconstruction. I very much appreciate the ability to control the capture session and the ability to capture high resolution images that you added in iOS 16. I believe currently we do not have much control over the high resolution image capture? It would be great if we could configure the AVCapturePhotoSettings used for the capture. For photogrammetric reconstruction purposes it would be amazing if we could for example capture a Pro RAW image during the ARKit session.
We really appreciate the feedback and are glad that you are already starting to put these API changes to good use! At the moment, we do not expose the ability to pass in AVCapturePhotoSettings
through our API, but this would be a great feature request to submit via Bug Reporting
We want to play with the depth map. Is it possible to get the LiDAR camera position with the depth map? We've tried using the wide camera position and it doesn't work, because the wide camera position is not the same as the depth map's camera position.
The depth map surfaced through the Scene Depth API does align with the wide angle camera and should correspond to the camera transform available through the ARFrame
.
Here is sample code that generates a colored point cloud by combining the wide angle camera image and depth map:
Displaying a Point Cloud Using Scene Depth
If you still see some issues, I recommend filing a bug through the feedback assistant at Bug Reporting
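For example, a minimal sketch of reading the depth map together with the wide camera's transform and intrinsics from the same ARFrame, assuming sceneDepth is enabled via the configuration's frameSemantics.
import ARKit

// A delegate that reads the depth map and the matching camera pose each frame.
// Enable depth first: configuration.frameSemantics.insert(.sceneDepth)
final class DepthReader: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let sceneDepth = frame.sceneDepth else { return }
        let depthMap = sceneDepth.depthMap            // aligned with the wide-angle image
        let cameraTransform = frame.camera.transform  // camera pose for this frame
        let intrinsics = frame.camera.intrinsics      // use these to unproject depth samples
        _ = (depthMap, cameraTransform, intrinsics)
    }
}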
Question pageIs there a way to select which rear camera to use for ARView? (Wide, ultrawide, panoramic)
ARKit only supports the wide camera as the primary camera. It is not possible to use other cameras for rendering.
ARFaceTrackingConfiguration, however, uses the front facing camera.
If you have a need to use a different camera, please file a feature request through the feedback assistant at Bug Reporting
Question pageAny guidance on how to build a bridge between ARKit and Spatial Audio? Say you're viewing an object and the audio evolves as you change the object's perspective...
We do not have sample code that uses ARKit together with spatial audio (PHASE). However, this is a great question. Please send us a request through Bug Reporting
Question pageWe'd like to use an object both as a source for a USDZ based on the PhotogrammetrySession and as an ARReferenceObject, so that we can overlay information at the same position on both the real object and the created model.Is there any guidance on how to align these coordinate systems, e.g. by aligning the point clouds from the photogrammetry session and reference object? Or can we make assumptions on the origin of the resulting USDZ from the PhotogrammetrySession?
Creating a model for Object Detection and creating a textured mesh with Object Capture are two different use cases with separate workflows, we do not offer a tool to convert from one to another. That sounds like a great use case though, I encourage you to file a feature request.
Question pageIs it now possible to do AR with the ultra wide angle 0.5 camera?
Unfortunately not. ARKit consumes the UW camera internally for certain processing tasks in a specific configuration.
Though I encourage you to file a feature request. Feedback Assistant
Question pageIs there any plan to allow built-in hand and finger detection within ARKit to let the user interact with an object directly with his hands and not only through touch events on the device screen?
ARKit has no built-in hand or finger detection, but you can use Vision to track hands or detect hand poses. Here is a developer sample illustrating this:
Detecting Hand Poses with Vision
For ARKit feature requests, we encourage you to send us a report in Feedback Assistant
Question pageAm I able to capture frames off additional camera feeds at the same time (not necessarily exactly synchronous) in ARKit?
We introduced new API to capture frames in higher resolution than your configuration’s video format:
captureHighResolutionFrame(completion:)
Those frames are captured from the same camera, though.
Setting up additional cameras is not supported. We encourage you to file a feature request in Feedback Assistant
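A minimal sketch of that API on iOS 16, assuming a running session whose configuration supports high-resolution frame capturing.
import ARKit

// Request a single high-resolution frame from the running session.
func captureStill(from session: ARSession) {
    session.captureHighResolutionFrame { frame, error in
        if let error = error {
            print("High-res capture failed: \(error)")
            return
        }
        // frame?.capturedImage is a CVPixelBuffer at the high-resolution format.
        if let pixelBuffer = frame?.capturedImage {
            print("Captured \(CVPixelBufferGetWidth(pixelBuffer)) x \(CVPixelBufferGetHeight(pixelBuffer))")
        }
    }
}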
Question pageGestures
What’s the easiest way to add user interactions (pinch to scale, rotation, transform) to an Entity loaded from a local USDZ file in RealityKit?
You can use the installGestures function on ARView
. Keep in mind that the entity will need to conform to HasCollision
.
To do this you could create your own CollisionComponent
with a custom mesh and add it to your entity or you could simply call generateCollisionShapes(recursive: Bool) on your entity. Putting it all together, you can use .loadModel
/.loadModelAsync
, which will flatten the USDZ into a single entity. Then call generateCollisionShapes
and pass that entity to the installGestures
function. This will make your USDZ one single entity that you can interact with.
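Putting those steps together, a minimal sketch (the asset name is a placeholder).
import RealityKit

// Load a USDZ as one ModelEntity, give it collision, and install the standard gestures.
func addInteractiveModel(named name: String, to arView: ARView) throws {
    let model = try Entity.loadModel(named: name)      // flattens the USDZ into a single ModelEntity
    model.generateCollisionShapes(recursive: true)     // collision shapes so gestures can hit-test it
    arView.installGestures([.translation, .rotation, .scale], for: model)

    let anchor = AnchorEntity(plane: .horizontal)
    anchor.addChild(model)
    arView.scene.addAnchor(anchor)
}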
Is there a way to add gestures to an entire Reality Composer scene? I can add it to an individual entity, but it would be cool to let users place the entire scene (otherwise I lose all the Reality Composer behaviors when i just target the entity)
A way to get the entity gestures working on an entire scene is to use visualBounds(…) and create a CollisionComponent
on the root entity. You can then use CollisionGroup
to make sure it doesn’t interfere with any physics.
If you’re using ARView.installGestures(…)
you’ll need the entity to conform to HasCollision
, which may require you to create a new entity type for the root. Quick example:
// New Entity type which conforms to `HasCollision`
class CollisionAnchorEntity: Entity, HasAnchoring, HasCollision { }
// Transfer scene contents
let collisionAnchor = CollisionAnchorEntity()
collisionAnchor
.children.append(contentsOf: originalAnchor.children)
collisionAnchor.anchoring = originalAnchor.anchoring
// Create CollisionComponent for bounds of scene
let sceneBounds = collisionAnchor
.visualBounds(recursive: true, relativeTo: collisionAnchor)
let collisionShape = ShapeResource
.generateBox(size: sceneBounds.extents)
.offsetBy(translation: sceneBounds.center)
collisionAnchor.collision = CollisionComponent(
shapes: [collisionShape]
)
// Install gesture on new anchor
arView.installGestures(for: collisionAnchor)
Question pageGetting Started
I have been really interested in RealityKit and ARKit for the past couple of years. Where can I learn more about it? I’m currently into Designing, Writing, Editing, and Management and would love to work on futuristic tech.
Check out this page:
To learn more about RealityKit and ARKit, I would recommend starting with our documentation and videos. Here are a few links to help you get started:
You can also always ask questions on Developer Forums 💬
Suggestion from non-Apple developer:
Question pageHello, for artist/designer only experienced with Reality Composer with no code, is there any suggestion and resources on getting started with RealityKit to make more advanced AR experiences?
Hi! We have a number of WWDC sessions covering RealityKit and Reality Composer which is a great place to start.
There’s also a great guide on building a SwiftStrike game: SwiftStrike: Creating a Game with RealityKit
Question pageAny tips for getting started in AR development with 0 coding knowledge?
Regardless of your educational background, anyone can learn how to code if you put in the effort and are passionate about it. There are tons of resources online, many of which have been produced by Apple in the form of documentation, example projects, and WWDC videos, that can help you to learn a programming language, such as Swift.
I would suggest doing some tutorials, watching videos, maybe find a highly rated book on iOS programming, etc to learn how to begin building iOS apps.
Once you are comfortable with that, then you can start to dive into AR specifically. Finding a good book on linear algebra would be useful if you are going to get into AR and graphics programming, but start with the basics first!
For ARKit, we have all sorts of documentation and examples that you can take a look at:
https://developer.apple.com/documentation/arkit/
From a non-Apple developer:
Apple’s documentation is great.
I also found the site RayWenderlich to be super helpful. They even have a book specifically for AR:
Apple Augmented Reality by Tutorials
Question pageHand Tracking
Do any of the AR frameworks have hand tracking, and the ability to register a pinch between the thumb and pointer finger?
ARKit does not have any hand tracking feature. The Vision framework offers functionality for hand gesture detection.
Detect Body and Hand Pose with Vision - WWDC20
You can get the camera's captured image from the ARFrame
and inject it into Vision. So by combining multiple frameworks you could achieve something close to the requested feature.
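As a hedged sketch of that combination, feeding ARFrame.capturedImage into Vision to detect a thumb/index pinch; orientation handling and thresholds are simplified assumptions.
import ARKit
import Vision

// Detect a pinch by measuring the distance between the thumb tip and index tip.
final class PinchDetector {
    private let request = VNDetectHumanHandPoseRequest()

    init() { request.maximumHandCount = 1 }

    func isPinching(in frame: ARFrame) -> Bool {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right,
                                            options: [:])
        try? handler.perform([request])
        guard let observation = request.results?.first,
              let thumb = try? observation.recognizedPoint(.thumbTip),
              let index = try? observation.recognizedPoint(.indexTip),
              thumb.confidence > 0.5, index.confidence > 0.5 else { return false }
        let distance = hypot(thumb.location.x - index.location.x,
                             thumb.location.y - index.location.y)
        return distance < 0.05   // threshold in normalized image coordinates
    }
}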
Is there any plan to allow built-in hand and finger detection within ARKit to let the user interact with an object directly with his hands and not only through touch events on the device screen?
ARKit has no built-in hand or finger detection, but you can use Vision to track hands or detect hand poses. Here is a developer sample illustrating this:
Detecting Hand Poses with Vision
For ARKit feature requests, we encourage you to send us a report in Feedback Assistant
Question pageLiDAR
I am trying to use LiDAR scanner to create a 3d model from capturing an object. But couldn't get enough resources for that. Any references/resources please?
For creating 3D models from images captured on device, this should be a helpful resource to get more inspiration and help:
Bring your world into augmented reality
Question pageiOS 15.4 includes the builtInLiDARDepthCamera type in AVFoundation. Is there any advantage in implementing this camera type when doing Object Capture for better depth calculation, or does that not change the outcome of the rendered 3D model?
Capturing images with LiDAR devices will give you automatic scale estimation and gravity vector information on your final usdz output
Question pageAny plans for instant AR tracking on devices without LiDAR? This could be helpful for translation apps and other apps that overlay 2D text/images on 3D landmarks.
You might want to ask this to the ARKit team, but I’m not aware of any plans.
A feedback item would be good though!
Question pageI've noticed that when occlusion is enabled on LiDAR devices, far away objects are automatically being clipped after a certain distance like 10m or so (even if there is nothing physically occluding them). I've tried to adjust the far parameters of the PerspectiveCameraComponent – https://developer.apple.com/documentation/realitykit/perspectivecameracomponent/far But unfortunately that didn't help. Only disabling occlusion removes the clipping. Is there a workaround for this behavior?
This should be fixed in iOS 16.
Question pageI'm creating pendant lights for viewing in in AR Quick Look, is it possible to anchor these to the ceiling of a room?
Yes this is something that is supported in AR Quick Look. You can place objects on the ceiling by dragging them there. This can be done using a regular horizontal (or vertical) anchor.
However, there are potential challenges to be aware of; the biggest is that ceilings usually lack feature points, which makes it difficult to detect a proper plane. Using a device with LiDAR can improve the results that you get.
Question pageCan we get the LiDAR camera position while doing a mesh in ARKit?
ARMeshAnchor
transforms are already aligned with the wide camera, which is also what the camera transform is relative to.
We want to play with the depth map. Is it possible to get the LiDAR camera position with the depth map? We've tried using the wide camera position and it doesn't work, because the wide camera position is not the same as the depth map's camera position.
The depth map surfaced through the Scene Depth API does align with the wide angle camera and should correspond to the camera transform available through the ARFrame
.
Here is sample code that generates a colored point cloud by combining the wide angle camera image and depth map:
Displaying a Point Cloud Using Scene Depth
If you still see some issues, I recommend filing a bug through the feedback assistant at Bug Reporting
Question pageCan I use the MapKit 3D model of a city, and anchor it as a child of an anchor using LiDAR geotracking? For long distance occlusion and collision purposes?
There is no integration of MapKit into the ARView
. If you know the building footprint (i.e. the polygon in lat/lon coordinates) or even exact geometry anchored to a lat/lon coordinate you can transform these coordinates by placing ARGeoAnchor
s at that location. If they are tracked in ARKit you get the local coordinates and can build an occlusion/collision mesh.
Is adding custom anchors for drift correction pertinent on a LiDAR enabled device?
In general we recommend using/adding anchors independent of the device you are using.
Question pageLight
Is there a way to have light sources in AR Quick Look files hosted on the web? For example, a client would like to have lamps in AR Quick Look. It would be awesome if we could use RC to turn off/on light sources. Is there any way to do this?
I don't think that it's possible. But you should submit the idea for supporting virtual lights on:
https://feedbackassistant.apple.com
Question pageDoes RealityKit support light sources in objects – for example, if you wanted a light bulb. If so, is there documentation for this?
There are various sorts of lighting in RealityKit - you might want to start here perhaps?
(see the Cameras and Lighting section in the docs)
But it looks like we don’t support lighting in Reality Composer unfortunately, so I’d suggest filing a feature suggestion:
Question pageHave there been any changes to the light estimation APIs? For example, is directional light available with a world tracking config?
No, there haven’t been changes to light estimation in ARKit this year.
Question pageMapKit
Can I use the MapKit 3D model of a city, and anchor it as a child of an anchor using LiDAR geotracking? For long distance occlusion and collision purposes?
There is no integration of MapKit into the ARView
. If you know the building footprint (i.e. the polygon in lat/lon coordinates) or even exact geometry anchored to a lat/lon coordinate you can transform these coordinates by placing ARGeoAnchor
s at that location. If they are tracked in ARKit you get the local coordinates and can build an occlusion/collision mesh.
I want to build an AR game where buildings can occlude content. Should I build an occlusion mesh for every building I want occlusion/collision. I am kind of new to ARKit but I saw that I can create a metal renderer to work with ARKit. Can I get depth information using a convolutional neural network from metal?
Throwing machine learning at it sounds like a super fun project, but I would recommend starting a bit simpler.
So as a small experiment you can take the four corners of a building from the Maps app and then create four Location Anchors based on these coordinates. As soon as those are tracked you can look at the local coordinates (in x, y, z) and build a polygon from them; you can extrude it towards the sky (y up) to get a nice collision/occlusion mesh.
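A sketch of that experiment with placeholder coordinates; run the session with an ARGeoTrackingConfiguration, then read each anchor's transform in session(_:didUpdate:) once it is tracked and use the local x/z values as polygon vertices, extruding along +y for the occlusion mesh.
import ARKit
import CoreLocation

// Add one ARGeoAnchor per building corner.
func addBuildingCorners(to session: ARSession) {
    let corners = [
        CLLocationCoordinate2D(latitude: 37.7952, longitude: -122.3939),
        CLLocationCoordinate2D(latitude: 37.7953, longitude: -122.3937),
        CLLocationCoordinate2D(latitude: 37.7951, longitude: -122.3935),
        CLLocationCoordinate2D(latitude: 37.7950, longitude: -122.3938)
    ]
    for coordinate in corners {
        session.add(anchor: ARGeoAnchor(coordinate: coordinate))
    }
}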
Question pageMaterial
In SceneKit there were shader modifiers. Is there something similar in RealityKit? We need PBR shaders but have to discard certain fragments.
You can apply CustomMaterial
s & CustomMaterial.SurfaceShader
to achieve certain cool effects for entities!
From the Metal side you can call discard_fragment()
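On the Swift side, a minimal sketch of attaching such a surface shader; "cutoutSurface" is a placeholder for your own [[visible]] Metal function compiled into the default library, which is where you would call discard_fragment() for the fragments you want to drop.
import RealityKit
import Metal

// Build a CustomMaterial whose surface shader discards unwanted fragments.
func applyCutoutMaterial(to model: ModelEntity) throws {
    guard let device = MTLCreateSystemDefaultDevice(),
          let library = device.makeDefaultLibrary() else { return }
    let surfaceShader = CustomMaterial.SurfaceShader(named: "cutoutSurface", in: library)
    let material = try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .lit)
    model.model?.materials = [material]
}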
Reality Composer objects on top of each other, such as a vase on a table cast shadow only to the ground plane and not to one another. If baked AO textures aren't an option since the vase may be moved by the user what would you suggest in order to achieve an equally good result to the default grounding shadow given that the quality of shadows is critical for an AR experience?
We don’t have any materials you can apply to objects to make them participate in the same shadows as ground planes. However, you can enable shadow casting from directional and spot lights via DirectionalLightComponent.Shadows
and SpotLightComponent.Shadows
. This may alter the overall lighting of your scene though.
Alternatively, we do have CustomMaterial
, which allows you to create custom materials via Metal, but for this use case it may not be able to get you the desired effect.
We’re always looking to improve RealityKit, so would appreciate if you submitted a request for this via https://feedbackassistant.apple.com/
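A small sketch of enabling shadow casting from a directional light, as mentioned above; intensity, distances, and placement are placeholder values, and the light will also affect the overall look of your scene.
import RealityKit

// Add a shadow-casting directional light to the scene.
func addShadowCastingLight(to arView: ARView) {
    let light = DirectionalLight()
    light.light = DirectionalLightComponent(color: .white,
                                            intensity: 5000,
                                            isRealWorldProxy: false)
    light.shadow = DirectionalLightComponent.Shadow(maximumDistance: 5, depthBias: 2)
    light.look(at: .zero, from: [0, 2, 1], relativeTo: nil)

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(light)
    arView.scene.addAnchor(anchor)
}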
Question pageIs there a way to get access to more advanced materials rendering on RealityKit models? I want to "skin" a plane with a UIView, currently I need to fall back to ARKit and SceneKit in order to do this
RealityKit has a CustomMaterial
API which allows you to create custom Metal-based materials. I’d recommend our Explore advanced rendering with RealityKit 2 WWDC talk to learn more.
There is also a great resource on Custom Shader API that gives more details on the APIs available in Metal.
Question pageMeasure
We're planning to integrate an AR distance measuring view into our app. Does ARKit now provide the necessary technology to achieve this, or is RealityKit a better match? Are there any useful docs to look at?
ARKit offers several ways to measure distances. You can either evaluate distances from the device to its environment or between ARAnchor
s.
Please see this documentation to get an overview:
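As one possible sketch using ARKit raycasting through RealityKit's ARView, measuring between two screen points; error handling is simplified.
import RealityKit
import ARKit
import simd

// Raycast from two screen points and return the distance (in meters) between the hits.
func distanceBetween(_ pointA: CGPoint, _ pointB: CGPoint, in arView: ARView) -> Float? {
    guard let hitA = arView.raycast(from: pointA, allowing: .estimatedPlane, alignment: .any).first,
          let hitB = arView.raycast(from: pointB, allowing: .estimatedPlane, alignment: .any).first
    else { return nil }
    let positionA = SIMD3<Float>(hitA.worldTransform.columns.3.x,
                                 hitA.worldTransform.columns.3.y,
                                 hitA.worldTransform.columns.3.z)
    let positionB = SIMD3<Float>(hitB.worldTransform.columns.3.x,
                                 hitB.worldTransform.columns.3.y,
                                 hitB.worldTransform.columns.3.z)
    return simd_distance(positionA, positionB)
}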
Question pageAre there any good resources on getting started with estimated object dimensions? Similar to the measurable app but to do height and width.
I recommend checking out our documentation of our SceneGeometry API that we presented in ARKit 3.5. A good overview is given in this tech talk:
Advanced Scene Understanding in AR
After getting a geometry that is good enough, you still have to solve the task of isolating your object of choice and computing its volume. There are several ways of doing this: for example, cutting everything off above the ground level, or letting the user create a cube object and then intersecting it with the scene geometry. We do not have any code sample for these tasks, though.
Question pageMetal
In the keynote, there's a mention about a Background API in Metal. Please share documentation/resources link
Are you referring to https://developer.apple.com/documentation/metal/resource_loading?
Question pageReality Composer objects on top of each other, such as a vase on a table cast shadow only to the ground plane and not to one another. If baked AO textures aren't an option since the vase may be moved by the user what would you suggest in order to achieve an equally good result to the default grounding shadow given that the quality of shadows is critical for an AR experience?
We don’t have any materials you can apply to objects to make them participate in the same shadows as ground planes. However, you can enable shadow casting from directional and spot lights via DirectionalLightComponent.Shadows
and SpotLightComponent.Shadows
. This may alter the overall lighting of your scene though.
Alternatively, we do have CustomMaterial
, which allows you to create custom materials via Metal, but for this use case it may not be able to get you the desired effect.
We’re always looking to improve RealityKit, so would appreciate if you submitted a request for this via https://feedbackassistant.apple.com/
Question pageIs there a way to get access to more advanced materials rendering on RealityKit models? I want to "skin" a plane with a UIView, currently I need to fall back to ARKit and SceneKit in order to do this
RealityKit has a CustomMaterial
API which allows you to create custom Metal-based materials. I’d recommend our Explore advanced rendering with RealityKit 2 WWDC talk to learn more.
There is also a great resource on Custom Shader API that gives more details on the APIs available in Metal.
Question pageIs taking the output MTLTexture from RealityKit 2's `postProcessing` pipeline suitable for writing to an AVAssetWriter, streaming via RTMP, etc?
“Maybe” 🙂
So you can certainly take MTLTexture
s and convert them (if they’re configured correctly) into CVPixelBuffer
s for AVFoundation to consume.
That said it’s really not the intended use case of RealityKit's post processing functionality and I wouldn’t be surprised if either it doesn’t work as you’d expect or if we break you in the future.
Sounds like a great feature request though - Bug Reporting
Question pageMight there be more example projects showcasing pure Metal with ARKit? SceneKit is cool, but admittedly, I'd love to see more low-level examples. :) Alternatively, is anyone working on some open source projects showcasing something like this? I think it would be a big win for Apple-platform development to build-up a lot more examples.
Thanks for the suggestion. Here are some existing sample code that uses Metal with ARKit:
- Displaying a Point Cloud Using Scene Depth
- Creating a Fog Effect Using Scene Depth
- Displaying an AR Experience with Metal
I want to build an AR game where buildings can occlude content. Should I build an occlusion mesh for every building I want occlusion/collision. I am kind of new to ARKit but I saw that I can create a metal renderer to work with ARKit. Can I get depth information using a convolutional neural network from metal?
Throwing machine learning at it sounds like a super fun project, I would recommend to start a bit simpler.
So as a small experiment you can take the four corners of a building from the Maps app and then create four Location Anchors based on these coordinates. As soon as those are tracked you can look at the local coordinates (in x,y,z) and then build a polygon based on it, you can extrude it towards the sky (y up) to get a nice collision/occlusion mesh.
Question pageModelIO
Are there guidelines or best practices for exporting a RealityKit scene to a USDZ? Is this possible? I’ve seen just a little about the ModelIO framework. Is this the tool we should be using?
I don’t think we have any guidelines about this, since exporting/saving a scene is not supported by the current APIs. ModelIO seems like a reasonable solution to me, but you might also want to file a feature request for this on Feedback Assistant.
Question pageMultiuser
Hello there, I am someone who is still fairly novice with Reality / AR Kit. And I want to ask what is the best way to implement Multiuser AR experiences. I’ve been thinking on creating an AR app that would use this feature to allow multiple users to view a single AR view (e.g., multiple users seeing the same rendered model from their own perspectives).
Multiuser AR experiences can be created using the SynchronizationComponent
This is a good tutorial (along with sample code) on building collaborative AR sessions between devices:
https://developer.apple.com/documentation/arkit/creatingacollaborative_session
Question pageObject Capture
I am trying to use LiDAR scanner to create a 3d model from capturing an object. But couldn't get enough resources for that. Any references/resources please?
For creating 3D models from images captured on device, this should be a helpful resource to get more inspiration and help:
Bring your world into augmented reality
Question pageiOS 15.4 includes the builtInLiDARDepthCamera type in AVFoundation. Is there any advantage in implementing this camera type when doing Object Capture for better depth calculation, or does that not change the outcome of the rendered 3D model?
Capturing images with LiDAR devices will give you automatic scale estimation and gravity vector information on your final usdz output
Question pageWe'd like to use an object both as a source for a USDZ based on the PhotogrammetrySession and as an ARReferenceObject, so that we can overlay information at the same position on both the real object and the created model.Is there any guidance on how to align these coordinate systems, e.g. by aligning the point clouds from the photogrammetry session and reference object? Or can we make assumptions on the origin of the resulting USDZ from the PhotogrammetrySession?
Creating a model for Object Detection and creating a textured mesh with Object Capture are two different use cases with separate workflows, we do not offer a tool to convert from one to another. That sounds like a great use case though, I encourage you to file a feature request.
Question pageOptimization
Regarding optimizations: is there support for level of detail and instancing in RealityKit?
Instancing is mostly abstracted away behind the Entity.clone()
method.
Level of detail is not currently exposed as API and we’d recommend filing a feature suggestion on Bug Reporting
That said you can implement Level of Detail yourself (probably using custom Systems and Components) although we understand that may not be ideal. Please file feature suggestions regardless!
Question pageI’d like to ask what might the possible causes be for the ARSessionDelegate retaining ARFrames console warning. I use the session:didUpdateframe: delegate method to just check whether the AnchorEntity(plane:) I’m looking for is in a sufficient distance from the camera.
We have a limited pool of resources for our ARFrame
s and in order to keep some available for ARKit to process, we recommend processing the frames as quickly as possible. If you need to perform longer computations, you can copy an ARFrame
and release the ARFrame
from the delegate method.
Hi! We're working on an AR experience that allows user to put AR objects in their surroundings and replay it later. We're saving the data on an ARWorldMap and archive it on the filesystem to be retrieved later. Everything works great on smaller areas with small ARWorldMap file sizes. However as user adds more stuff, the ARWorldMap file gets bigger and at some point, it takes so long or even impossible to relocalize using the big ARWorldMap files. I'm seeing slower relocalization on ARWorldMap files with >10 mb size.\nQuestion:\nIs there a known cap of how big ARWorldMap files can be to retain effectivenes of relocalization and the AR experience? What can impact performance for AR relocalization other than lighting condition and the object textures that we're rendering (maybe area size? camera motion? features in the area?) since we're seeing frame drops on bigger ARWorldMap files.
ARWorldMap
s are optimized for room sized scenarios. If you exceed that limit then relocalization will stop working in certain areas as the map is not big enough any more to cover the whole area.
The frame drops sound related to the amount of content being displayed though. For that, feel free to provide more details through Feedback Assistant
Question pagePerformance
Does setting ARKit to use 4K resolution affect the battery longevity? Does it increase the risk to get the device too hot, even if the fps is limited at 30 fps instead of 60 fps? Is there a way to get 60 fps at 4K resolution?
Yes, using 4k resolution may result in more power being consumed. It may also result in thermal mitigation engaging to keep the device from getting too hot, which may impact performance. At the moment, we are only supporting 4k @ 30hz.
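A minimal sketch of opting into the 4K (30 Hz) format on supported devices (iOS 16+), falling back to the default format elsewhere.
import ARKit

// Prefer the recommended 4K video format when the device offers one.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let fourKFormat = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = fourKFormat
    }
    return configuration
}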
Question pageRegarding the new ARKit 6 API that takes a 4k photo of the AR scene, is there a limit to how many times it can be called? Can I take say 30 photos within a second?
You can take the next photo right after the completion handler of your previous captureHighResolutionFrame call - or even from within the completion handler.
If you try taking a new photo before the previous call has completed, you will receive an ARError.Code.highResolutionFrameCaptureInProgress error in the completion handler.
Question pageI’d like to ask what might the possible causes be for the ARSessionDelegate retaining ARFrames console warning. I use the session:didUpdateframe: delegate method to just check whether the AnchorEntity(plane:) I’m looking for is in a sufficient distance from the camera.
We have a limited pool of resources for our ARFrame
s and in order to keep some available for ARKit to process, we recommend processing the frames as quickly as possible. If you need to perform longer computations, you can copy an ARFrame
and release the ARFrame
from the delegate method.
Hi! We're working on an AR experience that allows user to put AR objects in their surroundings and replay it later. We're saving the data on an ARWorldMap and archive it on the filesystem to be retrieved later. Everything works great on smaller areas with small ARWorldMap file sizes. However as user adds more stuff, the ARWorldMap file gets bigger and at some point, it takes so long or even impossible to relocalize using the big ARWorldMap files. I'm seeing slower relocalization on ARWorldMap files with >10 mb size.\nQuestion:\nIs there a known cap of how big ARWorldMap files can be to retain effectivenes of relocalization and the AR experience? What can impact performance for AR relocalization other than lighting condition and the object textures that we're rendering (maybe area size? camera motion? features in the area?) since we're seeing frame drops on bigger ARWorldMap files.
ARWorldMap
s are optimized for room sized scenarios. If you exceed that limit then relocalization will stop working in certain areas as the map is not big enough any more to cover the whole area.
The frame drops sound related to the amount of content being displayed though. For that, feel free to provide more details through Feedback Assistant
Question pageReality Composer
I am pretty new to Reality Composer. I would like to know how (if it is possible) to add textures to custom USD objects.
Reality Converter makes it easy to convert, view, and customize USDZ 3D objects on Mac. For more information, visit:
https://developer.apple.com/augmented-reality/tools/
Question pageThe recent update of Xcode made scenes in Reality Composer look much darker. Is this caused by a change in RealityKit?
We are aware of a bug in macOS Ventura/iOS 16 that is causing the lighting to appear darker. Please feel free to file a bug report on Feedback Assistant about this.
Question pageIs there a way of exporting a Reality Composer scene to a .usdz, rather than a .reality or .rcproject? If not, what are your suggested ways of leveraging Reality Composer for building animations but sharing to other devices/platforms so they can see those animations baked into the 3D model?
Yes, on macOS, you can open [from the menu bar] Reality Composer ▸ Settings [or Preferences on older versions of macOS] and check Enable USDZ export
Question pageAre there any plans (or is there any way?) to bring post-process effects and lights into Reality Composer? I'm making a short animated musical film in AR. I love how RC does so much automatically (spatial audio, object occlusion...). I just wish it was possible to amp up the cinematic-ness a little with effects.
Post processing effects are not supported in RC right now, only in a RealityKit app. However, feel free to file a feature request on Feedback Assistant about this.
Question pageThe LookToCamera Action works different on AR Quick Look than while testing in Reality Composer. Will this be fixed with Xcode 14?
Please file a bug report for this. Any additional information you can provide, such as a video reproducing the issue would be hugely helpful.
https://developer.apple.com/bug-reporting/
Question pageWhen do you think we will see new versions of Reality Composer and Reality Converter apps? I'm a college professor - Graduate Industrial Design, and use these as an intro to AR tools. Better, more capable versions might be nice? Thanks.
Unfortunately, we don’t discuss our future plans. However, we are aware that our tools haven’t been updated in a few years and could use some new features. Could you share what features you are looking for us to add?
Question pageIs there a way to use video textures in Reality Composer?
Video textures are currently not supported through the Reality Composer UI. However, if your .rcproj is part of an Xcode project, you can use the RealityKit VideoMaterial API to change the material of your object in the scene at runtime.
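A small sketch of that runtime swap, with a placeholder video URL and model entity.
import RealityKit
import AVFoundation

// Build a VideoMaterial from an AVPlayer and swap it onto the entity's model.
func applyVideoTexture(to modelEntity: ModelEntity, videoURL: URL) {
    let player = AVPlayer(url: videoURL)
    modelEntity.model?.materials = [VideoMaterial(avPlayer: player)]
    player.play()
}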
Question pageUsing AR Quick Look, how might I add a color picker to change between colors of a model? For example, the iMac ARQL on apple.com requires users to jump in and out of ARQL to try different colors. Is there a way to have color pickers in ARQL to try different materials or change different scenes in a .reality file?
You could use Reality Composer's interactions to make an interactive USD where you can tap on different colors to change the model
This would need to be done in 3D. There’s a previous session, Building AR Experiences with Reality Composer, that has some examples.
Question pageIs there a simple way to create a 3D object with a custom image as a texture? Reality Composer only allows a material and a color, and without that, I'll have to dip into a far more complex 3D app. I'd really, really like to use USDZ more in Motion, for pre-viz and prototyping, but without texture editing it's quite limited. Have I missed something? :)
There are various third-party DCCs with great USD support that let you create complex 3D objects with textures and export them as USD. You can then use Reality Converter to convert those to USDZ to import into Motion.
Another approach: three.js (web render engine) can actually create USDZs on the fly from 3D scenes. A colleague used that recently for USDZ AR files with changeable textures on https://webweb.jetzt/ar-gallery/ar-gallery.html
Also take a look at the Explore USD tools and rendering session tomorrow. You can now change material properties in Reality Converter!
Another thing that might help for making quick adjustments: the browser-based three.js editor at https://threejs.org/editor.
Question pageIs Reality Composer appropriate for end-users on macOS? We'd like to export "raw"/unfinished USD from our app then have users use Reality Composer to put something together with multimedia.
You can assemble different USDZ assets together to build out a larger scene in Reality Composer and add triggers and actions to individual assets within the project
Question pageIs there any update to Reality Composer this year?
No.
We don't discuss details about unreleased updates, but one of the things that’s most helpful to us as we continue to build out our suite of augmented reality developer tools is feedback
Please continue to submit ideas or suggestions in Feedback Assistant 🙂
Question pageMany aspects of USD are open source. Could Reality Composer also be Open-Sourced so that members of the community could work on features?
Hey there, we’d definitely be interested in hearing more about your idea.
I’d suggest submitting the suggestion at Bug Reporting
Question pageReality Composer objects on top of each other, such as a vase on a table cast shadow only to the ground plane and not to one another. If baked AO textures aren't an option since the vase may be moved by the user what would you suggest in order to achieve an equally good result to the default grounding shadow given that the quality of shadows is critical for an AR experience?
We don’t have any materials you can apply to objects to make them participate in the same shadows as ground planes. However, you can enable shadow casting from directional and spot lights via DirectionalLightComponent.Shadows
and SpotLightComponent.Shadows
. This may alter the overall lighting of your scene though.
Alternatively, we do have CustomMaterial
, which allows you to create custom materials via Metal, but for this use case it may not be able to get you the desired effect.
We’re always looking to improve RealityKit, so would appreciate if you submitted a request for this via https://feedbackassistant.apple.com/
Question pageCan Reality Composer be made available as a macOS app in the App Store?
While Reality Composer is available only for iOS and iPadOS on the App Store, we'll pass this feedback along. Thanks 🙏
Reality Composer is available on macOS as part of Xcode as a Developer Tool, though.
Question pageHello, for artist/designer only experienced with Reality Composer with no code, is there any suggestion and resources on getting started with RealityKit to make more advanced AR experiences?
Hi! We have a number of WWDC sessions covering RealityKit and Reality Composer which is a great place to start.
There’s also a great guide on building a SwiftStrike game: SwiftStrike: Creating a Game with RealityKit
Question pageIs there a means of exporting a USDZ file (either from Reality Composer, Cinema 4D, etc., or programmatically), with a video texture already applied?
There’s no support for that in Reality Composer currently. As always a feature request filed on Bug Reporting would be most appreciated.
There’s also no method to export USDZ from RealityKit and again feature requests appreciated. Thank you!
Question pageCan I place a model in a target, such as a cover of a book or a QR, so that it doesn't move from that position by just using USDZ? and how could I achieve this?
You can use Reality Composer to create a scene attached to an image anchor. You can then export the scene to a USDZ or a Reality File.
See Selecting an Anchor for a Reality Composer Scene
Question pageIs it possible to control audio media in USDZ (i.e. pause, skip, load new audio file) with a scene / behavior (using Reality Composer or other tool)?
Currently Reality Composer does not support this. This sounds like a great feature request and we would appreciate if you can file feedback through Feedback Assistant.
If you are willing to jump into code…
You can use the AudioPlaybackController
returned from the playAudio API to play, pause, etc. You can also use AudioFileResource
to add / replace audio on entities.
Does RealityKit support light sources in objects – for example, if you wanted a light bulb. If so, is there documentation for this?
There are various sorts of lighting in RealityKit - you might want to start here perhaps?
(see the Cameras and Lighting section in the docs)
But it looks like we don’t support lighting in Reality Composer unfortunately, so I’d suggest filing a feature suggestion:
Question pageIn Reality Composer, a force was applied to an object. Then I wanted to animate it into another scene, starting from the post force location. Is there a way to apply a new scene using its last known position? I hacked the position by guessing the ending location and starting the next scene close to that position but it results in a slight motion jitter.
This may be achievable if embedded in Xcode with some code.
I recommend signing up for a Reality Composer lab if you would like to explore that further.
But yes, being able to observe live parameters sounds like a great feature in Reality Composer. Please file a feature request using Feedback Assistant with your use case 🙂
Question pageIs there a way to add gestures to an entire Reality Composer scene? I can add it to an individual entity, but it would be cool to let users place the entire scene (otherwise I lose all the Reality Composer behaviors when i just target the entity)
A way to get the entity gestures working on an entire scene is to use visualBounds(…) and create a CollisionComponent
on the root entity. You can then use CollisionGroup
to make sure it doesn’t interfere with any physics.
If you’re using ARView.installGestures(…)
you’ll need the entity to conform to HasCollision
, which may require you to create a new entity type for the root. Quick example:
// New Entity type which conforms to `HasCollision`
class CollisionAnchorEntity: Entity, HasAnchoring, HasCollision { }
// Transfer scene contents
let collisionAnchor = CollisionAnchorEntity()
collisionAnchor
.children.append(contentsOf: originalAnchor.children)
collisionAnchor.anchoring = originalAnchor.anchoring
// Create CollisionComponent for bounds of scene
let sceneBounds = collisionAnchor
.visualBounds(recursive: true, relativeTo: collisionAnchor)
let collisionShape = ShapeResource
.generateBox(size: sceneBounds.extents)
.offsetBy(translation: sceneBounds.center)
collisionAnchor.collision = CollisionComponent(
shapes: [collisionShape]
)
// Install gesture on new anchor
arView.installGestures(for: collisionAnchor)
Question pageReality Converter
When do you think we will see new versions of Reality Composer and Reality Converter apps? I'm a college professor - Graduate Industrial Design, and use these as an intro to AR tools. Better, more capable versions might be nice? Thanks.
Unfortunately, we don’t discuss our future plans. However, we are aware that our tools haven’t been updated in a few years and could use some new features. Could you share what features you are looking for us to add?
Question pageIs there a simple way to create a 3D object with a custom image as a texture? Reality Composer only allows a material and a color, and without that, I'll have to dip into a far more complex 3D app. I'd really, really like to use USDZ more in Motion, for pre-viz and prototyping, but without texture editing it's quite limited. Have I missed something? :)
There are various third-party DCCs with great USD support that let you create complex 3D objects with textures and export them as USD. You can then use Reality Converter to convert those to USDZ to import into Motion.
Another approach: three.js (web render engine) can actually create USDZs on the fly from 3D scenes. A colleague used that recently for USDZ AR files with changeable textures on https://webweb.jetzt/ar-gallery/ar-gallery.html
Also take a look at the Explore USD tools and rendering session tomorrow. You can now change material properties in Reality Converter!
Another thing that might help for making quick adjustments: the browser-based three.js editor at https://threejs.org/editor.
Question pageReality Composer is great, but our team of 3D asset modelers has found it easier to sculpt characters in Zbrush. Do ARKit and RealityKit accept models created in Zbrush, or are there intermediate steps best for preparing a model for Apple platforms? (KeyShot, etc.)
Yes, if you can export your assets to FBX, glTF or OBJ, you can convert them to USDZ using Reality Converter, which is compatible with ARKit and RealityKit
Question pageRealityKit
The recent update of Xcode made scenes in Reality Composer look much darker. Is this caused by a change in RealityKit?
We are aware of a bug in macOS Ventura/iOS 16 that is causing the lighting to appear darker. Please feel free to file a bug report on Feedback Assistant about this.
Question pageAre there any plans (or is there any way?) to bring post-process effects and lights into Reality Composer? I'm making a short animated musical film in AR. I love how RC does so much automatically (spatial audio, object occlusion...). I just wish it was possible to amp up the cinematic-ness a little with effects.
Post processing effects are not supported in RC right now, only in a RealityKit app. However, feel free to file a feature request on Feedback Assistant about this.
Question pageAre there guidelines or best practices for exporting a RealityKit scene to a USDZ? Is this possible? I’ve seen just a little about the ModelIO framework. Is this the tool we should be using?
I don’t think we have any guidelines about this, since exporting/saving a scene is not supported by the current APIs. ModelIO seems like a reasonable solution to me, but you might also want to file a feature request for this on Feedback Assistant.
Question pageHello there, I am someone who is still fairly novice with Reality / AR Kit. And I want to ask what is the best way to implement Multiuser AR experiences. I’ve been thinking on creating an AR app that would use this feature to allow multiple users to view a single AR view (e.g., multiple users seeing the same rendered model from their own perspectives).
Multiuser AR experiences can be created using the SynchronizationComponent
This is a good tutorial (along with sample code) on building collaborative AR sessions between devices:
https://developer.apple.com/documentation/arkit/creatingacollaborative_session
Question pageIs there a way to access System instances for a Scene or must System updates (e.g., change the culling distance of a System) always route through a Component?
Generally Systems are designed to operate on Entities (within Scenes) and their Components. Each System can be updated against multiple Scenes (and the Scene’s entities).
If you have state that you want to be represented with a System, one method to do that is to have a root entity that holds a “System component”.
There’s more information on Systems in last year's WWDC session and on the developer documentation website:
- Dive into RealityKit 2
- https://developer.apple.com/documentation/realitykit/system/update(context:)-69f86
We are seeing some memory leaks when adding ModelEntities to an anchor, pausing the ARSession and starting it again and adding ModelEntities again....We see memory growing in the re::SyncObject section. Does anyone have experience troubleshooting memory leaks that have happened in a similar way?
I’d recommend this year’s WWDC Xcode session, What's new in Xcode, for what’s new in debugging. And there have been many other excellent sessions over the years on debugging.
That said if you believe it may be RealityKit or another system framework responsible for leaking the entities we’d ask you to file a Feedback Item on http://feedbackassistant.apple.com if you haven’t done so already.
Question pageIs there a suggested manner of writing ARKit/RealityKit experiences to a video file? I'm current using RealityKit 2's post-processing to convert the source `MTLTexture` to a `CVPixelBuffer`, and writing that to an `AVAssetWriter`, but this occasionally ends up leading to dropped frames or random flickers in the video.
We don’t currently have a recommend method for doing this and as such would love to see a feedback item explaining what you need and a use case explaining it. That would be wonderful.
That said your method should in theory work and we’d also love to see feedback item describing the issues you’re seeing.
Question pageIs there a way to use video textures in Reality Composer?
Video textures are currently not supported through the Reality Composer UI. However, if your .rcproj is part of an Xcode project, you can use the RealityKit VideoMaterial API to change the material of your object in the scene at runtime.
Question pageIs there currently a built-in way or example of a way to transform a CapturedRoom from RoomPlan into a ModelEntity or other type of RealityKit entity? Instead of only the exported USDZ file?
I don’t believe there is a built-in way, but loading a USDZ into a RealityKit scene as a ModelEntity is very simple
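For example, a minimal sketch assuming you have already exported the CapturedRoom to a USDZ file at exportURL.
import RealityKit

// Load the exported USDZ as a single ModelEntity and anchor it in the scene.
func loadRoomModel(from exportURL: URL, into arView: ARView) throws {
    let roomModel = try Entity.loadModel(contentsOf: exportURL)
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(roomModel)
    arView.scene.addAnchor(anchor)
}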
Question pageAt last year's WWDC 2021 RealityKit 2.0 got new changes to make programming with Entity Component System (ECS) easier and simpler! The current RealityKit ECS code seems too cumbersome and hard to program. Will ease of programming with ECS be a focus in the future?
While we don’t discuss specific future plans, we always want to make RealityKit as easy to use for everyone as we can.
We’d ask you to post your issues and/or suggestions to Bug Reporting
I’d love to find out more about what you find too cumbersome. Thanks!
Question pageI've noticed that when occlusion is enabled on LiDAR devices, far away objects are automatically being clipped after a certain distance like 10m or so (even if there is nothing physically occluding them). I've tried to adjust the far parameters of the PerspectiveCameraComponent – https://developer.apple.com/documentation/realitykit/perspectivecameracomponent/far But unfortunately that didn't help. Only disabling occlusion removes the clipping. Is there a workaround for this behavior?
This should be fixed in iOS 16.
Question pageI need SwiftUI Views in my RealityKit experience...please and ASAP.
You can host RealityKit content inside SwiftUI views with UIViewRepresentable
If you’re asking if you can use SwiftUI content within RealityKit - there is no direct support for that at present and we’d ask you to file a feedback item explaining your use-case for that feature.
Question pageI’d love to have SF Symbols renderable in AR! It actually works with RealityKit on macOS by copy and pasting the symbols, but not available in the system font on iOS.
You may want to check with the SF Symbols team to confirm this is not possible yet, and also file a feature request on feedback assistant.
A bit of a hack solution, but you may be able to get this to work by drawing the Symbol to a CGImage
and passing that image in as a texture.
I have been really interested in RealityKit and ARKit for the past couple of years. Where can I learn more about it? I’m currently into Designing, Writing, Editing, and Management and would love to work on futuristic tech.
Check out this page:
To learn more about RealityKit and ARKit, I would recommend starting with our documentation and videos. Here are a few links to help you get started:
You can also always ask questions on Developer Forums 💬
Suggestion from non-Apple developer:
Question pageWhat is the recommended way to add live stream or capture capabilities with RealityKit? Do we need to build frame capture and video writers with AVFoundation? A higher level API would be a better fit for RealityKit.
I would recommend using ReplayKit or ScreenCaptureKit to record your app screen to stream / share
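A small sketch of the ReplayKit route; the presenting view controller is whatever hosts your ARView.
import ReplayKit
import UIKit

// Toggle a ReplayKit screen recording and present the preview sheet when it stops.
func toggleRecording(presentingFrom viewController: UIViewController) {
    let recorder = RPScreenRecorder.shared()
    if recorder.isRecording {
        recorder.stopRecording { previewController, error in
            if let previewController = previewController {
                viewController.present(previewController, animated: true)
            }
        }
    } else {
        recorder.startRecording { error in
            if let error = error { print("Could not start recording: \(error)") }
        }
    }
}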
Question pageAny thoughts about making USD/USDZ files with particle effects? Things on fire/sparking etc?
This is not currently possible to do in a USD, but you should submit the idea to https://feedbackassistant.apple.com.
You can however do some particle effects in an app by using RealityKit's CustomShaders.
Depending on how complex your effect is, you can also bake your particle effects to regular mesh + bones animation ✨
In many cases you can also create a pretty convincing effect just by scaling/rotating a few planes. Example link (no USDZ behind that right now, but you get the idea - this is just two simple meshes for the particles)
Question pageReality Composer is great, but our team of 3D asset modelers has found it easier to sculpt characters in Zbrush. Do ARKit and RealityKit accept models created in Zbrush, or are there intermediate steps best for preparing a model for Apple platforms? (KeyShot, etc.)
Yes, if you can export your assets to FBX, glTF or OBJ, you can convert them to USDZ using Reality Converter, which is compatible with ARKit and RealityKit
Question pageWhat’s the easiest way to add user interactions (pinch to scale, rotation, transform) to an Entity loaded from a local USDZ file in RealityKit?
You can use the installGestures function on ARView
. Keep in mind that the entity will need to conform to HasCollision
.
To do this you could create your own CollisionComponent
with a custom mesh and add it to your entity or you could simply call generateCollisionShapes(recursive: Bool) on your entity. Putting it all together, you can use .loadModel
/.loadModelAsync
, which will flatten the USDZ into a single entity. Then call generateCollisionShapes
and pass that entity to the installGestures
function. This will make your USDZ one single entity that you can interact with.
Can I render a snapshot of only the virtual content in RealityKit? Something similar like the snapshot functionality in SceneKit?
Yes, you can use ARView.snapshot(...)
If you want, you can change the background of the ARView
there:
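A minimal sketch of the snapshot call; setting the environment background to a solid color first is one way to isolate the virtual content, though the exact look is up to you.
import RealityKit
import UIKit

// Capture the currently rendered view as a UIImage.
func captureSnapshot(of arView: ARView, completion: @escaping (UIImage?) -> Void) {
    // Optional: swap the camera feed for a solid background before capturing.
    arView.environment.background = .color(.black)
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}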
Is it possible to instance meshes in RealityKit (similar to SceneKit's clone method)?
If you call .clone(...) on an Entity, the clone will re-use the same meshes.
Question pageIn SceneKit there were shader modifiers. Is there something similar in RealityKit? We need PBR shaders but have to discard certain fragments.
You can apply CustomMaterials & CustomMaterial.SurfaceShader to achieve certain cool effects for entities!
From the Metal side you can call discard_fragment().
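A minimal sketch, assuming your default Metal library contains a surface shader function (here hypothetically named "pbrWithCutout") that calls discard_fragment() for the texels it wants to drop:

```swift
import RealityKit
import Metal

func makeCutoutMaterial() throws -> CustomMaterial {
    guard let device = MTLCreateSystemDefaultDevice(),
          let library = device.makeDefaultLibrary() else {
        throw NSError(domain: "CutoutMaterial", code: -1)
    }
    // "pbrWithCutout" is a placeholder name for your own [[visible]] Metal function.
    let surfaceShader = CustomMaterial.SurfaceShader(named: "pbrWithCutout", in: library)
    // .lit keeps PBR-style shading while the shader decides which fragments survive.
    return try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .lit)
}
```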
Reality Composer objects on top of each other, such as a vase on a table cast shadow only to the ground plane and not to one another. If baked AO textures aren't an option since the vase may be moved by the user what would you suggest in order to achieve an equally good result to the default grounding shadow given that the quality of shadows is critical for an AR experience?
We don’t have any materials you can apply to objects to make them participate in the same shadows as ground planes. However, you can enable shadow casting from directional and spot lights via DirectionalLightComponent.Shadows and SpotLightComponent.Shadows. This may alter the overall lighting of your scene though.
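A small sketch of a shadow-casting directional light (intensity, distances, and placement are illustrative values only):

```swift
import RealityKit

func addShadowCastingLight(to arView: ARView) {
    let light = DirectionalLight()
    light.light = DirectionalLightComponent(color: .white,
                                            intensity: 5000,
                                            isRealWorldProxy: false)
    light.shadow = DirectionalLightComponent.Shadows(maximumDistance: 3, depthBias: 2)
    light.look(at: .zero, from: [0.5, 1.5, 0.5], relativeTo: nil)

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(light)
    arView.scene.addAnchor(anchor)
}
```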
Alternatively, we do have CustomMaterial, which allows you to create custom materials via Metal, but for this use-case it may not be able to get you the desired effect.
We’re always looking to improve RealityKit, so would appreciate if you submitted a request for this via https://feedbackassistant.apple.com/
Question pageIs there a capture video for ARView the way there is a take snapshot()? I see there is 4k video being hyped - will this include the ability to let users take video recordings?
There’s no API in RealityKit to capture video. That said there are system level APIs to capture screen recordings and I wonder if that would be useful for you:
I’d suggest filing a feature request with your use-case. Thanks!
Question pageHello, for artist/designer only experienced with Reality Composer with no code, is there any suggestion and resources on getting started with RealityKit to make more advanced AR experiences?
Hi! We have a number of WWDC sessions covering RealityKit and Reality Composer which is a great place to start.
There’s also a great guide on building a SwiftStrike game: SwiftStrike: Creating a Game with RealityKit
Question pageIs there a way to get access to more advanced materials rendering on RealityKit models? I want to "skin" a plane with a UIView, currently I need to fall back to ARKit and SceneKit in order to do this
RealityKit has a CustomMaterial
API which allows you to create custom Metal-based materials. I’d recommend our Explore advanced rendering with RealityKit 2 WWDC talk to learn more.
There is also a great resource on Custom Shader API that gives more details on the APIs available in Metal.
Question pageIs there a means of exporting a USDZ file (either from Reality Composer, Cinema 4D, etc., or programmatically), with a video texture already applied?
There’s no support for that in Reality Composer currently. As always a feature request filed on Bug Reporting would be most appreciated.
There’s also no method to export USDZ from RealityKit and again feature requests appreciated. Thank you!
Question pageIs it possible to show or hide only a single child node from a model entity dynamically?
You can certainly load a model and preserve your hierarchy, then use the entity name or another attribute to find an entity, then hide/show it with Entity.isEnabled
Look at EntityQuery for finding entities efficiently.
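A minimal sketch of both approaches ("car" and "wheel_left" are placeholder asset/entity names):

```swift
import RealityKit

func loadAndHidePart(in arView: ARView) throws {
    let root = try Entity.load(named: "car")   // Entity.load preserves the USDZ hierarchy
    if let wheel = root.findEntity(named: "wheel_left") {
        wheel.isEnabled = false                // hide; set back to true to show it again
    }

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(root)
    arView.scene.addAnchor(anchor)

    // EntityQuery enumerates matching entities efficiently across the scene.
    let query = EntityQuery(where: .has(ModelComponent.self))
    for entity in arView.scene.performQuery(query) {
        print("Model entity in scene:", entity.name)
    }
}
```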
Question pageIs taking the output MTLTexture from RealityKit 2's `postProcessing` pipeline suitable for writing to an AVAssetWriter, streaming via RTMP, etc?
“Maybe” 🙂
So you can certainly take MTLTextures and convert them (if they’re configured correctly) into CVPixelBuffers for AVFoundation to consume.
That said it’s really not the intended use case of RealityKit's post processing functionality and I wouldn’t be surprised if either it doesn’t work as you’d expect or if we break you in the future.
Sounds like a great feature request though - Bug Reporting
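One possible (untested) shape for that conversion, copying the texture contents into a freshly allocated buffer; a production path would more likely use a CVPixelBufferPool and IOSurface-backed textures:

```swift
import Metal
import CoreVideo

// Assumes a BGRA texture whose storage mode allows CPU reads (not .private).
func makePixelBuffer(from texture: MTLTexture) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     texture.width,
                                     texture.height,
                                     kCVPixelFormatType_32BGRA,
                                     nil,
                                     &pixelBuffer)
    guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, [])
    defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
    guard let baseAddress = CVPixelBufferGetBaseAddress(buffer) else { return nil }

    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    texture.getBytes(baseAddress,
                     bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                     from: region,
                     mipmapLevel: 0)
    return buffer
}
```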
Question pageIs it possible to control audio media in USDZ (i.e. pause, skip, load new audio file) with a scene / behavior (using Reality Composer or other tool)?
Currently Reality Composer does not support this. This sounds like a great feature request and we would appreciate if you can file feedback through Feedback Assistant.
If you are willing to jump into code…
You can use the AudioPlaybackController returned from the playAudio API to play, pause, etc. You can also use AudioFileResource to add/replace audio on entities.
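A quick sketch ("ambience.mp3" is a placeholder file in the app bundle):

```swift
import RealityKit

func playAmbience(on entity: Entity) throws {
    let resource = try AudioFileResource.load(named: "ambience.mp3",
                                              inputMode: .spatial,
                                              shouldLoop: true)
    let controller = entity.playAudio(resource)   // AudioPlaybackController

    // Transport control through the controller:
    controller.pause()
    controller.play()
    // controller.stop() when you are done.
}
```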
Regarding optimizations: is there support for level of detail and instancing in RealityKit?
Instancing is mostly abstracted away behind the Entity.clone() method.
Level of detail is not currently exposed as API and we’d recommend filing a feature suggestion on Bug Reporting
That said you can implement Level of Detail yourself (probably using custom Systems and Components) although we understand that may not be ideal. Please file feature suggestions regardless!
Question pageIs there a plan to have custom render passes like in SceneKit with SCNTechnique in RealityKit?
While we do not currently support custom render passes, we have support for post process effects. Please file a feature request through Feedback Assistant if your use case requires more customization 🙏
Question pageDoes RealityKit support light sources in objects – for example, if you wanted a light bulb. If so, is there documentation for this?
There are various sorts of lighting in RealityKit - you might want to start here perhaps?
(see the Cameras and Lighting section in the docs)
But it looks like we don’t support lighting in Reality Composer, unfortunately, so I’d suggest filing a feature suggestion:
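For the light bulb case specifically, a minimal RealityKit sketch might attach a point light to the bulb model (intensity and radius are illustrative values):

```swift
import RealityKit

func addBulbLight(to bulbEntity: Entity) {
    let light = PointLight()
    light.light = PointLightComponent(color: .orange,
                                      intensity: 30_000,       // lumens
                                      attenuationRadius: 2.0)  // meters
    light.position = [0, 0.1, 0]   // slightly above the bulb's origin
    bulbEntity.addChild(light)
}
```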
Question pageI’ve had some experience with Reality Composer, but for coding, I only know SwiftUI. Is it possible to create an AR App with ARKit only with SwiftUI? If so, could you share some suggestions or links on getting started?
You can use ARKit inside a SwiftUI app. You can also use RealityKit to build ARKit apps in a declarative way.
Here are the links to resources and sample code to help you get started:
Question pageWe can capture session events (namely anchors add/remove) by implementing ARSessionDelegate (not RealityKit), is it possible get similar or part of this events with RealityKit? (To avoid converting a from ARAnchor to AnchorEntity)
RealityKit exposes the ARSession through this API:
https://developer.apple.com/documentation/realitykit/arview/session
You can set the delegate on it to listen to ARKit delegate events.
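For example (keep a strong reference to the delegate object, since ARSession holds it weakly):

```swift
import ARKit
import RealityKit

final class AnchorObserver: NSObject, ARSessionDelegate {
    init(arView: ARView) {
        super.init()
        arView.session.delegate = self   // the ARSession RealityKit runs internally
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        print("ARKit added \(anchors.count) anchor(s)")
    }

    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        print("ARKit removed \(anchors.count) anchor(s)")
    }
}
```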
Question pageIn recent years I read and partially experimented with the latest "graphics" frameworks - but somehow I got lost over a cohesive developer experience of when to use which framework (and how to integrate them into a good product). There are amazing "vertical" solutions in these frameworks but I see only a few strong stories/apps/solutions around them. Does Apple have a "big picture" guide on when to use which framework, and how to interact between them?
We understand that the number of frameworks can be daunting sometimes. However as you alluded to, we try and offer "high level" frameworks to try and meet developers' needs out of the box, for example, being able to use RealityKit for rendering instead of the lower level Metal.
That said, Apple provides several tutorials and code samples to introduce developers into the various frameworks, e.g.:
Building an Immersive Experience with RealityKit
Another great resource are WWDC videos, which go back several years in order to build a solid understanding of a particular framework or technology.
Question pageDoes ARKit or RealityKit support rigid body physics defined in a USD file?
ARKit doesn’t support physics but rather detects the surrounding scene to allow RealityKit to handle virtual objects. RealityKit does support rigid body physics and a good place to start looking is at the physics APIs here:
Preliminary_PhysicsRigidBodyAPI
Question pageResource Loading
In the keynote, there's a mention about a Background API in Metal. Please share documentation/resources link
Are you referring to https://developer.apple.com/documentation/metal/resource_loading?
Question pageRoomPlan
Is there a way to localize against a scanned room from the Room Plan API (via ARKit) so that it could be used for example to setup a game in your room and share that with other people?
No there is no re-localization in RoomPlan. But we expose the ARSession
so you could fallback to ARKit for re-localization.
Is there currently a built-in way or example of a way transform a CapturedRoom from RoomPlan into a ModelEntity or other type of RealityKit entity? Instead of only the exported USDZ file?
I don’t believe there is a built in way, but loading a USDZ into a RealityKit scene as a ModelEntity is very simple
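A minimal sketch of that round trip (untested; the temporary file name is arbitrary):

```swift
import Foundation
import RoomPlan
import RealityKit

func makeEntity(from capturedRoom: CapturedRoom) throws -> ModelEntity {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("room.usdz")
    try capturedRoom.export(to: url)                   // RoomPlan's USDZ export
    return try ModelEntity.loadModel(contentsOf: url)  // flattens into one entity
}
```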
Question pageIn the State of the Union, there is reference to `ScanKit` alongside the mention of `RoomPlan`. Is `ScanKit` a SDK, or if that the same thing as `RoomPlan`?
RoomPlan is the name of the SDK. You’ll want to refer to those APIs as RoomPlan instead of ScanKit.
Question pageThis question may be better suited for tomorrow's #object-and-room-capture-lounge, but is the output `CapturedRoom` type able to be modified prior to export to USDZ? For example, could I remove all `[.objects]` types, and leave just walls/doors, or change the texture of a surface?
Yes, please ask this during the object capture lounge tomorrow. But you should be able to modify after export and re-render.
You would need to use the RoomCaptureSession API and subscribe to a delegate to get those updates which contain the Surfaces and Objects. You can then process that data and render it as per your liking.
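A minimal, untested sketch of that delegate subscription (only the update callback is shown; how you filter and render the surfaces and objects is up to you):

```swift
import RoomPlan

final class RoomObserver: NSObject, RoomCaptureSessionDelegate {
    let captureSession = RoomCaptureSession()

    func start() {
        captureSession.delegate = self
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Keep the walls/doors, ignore room.objects, or restyle them as you like.
        print("Walls: \(room.walls.count), doors: \(room.doors.count), objects: \(room.objects.count)")
    }
}
```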
Question pageWhat’s the maximum dimensions RoomPlan support?
The recommended maximum size of the room is 30 x 30 feet.
Question pageYono's Note: This is about 9 x 9 meters.
When we have a RoomPlan scan, can we use it next time as an anchor so we can always Paint Model in same place?
RoomPlan is not an ARAnchor in the current design. Thanks for the suggestion. We will take it into consideration.
From a non-Apple developer:
I created a demo where custom ARAnchors are created for RoomPlan objects. The same could be done for surfaces and then saved to a world map:
https://github.com/jmousseau/RoomObjectReplicatorDemo
Question pageSample Code
Hello there, I am someone who is still fairly novice with Reality / AR Kit. And I want to ask what is the best way to implement Multiuser AR experiences. I’ve been thinking on creating an AR app that would use this feature to allow multiple users to view a single AR view (e.g., multiple users seeing the same rendered model from their own perspectives).
Multiuser AR experiences can be created using the SynchronizationComponent
This is a good tutorial (along with sample code) on building collaborative AR sessions between devices:
https://developer.apple.com/documentation/arkit/creatingacollaborative_session
Question pageHello, for artist/designer only experienced with Reality Composer with no code, is there any suggestion and resources on getting started with RealityKit to make more advanced AR experiences?
Hi! We have a number of WWDC sessions covering RealityKit and Reality Composer which is a great place to start.
There’s also a great guide on building a SwiftStrike game: SwiftStrike: Creating a Game with RealityKit
Question pageFrom an AR design perspective, what is best for knocking down objects? Say in a game where you knock down blocks, is it better to have the user run the device through the blocks, tap the blocks, or press a button to trigger something to hit the blocks?
It depends which approach is best — each has a set of pros and cons based on what you want out of the experience.
It can be compelling to run through AR blocks if you want to emphasize lots of user motion in an experience and the scale of the experience is quite large — good for apps that can take advantage of wide open spaces.
Tapping them is more immediate and indirect so if you wanted to destroy a tower quickly or something like that then that would be the way to go — and I could see that being very satisfying to trigger many physics objects to react at once.
I think the same would apply to a button press, it’s an indirect way to trigger it if the experience requires rapidly knocking them down.
Overall I think it’s up to what you want the experience to be, and maintaining internal consistency with other interactions within the app.
SwiftStrike and SwiftShot are great example apps that use similar techniques.
Question pageWe want to play with the depth map. Is it possible to get the LiDAR camera position with the depth map? We've tried using the wide camera position and it doesn't work, because the wide camera position is not the same as the depth map's camera position.
The depth map surfaced through the Scene Depth API does align with the wide angle camera and should correspond to the camera transform available through the ARFrame.
Here is a sample code that generates a colored point cloud by combining the wide angle camera image and depth map:
Displaying a Point Cloud Using Scene Depth
If you still see some issues, I recommend filing a bug through the feedback assistant at Bug Reporting
Question pageAre there resources on how to generate a texture for the mesh generated by ARKit ?
We do not have any resources for this.
You should be able to use the wide angle camera and camera transform to generate texture maps for the meshes but unfortunately we do not have any resources or sample code showing that.
We do have this sample code showing how to generate colored point clouds using the scene depth API, hope it is of some help.
Displaying a Point Cloud Using Scene Depth
Question pageWhen we have a RoomPlan scan, can we use it next time as an anchor so we can always Paint Model in same place?
RoomPlan is not an ARAnchor in the current design. Thanks for the suggestion. We will take it into consideration.
From a non-Apple developer:
I created a demo where custom ARAnchors are created for RoomPlan objects. The same could be done for surfaces and then saved to a world map:
https://github.com/jmousseau/RoomObjectReplicatorDemo
Question pageMight there be more example projects showcasing pure Metal with ARKit? SceneKit is cool, but admittedly, I'd love to see more low-level examples. :) Alternatively, is anyone working on some open source projects showcasing something like this? I think it would be a big win for Apple-platform development to build-up a lot more examples.
Thanks for the suggestion. Here are some existing sample code that uses Metal with ARKit:
- Displaying a Point Cloud Using Scene Depth
- Creating a Fog Effect Using Scene Depth
- Displaying an AR Experience with Metal
In recent years I read and partially experimented with the latest "graphics" frameworks - but somehow I got lost over a cohesive developer experience of when to use which framework (and how to integrate them into a good product). There are amazing "vertical" solutions in these frameworks but I see only a few strong stories/apps/solutions around them. Does Apple have a "big picture" guide on when to use which framework, and how to interact between them?
We understand that the number of frameworks can be daunting sometimes. However as you alluded to, we try and offer "high level" frameworks to try and meet developers' needs out of the box, for example, being able to use RealityKit for rendering instead of the lower level Metal.
That said, Apple provides several tutorials and code samples to introduce developers into the various frameworks, e.g.:
Building an Immersive Experience with RealityKit
Another great resource are WWDC videos, which go back several years in order to build a solid understanding of a particular framework or technology.
Question pageIs it possible to do perspective correction in ARKit using the captured depth map? Like on the continuity camera "desk view" for example
Glad you’re also a fan of the new desk view feature. There are potentially two solutions to this:
- Do a single perspective projection for the whole image
- Use a per-pixel correction like you suggested
Both come with their own benefits and drawbacks. Please check out our documentation for implementing the second approach:
Displaying a Point Cloud Using Scene Depth
Question pageUsing additional cameras in ARKit - are there any resources to show how this is setup?
ARKit allows streaming video from only one camera at a time. Which camera is used is determined by your configuration (e.g. ARFaceTrackingConfiguration will use the front facing camera, ARWorldTrackingConfiguration will use the back wide camera).
You can, however, enable face anchors detected by the front camera in an ARWorldTrackingConfiguration with userFaceTrackingEnabled. Vice versa, you can enable isWorldTrackingEnabled in an ARFaceTrackingConfiguration to benefit from 6DOF world tracking.
Check out this developer sample:
Combining User Face-Tracking and World Tracking
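In code, that combination looks roughly like this (a sketch; check device support first):

```swift
import ARKit

func runWorldTrackingWithFaceAnchors(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsUserFaceTracking {
        configuration.userFaceTrackingEnabled = true   // face anchors from the front camera
    }
    session.run(configuration)

    // The inverse also works: an ARFaceTrackingConfiguration with
    // isWorldTrackingEnabled = true gives you 6DOF device pose.
}
```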
Question pageScanKit
In the State of the Union, there is reference to `ScanKit` alongside the mention of `RoomPlan`. Is `ScanKit` a SDK, or if that the same thing as `RoomPlan`?
RoomPlan is the name of the SDK. You’ll want to refer to those APIs as RoomPlan instead of ScanKit.
Question pageSF Symbols
I’d love to have SF Symbols renderable in AR! It actually works with RealityKit on macOS by copy and pasting the symbols, but not available in the system font on iOS.
You may want to check with the SF Symbols team to confirm this is not possible yet, and also file a feature request on feedback assistant.
A bit of a hack solution, but you may be able to get this to work by drawing the symbol to a CGImage and passing that image in as a texture.
SharePlay
I would like to know if it's possible to use SharePlay with a ARKit app? When I try there is no video on the FaceTime call if the back camera is started. Is it possible to have both cameras at the same time (front for FaceTime and back for my AR app)?
ARKit configures the cameras according to the selected configuration. Capturing from another camera while an ARKit session is running is not supported.
Question pageSkeleton
Are there tools that can be used to rig skeletons for USD characters? I have not found anything that works?
Yes, there are various third-party Digital Content Creation (DCC) tools that let you create skeletons, and Reality Converter lets you convert other file formats with skeletons to USD. Several of these DCC tools can help you create rigged skeletons for characters exported to USD.
Question pageSnapshot
Can I render a snapshot of only the virtual content in RealityKit? Something similar to the snapshot functionality in SceneKit?
Yes, you can use ARView.snapshot(...)
If you want, you can change the background of the ARView there:
Is it possible to take a snapshot of only the virtual content and a snapshot of only the real content like in SceneKit?
That’s a good question.
I think you can get some of the way there via the ARKit APIs to get the current frame.
You can also toggle the mode of an ARView to switch it to .nonAR view - then use ARView.snapshot() to grab a snapshot of the virtual content. And then switch it back.
However, I don’t believe that would give you exactly what you want - I think the ARView snapshot would not necessarily have a transparent background (if that’s what you need). And even then the performance of this may not be great.
You could also try setting the Environment background color to something with 100% alpha.
I’d suggest filing a feature request for this with Bug Reporting
Question pageSpatial
Is there a talk that goes into any detail about using the new Spatial framework and how it works with ARKit, SceneKit, and/or RealityKit?
There is no dedicated talk about the Spatial framework. It provides core functions that can be used with any 2D/3D primitive data.
Question pageSpatial Audio
Any guidance on how to build a bridge between ARKit and Spatial Audio? Say you're viewing an object and the audio evolves as you change the object's perspective...
We do not have a sample code that uses ARKit together with spatial audio (PHASE). However, this is a great question, can you please send us a request through Bug Reporting
Question pageStream
What is the recommended way to add live stream or capture capabilities with RealityKit? Do we need to build frame capture and video writers with AVFoundation? A higher level API would be a better fit for RealityKit.
I would recommend using ReplayKit or ScreenCaptureKit to record your app screen to stream / share
Question pageSwiftUI
I noticed the new beta class "ImageRenderer" for SwiftUI, allowing SwiftUI views to be rendered into a static image and be used as a texture in ARKit. Will there be an interactive version of displaying SwiftUI views in ARKit?
We don’t discuss future plans, but gathering developer feedback is important to us so we’d ask you to post your request to
Question pageI need SwiftUI Views in my RealityKit experience...please and ASAP.
You can host RealityKit content inside SwiftUI views with UIViewRepresentable
If you’re asking if you can use SwiftUI content within RealityKit - there is no direct support for that at present and we’d ask you to file a feedback item explaining your use-case for that feature.
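A minimal sketch of the UIViewRepresentable route:

```swift
import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Configure the session and add anchors/entities here.
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        // Push SwiftUI state changes into the ARView here.
    }
}

struct ContentView: View {
    var body: some View {
        ARViewContainer().ignoresSafeArea()
    }
}
```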
Question pageI’ve had some experience with Reality Composer, but for coding, I only know SwiftUI. Is it possible to create an AR App with ARKit only with SwiftUI? If so, could you share some suggestions or links on getting started?
You can use ARKit inside a SwiftUI app. You can also use RealityKit to build ARKit apps in a declarative way.
Here are the links to resources and sample code to help you get started:
Question pageUSD
I am pretty new to Reality Composer. I would like to know how (if it is possible) to add textures to custom USD objects.
Reality Converter makes it easy to convert, view, and customize USDZ 3D objects on Mac. For more information, visit:
https://developer.apple.com/augmented-reality/tools/
Question pageIs there a way of exporting a Reality Composer scene to a .usdz, rather than a .reality or .rcproject? If not, what are your suggested ways of leveraging Reality Composer for building animations but sharing to other devices/platforms so they can see those animations baked into the 3D model?
Yes, on macOS, you can open [from the menu bar] Reality Composer ▸ Settings [or Preferences on older versions of macOS] and check Enable USDZ export
Question pageAre there guidelines or best practices for exporting a RealityKit scene to a USDZ? Is this possible? I’ve seen just a little about the ModelIO framework. Is this the tool we should be using?
I don’t think we have any guidelines about this, since exporting/saving a scene is not supported by the current APIs. ModelIO seems like a reasonable solution to me, but you might also want to file a feature request for this on Feedback Assistant.
Question pageWhat are some ideal non-product examples of good USDZs
There are great USDZ examples on Quick Look Gallery
For example you have the Lunar Rover from For All Mankind
We've also added new documentation to help you generate better USD assets here:
Creating USD files for Apple devices
Question pageIs there currently a built-in way or example of a way transform a CapturedRoom from RoomPlan into a ModelEntity or other type of RealityKit entity? Instead of only the exported USDZ file?
I don’t believe there is a built in way, but loading a USDZ into a RealityKit scene as a ModelEntity is very simple
Question pageThis question may be better suited for tomorrow's #object-and-room-capture-lounge, but is the output `CapturedRoom` type able to be modified prior to export to USDZ? For example, could I remove all `[.objects]` types, and leave just walls/doors, or change the texture of a surface?
Yes, please ask this during the object capture lounge tomorrow. But you should be able to modify after export and re-render.
You would need to use the RoomCaptureSession API and subscribe to a delegate to get those updates which contain the Surfaces and Objects. You can then process that data and render it as per your liking.
Question pageMany of you know .glb file format (android's scene-viewer) support compression like draco. Any planning update for compress .usdz files?
I would suggest filing an enhancement request on feedback assistant for this
Question pageIt seems a bit weird that there's currently three different implementations of USD at use across iOS / Mac. Are there plans to consolidate those into one to make testing and verification of assets across platforms easier? The shared feature subset is pretty small, resulting in less-than-ideal products for clients.
There are different USD renderers across our platforms, but each serves a different purpose.
Here is a developer document that explains these different USD renderers and what their feature sets are
Creating USD files for Apple devices
Question pageAny thoughts about making USD/USDZ files with particle effects? Things on fire/sparking etc?
This is not currently possible to do in a USD, but you should submit the idea to https://feedbackassistant.apple.com.
You can however do some particle effects in an app by using RealityKit's CustomShaders.
Depending on how complex your effect is, you can also bake your particle effects to a regular mesh + bones animation ✨
In many cases you can also create a pretty convincing effect just by scaling/rotating a few planes. Example link (no USDZ behind that right now, but you get the idea - this is just two simple meshes for the particles)
Question pageIs there a simple way to create a 3D object with a custom image as a texture? Reality Composer only allows a material and a color, and without that, I'll have to dip into a far more complex 3D app. I'd really, really like to use USDZ more in Motion, for pre-viz and prototyping, but without texture editing it's quite limited. Have I missed something? :)
There are various third-party DCCs with great USD support that let you create complex 3D object with textures and export as USD. You can then use Reality Converter to convert those to USDZ to import into Motion.
Another approach: three.js (web render engine) can actually create USDZs on the fly from 3D scenes. A colleague used that recently for USDZ AR files with changeable textures on https://webweb.jetzt/ar-gallery/ar-gallery.html
Also take a look at the Explore USD tools and rendering session tomorrow. You can now change material properties in Reality Converter!
Another thing that might help for making quick adjustments: the browser-based three.js editor at https://threejs.org/editor.
Question pageReality Composer is great, but our team of 3D asset modelers has found it easier to sculpt characters in Zbrush. Do ARKit and RealityKit accept models created in Zbrush, or are there intermediate steps best for preparing a model for Apple platforms? (KeyShot, etc.)
Yes, if you can export your assets to FBX, glTF or OBJ, you can convert them to USDZ using Reality Converter, which is compatible with ARKit and RealityKit
Question pageAre there tools that can be used to rig skeletons for USD characters? I have not found anything that works?
Yes, there are various third-party Digital Content Creation (DCC) tools that let you create skeletons, and Reality Converter lets you convert other file formats with skeletons to USD. Several of these DCC tools can help you create rigged skeletons for characters exported to USD.
Question pageIs Reality Composer appropriate for end-users on macOS? We'd like to export "raw"/unfinished USD from our app then have users use Reality Composer to put something together with multimedia.
You can assemble different USDZ assets together to build out a larger scene in Reality Composer and add triggers and actions to individual assets within the project
Question pageIs there a way to modify ModelEntities loaded from an .usdz file on a node basis? E.g. show/hide specific nodes?
Yes, if you load the USDZ with Entity.load(...) or Entity.loadAsync(...) you can traverse the hierarchy and modify the individual entities.
You’d want to use Entity.isEnabled in this instance to hide/show a node.
Note that .loadModel will flatten the hierarchy whereas .load will show all entities.
What’s the easiest way to add user interactions (pinch to scale, rotation, transform) to an Entity loaded from a local USDZ file in RealityKit?
You can use the installGestures function on ARView. Keep in mind that the entity will need to conform to HasCollision.
To do this you could create your own CollisionComponent with a custom mesh and add it to your entity, or you could simply call generateCollisionShapes(recursive: Bool) on your entity. Putting it all together, you can use .loadModel/.loadModelAsync, which will flatten the USDZ into a single entity. Then call generateCollisionShapes and pass that entity to the installGestures function. This will make your USDZ one single entity that you can interact with.
Many aspects of USD are open source. Could Reality Composer also be Open-Sourced so that members of the community could work on features?
Hey there, we’d definitely be interested in hearing more about your idea.
I’d suggest submitting the suggestion at Bug Reporting
Question pageWith USDZ content, what's the best way to link to an external website or take users to a product landing page?
If you have your USDZ content on the web you can check out the AR Quick Look functionality for such things at:
Adding an Apple Pay Button or a Custom Action in AR Quick Look
As far as I know there isn’t currently a way to do such a thing directly from a USDZ sent from iMessage, but I can pass that request along.
Question pageIs there a means of exporting a USDZ file (either from Reality Composer, Cinema 4D, etc., or programmatically), with a video texture already applied?
There’s no support for that in Reality Composer currently. As always a feature request filed on Bug Reporting would be most appreciated.
There’s also no method to export USDZ from RealityKit and again feature requests appreciated. Thank you!
Question pageCan I place a model in a target, such as a cover of a book or a QR, so that it doesn't move from that position by just using USDZ? and how could I achieve this?
You can use Reality Composer to create a scene attached to an image anchor. You can then export the scene to a USDZ or a Reality File.
See Selecting an Anchor for a Reality Composer Scene
Question pageIs it possible to control audio media in USDZ (i.e. pause, skip, load new audio file) with a scene / behavior (using Reality Composer or other tool)?
Currently Reality Composer does not support this. This sounds like a great feature request and we would appreciate if you can file feedback through Feedback Assistant.
If you are willing to jump into code…
You can use the AudioPlaybackController returned from the playAudio API to play, pause, etc. You can also use AudioFileResource to add/replace audio on entities.
Does ARKit track which version of USDZ Is in use? I’m interested in using tools from multiple providers in my pipeline and I want to verify the format is consistent through workflow.
ARKit itself has no notion of rendered content. Content (USDZ) is commonly handled by the rendering engine on top of ARKit like RealityKit, SceneKit, Metal, etc.
In order to learn more about USDZ and how to efficiently use it we recommend this talk.
Question pageWe'd like to use an object both as a source for a USDZ based on the PhotogrammetrySession and as an ARReferenceObject, so that we can overlay information at the same position on both the real object and the created model.Is there any guidance on how to align these coordinate systems, e.g. by aligning the point clouds from the photogrammetry session and reference object? Or can we make assumptions on the origin of the resulting USDZ from the PhotogrammetrySession?
Creating a model for Object Detection and creating a textured mesh with Object Capture are two different use cases with separate workflows, we do not offer a tool to convert from one to another. That sounds like a great use case though, I encourage you to file a feature request.
Question pageDoes ARKit or RealityKit support rigid body physics defined in a USD file?
ARKit doesn’t support physics but rather detects the surrounding scene to allow RealityKit to handle virtual objects. RealityKit does support rigid body physics and a good place to start looking is at the physics APIs here:
Preliminary_PhysicsRigidBodyAPI
Question pageVideo
Is there a suggested manner of writing ARKit/RealityKit experiences to a video file? I'm current using RealityKit 2's post-processing to convert the source `MTLTexture` to a `CVPixelBuffer`, and writing that to an `AVAssetWriter`, but this occasionally ends up leading to dropped frames or random flickers in the video.
We don’t currently have a recommend method for doing this and as such would love to see a feedback item explaining what you need and a use case explaining it. That would be wonderful.
That said your method should in theory work and we’d also love to see feedback item describing the issues you’re seeing.
Question pageIs there a way to use video textures in Reality Composer?
Video textures are currently not supported through the Reality Composer UI. However, if your .rcproj is part of an Xcode project, you can use the RealityKit VideoMaterial API to change the material of your object in the scene at runtime.
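A quick sketch of that runtime swap ("intro.mp4" is a placeholder video bundled with the app):

```swift
import AVFoundation
import RealityKit

func applyVideoTexture(to modelEntity: ModelEntity) {
    guard let url = Bundle.main.url(forResource: "intro", withExtension: "mp4") else { return }
    let player = AVPlayer(url: url)
    let videoMaterial = VideoMaterial(avPlayer: player)
    modelEntity.model?.materials = [videoMaterial]
    player.play()
}
```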
Question pageWhat is the recommended way to add live stream or capture capabilities with RealityKit? Do we need to build frame capture and video writers with AVFoundation? A higher level API would be a better fit for RealityKit.
I would recommend using ReplayKit or ScreenCaptureKit to record your app screen to stream / share
Question pageIs there a capture video for ARView the way there is a take snapshot()? I see there is 4k video being hyped - will this include the ability to let users take video recordings?
There’s no API in RealityKit to capture video. That said there are system level APIs to capture screen recordings and I wonder if that would be useful for you:
I’d suggest filing a feature request with your use-case. Thanks!
Question pageIs taking the output MTLTexture from RealityKit 2's `postProcessing` pipeline suitable for writing to an AVAssetWriter, streaming via RTMP, etc?
“Maybe” 🙂
So you can certainly take MTLTextures and convert them (if they’re configured correctly) into CVPixelBuffers for AVFoundation to consume.
That said it’s really not the intended use case of RealityKit's post processing functionality and I wouldn’t be surprised if either it doesn’t work as you’d expect or if we break you in the future.
Sounds like a great feature request though - Bug Reporting
Question pageWhat's the difference between ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing and recommendedVideoFormatFor4KResolution?
recommendedVideoFormatForHighResolutionFrameCapturing is used for capturing high resolution still images while the session is running.
For 4K video, you should use recommendedVideoFormatFor4KResolution. Note that this feature is only supported on iPad with M1.
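Opting in might look like this (a sketch; the format is only available on supported hardware, so fall back gracefully):

```swift
import ARKit

func run4KIfAvailable(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if let format = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
        configuration.videoFormat = format   // 4K @ 30 Hz where supported
    }
    session.run(configuration)
}
```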
Question pageDoes setting ARKit to use 4K resolution affect the battery longevity? Does it increase the risk to get the device too hot, even if the fps is limited at 30 fps instead of 60 fps? Is there a way to get 60 fps at 4K resolution?
Yes, using 4k resolution may result in more power being consumed. It may also result in thermal mitigation engaging to keep the device from getting too hot, which may impact performance. At the moment, we are only supporting 4k @ 30hz.
Question pageWhen using the new 4K resolution in ARKit for a post-production (film/television) workflow, what is the suggested way to take the AR experience and output to a video file?
To capture and replay an ARKit session, see an example here:
Recording and Replaying AR Session Data
If you want to capture video in your app in order to do post processing later, you could use and configure an AVAssetWriter to capture a video.
We also provide a camera frame with every ARFrame, see ARFrame.capturedImage.
ARFrame.capturedImage is just the ‘clean slate’, it doesn’t contain any virtual content rendered on top of it. If you are doing your own rendering and your Metal textures are backed by IOSurfaces, then you can easily create CVPixelBuffers using the IOSurfaces and then pass those to AVFoundation for recording.
Video feed is always overexposed using ARKit. Trying to enable HDR for ARSession doesn't seem to work. Setting videoHDRAllowed to true on ARWorldTrackingConfiguration does not change video rendering. Also when accessing the AVCaptureDevice with ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera, activeFormat.isVideoHDRSupported returns false (on iPhone 12 Pro Max) so I cannot set captureDevice.isVideoHDREnabled to true. Also when using setExposureModeCustom and setting iso to activeFormat.minISO, the image rendered by ARKit has always a way greater exposure than when running an AVCaptureSession. The use case is for using ARKit in a Basketball stadium: the pitch always appears totally white with ARKit so we cannot see any player while with AVCaptureSession (or just the iOS camera app) the pitch and players appear clearly thanks to HDR.
Setting videoHDRAllowed means that HDR will be enabled on the formats supporting it; however this is not the case for all video formats.
In iOS 16, ARVideoFormat has a new property isVideoHDRSupported. You can filter the list of the configuration’s supportedVideoFormats to find one where isVideoHDRSupported is true, and set this format as the configuration’s videoFormat before running the session.
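A sketch of that filtering step (iOS 16 APIs; availability checks omitted):

```swift
import ARKit

func runWithHDRIfSupported(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.videoHDRAllowed = true
    if let hdrFormat = ARWorldTrackingConfiguration.supportedVideoFormats
        .first(where: { $0.isVideoHDRSupported }) {
        configuration.videoFormat = hdrFormat
    }
    session.run(configuration)
}
```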
Vision
Do any of the AR frameworks have hand tracking, and the ability to register a pinch between the thumb and pointer finger?
ARKit does not have any hand tracking feature. The Vision framework offers functionality for hand gesture detection.
Detect Body and Hand Pose with Vision - WWDC20
You may find the camera's captured images on the ARFrame and can inject this into Vision. So by combining multiple frameworks you could achieve something close to the requested feature.
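A rough sketch of such a combination, treating a small thumb-to-index distance as a pinch (the 0.05 threshold in normalized image coordinates is an arbitrary example value):

```swift
import ARKit
import Vision

func detectPinch(in frame: ARFrame) throws -> Bool {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,
                                        options: [:])
    try handler.perform([request])

    guard let hand = request.results?.first else { return false }
    let thumb = try hand.recognizedPoint(.thumbTip)
    let index = try hand.recognizedPoint(.indexTip)
    guard thumb.confidence > 0.3, index.confidence > 0.3 else { return false }

    let distance = hypot(thumb.location.x - index.location.x,
                         thumb.location.y - index.location.y)
    return distance < 0.05
}
```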
I noticed that the built-in Camera app can detect very small QR codes compared to 4K AR. Why is that? Is there a workaround?
We don’t have QR code detection in ARKit. However, you can use the Vision APIs to do QR code detection on the captured image. This VisionKit talk and article might be of interest to you:
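A short sketch of running that detection on the current frame:

```swift
import ARKit
import Vision

func detectQRCodes(in frame: ARFrame) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try handler.perform([request])
    return request.results?.compactMap { $0.payloadStringValue } ?? []
}
```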
Question pageIs there any plan to allow built-in hand and finger detection within ARKit to let the user interact with an object directly with his hands and not only though touch events on the device screen?
ARKit has no built-in hand or finger detection, but you can use Vision to track hands or detect hand poses. Here is a developer sample illustrating this:
Detecting Hand Poses with Vision
For ARKit feature requests, we encourage you to send us a report in Feedback Assistant
Question pageVisionKit
Is VisionKit / Data Scanner available in AR?
Using the data scanner via VisionKit is possible using ARKit. ARKit provides the captured image on the ARFrame. One can inject the ARFrame's captured image into the data scanner and obtain information about text.
However, the result will be two-dimensional. If the use-case is to bring the detected text into the AR world in three dimensions, one needs to estimate a transform for the 2D text. ARKit does not support this natively but does support custom anchoring.
Question pageI noticed that the built-in Camera app can detect very small QR codes compared to 4K AR. Why is that? Is there a workaround?
We don’t have QR code detection in ARKit. However, you can use the Vision APIs to do QR code detection on the captured image. This VisionKit talk and article might be of interest to you:
Question page