Running a Core ML Model on Linux
Question about running a Core ML model on Linux
Question on immediately dispatching Core ML inference as part of a display or compute shader pipeline
Question on the availability of the sample code for the image recolorization demo
Question on a performance issue while training a PyTorch model using Metal
Request to make pet and object segmentation available to developers
Question on performing face recognition on-device
Question on how to add Live Text interaction to a SwiftUI Image (see the sketch after this list)
Question on limiting VNDocumentCameraViewController to just one scan
Question on what needs to be taken into account when models may be run on older devices
Question on adding support for 4-channel no-copy buffers
Question about an error message when calling model.predict()
Question on how to specify a custom neural network structure
Question on handling Core ML compilers from different Xcode versions behaving differently across iOS versions
Question on getting per-layer performance analysis for Core ML models
Question about profiling the execution time of individual Core ML model layers
Question on dealing with operators not currently supported by Core ML
Question on why memory usage differs when loading models on different devices
Question on the availability of an API to semantically segment a photo (see the sketch after this list)
Question on using Create ML to train a drawing classifier vs using Turi Create
Question about automatic snapshots when training a model using Create ML
Question on capturing frames from additional cameras in ARKit
Question on how to track hands within ARKit (see the sketch after this list)
Question on using the same object for both detection and capture
Question on using the ultra wide camera for AR (see the sketch after this list)
Question on using ARKit together with spatial audio
Question on whether it's possible to select other cameras for use with an ARView (see the sketch after this list)
Question on getting the LiDAR camera position along with the depth maps in ARKit (see the sketch after this list)
Question on configuring image capture in ARKit
Question on observing live parameters of objects in Reality Composer
Question on optimizing rendering via support for level of detail and instancing
Question on support for custom render passes in RealityKit
Question on controlling audio media in USDZ files
Question on whether any tools support exporting USDZ files with video textures applied
Question on adding links to external websites from USD content
Question on making Reality Composer available via the macOS App Store
Question on the possibility of open sourcing Reality Composer
Question on whether there are updates to Reality Composer
Question on the availability of remote control light sources in AR Quick Look
Question on whether there will be a concurrency API for ARKit
Question on support for compressing USD files
Question on rendering SF Symbols in RealityKit
Question on using SwiftUI Views in RealityKit
Question on support for remote participation and collaboration in AR experiences
Question on whether there is a focus on making RealityKit easier to use
Question about whether interactive (as opposed to static) SwiftUI Views will be supported in ARKit
Question on instant AR tracking on devices without LiDAR
Question on suggested options for saving augmented reality experiences to video files (see the sketch after this list)
Question about memory leaks when using RealityKit
Question on when we can expect updates to Reality Composer and Reality Converter
Question on whether the differing LookToCamera behavior between AR Quick Look and Reality Composer will be fixed
Question about best practices for exporting RealityKit scenes to USDZ
Question on why scenes look darker in Reality Composer after an update to Xcode
Question as to whether there are plans to support post processing effects in Reality Composer
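
The sketches referenced in the list above follow. For the question on adding Live Text interaction to a SwiftUI Image: SwiftUI's Image has no public attachment point for VisionKit as of iOS 16, so a common workaround is to wrap a UIImageView in UIViewRepresentable and attach an ImageAnalysisInteraction. A minimal sketch; LiveTextImageView and its Coordinator are illustrative names, not Apple API:

```swift
import SwiftUI
import UIKit
import VisionKit

/// Wraps a UIImageView so VisionKit's Live Text interaction can be attached,
/// since SwiftUI's Image offers no hook for ImageAnalysisInteraction.
struct LiveTextImageView: UIViewRepresentable {
    let image: UIImage

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView(image: image)
        view.contentMode = .scaleAspectFit
        view.isUserInteractionEnabled = true
        view.addInteraction(context.coordinator.interaction)
        return view
    }

    func updateUIView(_ uiView: UIImageView, context: Context) {
        Task { @MainActor in
            // Analyze the image for text and machine-readable codes, then hand the
            // result to the interaction so the user can select and copy live text.
            let configuration = ImageAnalyzer.Configuration([.text, .machineReadableCode])
            if let analysis = try? await context.coordinator.analyzer.analyze(image, configuration: configuration) {
                context.coordinator.interaction.analysis = analysis
                context.coordinator.interaction.preferredInteractionTypes = .automatic
            }
        }
    }

    final class Coordinator {
        let analyzer = ImageAnalyzer()
        let interaction = ImageAnalysisInteraction()
    }
}
```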
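For the question on an API to semantically segment a photo: Vision does not offer general semantic segmentation, but VNGeneratePersonSegmentationRequest (iOS 15+) produces a person mask. A minimal sketch:

```swift
import Vision
import CoreVideo
import CoreGraphics

/// Returns a single-channel mask in which people are bright and the background
/// is dark, using Vision's person segmentation request.
func personSegmentationMask(for image: CGImage) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced                      // .fast, .balanced, or .accurate
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results?.first?.pixelBuffer
}
```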
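For the question on how to track hands within ARKit: ARKit on iOS does not expose hand joints directly, so a common approach is to run Vision's VNDetectHumanHandPoseRequest on each ARFrame's capturedImage. A sketch; the .right orientation passed to the request handler assumes a portrait-held device:

```swift
import ARKit
import Vision

/// Runs Vision hand-pose detection on an ARKit camera frame and returns the
/// normalized index-fingertip locations of up to two detected hands.
func indexFingertips(in frame: ARFrame) -> [CGPoint] {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2

    // ARFrame.capturedImage is delivered in landscape; .right maps it to a
    // portrait-oriented device (an assumption about how the session is held).
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,
                                        options: [:])
    do {
        try handler.perform([request])
    } catch {
        return []
    }

    return (request.results ?? []).compactMap { observation in
        guard let tip = try? observation.recognizedPoint(.indexTip),
              tip.confidence > 0.3 else { return nil }
        return tip.location   // normalized Vision coordinates, origin at bottom-left
    }
}
```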
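For the questions on using the ultra wide camera and on selecting other cameras for an ARView: from iOS 15.4, ARConfiguration.VideoFormat exposes captureDeviceType, so the supported video formats can be inspected and an ultra-wide format chosen where the device offers one. A sketch:

```swift
import ARKit
import RealityKit
import AVFoundation

/// Picks an ultra-wide video format if this device's world-tracking
/// configuration offers one; otherwise the default format is kept.
func runWorldTracking(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()

    if let ultraWide = ARWorldTrackingConfiguration.supportedVideoFormats.first(where: {
        $0.captureDeviceType == .builtInUltraWideCamera
    }) {
        configuration.videoFormat = ultraWide
    }

    arView.session.run(configuration)
}
```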
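For the question on getting the LiDAR camera position along with the depth maps: when the sceneDepth frame semantic is enabled, every ARFrame carries the depth map together with the camera transform and intrinsics captured at the same timestamp. A sketch:

```swift
import ARKit

/// Enables scene depth on LiDAR devices; the semantic is skipped where unsupported.
func configureDepth(for session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    session.run(configuration)
}

/// Reads the per-frame depth map alongside the camera pose for the same moment.
func readDepth(from frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth else { return }
    let depthMap: CVPixelBuffer = sceneDepth.depthMap    // 32-bit float depth in meters
    let cameraTransform = frame.camera.transform         // camera pose in world space
    let intrinsics = frame.camera.intrinsics             // for unprojecting depth pixels
    _ = (depthMap, cameraTransform, intrinsics)
}
```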
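For the question on saving augmented reality experiences to video files: one commonly suggested option is ReplayKit, which records the app's screen (including an ARView) and hands back a preview controller for saving or sharing. A sketch; ARRecordingController is an illustrative name, not an Apple API:

```swift
import ReplayKit
import UIKit

/// Starts and stops a ReplayKit screen recording around an AR session;
/// the preview controller lets the user trim and save the resulting movie.
final class ARRecordingController {
    private let recorder = RPScreenRecorder.shared()

    func start() {
        guard recorder.isAvailable, !recorder.isRecording else { return }
        recorder.startRecording { error in
            if let error { print("Could not start recording: \(error)") }
        }
    }

    func stop(presentingFrom viewController: UIViewController) {
        recorder.stopRecording { preview, error in
            if let error { print("Could not stop recording: \(error)") }
            guard let preview else { return }
            // Assign preview.previewControllerDelegate to dismiss the preview
            // when the user taps Save or Cancel.
            viewController.present(preview, animated: true)
        }
    }
}
```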