Pictofit / iOS SDK / 2.6.2 / Using the AR View

Using the AR View

Augmented Reality (AR) is a great way to show 3D content in a very immersive way. Our SDK provides functionality to render avatars & garments in AR based on Apple’s ARKit. Traditional on-screen UI elements don’t work well in this context since they break the impression of seeing virtual content in the physical world. Therefore, the framework provides specific UI elements like the RRCarouselRenderable to show an interface that is also part of the physical world.

Example of a photorealistic virtual avatar displayed in AR.

To display content in AR, first create an RRARView. This component implicitly creates an ARSession which you can access through the RRARView.session property, allowing you to configure ARKit to your liking. However, you must not set the ARSessionDelegate directly. If you need to receive these callbacks, register your delegate via the RRARView.sessionDelegate property instead.

To render content on top of the camera stream, access the RRARView.renderView property. See the Core Concepts section for general information on how to use the rendering engine. The camera position within the render view is automatically updated based on the tracking provided by ARKit. This means that your virtual camera will match the physical camera position of your phone which creates the effect of the virtual content being part of the physical world. To get more information on ARKit, have a look at Apple’s documentation.
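The setup described above can be sketched as follows. This is a minimal, hedged sketch, not the definitive integration: it assumes the RRARView can be created with a frame initializer, that the Pictofit SDK module import is in place (the module name is omitted here), and it uses a standard ARWorldTrackingConfiguration with horizontal plane detection, which is plain ARKit and not specific to the SDK:

import UIKit
import ARKit
// plus the Pictofit SDK module import

class ARViewController: UIViewController, ARSessionDelegate {

  var arView: RRARView!

  override func viewDidLoad() {
    super.viewDidLoad()

    // create the AR view and add it to the view hierarchy
    // (frame-based initializer assumed here)
    self.arView = RRARView(frame: self.view.bounds)
    self.view.addSubview(self.arView)

    // receive ARSession callbacks through the SDK,
    // never by setting the ARSessionDelegate directly
    self.arView.sessionDelegate = self
  }

  override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // configure the implicitly created ARSession,
    // e.g. enable horizontal plane detection for the next section
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    self.arView.session.run(configuration)
  }
}

With plane detection enabled, the ARSessionDelegate callbacks described in the next section will start reporting detected surfaces.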

Handling Surfaces

When placing content in AR, you usually want it to sit on a surface in the actual physical world (like the floor or a table). Otherwise it would just float through space, which breaks the immersive experience. ARKit automatically detects surfaces and returns them in the form of ARAnchors. When a new plane is detected, the func session(_ session: ARSession, didAdd anchors: [ARAnchor]) callback of the ARSessionDelegate is invoked. This is also the point where we can add an RRRenderable with a collider (an RRMeshCollider built from the plane geometry) to the render view so that we can later place content on these surfaces. The following sample shows how it's done.

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
  for anchor in anchors {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { continue }

    // create the renderable and set the transformation
    let planeRenderable = RRRenderable.init()
    let transformation = RRTransformation.init(transformationMatrix: planeAnchor.transform)
    planeRenderable.transformation = transformation

    // create a collider for the plane
    let planeMesh = RRMesh3D.init(arPlaneAnchor: planeAnchor)
    let meshCollider = RRMeshCollider.init(mesh3D: planeMesh)
    // attach the collider to the renderable (property name assumed)
    planeRenderable.collider = meshCollider

    // add it to the render view
    try! self.arView.renderView.add(planeRenderable)
    // keep track of the planes so that we can update the colliders later on
    self.planeRenderables[planeAnchor] = planeRenderable
  }
}

ARKit also updates and removes ARAnchors over time. Update in this case means that the detected plane is refined. Therefore, it is advisable to keep track of the planes so we can update them later on.
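Keeping the tracked planes in sync can be done in the corresponding delegate callbacks. The following sketch rebuilds the collider from the refined plane geometry; it assumes a collider property on RRRenderable and a remove(_:) counterpart to the render view's add(_:) method, so check the API reference for the exact names:

func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
  for anchor in anchors {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeRenderable = self.planeRenderables[planeAnchor] else { continue }

    // update the transformation to the refined pose
    planeRenderable.transformation = RRTransformation.init(transformationMatrix: planeAnchor.transform)

    // rebuild the collider from the refined plane geometry
    let planeMesh = RRMesh3D.init(arPlaneAnchor: planeAnchor)
    planeRenderable.collider = RRMeshCollider.init(mesh3D: planeMesh)  // property name assumed
  }
}

func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
  for anchor in anchors {
    guard let planeAnchor = anchor as? ARPlaneAnchor,
          let planeRenderable = self.planeRenderables[planeAnchor] else { continue }

    // remove the renderable from the scene and stop tracking it
    self.arView.renderView.remove(planeRenderable)  // method name assumed
    self.planeRenderables.removeValue(forKey: planeAnchor)
  }
}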

Interacting With the Real World

A very natural way to place content is by allowing the user to tap on the screen and place a renderable on the corresponding location in the real world. This can be done by intersecting the detected planes with a ray originating from the touch position. This ray can be created by calling the ray(fromViewPosition viewPosition: CGPoint) -> RRRay method on the render view.

fileprivate func getFirstIntersection(touchPosition: CGPoint) -> RRIntersection? {
  let ray = self.arView.renderView.ray(fromViewPosition: touchPosition)
  let intersections = self.arView.renderView.getIntersections(ray)
  return intersections.first
}

@objc func viewTapped(sender: UITapGestureRecognizer) {
  let location = sender.location(in: self.arView.renderView)

  if self.dragAndDropHandler != nil {
    guard let intersection = getFirstIntersection(touchPosition: location) else { return }

    for renderable in self.planeRenderables {
      if renderable.value.isEqual(to: intersection.renderable) {
        // user tapped on the ground plane - place content here,
        // e.g. the carousel shown in the next section
      }
    }
  }
}
Using the Carousel UI Element to Place Content in AR

The Pictofit SDK provides the RRCarouselRenderable for placing content. This UI element presents the available options to the user in the form of a carousel. By swiping, the user can spin it and browse through the items. The control is highly customizable and mainly provides the interaction logic. To populate it, a data source that provides the content has to be set.

let transformation = RRTransformation.init()
transformation.translation = intersection.intersectionPoint

let carouselRenderable = RRCarouselRenderable.init()
carouselRenderable.transformation = transformation

carouselRenderable.minimumItemScaleAngularDistance = 30.0
carouselRenderable.minimumItemScale = 0.6
carouselRenderable.angularItemsDistance = 35.0
carouselRenderable.visibleAngularRange = 240.0

carouselRenderable.dataSource = self.carouselDataSource
carouselRenderable.delegate = self
try! self.arView.renderView.add(carouselRenderable)

The RRCarouselRenderableDelegate provides callbacks to react to the user's interaction. Adding the selected carousel element to the scene, for example, can then be accomplished with just a few lines.

extension ViewController: RRCarouselRenderableDelegate {
  func carousel(_ carousel: RRCarouselRenderable, itemWasSelectedAt index: UInt) {
    let filePath = self.getAvatarFilePath(index)
    let avatar = RRAvatar3D.init()
    try! avatar.load(fromFile: filePath, largeObjectDataProvider: nil)
    let renderable = RRAvatar3DRenderable.init(avatar3D: avatar)
    renderable.transformation = carousel.transformation
    try! self.arView.renderView.add(renderable)
  }

  func carouselRegisteredTapOutsideBoundingBox(_ carousel: RRCarouselRenderable) {
    // e.g. dismiss the carousel when the user taps somewhere else
  }
}

Now you should be able to place and see content in AR. There is of course more to explore, like interacting with the placed content, handling the data, etc. Check out the AR View sample for a deep dive into the topic.


The RRCarouselRenderable class shows a circle on the ground by default. If you want a custom design for this circle, simply add an RRRenderable instance that represents your circle to the RRCarouselRenderable instance. When adding your custom circle renderable, keep in mind that the carousel's ground plane is the XZ plane and that the center of the carousel is (0,0,0) in the carousel's local coordinate system. The following code snippet shows how you could create a custom circle design using the RRPathRenderer class:

func createCustomCarouselCircle(carouselRenderable: RRCarouselRenderable) {

  let innerCircleRadius = 1.3 * carouselRenderable.carouselRadius
  let outerCircleRadius = 1.5 * carouselRenderable.carouselRadius
  let lineWidth : CGFloat = 0.005
  let subdivisions : UInt = 100

  // Hide the carousel's default circle:
  carouselRenderable.carouselCircleIsHidden = true

  // Create a custom circle renderable using a RRPathRenderer instance
  let circleRenderable = RRRenderable()
  let circlePathRenderer = RRPathRenderer()
  circlePathRenderer.addFilledEllipse(withCenter: CGPoint.zero, extendX: innerCircleRadius, extendY: innerCircleRadius, color: UIColor.lightGray.withAlphaComponent(0.5), subdivisions: subdivisions)
  circlePathRenderer.addEllipse(withCenter: CGPoint.zero, extendX: outerCircleRadius, extendY: outerCircleRadius, lineWidth: lineWidth, color: UIColor.yellow, subdivisions: subdivisions)

  // RRPathRenderer renders in the XY plane and we want to render the circle in the XZ plane:
  let transformation = RRTransformation()
  transformation.rotationAngles = simd_float3(-90.0, 0.0, 0.0)
  circleRenderable.transformation = transformation
  try! carouselRenderable.addChild(circleRenderable)
}


To learn more about this feature, check out the AR View sample.

© 2014-2020 Reactive Reality AG