Using the AR View
Augmented Reality (AR) is a great way to show 3D content in an immersive way. Our SDK provides functionality to render avatars & garments in AR based on the Google ARCore API. Traditional on-screen UI elements don't work well in this context since they break the impression of seeing virtual content in the physical world. Therefore, the framework provides dedicated UI elements like the RRCarouselRenderable to show an interface that is itself part of the physical world.

To display content in AR, first create an RRARView. This component implicitly creates an ARCore Session which you can access through the RRARView.arSession property, so you can configure ARCore to your liking. To render content on top of the camera stream, access the RRARView.renderView property. See the Core Concepts section for general information on how to use the rendering engine. The camera position within the render view is automatically updated based on the tracking provided by ARCore. This means that your virtual camera matches the physical camera position of your phone, which creates the effect of the virtual content being part of the physical world. For more information on ARCore, have a look at Google's documentation.
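As a minimal sketch, the standard ARCore Config API can be used through the arSession property to adjust tracking behaviour, assuming the session has already been created (see the setup below). Restricting plane detection to horizontal surfaces is just an example:
val session = arView.arSession
val config = Config(session).apply {
    // only look for horizontal surfaces like floors and tables
    planeFindingMode = Config.PlaneFindingMode.HORIZONTAL
}
session.configure(config)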
Setup Instructions
We need to add the camera permission to our AndroidManifest.xml:
<manifest ...>
    <uses-permission android:name="android.permission.CAMERA" />
    ...
</manifest>
With that done, we also need to request an installation of the ARCore Google Play Services. You can read more about that here. The following snippet finishes the application in case ARCore cannot be installed, the user interrupted the installation, etc.
private var installRequested = false

/**
 * @return `false` if ARCore is not installed yet
 */
private fun checkArCoreInstallation(): Boolean {
    try {
        return when (ArCoreApk.getInstance().requestInstall(this, !installRequested)) {
            InstallStatus.INSTALL_REQUESTED -> {
                installRequested = true
                // force a rerun of this check once ARCore is installed
                false
            }
            // the session is created later in onResume(), once this check succeeds
            InstallStatus.INSTALLED -> true
        }
    } catch (ex: Exception) {
        Toast.makeText(applicationContext, "ARCore Google Play services could not be installed!", Toast.LENGTH_LONG).show()
        this.finish()
    }
    return false
}
If you want to be more specific with the installation and availability checks for ARCore, check here.
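As a sketch of what such a check could look like, ARCore's ArCoreApk API lets you query device support before requesting an install (the helper name below is just an example):
private fun isArCoreSupported(): Boolean {
    val availability = ArCoreApk.getInstance().checkAvailability(this)
    if (availability.isTransient) {
        // the query is still in progress; a real app would re-check after a short delay
        return false
    }
    return availability.isSupported
}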
Next, we need to add the view to our activity's layout.xml:
<com.reactivereality.pictofitcore.views.gl.RRARView
    android:id="@+id/arView"
    ...
/>
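With the view in the layout, we can look it up in onCreate. This is a minimal sketch; the layout resource name is just an example, and view binding would work equally well:
private lateinit var arView: RRARView

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_ar)
    arView = findViewById(R.id.arView)
}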
The next step is to handle the view's lifecycle events and create the AR session instance during the first run of the app. We also have to request camera permissions at that point, as the RRARView draws a camera preview.
public override fun onResume() {
    super.onResume()
    // check if the arcore session has been started yet
    if (!arView.isSessionRunning) {
        try {
            // ARCore requires camera permissions to operate. If we did not yet obtain runtime
            // permission on Android M and above, now is a good time to ask the user for it.
            if (!PermissionHelper.hasPermissions(this)) {
                PermissionHelper.requestPermissions(this)
                if (!PermissionHelper.hasPermissions(this))
                    return
            }
            // request an install of the Google Play Services for AR if necessary, as they are mandatory to run an AR app
            if (!checkArCoreInstallation()) return
            // Create the session.
            arView.createSession()
        } catch (ex: UnavailableApkTooOldException) {
            ex.printStackTrace()
        }
    }
    // handle lifecycle events
    arView.onResume()
}

public override fun onPause() {
    super.onPause()
    arView.onPause()
}
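The PermissionHelper used above is a small helper, not an Android framework class. A minimal sketch handling only the camera permission could look like this:
object PermissionHelper {
    private const val CAMERA_PERMISSION = Manifest.permission.CAMERA
    private const val CAMERA_PERMISSION_CODE = 0

    fun hasPermissions(activity: Activity): Boolean =
        ContextCompat.checkSelfPermission(activity, CAMERA_PERMISSION) == PackageManager.PERMISSION_GRANTED

    fun requestPermissions(activity: Activity) =
        ActivityCompat.requestPermissions(activity, arrayOf(CAMERA_PERMISSION), CAMERA_PERMISSION_CODE)
}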
Handling Surfaces
When placing content in AR, you usually want it to sit on a surface in the actual physical world (like the floor or a table). Otherwise, it would just float through space, which breaks the immersive experience. ARCore automatically detects surfaces and returns them in the form of Trackables. If you would like to receive these callbacks, use the RRARViewListener and set it by calling RRARView.setListener(listener : RRARViewListener). This listener provides a callback each time the scene frame gets updated. This is also the point where we can add an RRMeshRenderable. An RRMeshRenderable already has an RRMeshCollider attached to it.
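Registering the listener is a single call; in this sketch we assume the activity itself implements RRARViewListener:
// e.g. right after the session has been created
arView.setListener(this)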
First, we implement the listener and override onFrameUpdated. Since ARCore does not sort these events for us, we need to do it ourselves. Therefore, we create sets for allPlanes, newPlanes and updatedPlanes.
override fun onFrameUpdated(frame: Frame) {
    // get the latest updated trackables from that frame
    val allUpdatedPlanes: Set<Plane> = HashSet(frame.getUpdatedTrackables(Plane::class.java))
    val newPlanes: MutableSet<Plane> = HashSet(allUpdatedPlanes)
    newPlanes.removeAll(allPlanes)
    val updatedExistingPlanes: MutableSet<Plane> = HashSet(allUpdatedPlanes)
    updatedExistingPlanes.removeAll(newPlanes)
    allPlanes.addAll(allUpdatedPlanes)
    // now we handle these to integrate with PictofitCore
    for (plane in updatedExistingPlanes) updatePlane(plane)
    for (plane in newPlanes) addNewPlane(plane)
}
In this example we use a data class to store relevant information:
data class PlaneData(val plane: Plane, val renderable: RRMeshRenderable, val polygonData: FloatBuffer)

// This will only be called when a completely new plane was detected
private fun addNewPlane(plane: Plane) {
    // sometimes arcore provides us with a plane with less than 3 vertices
    // as we cannot create a mesh from that, createFromARPlane will return null
    val mesh3d = RRMesh3D.createFromARPlane(plane) ?: return
    val planeRenderable = RRMeshRenderable.createWithMesh(mesh3d)!!
    // match the position and rotation of the renderable to the one of the plane
    val planeMatrix = FloatArray(16)
    plane.centerPose.toMatrix(planeMatrix, 0)
    // arcore returns matrices in column major order while the pictofitcore expects row major order, therefore we transpose it once
    val transformation = RRTransformation.createWithMatrix(RRMatrix4x4(planeMatrix).transposed())!!
    planeRenderable.transformation = transformation
    // make the plane barely visible
    // replacing the leading 1 (the alpha byte) with a 0 would make the plane fully transparent
    planeRenderable.setColor(0x10669900)
    // add the renderable to the render view
    arView.renderView.addRenderable(planeRenderable)
    // hold the plane with its renderable and vertex data, so we can look back on the information when we update it
    planeDataList.add(PlaneData(plane, planeRenderable, plane.polygon))
}
// this will only be called when an already existing plane's geometry or position was updated
private fun updatePlane(plane: Plane) {
    val data = findPlaneDataByPlane(plane) ?: return
    // arcore merges planes together but we only want the top-level planes to be in the scene
    val masterPlane = plane.subsumedBy
    // when the plane is not a top-level plane we remove it and add the top-level plane instead
    if (masterPlane != null) {
        arView.renderView.removeRenderable(data.renderable)
        planeDataList.remove(data)
        // only add the top-level plane if we are not tracking it already
        if (findPlaneDataByPlane(masterPlane) == null) {
            addNewPlane(masterPlane)
        }
    }
    // plane is a top-level plane
    else {
        // update the renderable transformation
        val planeMatrix = FloatArray(16)
        plane.centerPose.toMatrix(planeMatrix, 0)
        val transformation = RRTransformation.createWithMatrix(RRMatrix4x4(planeMatrix).transposed())!!
        data.renderable.transformation = transformation
        // update the renderable mesh in case it changed too
        if (plane.polygon != data.polygonData) {
            val mesh3d = RRMesh3D.createFromARPlane(plane) ?: return
            data.renderable.setMesh3D(mesh3d)
            val index = planeDataList.indexOf(data)
            // store the new polygon data so future comparisons use the latest geometry
            planeDataList[index] = PlaneData(plane, data.renderable, plane.polygon)
        }
    }
}
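The snippets above rely on a few members that are not shown. A minimal sketch of these declarations (the names match the ones used above):
// all planes ARCore has reported so far
private val allPlanes: MutableSet<Plane> = HashSet()
// the planes we currently visualize, together with their renderable and polygon data
private val planeDataList = mutableListOf<PlaneData>()

private fun findPlaneDataByPlane(plane: Plane): PlaneData? =
    planeDataList.find { it.plane == plane }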
ARCore also offers the possibility to add Anchors to the scene. These serve as reference points for placed objects. It is recommended to take advantage of them when placing objects that are more important than the detected planes. You can read more about Anchors here.
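For example, a renderable placed on a plane could be tied to an anchor so that ARCore keeps refining its pose as tracking improves. This is a minimal sketch; placedRenderable is a hypothetical renderable you created yourself:
// create an anchor at the centre of the plane and derive the renderable's transformation from it
val anchor = plane.createAnchor(plane.centerPose)
val anchorMatrix = FloatArray(16)
anchor.pose.toMatrix(anchorMatrix, 0)
// again, transpose the column-major ARCore matrix for PictofitCore
placedRenderable.transformation = RRTransformation.createWithMatrix(RRMatrix4x4(anchorMatrix).transposed())!!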
Interacting With the Real World
A very natural way to place content is to allow the user to tap on the screen and place a renderable at the corresponding location in the real world. This can be done by intersecting the detected planes with a ray originating from the touch position. This ray can be created by calling the rayFromViewPosition(touchPosition) : RRRay method on the RRARView.renderView.
private fun getFirstIntersection(event: MotionEvent): RRIntersection? {
    val touchPosition = PointF(event.x, event.y)
    val ray = this.arView.renderView.rayFromViewPosition(touchPosition)
    // firstOrNull avoids an exception when the ray does not hit anything
    return this.arView.renderView.getIntersections(ray).firstOrNull()
}

inner class MyTouchEventListener : GestureDetector.SimpleOnGestureListener() {
    override fun onDown(e: MotionEvent): Boolean = true

    override fun onSingleTapConfirmed(e: MotionEvent): Boolean {
        val firstIntersection = getFirstIntersection(e)
        if (planeRenderableExists(firstIntersection?.renderable)) {
            // user tapped on a plane
        }
        return true
    }
}

/*** Helper function ***/
private fun planeRenderableExists(renderable: RRRenderable?): Boolean {
    return planeDataList.find { it.renderable == renderable } != null
}
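To actually receive these callbacks, the gesture listener has to be attached to the view's touch events, e.g. in onCreate. This is a standard Android GestureDetector setup:
val gestureDetector = GestureDetector(this, MyTouchEventListener())
arView.setOnTouchListener { _, event ->
    gestureDetector.onTouchEvent(event)
    // consume the event so follow-up touch events keep being delivered to the detector
    true
}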
Using the Carousel Renderable

The PictofitCore SDK provides the RRCarouselRenderable for placing content. This UI element presents the different options to the user in the form of a carousel. By swiping, the user can spin it and browse through the available items. The control can be highly customized and mainly provides the interaction logic. To populate it, a data source has to be set which provides the content.
We have to create a custom data source class that extends the RRCarouselRenderableDataSource superclass. The carousel asks this data source for some information about the content it should display.
class ARObjectsDataSource : RRCarouselRenderableDataSource() {
    // the renderables we want to offer in the carousel
    private val carouselItems = mutableListOf<RRRenderable>()

    override fun getNumberOfCarouselItems(carousel: RRCarouselRenderable): Int {
        return carouselItems.size
    }

    // this will be called by the core to know which items to render in the carousel itself
    override fun getCarouselItemRenderableAtIndex(carousel: RRCarouselRenderable, itemIndex: Int): RRRenderable {
        // return the selected mesh that should be shown in the carousel
        return carouselItems[itemIndex % carouselItems.size]
    }
}
Now we can configure the RRCarouselRenderable to our needs:
val transformation = RRTransformation.create()!!
transformation.translation = intersection.intersectionPoint

val carouselRenderable = RRCarouselRenderable.create()!!
carouselRenderable.transformation = transformation
carouselRenderable.minimumItemScaleAngularDistance = 30.0F
carouselRenderable.minimumItemScale = 0.6F
carouselRenderable.angularItemsDistance = 35.0F
carouselRenderable.visibleAngularRange = 240.0F
// the data source has to be wrapped via the static createWithInstance factory of the superclass
// so that the core can manage the object internally
carouselRenderable.dataSource = RRCarouselRenderableDataSource.createWithInstance(ARObjectsDataSource())
carouselRenderable.setListener(this)
arView.renderView.addRenderable(carouselRenderable)
The RRCarouselRenderable.RRCarouselRenderableListener provides callbacks to react to the user's interaction. Adding the selected carousel element to the scene, for example, can then be accomplished with just a few lines:
carouselRenderable.setListener(object : RRCarouselRenderable.RRCarouselRenderableListener {
    override fun itemWasSelectedAtIndex(carousel: RRCarouselRenderable, index: Int) {
        val avatar = RRAvatar3D.createFromAsset("avatar.av3d")!!
        val renderable = RRAvatar3DRenderable.createWithAvatar(avatar)!!
        renderable.transformation = carousel.transformation
        arView.renderView.addRenderable(renderable)
    }

    override fun carouselRegisteredTapOutsideBoundingBox(carousel: RRCarouselRenderable) {
        carousel.removeFromParent()
    }
})
Now you should be able to place and see content in AR. There is, of course, more to explore, like interaction with the placed content, handling of the data, etc. Check out the AR View sample for a deep dive into the topic.