Pictofit / Android SDK / 0.13.1 / Capturing Avatars

Capturing Avatars

The SDK currently provides a module for full body capturing which allows users to create a virtual 3D avatar of themselves. Other capturing methods for 2D and 3D will be added in the near future. In any case, a Pictofit Content Service account is required to generate the avatar. Please contact our sales team to get your free trial account.

Capturing a full body avatar in 3D is very simple. The person being captured just has to stand still while another person walks three circles around them, holding the phone at different heights and angles. The phone automatically captures images and instructs the user so that the pictures are taken in an optimal way. The data is then uploaded to our Pictofit Content Service for processing.

The first thing you need to do in order to add capturing to your application is to set up the RRCaptureView.

Setup Instructions

Add the following to your <module>/build.gradle:

android {
  // ...
  // assets are compressed by default, but we do not want that for the pose detection model
  aaptOptions {
    noCompress "tflite"
  }
}

// add the required dependencies
dependencies {
  // ...

  // kotlin
  implementation "org.jetbrains.kotlin:kotlin-stdlib:1.4.20"
  implementation "androidx.core:core-ktx:1.3.2"
  implementation "org.jetbrains.kotlinx:kotlinx-coroutines-core:1.4.1"
  implementation "org.jetbrains.kotlinx:kotlinx-coroutines-android:1.3.9"

  // capturing
  implementation "androidx.camera:camera-camera2:1.0.0-beta12"
  implementation "androidx.camera:camera-lifecycle:1.0.0-beta12"
  implementation "androidx.camera:camera-view:1.0.0-alpha19"
  implementation "androidx.camera:camera-extensions:1.0.0-alpha19"

  // pose detection
  implementation "org.tensorflow:tensorflow-lite:2.2.0"
  implementation "org.tensorflow:tensorflow-lite-gpu:2.2.0"

  // used to access the Content Service API
  implementation "com.squareup.retrofit2:retrofit:2.6.3"
  implementation "com.squareup.okhttp3:logging-interceptor:4.2.1"
  implementation "com.squareup.retrofit2:converter-gson:2.6.3"

  // used by the RRUploadCaptureContentHelper
  implementation "androidx.preference:preference-ktx:1.1.1"
}

Add the following to your AndroidManifest.xml:

<?xml version="1.0" encoding="utf-8"?>
<manifest ...>

    <application .../>

    <!-- Reading and writing local storage is required by the capture session to save the taken images on the device -->
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

    <!-- The camera permission is required to take pictures in the first place and to show a camera feed preview -->
    <uses-permission android:name="android.permission.CAMERA" />

    <!-- Last but not least, the internet permission is required to upload the captured files to the Content Service -->
    <uses-permission android:name="android.permission.INTERNET" />

</manifest>


The RRUploadCaptureContentHelper, which we will use later in this tutorial, also needs to be initialized with a context.

class MainApplication : Application() {
  override fun onCreate() {
    super.onCreate()
    // initialize the RRUploadCaptureContentHelper with the application context here
  }
}

Now we can add the RRCaptureView to your fragment or activity’s layout:
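The layout entry might look like the following sketch. Note that the fully-qualified class path shown here is an assumption; take the actual path from the SDK package you integrate:

```xml
<!-- Hypothetical package path; verify it against the SDK artifact you are using -->
<com.reactivereality.pictofit.capture.RRCaptureView
    android:id="@+id/captureView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```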


To prepare the RRCaptureView, we need to bind it to the Activity's or Fragment's lifecycle and configure some of its properties. For this example we will use RRUploadStorageHelper, which can be replaced by your own RRCaptureStorageHelper implementation if needed.

We also add a RRCaptureSessionListener, so we can update our UI depending on the session’s state.

override fun onResume() {
  super.onResume()

  // permission request handling...

  // init the capture view
  captureView.post {
    captureView.initCamera(this, RRCaptureModeType.AVATAR, RRUploadStorageHelper())
  }
}
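The permission request handling elided above can be implemented with the standard AndroidX Activity Result API. The following is a minimal sketch; the launcher must be registered as a property (before the activity reaches the STARTED state), and the callback bodies are left as placeholders:

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts

// Register the launcher as a property so registration happens early enough.
private val permissionLauncher = registerForActivityResult(
  ActivityResultContracts.RequestMultiplePermissions()
) { results ->
  if (results.values.all { it }) {
    // all permissions granted; safe to initialize the capture view
  } else {
    // explain to the user why capturing cannot start without them
  }
}

private fun requestCapturePermissions() {
  permissionLauncher.launch(
    arrayOf(
      Manifest.permission.CAMERA,
      Manifest.permission.WRITE_EXTERNAL_STORAGE,
      Manifest.permission.READ_EXTERNAL_STORAGE
    )
  )
}
```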

Full Body Capturing

This should already give you a live preview to display. The capturing module tracks the person being captured throughout the process to ensure that the images are taken in an ideal way. Pose detection is active for the first 3 stages of the capturing session and provides feedback via the following event:

override fun onPoseDetected(poseState: RRPoseState) {
  // visually handle the detected pose state
}

During capturing, there are different events that we need to react to. These range from feedback, such as asking the user to move closer or back up, to notifications that a stage has been completed.

override fun onStageProgress(stage: Int, progress: Float) {
  // ...
}

override fun onStageFinished(stage: Int, stageQuality: Float) {
  // ...
}

override fun onMotionDataReceived(
  horizontalDegree: Int, verticalDegree: Int, images: List<RRCaptureImage>
) {
  // ...
}

override fun onAdditionalImageTaken(additionalImageCount: Int) {
  // ...
}

override fun onSessionFinished(captureSession: RRCaptureSession) {
  // read the capture session to get information about the session
}

Uploading the Captured Data

Once the capture session is completed, we want to upload the captured data. To do this we provide a convenience class that communicates with our web services. The main class here is the RRCaptureContentUploader, which runs the data transfer, and the accompanying RRUploadCaptureListener, which provides you with callbacks on the state of the process.

// Check out the full example on how to acquire a customerId as well as a login token
val config = RRContentUploadAPIConfig.create(/* ... */)
val params = RRUploadParameters(/* ... */)

val uploader = RRCaptureContentUploader(
  config, params, this.captureSession, this
)

The remaining steps are mainly UI related and up to your liking. The Avatar3DCapturing sample gives you a complete picture of the different steps involved in adding this feature to your application.


© 2014-2020 Reactive Reality AG