Using our Shape-from-Measurements feature, we can generate personalised mannequins for mix & match and size recommendation. From a small set of input measurements, we create a full 3D model of the user's body that allows measurements to be taken virtually anywhere on the body. Check out this demo to see the feature in action. You can also find the source code for it in our sample repository.

Fetching Options from the Server

When generating a personalised mannequin, several aspects can be configured:

  • bodyModelID There are currently two body models (one for each gender) which define the general body shape & pose of the mannequin.

  • baseShapeType The shape type refers to the basic body type (Ectomorph, Mesomorph, Endomorph). Support for custom shape types may be added in the future.

  • measurements This is the list of input measurements which define the shape of the generated mannequin.

The available options may vary depending on the configuration files used, so make sure to query them from the server. The following snippet shows how this is done. Please check the API Reference for more information on each of these parameters.

let computeServer = new Pictofit.ComputeServer(SERVER_URL, SERVER_TOKEN);

// request all available body model ids
let bodymodelIds = await computeServer.requestBodyModelIDs();
// request all available shape types
let shapeTypes = await computeServer.requestBaseShapeTypes();

// request all available measurements (identifier and default values)
let measurements = await computeServer.requestInputMeasurements(myBodyModelId);
// optionally, you can also pass a custom config file
measurements = await computeServer.requestInputMeasurements(myBodyModelId, configUrl);
JS

Generating a Mannequin

To generate your personalised mannequin, create a request of type PersonalisedMannequinRequest and provide your values for the different options. Most important are the input measurements that you provide using the setMeasurement method. These define how your mannequin will look in the end. If you don’t provide a target value for a certain measurement, the default defined by the selected body type (baseShapeType) will be used.

The SDK comes with a PersonalisedMannequinCreator component which takes care of setting up the scene (e.g. defining a background environment, lights and a camera). It's the fastest way to get started since it works out of the box without the need to configure anything.

The following snippet shows how to create and trigger a request for generating a personalised mannequin:

let info = new Pictofit.PersonalisedMannequinRequest();
info.inputMeasurementsConfig = "https://myServer/mannequinMeasurementsConfig.json";
info.bodyModelID = myBodyModelId;
info.baseShapeType = myShapeType;
info.pose = "https://pose/myPose.bm3dpose";
info.setMeasurement(new Pictofit.Measurement(myMeasurementId, myMeasurementValue));

const mannequinCreator = new Pictofit.PersonalisedMannequinCreator(computeServer, viewer);
await mannequinCreator.customise({
      personalisedMannequinRequest: info
    });
JS
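
The placeholder values in the snippet above (myBodyModelId, myShapeType, myMeasurementId) are meant to come from the options fetched earlier. As a rough sketch of how the two snippets connect, assuming the request methods return plain arrays:

// example glue code: pick values from the previously fetched options
const myBodyModelId = bodymodelIds[0]; // one of the available body model ids
const myShapeType = shapeTypes[0];     // one of the available shape types
// the identifiers accepted by setMeasurement are those returned by
// requestInputMeasurements(myBodyModelId) – see the API Reference for the exact structure
JS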

Once the request has finished, the generated mesh is added to the viewer instance as a new cascade layer, and the PersonalisedMannequinCreator instance ensures that a default material is defined for the mannequin, along with lights and a camera.

Customising the Material

The appearance of the mannequin's surface is defined by a material using our JSON configuration format. You can easily provide a custom material: simply load it beforehand and pass its name in the .customise call. A custom material could, for example, look like this:

{
  "version" : 2,
  "scene" : {
    "materials" : [
      {
        "name" : "Custom-Mannequin-Material",
        "type" : "StandardMaterial",
        "diffuseColor" : [0.5, 0.5, 0.5],
        "emissiveColor" : [0.5, 0.5, 0.5]
      }
    ]
  }
}
JSON

Now simply load it and pass its name to the creator:

// load the mannequin material before triggering the request
const materialCascadeLayerName = "MannequinMaterial";
await viewer.loadConfig("assets/mannequin-material.json", true, materialCascadeLayerName);
await mannequinCreator.customise({
      personalisedMannequinRequest: info,
      mannequinMaterialName: "Custom-Mannequin-Material",
      includedCascadeLayerNames: [materialCascadeLayerName]
    });
JS

Customising the Scene

The PersonalisedMannequinCreator also allows you to customise the whole scene. You can pass a configuration JSON or a URL to such a file using the .sceneConfig property. This config is then loaded before the generated avatar is placed within the scene. The following example shows how this is done (the configuration shown is, in fact, the default one).

mannequinCreator.sceneConfig = {
    "version": 2,
    "scene": {
      "type": "Scene",
      "defaultEnvironment": {
        "createGround": false,
        "createSkybox": false
      },
      "nodes": [
        {
          "name": "PersonalisedMannequinCreatorDefaultCamera",
          "type": "ArcRotateCamera",
          "target": "<WILL_BE_REPLACED_WITH_MANNEQUIN_MESH_NAME>",
          "useAutoRotationBehavior": false,
          "lowerVerticalAngleLimit": 0,
          "upperVerticalAngleLimit": 70,
          "horizontalAngle": 90,
          "verticalAngle": -15,
          "radius": 2.5,
          "minZ": 0.1,
          "maxZ": 50,
          "children": [
            {
              "name": "PersonalisedMannequinCreatorDefaultLight",
              "type": "DirectionalLight",
              "direction": [0, 0, 1],
              "diffuse": [0.25, 0.25, 0.25],
              "specular": [0.1, 0.1, 0.1]
            }
          ]
        }
      ]
    }
  };
JS

Accessing the Generated Mannequin Data

A generated personalised mannequin consists of a 3D model and a body model state. The former describes the geometry of the avatar whereas the latter describes the body shape and pose in detail. The geometry is required for rendering the mannequin in the browser together with the aforementioned material. The body model state is required for computing the virtual try-on. Both files should be stored and reused (e.g. with the user's profile on your website or within the browser's local storage).

For performance reasons, it's not advisable to generate a personalised mannequin on the fly right before computing a virtual try-on. The request takes some time, which can easily be saved by performing it once and caching/storing the generated information.

The .customise call returns a promise which gives you access to both of these files:

let result = await mannequinCreator.customise({
      personalisedMannequinRequest: info
    });
const bodyModelStateBlob = result.bodymodelstate;
const modelBlob = result.model;
JS

Creating a 2D Avatar

The mannequin created in this process is a full 3D model with accompanying semantic information on the body shape and pose. It can be used with the 3D virtual try-on as well as size recommendation & fit visualisation. To use the mannequin as an avatar with the 2D virtual try-on, we need to convert it first. This is fairly simple, as can be seen in the following sample:

const avatar2D = await mannequinCreator.create2DAvatar({
      mannequinRenderingSize: { width: 2000, height: 3000 }
    });
JS

Store a Converted 2D Avatar

In order to create a 2D Avatar from an asset bundle we expect the bundle to contain the following files:

/my-avatar/mesh.obj
/my-avatar/diffuse.jpg
/my-avatar/opacity.jpg
/my-avatar/backside.jpg
/my-avatar/avatar.min.avatar
CODE

Therefore you need to save the Avatar2D blobs (e.g. by uploading them to your server, or in the browser's session or local storage) with the following mapping (a sketch of one possible approach follows below):

avatar.mesh                --> mesh.obj
avatar.diffuseTexture      --> diffuse.jpg
avatar.opacityTexture      --> opacity.jpg
avatar.backsideAreaTexture --> backside.jpg
avatar.avatar              --> avatar.min.avatar
CODE
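
As an illustration, here is a minimal sketch that uploads the blobs to your own backend according to this mapping. The endpoint URL and the helper function are placeholders; the Avatar2D property names are taken from the mapping above:

// sketch: persist the Avatar2D blobs according to the mapping above
// (the upload endpoint is a placeholder – replace it with your own storage)
async function storeAvatar2D(avatar2D) {
  const files = {
    "mesh.obj": avatar2D.mesh,
    "diffuse.jpg": avatar2D.diffuseTexture,
    "opacity.jpg": avatar2D.opacityTexture,
    "backside.jpg": avatar2D.backsideAreaTexture,
    "avatar.min.avatar": avatar2D.avatar
  };

  const formData = new FormData();
  for (const [fileName, blob] of Object.entries(files)) {
    formData.append(fileName, blob, fileName);
  }
  await fetch("https://myServer/avatars/my-avatar", { method: "POST", body: formData });
}

await storeAvatar2D(avatar2D);
JS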

Load a Stored Mannequin

To load the mannequin's mesh that you've previously stored, you can simply call the PersonalisedMannequin.createScene helper method. As input, it requires the blob of the mesh as well as the name of the material that should be used. It then creates the same cascade layer as the actual request would.

// load the mannequin material before loading the mannequin
await viewer.loadConfig("assets/mannequin-material.json");

let mannequin = new Pictofit.PersonalisedMannequin(computeServer, viewer);
const modelBlob = ...; // restore model blob
// this will add a new cascade layer and does not make a new request to the compute server
await mannequin.createScene(modelBlob, "Mannequin-Material", "Mannequin");
JS

The result should then look something like this, depending on your scene and request configuration:

How-To: Cache the Generated Data

It is advisable to cache the generated data and reuse it, both for performance and UX reasons. One way to do this is to store it in the current browser session. The following sample shows how this can be done, using the bodyModelState as an example:

// setup common entrypoints
const storage = window.sessionStorage;
const STORAGE_KEY = "bodyModelState";

// save the bodyModelStateBlob
const blobUrl = await blobToUrl(bodyModelStateBlob);
storage.setItem(STORAGE_KEY, blobUrl);

// helper function to convert our blob object into a less complex url string
// that we can later fetch to get our blob back
async function blobToUrl(blob) {
    const reader = new FileReader();
    await new Promise((resolve, reject) => {
        reader.onload = resolve;
        reader.onerror = reject;
        reader.readAsDataURL(blob);
    });
    return reader.result.toString();
}
JS

Now you can reuse this state at a later point:

// restore the bodymodelstate blob
let bodyModelBlobUrl = storage.getItem(STORAGE_KEY);
const restoredBlob = await fetch(bodyModelBlobUrl).then((r) => r.blob());
JS
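
The same pattern works for the model blob. As a hypothetical follow-up (assuming the model blob was stored under a key named "model" in the same way), the restored blob can then be fed straight into PersonalisedMannequin.createScene as shown in the Load a Stored Mannequin section:

// restore the model blob the same way (assuming it was stored under the key "model")
const modelBlobUrl = storage.getItem("model");
const restoredModelBlob = await fetch(modelBlobUrl).then((r) => r.blob());

// reuse it without triggering a new request to the compute server
// (assumes the mannequin material has been loaded beforehand, see Load a Stored Mannequin)
let mannequin = new Pictofit.PersonalisedMannequin(computeServer, viewer);
await mannequin.createScene(restoredModelBlob, "Mannequin-Material", "Mannequin");
JS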