The following paragraphs describe the core concepts and important classes of the SDK. In general, we follow best practices and common coding standards on iOS.
Our framework provides a set of container classes such as RRImage. These classes typically provide I/O functionality for their respective file types and keep the associated data in the device's memory at runtime. For some file types, there are dedicated loader classes, such as RROBJLoader.
The RRImage class represents our container format for handling images. It provides seamless conversion to and from UIImage. The conversion happens on demand and is performed only once.
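As a sketch of how this conversion might look in practice (note: the initializer and property names below are assumptions for illustration, not confirmed SDK API):

```swift
import UIKit

// Hypothetical usage: wrap a UIImage in the SDK's container format.
// RRImage(uiImage:) and .uiImage are assumed names; check the class
// reference for the actual initializer and accessor.
let uiImage = UIImage(named: "texture")!
let rrImage = RRImage(uiImage: uiImage)

// The reverse conversion happens on demand; per the text above,
// it is performed only once and subsequent accesses reuse the result.
let converted = rrImage.uiImage
```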
Renderables are objects that can be displayed by our rendering engine. The base class RRRenderable provides a .transformation property to define location, orientation, and scale. Renderables support a parent/child hierarchy, which allows you to build a scene graph. It is important to understand that the transformation of a parent renderable is applied to all of its children. To actually see a renderable, we need to add it to a render view.
```swift
let parentRenderable = RRRenderable()
let childRenderable = RRRenderable()
try! parentRenderable.addChild(childRenderable)

// The child renderable will inherit the translation from the parent.
parentRenderable.transformation.translation = simd_float3(0.0, 5.0, 0.0)
```
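To make the hierarchy visible, the root renderable still has to be added to the render view. The method name below is an assumption for illustration; consult the render view's class reference for the actual call:

```swift
// Hypothetical: addRenderable(_:) is an assumed method name on the
// render view. Adding the parent is sufficient, since children are
// drawn as part of the parent's scene-graph subtree.
self.renderView.addRenderable(parentRenderable)
```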
To interact with renderables, e.g. to select them through a touch gesture, we need to attach colliders to them. A collider is usually a simplified representation of the renderable's visual appearance; this could, for example, be a box in 3D that encloses a complex textured mesh with a high number of triangles. The idea behind this concept is to reduce the algorithmic complexity of checking which object the user touched. The user most likely won't notice whether the test was performed against the simplified collider geometry or against the complex geometry used for rendering.
```swift
let planeRegion = CGRect(x: -1, y: -1, width: 2, height: 2)
let collider = RRPlaneCollider(region: planeRegion)
renderable.attachCollider(collider)
```
To actually select a renderable from the scene, we can intersect the colliders with a ray that corresponds, for example, to the position the user tapped on the render view. RRGLRenderView provides functionality to obtain such a ray from a position within the coordinate space of the view. This ray can then be used to query the scene for objects along its path. Be aware that the ray can only intersect renderables that have a collider attached.
```swift
let ray = self.renderView.ray(fromViewPosition: touchPosition)
let intersections = self.renderView.getIntersections(ray)
```
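The returned intersections could then be inspected to resolve the selected object. The element type and property names in this sketch are assumptions, not confirmed SDK API:

```swift
// Hypothetical: assuming the intersections are ordered by distance
// along the ray and each result exposes the hit renderable via a
// .renderable property (assumed name).
if let firstHit = intersections.first {
    let selected = firstHit.renderable
    // React to the selection, e.g. highlight or move the renderable.
    selected.transformation.translation = simd_float3(0.0, 1.0, 0.0)
}
```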
Layouts are a mechanism that helps you arrange content, in the form of renderables, in a meaningful way. On top of that, they can provide additional logic or user interaction. The RRUserPhotoLayout and the RROrbitViewerLayout are two examples. To apply a layout, assign it to the .layout property of the desired render view.
```swift
let orbitLayout = RROrbitViewerLayout()
self.renderView.layout = orbitLayout
```
If you choose to use a layout for a certain application, it is usually not advisable to interact with the renderables of the respective view directly. The logic of the layout might interfere with your code and thereby produce unwanted effects.