In 3D measurement technology, intuitive visualization of measurement results is becoming increasingly important. Relevant information must be selected and processed from extensive multidimensional data streams. We develop software modules for the application-specific visualization of 3D data including metadata.
GPU programming for interactive 3D applications
3D data collection provides significant added value, for example in the inspection and monitoring of infrastructure. In general, the human-machine interface is still a screen combined with interactive input devices such as a mouse or keyboard. Technologies from the fields of augmented and virtual reality (AR/VR) now provide the technological basis for visualizing measurement data in 3D. This makes it possible to grasp the relevant measurement parameters intuitively. Simplifying operation for the user requires wide-ranging expertise in application-specific data processing and in algorithmic visualization techniques.
To ensure user comfort, the display must be smooth and free of jerky motion. This requires frame rates of at least 20 hertz. 3D data sets from measurement applications usually contain several hundred thousand 3D points. For every 2D image to be generated, these points first have to be transformed into image space. Then, a complex calculation is performed for every pixel of the resulting image to determine its color value, shading and texture. This requires huge amounts of processing power. Driven by the gaming industry, high-performance GPUs (graphics processing units) with enormous computing power are now available. This generation of GPUs enables parallel image processing on a large scale. Efficient use of the GPU forms the basis for our work in the realm of 3D data visualization. To this end, we rely on platform-independent source code.
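To illustrate the per-frame workload, the following minimal sketch projects a point cloud into image space with a simple pinhole camera. It is a CPU-side approximation in Python/NumPy of what the GPU executes in parallel for every vertex; the camera parameters and point counts are hypothetical and not taken from the source.

```python
import numpy as np

def project_points(points_world, view, focal, width, height):
    """Transform 3D points (N, 3) into 2D pixel coordinates.

    view  : 4x4 world-to-camera matrix
    focal : focal length in pixels (simple pinhole camera, an assumption here)
    """
    n = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((n, 1))])   # homogeneous coordinates (N, 4)
    cam = (view @ homog.T).T                              # points in camera space
    cam = cam[cam[:, 2] > 0]                              # keep only points in front of the camera
    # perspective division and mapping to pixel coordinates
    u = focal * cam[:, 0] / cam[:, 2] + width / 2
    v = focal * cam[:, 1] / cam[:, 2] + height / 2
    return np.stack([u, v], axis=1)

# Several hundred thousand points, re-projected for every frame (>= 20 Hz)
points = np.random.rand(500_000, 3) * 10.0
view = np.eye(4)
view[2, 3] = 15.0   # camera placed behind the scene, looking along +z
pixels = project_points(points, view, focal=800.0, width=1920, height=1080)
```

On the GPU, this transformation runs in a vertex shader for all points simultaneously, which is what makes the required frame rates achievable.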
Preparing data for real-time visualization
Initially, the raw 3D data exist as point clouds – large, unstructured collections of points without any additional information, for instance on their relationship to neighboring points or their association with surface areas. While every pixel in a 2D image holds information that appears structured to the human eye, 3D point clouds contain many holes, which makes them difficult to interpret. For this reason, our visualization software starts by meshing the data to construct surfaces that fill the holes. It searches for relationships between neighboring points in the point cloud and creates surfaces stretching between points that belong together. The surfaces usually consist of triangulated quadrilaterals, each of which is in turn composed of two triangles. The edge length of the triangles determines the mesh size of the resulting grid. Each triangle is then assigned a color that is representative of its surface section.
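The following sketch shows this meshing idea for an organized point cloud, i.e. a grid of points as delivered by many 3D sensors. The function name, the hole handling via NaN entries and the edge-length threshold are illustrative assumptions, not the actual implementation described in the text.

```python
import numpy as np

def mesh_grid_points(points, max_edge=0.05):
    """Triangulate an organized point cloud of shape (rows, cols, 3).

    Each grid cell (square) is split into two triangles; cells with missing
    points (NaN) or with edges longer than max_edge are skipped, so holes
    and depth discontinuities are not bridged.
    Returns (vertices, faces); faces index into the flattened vertex array.
    """
    rows, cols, _ = points.shape
    vertices = points.reshape(-1, 3)
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i00 = r * cols + c          # indices of the four cell corners
            i01 = i00 + 1
            i10 = i00 + cols
            i11 = i10 + 1
            quad = vertices[[i00, i01, i10, i11]]
            if np.isnan(quad).any():
                continue                 # hole in the measurement
            if np.linalg.norm(quad[0] - quad[3]) > max_edge:
                continue                 # stretched cell: likely a depth jump
            faces.append((i00, i10, i01))
            faces.append((i01, i10, i11))
    return vertices, np.array(faces, dtype=np.uint32)

# Example: a 100 x 100 measurement grid with a simulated hole
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
grid = np.dstack([x, y, np.zeros_like(x)])
grid[40:45, 40:45] = np.nan
vertices, faces = mesh_grid_points(grid, max_edge=0.05)
```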
The first reduction in data volume occurs during meshing, since the points on a surface no longer exist individually but are integrated into triangles together with information on their size and position in space. This is why meshed 3D geometries are so advantageous for 3D display in real time. In addition to the relatively low number of points (limited to several hundred thousand), these geometries also contain information on how the points are connected to each other and to surfaces.
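As a rough, purely illustrative calculation (the figures below are assumptions, not measurement data), the effect of the triangle edge length on the data volume can be sketched by comparing the raw point cloud with meshes of increasing step size:

```python
rows, cols = 1000, 500              # ~500,000 raw 3D points (illustrative)
bytes_per_point = 3 * 4             # float32 x, y, z

def mesh_footprint(step):
    """Vertex and index memory for the grid meshed with a given edge step."""
    r, c = rows // step, cols // step
    n_vertices = r * c
    n_faces = 2 * (r - 1) * (c - 1)         # two triangles per grid cell
    return n_vertices, n_vertices * bytes_per_point + n_faces * 3 * 4

raw_mb = rows * cols * bytes_per_point / 1e6
for step in (1, 2, 4):
    n, b = mesh_footprint(step)
    print(f"edge step {step}: {n:,} shared vertices, {b / 1e6:.1f} MB "
          f"(raw point cloud: {raw_mb:.1f} MB)")
```

Doubling the triangle edge length quarters the number of shared vertices, so flat regions can be represented far more compactly while the connectivity information needed for surface rendering is retained.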