Publications in Scientific Journals:

I. Mayer, C. Scheiblauer, A. Mayer:
"Virtual Texturing in the Documentation of Cultural Heritage";
Geoinformatics FCE CTU, 7 (2011), 1 - 8.



English abstract:
In the last decade the documentation of cultural heritage by means of laser range scanning and photogrammetric techniques has gained ever more importance. The amount of data collected by these means may be huge, and adequate presentation of 3D documented cultural heritage is still a challenge. For small and limited projects consisting of only a few range scans, the software provided with the laser scanner can be used for viewing and presenting the data. Large projects, consisting of hundreds of scan positions, as well as projects where more and more data are collected over time, still have to deal with a massive reduction of the 3D data for presentation. Public demonstrations in museums, as for example shown by the Digital Michelangelo project, are already state of the art. The combination of huge point-based models and mesh models with high-resolution textures in one viewer - the first type of model resulting from the data of laser range scans, the second from a photogrammetric reconstruction process - is still not available. Current viewers are mostly limited to showing models based on only one geometric primitive - either points or polygons - at a time. In the FWF-funded START project "The Domitilla Catacomb in Rome. Archaeology, Architecture and Art History of a Late Roman Cemetery", which has been running for 5 years now, 3D point data was collected for the geometrical documentation of the vast gallery system of the Domitilla Catacomb, resulting in some two billion (2 x 10^9) point samples. Furthermore, high-quality textured mesh models of the nearly 90 late Roman / early Christian paintings were generated with photogrammetric tools. In close cooperation with the Institute of Computer Graphics and Algorithms of the Vienna University of Technology, the point cloud viewer Scanopy was extended for the combined presentation of huge point clouds and high-quality textured mesh models in the same viewer. The viewer was already capable of rendering huge point clouds, so a method had to be found to manage the vast amount of texture data. We therefore integrated a virtual texturing algorithm, which allows the original photographs of the paintings taken on site to be mapped onto the mesh models, resulting in a high-quality texture for all mesh models. The photographs have a resolution of 11 megapixels; due to shortcomings in the programs used in the photogrammetric processing pipeline, we scaled them down to a resolution of 7.3 megapixels. Currently 608 of these images are used for texturing 29 mesh models. The work on the mesh models is still ongoing, and when all mesh models are completed, we will use some 2000 images for texturing about 90 mesh models. These virtually textured models can show the details of each painting in the Domitilla Catacomb. When used in a virtual walkthrough, the paintings in the catacomb can be presented to a broad audience under optimal lighting conditions, including paintings normally not accessible to the public.
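
The virtual texturing approach mentioned in the abstract is, in general terms, a paging scheme for texture data: the large set of photographs is treated as one huge virtual texture split into small pages, and only the pages needed for the current view are kept resident in a physical tile cache. The following C++ sketch illustrates that indirection on the CPU side. It is a minimal, hypothetical illustration of the general technique assuming invented names and page sizes (VirtualTexture, kPageSize, etc.); it is not code from the Scanopy viewer or from the publication.

// Minimal sketch of a virtual-texture page lookup (illustrative only).
// The virtual texture is split into fixed-size pages; an indirection table
// maps virtual page coordinates to slots in a small physical tile cache.
// Pages that are not resident are recorded so a streaming step could load
// them later (the "feedback" step in a full renderer).
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <vector>

constexpr int kVirtualPages  = 64;   // virtual texture spans 64 x 64 pages (assumption)
constexpr int kPhysicalPages = 16;   // physical cache holds 16 x 16 pages (assumption)

struct PhysicalSlot { int x, y; };   // slot coordinates inside the tile cache

class VirtualTexture {
public:
    // Translate virtual UV (0..1) into coordinates inside the physical cache.
    // Returns false if the page is not resident; the caller would then fall
    // back to a coarser mip level while the page is streamed in.
    bool lookup(float u, float v, float& physU, float& physV) {
        int pageX = static_cast<int>(u * kVirtualPages);
        int pageY = static_cast<int>(v * kVirtualPages);
        uint32_t key = pageY * kVirtualPages + pageX;

        auto it = residentPages.find(key);
        if (it == residentPages.end()) {
            missingPages.push_back(key);   // request this page from disk
            return false;
        }
        // Offset of the sample inside its page, remapped into the cache.
        float inPageU = u * kVirtualPages - pageX;
        float inPageV = v * kVirtualPages - pageY;
        physU = (it->second.x + inPageU) / kPhysicalPages;
        physV = (it->second.y + inPageV) / kPhysicalPages;
        return true;
    }

    std::unordered_map<uint32_t, PhysicalSlot> residentPages;
    std::vector<uint32_t> missingPages;
};

int main() {
    VirtualTexture vt;
    vt.residentPages[0] = {0, 0};          // pretend page (0,0) is already cached

    float pu, pv;
    if (vt.lookup(0.005f, 0.005f, pu, pv))
        std::cout << "resident, cache UV = " << pu << ", " << pv << "\n";
    if (!vt.lookup(0.9f, 0.9f, pu, pv))
        std::cout << vt.missingPages.size() << " page(s) requested for streaming\n";
}

The point of the indirection is that memory use is bounded by the size of the physical cache rather than by the total amount of photographic texture data, which is what makes texturing all mesh models from the original on-site photographs feasible.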

Keywords:
3D-modeling, visualization, texturing, out-of-core, photogrammetric reconstruction


Electronic version of the publication:
http://publik.tuwien.ac.at/files/PubDat_215132.pdf


Created from the Publication Database of the Vienna University of Technology.