Monday 2 February 2015

Letters from the Gamma Room: visiting the i3Mainz

View of historic Mainz from Mainz-Kastel; woodcut by Franz Behem, 1565 (image: Mainz City Archive)
I am spending some weeks in Mainz, Germany, at the i3Mainz, part of the University of Applied Sciences.

There are many projects going on there, led by different groups and institutions. It would take much longer than a couple of months to explore them all (so I had to restrain my curiosity as much as my nature allows). Mainly, I am here to experiment a bit with a tool they have developed over the last couple of years, the Generic Viewer (GV).

I’m not going to explain in detail how the GV works. It has been presented in various venues (including the last CAA in Paris and the DH conference in Lausanne). If you want to know more about it, you can read the project website.

What I’m going to discuss here is the relationship between the GV and my own research, and their possible interactions.

The GV allows you to semantically annotate specific elements within a virtual 3D space, derived from laser-scanner point clouds and textured with panoramic photographs. You can navigate the space (although through controlled viewpoints), and you can draw geometry (polygons) around a specific feature (a painting, a statue, anything that is relevant) to select and identify it. The selected portion of virtual space automatically receives a URI. It is then possible to:
- extract and calculate geometric information about the elements in it,
- attach information to it via linked data (see the sketch after this list).
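To make the linked-data part concrete, here is a minimal sketch, in Python with rdflib, of the kind of statements such an annotation could carry. Every namespace, URI, property name and value below is invented for illustration; the GV's actual vocabulary is defined by the project, not by me.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS, XSD

# Placeholder namespaces: NOT the GV's real vocabulary, just stand-ins.
GV = Namespace("http://example.org/gv/")
DATA = Namespace("http://example.org/iseum/")

g = Graph()
g.bind("gv", GV)

# The URI that the viewer assigns to the selected portion of space.
feature = DATA["annotation/0042"]

g.add((feature, RDF.type, GV.Annotation))
g.add((feature, RDFS.label, Literal("Wall painting, north aisle")))
# Geometric information calculated from the drawn polygon (invented value).
g.add((feature, GV.areaInSquareMetres, Literal("2.35", datatype=XSD.decimal)))
# A link out to related resources, in ordinary linked-data fashion.
g.add((feature, RDFS.seeAlso, URIRef("http://example.org/bibliography/item-42")))

print(g.serialize(format="turtle"))
```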

You can write the triples yourself and add them directly to the triplestore, or you can use a little interface that makes RDF triples read much like statements in plain English.
Concept for the Generic Viewer interface; from the project website.
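For the do-it-yourself route, the most direct path into a triplestore is usually a SPARQL 1.1 UPDATE against its endpoint. A hedged sketch, again with an invented endpoint URL and vocabulary (here via SPARQLWrapper):

```python
from SPARQLWrapper import SPARQLWrapper, POST

# Endpoint URL and property names are assumptions for illustration only.
store = SPARQLWrapper("http://example.org/triplestore/update")
store.setMethod(POST)
store.setQuery("""
    PREFIX gv: <http://example.org/gv/>
    INSERT DATA {
        <http://example.org/iseum/annotation/0042>
            gv:annotatedBy <http://example.org/people/annotation-author> .
    }
""")
store.query()
```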
You can then filter the annotations in various ways and retrieve different sorts of information, from the author of the annotation to articles and bibliography about the specific feature.
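Filtering is then just a query over those triples. For example, listing everything annotated by a given person could look like this (same caveat: endpoint and property names are made up):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

store = SPARQLWrapper("http://example.org/triplestore/query")
store.setReturnFormat(JSON)
store.setQuery("""
    PREFIX gv: <http://example.org/gv/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?annotation ?label WHERE {
        ?annotation gv:annotatedBy <http://example.org/people/annotation-author> ;
                    rdfs:label ?label .
    }
""")
# Print each matching annotation URI and its label.
for row in store.query().convert()["results"]["bindings"]:
    print(row["annotation"]["value"], "-", row["label"]["value"])
```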

I was very curious about this tool when I saw it presented. With my PhD in mind, my first two questions were, quite predictably:
* could the GV work with CAD models as well as with point clouds?
* would my ontology work on a point cloud? Even better: would my ontology work on spatial data that has not been produced by me?
Apart from these first two questions, I have been given the opportunity to say what else I would like the GV to do, and what functions it should offer. So I have started thinking about it and playing with the system, supported by a nice group of developers and spatial data geeks.

First of all, I had to find out whether it was possible to import my CAD 3D model of the Iseum into the GV. On paper, it was just not doable, for a series of reasons ranging from the technical to the conceptual. The main barrier was simply that the GV is meant to import point clouds exclusively, not 3D meshes: it accepts PTG or XYZ formats, nothing else.
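One obvious direction to explore, at least on paper, would be to turn the mesh into something the GV can swallow: sample points on the model's surfaces and write them out as plain XYZ text. A rough sketch, assuming the CAD model has been exported to a mesh format that trimesh can read (file names are placeholders, not my actual data):

```python
import numpy as np
import trimesh

# Placeholder file name; the CAD model would first have to be exported
# to a mesh format such as OBJ or PLY.
mesh = trimesh.load("iseum_model.obj", force="mesh")

# Sample points on the mesh surface so the result resembles a scan.
points, _ = trimesh.sample.sample_surface(mesh, count=1_000_000)

# XYZ is simply one "x y z" triple per line in plain text.
np.savetxt("iseum_points.xyz", points, fmt="%.4f")
```

Even if that worked, though, a cloud produced this way has no real scanner positions behind it, which brings up the second problem.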
The GV is, moreover, not designed for free exploration of the digital space. The views are constrained and always tied to the positions of the viewpoints (i.e. the positions of the scanners that generated the data). This makes it possible to derive a number of useful geometric and spatial measurements, but, on the other hand, a completely artificial space with no inherent viewpoints did not seem to fit in.
The adventure looked like it had ended before it even started. But have you ever noticed the effect the word “impossible” has on developers? Luckily, my i3Mainz colleagues Martin Unold and Alexandra Müller were up for a few experiments.