Tuesday 24 March 2015

A clash of sources: hypotheses on the digital unification of the ekklesiasterion

Fragment 1.62: Landscape with ceremony in honor
of Osiris. Archaeological Museum of Naples
Italian scholar Olga Elia had already highlighted, in 1941, that the visual documentation of the ekklesiasterion and the frescoes in the museum collection are not completely consistent. This may be due to the habit of drawing the documentation after the frescoes had already been removed (which makes the documentation of the time certainly not less precious but… well… a tiny bit less reliable). So Elia, and more recently the current director of the Archaeological Museum of Naples, Valeria Sanpaolo (1992), suggest that a better source, if we want to attempt a unification, is, paradoxically, the set of verbal accounts compiled during the excavation.

This is the reason why both scholars agree that fragment 1.62 [Landscape with ceremony in honor of Osiris] should be placed on the left side of the south wall (as the Pompeianarum antiquitatum historia records) and not on the right side of the west wall, as the engravings show. They support this assumption mainly with the presence of a framing column on the right edge of the fragment. Well, if we assume that the pattern is correct, having a fake dividing column on the right of the scene wouldn’t be much use if the image was already on the right side, would it? So, I definitely agree with Elia and Sanpaolo.

Fragment 1.67: Landscape with sacred door 
and velum. Archaeological Museum of Naples
Moreover, if fragment 1.62 were on the right side of the west wall, it would be asymmetrical with fragment 1.66 [Sacred fence with temples, statues, and square with trees, seen through an architectural scene], which, all sources agree, was located on the left side of the west wall. And Romans tended to be pretty fond of symmetry.

In addition, Elia suggested that the right side of the west wall was actually occupied by fragment 1.67 [Landscape with sacred door and velum]. The Landscape with sacred door and velum does not even appear in the engravings, but the piece had been explicitly recorded as “from the Temple of Isis” when it was catalogued. Homogeneity of style, colour and dimensions supports this theory, and fragment 1.67 is officially exhibited as part of the ekklesiasterion in the museum collection.

This, by the way, means that what Chiantarelli drew on the right side of the west wall might be pretty much completely made up (yeah, welcome to Pompeii…).

Documentation of the west wall by Chiantarelli

Superimposition of pictures of the extant fragments of frescoes on Chiantarelli's documentation,
according to Elia's (1941) and Sanpaolo's (1992) reading.

We are not done yet! Elia also claims that the south and north walls could have been accidentally swapped in the visual documentation. She bases her theory on the position of the mythological scenes. According to modern Western conventions, stories are read from left to right. This would make the frescoes representing the myth of Io awkwardly positioned, as the story ends on the left (south wall) and begins on the right (north wall). However, Sanpaolo (1992) disagrees, and remarks that there is no evidence in Pompeian frescoes to support such a constraining convention. In fact, scenes depicted on Pompeian walls (especially in a series composed of separate scenes) appear to follow a more symbolic than realistic or linear logic.

These are the pieces of the puzzle: three walls, a visual record, a written one, the work of two scholars, many fresco fragments, different theories and some disagreement. How to express all that in linked data? And how to do it in the Generic Viewer?
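I don’t have a full answer yet, but just to sketch the direction I have in mind, here is a minimal example in Python with rdflib (every name in it, from the namespace to the properties and identifiers, is a placeholder invented for illustration, not a term from my actual ontology):

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDFS

# Hypothetical namespace, not my real ontology.
EX = Namespace("http://example.org/iseum/")

g = Graph()
g.bind("ex", EX)

frag = EX["fragment_1_62"]
g.add((frag, RDFS.label, Literal("Landscape with ceremony in honor of Osiris")))

# Two competing placements, each linked to the sources that support it.
placement_a = EX["placement_south_wall_left"]
g.add((frag, EX.hasProposedPlacement, placement_a))
g.add((placement_a, EX.locatedOn, EX["ekklesiasterion_south_wall"]))
g.add((placement_a, EX.supportedBy, EX["pompeianarum_antiquitatum_historia"]))
g.add((placement_a, EX.supportedBy, EX["elia_1941"]))

placement_b = EX["placement_west_wall_right"]
g.add((frag, EX.hasProposedPlacement, placement_b))
g.add((placement_b, EX.locatedOn, EX["ekklesiasterion_west_wall"]))
g.add((placement_b, EX.supportedBy, EX["chiantarelli_engraving"]))

print(g.serialize(format="turtle"))
```

The nice thing about this pattern is that the disagreement itself is preserved: both placements stay in the graph, each with its own supporting sources, and nobody has to be declared right.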


[One more part to come...]

Sunday 22 March 2015

Where are you from?: fragments of frescoes and their spatial context

Archaeological Museum of Naples,
Fragments from the porticus of the Iseum.
The display bears no relationship to the
original position in the building.
My idea to experiment with the Generic Viewer comes from two main things that caught my attention. The first one is that, according to its presentation, the Generic Viewer was meant to support projects like IBR, in which inscriptions are considered not just as text, but as artefacts, and, more specifically, artefacts meant to be experienced in a particular space, interacting with other features and with human behaviour in that space.
The second one is the potential use of the Generic Viewer not only as a digital model of the real space, but also as a virtual environment where it is possible to test different hypotheses about the original position of some decorative features, based on geometry, scholarly expertise, and visibility analysis.


Both things, for different reasons that I’m going to explain, reminded me of some of the issues I had encountered studying the artefacts found at the Iseum, and made me wonder if the Generic Viewer was really generic enough that its functions, although developed for epigraphists, could be extended to the study of ancient Pompeian frescoes. 
Moreover, the semantic annotations allowed by the Generic Viewer give me the opportunity to model the relationships between the artefacts, the architectural environment and their documentation, all in RDF linked data (using, among others, a few elements of my ontology).

Of course, it would be too ambitious to show the results for the entire Iseum, so I had to pick a specific area. The most interesting one, in this respect, seems to be the ekklesiasterion (and that is the reason why it's the space we have prepared and textured in the Generic Viewer). Although there are other spaces of the Iseum that are rich in surviving artefacts and related documentation (e.g. the porticus), the ekklesiasterion is especially complex. Actually, sometimes 'complex' gets very close to 'pretty bizarre', even by Pompeian standards, but let me introduce this story of artists, archaeologists, kings, engineers and scholars.


The Palace of Portici in 1745
From Wikipedia
The frescoes on the walls of the ekklesiasterion, as in many other Pompeian buildings, were almost intact when uncovered. As the Iseum was one of the first buildings dug up (1764), it was still very common, at the time, to remove the most eye-catching bits of the frescoes (usually the figurative ones) and move them to the Palace of Portici. Everything else was just left in place, if not explicitly destroyed, either to enhance the value of the king’s collection or because it was believed not worthy. Another issue was that the stripped-off pieces of frescoes were transported via narrow tunnels dug into the excavation site, so there was no practical means, at the time, to transport anything too big.

Considering what happened (and what is still happening) to the frescoes that were left in place, we should be happy that some of them were removed and can now be seen and studied in the Museum of Naples (or elsewhere). However, we now have only fragments, individually framed as independent pieces of art and exhibited outside their spatial context (the building), regardless of their relationship with the other parts of the decorative pattern.
Something that sounded quite close to what the IBR project complained about when discussing inscriptions.

In a 3D model (or, to be precise, in a point cloud generated from a 3D model and imported into the Generic Viewer) it is possible to attempt one or more virtual unifications, placing the fragments (visually, or just on an informative level) where they might once have been.
But, first of all: how can we place the fragments in their original position reliably? They are no longer in situ, but they were documented at the time.


Documentation of the north wall of the ekklesiasterion
by Giovanni Morghen
Documentation of the south wall of the ekklesiasterion
by Giuseppe Chiantarelli
Documentation of the west wall of the ekklesiasterion
by Giuseppe Chiantarelli
After being harshly criticised by the classicists of the time for destroying one of the most valuable sources of knowledge about the past, the king of Naples decided to hire someone from the academy of fine arts to document the state of the frescoes before removing parts of them. The documentation of the ekklesiasterion was carried out by Giovanni Morghen (north wall) and Giuseppe Chiantarelli (west and south walls).
At the beginning of my work on Pompeii, some years ago, I thought that this documentation was a very good starting point for digital visualisation; that having professional documentation was very close to knowing how things were at the time. Which is true, to a certain extent. But, besides knowing, of course, that every human representation is intrinsically subjective, I didn't take into account that Pompeii always has an extra layer of complication (if not sheer surrealism…).

Morghen and Chiantarelli's documentation is extremely precious. It tells us about the decorative pattern around the fragments that survived. It shows us the beautiful fake architectural features that Escher would have loved so much. It reveals fascinating illusory doors leading to even more secret rooms.
But…

[part two]              

Monday 2 March 2015

Isis rises from the clouds: converting 3D meshes into point clouds

Sneak peek of my model of the Iseum in 3DSMax.
Untextured
As the Generic Viewer only accepts point clouds as inputs, we had to find a way to convert my 3D model of the Iseum into a point cloud.
I know. The idea would make many modellers (including me) cringe. And I can already hear the puzzled voice of my colleagues asking “just WHY?”. All the painstaking work I did on labelling the different elements would be lost in the process. All the layers would probably be smashed together. And what would happen to the details?
I am not going to ignore this issue, just leave it aside for the moment. We took some time just to experiment. It may not lead anywhere. It may not be useful to my research. The process of converting a 3D mesh into a point cloud may involve such a loss of information that it is unreasonable to pursue it. But we just wanted to find out.

How do you do that?
Here you can see the difference between people coming from a humanities background and people coming from a technology one. I started off fascinated by the idea of making a digital image of a virtual space. Is that possible? Can you do photogrammetry of a space that is already digital? While I was pondering these slightly surreal thoughts, and getting lost in their philosophical implications, Alexandra had already found an option in Meshlab that transforms meshes into point clouds. As simple as that.

The Iseum as point cloud in the GV. The software generated
the map and calculated the areas.
The pink dots are the viewpoints
So we exported the Iseum from 3DSMax in Collada (.dae) format (the only Max export format that can be opened in Meshlab), imported it into Meshlab, and generated a point cloud.
We tried out different point densities, and then settled on 5 million points. And there it was, my temple as a cloud!
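The conversion itself was done through Meshlab's interface, but for the script-minded the same idea can be sketched in a few lines of Python with the trimesh library (a sketch under assumptions, not the procedure we actually followed; the file names are made up):

```python
import numpy as np
import trimesh

# Load the Collada export (trimesh needs the pycollada package for .dae files).
mesh = trimesh.load("iseum.dae", force="mesh")

# Sample 5 million points uniformly distributed on the mesh surface.
points, _ = trimesh.sample.sample_surface(mesh, 5_000_000)

# Save as .xyz, one of the two formats the Generic Viewer accepts.
np.savetxt("iseum_pointcloud.xyz", points, fmt="%.4f")
```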
I was happy to see that the conversion worked less problematically than I was expecting. But, at the same time, it can’t be ignored that, even in a quite simple and regular untextured model like mine, everything looked pretty much simplified. The question of whether the benefits of the semantic annotations and the spatial calculations are worth the information loss is still open.

So, we managed to overcome the first major issue: we had a point cloud version of the virtual space. But it was still not suitable for the GV, because in a CAD model there are no fixed viewpoints.
We thought that the key was to keep treating the virtual space as a real one, and to do, virtually, all the things we would do in a material space to make the GV work. So, we placed some hypothetical viewpoints in the model and recorded their coordinates in the 3D space.

Screenshots of the virtual panorama
in the textured ekklesiasterion
Again, I was glad the experiment succeeded. There is a certain pleasure in pushing the boundaries of digital tools and software, in seeing how far you can stretch them and make them do things they were never supposed to do. On the other hand, I was losing information again. I had just turned a virtual space that is entirely explorable, through 360 degrees, into a space whose view is constrained by the position of viewpoints.

We were missing one last element to simulate the imaging of a material space in the GV: photographic panoramas, taken from the positions of our artificial viewpoints, to texture the point cloud.
Again, we tried to think of the digital space as if it were a real one. So I placed a (virtual) camera at the same coordinates as one of our viewpoints, and took sequential pictures of the space, with at least a 40% overlap (so, in the end, my idea of photogrammetry of a virtual space wasn’t that crazy…).
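If you are curious about how many shots such a panorama needs, the arithmetic is simple (the field of view here is an assumed value for illustration; I don't remember the exact setting of my virtual camera):

```python
import math

fov = 60.0      # horizontal field of view of the (virtual) camera, in degrees (assumed)
overlap = 0.40  # minimum overlap between consecutive shots

# Each new shot advances by the non-overlapping part of the field of view.
step = fov * (1 - overlap)     # 36 degrees per shot
shots = math.ceil(360 / step)  # 10 shots for a full turn
print(f"{shots} shots, rotating {step:.0f} degrees each time")
```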
To be fair, our viewpoints weren’t very strategically placed. But, luckily, I had thought of putting at least one in a sensible place: the ekklesiasterion. The chronological layer of my model I’m currently working on (the hypothetical reconstruction of the Iseum as it might have looked before 79 AD) is meant to be untextured.
I could have left it untextured in our experiments with the GV as well, and just rendered the camera view of the elements in the not-too-bad 3DSMax solid colour palette (which is a feast of purple, lime green and turquoise, not too dissimilar from my summer wardrobe).
However, I thought that the best use I could make of the GV and its features was to express the complex relationships between the walls of the ekklesiasterion, the frescoes that were found there and are now exhibited in the Museum of Naples, and the documentation of those frescoes produced at the time of the excavations.

So, I textured (quite quickly, I’m afraid) the north, west and south walls of the ekklesiasterion with a digital copy of the graphic documentation of the walls commissioned at the time by the Bourbons. On the black and white engravings (a not-good-enough picture from a copy of Elia's book) I superimposed colour pictures of the fragments now exhibited in Naples (if you think that was a straightforward task, you have never had anything to do with Pompeian documentation…).
I didn’t have an equivalent texture for the east wall (the entrance one, with the arches). I could have left it untextured but, just to simulate a minimum of homogeneity, I applied a quick black and white masonry texture to it. 

The ekklesiasterion in the GV, (almost) ready to be annotated
I froze the camera at the exact coordinates we had given the GV for that viewpoint, and moved the target of the camera along the walls, capturing the panorama as if I were doing photogrammetry. Move the target. Render the camera view. Print the screen. Repeat.
I thought of removing the roof from the model of the ekklesiasterion, to better handle the movements along the walls, and I tweaked the lights a little to get better illumination.
Actually, I was a bit worried about the light. My model is not finished yet, so I haven’t spent much time working on well-balanced, realistic lighting. I just put some standard 3DSMax omni lights where I need them while I’m modelling. I wondered if the artificiality and inconsistency of the lights in the renderings we used to build the panorama might bother the system. But I was over-worrying, and the system was definitely smarter than I thought.

So, while Alexandra is still working on the last details, it seems that my Iseum is ready to be annotated in the Generic Viewer.

Monday 2 February 2015

Letters from the Gamma Room: visiting the i3Mainz

View of historic Mainz from Mainz-Kastel;
 woodcut by Franz Behem, 1565 (image: Mainz City Archive)
I am spending some weeks in Mainz, Germany, at the i3Mainz, at the University of Applied Sciences.

There are many projects going on there, led by different groups and institutions. It would take much longer than a couple of months to explore them all (so I had to restrain my curiosity as much as my nature allows). Mainly, I am here to experiment a bit with a tool they have developed over the last couple of years, the Generic Viewer (GV).

I’m not going to explain in detail how the GV works. It has been presented at various venues (including the last CAA in Paris and the DH conference in Lausanne). If you want to know more about it, you can read the project website.

What I’m going to discuss here are the relationships between the GV and my own research, and their possible interactions.

The GV allows you to semantically annotate specific elements within a virtual 3D space, derived from laser scanner point clouds and textured with panoramic photographs. You can navigate the space (although through controlled viewpoints), and you can draw geometry (polygons) around a specific feature (a painting, a statue, anything that is relevant) to select and identify it. The selected portion of virtual space automatically receives a URI. Then, it is possible to:
- extract and calculate geometric information about the elements in it,
- attach information to it via linked data. 

You can write the triples yourself and add them directly to the triplestore; or you can use a little interface that makes RDF triples look much like statements in plain English.
Concept for the Generic Viewer interface.
From the project website
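To give an idea of what “writing the triples yourself” might look like, here is a toy example with Python's rdflib. The URI of the selected portion of space would normally be generated by the GV; here it is invented, like everything else in the snippet:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()

# The GV assigns a URI to the polygon you draw; this one is a made-up placeholder.
feature = URIRef("http://example.org/gv/feature/42")

g.add((feature, DC.creator, Literal("A. Scholar")))  # author of the annotation (placeholder)
g.add((feature, DC.description, Literal("Fresco fragment on the north wall")))
g.add((feature, DC.relation, URIRef("http://example.org/bibliography/elia1941")))

print(g.serialize(format="turtle"))
```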
You can then filter the annotations in various ways, and find different sorts of information, from the author of the annotation to articles and bibliography about the specific feature.

I was very curious about this tool when I saw it presented. Having my PhD in mind, my first two questions were, quite predictably:
* could the GV work with CAD models as well as with point clouds?
* would my ontology work on a point cloud? Even better: would my ontology work on spatial data that has not been produced by me?
Apart from these first two questions, I have been given the opportunity to say what else I would like the GV to do, what functions it should offer. So, I have started thinking about it, and playing with the system, supported by a nice group of developers and spatial data geeks.

First of all, I had to find out if it was possible to import my CAD 3D model of the Iseum into the GV. On paper, it was just not doable, for a series of reasons ranging from technical to conceptual. The main barrier was simply that the GV is meant to import point clouds exclusively, not 3D meshes. It accepts ptg or xyz formats, nothing else.
Moreover, the GV is not designed for free exploration of the digital space. The views are constrained and always refer to the positions of viewpoints (i.e. the positions of the scanners that generated the data). This makes it possible to derive a number of useful geometric and spatial data but, on the other hand, a completely artificial space that has no specific viewpoints didn’t seem to fit in.
The adventure looked like it was over before it had even started. But have you ever noticed the effect that the word “impossible” has on developers? So, luckily, my i3Mainz colleagues Martin Unold and Alexandra Müller were up for a few experiments.


Monday 17 November 2014

Routine can kill passion (and mess up your data)

Documenting what you do, step by step, sounds easy. But it is not. Think, for example, of describing your morning routine. Would you be able to? And how accurately? Let’s give it a try: you wake up, get out of bed, prepare your coffee or tea... Wait. We forgot to say that you put your slippers on. Oh, and before that, that you probably turned off your alarm. You see? The things that we do automatically can be among the most complicated to document. So, when I started documenting my work, I realised how many small (and not so small) transformations and adjustments I apply to my data, without even thinking. Then I wondered if these actions should be documented as well.
The problem is, as always, where to draw the line, and when “more information” becomes “too much information”. I have tried to keep the ontology slim, so that its complexity is not off-putting for other researchers. However, the ontology is theoretically always open to further specification, which users can decide to adopt or not.
Just to give an example, I want to mention some of the operations that virtual archaeologists, in my experience, perform so often that they might go unnoticed. 

Elements of a series: isDerivedFrom
In the real world, of course, things are all unique. If measured, all the columns in the colonnade of the Iseum would have similar but different values. I have decided that the level of granularity of my representation doesn’t require that precision. Therefore, as in many other models of ancient buildings, all my columns have been artificially assumed to be identical (and perfectly aligned). Only one has been measured on site (the one that looked better preserved), and the others have been duplicated from it. To express this process, the column subelement that has actually been measured is documented as based on hard measurements (taken by me and available online at a certain URL), while all the others are recorded as derived from other elements, i.e. from the value of the only measured one.
My measurements of the ekklesiasterion of the Iseum.
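In code-flavoured terms, the pattern looks more or less like this (an rdflib sketch; the property names and URLs are placeholders, and the number of columns is purely illustrative):

```python
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/iseum/")
g = Graph()
g.bind("ex", EX)

# The one column actually measured on site, linked to the raw measurements.
measured = EX["column_01"]
g.add((measured, EX.isBasedOn, EX["measurements/columns_2014"]))

# All the other columns are recorded as derived from the measured one.
for i in range(2, 9):  # illustrative number of columns
    g.add((EX[f"column_{i:02d}"], EX.isDerivedFrom, measured))

print(g.serialize(format="turtle"))
```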

Elements of a series: isConformedTo
Another possibility is that a series of elements, such as the arches on the east wall of the ekklesiasterion, have actually been individually measured but, for various reasons, it is not considered relevant to represent their differences visually in the model. In the case of the ekklesiasterion, my assumption is that the differences between the arches are mainly due to weathering and other accidents. And, although they were never perfectly identical in the past, my reckoning is that they were meant to look so (apart from the central one, which is wider), so I think it made sense to just model one arch and clone it four times. It is also a more economical approach from a modelling point of view.
How to represent this process in the documentation? In this case, all the arches had been measured; however, they have been «conformed» (the word is a work-in-progress label. Any better ideas? «regularised»? «normalised»?) to an average value. In the documentation, they carry an attribute whose value is the range between the lowest and the highest measured values, plus the percentage that this range represents against the whole measured value. That sounds confusing… So, an example: the four arches of the east wall of the ekklesiasterion (I have left out the wider central one) have widths between 159 and 164 cm. All four of them have, as the value of hasWidth, an average 162 cm. However, the arches (transitions) also have two further attributes: “isConformedTo: average of four (159/164)” and “hasVariation (again, the label is a work in progress): 3%”, i.e. the percentage of the variation against the whole average value: 5 cm on 162 cm.
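Or, said with code rather than with words (again an rdflib sketch, using my work-in-progress labels as invented property names):

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/iseum/")
g = Graph()
g.bind("ex", EX)

low, high = 159, 164  # measured widths of the four arches, in cm
conformed = 162       # the single averaged value used in the model
variation = round((high - low) / conformed * 100)  # 5 cm on 162 cm, about 3%

for i in range(1, 5):
    arch = EX[f"east_wall_arch_{i}"]
    g.add((arch, EX.hasWidth, Literal(conformed)))
    g.add((arch, EX.isConformedTo, Literal(f"average of four ({low}/{high})")))
    g.add((arch, EX.hasVariation, Literal(f"{variation}%")))

print(g.serialize(format="turtle"))
```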


If stating that the columns of a colonnade have not been individually measured can sound unnecessary and pedantic (and, maybe, it actually is…), conforming the values of elements that have been measured might sound like a loss of information. However, in the documentation of the 3D element there is always a link to the original measurements, in case they are needed at a different stage of the research or by other scholars.

Monday 3 November 2014

Embrace your inner Dr. Frankenstein: documenting heterogeneous sources

There are a few things I have noticed while writing the documentation of my 3D model in RDF that I had not realised before I started thinking about it.
When I started my research on the ontology, I assumed that assigning one source to each element of the 3D model would be more than enough to sufficiently document a 3D visualisation of cultural heritage.
But then I found out that a single source could not always provide all the information I was looking for. I (and possibly many others in this field) have to put together pieces of information that not only come from various archives but often have different formats, authors and histories. I know, it sounds like a terrible mess…

The ekklesiasterion of the Iseum in Pompeii, the north
wall visible between the arches of the east one.
Picture from pompeiiinpictures
For example, let’s look at the hypothetical restoration of the Iseum pre-catastrophe. If we take the north wall of the ekklesiasterion, we can say that the width of the wall has actually been measured. So, the source for that specific piece of information is the measurement taken on site, recorded by the researcher (in this case me) and available online at a certain URL. The depth of the wall, however, cannot really be measured, definitely not with the equipment I had with me. So, the value I have assigned to the depth of the north wall of the ekklesiasterion in my model is simply based on the depth of the east wall of the same room, which can be measured because it has arches in it. The guess is supported by the fact that the depth of the walls appears to be quite consistent across the entire architectural complex. So, the source for this other bit of information is another element (the east wall) that has actually been measured. Last, it is not possible to know how tall the wall was before the eruption.
For the more hypothetical elements, I have relied on Piranesi’s drawings, as they have proven to be a thorough and, all in all, acceptably reliable visualisation. Thus, the height of the north wall has yet another source.

As you can see, the problem here is that not just each element, but even each dimension of the element can have a different source (it’s not always the case, but it has happened).
 
For this reason, I have decided to enter, for each feature, transition or constraint, the attributes hasHeight, hasWidth and hasDepth, and to use them not only to express the numeric value, but also (or mainly) to connect it to the related source.
Is this level of documentation, although expressed synthetically through RDF triples, sustainable? I’m not sure yet…
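To make the pattern concrete, the north wall discussed above would get something like this (an rdflib sketch with placeholder names; the numeric values are invented, since it is the shape of the documentation that matters here):

```python
from rdflib import BNode, Graph, Literal, Namespace

EX = Namespace("http://example.org/iseum/")
g = Graph()
g.bind("ex", EX)

wall = EX["ekklesiasterion_north_wall"]

def add_dimension(prop, value_cm, source):
    """Attach a dimension that carries both its numeric value and its source."""
    dim = BNode()
    g.add((wall, prop, dim))
    g.add((dim, EX.hasValue, Literal(value_cm)))
    g.add((dim, EX.hasSource, source))

# Three dimensions of the same wall, three different sources.
add_dimension(EX.hasWidth, 500, EX["measurements/north_wall_2014"])  # measured on site
add_dimension(EX.hasDepth, 45, EX["ekklesiasterion_east_wall"])      # derived from the east wall
add_dimension(EX.hasHeight, 600, EX["sources/piranesi_drawings"])    # hypothetical, after Piranesi

print(g.serialize(format="turtle"))
```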

Boris Karloff(*) as the Creature of Dr. Frankenstein
Image from giphy
To achieve higher consistency, I could have tried to derive all the information from the richest source, which is probably Piranesi. This would have been a perfectly acceptable choice, and the outcome would have been “a hypothetical restoration of the Iseum in Pompeii according to Francesco Piranesi”.
Nonetheless, I followed a different approach. Although I don’t want to state any degree of preferability among the different sources, I have chosen to use hard measurements whenever they were available. Also, information derived geometrically from the actual remains has been considered preferable to information derived from drawings or other secondary sources. Piranesi’s data, in the end, have mostly been used for the things that cannot be measured, that I didn’t measure (for various reasons), or that no longer exist.

I know that this choice makes my model a little Frankenstein of information but, in the first place, even the most detailed elevation or cross section cannot show all the information needed to produce a 3D model that is actually visible 360° in space.
Second, my aim is not to produce a groundbreaking new hypothesis on the restoration of the Iseum, but to provide a way to connect the 3D model to its sources. From this perspective, it is actually interesting to me to see how far I can stretch the potential of my system, and to give an idea of the richness and diversity of the data virtual archaeologists deal with.


(*) trivia: the glorious actor Boris Karloff is one of King's College London's illustrious alumni.

Friday 3 October 2014

Particular to General: a round trip

A Greek philosopher and his disciples by Antonio Zucchi.
From BBC Your Paintings
When you are supposed to produce something that has to be logical and consistent, like, let’s say… an ontology, you are probably expected to have a very theoretical approach to it. Probably, your colleagues imagine you sitting at your desk, dressed like an ancient Greek philosopher, creating categories that describe reality. 
This is definitely a sensible and fruitful approach. It ensures a better level of consistency and it is likely to generate a more solid piece of knowledge modelling. 
Nonetheless, I have the feeling that what is really missing from the existing ontologies on cultural heritage is the kind of specific issue that becomes apparent only in the very process of modelling or imaging. Even for 3D modellers, it is easy to overlook some aspects of the work when they are not actually modelling but just thinking about modelling.
I will need to wear the philosopher's clothes when this phase is finished to make everything more homogeneous and (hopefully) fill the gaps.

So, I have decided to take a different approach: instead of going “from general to particular”, I’m going in the opposite direction. Looking at what I actually do, I have started drafting the information that, in my opinion and experience, is important to express in linked data.
We can call it a more experimental, or laboratory-style approach.
Deductive and inductive methods are both valuable and needed, maybe at different stages. At the moment, I am completely immersed in the second. The idea is that this will bring the ontology closer to the actual visualisation process, from the research to the production of the output.
However, the particular-to-general approach also has its drawbacks. I keep changing things when I realise that there are aspects of the process that I had neglected, or not covered in the right way. I add, change or delete attributes very often. Let’s say that I’m working by successive refinements. This obviously very much compromises the consistency of my work. I fully expect to spot quite apparent logical gaps only at the end (I bet, just after having submitted my thesis...).
So I am now (ideally) wearing my lab coat and I am fully in the experimental mood. I have started with the relatively easiest of my layers, the one that visualises the Iseum as it might have been in the period before the eruption (but after its restoration). I think I have started to identify quite clearly some of the recurrent processes in virtual archaeology and 3D visualisation.

Of course, everything is at a work-in-progress stage: classifications. Concepts. Even labels and, generally, names.