Tuesday 24 March 2015

A clash of sources: hypotheses on the digital unification of the ekklesiasterion

Fragment 1.62: Landscape with ceremony in honor of Osiris. Archaeological Museum of Naples
Italian scholar Olga Elia already highlighted in 1941 that the visual documentation of the ekklesiasterion and the frescoes in the museum collection are not completely consistent. This may be due to the habit of drawing the documentation after the frescoes had already been removed (which makes the documentation of the time certainly no less precious but… well… a tiny bit less reliable). So Elia and, more recently, Valeria Sanpaolo (1992), current director of the Archaeological Museum of Naples, suggest that a better source, if we want to attempt a unification, is, paradoxically, the written accounts compiled during the excavation.

This is the reason why both scholars agree that fragment 1.62 [Landscape with ceremony in honor of Osiris] should be placed on the left side of the south wall (as the Pompeianarum antiquitatum historia records) and not on the right side of the west wall, as the engravings show. They support this assumption mainly with the presence of a framing column on the right edge of the fragment. Well, if we assume that the pattern is correct, a dividing fake column on the right of the scene wouldn't be much use if the image were already on the right side, would it? So, I definitely agree with Elia and Sanpaolo.

Fragment 1.67: Landscape with sacred door and velum. Archaeological Museum of Naples
Moreover, if fragment 1.62 were on the right side of the west wall, it would be asymmetrical with fragment 1.66 [Sacred fence with temples, statues, and square with trees, seen through an architectural scene], which, all sources agree, was located on the left side of the west wall. And Romans tended to be pretty fond of symmetry.

In addition, Elia suggested that the right side of the west wall was actually occupied by fragment 1.67 [Landscape with sacred door and velum]. The Landscape with sacred door and velum does not even appear in the engravings, but the piece had been explicitly recorded as “from the Temple of Isis” when it was catalogued. Homogeneity of style, colour and dimensions supports this theory, and fragment 1.67 is officially exhibited as part of the ekklesiasterion in the museum collection.

This, by the way, means that what Chiantarelli drew on the right side of the west wall might be pretty much completely made up (yeah, welcome to Pompeii…).

Documentation of the west wall by Chiantarelli

Superimposition of pictures of the extant fragments of frescoes on Chiantarelli's documentation, according to Elia's (1941) and Sanpaolo's (1992) readings.

We are not done yet! Elia also claims that the south and north walls could have been accidentally inverted in the visual documentation. She bases her theory on the position of the mythological scenes. According to modern Western conventions, stories are read from left to right. This would make the frescoes representing the myth of Io awkwardly positioned, as the story ends on the left (south wall) and begins on the right (north wall). However, Sanpaolo (1992) disagrees, and remarks that there is no evidence in Pompeian frescoes to support such a constraining convention. In fact, scenes depicted on Pompeian walls (especially in a series composed of separate scenes) appear to follow a symbolic rather than a realistic or linear logic.

These are the pieces of the puzzle: three walls, a visual record, a written one, the work of two scholars, many fresco fragments, different theories and some disagreement. How to express that in linked data? And how to do it in the Generic Viewer?
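Just to give a flavour of the direction (and emphatically not the actual model, nor my real ontology): a minimal sketch in Python with rdflib, using a made-up placeholder vocabulary. The idea is to treat each proposed placement of a fragment on a wall as a resource of its own, so that conflicting sources and scholarly readings can live side by side in the same graph.

```python
# A minimal, hypothetical sketch (toy namespace, not my actual ontology):
# each placement of a fragment on a wall is a resource in its own right,
# so that conflicting sources and readings can coexist in one graph.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/iseum/")  # placeholder namespace
g = Graph()
g.bind("ex", EX)

# The things involved: one fragment and two walls
g.add((EX.fragment_1_62, RDF.type, EX.FrescoFragment))
g.add((EX.fragment_1_62, RDFS.label,
       Literal("Landscape with ceremony in honor of Osiris (1.62)")))
g.add((EX.south_wall, RDF.type, EX.Wall))
g.add((EX.west_wall, RDF.type, EX.Wall))

# Placement hypothesis A: south wall, left side (PAH, Elia 1941, Sanpaolo 1992)
g.add((EX.placement_A, RDF.type, EX.PlacementHypothesis))
g.add((EX.placement_A, EX.places, EX.fragment_1_62))
g.add((EX.placement_A, EX.onWall, EX.south_wall))
g.add((EX.placement_A, EX.side, Literal("left")))
g.add((EX.placement_A, EX.supportedBy, EX.pompeianarum_antiquitatum_historia))
g.add((EX.placement_A, EX.supportedBy, EX.elia_1941))
g.add((EX.placement_A, EX.supportedBy, EX.sanpaolo_1992))

# Placement hypothesis B: west wall, right side (Chiantarelli's engraving)
g.add((EX.placement_B, RDF.type, EX.PlacementHypothesis))
g.add((EX.placement_B, EX.places, EX.fragment_1_62))
g.add((EX.placement_B, EX.onWall, EX.west_wall))
g.add((EX.placement_B, EX.side, Literal("right")))
g.add((EX.placement_B, EX.supportedBy, EX.chiantarelli_west_wall_engraving))

print(g.serialize(format="turtle"))
```

Nothing here decides which hypothesis is right; the point is simply that the disagreement itself becomes queryable data.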


[One more part to come...]

Sunday 22 March 2015

Where are you from?: fragments of frescoes and their spatial context

Archaeological Museum of Naples: fragments from the porticus of the Iseum. The display bears no relationship to their original position in the building.
My idea to experiment with the Generic Viewer comes from two main things that caught my attention. The first is that, according to its presentation, the Generic Viewer was meant to support projects like IBR, in which inscriptions are considered not just as text but as artefacts, and, more specifically, artefacts meant to be experienced in a particular space, interacting with other features and with human behaviour in that space.
The second is the potential use of the Generic Viewer not only as a digital model of the real space, but also as a virtual environment where it is possible to test different hypotheses about the original position of some decorative features, based on geometry, scholarly expertise, and visibility analysis.


Both things, for different reasons that I’m going to explain, reminded me of some of the issues I had encountered studying the artefacts found at the Iseum, and made me wonder if the Generic Viewer was really generic enough that its functions, although developed for epigraphists, could be extended to the study of ancient Pompeian frescoes. 
Moreover, the semantic annotations allowed by the Generic Viewer give me the opportunity to model the relationships between the artefacts, the architectural environment and their documentation, all in RDF linked data (using, among others, a few elements of my ontology).

Of course, it would be too ambitious to show the results for the entire Iseum, so I had to pick a specific area. The most interesting one, in this respect, seems to be the ekklesiasterion (which is why it's the space we have prepared and textured in the Generic Viewer). Although there are other spaces of the Iseum that are rich in surviving artefacts and related documentation (e.g. the porticus), the ekklesiasterion is especially complex. Actually, sometimes 'complex' gets very close to 'pretty bizarre', even by Pompeian standards, but let me introduce this story of artists, archaeologists, kings, engineers and scholars.


The Palace of Portici in 1745
From Wikipedia
The frescoes on the walls of the ekklesiasterion, as in many other Pompeian buildings, were almost intact when uncovered. Since the Iseum was one of the first buildings dug up (1764), it was still very common, at the time, to remove the most eye-catching bits of the frescoes (usually the figurative ones) and move them to the Palace of Portici. Everything else was just left in place, when not deliberately destroyed to enhance the value of the king's collection, or because it was believed not worth keeping. Another issue was that the pieces of fresco stripped off were transported via narrow tunnels dug into the excavation site, so there was no practical means, at the time, to transport anything too big.

Considering what happened (and what is still happening) to the frescoes that were left in place, we should be happy that some of them were removed and can now be seen and studied in the Museum of Naples (or elsewhere). However, we now have only fragments, individually framed as independent pieces of art and exhibited outside their spatial context (the building), regardless of their relationship with the other parts of the decorative pattern.
Something that sounds quite close to what the IBR project complained about when discussing inscriptions.

In a 3D model (or, to be precise, in a point cloud generated from a 3D model and imported into the Generic Viewer) it is possible to attempt one or more virtual unifications, placing the fragments (visually or just on an informative level) where they might once have been.
But, first of all: how can we place the fragments in their original position reliably? They are no longer in situ, but they were documented at the time.


Documentation of the north wall of the ekklesiasterion by Giovanni Morghen
Documentation of the south wall of the ekklesiasterion by Giuseppe Chiantarelli
Documentation of the west wall of the ekklesiasterion by Giuseppe Chiantarelli
After being harshly criticised by the classicists of the time for destroying one of the most valuable sources of knowledge about the past, the king of Naples decided to hire someone from the academy of fine arts to document the state of the frescoes before removing parts of them. The documentation of the ekklesiasterion was carried out by Giovanni Morghen (north wall) and Giuseppe Chiantarelli (west and south walls).
At the beginning of my work on Pompeii, some years ago, I thought that this documentation was a very good starting point for digital visualisation; that having professional documentation was very close to knowing how things were at the time. Which is true, to a certain extent. But, besides knowing, of course, that every human representation is intrinsically subjective, I didn't take into account that Pompeii always has an extra layer of complication (if not sheer surrealism…).

Morghen and Chiantarelli's documentation is extremely precious. It tells us about the decorative pattern around the fragments that survived. It shows us the beautiful fake architectural features that Escher would have loved so much. It reveals fascinating illusory doors leading to even more secret rooms.
But…

[part two]              

Monday 2 March 2015

Isis rises from the clouds: converting 3D meshes into point clouds

Sneak peek of my model of the Iseum in 3DSMax, untextured
As the Generic Viewer only accepts point clouds as inputs, we had to find a way to convert my 3D model of the Iseum into a point cloud.
I know. The idea would make many modellers (including me) cringe. And I can already hear the puzzled voices of my colleagues asking “just WHY?”. All the painstaking work I did on labelling the different elements would be lost in the process. All the layers would probably be smashed together. And what would happen to the details?
I am not going to ignore this issue, just leave it aside for the moment. We took some time just to experiment. It may not lead anywhere. It may not be useful to my research. The process of converting a 3D mesh into a point cloud may involve such a loss of information that it is unreasonable to pursue it. But we just wanted to find out.

How do you do that?
Here you can see the difference between people coming from a humanities background and a technology one. I started out fascinated by the idea of digitally imaging a virtual space. Is that possible? Can you do photogrammetry of a space that is already digital? While I was pondering these slightly surreal thoughts, and getting lost in their philosophical implications, Alexandra had already found an option in Meshlab that transforms meshes into point clouds. As simple as that.

The Iseum as a point cloud in the GV. The software generated the map and calculated the areas. The pink dots are the viewpoints.
So we exported the Iseum in COLLADA (.dae) from 3DSMax (the only Max export format that can be opened in Meshlab), imported it into Meshlab and generated a point cloud.
We tried out different point densities, and then settled on 5 million points. And there it was, my temple as a cloud!
I was happy to see that the conversion had worked less problematically than I was expecting. But, at the same time, it can't be ignored that, even in a quite simple and regular untextured model like mine, everything looked noticeably simplified. The question of whether the benefits of the semantic annotations and the spatial calculations are worth the information loss is still open.
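For the curious, the conversion is conceptually simple: scatter points over the surface of the triangles, with more points on the bigger triangles. This is not the exact Meshlab filter Alexandra used, just a bare-bones numpy sketch of the idea, assuming the mesh is given as an array of vertices and an array of triangle indices.

```python
import numpy as np

def mesh_to_point_cloud(vertices, faces, n_points=5_000_000):
    """Sample points uniformly over the surface of a triangle mesh.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns an (n_points, 3) array of sampled points.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))

    # Triangle areas, so that larger triangles receive proportionally more points
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    probs = areas / areas.sum()

    # Pick which triangle each point lands on, weighted by area
    tri = np.random.choice(len(faces), size=n_points, p=probs)

    # Random barycentric coordinates, folded back inside the triangle
    r1, r2 = np.random.rand(2, n_points)
    outside = r1 + r2 > 1.0
    r1[outside], r2[outside] = 1.0 - r1[outside], 1.0 - r2[outside]

    return (v0[tri]
            + r1[:, None] * (v1[tri] - v0[tri])
            + r2[:, None] * (v2[tri] - v0[tri]))
```

The 5 million figure above is the density we ended up using; the simplification I mentioned is exactly what you would expect from a sketch like this, since everything between the sampled points (edges, labels, layers) is simply gone.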

So, we managed to overcome the first major issue: we had a point cloud version of the virtual space. But it was still not suitable for the GV, because in a CAD model there are no fixed viewpoints.
We thought that the key was to keep treating the virtual space as a real one, and to do, virtually, all the things we would do in a material space to make the GV work. So, we placed some hypothetical viewpoints in the model and recorded their coordinates in the 3D space.
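At this stage a viewpoint is nothing more than a name and three coordinates in the model's own coordinate system. A tiny sketch, with made-up names and numbers rather than our real viewpoints:

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """A hypothetical viewpoint: a label plus x, y, z coordinates
    in the model's coordinate system (values below are invented)."""
    name: str
    x: float
    y: float
    z: float

viewpoints = [
    Viewpoint("ekklesiasterion_centre", 12.4, -3.1, 1.6),  # roughly eye height
    Viewpoint("porticus_corner", 4.0, 7.5, 1.6),
]

# Print them as simple comma-separated lines, just to keep a record
# of the coordinates we noted down for the GV.
for vp in viewpoints:
    print(f"{vp.name},{vp.x},{vp.y},{vp.z}")
```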

Screenshots of the virtual panorama in the textured ekklesiasterion
Again, I was glad for the success of the experiment. It gives a certain pleasure to push the boundaries of digital tools and software, and to see how far you can stretch them and make them do things they were never supposed to do. On the other hand, I was losing information again. I had just turned a virtual space that is freely explorable in 360 degrees into a space whose view is constrained by the position of the viewpoints.

We were missing one last element to simulate the imaging of a material space in the GV: photographic panoramas, taken from the positions of our artificial viewpoints, to texture the point cloud.
Again, we tried to think of the digital space as if it were a real one. So I placed a (virtual) camera at the same coordinates as one of our viewpoints, and took sequential pictures of the space, with at least a 40% overlap (so, in the end, my idea of photogrammetry of a virtual space wasn't that crazy…).
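The 40% overlap boils down to simple arithmetic: each new picture should be rotated by the camera's horizontal field of view minus the overlapping part. A small sketch, with an assumed field of view (the real value depends on the 3DSMax camera settings, so the numbers here are only illustrative):

```python
import math

def panorama_steps(h_fov_deg: float, overlap: float) -> tuple[float, int]:
    """Return the yaw step between consecutive shots and how many shots
    are needed to cover a full 360-degree panorama."""
    step = h_fov_deg * (1.0 - overlap)  # rotate by the non-overlapping part
    n_shots = math.ceil(360.0 / step)   # round up to close the circle
    return step, n_shots

# Example: an assumed 45-degree horizontal field of view with 40% overlap
step, n_shots = panorama_steps(45.0, 0.40)
print(f"rotate {step:.1f} degrees per shot, {n_shots} shots for a full turn")
# -> rotate 27.0 degrees per shot, 14 shots for a full turn
```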
To be fair, our viewpoints weren't very strategically placed. But, luckily, I thought of putting at least one in a sensible place: the ekklesiasterion. The chronological layer of my model I'm working on (the hypothetical reconstruction of the Iseum as it might have looked before AD 79) is meant to be untextured.
I could have left it untextured in our experiments with the GV as well, and just rendered the camera view of the elements in the not-too-bad 3DSMax solid colour palette (which is a feast of purple, lime green and turquoise, not too dissimilar from my summer wardrobe).
However, I thought that the best use I could make of the GV and its features was to express the complex relationships between the walls of the ekklesiasterion, the frescoes that were found there and are now exhibited in the Museum of Naples, and the documentation of those frescoes produced at the time of the excavations.

So, I textured (quite quickly, I'm afraid) the north, west and south walls of the ekklesiasterion with a digital copy of the graphic documentation of the walls commissioned at the time by the Bourbons. On the black and white engravings (a not-good-enough picture from a copy of Elia's book) I superimposed colour pictures of the fragments now exhibited in Naples (if you think that was a straightforward task, you have never had anything to do with Pompeian documentation…).
I didn’t have an equivalent texture for the east wall (the entrance one, with the arches). I could have left it untextured but, just to simulate a minimum of homogeneity, I applied a quick black and white masonry texture to it. 

The ekklesiasterion in the GV, (almost) ready to be annotated
I froze the camera at the exact coordinates we had given the GV for that viewpoint, and moved the target of the camera along the walls, capturing the panorama as if I were doing photogrammetry. Move the target. Render the camera view. Print the screen. Repeat.
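The "move the target, render, repeat" loop can be described as nothing more than a ring of target points around the fixed camera, one per shot. A sketch with invented coordinates and an arbitrary radius (not my actual 3DSMax setup, where the moves were done by hand):

```python
import math

def target_positions(cam_xyz, yaw_step_deg, radius=5.0):
    """The camera stays put; the target walks around it on a circle.
    Each returned point corresponds to one render-and-screenshot step.
    (radius and camera coordinates are illustrative, not the real setup)."""
    cx, cy, cz = cam_xyz
    points = []
    angle = 0.0
    while angle < 360.0:
        a = math.radians(angle)
        points.append((cx + radius * math.cos(a), cy + radius * math.sin(a), cz))
        angle += yaw_step_deg
    return points

# Example: targets every 27 degrees around a camera at made-up coordinates
for x, y, z in target_positions((12.4, -3.1, 1.6), 27.0):
    print(f"target at ({x:.2f}, {y:.2f}, {z:.2f})  ->  render, screenshot")
```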
I thought of removing the roof from the model of the ekklesiasterion, to better handle the movements along the walls, and I tweaked the lights a little to get better illumination.
Actually, I was a bit worried about the light. My model is not finished yet, so I haven't spent much time working on well-balanced and realistic lighting. I just put some standard 3DSMax omni lights where I need them when I'm modelling. I wondered if the artificiality and inconsistency of the lights in the renderings we used to build the panorama might bother the system. But I was worrying too much, and the system was definitely smarter than I thought.

So, while Alexandra is still working on the last details, it seems that my Iseum is ready to be annotated in the Generic Viewer.