

The Pope is dead, hurray?

The survey Perspektive Deutschland, which I otherwise can only recommend, because the questions really go into depth and the reports generated from them are quite remarkable in content, somehow surprised me in its currently running round:

Question: How have the events of the last twelve months, e.g. the death of Pope John Paul II, influenced your opinion of the Catholic Church?

Possible answers: strongly improved - improved - not changed - worsened - strongly worsened - don't know

Apart from that, though, the people behind the survey do put quite some thought into their questions.

Nodix attacked again

After the Nodix pages already fell victim to an attack a few weeks ago, this time we got hit nastily. Last time there was only a defacement; this time we were deleted completely (only a perl script was uploaded, with Spanish comments which, according to the translation, were quite nasty. But impersonal).

Well, this means that until I find some time, the Nodix pages will only be partially functional. The DSA4 Werkzeug pages are completely gone, as is Something*Positive. Nutkidz shows a blank page, nakit-arts is halfway back on its feet, Semantic Nodix is mostly running again, and Nodix itself is back, though barely.

Grrr. Silly me. 1und1 had even warned me and pointed out possible sources of error; stupidly, I ignored that. And this is what you get.

Annotating axioms in OWL - Reloaded

Yesterday I sent a lengthy mail to the OWLED mailing list about how to annotate axioms. Peter Patel-Schneider himself, first author of the OWL Semantics specification, told me in nice words that my solution sucked heavily, by pointing out that the semantics of annotations in OWL are a tiny bit different than I thought. Actually, they are not at all as I thought. So, in the evening hours, instead of packing my stuff for a trip, I tried to solve the problem anew. Let's see where the problem will be this time.

Peter, you were right, I was wrong. I took a thorough look at the Semantics, and I had to learn that my understanding of annotations was totally screwed. I thought they would be like comments in C++ or Prolog, but instead they are rather like a second ABox over (almost) the whole universe. This surprised me a lot.

But still, I am not that good at giving up, and I think my solution pretty much works syntactically. Now we need only a proper Semantics to get a few things right.

What would be the problem? Let's make an example. I need some kind of syntax to give axioms names; I will just use Name ":" Axiom. This is no proposal for the Abstract Syntax extension, this is just for now.

Axiom1: SubClassOf(Human Mortal)
Axiom2: Individual(Socrates type(Human))

Do they entail the following?

Axiom3: Individual(Socrates type(Mortal))

Well, sadly, they don't. Because Axiom3 has a name, Axiom3, that is not entailed by Axiom1 and Axiom2. Their content would be entailed, but the name of the axiom would not.

I guess, this is the problem Peter saw. So, can we solve it?

Well, yes, we can. But it's a bit tricky.

First, we need the notion of Combined Inverse Functional Properties, CIFP. A CIFP has several dimensions. A CIFP with dimension 1 is a normal Inverse Functional Property. A CIFP with dimension 2 over the properties R, S can be represented with the following rule: a R c, a S d, b R c, b S d -> a = b. This means that in a two-dimensional space I can identify an individual with the help of two roles. More on this here: http://lists.w3.org/Archives/Public/semantic-web/2005Feb/0095.html
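The rule above can be made concrete with a little Python sketch (purely illustrative, not part of the proposal), representing facts as string triples. It does a single pass rather than iterating to a fixpoint, which is enough to show the idea:

```python
from itertools import product

def cifp_merge(facts, roles):
    """One-pass sketch of CIFP-based identification.

    facts: set of (subject, role, value) triples.
    roles: the CIFP's dimensions, e.g. ("R", "S") for dimension 2.
    Returns a dict mapping each individual to a representative; two
    individuals sharing a value for every role get the same one.
    (A full implementation would iterate to a fixpoint.)
    """
    values = {}
    for s, r, v in facts:
        if r in roles:
            values.setdefault(s, {role: set() for role in roles})[r].add(v)
    seen, canon = {}, {}
    for ind, by_role in values.items():
        keys = list(product(*(sorted(by_role[r]) for r in roles)))
        # reuse an existing representative if any key was seen before
        rep = next((seen[k] for k in keys if k in seen), ind)
        canon[ind] = rep
        for k in keys:
            seen.setdefault(k, rep)
    return canon

# The rule from the text: a R c, a S d, b R c, b S d  ->  a = b
facts = {("a", "R", "c"), ("a", "S", "d"), ("b", "R", "c"), ("b", "S", "d")}
canon = cifp_merge(facts, ("R", "S"))
```

Here `canon["a"]` and `canon["b"]` come out equal, which is exactly what the dimension-2 rule demands.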

Second, we extend the semantics of OWL. Every axiom entails reifying annotations. This means:

SubClassOf(Human Mortal)

entails

Individual(Statement1 type(rdf:statement)
annotation(rdf:subject Human)
annotation(rdf:property owl:SubClassOf)
annotation(rdf:object Mortal))

or, in N3:

Human owl:subClassOf Mortal.

entails

Statement1 rdf:type rdf:statement.
Statement1 rdf:subject Human.
Statement1 rdf:property owl:subClassOf.
Statement1 rdf:object Mortal.
rdf:subject rdf:type owl:AnnotationProperty.
rdf:property rdf:type owl:AnnotationProperty.
rdf:object rdf:type owl:AnnotationProperty.
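Producing these entailed reification triples is mechanical. A small Python sketch (illustration only; the graph is just a set of string triples):

```python
def reify(name, triple):
    """Return the annotation triples entailed by a single RDF triple,
    following the scheme above (rdf:subject / rdf:property / rdf:object)."""
    s, p, o = triple
    return {
        (name, "rdf:type", "rdf:statement"),
        (name, "rdf:subject", s),
        (name, "rdf:property", p),
        (name, "rdf:object", o),
    }

entailed = reify("Statement1", ("Human", "owl:subClassOf", "Mortal"))
```

This yields exactly the four triples about Statement1 shown in the N3 above.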

Third, we have to state that we have a 3D-CIFP for statements over rdf:subject, rdf:property and rdf:object*. This is to ensure that Statement1 always maps to the same element in the universe, even though an OWL API could give it a blank node, or a different URI every time (mind you, I am not suggesting to extend the OWL language with CIFPs, I just say that it is used here in order to state that all triples with the same subject, predicate and object actually are the same triple).

Fourth, the above statement also entails

Individual(Axiom1 type(owl11:axiom)
annotation(owl11:consistsOf Statement1))

or, in N3:

Axiom1 rdf:type owl11:axiom.
Axiom1 owl11:consistsOf Statement1.
owl11:consistsOf rdf:type owl:AnnotationProperty.

Fifth, owl11:consistsOf needs to be an n-dimensional CIFP with n being the number of triples the original axiom got translated to (in this case, happy us!, n=1).

This assures that an axiom is always the same, whatever its name is, as long as it expresses the same thing. Thus, in our example, Axiom3 would indeed be entailed by Axiom1 and Axiom2. So, even if two editors load an ontology and annotate an axiom, they could later interchange their work and find each other's annotations attached to the correct axiom.

This is only a rough sketch of the way, and yes, I see that the interpretation gets filled up with a lot of annotations, but I still think that this is quite easy to implement, actually. Both the OWL API by Bechhofer and Volz and the KAON2 API by Motik offer access to axioms on an ontology level, and also offer the possibility to check whether two axioms are the same (which is basically a shortcut for the whole semantic entailment and CIFP stuff proposed earlier), if I remember correctly. All they need is a further field containing the URI of the axiom.

As said, this looks far nastier than it actually is, and for most practical purposes it won't do much harm. Now we finally can annotate axioms, yeeeha!

Merrily awaiting Peter's acknowledgement that this is a brilliant solution :) Or else he'll tell me I did it all wrong again, so that I have to spend the weekend thinking about how to solve this problem once more.

Cheers, denny

 *What I mean with that is the following rule: a=b :- a rdf:subject s, a rdf:property p, a rdf:object o, b rdf:subject s, b rdf:property p, b rdf:object o
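The rule in this footnote can be sketched in Python as a function that identifies statement names agreeing on subject, property and object (illustration only; names are plain strings):

```python
def canonical_statements(triples):
    """Apply the footnote's rule: statements agreeing on rdf:subject,
    rdf:property and rdf:object are identified (a = b)."""
    parts = {}
    for name, prop, value in triples:
        if prop in ("rdf:subject", "rdf:property", "rdf:object"):
            parts.setdefault(name, {})[prop] = value
    seen, canon = {}, {}
    for name, p in sorted(parts.items()):
        key = (p.get("rdf:subject"), p.get("rdf:property"), p.get("rdf:object"))
        canon[name] = seen.setdefault(key, name)
    return canon

# Two differently named reifications of the same triple...
triples = {
    ("Statement1", "rdf:subject", "Human"),
    ("Statement1", "rdf:property", "owl:subClassOf"),
    ("Statement1", "rdf:object", "Mortal"),
    ("StatementX", "rdf:subject", "Human"),
    ("StatementX", "rdf:property", "owl:subClassOf"),
    ("StatementX", "rdf:object", "Mortal"),
}
canon = canonical_statements(triples)
# ...map to the same canonical statement.
```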

Annotating axioms in OWL

I sent this to the OWLED list, which is preparing an OWL 1.1 recommendation. The week before, Alan Rector had suggested adding the possibility to annotate axioms in OWL, which is currently not possible. There is many a good use for that, like provenance, trust, and so on. But the discussion wasn't too fruitful, so I suggested the following solution.

After it came up in discussion last week, I hoped an elegant solution for annotating axioms would arise. Sadly, no one had a brilliant idea, so I went ahead and tackled the problem in my mediocre way.

First, what do I want to achieve with my solution:

  1. Don't crack the Semantic Web stack. The solution has to be compatible with XML, RDF and OWL. I don't want to separate OWL from RDF, but to offer a solution that can be handled by both.
  2. We want to annotate not just entities, but also axioms. Thus an axiom needs to be able to be the subject of a statement. Thus an axiom needs to have a URI.
  3. The solution must be easy to implement, or either people will get my FOAF-file and see whom I care about and hurt them.

Did I miss something? I found two solutions for this problem.

A) Define the relationship between an ontology (which does have a URI) and the axioms stated inside it. Then we can talk about the ontologies, annotate those, add provenance information, etc. Problem: after importing axioms from one ontology into another, that information is lost. We would need a whole infrastructure for networked ontologies to achieve that, which is a major and worthy task. With this solution, you can annotate a single axiom by putting it alone into an ontology, and claim that when annotating the ontology you actually annotate the axiom as well. Not my favourite solution, because of several drawbacks which I won't dwell on unless asked.

B) The other solution is using reification (stop yelling and moaning right now!). I'm serious. And it's not that hard, really. First, the OWL specification offers a standard for how to translate the axioms into triples. Second, the RDF specification offers a standard way to reify a triple. With RDF reification we can give a triple a name. Then we can introduce a new resource type owl11:axiom, whose instances contain the triples that were translated from a certain DL axiom. This RDF resource of type owl11:axiom is then the name/URI of the original DL axiom.

RDF triples that have a subject of type rdf:statement or owl11:axiom don't have semantics with regard to OWL DL's model-theoretic semantics; they are just syntactic parts of the ontology that allow naming axioms so that they can be annotated.

For example, we say that all Humans are Mortal. In Abstract Syntax this is

SubClassOf(Human Mortal)

In RDF triples (N3) this is:

:Human rdfs:subClassOf :Mortal.

Now, reifying this, we add the triples:

:statement1 rdf:type rdf:statement.
:statement1 rdf:hasSubject :Human.
:statement1 rdf:hasPredicate owl:subClassOf.
:statement1 rdf:hasObject :Mortal.
:axiom1 owl11:consistsOf :statement1.

Now we can make annotations:

:axiom1 :bestBefore "2011-12-24"^^xsd:date.
:axiom1 :utteredBy :Aristotle.

Naturally, :bestBefore and :utteredBy have to be annotation properties. When an axiom is broken up into more than one triple, the reason for having an extra owl11:axiom instead of simply using rdf:statement should become clear.
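To see how a tool could use this, here is a small, purely illustrative Python sketch that looks up the annotations of an axiom by following the reification triples from the example (the graph is just a set of string triples, using the vocabulary above):

```python
def annotations_for(graph, s, p, o):
    """Given a graph (set of triples), find the annotations attached to
    the axiom that was translated to the triple (s, p, o), by following
    the reification triples and owl11:consistsOf, as in the example."""
    stmts = {n for (n, prop, v) in graph if prop == "rdf:hasSubject" and v == s}
    stmts &= {n for (n, prop, v) in graph if prop == "rdf:hasPredicate" and v == p}
    stmts &= {n for (n, prop, v) in graph if prop == "rdf:hasObject" and v == o}
    axioms = {a for (a, prop, st) in graph if prop == "owl11:consistsOf" and st in stmts}
    return {(prop, v) for (a, prop, v) in graph
            if a in axioms and prop != "owl11:consistsOf"}

graph = {
    (":statement1", "rdf:type", "rdf:statement"),
    (":statement1", "rdf:hasSubject", ":Human"),
    (":statement1", "rdf:hasPredicate", "owl:subClassOf"),
    (":statement1", "rdf:hasObject", ":Mortal"),
    (":axiom1", "owl11:consistsOf", ":statement1"),
    (":axiom1", ":bestBefore", "2011-12-24"),
    (":axiom1", ":utteredBy", ":Aristotle"),
}
found = annotations_for(graph, ":Human", "owl:subClassOf", ":Mortal")
```

Asking for the annotations of the Human-subClassOf-Mortal axiom returns both the :bestBefore and the :utteredBy annotation.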

Does this solution fulfill the given conditions?

  1. The Semantic Web stack is safe and whole. RDF semantics is adhered to, OWL semantics is fine, and all syntax regulations imposed by XML and RDF/XML are respected. Everything is fine.
  2. Yep, we can annotate single axioms. Axioms have URIs. We can annotate our metadata! Yeah!
  3. Is it easy to implement? I think it is: for reading OWL ontologies, a tool may just ignore all those extra triples (it can easily filter them out), and still remain faithful to the standard semantics. Tools that allow naming axioms (or annotating them) and want to deal with those have simply to check for the correct reification (RDF toolkits should provide this anyway), and get the axiom's URI.
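As a sketch of point 3: a tool that does not care about axiom names could filter the extra triples out in one pass (illustration only, with the vocabulary from the example above):

```python
NAMING_PROPS = {"rdf:hasSubject", "rdf:hasPredicate", "rdf:hasObject",
                "owl11:consistsOf"}

def strip_axiom_names(graph):
    """Remove all triples that only serve to name or annotate axioms,
    leaving the ontology's regular triples untouched."""
    named = {s for (s, p, o) in graph
             if (p == "rdf:type" and o in ("rdf:statement", "owl11:axiom"))
             or p == "owl11:consistsOf"}
    return {(s, p, o) for (s, p, o) in graph
            if s not in named and p not in NAMING_PROPS}

graph = {
    (":Human", "rdfs:subClassOf", ":Mortal"),
    (":statement1", "rdf:type", "rdf:statement"),
    (":statement1", "rdf:hasSubject", ":Human"),
    (":statement1", "rdf:hasPredicate", "owl:subClassOf"),
    (":statement1", "rdf:hasObject", ":Mortal"),
    (":axiom1", "owl11:consistsOf", ":statement1"),
    (":axiom1", ":utteredBy", ":Aristotle"),
}
plain = strip_axiom_names(graph)
```

Only the original subclass triple survives; the stripped graph has exactly the standard semantics.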

Problems that I see: I identified two. First, what happens if those triples get separated from the actual axiom triples? What if they get ripped apart and mushed into another ontology? Well, that problem is somewhat open for OWL DL and Lite anyway, since not all axioms map to single triples. The answer probably is that reification would fail in that case. A strict reading could be that the ontology then leaves OWL DL and moves to OWL Full, but I wouldn't require that.

Second problem, and this is by far more serious: people can't stand reification in RDF. They simply hate it, and for that reason alone they will ignore this solution. I can only answer that reification in practice is probably much easier than expected when done properly, due to some short-hand notations available in the RDF/XML serialization and other syntaxes. Nothing keeps us from changing the Abstract Syntax and the OWL XML Presentation Syntax appropriately in order to name axioms far more easily than in the proposed RDF/XML syntax. Serializations in RDF/XML syntax may get yucky, and the RDF graph of an OWL ontology could become cluttered, but then, so what? RDF/XML isn't read by anyone anyway, is it? And one can remove all those extra triples (and with them the annotations) automatically if desired, without changing the semantics of the ontology.

So, any comments on why this is bad? (Actually, I honestly think this is a practicable solution, though not elegant. I already see the 2007 ISWC best paper award, "On the Properties of Higher Order Logics in OWL"...)

I hope you won't kill me too hard for this solution :) And I need to change my FOAF-file now, in order to protect my friends...

Job at the AIFB

Are you interested in the Semantic Web? (Well, probably yes, or else you wouldn't be reading this.) Do you want to work at the AIFB, the so-called Semantic Web Machine? (It was Sean Bechhofer who gave us this name, at the ISWC 2005.) Maybe this is your chance...

Well, if you ask me, this is the best place to work. The offices are nice, the colleagues are great, our impact is remarkable - oh well, it's loads of fun to work here, really.

We are looking for a person to work especially on KAON2, which is a main building block of much AIFB software, as for example my own OWL tools, and of some European projects. Mind you, this is no easy job. But if you have finished your Diploma, Master's or PhD, know a lot about efficient reasoning, and have quite some programming skills, peek at the official job offer (also available in German).

Do you dare?

Semantic Web Gender Issue

Well, at least they went quite a way. With Google Base one can create new types of entities, entities themselves, and search for them. I am not too sure about the user interface yet, but it's surely one of the best actually running on big amounts of data. Nice query refinement, really.

But heck, there's one thing that scares me off. I was looking today for all the people interested in the Semantic Web, and there are already some in. And you can filter them by gender. I was just mildly surprised by the choices I was offered when I wanted to filter them by gender...

An image is still missing here.

Oh come on, Google. I know there are not that many girls in computer science, but really, it's not that bad!

What is a good ontology?

You know? Go ahead, tell me!

I really want to know what you think a good ontology is. And I will make it the topic of my PhD: Ontology Evaluation. But I want you to tell me. And I am not the only one who wants to know. That's why Mari Carmen, Aldo, York and I have submitted a proposal for a workshop on Ontology Evaluation, and happily it got accepted. Now we can officially ask the whole world to write a paper on that issue and send it to us.

The EON2006 Workshop on Evaluation of Ontologies for the Web - 4th International EON Workshop (that's the official title) is co-located with the prestigious WWW2006 conference in Edinburgh, UK. We were also very happy that so many renowned experts accepted our invitation to the program committee, thus ensuring a high quality of reviews for the submissions. The deadline is almost two months away: January 10th, 2006. So you have plenty of time to write that mind-busting, fantastic paper on Ontology Evaluation by then! Get all the details on the workshop website http://km.aifb.uni-karlsruhe.de/ws/eon2006.

I really hope to see some of you in Edinburgh next May, and I am looking forward to lively discussions about what makes an ontology a good ontology (by the way, if you plan to submit something, I would love to get a short notification - that would really be great. But it is by no means required. It's just so that we can plan a bit better).

Rainbow

I am currently in Galway, and it rains here constantly. Really. Mostly only for a short while, but then again and again.

In return, I was rewarded today with an almost unbelievable rainbow: it really stretched across the whole horizon, a complete arc! I have never seen anything like it. The picture understates it; in reality it shone much brighter.

Especially exciting was that it seemed to rise out of the water barely two or three hundred meters away. Not somewhere far off - it was right there. You can even see the houses through the rainbow in the picture; the rainbow was in front of the houses. I have never seen anything like that before. Incredibly impressive.

I hope to get a few better pictures soon.

An image is still missing here.

ISWC impressions

The ISWC 2005 is over, but I'm still in Galway, hanging around at the OWL Experiences and Directions workshop. The ISWC was a great conference, really! I met so many people from the summer school again, heard a surprising number of interesting talks (there are some conferences where one boring talk follows the other; that's definitely different here) and got some great feedback on some work we're doing here in Karlsruhe.

Boris Motik won the Best Paper Award of the ISWC, for his work on the properties of meta-modeling. Great paper and great work! Congratulations to him, and also to Peter Mika, though I have still to read his paper to form my own opinion.

I will follow up on some of the topics from the ISWC and the OWLED workshop, but here's my quick, first wrap-up: great conference! Only the weather was sadly as bad as expected. Who decided on Ireland in November?

KAON2 and Protégé

KAON2 is the Karlsruhe Ontology infrastructure. It is an industrial-strength reasoner for OWL ontologies, pretty fast and comparable to reasoners like FaCT and Racer, which have gained from years of development. Since a few days ago KAON2 also implements the DIG interface! Yeah, now you can use it with your tools! Go and grab KAON2 and get a feeling for how well it fulfills your needs.

Here's a step-by-step description of how you can use KAON2 with Protégé (other DIG-based tools should work pretty much the same). Get the KAON2 package, unpack it and then go to the folder with the kaon2.jar file in it. This is the Java library that does all the magic.

Be sure to have Java 5 installed and in your path. No, Java 1.4 won't do it, KAON2 builds heavily on some of the very nice Java 5 features.

You can start KAON2 now with the following command:

java -cp kaon2.jar org.semanticweb.kaon2.server.ServerMain -registry -rmi -ontologies server_root -dig -digport 8088

Quite lengthy, I know. You will probably want to stuff this into a shell-script or batch-file so you can start your KAON2 reasoner with a simple doubleclick.

The last argument - 8088 in our example - is the port of the DIG service. Fire up your Protégé with the OWL plugin, and check the preferences window in the OWL menu. The reasoner URL will tell you where Protégé looks for a reasoner - with the above DIG port it should be http://localhost:8088. If you chose another port, be sure to enter the correct address here.

Now you can use the consistency checks and automatic classification and all this as provided by Protege (or any other Ontology Engineering tool featuring the DIG interface). Protégé tells you also the time your reasoner took for its tasks - compare it with Racer and Fact, if you like. I'd be interested in your findings!

But don't forget - this is the very first release of the DIG interface. If you find any bugs, say so! They must be squeezed! And don't forget: KAON2 is quite different from your usual tableaux reasoner, and so some queries are simply not possible. But the restrictions shouldn't be too severe. If you want more information, go to the KAON2 web site and check the references.

Another flower: the tulip

Surprising: n-tv just ran a report on the history of the tulip. Tulips! That can't possibly get interesting.

Oh yes it can! I won't repeat everything from the programme, but many hundreds of years ago tulips led to a first crash of the Dutch stock exchange. The commonly known single-coloured blossoms with their strong colours were popular too, but the really expensive ones were the multi-coloured blossoms like the Semper Augusta depicted here. 10,000 guilders were paid for it. 150 guilders was the normal yearly income of a family at that time - converted, we are talking, for a single flower mind you!, about millions of euros!

Another flower was traded for a villa in the Dutch town of Haarlem.

The tulip mania damaged the Netherlands badly: prices were far too high, and some of the tulips traded had not even been planted yet. Within a few days the market suddenly collapsed, and many Dutch went bankrupt.

The Semper Augusta seen here sadly no longer exists today. A pity. A truly beautiful flower...

More on this can be read, for example, in the Wikipedia article on the tulip mania.

An image is still missing here.

Flower of Carnage

In the original

Translation: flower of death and destruction

Shindeita
Asa ni
Tomorai no
Yuki ga furu

Mournful snow falls at dawn,
stray dogs howl
and geta footsteps pierce the air

Hagure inu no
Touboe
Geta no
Otokishimu

The weight of the Milky Way presses on my shoulders
but an umbrella holds the darkness
that is all there is.

Iin na naomosa
Mitsumete aruku
Yami wo dakishimeru
Janomeno kasa hitotsu

I am a woman
who walks the border between life and death,
who emptied her tears many moons ago.

Inochi no michi wo
Yuku onna
Namida wa tooni
Sutemashita

All the pity, tears, dreams,
the snowy nights and tomorrow hold no meaning

Furimuita
Kawa ni
Toozakaru
Tabinohima

I immersed my body in the river of wrath
and threw away my womanhood many moons ago.

Itteta tsuru wa
Ugokasu
Naita
Ame to kaze

On the orders of heaven,
they are our soldiers,
loyal, invincible and brave

Kieta mizu mo ni
Hotsure ga miutsushi
Namida sae misenai
Janomeno kasa hitotsu

Now their time has come
to leave their parents' land,
their hearts pumped up by encouraging voices

Urami no michi wo
Yuku onna
Kokoro wa tooni
Sutemashita

Solemnly they are determined
not to return alive
without victory.

Giri mo nasake mo
Namida mo yume no
Kinou mo ashita mo
Henno nai kotoba

Here at home
the citizens wait for you.
In foreign lands,
the brave troops.

Urami no kawa ni
Mi wo yudanete
Honma wa tooni
Sutemashita

Rather than kindness from someone
I do not care about
I would prefer selfishness
from you

My problem with Asian lyrics is that I often don't understand them even when they are translated. From the Kill Bill soundtrack.

Hommingberger Gepardenforelle

I can't really imagine that it seriously helps, but one can always give it a try: helping a few colleagues get as far up as possible in Heise's search engine contest for the invented term Hommingberger Gepardenforelle. Anyone else linking along? It would be cool if they end up nicely near the top. More explanation is available here, on the Uni Kassel page about it.

Backlinks from now on

So, now the Nodix blog also has this cool, indispensable feature: backlinks! You link to a Nodix post, and the backlinks notice it.

Well, at least in theory.

With thanks to Blogger for implementing it.

KAON2 OWL Tools V0.23

A few days ago I packaged the new release of the KAON2 OWL tools. And they moved from their old URL (which was pretty obscure: http://www.aifb.uni-karlsruhe.de/WBS/dvr/owltools ) to their new home on OntoWare: owltools.ontoware.org. Much nicer.

The OWL tools are a growing number of little tools that help people working with OWL. Besides the already existing tools like count, filter and merge, which were partly enhanced, some new ones entered the scene: populate, which simply populates an ontology randomly with instances (which may be used for testing later on), and screech, which creates a split program out of an ontology (you can find more information on OWL Screech's own website).

A very special little thing is the first beta implementation of shell. This will become a nice OWL shell that will allow exploring and editing OWL files. No, this is not meant as a competitor to full-fledged integrated ontology development environments like OntoStudio, Protégé or SWOOP; it's rather an alternative approach. And it's just getting started. I hope to have autocompletion implemented pretty soon, and some more commands. If anyone wants to join, drop me a mail.

Asterix in danger

These days the 33rd volume of the Asterix series appeared: Gallien in Gefahr ("Gaul in danger"). No "Asterix saves Gaul", or "Asterix puts Gaul in danger", or the like, but simply the lurid title "Gaul in danger". In French the volume is called Le ciel lui tombe sur la tête, "The sky falls on their heads", but with Asterix the translations have always been very free - and mostly outstandingly good because of it! ("I am, my dear friend, most happy indeed to see you here!" - "That is an alexandrine." - Asterix and Cleopatra). Before reading on, it may make sense to read the volume first, otherwise quite a bit will be given away. I won't tell you who dies, but still.

But let's get to the content. Surprisingly little laughter. I should dig out the old volumes to see whether one laughed as little there too, or whether one only remembers the best parts (my favourite is Asterix the Legionary, and I laughed plenty there, I'm sure of it). Asterix was always known for playing with its time. The year is 50 BC. All of Gaul is occupied by the Romans... That set a certain frame, even if it had already been stretched considerably in the past: The Great Crossing went to America, Asterix and the Magic Carpet to India. But this time Asterix doesn't travel at all; instead the foreign world comes to the little Gaulish village. And how.

Aliens visit the little Gaulish village. Their fight against the Romans has spread rumours about their lethal secret weapon throughout the whole universe. The good aliens come first, to warn Asterix, then the evil ones follow, and it comes to a battle between the extraterrestrials. Enormously large panels, often half a page, one panel even covering a whole page - unusual for Asterix.

One may find the volume strange. But it's not that bad. It's also not hard to push through the superficial story and see what lies behind it: the good alien is obviously Mickey Mouse without ears; indeed, even the name of the planet they come from is an anagram of Walt Disney. Even the details - Tuun's buttons, the gloves, the facial expressions - match. And he eats hot dogs - a clear allusion to America. His companion, in the style of Schwarzenegger, a mixture of Terminator and Superman, on the other hand, is a clone, a superhero, interchangeable with any other. These are the icons of American comics. I only wonder what the name Tuun, and that of his wise man Hubs, is supposed to mean?

The evil aliens, on the other hand, come from the planet Nagma, likewise a transparent anagram alluding to Japanese comics. They are also drawn insect-like, and move and fight as one is used to from many a manga. Their German - or rather Gallo-Roman - is poor, and the Americans, excuse me, the good aliens, claim that the Nagmas only copy all their recipes for success. The first thing the Nagma envoy does when he faces Asterix is to offer him a peace treaty, but Asterix greets the extraterrestrial with a thrashing. And so it comes to battle.

American superhero comics, Walt Disney's mass production and manga flood the European market, crowd out the Franco-Belgian artists, and want to appropriate the secret of their magic potion, the secret of their success. One could have taken that to be the message of this volume. Uderzo's postscript to his and Goscinny's Asterix series, probably the last volume by the now 78-year-old Uderzo. His commentary on the history of European and global comics.

But after two thirds of the volume one wonders: what is he actually trying to say? Peace is made and blows are exchanged back and forth, the Romans and the pirates get a cameo that feels as if the items on the list "things that must appear in an Asterix comic" still had to be ticked off, and in the end everyone's memory is erased, so that the story remains entirely without consequence. Uderzo could have said a lot, and people would have listened. Instead he hints at a commentary, only to then somehow wrap up the story before page 48 is reached. A pity. And why does the Roman officer look like Signor Berlusconi this time?

Certainly not the worst Asterix. Probably the most unusual.

Why some are disenchanted

In a comment to my last blog entry, Christopher St John wrote:

"I suffered through the 80's Knowledge Representation fad, both academically in the AI program at Edinburgh and as a practitioner at the only company ever to produce a commercial system written in Prolog (that wasn't a Prolog development system.) So I'm familiar with the problems that the Semantic Web effort is attempting to address. Having slogged through real-life efforts to encode substantial amounts of knowledge, I find some of the misty-eyed musings that surround the Semantic Web effort depressing. That "most information on the Web is designed for human consumption" is seen as an obstacle surmountable via tools like RDF is especially sad. On the other hand, I'm always happy to make use of the cool tools that these sorts of things seem to throw off. There's probably a certain Proverbs 26:11 aspect to it as well."

Thanks for your insightful comment; being new to the field I certainly appreciate a report based on real-life experience - and I have to admit that I have probably been guilty of being misty-eyed about the Semantic Web myself more than once (and probably will be in the future as well).

'"Most information on the Web is designed for human consumption" is seen as an obstacle.' Yes, you are right, this is probably the worst-phrased sentence in the Semantic Web vision. Although I think it's somehow true: if you want the computer to help you deal with today's information overflow, it must understand as much of the information as possible. The sentence should at least be rephrased as "most information on the Web is designed only for human consumption". I think it would be pretty easy to create both human-readable and machine-friendly information with only a little overhead. Providing such systems should be fairly easy. But this is only about the phrasing of the sentence - I hope that every Semwebber agrees that the Semantic Web's ultimate goal is to help humans, not machines. But we must help the machines in order to enable them to help us.

The much more important point that Christopher addresses is his own disenchantment with the Knowledge Representation research of the 80s, and probably many people's disenchantment with the AI research of the generation before. So the Semantic Web may just seem like the third generation of futile technologies trying to solve AI-complete problems.

There were some pretty impressive results from AI and KR, and the Semantic Web people build on them. Some more, some less - some even too much, forgetting the most important component of the Semantic Web along the way: the Web. Yes, you can write whole 15-page papers and submit them to Semantic Web conferences and journals without once mentioning anything web-specific. That's bad, and it means that Christopher, like some researchers, does not see the main difference between the work of two decades ago and today's line of investigation: the Web changes it all. I don't know if AI and KR had to fail - it probably must have failed, because there were so many intelligent people doing it, so there's no other explanation than that it had to fail due to the premises of its time. I have no idea if the Semantic Web is bound to fail as well today. I have no idea if we will be able to reach as much as AI and KR did in their time, or less, or maybe even more. I am a researcher. I have no idea if the things I do will work.

But I strongly believe it will and I will invest my time and part of my life towards this goal. And so do dozens of dozens other people. Let's hope that some nice thing will be created in the course of our work. Like RDF.

RDF is not just for dreamers

Sometimes I stumble upon posts that leave me wonder, what actually do people think about the whole Semantic Web idea, and about standards like RDF, OWL and the like. Do you think academia people went out and purposefully made them complicated? That they don't want them to get used?

Christopher St. John wrote down some nice experiences with using RDF for logging. And he was pretty astonished that "RDF can actually be a pretty nifty tool if you don't let it go to your head. And it worked great."

And then: "Using RDF doesn't actually add anything I couldn't have done before, but it does mean I get to take advantage of tons of existing tools and expertise." Well, that's pretty much the point of standards. And the point of the whole Semantic Web idea. There won't be anything you will be able to do later, that you're not able to do today! You know, assembler was pretty turing-complete already. But having your data in standard formats helps you. "You can buy a book on RDF, but you're never going to buy a book on whatever internal debug format you come up with"
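Christopher's logging use case is easy to sketch: write each debug event as a couple of triples in N-Triples, and any off-the-shelf RDF toolkit can load, merge, and query the result. The urn:log vocabulary below is made up purely for this illustration:

```python
def log_event(lines, event_id, level, message):
    """Append one debug event as N-Triples lines. The urn:log
    vocabulary is invented for this sketch; any URIs would do."""
    subj = f"<urn:log:event/{event_id}>"
    lines.append(f'{subj} <urn:log:level> "{level}" .')
    lines.append(f'{subj} <urn:log:message> "{message}" .')

log = []
log_event(log, 1, "DEBUG", "connection opened")
log_event(log, 2, "ERROR", "connection lost")
```

Nothing here is beyond what a homegrown format could do - which is exactly the point: the win is that the output is a standard format, so the tooling already exists.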

Stop being surprised that some things on the Semantic Web work. And don't expect miracles either.

A quarter million

Last night the number of visitors to the Nodix websites passed the quarter-million mark.

Many thanks to the many loyal visitors of nakit-arts, semantic.nodix, this site, and also the currently much-neglected pages nutkidz, something*positive, DSA4 Werkzeug and XML4Ada95. You are great.

Semantic MediaWiki: The code is out there

Finally! 500 nice lines of code, including the AJAX-powered search, and that's it, version 0.1 of the SeMediaWiki project! Go to Sourceforge and grab the source! Test it! Tell us about the bugs you found, and start developing your own ideas. Create your own Semantic Wiki right now, today.

Well, yes, sure, there is a hell of a lot left to do. Like a proper triple store connected to the wiki. Or an RDF serialization. But hey, there's something you can play with.
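For the curious: in Semantic MediaWiki, annotations are written inline as [[property::value]] links in the wikitext, and each annotation yields a triple about the page. A toy extractor in Python (this is not the actual SeMediaWiki code, just a sketch of the idea):

```python
import re

# [[property::value]] -- the inline annotation syntax used by Semantic MediaWiki
ANNOTATION = re.compile(r"\[\[([^:\[\]]+)::([^\[\]]+)\]\]")

def extract_triples(page, wikitext):
    """Extract (subject, property, value) triples from the
    [[property::value]] annotations in a page's wikitext."""
    return [(page, prop.strip(), value.strip())
            for prop, value in ANNOTATION.findall(wikitext)]

triples = extract_triples(
    "London", "London is the capital of [[capital of::England]].")
```

The annotated link still renders as an ordinary wiki link; the triple is a by-product, which is what makes the approach so unobtrusive.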