Semantic search


Semantic MediaWiki 0.3

Yay! Markus "the Sorcerer" Krötzsch finished the new release of Semantic MediaWiki today. The demo website has already been running version 0.3 for a while.

I'll let Markus speak:

I am glad to finally announce the official release of Semantic MediaWiki 0.3, available as usual at http://sourceforge.net/projects/semediawiki/. The final 0.3 is largely equivalent to the preview version that is still running on wiki.ontoworld.org -- the latest changes mainly concern localization.

Semantic MediaWiki 0.3 now runs on MediaWiki 1.6.1, which was released just yesterday. Older versions of MediaWiki should also work, but upgrading is generally recommended.

The main new features of 0.3 are:

  • support for geographical coordinates (new datatype),
  • improved user interface: service links for JScript tooltips, CSS layout,
  • OWL/RDF export of all annotation data,
  • simplified installation process (including special page for setup/upgrade),
  • (almost) complete localization; translations available for English and German,
  • better MediaWiki integration: namespaces, user/content language, support for MediaWiki 1.6,
  • specials for displaying all relations/attributes,
  • experimental (OWL/RDF) ontology import feature,
  • and, last but not least, we also fixed quite some bugs.

The next steps towards 0.4 will probably be the inclusion of query results into existing pages, date/time support, and individual user settings for displaying certain datatypes. We also will have another look at ways of hiding the annotations from uninitiated users.

Have fun.

Markus

P.S.: I am not available during the weekend. Upgrading existing wikis should work (it's what we do all the time ;), but be aware that there is not going to be much support during the next three days.



Good ontologies?

We have asked you for your thoughts and papers. And you have sent us those -- thank you! 19 submissions, quite a nice number, and the reviewing is still going on.

Now we ask you for your results. Apply your evaluation approaches! We give you four ontologies on the EON2006 website, and we want you to take them and evaluate them. Are these ontologies good? If they are, why? If not, what can be changed? We want practical results, and we want to discuss those results with you! We collected four ontologies, all talking about persons, all coming from very different backgrounds and with different properties. Enough talking -- let's get down and get our hands dirty by really evaluating these ontologies.

The set is quite nice. Four ontologies. One of them we found via rdfdata.org, a great resource for ontologies, some of which I would never have found myself. We took a list of Elvis impersonators. The ontology was edited by a single person, covers a clearly delimited set of information, and is basically plain RDF. The second ontology is the ROVE ontology about the Semantic Web Summer School in Cercedilla last year. It was created by a small team, and is richly axiomatized. Then there is the AIFB ontology, based on the SWRC. It is created from our Semantic Portal at the AIFB, and edited by all the members of the AIFB -- not all of them experts in the SemWeb. Finally, there's a nice collection of FOAF files, taken from all over the web, to be mashed up together and evaluated as one ontology: created with a plethora of different tools, by more than a hundred people. So there should be an ontology fitting each of the evaluation approaches.

We had a tough decision to make when choosing the ontologies. At literally the last moment we got the tempting offer to take three or four legal ontologies and to offer those for evaluation. It was hard, and we would have loved to put both ontology sets up for evaluation, but we finally decided on the set mentioned previously. The legal ontologies were all of similar types, and would certainly need a domain expert for proper evaluation, which many of the evaluators won't have at hand at the moment. I hope it is the right decision (in research, you usually never know).

The EON2006 workshop will be a great opportunity to bring together all people interested in evaluating ontologies. I read all the submissions, and I am absolutely positive that we will be able to present you with a strong and interesting programme soon. I was astonished at how many people have an interest in this field, and was intrigued to discover and follow the paths laid out by the authors. I am looking forward to May, and the WWW!



EON2006 deadline extension

Due to a number of requests, we extended the deadline of the workshop on Evaluating Ontologies for the Semantic Web at the WWW2006 in Edinburgh to the end of the week. I think it is fairer to give an extension to all the authors than to grant it to some on request and deny it to those too shy to ask. If you have something to say on the quality of ontologies and ontology assessment, go ahead and submit! You still have a week to go, and short papers are welcome as well. The field is exciting and new, and considering the accepted ESWC paper, interest in the field seems to be growing.

A first glance of the submissions reveals an enormous heterogeneity of methods and approaches. Wow, very cool and interesting.

What surprised me was the reaction of some: "oh, an extension. You didn't get enough submissions, sorry". I know that this is a common reason for deadline extensions, and I was afraid of that, too. A day before the deadline there was exactly one submission, and we were considering cancelling the workshop. It's my first workshop, and such things make me quite nervous. But now, two days after the deadline, I am much more relaxed. The number of submissions is fine, and we know about a few more to come. Still: we are actively looking for more submissions, for the sole purpose of gathering the community of people interested in ontology evaluation in Edinburgh! I expect this workshop to become quite a leap for ontology evaluation, and I want the whole community to be there.

I am really excited about the topic, as I consider it an important foundation for the Semantic Web. And as you know I want the Semantic Web to lift off, the sooner the better. So let's get these foundations right.

For more, take a peek at the ontology evaluation workshop website.



My Erdös Number

After reading a post by Ora and one by Tim Finin, I tried to figure out my own Erdös number. First, taking Ora's path, I came up with an Erdös number of 7:

Paul Erdös - Stephen Hedetniemi - Robert Tarjan - David Karger - Lynn Stein - Jim Hendler - Steffen Staab - Denny Vrandečić

But then I looked more, and with Tim's path I could cut it down to 6:

Paul Erdös - Aviezri Fraenkel - Yaacov Yesha - Yelena Yesha - Tim Finin - Steffen Staab - Denny Vrandečić

The point that unnerved me most was that the data was actually there. Not only in a subscription-only database for mathematical papers (why the heck is the metadata subscription-only?): there's DBLP, there's the list of Erdös 1 and 2 people on the Erdös Number Project, there's Flink, and still, I couldn't mash up the data. This syntactic web sucks.

The only idea that got me further - without spending even more time on this - was a Google search for "my erdös number" "semantic web", in the hope of finding some colleagues in my field who had already figured out and published their own Erdös numbers. And yep, this worked quite well, and showed me two further paths, totally disjoint from the one above:

Paul Erdös - Charles J. Colbourn - A. E. Brouwer - Peter van Emde Boas - Zhisheng Huang - Peter Haase - Denny Vrandečić

and

Paul Erdös - Menachem Magidor - Karl Schlechta - Franz Baader - Ian Horrocks - Sean Bechhofer - Denny Vrandečić

So that's an Erdös number of 6 along at least three totally different paths. Nice.

What surprises me: isn't this scenario an obvious, great training project for the Semantic Web? Far easier than Flink, I suppose, and still interesting for a wider audience as well, like mathematicians and Nobel laureates? (Oh, OK, not them, they get covered manually here).

Update

I wrote this post quite some time ago. A colleague of mine has since notified me that I have an Erdös number of only 4, via the following path:

Paul Erdös - E. Rodney Canfield - Guo-Qiang Zhang - Markus Krötzsch - Denny Vrandečić

Wow. It's the social web that gave the best answer.

2019 Update

Another update, after more than a dozen years: I was informed that I now have an Erdös number of 3, via the following path:

Paul Erdös - Anthony B. Evans - Pascal Hitzler - Denny Vrandečić

I would be very surprised if this post requires any further updates.



GESTS journal invitation! - ideas for better spam

Yeah, isn't that great! I got an invitation to submit my paper to the GESTS Journal "Transactions on Communications and Signal Processing" (won't link to it). Well, not exactly my field, and I had never heard of the journal, but hey, a journal paper, isn't that great...

Ehhm, not exactly. Actually, it seems to be spam. Another colleague got the same invitation last week. And no one has heard of the journal. And it really isn't my field; I have nothing to do with signal processing. And why do they want money for printing my article?

What I was wondering: why didn't they do it a bit better? With the AIFB OWL Export they could have gotten machine-processable information about the interests of each person at the AIFB. With a bit of SPARQLing they could have gotten tons of information -- fully machine-processable! They could have found out that I am not into signal processing, but into the Semantic Web. Personalizing spam would be sooo easy. Spam could become so much more time-consuming to filter out, and much more attractive, if those spammers just harvested FOAF data and semantic exports. I really am surprised they haven't done that yet.
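
Just to illustrate the point, a query along these lines over the export would have done the trick (a sketch; whether the export actually uses foaf:interest for research interests is my assumption):

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
# which people state which interests?
SELECT ?person ?interest
WHERE {
  ?person rdf:type foaf:Person .
  ?person foaf:interest ?interest .
}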



FOAF browser

Thanks Josef, thanks Pascal! I had complained that Morten's FOAF explorer is still down; they, instead of complaining as well, pointed me to their own FOAF explorers: Josef has his Advanced FOAF Explorer, very minimalistic, but it works! And Pascal points to Martin Borho's FOAFer. FOAFer has a few nice properties.

Thank you guys, your sites are great!

Is your source code available somewhere? Because both of your tools lack a bit in looks, to be honest. And do you really think users like to see SHA1 sums? Or error messages? (Well, actually, that one was OK - it helped me discover a syntax error in the AIFB FOAF files.) Please don't misunderstand me: your sites really are great. And I like using them. But in order to reach a more general audience, we need something slicker, nicer.

Maybe a student in Karlsruhe would like to work on such a thing? Email me.

New tagline is my New Year's resolution

I just changed the tagline of this blog. The old one was rather, hmm, boring:

"Discovering the Semantic Web, Ontology Engineering and related technologies, and trying to understand these amazing ideas - and maybe sharing an idea or two... "

The new one is at the same time my new year's resolution for 2006.

"Kicking the Semantic Web's butt to reality"

'nuff said, got work to do.

Fellow bloggers

Just a few pointers to people with blogs I usually follow:

  • Max Völkel, a colleague from the AIFB, soon moving to the FZI and right now visiting DERI. He obviously likes groups with acronyms. And he's a fun read.
  • Valentin Zacharias, who has deeper thoughts on this whole Semantic Web stuff than most people I know, working at the FZI. He's often a thought-provoking read.
  • Planet RDF. The #1 news blog for the (S/s)emantic (W/w)eb, whether spelled with capital or lowercase initials. That's informative.
  • Nick Kings from BT exact. We are working together on the SEKT project, and he just started to blog. Welcome! A long first post. But the second leads to a great video!
  • Brendan Eich, one of the Mozilla gurus. I want to know where Mozilla is headed - so I read his musings.
  • PhD. It's not a person, it's a webcomic, granted, but they offer an RSS feed for the comic. Cool. I always look forward to new episodes.

So, if you think I should read you, drop me a note. I especially like peers, meaning people who, like me, are working on the Semantic Web - maybe PhD students - and who don't know the answer to anything, but like to work on it, making the web come real.

More FOAF

Wow, I just can't get enough FOAF :) Besides my Nodix FOAF file, the AIFB Portal now also offers a FOAF export for all the people at the AIFB (using URIs for the people besides the mailbox SHA1 sum as identifiers. Hah! FOAFers won't like that, but TimBL told us to do it this way in Galway a few weeks ago).

If you point your smushers at the FOAF files, I wonder if you can also compile the SWRC output into them, as they use the same URIs? And can you also figure out, by adding my own FOAF file from Nodix, that I am the same person? Anyone dare to try? :)
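
To spell out what I am hoping for (a sketch in N3; the two URIs are made-up stand-ins for the identifiers in the actual files): a smusher that matches the inverse functional properties across both files should conclude something like

@prefix owl: <http://www.w3.org/2002/07/owl#> .
# hypothetical URIs standing in for the identifiers in the AIFB and Nodix FOAF files
<http://www.aifb.uni-karlsruhe.de/Personen/viewPersonFOAF#denny>
    owl:sameAs <http://semantic.nodix.net/foaf.rdf#denny> .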

It's a pity Morten's FOAF explorer is down, I'd really like to try it out and browse. Isn't there something similar out there?

A tidbit more on that is also posted on the AIFB blog, but from a different point of view.

A blog for the AIFB

Although I have blogged about the AIFB - the great place I am working at - here as well, Jens Hartmann suggested creating a dedicated AIFB blog on ontoworld. It's still in beta, kind of. We hope that other AIFB people will blog there as well, and so keep you up to date with AIFB stuff: our papers, workshops, conference attendances, break-through results, but also the great weather in Karlsruhe and stories that happened here.

So, while I will still continue to post about the Semantic Web here, the more workplace-related stuff will be found there: at the new AIFB blog.

Annotating axioms in OWL - Reloaded

Yesterday I sent a lengthy mail to the OWLED mailing list about how to annotate axioms. Peter Patel-Schneider himself, first author of the OWL Semantics specification, told me in nice words that my solution sucked heavily, by pointing out that the semantics of annotations in OWL are a tiny bit different than I thought. Actually, they are not at all as I thought. So, in the evening hours, instead of packing my stuff for a trip, I tried to solve the problem anew. Let's see where the problem will be this time.

Peter, you were right, I was wrong. I took a thorough look at the Semantics, and I had to learn that my understanding of annotations was totally screwed. I thought they would be like comments in C++ or Prolog, but instead they are rather like a second ABox over (almost) the whole universe. This surprised me a lot.

But still, I am not that good at giving up, and I think my solution pretty much works syntactically. Now we need only a proper Semantics to get a few things right.

What would be the problem? Let's make an example. I need some kind of syntax to give axioms names. I will just use Name ":" Axiom. This is no proposal for an Abstract Syntax extension, this is just for now.

Axiom1: SubClassOf(Human Mortal)
Axiom2: Individual(Socrates type(Human))

Do they entail the following?

Axiom3: Individual(Socrates type(Mortal))

Well, sadly they don't. Because Axiom3 has a name, Axiom3, that is not entailed by Axiom1 and Axiom2. Their content would be entailed, but the name of the axiom would not.

I guess, this is the problem Peter saw. So, can we solve it?

Well, yes, we can. But it's a bit tricky.

First, we need the notion of Combined Inverse Functional Properties, CIFP. A CIFP has several dimensions. A CIFP with dimension 1 is a normal Inverse Functional Property. A CIFP with dimension 2 over the properties R, S can be represented with the following rule: a R c, a S d, b R c, b S d -> a = b. This means that in a two-dimensional space I can identify an individual with the help of two roles. More on this here: http://lists.w3.org/Archives/Public/semantic-web/2005Feb/0095.html
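
In N3 rule syntax, the dimension-2 case would look something like this (a sketch; :R and :S are placeholder properties):

@prefix : <#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
# two individuals agreeing on both their :R and their :S values are the same individual
{ ?a :R ?c. ?a :S ?d. ?b :R ?c. ?b :S ?d. } => { ?a owl:sameAs ?b }.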

Second, we extend the semantics of OWL. Every axiom entails its reifying annotations. This means:

SubClassOf(Human Mortal)

entails

Individual(Statement1 type(rdf:Statement)
annotation(rdf:subject Human)
annotation(rdf:predicate rdfs:subClassOf)
annotation(rdf:object Mortal))

or, in N3:

Human rdfs:subClassOf Mortal.

entails

Statement1 rdf:type rdf:Statement.
Statement1 rdf:subject Human.
Statement1 rdf:predicate rdfs:subClassOf.
Statement1 rdf:object Mortal.
rdf:subject rdf:type owl:AnnotationProperty.
rdf:predicate rdf:type owl:AnnotationProperty.
rdf:object rdf:type owl:AnnotationProperty.

Third, we have to state that we have a 3D-CIFP for statements over rdf:subject, rdf:predicate and rdf:object*. This is to ensure that Statement1 always maps to the same element in the universe, even though an OWL API could give it a blank node, or a different URI every time (mind you, I am not suggesting extending the OWL language with CIFPs; I just say that a CIFP is used here in order to state that all triples with the same subject, predicate and object actually are the same triple).

Fourth, the above statement also entails

Individual(Axiom1 type(owl11:axiom)
annotation(owl11:consistsOf Statement1))

or, in N3:

Axiom1 rdf:type owl11:axiom.
Axiom1 owl11:consistsOf Statement1.
owl11:consistsOf rdf:type owl:AnnotationProperty.

Fifth, owl11:consistsOf needs to be an n-dimensional CIFP, with n being the number of triples the original axiom got translated to (in this case, lucky us!, n=1).

This assures that an axiom is always the same, whatever its name is, as long as it expresses the same thing. Thus, in our example, Axiom3 would indeed be entailed by Axiom1 and Axiom2. So, even if two editors load an ontology and annotate an axiom, they could later exchange their ontologies and find each other's annotations attached to the correct axiom.

This is only a rough sketch of the approach, and yes, I see that the interpretation gets filled up with a lot of annotations, but I still think that this is quite easy to implement, actually. Both the OWL API by Bechhofer and Volz and the KAON2 API by Motik offer access to axioms on an ontology level, and also offer the possibility to check whether two axioms are the same (which is basically a shortcut for the whole semantic entailment and CIFP stuff proposed earlier), if I remember correctly. All they need is a further field containing the URI of the axiom.

As said, this looks far nastier than it actually is, and for most practical purposes it won't do much harm. Now we finally can annotate axioms, yeeeha!

Merrily awaiting Peter's acknowledgement that this is a brilliant solution :) Or else he'll tell me I did it all wrong again, and I'll have to think over the weekend about how to solve this problem anew.

Cheers, denny

 *What I mean by that is the following rule: a=b :- a rdf:subject s, a rdf:predicate p, a rdf:object o, b rdf:subject s, b rdf:predicate p, b rdf:object o

Annotating axioms in OWL

I sent this to the OWLED list, which is preparing an OWL 1.1 recommendation. The week before, Alan Rector had suggested adding the possibility to annotate axioms in OWL, which is currently not possible. There are many good uses for that, like provenance, trust, and so on. But the discussion wasn't too fruitful, so I suggested the following solution.

After it came up in discussion last week, I hoped an elegant solution for annotating axioms would arise. Sadly, no one had a brilliant idea, so I went ahead and tackled the problem in my mediocre way.

First, what do I want to achieve with my solution:

  1. Don't crack the Semantic Web stack. The solution has to be compatible with XML, RDF and OWL. I don't want to separate OWL from RDF, but to offer a solution that can be handled by both.
  2. We want to annotate not just entities, but also axioms. Thus an axiom needs to be able to be the subject of a statement. Thus an axiom needs to have a URI.
  3. The solution must be easy to implement, or else people will get my FOAF file, see whom I care about, and hurt them.

Did I miss something? I found two solutions for this problem.

A) Define the relationship between an ontology (which does have a URI) and the axioms stated inside it. Then we can talk about the ontologies, annotate those, add provenance information, etc. Problem: after importing axioms from one ontology into another, that information is lost. We would need a whole infrastructure for networked ontologies to achieve this, which is a major and worthy task. With this solution, you can annotate a single axiom by putting it alone into an ontology, and claim that when annotating the ontology you actually annotate the axiom as well. Not my favourite solution, because of several drawbacks which I won't dwell on if not asked.

B) The other solution is using reification (stop yelling and moaning right now!). I'm serious. And it's not that hard, really. First, the OWL specification offers a standard for how to translate the axioms into triples. Second, the RDF specification offers a standard way to reify a triple. With RDF reification we can give a triple a name. Then we can introduce a new resource type owl11:axiom, whose instances collect the triples that were translated from a certain DL axiom. This RDF resource of type owl11:axiom is then the name/URI of the original DL axiom.

RDF triples that have a subject of type rdf:Statement or owl11:axiom don't have semantics with regard to OWL DL's model-theoretic semantics; they are just syntactic parts of the ontology that allow the naming of axioms in order to annotate them.

For example, we say that all Humans are Mortal. In Abstract Syntax this is

SubClassOf(Human Mortal)

In RDF triples (N3) this is:

:Human rdfs:subClassOf :Mortal.

Now, reifying this, we add the triples:

:statement1 rdf:type rdf:Statement.
:statement1 rdf:subject :Human.
:statement1 rdf:predicate rdfs:subClassOf.
:statement1 rdf:object :Mortal.
:axiom1 owl11:consistsOf :statement1.

Now we can make annotations:

:axiom1 :bestBefore "2011-12-24"^^xsd:date.
:axiom1 :utteredBy :Aristotle.

Naturally, :bestBefore and :utteredBy have to be annotation properties. When an axiom is broken up into more than one triple, the reason for having an extra owl11:axiom instead of simply using rdf:Statement should become clear.
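
To make this concrete, here is a sketch (the axiom and the statement names are made up): an axiom like SubClassOf(Human unionOf(Man Woman)) translates to more than one triple, and owl11:consistsOf is what keeps their reifications together under one name:

:statement2 rdf:type rdf:Statement.
:statement2 rdf:subject :Human.
:statement2 rdf:predicate rdfs:subClassOf.
:statement2 rdf:object _:union.
:statement3 rdf:type rdf:Statement.
:statement3 rdf:subject _:union.
:statement3 rdf:predicate owl:unionOf.
:statement3 rdf:object (:Man :Woman).
:axiom2 owl11:consistsOf :statement2.
:axiom2 owl11:consistsOf :statement3.

(The list (:Man :Woman) itself expands into further triples, which would be reified the same way; I am skipping those here.) An rdf:Statement alone could not tell us which reified triples belong to which axiom; owl11:consistsOf can.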

Does this solution fulfill the given conditions?

  1. The Semantic Web stack is safe and whole. RDF semantics is adhered to, OWL semantics is fine, and all syntax regulations imposed by XML and RDF/XML are respected. Everything is fine.
  2. Yep, we can annotate single axioms. Axioms have URIs. We can annotate our metadata! Yeah!
  3. Is it easy to implement? I think it is: for reading OWL ontologies, a tool may just ignore all those extra triples (it can easily filter them out) and still remain faithful to the standard semantics. Tools that allow naming axioms (or annotating them) and want to deal with those simply have to check for the correct reification (RDF toolkits should provide this anyway) and get the axiom's URI.

Problems that I see: I identified two. First, what happens if those triples get separated from the actual axiom triples? What if they get ripped apart and mushed into another ontology? Well, that problem is somewhat open for OWL DL and Lite anyway, since not all axioms map to single triples. The answer probably is that reification would fail in that case. A strict reading could be that the ontology then leaves OWL DL and moves to OWL Full, but I wouldn't require that.

The second problem, and this one is by far more serious, is that people can't stand reification in RDF - they simply hate it, and for that reason alone they will ignore this solution. I can only answer that reification in practice is probably much easier than expected when done properly, due to some short-hand notations available in the RDF/XML serialization and other syntaxes. No one holds us back from changing the Abstract Syntax and the OWL XML Presentation Syntax appropriately, in order to name axioms far more easily than in the proposed RDF/XML syntax. Serializations in RDF/XML syntax may get yucky, and the RDF graph of an OWL ontology could become cluttered, but then, so what? RDF/XML isn't read by anyone anyway, is it? And one can remove all those extra triples (and with them the annotations) automatically if wished, without changing the semantics of the ontology.

So, any comments on why this is bad? (Actually, I honestly think this is a practicable solution, though not elegant. I already see the 2007 ISWC best paper award, "On the Properties of Higher Order Logics in OWL"...)

I hope you won't kill me too hard for this solution :) And I need to change my FOAF-file now, in order to protect my friends...

Job at the AIFB

Are you interested in the Semantic Web? (Well, probably yes, or else you wouldn't be reading this.) Do you want to work at the AIFB, the so-called Semantic Web Machine? (It was Sean Bechhofer who gave us this name, at the ISWC 2005.) Maybe this is your chance...

Well, if you ask me, this is the best place to work. The offices are nice, the colleagues are great, our impact is remarkable - oh well, it's loads of fun to work here, really.

We are looking for a person to work especially on KAON2, which is a main building block of much AIFB software - for example my own OWL Tools - and of some European projects. Mind you, this is no easy job. But if you have finished your Diploma, Master's or PhD, know a lot about efficient reasoning, and have quite some programming skills, peek at the official job offer (also available in German).

Do you dare?

Semantic Web Gender Issue

Well, at least they have come quite a way. With Google Base one can create new types of entities, create entities themselves, and search for them. I am not too sure about the user interface yet, but it's surely one of the best actually running on big amounts of data. Nice query refinement, really.

But heck, there's one thing that scares me off. I was looking today for all the people interested in the Semantic Web, and there are already some in there. And you can filter them by gender. I was just mildly surprised at the choices I was offered when I wanted to filter them by gender...

An image is still missing here.

Oh come on, Google. I know there are not that many girls in computer science, but really, it's not that bad!

What is a good ontology?

You know? Go ahead, tell me!

I really want to know what you think a good ontology is. And I will make it the topic of my PhD: Ontology Evaluation. But I want you to tell me. And I am not the only one who wants to know. That's why Mari Carmen, Aldo, York and I have submitted a proposal for a workshop on Ontology Evaluation, and happily it got accepted. Now we can officially ask the whole world to write a paper on that issue and send it to us.

The EON2006 Workshop on Evaluation of Ontologies for the Web - 4th International EON Workshop (that's the official title) is co-located with the prestigious WWW2006 conference in Edinburgh, UK. We were also very happy that so many renowned experts accepted our invitation to the program committee, thus ensuring a high quality of reviews for the submissions. The deadline is almost two months away: January 10th, 2006. So you have plenty of time until then to write that mind-busting, fantastic paper on Ontology Evaluation! Get all the details on the workshop website http://km.aifb.uni-karlsruhe.de/ws/eon2006.

I really hope to see some of you in Edinburgh next May, and I am looking forward to lively discussions about what makes an ontology a good ontology (by the way, if you plan to submit something, I would love to get a short notification - that would really be great. But it is by no means required. It's just so that we can plan a bit better).

ISWC impressions

The ISWC 2005 is over, but I'm still in Galway, hanging around at the OWL Experiences and Directions workshop. The ISWC was a great conference, really! I met so many people from the Summer School again, heard a surprising number of interesting talks (there are some conferences where one boring talk follows the other; that's definitely different here) and got some great feedback on some work we're doing in Karlsruhe.

Boris Motik won the Best Paper Award of the ISWC for his work on the properties of metamodeling. Great paper and great work! Congratulations to him, and also to Peter Mika, though I still have to read his paper to form my own opinion.

I will follow up on some of the topics from the ISWC and the OWLED workshop, but here's my quick first wrap-up: great conference! Only the weather was, sadly, as bad as expected. Who decided on Ireland in November?

KAON2 and Protégé

KAON2 is the Karlsruhe Ontology infrastructure. It is an industrial-strength reasoner for OWL ontologies, pretty fast and comparable to reasoners like FaCT and Racer, which have gained from years of development. As of a few days ago, KAON2 also implements the DIG interface! Yeah, now you can use it with your tools! Go and grab KAON2 and get a feeling for how well it fulfills your needs.

Here's a step-by-step description of how you can use KAON2 with Protégé (other DIG-based tools should work pretty much the same). Get the KAON2 package, unpack it, and go to the folder containing the kaon2.jar file. This is the Java library that does all the magic.

Be sure to have Java 5 installed and in your path. No, Java 1.4 won't do it; KAON2 builds heavily on some of the very nice Java 5 features.

You can start KAON2 now with the following command:

java -cp kaon2.jar org.semanticweb.kaon2.server.ServerMain -registry -rmi -ontologies server_root -dig -digport 8088

Quite lengthy, I know. You will probably want to stuff this into a shell script or batch file so you can start your KAON2 reasoner with a simple double-click.
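
For example, a minimal wrapper could look like this (assuming kaon2.jar sits in the same folder as the script):

#!/bin/sh
# start the KAON2 server with the DIG interface on port 8088
java -cp kaon2.jar org.semanticweb.kaon2.server.ServerMain \
     -registry -rmi -ontologies server_root -dig -digport 8088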

The last argument - 8088 in our example - is the port of the DIG service. Fire up your Protégé with the OWL plugin, and check the preferences window in the OWL menu. The reasoner URL will tell you where Protégé looks for a reasoner - with the above DIG port it should be http://localhost:8088. If you chose another port, be sure to enter the correct address here.

Now you can use the consistency checks and automatic classification and all this, as provided by Protégé (or any other ontology engineering tool featuring the DIG interface). Protégé also tells you the time your reasoner took for its tasks - compare it with Racer and FaCT, if you like. I'd be interested in your findings!

But don't forget - this is the very first release of the DIG interface. If you find any bugs, say so! They must be squashed! And don't forget: KAON2 is quite different from your usual tableau reasoner, and so some queries are simply not possible. But the restrictions shouldn't be too severe. If you want more information, go to the KAON2 website and check the references.

KAON2 OWL Tools V0.23

A few days ago I packaged the new release of the KAON2 OWL tools. And they moved from their old URL (which was pretty obscure: http://www.aifb.uni-karlsruhe.de/WBS/dvr/owltools ) to their new home on OntoWare: owltools.ontoware.org. Much nicer.

The OWL tools are a growing number of little tools that help people working with OWL. Besides the already existing tools like count, filter and merge, some of them enhanced, some new ones have entered the scene: populate, which simply populates an ontology randomly with instances (which may be used for testing later on), and screech, which creates a split program out of an ontology (you can find more information on OWL Screech's own website).

A very special little thing is the first beta implementation of shell. This will become a nice OWL shell that will allow you to explore and edit OWL files. No, this is not meant as a competitor to full-fledged integrated ontology development environments like OntoStudio, Protégé or SWOOP; it's rather an alternative approach. And it has just started. I hope to have autocompletion implemented pretty soon, and some more commands. If anyone wants to join, drop me a mail.

Why some are disenchanted

In a comment to my last blog entry, Christopher St John wrote:

"I suffered through the 80's Knowledge Representation fad, both academically in the AI program at Edinburgh and as a practitioner at the only company ever to produce a commercial system written in Prolog (that wasn't a Prolog development system.) So I'm familiar with the problems that the Semantic Web effort is attempting to address. Having slogged through real-life efforts to encode substantial amounts of knowledge, I find some of the misty-eyed musings that surround the Semantic Web effort depressing. That "most information on the Web is designed for human consumption" is seen as an obstacle surmountable via tools like RDF is especially sad. On the other hand, I'm always happy to make use of the cool tools that these sorts of things seem to throw off. There's probably a certain Proverbs 26:11 aspect to it as well."

Thanks for your insightful comment; being new to the field, I certainly appreciate a report based on real-life experience - and I have to admit I am probably guilty of being misty-eyed about the Semantic Web myself more than once (and probably will be in the future as well).

'"Most information on the Web is designed for human consumption" is seen as an obstacle'. Yes, you are right, this is probably the worst phrased sentence in the Semantic Web vision. Although I think it's somehow true: if you want the computer to help you dealing with today's information overflow, it must understand as much of the information as possible. The sentence should be at least rephrased as "most information on the Web is designed only for human consumption". I think it would be pretty easy to create both human-readable and machine-friendly information with only little overhead. Providing such systems should be fairly easy. But this is only about the phrasing of the sentence - I hope that every Semwebber agrees that the Semantic Web's ultimate goal is to help humans, not machines. But we must help the machines in order to enable them to help us.

The much more important point that Christopher addresses is his own disenchantment with the Knowledge Representation research of the 80s, and probably that of many people with the AI research of a generation before. So the Semantic Web may just seem like the third generation of futile technologies trying to solve AI-complete problems.

There were some pretty impressive results from AI and KR, and the Semantic Web people build on them. Some more, some less - some even too much, forgetting the most important component of the Semantic Web along the way: the Web. Yes, you can write whole 15-page papers, submit them to Semantic Web conferences and journals, and not once mention anything web-specific. That's bad, and that is what Christopher, like some researchers, does not see as well: the main difference between the work of two decades ago and today's line of investigation. The Web changes it all. I don't know if AI and KR had to fail - they probably must have, because there were so many intelligent people doing it, so there's no other explanation than that it had to fail due to the premises of its time. I have no idea if the Semantic Web is bound to fail as well today. I have no idea if we will be able to achieve as much as AI and KR did in their time, or less, or maybe even more. I am a researcher. I have no idea if the things I do will work.

But I strongly believe it will, and I will invest my time and part of my life towards this goal. And so do dozens and dozens of other people. Let's hope that some nice things will be created in the course of our work. Like RDF.

RDF is not just for dreamers

Sometimes I stumble upon posts that leave me wondering what people actually think about the whole Semantic Web idea, and about standards like RDF, OWL and the like. Do you think academia people went out and purposefully made them complicated? That they don't want them to get used?

Christopher St. John wrote down some nice experiences with using RDF for logging. And he was pretty astonished that "RDF can actually be a pretty nifty tool if you don't let it go to your head. And it worked great."

And then: "Using RDF doesn't actually add anything I couldn't have done before, but it does mean I get to take advantage of tons of existing tools and expertise." Well, that's pretty much the point of standards. And the point of the whole Semantic Web idea. There won't be anything you will be able to do later, that you're not able to do today! You know, assembler was pretty turing-complete already. But having your data in standard formats helps you. "You can buy a book on RDF, but you're never going to buy a book on whatever internal debug format you come up with"

Stop being surprised that some things on the Semantic Web work. And don't expect miracles either.

Semantic MediaWiki: The code is out there

Finally! 500 nice lines of code, including the AJAX-powered search, and that's it, version 0.1 of the SeMediaWiki project! Go to Sourceforge and grab the source! Test it! Tell us about the bugs you found, and start developing your own ideas. Create your own Semantic Wiki right now, today.

Well, yes, sure, there is a hell of a lot left to do. Like a proper triple store connecting to the wiki. Or an RDF serialization. But hey, there's something you can play with.

Semantic MediaWiki Demo

Yeah! DocCheck's Klaus Lassleben is implementing the Semantic MediaWiki, and there has been a version of it running for quite some time already, but some bugs had to be killed. Now, go and take a look! It's great.

And the coolest thing is the search. Just start typing the relation, and it gives you autocompletion, just like Google Suggest does (well, a tiny bit better :). Sure, the autocompletion is no scientific breakthrough, but it's a pretty darn cool feature.

The SourceForge project Semediawiki is already up and running, and I sure hope that Mr Lassleben will commit the code any day soon!

Even better, Sudarshan has already started implementing extensions to it - without having the code! That's some dedication. His demo is running here, and shows how the typed links may be hidden from the source text of the wiki, for those users who don't like it. Great.

Now, go and check the demo!

New people at Yahoo and Google

Vint Cerf starts working at Google, Dave Beckett moves to Yahoo. Both like the Semantic Web (Vint said so in a German interview with c't, and I probably don't have to remind you of Dave's accomplishments).

I'm sure Yahoo got Dave because of his knowledge of the Semantic Web. And I wonder if Google got Vint for the same reason? Somehow, I doubt it.

Another Semantic MediaWiki

I stumbled upon another Semantic MediaWiki, an implementation created by Hideaki Takeda and Muljadi Hendry of the Japanese National Institute of Informatics in Tokyo. Their implementation looks very neat, although it differs in a few basic things (things we consider crucial for it to work); take a look at their full paper (it's in their wiki - oh, and it's in Japanese).

The basic difference between their approach and the one we suggest is that they add metadata management abilities to MediaWiki - which is cool. But they don't seem to aim at a full integration into Wikipedia, i.e. embedding the metadata into the article text instead of appending it in some separate place. Actually, if we had software that was able to process natural language, we wouldn't need our approach - but theirs would still be useful.

Nevertheless, they have a huge advantage: a running system. Go there, take a look, it's cool! Actually, we have a system online too, but we won't disclose the link yet due to a bug that's kind of a showstopper. But expect it to be online next week - including the source and all! It will be just a first version, but I sure hope to gather the people who want to work on it around the code.

Committed to the Big S

Not everyone likes our proposal for the Semantic Wikipedia. That's not a big surprise, really. Boris Mann was talking about the advantages of tagging and about ideas like blessed tags, which sounded very nice, when Jay Fienberg pointed him to the Semantic MediaWiki proposal. Boris answers: "I notice with a shudder however, that the Mediawiki stuff uses a large "S" Semantic, and includes RDF. I admit it, I'm afraid of RDF."

Yes, we do. And we're proud of it. Actually, it's the base for the better half of the possible applications we describe. Jay has some nice answers to it: "I think the MediaWiki folks are just recognizing the connection between their "tags" and the big "S" Semantic Web [you bet!, denny]. There are taxonomies and ontologies behind the popular tagging apps too--folks behind them just aren't recognizing / publicizing this (for a number of reasons, including that tags are often part of a practical application without big "S" Semantic Web goals). [...] I'm not a super huge fan of RDF myself, but I think it's useful to not be afraid of it, because some interesting things will come out of it at some point."

Our idea was to allow the user to use Semantic Web technologies even without really understanding them. No one needs to fully understand RDF, or OWL, to be able to use them. Sure, if she does, it surely will help her. And by the way, RDF really is not complicated at all, it just has a syntax that sucks. So what?

Maybe it's a crude joke of history to start the Semantic Web with syntactic problems...

By the way, does anyone have a spare invitation to GMail for me? I'd really like to check out their service. Thanks, Peter, that was fast.

Semantic Wikipedia

Marrying Wikipedia and the Semantic Web in Six Easy Steps - that was the title of the WikiMania 2005 presentation we gave about a month ago. On the Meta-Wikipedia we - especially Markus Krötzsch - have been quite active on the Semantic MediaWiki project, changing and expanding our plans. DocCheck is working right now on a basic implementation of the ideas - they have lots of wiki experience already, with Flexicon, a MediaWiki-based medical lexicon. We surely hope the prototype will be up and running soon!

Wow, the project seems to be received pretty well.

Tim Finin, Professor in Maryland: "I think this is an exciting project with a lot of potential. Wikipedia, for example, is marvelously successful and has made us all smarter. I’d like my software agents to have a Wikipedia of their own, one they can use to get the knowledge they need and to which they can (eventually) contribute." - Wikipedia meets the Semantic Web, Ebiquity blog at UMBC

Mike Linksvayer, CTO of Creative Commons: "The Semantic MediaWiki proposal looks really promising. Anyone who knows how to edit Wikipedia articles should find the syntax simple and usable. All that fantastic data, unlocked. (I’ve been meaning to write on post on why explicit metadata is democratic.) Wikipedia database dump downloads will skyrocket." - Annotating Wikipedia, Mike Linksvayers Blog

Danny Ayers, one of the developers of Atom and Author of Atom and RSS Programming: "The plan looks very well thought out and quite a pile of related information has been gathered. I expect most folks that have looked at doing an RDF-backed Wiki would come to the same conclusion I did (cf. stiki) - it’s easy to do, but difficult to do well. But this effort looks like it should be the one." - Wikipedia Bits, Danny Ayers, Raw Blog

Lambert Heller of the University of Münster wrote a German blog entry on the netbib weblog, predicting world domination. Rita Nieland has a Dutch entry on her blog, calling us heroes - if we succeed. And on Blog posible Alejandro Gonzalo Bravo García has written a great Spanish entry, saying it all: the web is moving, and at great speed!

So, the idea seems to be catching on like a cold in rainy weather, and we really hope the implementation will soon be there. If you're interested in contributing - either ideas or implementation - join our effort! Write us!

Failed test

Testing my mobile blogging thingie (and it failed, should have gone to the other blog). Sorry for the German noise.

Gotta love it

Don't do research if you don't really love it. Financially, it's disastrous. It's the "worst pay for the investment", according to CNN.

Good thing I love it. And good thing Google loves the Semantic Web as well. Or why else do they make my desktop more and more semantic? I just installed the Desktop2 Beta - and it is pretty cool. And it's wide open to Semantic Stuff.

FOAFing around

I tried to create FOAF-files out of the ontology we created during the Summer School for the Semantic Web. It wasn't that hard, really: with our ontology I have enough data to create some FOAF-skeletons, so I looked into the FOAF-specification and started working on it.

<foaf:Person rdf:about="#gosia">
  <foaf:knows rdf:resource="#anne" />
  <foaf:name rdf:datatype="http://www.w3.org/2001/XMLSchema#string">Gosia Mochol</foaf:name>
</foaf:Person>
<rdf:Description rdf:about="#anne">
  <rdfs:isDefinedBy rdf:resource="http://semantic.nodix.net/sssw05/anne.rdf" />
...

Well, every one of us gets his own FOAF-file, where one can find more data about the person. Some foaf:knows-relations have been created automatically for the people who worked together in a miniproject. I didn't want to assume too much else.

The code up there is valid FOAF, as far as I can tell. But all of the (surprisingly sparse) tools could not cope with it, for different reasons. One complained about the datatype declaration on the foaf:name and then ignored the name altogether. Most tools didn't know that rdfs:isDefinedBy is a subproperty of rdfs:seeAlso, and thus were not able to link the FOAF files. And most tools were obviously surprised that I gave the persons URIs instead of using the IFP over the SHA1 sum of their e-mail addresses. The advantage of having URIs is that we can use those URIs to tag pictures or to keep track of each other's publications, after the basic stuff has been settled.
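
For comparison, the idiom most tools seem to expect identifies a person not by a URI but by a blank node plus the inverse functional property foaf:mbox_sha1sum (a sketch in N3; the hash value is a made-up placeholder):

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
# no URI for the person: just a blank node, identified via the IFP
[] a foaf:Person ;
   foaf:name "Gosia Mochol" ;
   foaf:mbox_sha1sum "0000000000000000000000000000000000000000" .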

Sadly, the basic stuff is not settled. To me it seems that the whole FOAF scene, although called the most widespread use of the Semantic Web, is still in its infancy. The tools hardly collaborate, they don't care too much about the specs, and there seems to be no easy way to browse around (Morten's explorer was down at the time I created the FOAF files, which was frustrating, but now it works: take a look at the created FOAF files, entering with my generated FOAF file or the one for Enrico Motta). Maybe I just screwed it all up when generating the FOAF files in the first run, but I don't really think so...

Guess someone needs to create a basic working toolset for FOAF. Does anyone need requirements?

SSSW Last Day

The Summer School on Ontological Engineering and the Semantic Web finished on Saturday, July 16th, and I can't remember having had a more intense and packed week in years. I really enjoyed it - the tutorials, the invited talks, the workshops, the social events, the mini project - all of it was awesome. It's a pity that it's all over now.

Today, besides the farewells and thank-yous and the party in Madrid with maybe half of the people, we also had the presentations of the mini projects. The mini projects were somewhat similar to The Semantic Web In One Day we had last year - but without a real implementation. Groups of four or five people had to create a Semantic Web solution in only six hours (well, at least conceptually).

The results were interesting. All of them were well done and highlighted some promising use cases for the Semantic Web where data integration will play an important role: going out in the evening, travelling, dating. I'd rather not consider too deeply whether computer scientists are scratching their own itch here ;) I really enjoyed the peer-to-peer theater, where messages wandered through the whole classroom in order to visualize the system. That was fun.

Our own mini project modelled the Summer School and the projects themselves, capturing knowledge about the composition of the groups and classifying them. We had to use not only quite complex OWL constructs but also SWRL rules - and we still had problems expressing a rather simple set of rules. Right now we are trying to write these experiences down in a paper; I will let you know here as soon as it is ready. Our legendary eternal struggle at the boundaries of sanity and Semantic Web technologies seemed to be impressive enough to have earned us a cool prize. A clock.

Thanks to all organizers, tutors and invited speakers of the Summer School, thanks to all the students as well, for making it such a great week. Loved it, really. I hope to stay in touch with all of you and see you at some conference pretty soon!

SSSW Day 5

Today (which is July 15th) there was just one talk. The rest of the day - besides the big dinner (oh well, yes, there was a fantastic dinner speech, performed by Aldo Gangemi and prepared by Enrico and Asun if I understood it correctly, which was hilariously funny) and the disco - was available for work on the mini projects. But more about the mini projects in the next blog post.

The talk was given by the University of Manchester's Carole Goble (I like her website. It starts with the sentence "This web page is undergoing a major overhaul, and about time. This picture is 10 years old. the most recent ones are far too depressing to put on a web site." How many professors did you have who would have done this?). She gave a fun and nevertheless insightful talk about the Semantic Web and the Grid, describing the relationship between the two as a very long engagement. The Grid is the old, grubby, hard-working groom; the Semantic Web the bride, aesthetically pleasing and beautiful.

What is getting gridders excited? Flexible and extensible schemata, data fusion, and reasoning. Sound familiar? Yes, these are exactly the main features of Semantic Web technologies! The Grid is not about accessing big computers (as most people in the US think, but they are a bit behind on this as well), it is about knowledge communities. But one thing is definitely lacking: scalability, people, scalability. They went and tested a few Semantic Web technologies with a little data - 18 million triples. Every tool broke. The scalability is lacking, even though the ideas are great.

John Domingue pointed out that scalability is not as much of a problem as it seems, because the TBoxes, where the actual reasoning will happen, will always remain relatively small, and the scalability issue with the ABoxes can be solved with classic database technology.

The Grid offers real applications, real users, real problems. The Semantic Web offers a lot of solutions and discussions about the best solution - but surprisingly often lacks an actual problem. So it is obvious that the two fit together very nicely. At the end, Carole described them as engaged, but not married yet.

At the end she quoted Trotsky: "Revolution is only possible when it becomes inevitable" (well, at least she claims it's Trotsky; Google claims it's Carole Goble - maybe someone has a source? Wikiquote doesn't have it yet). The quote is in line with almost all the speakers: the Semantic Web is not revolution, it is evolution, an extension of the current web.

Thanks for the talk, Carole!

Wikimania is coming

Wikimania starts on Friday. I'm looking forward to it. I'll be there with a colleague, and we will present a paper on Wikipedia and the Semantic Web - The Missing Links on Friday. Should you be in Frankfurt, don't miss it!

Here's the abstract: "Wikipedia is the biggest collaboratively created source of encyclopaedic knowledge. Growing beyond the borders of any traditional encyclopaedia, it is facing new problems of knowledge management: The current excessive usage of article lists and categories witnesses the fact that 19th century content organization technologies like inter-article references and indices are no longer sufficient for today's needs.

Rather, it is necessary to allow knowledge processing in a computer assisted way, for example to intelligently query the knowledge base. To this end, we propose the introduction of typed links as an extremely simple and unintrusive way for rendering large parts of Wikipedia machine readable. We provide a detailed plan on how to achieve this goal in a way that hardly impacts usability and performance, propose an implementation plan, and discuss possible difficulties on Wikipedia's way to the semantic future of the World Wide Web. The possible gains of this endeavour are huge; we sketch them by considering some immediate applications that semantic technologies can provide to enhance browsing, searching, and editing Wikipedia."

Basically, we suggest introducing typed links to Wikipedia, plus an RDF export of the articles in which these typed links are regarded as relations. And suddenly you get a huge ontology, created by thousands and thousands of editors, queryable and usable, a really big starting block and incubator for Semantic Web technologies - and all this, still scalable!
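
To give an idea of what this could look like (a sketch; both the concrete link syntax and the export vocabulary are made up for illustration and still up for discussion):

# In the article "London", instead of linking with [[England]], one would write:
#   London is the capital of [[is capital of::England]]
# The RDF export could then contain a triple like:
<http://en.wikipedia.org/wiki/London>
    <http://wikipedia.org/relation/is_capital_of>
    <http://en.wikipedia.org/wiki/England> .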

That is, if the Wikipedia community agrees that this is a nice idea - which I hope with all my heart. We'll see this weekend.

SSSW Day 4

This day had no theoretical talks, but instead two invited speakers - and much social programme, with a lunch at a swimming pool and a dinner in Segovia. Segovia is a beautiful town, with a huge, real, still-standing Roman aqueduct. Stunning. And there I ate the best pork ever! The aqueduct survived the huge Lisbon earthquake of 1755, although houses around it crumbled and broke. This is because it is built without any mortar - just stone on stone. So the stones could swing and move slightly, and the construction survived.
Made me think of loosely coupled systems. I have probably had too much computer science these last few days.

The talks were very different today: first was Mike Wooldridge of the University of Liverpool. He talked about multiagent systems in the past, the present and the future. He identified five trends in computing: ubiquity, interconnection, intelligence, delegation and human-orientation.
His view on intelligence was very interesting: it is about the complexity of tasks that we are able to automate and delegate to computers. He quoted John Alan Robinson - the guy who invented the resolution calculus, a professor of philosophy - as exclaiming "This is Artificial Intelligence!" when he saw a presentation of the FORTRAN compiler at a conference. I guess the point was: don't worry about becoming as intelligent as humans, just aim at getting closer.
"The fact that humans were in control of cars - our grandchildren will be quite uncomfortable with this idea."

The second talk returned to the Semantic Web in a very pragmatic way: how to make money with it? Richard Benjamins of iSOCO had just flown in from Amsterdam, where he had been at the SEKT meeting, and he brought promising news about the developing market for Semantic Web technologies. Mike Wooldridge criticized Richard's optimistic projections and noted that he too, about ten years ago, put a lot of energy and money into the growing multiagent market - and lost most of it. It was an interesting discussion - Richard being the believer, Mike the sceptic, and a lot of young people betting a few years' worth of their lives on the ideas presented by the former...

SSSW Day 3

Yeah, sure, the Summer School for the Semantic Web has been over for quite a while now, and here I started to blog about it daily and didn't manage to get past the first three days. Let's face it: it was too much! The programme was so dense, the social events so enjoyable, that I couldn't even spare half an hour a day to continue the blogging. Now I want to recap some of the notes and memories I have of the second half of the Summer School. My bad memory be damned - if you want to correct something, feel free to do so.

This day's invited speaker was Roberto Basili of the University of Rome. He sketched the huge field of natural language processing, and although he illustrated the possible interactions between lexical knowledge bases and ontologies, he nevertheless made a strong distinction between the two. Words are not concepts. "The name should have no value for defining a concept." This is like "Don't look into URIs" for HLT people. He made a very interesting point: abductions will become very important in the Semantic Web, as they model human thinking patterns much more closely than strict deduction does. Up until this day I was quite against abductions; I discussed this issue very stubbornly in Granada. But Roberto made me aware of a slightly different viewpoint: just sell abductive resolutions as suggestions, as proposals to the user - et voilà, the world is a better place! I will have to think about this a bit more some day, but he did make me think.

The theoretical sessions and workshops today were packed and strenuous: we jumped from annotations to Semantic Web Services and back again. Fabio Ciravegna of the University of Sheffield's NLP group, which created tools like Armadillo and GATE, gave us a thorough introduction to annotations for the Semantic Web and the usage of Human Language Technologies to enhance this task. He admitted that many of the tools are still quite unwieldy, but he tried to make a point by saying: "No one writes HTML today anymore with a text editor like Emacs or Notepad... or do you?"
All students raised their hands. Yes, we do! "Well, in the real world at least they don't..."

He also made some critical comments on the development of the Semantic Web: the technologies being developed right now allow for a hitherto unknown ability to collect and combine data. Does this mean our technologies actually require a better world - one with no secrets, and no need for notions like privacy and spam? Is metadata just adding hay to the haystack instead of really finding the needle?

John Domingue's talk on Semantic Web (Web) Services was a deep and profound introduction to the field, and especially to the IRS system developed by the KMi at the Open University. He defended WSMO valiantly, but due to time constraints regrettably skipped the comparison with OWL-S. Still, he motivated the need for Semantic Web Services and sketched a possible solution.

The day ended in Cercedilla, where we besieged a local disco. I guess the locals were hiding: "watch it, them nerds are coming!" ;) The music was surprisingly old - they had those funny vinyl albums - but heck, Frank Sinatra is never outdated. The 80s certainly are, though...

SSSW Day 2

Natasha Noy gave the first talk today, providing a general overview of mapping and alignment algorithms and tools. Even though I was not too interested in the topic, she really caught my interest with a good, clean and well-structured talk. Thanks for that! Afterwards, Steffen Staab continued, elaborating on the QOM approach to ontology mapping, with some really funny slides; but as this work was mostly developed in Karlsruhe, I already knew it. I liked his appeal for more tools that are simply downloadable and usable, without having to fight for hours or days just to create the right environment for them. I totally agree on that!

The last talk of the day was by Aldo Gangemi on Ontology Evaluation. As I am considering making this the theme of my PhD thesis - well, I am almost decided on that - I was really looking forward to his talk. Although it was partially hard to follow, because he took quite a broad approach to the topic, there were numerous interesting ideas and a nice bibliography. Much to work on. In particular, I haven't yet seen the structural measures he presented applied to the Semantic Web. Not knowing any literature on them, I am still afraid that they actually fail Frank's requirements from yesterday (see Day 1): not just to be taken from graph theory, but to have the full implications of the Semantic Web paradigm applied to them and thought through. Well, if no one has done that yet, there's some obvious work left for me ;)

The hands-on sessions today were quite stressful, but nevertheless interesting. First, we had to power-construct ontologies about different domains of travelling: small groups of four persons working on a flight agency ontology, a car rental service ontology and a hotel ontology. Afterwards, we had to integrate them. Each exercise had to be done in half an hour. We pretty much failed miserably at both, but we surely encountered many problems - which was the actual goal: in OWL DL you can't even concatenate strings. How much data integration can you do then?

The second hands-on session was on evaluating three ontologies. It was quite interesting, although I really think that many of these things could happen automatically (I will work on this in the next two weeks, I hope). But the discussion afterwards was quite revealing, as it showed how differently people think about some quite fundamental issues, such as the importance they give to structural measures compared to functional ones. Or, put differently: is a crappy ontology on a given domain better than a good ontology that doesn't cover your domain of interest? (The question sounds strange to you? To me as well, but there you go...)

Sadly I had to miss today's special social event, a football match between the students of the Summer School. Instead I had a very interesting chat with a colleague from the UPM, who came here to give a talk and who also wants to do her PhD in Ontology Evaluation, Mari Carmen Suárez-Figueroa. Interesting times lie ahead.

SSSW Day 1

Today's invited speaker was Frank van Harmelen, co-editor of the OWL standard and co-author of the Semantic Web Primer. His talk was on fundamental research challenges generated by the Semantic Web (or: two dozen PhD topics in a single talk). He got the idea after being asked in the cafeteria one day: "Hey Frank, whazzup in the Semantic Web?"

In the tradition of Immanuel Kant's four famous questions on philosophy, Frank posed the four big research challenges:

  • Where does the metadata come from?
  • Where do the ontologies come from?
  • What to do with the many different ontologies?
  • Where's the Web in the Semantic Web?

He derived many research questions that arise when you bring results from other fields (like databases, natural language processing, machine learning, information retrieval or knowledge engineering) to the Semantic Web and don't just change the buzzwords, but take the implications that come with the Semantic Web seriously.

Some more notes:

  • What is the semantic equivalent to a 404? How should a reasoner handle the lack of referential integrity?
  • Inference can be cheaper than lookup on the web.
  • Today OWL Lite would probably have become more like OWL DLP, but they didn't know better then.

The other talks were given by Asun Gómez-Pérez on Ontological Engineering and Sean Bechhofer on Knowledge Representation Languages for the SemWeb - pretty good stuff by the people who wrote the book. I just wonder whether it was too fast for the people who didn't know it already, and too repetitive for the others; but well, that's always the problem with this kind of thing.

The hands-on session later was interesting: we had to understand several OWL ontologies and explain certain inferences, and Natasha Noy helped us with the new Protégé 3.1. It was harder than I had thought quite a few times. And finally Aldo Gangemi gave us some exercises with knowledge representation design patterns, based on DOLCE. This was hard stuff...

Wow, this was a lot of name-dropping. The social programme around the summer school (we were hiking today) and the talks with peers are sometimes even more interesting than the actual summer school programme itself, but this probably won't be too interesting for most of you, and it's getting late as well, so I'll just call it a day.

Summer School for the Semantic Web, Day 0

Arrived in Cercedilla today, at the Semantic Web Summer School. I was really looking forward to these days, and now, flipping through the detailed programme, I am even more excited. This will be a very intense week, I guess, where we learn a lot and have loads of fun.

I was surprised by the sheer number of students here: 56 or 57 students have come to the summer school from all over the world - I met someone from Australia, someone from Pittsburgh, and many Europeans. Happily, I also met quite a number of people I already knew, so I know it will be a pleasurable week. But let's do the math for a second: we have more than 50 accepted students at this summer school. There are at least three other summer schools in related fields, like the one in Ljubljana the week before, the one in Edinburgh, and the ESSLLI. So that's about 200 students. Even if we assume that every single PhD student in the field goes to a summer school - which I don't think - that would mean we get 200 theses every year! (Probably this number will only be reached in three years or so.)

So, just looking at the sheer amount of people working on it - what's the expected impact?

Interesting times lie ahead.

Abraham Bernstein on users

"The regular user is not able to cope with strict inheritance."

Abraham Bernstein of the University of Zürich was at the AIFB today and gave a talk on SimPack - A Generic Java Library for Similarity Measures in Ontologies. Not being an expert in mapping, alignment and similarity, I still saw some of the interesting ideas in it, and I liked the large number of different approaches towards measuring similarity.

What struck me much more was the above statement, which is based on his experience with, you know, normal users, who are "not brainwashed with object-oriented paradigms". Another example he gave was his five-year-old kid being perfectly able to cope with default reasoning - the "penguins are birds, but penguins can't fly" thing - and thus not following strict inheritance.

This was quite enlightening, and it leads to many questions: if the user can't even deal with subsumption, how do we expect him to deal with disjunctions, complements or inverse functional properties?

Abraham's statement is based on experience with the Process Handbook, and not just drawn from thin air. There are a lot of use cases for the Semantic Web that do *not* require the participation of the normal end user, so there are still plenty of possibilities for great research. But I still believe that the normal end user has to unlock the Semantic Web in order to really make the whole idea lift off and fly. And in order to achieve that, we need to tear down the wall that Abraham describes here.

Any ideas how to do this?

Live from ICAIL

"Your work remindes me a lot of abduction, but I can't find you mention it in the paper..."

"Well, it's actually in the title."

ESWC2005 is over

The ESWC2005 is over, and there was a lot of interesting stuff. Check the proceedings! There were some really nice ideas, like RDFSculpt, good work like temporal RDF (Best Paper Award) and the OWL-Eu extensions, naturally the Karlsruhe stuff like ontology evolution, many, many persons to meet and get to know, many chats, and quite some ideas. Blogging from here is quite a mess - the upload rate is catastrophic - so I will keep this short, but I certainly hope to pick up on some of the talks and highlight the more interesting ideas (well, interesting to me, at least). Stay tuned! ;)

OWL 2.0

31 May 2005

I posted this to the public OWL dev mailing list as a response to a question posed by Jim Hendler quite some while ago. I publish it here for easier reference.

Quite some while ago the question of OWL 2.0 was raised here, and I had already written two long replies with a wishlist - but both were never sent and got lost in digital nirvana, one due to a hardware failure, the second due to a software failure. Well, let's hope this one finally passes through. That's why this answer is so late.

Sorry for the lengthy post. But I tried to structure it a bit and make it readable, so I hope you find some interesting stuff here. So, here is my wishlist.

  1. I would like yet another OWL language, call it OWL RDF or OWL Superlite, or whatever. This would be something like the common subset of OWL Lite and RDFS. For this, the difference between owl:Class and rdfs:Class needs to be resolved in some standard way. Why is this good? It makes moving from RDF to OWL easier, as it forces you to keep Individuals, Classes and Relations in different worlds, and forgets about some of the more sophisticated constructs of RDF(S) like lists, bags and such. This would be a real beginner's language, really easy to learn and implement.
  2. Defined semantics for OWL Full. It is unclear - at least to me - what some combinations of RDF(S) constructs and OWL DL constructs are meant to mean.
  3. Add easy reification to OWL. I know, I know, making statements about statements is meant to be the root of all evil, but I find it pretty useful. If you like, just add another group of elements to OWL - statements - that are mutually disjoint from classes, instances and relations in OWL DL, plus a sublanguage that enables us to speak about statements. Or else OWL will suck a lot in comparison to RDF(S), and RDF(S) + Rules will win, because you can't do a lot of the stuff you need to do, like saying what the source of a certain statement is, how reliable this source is, etc. Trust anyone? This is also needed to extend ontologies towards probabilistic, fuzzy or confidence-carrying models.
  4. I would love to be able to define syntactic sugar, like partitionOf (I think this is from Asun's book on Ontology Engineering). ((A, B, C) partitionOf D) means that every D is either an A, a B or a C, that every A, B or C is a D, and that A, B and C are mutually disjoint. You can say all this already, but it needs a lot of footwork (see the sketch after this list). It would be nice to be able to define such shortcuts that leverage the semantics of existing constructors.
  5. That said, another form of syntactic sugar - because again you can use existing OWL constructs to reach the same goal, but it is very strenuous to do so - would be to define the UNA locally. Like either saying "all individuals in this ontology are mutually different" or "all individuals within this namespace are mutually different". I think, due to XML constraints, the first one would be the weapon of choice.
  6. I would like to be able to have several ontologies in the same file. Then you could use ontologies to group a number of axioms, and you could also use the name of this group to refer to it. Oh well, using the name of an ontology as an individual - what does that mean? Does it imply any further semantics? I would like to see this clarified. Is this like named graphs?
  7. The DOM has quite nicely partitioned itself into levels and modules. Why not OWL itself? You could have, say, a level 2 ontology for mereological questions and suchlike, all with well-defined semantics, for the generic questions. I am not sure there are too many generic questions, but taxonomy is one (already covered), mereology would be another, and spatiotemporal and dynamic issues would be as well. Mind you, not everyone must use them, but many will need them. It would be fine to find standard answers to such generic questions.
  8. Procedural attachments would be a nice thing: a standardized possibility to add pieces of code and have them executed by an appropriate execution environment on certain events or on requests by the reasoner. Yes, I am totally aware of the implications for reasoning and decidability, but hey, you asked what people need, and did not ask for theoretical issues. Those you understand better.
  9. There are some ideas of others (which doesn't mean that the rest is necessarily originally mine) I would like to see integrated, like a well-defined epistemic operator, or streamlining the concrete domains to be more consistent with abstract domains, or defining domain and range _constraints_ on relations, and much more. Much of this stuff could be added optionally, in the sense of point 7.
  10. And let's not forget that we have to integrate with rules later, and to finally have an OWL DL query language. One goal is to make clear what OWL offers over simply adding rules on top of RDF and ignoring the ontology layer completely.
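To illustrate the footwork mentioned in point 4: a minimal sketch in Turtle, with hypothetical classes :A, :B, :C and :D, of what ((A, B, C) partitionOf D) expands to today:

  :D owl:equivalentClass [ a owl:Class ; owl:unionOf ( :A :B :C ) ] .
  :A owl:disjointWith :B .
  :A owl:disjointWith :C .
  :B owl:disjointWith :C .

One conceptual statement, four axioms - and the number of disjointness axioms grows quadratically with the number of partition classes. A definable shortcut would collapse all of this into a single line.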

So, you see, this is quite a list, and it sure is not complete. Even if only two or three points were finally picked up I would be very happy :)

Semantic Scripting

28 May 2005

Oh my, I really need to set aside some time for this blog. But let's not rant about time - none of us has time - let's dive directly into my paper for the Workshop on Scripting for the Semantic Web at the 2nd ESWC in Heraklion next week. Here is the abstract.

Python reached out to a wide and diverse audience in the last few years. During its evolution it combined a number of different paradigms under its hood: imperative, object-oriented, functional, list-oriented, even aspect-oriented programming paradigms are allowed, yet remain true to the Python way of programming, thus retaining simplicity, readability and fun. OWL is a knowledge representation language for the definition of ontologies, standardised by the W3C. It builds upon the power of Description Logics and allows both the definition of concepts and their interrelations as well as the description of instances. Being created as part of the well-known Semantic Web language stack, its dynamics and openness lend themselves naturally to the ever-evolving Python language. We will sketch the idea of an integration of OWL and Python, not by simply suggesting an OWL library, but by introducing and motivating the benefits a really deep integration offers, how it can change programming, and make it even more fun.

You can read the full paper on Deep Integration of Scripting Languages and Semantic Web Technologies. Have fun! If you can manage it, pass by the workshop and give me your comments, rants, and fresh ideas - as well as the spontaneous promise to help me design and implement this idea! I am very excited about the workshop and looking forward to it. See you there!

Unique Name Assumption - another example

Ian Davis has a very nice example illustrating the Unique Name Assumption: "Two sons and two fathers went to a pizza restaurant. They ordered three pizzas. When they came, everyone had a whole pizza. How can that be?"

Better than my examples. And much shorter!

New OWL tools

The KAON2 OWL Tools are getting more diverse and interesting. Besides the simple axiom and entity counter and dumper, the not-so-simple dlpconvert, and the syntactic transformer from RDF/XML to OWL/XML and back, there is now also a filter (want to extract only the subClassOf relations from your ontology? take filter), diff and merge (for some basic syntactic work with ontologies), satisfiable (which checks whether the ontology can have a satisfying model), deo (turning SHOIN ontologies into SHIN ontologies by weakening; should be sound, but naturally not complete) and ded (which removes some domain-related axioms, but it seems this one is still buggy).

I certainly hope this toolbox will still grow a bit. If you have any suggestions or ideas, feel free to mail me or comment here.

MinCardinality

More on the Unique Name Assumption (UNA), because Andrew has answered with further arguments: "The initial problem was cardinality and OWL Flight attempts to solve the problem with cardinality. Paul put it succinctly: 'So what is the point of statements with the owl:minCardinality predicate? They can't ever be false, so they don't tell you anything! It's kind of like a belt and braces when your belt is unbreakable.'"

Again I disagree, this time with Paul: the minimal cardinality axiom does make sense. For what, they ask - well, for saying that there is a minimal cardinality on this relation. Yes, you are right: this is an axiom that can hardly lead to an inconsistent ontology. But so what? You can nevertheless cut down the number of possible models with it, and thus get more information out of the ontology.
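For illustration, a minimal sketch in Turtle (all names made up): a minCardinality restriction that can hardly be violated, yet still tells a reasoner something.

  :Parent rdfs:subClassOf
      [ a owl:Restriction ;
        owl:onProperty :hasChild ;
        owl:minCardinality "1"^^xsd:nonNegativeInteger ] .
  :joe a :Parent .

No child of :joe is mentioned anywhere, and no inconsistency can arise - but every model now has to contain at least one child for :joe, so a reasoner knows more than it did before.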

"I would agree - this was my main problem - how do you explain to Joe (and Andrew) that all his CDs are the same rather than different."

That's turning the argument around. If the reasoner claimed that all of Joe's CDs are the same, it would be making a grave mistake. But so it would if it claimed that they are all different: the point is, it just doesn't know. Without someone stating sameness or difference explicitly, well, you can't know.

"I did comment that the resolution, local unique names using AllDifferent, didn't actually seem to solve the problem well enough (without consideration for scalability for example)."

I am not sure why that should be. It seems that Andrew would be happy if there were a file-wide switch declaring: "If I use different URIs here, I mean different objects. This file makes the UNA." Such files could easily be translated to standard OWL files, with less clutter inside (actually, all that needs to be done is to add an allDifferent axiom over all the names in the file).
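For illustration, a minimal sketch in Turtle (names made up) of the one axiom such a preprocessor would have to generate, listing every individual name occurring in the file:

  [] a owl:AllDifferent ;
     owl:distinctMembers ( :cd1 :cd2 :cd3 ) .

That single generated axiom is the whole translation of the proposed file-wide switch.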

"I have a feeling that context is a better solution to this problem (that might just be my golden hammer though)."

I don't understand this one, maybe Andrew will elaborate a bit on this.

If you imagine an environment with axioms floating around, from repository to repository, being crawled, collected, filtered, mapped and combined, you must not make the Unique Name Assumption. If you remain in your own personal knowledge base, you can embrace the UNA. And everything you need in between is one more axiom.

Is it that bad?

Unique Name Assumption

I just read Andrew Newman's entry on the Unique Name Assumption (UNA). He thinks that not having a UNA is "weird, completely backwards and very non-intuitive". Further he continues that "It does seem perverse that the basis for this, the URI, is unique." He cites an OWL Flight paper that caused me quite some headache a few weeks ago (because there was so little in it that I found to like).

Andrew, whose blog I really like to read, makes one very valid point: "It doesn't really say, though, why you need non-unique names."

There was an OWL requirement that gave a short rationale for not making the UNA, but it seems it has not yet been stated obviously enough.
Let's make a short jump into the near future: the Semantic Web is thriving, private homepages offer rich information about anything, and even the companies see the value of offering machine-processable information - thus, ontologies and knowledge bases everywhere!

People want to say how they liked the movie they just saw. They enrich their movie review with an RDF statement that says

http://semantic.nodix.net/movie#Ring_2 http://semantic.nodix.net/rating#rated http://semantic.nodix.net/rating#4_of_5.

Or rather, their editor creates this statement automatically and publishes it along with the review.

I'd be highly surprised if IMDb used the same URI to denote the movie. They would probably use an IMDb URI. And so could I, using the IMDb-specified URI for the movie. But I didn't, and I don't have to. If I want to state that this is the same movie, I can assert that explicitly. If I had the UNA, I couldn't do that. The two knowledge bases could not work together.
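The explicit assertion is a single triple - a sketch, with a made-up IMDb-style URI:

  <http://semantic.nodix.net/movie#Ring_2>
      owl:sameAs <http://imdb.example.org/title#Ring_2> .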

With the UNA, many knowledge bases relying on inverse functional properties would break as well. FOAF, for example, uses this, identifying persons via an inverse functional property over a hash of their eMail address. With the UNA, this wouldn't work anymore.
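A minimal sketch of the FOAF mechanism (the hash value is made up): foaf:mbox_sha1sum is declared an inverse functional property in the FOAF vocabulary, so two descriptions sharing a value are inferred to describe the same person.

  # from my FOAF file
  <#me> a foaf:Person ;
      foaf:mbox_sha1sum "af3b2..." .
  # from someone else's FOAF file
  <#denny> a foaf:Person ;
      foaf:mbox_sha1sum "af3b2..." .
  # a reasoner concludes: <#me> owl:sameAs <#denny> .
  # under the UNA, the different URIs would instead force an inconsistency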

Let's take another example. On my mother's webpage there could be a statement saying she has three kids: Jurica, Rozana and Zdenko. I would state on my page that I am my mom's kid. My sister, being the social kind, tells the world about her mom and her two brothers, Jurica and Denny.
Now, if we have the UNA, a reasoner would infer that one of us is lying. But all of us are very honest, trustworthy people. The problem is that my name is Zdenko, but most people refer to me as Denny. The UNA says that Denny and Zdenko are two different persons. Without the UNA, we don't have to assume that. And we could still state distinctness explicitly where it holds: my mom could have said that she has three kids, Jurica, Rozana and Zdenko, and that those are mutually distinct. Problem solved.

You could say: wait, if we had the UNA, we could still just claim that Zdenko owl:sameAs Denny, and the problem wouldn't arise. That is true. But then I would have to take my mom's statements into account before making my own. That may be OK on a scale like this, but imagine it in the wilds of the web - you would have to consider every statement made about something before you could state anything yourself. Impossible! And you would introduce non-monotonic inferences, and you probably don't really want that.

What does this mean? Let's take the following sequence of statements, and consider the answer to the question "Is Kain one of Adam's two sons?". So we know that Adam has two sons, and that there is an entity named Kain.

Adam fatherOf Abel.

UNA and non-UNA both answer: don't know.

Adam fatherOf Cain.

UNA says "No, Kain is no son of Adam". non-UNA says: "Sorry, I still don't know".

Cain sameAs Kain.

UNA says "Yes, Kain is a son of Adam (hope you didn't notice my little lie seconds before)". non-UNA says: "Yes, Kain is a son of Adam".

Assuming that, instead of the last statement, we claimed that

Adam fatherOf Kain.

UNA would say: "I'm messed up, I don't know anything, my database is inconsistent, sorry." , whereas non-UNA would answer: "Yes, Kain is a son of Adam (and by the way, maybe Kain and Abel are the same, or Kain and Cain, or Abel and Cain)."

The problem is that in the setting of the Semantic Web you have a World Wide Web with thousands of facts, always changing, and you must assume that you haven't fetched all the information about a subject. You really can't know whether you know everything there is to know about Adam. But you still want to be able to ask questions. And you want to get answers, and you want these answers to be monotonic. You don't want the Semantic Web to answer "No" one day, "Yes" the other, and sometimes "I don't know" - but you could live with it either providing the correct answer or none at all.

OWL Flight and the proponents of the UNA actually forget that it's a Semantic Web, not just a Semantic Knowledge Base. If you want the UNA, take your Prolog engine. The Semantic Web is more. And therefore it has to meet some requirements, and dropping the UNA is an astonishingly basic requirement of the Semantic Web. Don't forget, you can create locally unique names if needed. But the other way around would be much harder.

Still, Andrew's arguments lead to a very important question: taking for granted that Andrew is an intelligent guy with quite some experience with this kind of stuff, how probable is it that Joe Random User will have really big problems grasping concepts like non-UNA? How should the primers be written? How should the tools work in order to help users deal with this stuff - without requiring them to study these ideas in advance?

Still a long way to go.

AIFB OWL tools

Working with ontologies isn't yet as easy as it could be - especially because the number of little helpers is still far too small. After having written dlpconvert and owlrdf2owlxml (the tool with maybe the clumsiest name in the history of the Semantic Web) I noticed how easy it would be to write some more tools based on Boris' KAON2 OWL ontology infrastructure.

And so I went ahead. First I integrated dlpconvert and owlrdf2owlxml (or short, r2x) into it, then I added a simple ontology dumper and an axiom and entity counter. Want to know how many individuals are in your ontology? Simply type owl count myontology.owl -individual, and there you go. Want a list of all classes? Try owl print myontology.owl -owlclass. It's as easy as that.

I'm totally aware that this functionality maybe isn't worth the effort of building a tool. But this is just a beginning: I want to add more functionality to filter, merge, compare and much more. The point is to end up with a handy little set of OWL tools you can work with. That is what I have really been missing with OWL, and now here it is. At least, a beginning.

Grab your copy now of the AIFB OWL Tools.

Philosophische Grundlagen

I gave a talk on the Philosophical Foundations of Ontologies last week at the AIFB. I had prepared it in German (and thus all the slides were in German), and just before I started I was asked whether I could give the talk in English.

Having never attended a single lesson of philosophy in English, and having read English philosophy only on Wikipedia before, I said yes. Nevertheless, the talk was very well received, and so I decided to upload it. It's pure evil PowerPoint, no semantic slides format, and I haven't yet managed to translate it into English. If anyone can offer me some help with that - I simply don't know many of the technical terms, and I don't have ready access to the sources - I would be very happy! Just drop me a note, please.

Philosophische Grundlagen der Ontologie (PowerPoint, ca. 4,5 MB)

Broken link

What's DLP?

OWL has some sublanguages which are all more or less connected to each other, and they don't make the mumbo-jumbo of ontology languages any clearer. There is the almighty OWL Full, there's OWL DL, the easy* OWL Lite, and then there are numerous 'proprietary' extensions, which are more (OWL-E) or less (OWL Flight) compatible and useful.

We'd like to add another one, OWL DLP. Not because we think that there aren't enough already, but because we think this one makes a difference: it has some nice properties, like being fully translatable to logic programs, it is easy to use, and it is fully compatible with standard OWL, so you don't have to use any extra tools.

If you want to read more: some colleagues at the AIFB and I wrote a short introduction to DLP (and the best thing is: when I say short, I mean short - just two pages!). It's meant to be easy to understand as well - but if you have any comments on that, please provide them.

 * whatever easy means here

New versions: owlrdf2owlxml, dlpconvert

New versions of owlrdf2owlxml and dlpconvert are out.

owlrdf2owlxml got renamed; it was formerly known as rdf2owlxml. But as a colleague pointed out, that name can easily be misunderstood as meaning a transformation of arbitrary RDF to OWL. It doesn't do that: it only transforms OWL to OWL, from the RDF/XML serialisation to the XML Presentation Syntax. And it seems to work quite stably - it can even transform the famous wine ontology. Version 0.4 is out now.

dlpconvert lost a lot of its bugs. And as most of you were feeding RDF/XML to it anyway, well, now you can do so officially (listen to the users!), too. It reads both syntaxes and creates a Prolog program out of your ontology. Version 0.7 is out.

They are both based on KAON2, the Karlsruhe ontology infrastructure, written by Boris Motik. My little tools are just wrapped around KAON2, using its functionality. To be honest, I'm thinking of writing quite a number of little tools like this, each offering different functionality, thus providing you with a nice toolkit to handle ontologies efficiently. I don't lack ideas right now; I'm just not sure that there's interest in this.

Well, maybe I should just start and we'll see...

By the way, both tools are not only available as web services - you may also download them as command-line tools from their respective websites and play around with them on your PC. That's a bit more comfortable than using a browser as your operating system.

Unexpected problems

As you know, I'm a strong believer in the vision of the Semantic Web, and I actively pursue this goal. I am not too sure what it means yet, but I have hundreds of ideas floating through my head about what will be possible in this future...

But the road seems longer than expected. For some time now I have had the dlpconvert and rdf2owlxml web services running. It is very enlightening and interesting to see what kind of ontologies are used for testing - and I most certainly don't mean the domain of the ontologies, but rather the syntax.

Both services state very clearly which syntaxes you may use. dlpconvert accepts only the OWL XML Presentation Syntax - rather obscure, I admit; that's the main reason rdf2owlxml was offered. But most people didn't care: they just kept on using RDF - and not just OWL in its RDF/XML serialisation, but much simpler, plain RDF.

Yes, every RDF document is in OWL Full. But dlpconvert only deals with OWL DL. That's stated explicitly. And even less does it work with the Abstract Syntax or N3. All of this was tested.

I most definitely don't want to rant about users here. You never should rant about users (I mean, in public). Especially since everyone who uses a service like dlpconvert is probably quite intelligent and has some expertise in the field of the Semantic Web. It's not their fault. It isn't mine either - I wrote quite explicitly what is needed. Maybe it's the W3C's fault, or maybe it's just to be blamed on politics.

The fine differences between RDF, RDFS, RDF(S), OWL, OWL Full, OWL DL, OWL Lite, DLP - yes, I said fine differences between RDF and OWL DL - it's just too much to cope with. If it is too much for us, what do we expect of the future user of the Semantic Web? The web as we know it grew to today's size because it was easy, not because of standards. For the first few years no one really cared about the HTML standard - I mean, not to the extent we do today in the Semantic Web. Even with tons of errors, pages would load and show nice results. It was a very forgiving system. Now guess why it was so widely adopted.

The problem is: maybe we really need to be as strict as we are. But I hope we don't. I strongly believe in the virtue of "View source" - but this implies understandable views on the source, not the RDF/XML serialisation. And it should still be easy to copy. Only this way can the Semantic Web lift off from the roots, from the users. It was the users who created the Web in the first years, not the companies. I don't know why everybody is turning to the companies today.

Oh, I should stop, it sounds like ranting again.

Flop of the Year?

IEEE Spectrum editor Steven Cherry wrote the article Digital Dullard in, well, IEEE Spectrum. He obviously dislikes Paul Allen for his money and can't stop ranting about him, and about Mr Allen spending millions and millions of dollars on research projects ("that's just the change that drops down behind the sofa cushions"). Yeah, Mr Cherry, you're totally right - why should he spend more than 100 million dollars on research? He should rather invest it in a multi-million-dollar house or an airline, or produce a Hollywood blockbuster with James Cameron.

The thing is, Cherry claims the whole project of creating a Digital Aristotle, dubbed Project Halo, is naught but money thrown away, because understanding a page of chemistry costs about $10,000. For one single page! Come on, how many students would learn one page for $10,000?

Project Halo succeeded in creating a software program that is capable of taking a high-school advanced-placement exam in chemistry - and it actually passed, even beating the average student. Millions have been spent, says Cherry, for that? Wow...

Cherry fails to recognise two points here that illustrate the achievement of such a project:

First, sure, it may cost $10,000 to get a program to understand one page, and it may cost only $20 to get a human to do the same. So, training a program that is able to replace a human may cost millions and millions, whereas training a human to do so will probably cost a mere few tens of thousands of dollars. But have you ever considered the costs of replication? The program can be copied for an extremely low cost of a few hundred bucks, whereas every additional human costs the full initial price again.

Second, even though the initial costs of creating such prototype programs may be extremely high, that's no argument against them. Arguments like this would have hindered the development of the power loom, the space shuttle, the ENIAC and virtually every other huge achievement in engineering history.

It's a pity. I really think that Project Halo is very cool, and I think it's great that Mr Allen is spending some of his money on research instead of sports. Hey, it's his money anyway. I'd thank him immediately if I ever met him. The technologies exploited and developed there are presented in papers and are thus available to the public. They will probably help in the further development and rise of the Semantic Web, as the project can afford to spend some money and brains on designing usable interfaces for creating knowledge.

Why do people bash visions? I mean, what's Cherry's argument? I don't get it... maybe someone should pay me $20,000 to understand his two pages...

Introducing rdf2owlxml

Very thoughtful of me - I simply forgot to publish the previous entry of this blog. Well, there you finally see it... but let's move on to the news.
Another KAON2-based tool - rdf2owlxml - just got finished: a converter that turns the RDF/XML serialisation of an OWL ontology into an OWL/XML Presentation Syntax document. And it even works with the wine ontology.

So, whenever you need an ontology in the easy-to-read OWL/XML Presentation Syntax - for example, in order to transform it further with XSLT into an HTML page representing your ontology, or anything like that, because this kind of thing is hard to do with RDF/XML - go to rdf2owlxml and just grab the results! (The results work fine with dlpconvert as well, by the way.)

Hope you like it, but be reminded - it is a very early service right now, only at version 0.2.

Released dlpconvert

There's so much to do right now, and even much, much more I'd like to do - and thus I'm getting a bit late with my announcements. Finally, here we go: dlpconvert is released in version 0.5.

You probably wonder: what is dlpconvert? dlpconvert converts ontologies that lie within the DLP fragment from one syntax - namely the OWL XML Presentation Syntax - into another, here Datalog. So you can just take your ontology, convert it, and then use the result as a program for your Prolog engine. Isn't that cool?
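To give a flavour of what such a conversion does - a hand-made sketch with invented names, not necessarily dlpconvert's exact output:

  :Human rdfs:subClassOf :Mortal .
  # becomes the Datalog rule:   mortal(X) :- human(X).
  :hasParent rdfs:domain :Human .
  # becomes the Datalog rule:   human(X) :- hasParent(X, Y).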

Well, it would be much cooler if it were more thoroughly tested, and if it could read some more common syntaxes like the RDF/XML serialisation of OWL ontologies - but both are on the way. As for testing, I hope that you will test it a bit; as for the serialisation, it should be available pretty soon.

dlpconvert is based totally on KAON2 for the reduction of the ontology.

I will write more as soon as I have more time.

Comments to naming

Richard Newman sent me some thoughtful comments via eMail on the What's in a name series (there were also some great comments on the individual entries; feel free to browse them). He sent them via eMail because he thought he couldn't comment - that shouldn't be the case; everyone should be able to comment anonymously. Or did anyone else encounter problems? I should switch to some dedicated blogging software soon anyway, but right now I don't have the time to dig deeper into it. I especially miss trackback, sigh.

Here's what Richard wrote:

"Your first point, about ISBNs and "what's being referenced" --- I think you'd be interested in FRBR, which is a modelling of the bibliographical domain. It splits things up into

Work -> Expression -> Manifestation -> Item

A work is an abstract concept, like "Politeia". An expression is a realisation of a work, so a particular translation is an expression. A manifestation is a physical embodiment of an expression: this is what's given an ISBN. All copies of a certain book are Items; the edition of the book is their Manifestation.

So, you see, when you're discussing Plato's Politeia, you have to be conceptually clear about whether you're talking about works, expressions, manifestations, or items.

E.g.

:PolWork dc:creator "Plato" ;
  rdfs:label "Plato's Politeia, the abstract concept." .
:PolExp1 ex:translator "Mr Smith" ;
  frbr:work :PolWork ;
  rdfs:label "Mr. Smith's translation of Plato's Politeia." .
:PolMan1 ex:publisher "Penguin" ;
  frbr:expression :PolExp1 ;
  rdfs:label "Penguin's edition of Smith's translation." .
:MyCopy ex:owner hg:RichardNewman ;
  frbr:manifestation :PolMan1 ;
  rdfs:label "Richard's copy of the Penguin edition." .

Do you see? Each level has its own properties (and some may be duplicated; e.g. each has a title: the title of the abstract work, the name given to the translation, the name Penguin prints on each book, and the name printed on my copy).

I've done a bit of work on modelling FRBR in RDFS/OWL, but haven't yet finished. "

I think that's really interesting, and taking a look at FRBR, it is pretty well done. I sure am looking forward to seeing Richard's interpretation in OWL, and will probably use it.

"Your second issue is the difference between a resource and its representation. A URI should only refer to one thing; it is entirely wrong to use http://www.holygoat.co.uk to refer both to my homepage (as in using RDF to describe its language, or size, or last-modified) and to me (my name, my email address, etc.) which I have seen done.

Your web server should return RDF for http://semantic.nodix.net/#Plato if your browser says that it accepts RDF+XML. A normal browser should have an HTML representation returned. Indeed, it's possible to do the following:

  • the abstract resource. Hit this with a browser, get an HTML page; with an RDF agent, get some RDF.
http://example.com/Plato a rdfs:Resource .
  • the HTML representation.
http://example.com/Plato/html a ex:representation ;
  ex:representationOf http://example.com/Plato .
  • the RDF representation.
http://example.com/Plato/rdf a ex:representation ;
  ex:representationOf http://example.com/Plato .

i.e. you can unambiguously refer to each representation, and the resource. When your client arrives, asking for Plato, you can redirect them to the appropriate place. Clever, huh?

URIs should never give a 404. They should return the appropriate headers or content for whatever the client is requesting; this may be the RDF file in which the resource is defined, if the client understands RDF, or an HTML page.

If you're interested in this sort of thing, it pops up on the W3C's RDF Interest Group list occasionally.

Patrick Stickler and others have come up with an additional HTTP verb, MGET, which will return the RDF description of a resource. Combined with their URIQA architecture, it will give you a Concise Bounded Description for a URI. This stops you having to somehow put descriptions into particular files, and better deals with the distributed nature of the Semantic Web. Check it out; it presents several convincing arguments for not using fragment identifiers to refer to resources, and solves your bandwidth problem. You should never have to dump a whole file to get a description of a URI."

I have to note that Richard wrote me this just after part 4 of the series was published, so I was able to answer some of the questions in the last two parts. To summarise: I don't like content negotiation. Although it is technically entirely feasible, I disagree that it should be done or that it is a good solution. If my browser asks for http://semantic.nodix.net/#Plato, I don't think I should get different things back depending on content negotiation. This feels like cheating.

I wrote that to Richard already, and he answered:

"I think we agree on the main point, which is that

foaf:name "Richard" ; ex:format "HTML" .

which is a travesty :) "

He is totally right here.

"You still see it happen, though, with people referring to Wikipedia pages as if they were the abstract resource.

The content negotiation (getting different things depending on what you accept) is exactly what the Web is supposed to do. If I'm using a mobile browser, I want a simplified version of a page; if I'm an RDF agent, I want RDF, if it exists, because HTML is of no use to me. A common usage of this is to serve up strict XHTML to Mozilla, and less-strict HTML to Internet Explorer. It is also done all the time to serve PNG where the client accepts it, and GIF if it doesn't, and there is an intentional disconnect on the Web between a resource and its representations.

The lack of such a disconnect would lead to exactly the problem you describe; if I can't return a representation of a resource, because it's abstract, then how do I find out anything about it? I could use MGET, but you can't MGET a person... so, if you want to talk about the real world thing "Plato", he has to 404, or you get the "what am I talking about?" problem. Better, in my view, to redirect a browser to plato.html and a SW agent to a chunk of RDF. "

I would rather ask for http://semantic.nodix.net/Plato.rdf to get the RDF/XML representation, http://semantic.nodix.net/Plato.owl to get the OWL/XML representation, http://semantic.nodix.net/Plato.html to get an HTML page for the user to read, and Plato.jpg for a picture of Plato. This shouldn't be hidden behind content negotiation. I know, I know, Patrick would strongly disagree here, but to me it feels wrong and actually defies the idea of a URI.

"You can do exactly that (and I agree that the representations should have separate URIs --- conneg is only for when you're trying to get some description of an abstract resource), but then how do you refer to the abstract concept of "Plato"? http://.../Plato is a resource, and I want to make statements about him. But there's no point in it being 404 when dereferenced, because then how would I find out that Plato.html exists? HTTP doesn't return URIs, it returns representations of them.

A URI is simply something that is dereferenced to get a representation, and that representation should be decided on by conneg. In this case, /Plato is an abstract resource, so one of the representations should be returned. We can then make statements about Plato (e.g. foaf:name "Plato"), and about the JPEG and HTML representations, because they have different URIs, but still get something useful back when we want to access /Plato."

I also dislike MGET right now. Maybe I am wrong, but to me the whole URIQA architecture feels somewhat wrong - though maybe I should just delve deeper into it; I have to admit I haven't yet studied it enough to really be in a position to bash it. The problem is that MGET seems unnecessary to me - and it works on a different conceptual level than the rest of the Semantic Web proposals. I think everything MGET solves can be solved with tools that already exist: Richard's example above, where he gives triples telling us which representations are used to describe a resource, shows perfectly well that you actually don't need content negotiation and MGET.

"There are things to question about URIQA, but it does have some good going for it. MGET is actually an implicit query. In the standard Web model, you request URIs and get back document representations. Doing an MGET on a Web server is asking it to return a description, regardless of where on the site descriptions of that resource exist, and you're explicitly asking for meta-data. As Patrick points out, it's similar doing a GET and specifying that you accept RDF, but is likely to be more concise (the difference between a "representation" and a "description"). In fact, this is exactly what the Nokia URIQA server does.

MGET overlaps with query servers a bit, and with GET a bit, but it's a little bit special, too. The whole idea is that from a single URI you can get a useful description of a resource, just by issuing a single MGET. Every other approach needs more work."

This URIQA / MGET stuff sounds more and more interesting. I really should delve deeper into it.

Also, the idea of Concise Bounded Descriptions may be very neat; I have to study that more as well. Funny thing: the very same day Richard pointed me to it, a colleague told me about it too - this is usually a sign that an idea is worth considering more.

Richard also wrote "URIs should never give a 404", and as you know, I mildly disagreed. He tried to summarise his position:

"I consider that each returned resource should have its own URI --- e.g. Plato.jpg --- and that the original URI should be used to make statements about the abstract resource. This allows you to say

...Plato foaf:name "Plato" .
...Plato.jpg ex:resolution "150dpi" .
...Plato.html dc:creator "Denny" .

Dereferencing the abstract resource, rather than throwing a 404, should do something useful --- e.g. redirecting with a 303 to one of the representations. Have you ever tried viewing a Blogger Atom feed in your browser? If you hit it with an RSS reader, you get the XML, but in a browser Blogger shows you an XHTML transformation of the XML. That's useful, and I think that's how the Semantic Web should work. Imagine if your agent hit /Plato, and got RDF out of it, but when you looked at it with your browser you saw a dynamically-generated HTML page? Handy!

I can understand your objection, though; it does seem wrong that you get different things out of the same URI. However, you should almost always get HTML out of plato.html, and RDF out of plato.rdf. All the conneg is doing is making sure you can see an abstract thing in the best way possible, according to what you've told the server you can understand. "

Richard is pretty good at convincing me, because he uses the right arguments: it's for the people, dummy, and the machines can work it out anyway.

I still stick to the recommendations I gave yesterday. But just as I am writing and rereading all of this, I am starting to change my mind about content negotiation. Maybe it is a good thing. I will have to think about it some more, and as soon as I come to a conclusion, I will bother you with it again. My gut feeling still says 'no', but the reasons given sound very convincing and I agree with most of them - so heck, let's meditate on this as soon as I find a few hours to spare.

Big thanks to Richard for his thoughts, anyway. I hope this discussion helps you make up your own mind as well.

What's in a name - Part 6

In this series we learned how to mint URIs for entities. I know there's a big discussion flaring up every few weeks about whether we should use fragment identifiers or not. For me, this question is pretty much settled. Using a fragment identifier has the advantage of giving you the ability to provide a human-readable page for those few lost souls who look up the URI, so maybe it's a tad nicer than using no fragment identifier and returning 404s. Not using fragids has the advantage of probably reducing bandwidth - but this discussion should be more or less academic, because looking up URIs, as we have seen, should not happen.

There is some talk about different representations, negotiating media types, returning RDF in one case and XHTML in the other, but to be honest, I think that's far too complicated. And you would need another web server and extensions to HTTP to make this real, which doesn't really help the advent of the Semantic Web. Look at Nokia's URIQA project for more information.

Keep these rules in mind, and everything should be fine:

  • be careful to use unused URIs when you reference a new entity. Take one from a URI space you have control over, so that URI collisions won't occur
  • don't put a website under the URI you used to name an entity. That would lead to URI collision
  • try to make nice-looking URIs, but don't try too hard. They are supposed to be hidden by the application anyway
  • provide rdfs:label and rdfs:seeAlso instead (see the sketch after this list). This solves everything you would want to solve with URI naming, but in a standards-compliant way
  • give your resources URIs. Please. So that others can reference them more easily.
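Here is the promised sketch for the fourth rule, with made-up URIs: an opaque URI plus labels and a pointer for anyone - human or machine - who wants more.

  <http://semantic.nodix.net/concept/1383b_xc>
      rdfs:label "Politeia" ;
      rdfs:label "The Republic"@en ;
      rdfs:seeAlso <http://semantic.nodix.net/doc/politeia.rdf> .

The application displays the label, an agent follows the rdfs:seeAlso link for more data, and nobody ever has to read the URI itself.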

I should emphasise the last one more. Using the RDF/XML syntax in particular easily leads to anonymous nodes, which are a pain in the ass because they are hard or impossible to address. In particular, don't use rdf:nodeID. It doesn't give your node an ID that's visible to the outside world; it is just a local name. Don't use it, please.

The second way of creating anonymous nodes is nesting descriptions like this:

<foaf:Person rdf:about="#me">
  <foaf:knows>
    <foaf:Person>
      <foaf:name>J. Random User</foaf:name>
    </foaf:Person>
  </foaf:knows>
</foaf:Person>

Actually, the person known to "me" is an anonymous one. You can't refer to her. Again, try to avoid that. If you can, look up the URI the person gave herself in her own FOAF file. Or give her a name in your own URI space. Don't be afraid, you won't run out of it.
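In Turtle, the named variant is just one triple - a sketch, with an invented URI for the friend:

  <#me> foaf:knows <http://example.org/people#JRandomUser> .

Now anyone can say more about this person, and all statements will merge on the shared URI.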

Another very interesting approach is to use published subjects. I will return to this in another post, I promise, but for now: never forget, there is owl:sameAs to make two URIs point to the same thing, so don't worry too much if you double-name something.

Well, that's it. I hope you enjoyed the series, and that you learned a bit from it. Looking forward to your comments, and your questions.

What's in a name - Part 5

After calling Plato an XML element, making movies out of websites and having several accidents with careless URIs, it seems we return to the very beginning of this series.

http://semantic.nodix.net/document/Politeia dc:creator "Plato".

Whereby http://semantic.nodix.net/document/Politeia explicitly does not resolve but returns a 404, resource not found. Let's remember: why didn't we like that? Because humans, upon seeing this, have the urge to click on it in order to get more information about it. A pretty good argument, but every solution we tried brought us more or less trouble. We weren't happy with any of them.

But how can I dismiss such an argument? Don't I risk losing focus by saying "don't care about humans going nowhere"? No, I really don't think so. For two reasons, one concerning humans and one concerning machines.

First the humans (humans should always go first; remember this, Ms and Mr PhD student): humans actually never see this URI (or at least should not, except when debugging). URIs that will grace the GUI should have an rdfs:label, which provides the name human users will see when working with this resource. Let's be honest: only geeks like us think that http://semantic.nodix.net/document/Politeia is a pretty obvious and easy name for a resource. Normal humans would probably prefer "Politeia", or even "The Republic" (which is the usual name in English-speaking countries). Or they would want to define their own name.

As they don't see the URI, they never actually feel the urge to click on it, or to copy and paste it into the next browser window. Naming it http://semantic.nodix.net/document/Politeia instead of http://semantic.nodix.net/concept/1383b_xc is just for the sake of the readability of the source RDF files; you should not actually derive any information from the URI (that's what the standard says). The computer won't either.

The second point is that an RDF application shouldn't look up URIs either. It's just wrong. URIs are just names; it is important that they remain unique, but they are not there for looking up in a browser. That's what URLs are for. It's a shame they look the same. Mozilla realised the distinction when they gave their XUL language the namespace http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul. Application developers should realise this too. rdfs:seeAlso and rdfs:isDefinedBy give explicit links that applications may follow to get more information about a resource, and using owl:imports actually forces this behaviour - but the name alone does not.

Getting information out of names is like making fun of names. It's mean. Remember the in-kids in primary school making fun of the out-kids because of their names? You know you're better than that (and, being a geek, you probably were an out-kid, so mere compassion and fond memories should hold you back too).

Just to repeat it explicitly: if a URI gives back a 404 when you put it in a browser's navigation bar - that's OK. It was supposed to identify a resource, not to locate one.

Now you know the difference between URIs and URLs, and you know why avoiding URI collision is important and how to avoid it. We'll wrap it all up in the final instalment of the series (tomorrow, I sincerely hope) and give some practical hints, too.

By the way, right after the series I will talk about content negotiation, which was mentioned in the comments and in eMails.

Uh, and just another thing: the wary reader (and every reader should be wary) may also have noticed that

Philosophy:Politeia dc:creator "Plato".

is total nonsense: it says that there is a resource (identified by the QName Philosophy:Politeia) that was created by "Plato". Rest assured that this is wrong - no, not because Socrates should be credited as the creator of the Politeia (that is another discussion entirely), but because the statement claims that the string "Plato" created it - not a person known by this name (who would be a resource and should have a URI). This is probably the most frequent mistake in the world of the Semantic Web - but a mistake nevertheless.

It's OK if you make it. Most applications will cope with it (and some are actually not able to cope with the correct way). But it would not be OK if you didn't know that you were making a mistake.
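For reference, the corrected version needs just one more resource - a sketch, with an invented URI for Plato:

  Philosophy:Politeia dc:creator Philosophy:Plato .
  Philosophy:Plato rdfs:label "Plato" .

The label carries the string for human eyes; the creator is now a proper resource that can be referenced, described and linked.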

What's in a name - Part 4

I promised you four solutions to the problem of dubbing entities with appropriate URIs. So, without further ado, let's go.

The first one you've seen already: using anonymous nodes.

_person foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

But here we have the problem that we can't reference _security from the outside, thus losing a lot of the possibilities inherent in the Semantic Web, because this way you cannot say that someone else is interested in the same topic as _person above. Even if you say, in another RDF file,

_person2 foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

_security actually does not have to be the same as above. Who says websites only have one subject? The coincidental equality of the blank node name _security carries as much semantics as two variables both called x in a C and a Python program.
So this solution, although possible, has too many shortcomings. Let's move on.

The second solution is hardly available to the majority of us puny mortals: introducing a new URI scheme. Let's return to our very first example, where we wanted to say that the Politeia was written by Plato.

urn:isbn:0192833707 dc:creator "Plato".

Great! No problems here. Sure, your web browser can't (yet) resolve urn:isbn:0192833707, but there is no ambiguity: we know exactly what we are speaking of.

Do we? Incidentally, urn:isbn:0465069347 also denotes the Politeia. No, not in another language (those would be another handful of ISBN numbers), just a different version (the text is public domain). Now, does the following statement hold?

urn:isbn:0192833707 owl:sameAs urn:isbn:0465069347.

Most definitely not. They have different translators. They have different publishers. These are different books. But it's the same - what? What is the same? It's not the same text. It's not the same book. They may share the source text they were translated from. But how do we express this correctly and still usefully?
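One conceivable sketch - and it is just a sketch, both the property choice and the URI for the underlying work being debatable inventions - would be to point both books to a common URI for the work via dc:source:

urn:isbn:0192833707 dc:source http://semantic.nodix.net/document/Politeia.
urn:isbn:0465069347 dc:source http://semantic.nodix.net/document/Politeia.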

The urn:isbn: scheme is very useful for a very special kind of entity - published books, even the different versions of published books.
The problem with this solution is that you would need tons of schemes. Imagine the number of committees! This would - no, this should - never happen. We definitely need an easier solution, although this one certainly does work for very special domains.

Let's move on to the third solution: the magic word is fragment identifier - the #. Instead of saying:

http://semantic.nodix.net/Politeia dc:creator http://semantic.nodix.net/Plato.

and thus getting 404s en masse, I just say:

http://semantic.nodix.net/#Politeia dc:creator http://semantic.nodix.net/#Plato.

See? No 404. You get to the homepage of this blog by clicking there. And it's valid RDF as well. So, isn't it just perfect? Everything we wished for?

Not totally, I fear. If I click on http://semantic.nodix.net/#Plato, I actually expect to read something about Plato, not to see a blog about the Semantic Web. So this would somehow disappoint me. Better than a 404, still...
The other point is my bandwidth. There can be RDF files with thousands of references. Following every single one would lead to considerable bandwidth abuse - for naught, as there is no further information about the subject on the other side. Maybe using http://semantic.nodix.net/person#Plato would solve both problems, with http://semantic.nodix.net/person being a website saying something like "This page is used to reserve conceptual space for persons. To understand this, you must understand the magic of URIs and the Semantic Web. Now, go back wherever you came from and have a nice day." Not too much web space and bandwidth will be used for this tiny HTML page.
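In statements, such a URI is then used like any other (a sketch with our running example, both URIs being invented along the lines just described):

http://semantic.nodix.net/document#Politeia dc:creator http://semantic.nodix.net/person#Plato.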

You should be careful, though, not to have an actual fragment with the identifier "Plato" in the page, or the URI would really dereference to that element. URI collision again. You don't want Plato to become half philosopher, half XML element, do you?

We will return to fragment identifiers in the last part of this six-part series. And now let's take a quick look at the fourth solution - we will discuss it more thoroughly next time.

Use a fresh URI whenever you need a URI, and don't care about it giving a 404.

What's in a name - Part 3

Last time we merrily published our first statement for the Semantic Web:

http://www.imdb.com/title/tt0088247/ http://purl.org/dc/elements/1.1/creator "James Cameron".

A fellow Semantic Web author didn't like the number-encoded IMDb URI, found a much more compelling one, and then published the following statement:

http://en.wikipedia.org/wiki/The_Terminator http://purl.org/dc/elements/1.1/date "1984-10-26".

A third one sees those two and, in order to foster the integration of data, helpfully offers the following statement:

http://www.imdb.com/title/tt0088247/ owl:sameAs http://en.wikipedia.org/wiki/The_Terminator.

And now they live merrily ever after. Or do you hear the thunder of doom rolling?

The problem is that the URIs above already denote something, namely the IMDb website about the Terminator and the Wikipedia article on the Terminator. They did not denote the movie itself, but that's how they're used in our examples. Statement #3 above actually says that the two websites are the same. The first one says that "James Cameron" created the IMDb website on the Terminator (they wish), and the second one says that the Wikipedia article was created in 1984, which is wrong (July 23, 2001 would be the correct date). We have a classic case of URI collision.

This happens all the time. People working professionally on this do this too:

_person foaf:interest http://dmoz.org/Computers/Security/.

I'd bet that _person (remaining anonymous here) does not have such a heavy interest in the website http://dmoz.org/Computers/Security/ itself, but rather in the topic the website is about.

_person foaf:interest _security.
http://dmoz.org/Computers/Security/ dc:subject _security.

Instead of letting _security be anonymous, we'd rather give it a real URI. This way we can reference it later.

_person foaf:interest http://semantic.nodix.net/topic/security.
http://dmoz.org/Computers/Security/ dc:subject http://semantic.nodix.net/topic/security.

But, oh pain - now we're exactly at the same spot we were at in the last part: we have a URI that does not dereference to a website. (By the way, I do know that the definition of foaf:interest actually says its semantics is that the subject is interested in the topic of the object, not in the object itself - but that's not my point here.)
Thinking about it for a moment, we must conclude that it is actually impossible to achieve both goals: either the URIs identify resources retrievable over the web and are thus unsuitable as URIs for entities outside the web (like persons, chairs and such) because of URI collision, or they don't - and will then lead to 404-land.

Isn't there any solution? (Drums) Stay tuned for the next exciting installment of this series, introducing not one, not two, not three, but four solutions to this problem!

What's in a name - Part 2

How do you give a resource a name, a URI? Let's look at this statement:

movie:Terminator dc:creator "James Cameron".

Happy with that? This is a valid RDF statement, and you understand what I wanted to say, and your RDF machine will be able to read and process it, too, so everything is fine.

Well, almost. movie:Terminator is a QName, and movie: is just a shorthand prefix for a namespace that actually has to be defined as something. But as what? URIs are well-defined, so we shouldn't just pick the namespace arbitrarily. The problem is, someone else could do the same, and suddenly one URI could denote two different resources - this is called URI collision, and it is the next worst thing to immanentizing the Eschaton. That's why you should grab some URI space for yourself; there you may define as many URIs as you like (remember, URIs are meant to be universally unique - that's why they make such a fuss about URI space and the ownership of it).

I am the webmaster of http://semantic.nodix.net, and that URI belongs to me - and with it, all the URIs starting with it. Thus I decide that movie: shall be http://semantic.nodix.net/movie/. Our example statement thus is the same as:

http://semantic.nodix.net/movie/Terminator http://purl.org/dc/elements/1.1/creator "James Cameron".
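By the way, in N3/Turtle syntax the prefix definition itself would look roughly like this (in RDF/XML it would be an xmlns declaration instead):

@prefix movie: <http://semantic.nodix.net/movie/>.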

So the full statement is actually what the computer sees. The shorthand notation is just for humans. But if you're like me and you see the above subject, you're already annoyed that it is not a link, that you can't click on it. So you copy it into your browser address bar and go to http://semantic.nodix.net/movie/Terminator. Oops. A 404 - the website is not found. You start thinking: oh man, how stupid! Why give the resource a name that looks so much like a web address and then point it to 404-Nirvana?

Many think so. That's because they don't grasp the difference between URIs and URLs, and to be honest, this difference is maybe the worst idea the W3C ever had (that's a hard-to-achieve compliment, considering the introduction of the RDF/XML serialisation and XSD). We will return to this difference, but for now, let's see what usually happens.

Because http://semantic.nodix.net/movie/Terminator leads nowhere, and I'm far too lazy to make a website for the Terminator just for this example, we will take another URI for the movie. Jumping to IMDb we quickly find the appropriate one, and then we can reformulate our statement:

http://www.imdb.com/title/tt0088247/ http://purl.org/dc/elements/1.1/creator "James Cameron".

Great! Our subject is a valid URI, clicking on http://www.imdb.com/title/tt0088247/ (or pasting it into a browser) will tell you more about the subject, and we have a valid RDF statement. Everything is fine again...

...until next time, where we will discuss the minor problems of our solution.

What's in a name - Part 1

There are tons of mistakes you can make when writing down RDF statements. I will post a six-part series of blog entries, starting with this one, about what can go wrong in the course of naming resources, why it is wrong, and why you should care - if at all. I'll try to mix experience with pragmatics, usability with philosophy. And I surely hope that, if you disagree, you'll do so in the comments or in your own blog.

The first one is the easiest to spot. Here we go:

"Politeia" dc:creator "Plato".

If you don't know about the differences between Literals, QNames and URIs, please take a look at the RDF Primer. It's easy to read and absolutely essential. If you do know about the differences, you already know that the above actually isn't a valid RDF statement: you can't have a literal as the subject of a statement. So, let's change this:

philo:Politeia dc:creator "Plato".

What's the difference between the two? With the first one you say that "Plato" is the creator of "Politeia" (we take the semantics of dc:creator for granted for now). But with the second you say that "Plato" is the creator of philo:Politeia. That's like in Dragonheart, where Bowen tries to find a name for the dragon because he can't just call him "dragon", and decides on "Draco". The dragon comments: "So, instead of calling me dragon in your own language, you decide to call me dragon in another language."

Yep, we decide to talk about Politeia in another language. Because RDF is another language. It tries to look like ours, it even has subjects, objects, predicates, but it is not the language of humans. It is (mostly) much easier, so easy in fact even computers can cope with it (and that's about the whole point of the Semantic Web in the first place, so you shouldn't be too surprised here).

"Politeia" has a well defined meaning: it is a literal (the quotation marks tell you that) and thus it is interpreted as a value. "Politeia" actually is just a word, a symbol, a sign pointing to the meant string Politeia (a better example would be: "42" means the number 42. "101010b", "Fourty-Two" or "2Ah" would have been perfectly valid other signs denoting the number 42).

And what about philo:Politeia? How is it different from "Politeia", what does this point to?

philo:Politeia is a Qualified Name (QName), and thus ultimately a shorthand notation for a URI, a Uniform Resource Identifier. In RDF, everything has to be a resource (well, remember, RDF stands for Resource Description Framework), but that's not really a constraint, as you may simply consider everything a resource. Even you and me. And URIs are names for resources. Universally (well, at least globally) unique names. Like philo:Politeia.
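So if philo: were defined as, say, http://semantic.nodix.net/philosophy/ (a namespace made up just for illustration), philo:Politeia would simply be shorthand for http://semantic.nodix.net/philosophy/Politeia.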

You may wonder what your URI is, the one URI denoting you. Or what the URI of Plato is, or of the Politeia. How do you choose good URIs, and what can go wrong? And what do URIs actually denote, and how? We'll discuss all this in the next five parts of this series - don't worry, just stay tuned.

Why we will win

People keep saying that the Semantic Web is just hype. That we are just an unholy chimaera of undead AI researchers talking about problems the database guys solved 15 years ago. And that our work will never make any impact in the so-called real world out there.

As I stated before: I'm a believer. I'm even a Catholic, so this means I'm pretty good at ignoring hard facts about reality in order to stick to my beliefs, but it is different in this case: I am slowly starting to comprehend why Semantic Web technology will prevail and make life better for everyone out there. It's simply the next step in the IT RevoEvolution.

Let's remember the history of computing. Shortly after the invention of the abacus the obvious next step, the computer mainframe, appeared. Whoever wanted to work with it had to learn to use this one mainframe model (well, the very first ones were one-of-a-kind machines). Being able to use one didn't necessarily help you use another.

At first the costs of software development were negligible. But slowly this changed, and Fred Brooks wrote down his experience of creating the legendary System/360 in The Mythical Man-Month (a must-read for software engineers), showing how much had changed.

Change was about to come, and it came twofold. Dennis Ritchie is to blame for both: together with Ken Thompson he made Unix, and in order to do that, he had to make a programming language to write Unix in - this was C, later popularised by the book he wrote with Brian Kernighan (this account is overly simplified; look at the history of Unix for a better overview).

Things became much easier now. You could port programs more simply than before - just recompile (and introduce a few hundred #ifdefs). Still, the masses used the Commodore 64, the Amiga, the Atari ST. Buying a compatible model was more important than looking at the stats. It was the achievement of PC hardware development and of Microsoft to unify the operating systems for home computers.

Then came the dawning of the age of the World Wide Web. Suddenly the operating system became uninteresting; the browser you used was more important. Browser wars raged. In parallel, Java emerged. Compile once, run everywhere - how cool was that? And after the browser wars ended, the W3C's calls for standards were finally heard.

That's the world as it is now. Working at the AIFB, I see how no one cares what operating system anyone else has, be it Linux, Mac or Windows, as long as you have a running Java Virtual Machine, a Python interpreter, a browser, a C++ compiler. Portability really isn't the problem anymore (like everything in this text, this is oversimplified).

But do you think being OS-independent is enough? Are you content with having your programs run everywhere? If so, fine. But you shouldn't be. You should ask for more. You also want to be independent of applications! Take back your data. Data wants to be free, not locked inside an application. After you have written your text in Word, you want to be able to work with it in your LaTeX typesetter. After getting contact information via a Bluetooth connection to your mobile phone, you want to be able to send an e-Mail to the contact from your web mail account.

There are two ways to achieve this: the first is standard data formats. If everyone uses vCard files for contact information, the data should flow freely, shouldn't it? OpenOffice can read Word files, so there we see interoperability of data, don't we?

Yes, we do. And when it works, fine. But more often than not it doesn't. You need to export and import data explicitly. Tedious, boring, error-prone, unnerving. Standards don't happen that easily. Often enough, interoperability is achieved by reverse engineering. That's not the way to go.

Now imagine instead a common data model with well-defined semantics, one that solves tons of interoperability questions (character sets, syntax, file transfer) and lets you declare semantic mappings between vocabularies with ontologies. Applications being aware of each other, speaking a common language - but without standards bodies discussing it for years, defining it statically, unmoving.
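To get a feel for it, such a mapping could be as small as this (the namespaces and property names are invented for the example):

addressbook:phoneNumber owl:equivalentProperty phonetool:telephone.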

There is a common theme in IT history: a move towards more freedom. I don't mean free as in free speech, I mean free as in free will.

That's why we will win.

I am weak

Basically I was working today, instead of doing some stuff I should have finished a week ago for some private activities.

The challenge I posed myself: how semantic can I already get? What tools can I already use? Firefox has some pretty neat extensions, like FOAFer, or the del.icio.us plugin. I'll see if I can work with them, whether there's a real payoff. The coolest, somehow semantic plugin I installed is SearchStatus. It shows me the PageRank and the Alexa rating of the visited site. I think that's really great. It gives me a first glimpse of what metadata can do to help you be an informed user. The Link Toolbar should be absolutely essential, but sadly it isn't, as not enough people make use of HTML's link element the way it is supposed to be used.

Totally unsemantic is the mouse gestures plugin. Nevertheless, I loved those in Opera, and I'm happy to have them back.

Still, there are such neat things as an RDF editor and query engine. I installed it and now want to see how to work with it... but actually I should go upstairs, clean my room, organise my bills and insurance and do all this real-life stuff...

What's the short message? Get Firefox today and discover its extensions!

Imagine there's a revolution...

... and no one is going to it.

This notion sometimes scares me when I think about the Semantic Web. What if all these great ideas are just too complex to be implemented? What if it remains an ivory-tower dream? But, on the other hand, how much pragmatism can we take without losing the vision?

And then, again, I see the Semantic Web working already: it's del.icio.us, it's flickr, it's julie, and there's so much more to come. The big time of the Semantic Web is yet to come, and I think none of us can really imagine the impact it is going to have. But it will definitely be interesting!

AcceLogiChip

Accelerated logic chips - that would be neat.

The problem with all this OWL stuff is that it is computationally expensive. Google beats you in speed easily, having some 60,000 PCs or so, but also indexing some 8 billion web pages, each with maybe a thousand words. And if you ever tried Google's Desktop Search, you will see they can perform these miracles right on your PC too! (Never mind that there are a dozen tools doing exactly the same stuff Google's Desktop Search does, just better - but hey, they lack the name!)

What does the Semantic Web achieve? Well, ever tried to run a logic inferencing engine with a few million instances? With a highly axiomatized TBox of, let's say, just a few thousand terms? No? You really should.

Sure, our PCs get faster all the time (thanks to Moore's Law!), but is that fast enough? We want to see the Semantic Web up and running not in a few more iterations of Moore's Law, but much, much earlier. Why not use the same trick the graphics magicians did? Highly specialised accelerated logic chips - things that can do your tableau reasoning in just a fraction of the time needed by your bloated all-purpose CPU.

World Wide Prolog

Today I had an idea - maybe this whole Semantic Web idea is nothing but a big worldwide Prolog program. It's the AI researchers trying to enter the real world through the W3C's backdoor...

No, really, think about it: almost all that most people do with OWL is actually some kind of logic programming. Declaring subsumptions, predicates, conjunctions, testing for entailment, getting answers out of this - but on a world wide scale. And your browser does the inferencing for you (or maybe the server? Depends on your architecture).
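To illustrate: a subsumption like

ex:Dog rdfs:subClassOf ex:Animal.

(with ex: an invented namespace) reads pretty much like the Prolog clause animal(X) :- dog(X), i.e. every dog is an animal. The correspondence is striking, even though it is not exact.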

There are still a lot of open questions (and the actual semantic differences between Description Logics and Logic Programming surely aren't the smallest of them), like how to infer anything from contradicting data (something that surely will happen in the World Wide Semantic Web), how to treat dynamics (I'm not sure how to do that without reification in RDF), and much more. Looking forward to seeing these issues resolved...

Gnowsis and further

Today, Leo Sauermann of the DFKI was here, presenting his work on Gnowsis. It was really interesting, and though I don't agree with everything he said, I am totally impressed by the working system he presented. It's close to some ideas I had about a Semantic Operating System kernel, doing nothing but administering your RDF data and offering it to any application around via an HTTP protocol. Well, I guess this idea was just a tad too obvious...

So I installed Gnowsis on my own desktop and am playing around with it now. I guess the problem is that we don't really have round-trip information yet - i.e., information I change in one place shall magically be changed everywhere. What Gnowsis does is integrate the data from various sources into one view, which makes a lot of applications easily accessible. Great idea. But round-trip data integration is definitely what we need: if I change the phone number of a person, I want this change to be propagated to all applications.

So again, differing from Gnowsis, I would prefer an RDF store that actually does the whole data housekeeping for all applications sitting atop it. Applications are naught but a view on your data. Integrating data from existing applications is done the Gnowsis way, but after that we leave the common trail. Oh well, as said, a really interesting talk.

Mother philosophy

I should start to write some content on this blog soon, but actually I am still impressed with this technology I am learning here every day...

When FOIS 2004 was approaching, an Italian newspaper published this under the heading "Philosophy - finally useful for something" (or so; my Italian is based on an autodidactic half-day course). I found this funny, and totally untrue.

Philosophy has always had the bad luck that every time a certain aspect of it attracted wider attention, this aspect became a discipline of its own. Physics, geometry and mathematics are the classical examples; later on theology, linguistics and anthropology, and then, in the 20th century, logic went this way too. It's as if philosophy were the big incubator for new disciplines (you can still see that in the Anglo-American tradition of almost all doctors actually being Ph.D.s, philosophical doctors).

Thus this misconception becomes understandable. Now, let's look around - what's the next discipline being born from philosophy? Will it be business ethics? Will it be the philosophy of science, renamed as scientific management?

My guess is: due to the fast-growing area of the Semantic Web, it will be ontology. Wikipedia already has two articles on it, one on ontologies in philosophy and one on ontologies in computer science. This trend will gain momentum, and even though applied ontology will always feed on the fundamental work done from Socrates until today, it will become a full-fledged discipline of its own.

I'm a believer

The Semantic Web promises quite a lot. Just take a look at the most-cited description of the vision of the Semantic Web, written by Tim Berners-Lee and others. Many people are researching the various aspects of the SemWeb, but in personal discussions I often sense a lack of belief.

I believe in it. I believe it will change the world. It will be a huge step forward for the data integration problem. It will allow many people to have more time to spend on the things they really love to do. It will help people organize their lives. It will make computers seem more intelligent and helpful. It will make the world a better place to live in.

This doesn't mean it will save the world. It will offer only "nice to have" features - but then, so many of them that you will hardly be able to imagine the world without them. I hardly remember what the world was like before e-Mail came along (I'm not that old yet, mind you). I sometimes can't remember how we went out in the evening without a mobile. That's where I see the SemWeb in 10 years: no one will think it's essential, but you will be amazed when you think back to how you lived without it.

Who am I?

Well, as this is a blog, it will turn out that what I write is more important than who I am. Just for context, I nevertheless want to offer a short sketch of my bio.

I studied Computer Science and Philosophy at the University of Stuttgart, Germany. In Computer Science I concentrated on Software Architectures, Programming Languages and User Interfaces, and my master's thesis happened to be the first package to offer a validating XML parser for the programming language Ada 95.
In Philosophy I started out thinking a lot about justice, especially John Rawls and Plato, but finally made a strong move towards Constructivist Epistemology and the ontological status of neural networks (both papers are in German and available from my website).

It's a pretty funny thing that next week I will listen to a talk on neural networks and ontologies again, and nevertheless the paper I wrote back then and the talk won't have too much in common ;-)

Well, so how come I am working on Semantic Web technologies now? I have the incredible luck to work in the Knowledge Management Group of the AIFB in Karlsruhe, and there on the EU SEKT project. I still have a lot to learn, but in the last few weeks I have acquired quite a good grasp of Ontology Engineering, RDF and OWL and some other fields. This is all pretty exciting and amazing, and I am looking forward to seeing what's around the next triple.

Welcome!

Welcome to my new blog! Technology kindly provided by Blogger.com.