Semantic search


Why some are disenchanted

In a comment to my last blog entry, Christopher St John wrote:

"I suffered through the 80's Knowledge Representation fad, both academically in the AI program at Edinburgh and as a practitioner at the only company ever to produce a commercial system written in Prolog (that wasn't a Prolog development system.) So I'm familiar with the problems that the Semantic Web effort is attempting to address. Having slogged through real-life efforts to encode substantial amounts of knowledge, I find some of the misty-eyed musings that surround the Semantic Web effort depressing. That "most information on the Web is designed for human consumption" is seen as an obstacle surmountable via tools like RDF is especially sad. On the other hand, I'm always happy to make use of the cool tools that these sorts of things seem to throw off. There's probably a certain Proverbs 26:11 aspect to it as well."

Thanks for your insightful comment - being new to the field, I certainly appreciate a report based on real-life experience. And I have to admit that I have probably been guilty of being misty-eyed about the Semantic Web myself more than once (and probably will be again).

'"Most information on the Web is designed for human consumption" is seen as an obstacle'. Yes, you are right, this is probably the worst phrased sentence in the Semantic Web vision. Although I think it's somehow true: if you want the computer to help you dealing with today's information overflow, it must understand as much of the information as possible. The sentence should be at least rephrased as "most information on the Web is designed only for human consumption". I think it would be pretty easy to create both human-readable and machine-friendly information with only little overhead. Providing such systems should be fairly easy. But this is only about the phrasing of the sentence - I hope that every Semwebber agrees that the Semantic Web's ultimate goal is to help humans, not machines. But we must help the machines in order to enable them to help us.

The much more important point that Christopher addresses is his own disenchantment with Knowledge Representation research in the 80s, which probably mirrors many people's disenchantment with the AI research of the generation before. So the Semantic Web may just look like the third generation of futile technologies trying to solve AI-complete problems.

There were some pretty impressive results from AI and KR, and the Semantic Web people build on them. Some more, some less - some even too much, forgetting the most important component of the Semantic Web along the way: the Web. Yes, you can write whole 15-page papers, submit them to Semantic Web conferences and journals, and not once mention anything web-specific. That's bad, and that is what Christopher, like some researchers, does not see either: the main difference between the work of two decades ago and today's line of investigation. The Web changes everything. I don't know if AI and KR had to fail - probably they did, because there were so many intelligent people working on them, so there is no other explanation than that they had to fail given the premises of their time. I have no idea if the Semantic Web is bound to fail as well today. I have no idea if we will be able to achieve as much as AI and KR did in their time, or less, or maybe even more. I am a researcher. I have no idea if the things I do will work.

But I strongly believe it will, and I will invest my time and part of my life towards this goal. And so do dozens of dozens of other people. Let's hope that something nice will be created in the course of our work. Like RDF.

Why we will win

People keep saying that the Semantic Web is just hype. That we are just an unholy chimaera of undead AI researchers talking about problems the database guys solved 15 years ago. And that our work will never make any impact on the so-called real world out there.

As I stated before: I'm a believer. I'm even a Catholic, so this means I'm pretty good at ignoring hard facts about reality in order to stick to my beliefs, but it is different in this case: I am slowly starting to comprehend why Semantic Web technology will prevail and make life better for everyone out there. It's simply the next step in the IT RevoEvolution.

Let's remember the history of computing. Shortly after the invention of the abacus, the obvious next step, the computer mainframe, appeared. Whoever wanted to work with it had to learn to use this one mainframe model (well, the very first ones were one-of-a-kind machines). Being able to use one didn't necessarily help you use another.

At first, the costs of software development were negligible. But slowly this changed, and Fred Brooks wrote down his experience with creating the legendary System/360 in The Mythical Man-Month (a must-read for software engineers), showing how much had changed.

Change was about to come, and it came twofold. Dennis Ritchie is to blame for both: together with Ken Thompson he made Unix, but in order to make that, he had to make a programming language to write Unix in - this was C, which he later made famous together with Brian Kernighan through their book (this account is overly simplified; look at the history of Unix for a better overview).

Things became much easier now. You could port programs more simply than before: just recompile (and introduce a few hundred #IFDEFs). Still, the masses used the Commodore 64, the Amiga, the Atari ST. Buying a compatible model was more important than looking at the specs. It was the achievement of the hardware development around the PC and of Microsoft to unify the operating systems for home computers.

Then came the dawn of the age of the World Wide Web. Suddenly the operating system became uninteresting; the browser you used was more important. Browser wars raged. And in parallel, Java emerged. Compile once, run everywhere. How cool was that? And after the browser wars ended, the W3C's cries for standards were finally heard.

That's the world as it is now. Working at the AIFB, I see how no one cares what operating system anyone else has, be it Linux, Mac or Windows, as long as you have a running Java Virtual Machine, a Python interpreter, a browser, a C++ compiler. Portability really isn't the problem anymore (like everything in this text, this is oversimplified).

But do you think being OS-independent is enough? Are you content with having your programs run everywhere? If so, fine. But you shouldn't be. You should ask for more. You also want to be independent of applications! Take back your data. Data wants to be free, not locked inside an application. After you have written your text in Word, you want to be able to work with it in your LaTeX typesetter. After getting contact information via a Bluetooth connection to your mobile phone, you want to be able to send an email to the contact from your web mail account.

There are two ways to achieve this. The first is standard data formats: if everyone uses vCard files for contact information, the data should flow freely, shouldn't it? OpenOffice can read Word files, so there we see interoperability of data, don't we?

Yes, we do. And when it works, fine. But more often than not it doesn't. You need to export and import data explicitly. Tedious, boring, error-prone, unnerving. Standards don't happen that easily. Often enough, interoperability is achieved through reverse engineering. That's not the way to go.

The second way is a common data model with well-defined semantics, solving tons of interoperability questions (character sets, syntax, file transfer) once, and letting you declare semantic mappings with ontologies - just try to imagine that! Applications aware of each other, speaking a common language - but without standards bodies discussing it for years and defining it statically, unmoving.
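
To make this a little more concrete, here is a minimal sketch in Python using the rdflib library (the two application vocabularies and the mapping are purely hypothetical illustrations, not an existing standard): two applications call the "email" property by different names, and a single ontology-level equivalence statement is enough for a generic consumer to treat them as the same thing.

  # Minimal sketch (hypothetical vocabularies): two apps describe the same
  # contact differently; one owl:equivalentProperty triple bridges them.
  from rdflib import Graph, Literal, Namespace, URIRef
  from rdflib.namespace import FOAF, OWL, RDF

  appA = Namespace("http://example.org/appA/")   # hypothetical vocabulary of app A
  appB = Namespace("http://example.org/appB/")   # hypothetical vocabulary of app B

  g = Graph()
  alice = URIRef("http://example.org/people/alice")

  # App A exports its contact data...
  g.add((alice, RDF.type, FOAF.Person))
  g.add((alice, appA.eMail, Literal("alice@example.org")))

  # ...app B uses a different property name; the mapping is a single statement.
  g.add((appA.eMail, OWL.equivalentProperty, appB.email))

  # A (very naive) consumer that follows the mapping instead of needing a converter:
  def values(graph, subject, prop):
      props = {prop} | set(graph.objects(prop, OWL.equivalentProperty)) \
                     | set(graph.subjects(OWL.equivalentProperty, prop))
      return [o for p in props for o in graph.objects(subject, p)]

  print(values(g, alice, appB.email))  # finds the address that app A published

No standards body had to meet for that last mapping triple; whoever understands it can use it.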

There is a common theme in the IT history towards more freedom. I don't mean free like in free speech, I mean free like in free will.

That's why we will win.

Wiki workshop 2019

24 May 2019

Last week, May 14, saw the fifth incarnation of the Wiki workshop, co-located with the Web Conference (formerly known as dubdubdub), in San Francisco. The room was tight and very full - I am bad at estimating, but I guess 80-110 people were there.

I was honored to be invited to give the opening talk, and since I had a bit more time than in the last few talks, I really indulged in sketching out the proposal for the Abstract Wikipedia, providing plenty of figures and use cases. The response was phenomenal, and there were plenty of questions not only after the talk but also throughout the day and in the next few days. In fact, the Open Discussion slot was very much dominated by more questions about the proposal. I found that extremely encouraging. Some of the comments were immediately incorporated into a paper I am writing right now and that will be available for public reviews soon.

The other presentations - both the invited and the accepted ones - were super interesting.

Thanks to Dario Taraborelli, Bob West, and Miriam Redi for organizing the workshop.

A little extra was that I smuggled my brother and his wife into the workshop for my talk (they are visiting, and they have never been to one of my talks before). It was certainly interesting to hear their reactions afterwards - if you have non-academic relatives, you might underestimate how much they may enjoy such an event as mere spectators. I certainly did.

See also the #wikiworkshop2019 tag on Twitter.

Wikidata - The Making of

19 May 2023

Markus Krötzsch, Lydia Pintscher and I wrote a paper on the history of Wikidata. We published it in the History of the Web track at The Web Conference 2023 in Austin, Texas (what used to be called the WWW conference). This spun out of the Ten years of Wikidata post I published here.

The open access paper is available here as HTML: dl.acm.org/doi/fullHtml/10.1145/3543873.3585579

Here as a PDF: dl.acm.org/doi/pdf/10.1145/3543873.3585579

Here on Wikisource, thanks to Mike Peel for reformatting: Wikisource: Wikidata - The Making Of

Here is a YouTube trailer for the talk: youtu.be/YxWs_BS31QE

And here is the full talk (recreated) on YouTube: youtu.be/P3-nklyrDx4

Wikidata crossed 2 billion edits

The Wikidata community edited Wikidata 2 billion times!

Wikidata is, to the best of my knowledge, the first and only wiki to cross 2 billion edits (the second most edited one being English Wikipedia with 1.18 billion edits).

Edit number 2,000,000,000, made by user Luca.favorido, added the first-person plural future form of the Italian verb 'grugnire' (to grunt).

Wikidata also celebrated 11 years since launch with the hybrid WikidataCon 2023 in Taipei last weekend.

It took from 2012 to 2019 to get the first billion, and from 2019 to now for the second. As they say, the first billion is the hardest.

That the two billionth edit happened right on the birthday is a nice surprise.

Wikidata crossed Q100000000

Wikidata crossed Q100000000 (and, in fact, skipped it and got Q100000001 instead).

Here's a small post by Lydia Pintscher and me: https://diff.wikimedia.org/2020/10/06/wikidata-reaches-q100000000/

Wikidata lexicographic data coverage for Croatian in 2023

Last year, I published ambitious goals for the coverage of lexicographic data for Croatian in Wikidata. My self-proclaimed goal was missed by a wide margin: I wanted to go from 40% coverage to 60% - instead, thanks to the help of contributors, we reached 45%.

We grew from 3,124 forms to 4,115, i.e. almost a thousand new forms, or about 31%. The coverage grew from around 11 million tokens to about 13 million tokens in the Croatian Wikipedia, or, as said, from 40% to 45%. The share of covered forms grew from 1.4% to 1.9%, which neatly illustrates the increasing difficulty of gaining more coverage (thanks to Zipf's law): last year, we increased the covered forms by 1%, which translated into an overall increase in the coverage of occurrences of 35%. This year, although we increased the covered forms by another 0.5%, we only got an overall increase in the coverage of occurrences of 5%.

But some of my energy was diverted from adding more lexicographic data to adding functions that help with adding and checking lexicographic data. We launched a new project, Wikifunctions, that can hold functions. There, we collected functions to create the regular forms for Croatian nouns. All nouns are now covered.

I think that's still a great achievement and good progress. Sure, we didn't meet the 60%, but the functions helped a lot to get to the 45%, and they will continue to benefit us in 2024 too. Again, I want to declare some goals, at least for myself, but not as ambitious with regard to coverage: the goal for 2024 is to reach 50% coverage of Croatian, and in addition I would love us to have Lexeme forms available for verbs and adjectives, not only for nouns (for verbs, Ivi404 did most of the work already), and maybe even have functions ready for adjectives.

Wikidata or scraping Wikipedia

Yesterday I was pointed to a blog post describing how to answer an interesting project: how many generations from Alfred the Great to Elizabeth II? Alfred the Great was a king in England at the end of the 9th century, and Elizabeth II is the current Queen of England (and a bit more).

The author of the blog post, Bill P. Godfrey, describes in detail how he wrote a crawler that started downloading the English Wikipedia article of Queen Elizabeth II, and then followed the links in the infobox to download all her ancestors, one after the other. He used a scraper to get the information from the Wikipedia infoboxes from the HTML page. He invested quite a bit of work in cleaning the data, particularly doing entity reconciliation. This was then turned into a graph and the data analyzed, resulting in a number of paths from Elizabeth II to Alfred, the shortest being 31 generations.

I honestly love these kinds of projects, and I found Bill’s write-up interesting and read it with pleasure. It is totally something I would love to do myself. Congrats to Bill for doing it. Bill provided the dataset for further analysis on his Website. Thanks for that!

Everything I say in this post is not meant, in any way, as a criticism of Bill. As said, I think he did a fun project with interesting results, and he wrote a good write-up and published his data. All of this is great. I left a comment on the blog post sketching out how Wikidata could be used for similar results.

He submitted his blog post to Hacker News, where a, to me, extremely surprising discussion ensued. He was pointed rather naturally and swiftly to Wikidata and DBpedia. DBpedia is a project that started and invested heavily in scraping the infoboxes from Wikipedia. Wikidata is a sibling project of Wikipedia where data can be directly maintained by contributors and accessed in a number of machine-readable ways. Asked why he didn’t use Wikidata, he said he didn’t know about it. All fair and good.

But some of the discussions and comments on Hacker News surprised me entirely.

Expressing my consternation, I started discussions on Twitter and on Facebook. And there were some very interesting stories about the pain of using Wikidata, and I very much expect us to learn from them and hopefully make things easier. The number of API queries one has to make in order to get the data (although these numbers would be much smaller than with the scraping approach), the learning curve for SPARQL and RDF (although you can ignore both unless you want to use them explicitly - you can just use JSON and the Wikidata API), and the opaqueness of the identifiers (wdt:P25 wd:Q9682 instead of "mother" and "Queen Elizabeth II") were just a few. The documentation seems hard to find, and there seems to be a lack of libraries and APIs that are easy to use. And yet, comments like "if you've actually tried getting data from wikidata/wikipedia you very quickly learn the HTML is much easier to parse than the results wikidata gives you" surprised me a lot.
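
As an aside, to make the JSON route mentioned above concrete, here is a minimal sketch in Python with the requests library; the API endpoint, the wbgetentities action, and the P25/P22/Q9682 identifiers are the real ones discussed in this post, while the little helper itself is just an illustration, not an official client.

  # Minimal sketch: get the parents (P25 mother, P22 father) of Q9682
  # (Elizabeth II) from the Wikidata API as plain JSON - no SPARQL, no RDF.
  import requests

  API = "https://www.wikidata.org/w/api.php"

  def parents(qid):
      """Return the item IDs of the mother (P25) and father (P22) of an item."""
      data = requests.get(API, params={
          "action": "wbgetentities",
          "ids": qid,
          "props": "claims",
          "format": "json",
      }, headers={"User-Agent": "ancestry-sketch/0.1 (example)"}).json()
      claims = data["entities"][qid]["claims"]
      result = []
      for prop in ("P25", "P22"):
          for claim in claims.get(prop, []):
              snak = claim["mainsnak"]
              if snak["snaktype"] == "value":  # skip "unknown value" / "no value"
                  result.append(snak["datavalue"]["value"]["id"])
      return result

  print(parents("Q9682"))  # the items for her mother and father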

Others asked about the data quality of Wikidata, and complained about the huge amount of bad data, duplicates, and the bad ontology in Wikidata (as if Wikipedia didn't have these problems - I mean, how do you figure out what a Wikipedia article is about? How do you get a list of all bridges or events from Wikipedia?).

I am not here to fight. I am here to listen and to learn, in order to help figuring out what needs to be made better. I did dive into the question of data quality. Thankfully, Bill provides his dataset on the Website, and downloading the query result for the following query - select * { wd:Q9682 (wdt:P25|wdt:P22)* ?p . ?p wdt:P25|wdt:P22 ?q } - is just one click away. The result of this query is equivalent to what Bill was trying to achieve - a list of all ancestors of Elizabeth II. (The actual query is a little bit more complex, because we also fetch the names of the ancestors, and their Wikipedia articles, in order to help match the data to Bill’s data).
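
For readers who would rather script it than click, here is a minimal sketch of running that query against the Wikidata Query Service in Python with the requests library (the endpoint and result format are the standard ones; the counting at the end is just an illustration of what one can do with the result):

  # Minimal sketch: run the ancestor query from above against the
  # Wikidata Query Service and count people and parenthood relationships.
  # The wd:/wdt: prefixes are predefined on the service, so no PREFIX lines needed.
  import requests

  ENDPOINT = "https://query.wikidata.org/sparql"
  QUERY = """
  SELECT * WHERE {
    wd:Q9682 (wdt:P25|wdt:P22)* ?p .
    ?p wdt:P25|wdt:P22 ?q .
  }
  """

  rows = requests.get(
      ENDPOINT,
      params={"query": QUERY, "format": "json"},
      headers={"User-Agent": "ancestry-sketch/0.1 (example)"},
  ).json()["results"]["bindings"]

  people = {r["p"]["value"] for r in rows} | {r["q"]["value"] for r in rows}
  print(len(people), "ancestors (including Elizabeth II), connected by", len(rows), "parenthood relationships")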

I would claim that I invested far less work than Bill in creating my graph data. No data cleansing, no scraping, no crawling, no entity reconciliation, no manual checking. How about the quality of the two datasets?

Update: Note, this post is not a tutorial for SPARQL or Wikidata. You can find an explanation of the query in the discussion on Hacker News about this post. I really wanted to see how the quality of the data from the two approaches compares. Yes, it is an unfamiliar language for many, but I used to teach SPARQL, and the basics of the language seem not that hard to learn. Try out this tutorial for example. Update over.

So, let’s look at the datasets. I will refer to the two datasets as the scrape (that’s Bill’s dataset) and Wikidata (that’s the query result from Wikidata, as of the morning of August 20 - in particular, none of the errors in Wikidata mentioned below have been fixed).

In the scrape, we find 2,584 ancestors of Elizabeth II (including herself). They are connected with 3,528 parenthood relationships.

In Wikidata, we find 20,068 ancestors of Elizabeth II (including herself). They are connected with 25,414 parenthood relationships.

So the scrape only found a bit less than 13% of the people that Wikidata knows about, and close to 14% of the relationships. If you ask me, that’s quite a bad recall - almost seven out of eight ancestors are missing.

Did the scrape find things that are missing in Wikidata? Yes. 43 ancestors are in the scrape which are missing in Wikidata, and 61 parenthood relationships are in the scrape which are missing from Wikidata. That’s about 1.8% of the data in the scrape, or 0.24% compared to the overall parent relationship data of Elizabeth II in Wikidata.

I evaluated the complete list of those relationships from the scrape missing from Wikidata. They fall into five categories:

  • Category 1: Errors that come from the scraper. 40 of the 61 relationships are errors introduced by the scraper. We have cities or countries being parents - which isn’t too terrible, as Bill says in the blog post, because they won’t have parents themselves and won’t participate in the original question of finding the lineage from Alfred to Elizabeth, so no problem. More problematic is when grandparents or great-grandparents are identified as the parent, because this directly messes up the counting of generations: Ügyek is thought to be a son, not a grandson of Prince Csaba, Anna Dalassene is skipping two generations to Theophylact Dalassenos, etc. This means we have an error rate of at least 1.1% in the scraped dataset, besides the low recall rate mentioned above.
  • Category 2: Wikipedia has an error. Those are rare, it happened twice. Adelaide of Metz had the wrong father and Sophie of Mecklenburg linked to the wrong mother in the infobox (although the text was linking to the right one). The first one has been fixed since Bill ran his scraper (unlucky timing!), and I fixed the second one. Note I am linking to the historic version of the article with the error.
  • Category 3: Wikidata was missing data. Jeanne de Fougères, Countess of La Marche and of Angoulême and Albert Azzo II, Margrave of Milan were missing one or both of their parents, and Bill’s scraping found them. So of the more than 3,500 scraped relationships, only 2 were missing! I added both.
  • In addition, correct data was marked deprecated once. I fixed that, too.
  • Category 4: Wikidata has duplicates, and that breaks the chain. That happened five times, I think the following pairs are duplicates: Q28739301/Q106688884, Q105274433/Q40115489, Q56285134/Q354855, Q61578108/Q546165 and Q15730031/Q59578032. Duplicates were mentioned explicitly in one of the comments as a problem, and here we can see that they happen with quite a bit of frequency, particularly for non-central items. I merged all of these.
  • Category 5: the situation is complicated, and different Wikipedia versions disagree, because the sources seem to disagree. Sometimes Wikidata models that disagreement quite well - but often not. After all, we are talking about people who sometimes lived more than a millennium ago. Here are these cases: Albert II, Margrave of Brandenburg to Ada of Holland; Prince Álmos to Sophia to Emmo of Loon (complicated by a duplicate as well); Oldřich, Duke of Bohemia to Adiva; William III to Raymond III, both Counts of Toulouse; Thored to Oslac of York; Bermudo II of León to Ordoño III of León (Galician says IV); and Robert Fitzhamon to Hamo Dapifer. In total, eight cases. I didn't edit those as these require quite a bit of thought.

Note that there was not a single case of “Wikidata got it wrong”, which surprised me a lot - I totally expected errors to happen. Unless you count the cases in Category 5. I mean, even English Wikipedia had errors! This was a pleasant surprise. Also, the genuine complicated cases are roughly as frequent as missing data, duplicates, and errors together. To be honest, that sounds like a pretty good result to me.

Also, the scraped data? Recall might be low, but the precision is pretty good: more than 98% of it is corroborated by Wikidata. Not all scraping jobs have such a high correctness.

In general, these results are comparable to a comparison of Wikidata with DBpedia and Freebase I did two years ago.

Oh, and what about Bill’s original question?

Turns out that Wikidata knows of a path between Alfred and Elizabeth II that is even shorter than the shortest 31 generations Bill found, as it takes only 30 generations.

This is Bill’s path:

  • Alfred the Great
  • Ælfthryth, Countess of Flanders
  • Arnulf I, Count of Flanders
  • Baldwin III, Count of Flanders
  • Arnulf II, Count of Flanders
  • Baldwin IV, Count of Flanders
  • Judith of Flanders
  • Henry IX, Duke of Bavaria
  • Henry X, Duke of Bavaria
  • Henry the Lion
  • Henry V, Count Palatine of the Rhine
  • Agnes of the Palatinate
  • Louis II, Duke of Bavaria
  • Louis IV, Holy Roman Emperor
  • Albert I, Duke of Bavaria
  • Joanna Sophia of Bavaria
  • Albert II of Germany
  • Elizabeth of Austria
  • Barbara Jagiellon
  • Christine of Saxony
  • Christine of Hesse
  • Sophia of Holstein-Gottorp
  • Adolphus Frederick I, Duke of Mecklenburg-Schwerin
  • Adolphus Frederick II, Duke of Mecklenburg-Strelitz
  • Duke Charles Louis Frederick of Mecklenburg
  • Charlotte of Mecklenburg-Strelitz
  • Prince Adolphus, Duke of Cambridge
  • Princess Mary Adelaide of Cambridge
  • Mary of Teck
  • George VI
  • Elizabeth II

And this is the path that I found using the Wikidata data:

  • Alfred the Great
  • Edward the Elder (surprisingly, it deviates right at the beginning)
  • Eadgifu of Wessex
  • Louis IV of France
  • Matilda of France
  • Gerberga of Burgundy
  • Matilda of Swabia (this is a weak link in the chain, though, as there might possibly be two Matildas having been merged together. Ask your resident historian)
  • Adalbert II, Count of Ballenstedt
  • Otto, Count of Ballenstedt
  • Albert the Bear
  • Bernhard, Count of Anhalt
  • Albert I, Duke of Saxony
  • Albert II, Duke of Saxony
  • Rudolf I, Duke of Saxe-Wittenberg
  • Wenceslaus I, Duke of Saxe-Wittenberg
  • Rudolf III, Duke of Saxe-Wittenberg
  • Barbara of Saxe-Wittenberg (Barbara has no article in the English Wikipedia, but in German, Bulgarian, and Italian. Since the scraper only looks at English, they would have never found this path)
  • Dorothea of Brandenburg
  • Frederick I of Denmark
  • Adolf, Duke of Holstein-Gottorp (husband to Christine of Hesse in Bill’s path)
  • Sophia of Holstein-Gottorp (and here the two lineages merge again)
  • Adolphus Frederick I, Duke of Mecklenburg-Schwerin
  • Adolphus Frederick II, Duke of Mecklenburg-Strelitz
  • Duke Charles Louis Frederick of Mecklenburg
  • Charlotte of Mecklenburg-Strelitz
  • Prince Adolphus, Duke of Cambridge
  • Princess Mary Adelaide of Cambridge
  • Mary of Teck
  • George VI
  • Elizabeth II

I hope that this is an interesting result for Bill coming out of this exercise.

I am super thankful to Bill for doing this work and describing it. It led to very interesting discussions and triggered insights into some shortcomings of Wikidata. I hope the above write-up is also helpful, particularly in providing some data regarding the quality of Wikidata, and I hope that it will lead to work in making Wikidata more and easier accessible to explorers like Bill.

Update: there has been a discussion of this post on Hacker News.

Wikidata reached a billion edits

As of today, Wikidata has reached a billion edits - 1,000,000,000.

This makes it the first Wikimedia project that has reached that number, and possibly the first wiki ever to have reached so many edits. Given that Wikidata was launched less than seven years ago, this means an average edit rate of 4-5 edits per second.

The billionth edit is the creation of an item for a 2006 physics article written in Chinese.

Congratulations to the community! This is a tremendous success.

Wikidatan in residence at Google

Over the last few years, more and more research teams all around the world have started to use Wikidata. Wikidata is becoming a fundamental resource. That is also true for research at Google. One advantage of using Wikidata as a research resource is that it is available to everyone. Results can be reproduced and validated externally. Yay!

I had been using my 20% time to support such teams. The requests became more frequent, and now I am moving to a new role in Google Research, akin to a Wikimedian in Residence: my role is to promote understanding of the Wikimedia projects within Google, to work with Googlers to share more resources with the Wikimedia communities, and to facilitate the improvement of Wikimedia content by the Wikimedia communities, all with a strong focus on Wikidata.

One deeply satisfying thing for me is that the goals of my new role and the goals of the communities are so well aligned: it is really about improving the coverage and quality of the content, and about pushing the projects closer towards letting everyone share in the sum of all knowledge.

Expect to see more from me again - there are already a number of fun ideas in the pipeline, and I am looking forward to seeing them get out of the gate! I am looking forward to hearing your ideas and suggestions, and to continuing to contribute to the Wikimedia goals.

Wikimania 2006 is over

And it sure was one of the hottest conferences ever! I don't mean just because of the 40°C/100°F that we had to endure in Boston, but also because of the speakers there.

Brewster Kahle, the man behind the Internet Archive, and who started Alexa and WAIS Inc., told us about his plans to digitize every book (just a few petabytes), every movie (just a few petabytes), every record (just a... well, you get the idea), and to make a snapshot of the web every few months, and archive all of this. Wow.

Yochai Benkler spoke about the Wealth of Networks. You can download his book from his site, or go to a bookstore and get it there. The talk really made me want to read it: why does a network thingy like Wikipedia work and not suck? How does this change basically everything?

Next day, there was Mitch Kapor, president of the Open Source Applications Foundation -- and I am really sorry I had to miss his talk, because at the same time we were giving our workshop on how to reuse the knowledge within a Semantic MediaWiki in your own applications and websites. Markus Krötzsch, travel companion and fellow AIFB PhD student, and basically the wizard who programmed most of the Semantic MediaWiki extension, totally surprised me by being surprised about what you can do with this Semantic Web stuff. Yes, indeed, the idea is to be able to ask another website to put stuff up on yours. And to mash up data.

There was David Weinberger, whose talk made me laugh more than I had for a while (and I am quite merry, usually!). I still have to rethink what he actually said, contentwise, but it made a lot of sense, and I took some notes, it was on the structure of knowledge, and how it changes in the new world we are living in.

Ben Shneiderman, the pope of visualization and user interfaces, gave an interesting talk on visualizing Wikipedia. The two talks before his, by Fernanda Viegas and Martin Wattenberg, were really great, because they visualized real Wikipedia data -- and showed us a lot of interesting things. I hope their tools will become available soon. (Ben's own talk was a bit disappointing, as he didn't seem to have had the time to take some real data, but only used fake data to show some generally possible visualizations. As I had the chance to see him in Darmstadt last year anyway, I didn't see much new stuff.)

The party at the MIT Museum was great! Even though I wasn't allowed to drink, because I forgot my ID. I'd never have thought anyone would consider me to look younger than 21, so I take this as the most sincere compliment. Don't bother explaining that they had to check my ID even if I looked 110 - I really don't want to hear it :) I saw Kismet! Sadly, it was switched off.

Trust me, I was kinda tired after this week. It was lots of fun, and it was enormously interesting. Thanks to all the Wikipedians who made Wikipedia and Wikimania possible. Thanks to all these people for organizing this event and helping out! I am looking forward to Wikimania 2007, wherever it will be. The bidding for hosting Wikimania 2007 is open!

Wikimania is coming

Wikimania starts on Friday. I'm looking forward to it. I'll be there with a colleague, and on Friday we will present a paper, Wikipedia and the Semantic Web - The Missing Links. Should you be in Frankfurt, don't miss it!

Here's the abstract: "Wikipedia is the biggest collaboratively created source of encyclopaedic knowledge. Growing beyond the borders of any traditional encyclopaedia, it is facing new problems of knowledge management: The current excessive usage of article lists and categories witnesses the fact that 19th century content organization technologies like inter-article references and indices are no longer sufficient for today's needs.

Rather, it is necessary to allow knowledge processing in a computer assisted way, for example to intelligently query the knowledge base. To this end, we propose the introduction of typed links as an extremely simple and unintrusive way for rendering large parts of Wikipedia machine readable. We provide a detailed plan on how to achieve this goal in a way that hardly impacts usability and performance, propose an implementation plan, and discuss possible difficulties on Wikipedia's way to the semantic future of the World Wide Web. The possible gains of this endeavour are huge; we sketch them by considering some immediate applications that semantic technologies can provide to enhance browsing, searching, and editing Wikipedia."

Basically, we suggest introducing typed links into Wikipedia, and an RDF export of the articles in which these typed links are regarded as relations. And suddenly you get a huge ontology, created by thousands and thousands of editors, queryable and usable, a really big starting block and incubator for Semantic Web technologies - and all this, still scalable!
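
To give a rough idea of how little machinery the proposal needs, here is a toy sketch in Python (the [[property::target]] notation is one possible syntax for such typed links, and the extraction code is hypothetical, not the actual implementation):

  # Toy sketch: pull typed links of the form [[property::Target]] out of wiki text
  # and print them as simple subject-property-object statements.
  import re

  TYPED_LINK = re.compile(r"\[\[([^:\]|]+)::([^\]|]+)(?:\|[^\]]*)?\]\]")

  def typed_links(article_title, wikitext):
      """Yield (subject, property, object) triples found in an article's wiki text."""
      for prop, target in TYPED_LINK.findall(wikitext):
          yield (article_title, prop.strip(), target.strip())

  text = "London is the [[capital of::United Kingdom]] and lies on the [[located on::Thames]]."
  for triple in typed_links("London", text):
      print(triple)
  # ('London', 'capital of', 'United Kingdom')
  # ('London', 'located on', 'Thames')

Exporting such triples as RDF is then a purely mechanical step.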

I hope with all my heart that the Wikipedia community agrees that this is a nice idea. We'll see this weekend.

Wikipedia protests

A number of Wikipedias (German, Danish, Estonian, Czech) are dressed in black today to prevent badly made changes to the law. I am proud of the volunteers of the Wikipedias who managed to organize this.

Welcome to Nodix!

The last two months were clearly dominated by my studies. I did have the luck of attending two wonderful cons - one was the WWW, the Rahjacon, a fantastic live roleplaying event, certainly my second favorite so far; the other was the Ebertreffen, which also gave me a lot of joy and a challenge that I seem to have mastered, and about both conventions you will find further information, probably also pictures, on Sven Wedeken's website - but my exams in theoretical computer science (over, passed!) and soon in compiler construction (awaited with respect), and the very demanding lab course on visualization running at the same time (in which I get to learn C++, OpenGL, Qt, Doxygen, Emacs, the principles of scene graphs, ray tracing and volume rendering more or less simultaneously), unfortunately did not allow me to add further content to Nodix. Now that I am again stealing a tiny spark more time for myself, and having finally managed to clear out my email inbox a bit, I hope to be able to continue working here as well. In the future there should at least be a regular editorial here that reports on the website as such.

Unfortunately, this website is filling up with content only very slowly. So far the main things to point to are the talk 'Was ist Software Architektur?', the few book reviews I have written, and the short story 'Legende von Castle Darkmore'. Furthest along is the material for the roleplaying game Das Schwarze Auge, where you will find texts on the theory of magic, on dotdsa and above all on my gaming group, plus Niobaras Foliant for download, a program for displaying the Aventurian night sky.
Within the next month I plan to publish a few more of my short stories, to greatly expand the material about my gaming group, and to add further book reviews. All of this naturally depends very much on how much time I have, but I can promise one thing: this website will blossom! Even if a daily click is not yet worthwhile, a monthly look usually is.

I hope to be able to welcome you here again soon,
Denny Vrandecic

Welcome to Simia

Welcome to Simia, the new website of Denny Vrandecic. After not having written anything on my blog for what feels like three ages, and not having put any new content on my pages since the beginning of time, I can now tell you the reason: I wanted to change the whole technical setup.

And I have finally made good progress with that. At the moment you will find here all the blog entries from Nodix and their comments. The feature for writing new comments does not work yet, but I am working on it. You will also notice that considerably more of the content on the site is in English than before.

Technically, Simia is a Semantic MediaWiki installation. This makes this blog part of my research as well, as I want to gather some first-hand experience of what it is like to run one's blog and personal homepage with Semantic MediaWiki. (In that sense it is of course no longer a blog but a so-called bliki, but who cares?) And since the whole thing is semantic, I want to find out how such a personal website fits into the Semantic Web...

To stay up to date, there are a number of feeds on Simia. Pick whichever you like. Best regards, and I hope you munched your way well through the Christmas season! :)

Welcome to the website of Denny Vrandecic!

This website was created only recently and accordingly contains little content so far. As quickly as I can manage, it will be filled with content about the subjects of my studies - computer science and philosophy -, about myself, about my visions for the future, and about roleplaying games - above all DSA and Shadowrun -, plus a few small things I have created over the years. An XML version of these pages is also being worked on at the moment. It would be nice if you would try to have a look at it and send me comments (return to this page via your browser buttons).

Do drop by again soon!


Wired: "Wikipedia is the last best place on the Internet"

WIRED published a beautiful ode to Wikipedia, painting the history of the movement in broad strokes and aiming to capture its impact and ambition in beautiful prose. It is a long piece, but I found the writing exciting.

Here's my favorite paragraph:

"Pedantry this powerful is itself a kind of engine, and it is fueled by an enthusiasm that verges on love. Many early critiques of computer-assisted reference works feared a vital human quality would be stripped out in favor of bland fact-speak. That 1974 article in The Atlantic presaged this concern well: “Accuracy, of course, can better be won by a committee armed with computers than by a single intelligence. But while accuracy binds the trust between reader and contributor, eccentricity and elegance and surprise are the singular qualities that make learning an inviting transaction. And they are not qualities we associate with committees.” Yet Wikipedia has eccentricity, elegance, and surprise in abundance, especially in those moments when enthusiasm becomes excess and detail is rendered so finely (and pointlessly) that it becomes beautiful."

They also interviewed me and others for the piece, but the focus of the article is really on what the Wikipedia communities have achieved in our first two decades.

Two corrections:

  • I cannot be blamed for Wikidata alone, I blame Markus Krötzsch as well.
  • The article says that half of the 40 million entries in Wikidata have been created by humans. I don't know if that is correct - what I said is that half of the edits are made by human contributors.

Things worth knowing about Jamba

Most charmingly written, extremely entertaining on top of that, and yet enlightening and critical in its content:

http://spreeblick.de/wp/index.php?p=324

That is a joy to see. What did I read recently in a Telepolis interview with Norbert Bolz?
"What I like to read most are the ›Streiflicht‹ in the Süddeutsche Zeitung and ›Das Letzte‹ in Die Zeit, that is, satirical columns. These columns have much more explosive power than the commentary of some editorial writer. Such texts are so predictable in their political correctness that they simply bore me. In the form of a joke, quite a lot of political information and criticism can be conveyed much better."

Well, then the link above is an example of the information of the future.

Vodka

The other day, at the beverage store...

"Oh, we could also buy banana and cherry juice, for KiBa."
"Cool idea. That's two bottles of banana and one bottle of cherry."
"Nah, you mix it 1:1. We'll take two of each."
"Really? Fine."
"Oh, look, mango juice! Let's take a bottle of mango juice too."
"But that much won't fit in the crate. We have to take something out."
"Here, let's just take one bottle of banana."
"But then we have two bottles of cherry for only one bottle of banana."
"So what?"
"You said you mix them 1:1."
"Yes, sure, but the cherry juice can be used for other things too."
"Oh yeah? Like what?"
"Cherry vodka, for example."
"We have vodka at home?"
"No."
"..."
"Please don't blog this!"

You don't believe that yourself. Anyone who blogs such pictures of me hardly deserves any mercy... ;)

Wordle is good and pure

The nice thing about Wordle - whether you play it or not, whether you like it or not - is that it is one of those good, pure things the Web was made for. A simple Website, without ads, popups, monetization, invasive tracking, etc.

You know, something that can chiefly be done by someone who already has a comfortable life and won't regret not having monetized this. The same way scientists long were mainly "gentleman scientists". Or tenured professors who spent years on writing novels.

And that is why I think that we should have a Universal Basic Income. To unlock that creativity. To allow for ideas from people who are not already well off to see the light. To allow for a larger diversity of people to try more interesting things.

Thank you for coming to my TED talk.

P.S.: on January 31, five days after I wrote this text, Wordle was acquired by the New York Times for an undisclosed seven-digit sum. I think that is awesome for Wardle, the developer of Wordle, and I still think that what I said was true at that time and still mostly is, although I expect the Website now to slowly change to have more tracking, branding, and eventually a paywall.

World Wide Prolog

Today I had an idea - maybe this whole Semantic Web idea is nothing else than a big worldwide Prolog program. It's the AI researchers trying to enter the real world through the W3C's backdoor...

No, really, think about it: almost everything most people do with OWL is actually some logic programming. Declaring subsumptions, predicates, conjunctions, testing for entailment, getting answers out of this - but on a world wide scale. And your browser does the inferencing for you (or maybe the server? Depends on your architecture).
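
As a toy illustration of what "declaring subsumptions and testing for entailment" boils down to - a naive Python sketch of forward chaining over subclass and instance statements, not how any real OWL or Prolog engine works:

  # Naive sketch: saturate a subclass hierarchy (transitivity) and then check
  # whether an instance statement is entailed - logic programming in miniature.
  subclass_of = {("Dog", "Mammal"), ("Mammal", "Animal")}
  instance_of = {("Rex", "Dog")}

  def entails(subclass_of, instance_of, fact):
      """Check whether the (individual, class) pair follows from the declarations."""
      closure = set(subclass_of)
      changed = True
      while changed:                      # transitive closure of subClassOf
          changed = False
          for (a, b) in list(closure):
              for (c, d) in list(closure):
                  if b == c and (a, d) not in closure:
                      closure.add((a, d))
                      changed = True
      individual, cls = fact
      classes = {c for (i, c) in instance_of if i == individual}
      classes |= {d for (c, d) in closure if c in classes}   # propagate up
      return cls in classes

  print(entails(subclass_of, instance_of, ("Rex", "Animal")))  # True: entailed

The open questions start once the statements are scattered over the whole Web and begin to contradict each other.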

There are still a lot of open questions (and the actual semantic differences between Description Logics and Logic Programming surely aren't the smallest of them), like how to infer anything from contradicting data (something that surely will happen in the World Wide Semantic Web), how to treat dynamics (I'm not sure how to do that without reification in RDF), and much more. Looking forward to seeing these issues resolved...

Word of the year

Merriam-Webster has announced the (English) word of the year: blog.

Where have I been?

A very nice idea, brought to my attention by Fred:

Where in the world, or rather in Europe, have I already been? (On a world map the USA would be added, but the rest would be a yawning grey, which is why I chose the map of Europe instead.)

An image is still missing here.

You can easily put one together for yourself, too, on World66.

Root canal

Back from vacation, tidied things up for university, worked through heaps of emails, and got a - ouch! - root canal treatment behind me, with a bit more still ahead of me (and that for me, of all people, who is so terribly afraid of dentists, damn it).

This as an explanation for the inactivity on this site; beyond that I can only offer the promise that more will happen soon. A few bigger changes are still in the works. Thanks to all the diligent and loyal visitors!

XML4Ada95 0.9

Version 0.9 of XML4Ada95 has been released! That means the documentation has been improved, a few gross bugs are gone, and I am telling the whole world: here it is! Go get it...

My only serious fear is that publishing the package could hurt my diploma thesis: since many eyes will be examining the project already now, there will also be a lot of criticism, including justified criticism. I hope this does not negatively affect the grade. The project itself can only benefit from being corrected. Ugh, I hope I did the right thing by listening to my gut.

XML4Ada95 is growing

XML4Ada95 has grown quite a bit; three dozen new pages have been added. And the site is far from finished!

Nodix was also tidied up a little: among the entries on the front page, May was moved into the archive. But that was mostly for practice again - otherwise I forget how the Nodix website generator is operated...

XML4Ada95 keeps growing

Further growth for XML4Ada95: examples, and an extension of the documentation (more than 100 pages by now - good thing this part was not printed as part of the diploma thesis write-up, phew!).

Things are moving forward. I am also already making the first notes for the DSA4 tool and will soon be working on it again. A revision of Nodix is also due, but that is less urgent - this website has developed differently than expected, and that should be taken into account.

Moved June, July and August from this page into the archive, and also shortened the history on the right again (this makes the start page smaller and thus faster to load).

Have a nice weekend, everyone!

Time difference

It is noon in Hawaii, and I am tired! Good grief.

Probably because I am in Karlsruhe.

Zen and the Art of Motorcycle Maintenance

13 May 2021

During my PhD, on the topic of ontology evaluation - figuring out what a good ontology is and what is not - I was running circles up and down trying to define what "good" means for an ontology (Benjamin Good, another researcher on that topic, had it easier, as he could call his metric "Good metric" and be done with it).

So while I was struggling with the definition in one of my academic essays, a kind anonymous reviewer (I think it was Aldo Gangemi) suggested I should read "Zen and the Art of Motorcycle Maintenance".

When I read the title of the suggested book, I first thought the reviewer was being mean or silly and suggesting a made-up book because I was so incoherent. It took me two days to actually check whether that book existed, as I wouldn't believe it.

It existed. And it really helped me, by allowing me to set boundaries of how far I can go in my own work, and that it is OK to have limitations, and that trying to solve EVERYTHING leads to madness.

(Thanks to Brandon Harris for triggering this memory)

On the 500th

Congratulations to my little sister on her 500th entry on nakit-arts. Wow, 500 entries! Very diligent.

Funnily enough, this entry is in turn the 250th entry on Nodix. Coincidence.

For the new year

May all your wishes for the new year 2003 come true!

Got many things done today, and many more are still waiting. My ears are still ringing from the music of a fantastically brilliant New Year's Eve party - thanks again to the organizers, wow! - and my email inbox keeps filling up, but at least today I took care of most of my conventional mail (yes, such a thing still exists). Also, yesterday my diploma thesis officially began - it is going to be a busy year!

I am also starting to tidy up these pages a little. Besides a new advertising banner, the nutkidz today received a nicer, if very plain, frame. They are now also integrated directly into the Nodix website generator instead of, as until yesterday, being inserted by hand. That makes maintenance easier. See you soon!

On the power of bloggers

sympatexter has written an entry about how bloggers like to take themselves too seriously (as a reply to a piece by Robert Basic, who writes that bloggers do not yet take themselves seriously enough). As a side note: it is amusing to see that of all people it is sympatexter who points out this problem, especially since the tagline of her own blog is sympatexter rules the world. (My mistake, sorry)

What is the point of blogging? That would probably lead too far here. But I would like to take a closer look at some of sympatexter's individual arguments:

  • "What interests bloggers unfortunately interests ONLY bloggers." Not quite true - or at least I would like to see more evidence for it. The advertising industry regards bloggers as multipliers - a property they can only have if more people read blogs than write them. Besides, many people blog about topics of general interest, from Lost and Britney Spears, consumer experiences with products and services, and the German federal elections, to human rights violations in Guantanamo or direct reports from crisis regions in the Middle East or Thailand. Don't believe it? Have a look at Technorati, they have a current list of popular topics. Today's favorites: the Oscars, Antonella Barba, and Al Gore. All topics that are relevant outside the blogosphere as well.
  • I agree with the observation regarding the statistics. The numbers cited in the media are often misleading, but that is a property of statistics and media. If you trace the numbers back to their source, you will often be disappointed.
  • "In Germany very few people read blogs." Here, too, I would like to see numbers. I am sure that a large part of the web-using population has read a blog at some point, simply because search engine queries frequently lead to blog entries. Maybe these readers are not even aware that they are reading a blog (just as the share of Wikipedia readers who do not know that Wikipedia can be edited by anyone has grown considerably). Some of my most visited entries have to do with cooking rice pudding, the machinations of the Kleeblatt publishing house, and movies. The people reading those are not the usual readers of my blog -- but an important share.
  • "Most blogs do not even have three-digit visitor numbers per day and are mostly read by friends." Agreed, and at the same time: so what? I expect that this blog here is really only read by people who know me. That may be different for individual posts, but in general it is true. And what I write mostly interests only these few -- if anyone at all. But that is OK. Blogs are often used to simplify, to enable, or simply to maintain communication with friends and acquaintances, or even family. And that is a good thing. Not every blog has to have hundreds of thousands of readers; that would not even be possible. But then, as a blogger, you also must not expect hundreds of thousands to read your entries and be influenced by them.
  • "Linking to something that is older than a week is almost blasphemy - and so most of it sinks into the archives, barely noticed." Rightly criticized. One should link into the archives more often, and write structured entries that are of long-term interest. Semantic technologies, like the ones I am developing in my own work, are meant to address exactly these construction sites. A sample chapter on semantic blogs and wikis from a recently published book on wikis and blogs gives a little insight into how one can imagine this. Unfortunately, only the first 8 pages are available online. (Warning, advertising!) Buy the book! (End of advertising) Such technologies are meant to help make blog entries available when they are relevant. A first taste is offered by the Firefox extension Blogger Web Comments by Google.

In the end, however, one argument remains above all: even if only a few people read it, and even if what bloggers do is far too often navel-gazing -- this entry included, ironically -- blogging is a technology that, for the first time in the history of humankind, actually gives this many people a concrete way to have an active voice. Whether what these people do with it is good or not is a decision to be made case by case. But the mere fact that today little Gretchen from some backwater village can upload her hand-scrawled pictures and have them immediately accessible worldwide is a step on the way towards a global society. A small one, yes, but a necessary and important one.

Back from Sheffield

What is really mean: the whole time in Sheffield it rained and the wind blew. Nothing else was to be expected in England, right?

But today, now that I am back, it is more than 10 degrees warmer there than here, and they have drizzle and sunshine instead of snow and grey-on-grey.

The good news: the talk went fantastically, and the whole review was a big success. Thanks to everyone who kept their fingers crossed. And thanks to the nutkidz, who also played a part. More on that soon.

Two years of Nodix

Nodix turned two years old today! Ta-daaa!!! More than 22,000 visitors in two years - a number that was simply unimaginable to me when I started this. Thank you, thank you, to all visitors, for your loyalty, even though I have been a bit quiet again lately - many, many thanks!

Two pieces of good news and one bad

Printing and binding the study thesis worked out. It is now with my advisor and on its way to the professor and the library. And since, when I asked my advisor whether a diploma thesis might possibly be waiting for me, he neither sent me packing nor told me to first wait and see whether the study thesis would be accepted, but instead immediately suggested that he would ask around and that we would talk about it in a few days, I take that as a positive sign. Thanks, Rainer!

The second piece of good news: my mother is going on vacation, to Croatia. She has earned it, and she will be away for about 10 days. With that, the bad news follows promptly: I have to look after the business during that time.

For everyone who does not know yet: my mother runs Stuttgart's oldest Balkan restaurant, the Alte Mira. That the food must be good is clear to anyone who has ever seen me. So for the next few days I will probably be there every day from noon to midnight, to make the vacation possible for my mother. Should anyone happen to wander into town, drop by! Alte Mira, Büchsenstraße 24, right by the Stadtmitte stop. Except probably for Monday, I will always be happy to see visitors!

Interim presentation

So, here is the new nutkidz episode. And for anyone interested: on August 1 the interim presentation of my diploma thesis will take place, on the topic of XML for Ada 95. Anyone interested can get in touch; the slides will also be available for download here afterwards.

Translations

Today I made the nutkidz a few spelling mistakes poorer and at the same time delivered the English translations that were still outstanding. On this occasion, thanks again to Buddy, who always sent the translations on time, week after week - here, see for yourself, the work was not in vain!