Nutkidz reloaded

23 May 2003

As already announced, this week's nutkidz comic arrives with a little delay: but fittingly, just in time for Matrix Reloaded! The nutrix is back!

Nutkidz are back!

The first three episodes of the nutkidz are back online at nutkidz.de! After the attack on Nodix, the backup copies had to be retrieved from my old computer in the basement. The technology has now been renewed: there is a nutkidz feed, so you can also have the webcomic delivered to your home with your RSS reader! Every webcomic should be this easy to read.

Over the next few days the remaining episodes will follow in relatively quick succession; I am planning on at least two or three per day.

Oh yes -- and from now on the nutkidz are also available in English at nutkidz.net.

Nutkidz!

Happy New Year! And here, at last, the long-announced surprise: nutkidz, a webcomic - for now, a new episode every Thursday!

More on this to follow; right now I am just deeeead tired...

Nutkidz.de

Back again. A couple of hundred emails are now waiting to be answered, so it will take a little while longer. But the internet break was put to good use: among other things, the nutkidz have been completely redesigned and, above all, the nutkidz now have their own website! www.nutkidz.de itself will be connected in the next few days, but here on Nodix you already get a preview of the new layout! Enjoy.

OK

11 May 2020

I often hear "don't go for the mediocre, go for the best!", or "I am the best, * the rest" and similar slogans. But striving for the best, for perfection, for excellence, is tiring in the best of times, never mind, forgive the cliché, in these unprecedented times.

Our brains are not wired for the best, we are not optimisers. We are naturally 'satisficers', we have evolved for the good-enough. For this insight, Herbert Simon received a Nobel prize, the only Turing Award winner to ever get one.

And yes, there are exceptional situations where only the best is good enough. But if good enough was good enough for a Turing-Award winning Nobel laureate, it is probably good enough for most of us too.

It is OK to strive for OK. OK can sometimes be hard enough, to be honest.

May is mental health awareness month. Be kind to each other. And, I know it is even harder, be kind to yourself.

Here is OK in different ways. I hope it is OK.

Oké ఓకే ਓਕੇ オーケー ओके 👌 ওকে או. קיי. Окей أوكي Օքեյ O.K.


OWL 2.0

31 May 2005

I posted this to the public OWL dev mailing list as a response to a question posed by Jim Hendler quite a while ago. I publish it here for easier reference.

Quite a while ago the question of OWL 2.0 was raised here, and I had already written two long replies with a wishlist - but both were never sent and got lost in digital nirvana, one due to a hardware failure, the second due to a software failure. Well, let's hope this one finally passes through. That's why this answer is so late.

Sorry for the lengthy post. But I tried to structure it a bit and make it readable, so I hope you find some interesting stuff here. So, here is my wishlist.

  1. I would like yet another OWL language, call it OWL RDF or OWL Superlite, or whatever. This would be like the common subset of OWL Lite and RDFS. For this, the difference between owl:Class and rdfs:Class needs to be solved in some standard way. Why is this good? It makes moving from RDF to OWL easier, as it forces you to keep Individuals, Classes and Relations in different worlds, and forgets about some of the more sophisticated constructs of RDF(S) like lists, bags and such. This is a real beginners' language, really easy to learn and implement.
  2. Defined semantics for OWL Full. It is unclear -- at least to me -- what some combinations of RDF(S) constructs and OWL DL constructs are meant to mean.
  3. Add easy reification to OWL. I know, I know, making statements about statements is meant to be the root of all evil, but I find it pretty useful. If you like, just add another group of elements to OWL, statements, that are mutually disjoint from classes, instances and relations in OWL DL, so that there is a sublanguage that enables us to speak about statements. Otherwise OWL will suck a lot in comparison to RDF(S), and RDF(S) + Rules will win, because you can't do a lot of the stuff you need to do, like saying what the source of a certain statement is, how reliable this source is, etc. Trust anyone? This is also needed to extend ontologies toward probabilistic, fuzzy or confidence-carrying models.
  4. I would love to be able to define syntactic sugar, like partitionOf (I think this is from Asun's book on Ontology Engineering). ((A, B, C) partitionOf D) means that every D is either an A or a B or a C, that every A, B or C is a D, and that A, B and C are mutually disjoint (see the sketch after this list). You can say this already, but it needs a lot of footwork. It would be nice to be able to define such shortcuts that leverage the semantics of existing constructors.
  5. That said, another form of syntactic sugar - because again you can use existing OWL constructs to reach the same goal, but it is very strenuous to do so - would be to define the UNA locally. Like either saying "all individuals in this ontology are mutually different" or "all individuals with this namespace are mutually different". I think, due to XML constraints, the first one would be the weapon of choice.
  6. I would like to be able to have several ontologies in the same file. So you can use ontologies to group a number of axioms, and you could also use the name of this group to refer to it. Oh well, using the name of an ontology as an individual, what does this mean? Does it imply any further semantics? I would like to see this clarified. Is this like named graphs?
  7. The DOM has quite nicely partitioned itself into levels and modules. Why not OWL itself? So you could have, say, a level 2 ontology for mereological questions and the like, all with well-defined semantics, for the generic questions. I am not sure there are too many generic questions, but taxonomy is one (already covered), mereology would be, and spatiotemporal and dynamic issues would be as well. Mind you, not everyone must use them, but many will need them. It would be fine to find standard answers to such generic questions.
  8. Procedural attachments would be a nice thing. Like having a standardized possibility to add pieces of code and have them executed by an appropriate execution environment on certain events or requests by the reasoner. Yes, I am totally aware of the implications on reasoning and decidability, but hey, you asked what people need, and did not ask for theoretical issues. Those you understand better.
  9. There are some ideas of others (which doesn't mean that the rest is necessarily originally mine) I would like to see integrated, like a well-defined epistemic operator, or streamlining the concrete domains to be more consistent with abstract domains, or defining domain and range _constraints_ on relations, and much more. Much of this stuff could be added optionally in the sense of point 7.
  10. And not to forget that we have to integrate with rules later, and to finally have an OWL DL query language. One goal is to make it clear what OWL offers over simply adding rules on top of RDF and ignoring the ontology layer completely.
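
To make item 4 concrete, here is a rough sketch of the axioms such a partitionOf shortcut would expand into. This is my own illustration (the function name and the functional-style output are made up, not a proposed standard syntax):

 // Sketch: expand ((A, B, C) partitionOf D) into plain OWL DL axioms,
 // emitted as functional-style syntax strings. Illustration only.
 const partitionOf = (parts, whole) => {
   const axioms = []
   // every D is an A or a B or a C
   axioms.push(`SubClassOf(${whole} ObjectUnionOf(${parts.join(' ')}))`)
   // every A, B or C is a D
   for (const part of parts) {
     axioms.push(`SubClassOf(${part} ${whole})`)
   }
   // A, B and C are mutually disjoint
   axioms.push(`DisjointClasses(${parts.join(' ')})`)
   return axioms
 }
 console.log(partitionOf(['A', 'B', 'C'], 'D').join('\n'))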

So, you see, this is quite a list, and it sure is not complete. Even if only two or three points were finally picked up I would be very happy :)

OWL luna, nicer latex, OWL/XML to Abstract Syntax, and more

After a long hiatus, due to some technical problems, I could finally create a new version of the owl tools. So, version 0.27 of the owl tools is now released. It works with the new version of KAON2, and includes six months of bug fixing, but also a number of new features that have introduced a whole new world of new, exciting bugs as well.

The owl latex support was improved greatly. Translation of an owl ontology is now done more carefully, and the user can specify much more of the result than before.

A new tool is owl luna - luna as in local unique name assumption. It adds an axiom to your ontology that states that all individuals are different from each other. Since most ontology editors do not allow you to do this automatically, here you have a nice maintenance tool to make your ontology much less ambiguous.
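
For illustration, this is the kind of axiom such a local unique name assumption boils down to, here written out in OWL Abstract Syntax (the individual names are made up):

 // Sketch only: given the individuals of an ontology, emit one axiom
 // stating that they are all mutually different.
 const luna = (individuals) => `DifferentIndividuals(${individuals.join(' ')})`
 console.log(luna(['ex:Alice', 'ex:Bob', 'ex:Carol']))
 // -> DifferentIndividuals(ex:Alice ex:Bob ex:Carol)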

The translations of OWL/RDF to OWL/XML and back have been joined in one new tool, called owl syntax, which also allows you to translate owl ontologies to OWL Abstract Syntax, a much nicer syntax for owl ontologies.

owl dlpconvert has been extended: it now also serializes its results as RuleML if you like. So you can just pipe your ontologies to RuleML.

So, the tools have become both sharper and more numerous, making your toolbelt for working with owl in daily life more usable. Get the new owl tools now. And if you peek into the source code, the code has undergone a major cleanup, and you will also see the new features I am working on, which have to do with ontology evaluation and more.

Have fun with the tools! And send me your comments, wishes, critiques!

On the competence of conspiracists

“Look, I’ll be honest, if living in the US for the last five years has taught me anything, it is that any government assemblage large enough to try to control a big chunk of the human population would in no way be consistently competent enough to actually cover it up. Like, we would have found out in three months and it wouldn’t even have been because of some investigative reporter, it would have been because one of the lizards forgot to put on their human suit one day and accidentally went out to shop for a pint of milk and like, got caught in a tik-tok video.” -- Os Keyes, WikidataCon, Keynote "Questioning Wikidata"

One Day in Europe

From the series Films in 50 Words

Last Friday the Delphi hosted a small premiere of this picture, complete with the lead actors, the director, and all that kind of stuff. The trappings were nice, but here it is still only about the film itself, and it was - drum roll! - funny, fast, rich in imagery, excellently scored, convincingly acted, with interesting characters, and plenty more.

Four episodes in Moscow, Istanbul, Santiago de Compostela, and Berlin tell of small events happening in Europe. Precisely because it is not about dramatic, existential matters, the viewer has the leisure and the film has the time to show off all the many details.

Also very nice: asked what he thought of Europe, the director answered: "I do not presume to prophesy about a united Europe. I do not know whether it will be there in 20 or 40 or 60 years..." But that it will be there he did not seem to doubt for a moment. The film is much the same: the unifying elements of Europe are the confusion of languages and football (and theft and insurance fraud, fine).

In short: if you get the chance, watch it! The film is worth it.

One world. One web.

I am in Beijing at the opening of the WWW2008 conference. Like all the WWW conferences I have been to before, it is amazing. The opening ceremony was preceded by a beautiful dance, combining tons of symbols. First a woman in a traditional Chinese dress, then eight dancers in astronaut uniforms, a big red flag with "Welcome to Beijing" on it (but not on the other side, when he came back), and then all of them together... beautiful.

Boris Motik's paper is a best paper candidate! Yay! Congratulations.

I'd rather listen to the keynote now :)

Blogging from my XO One.

Oscar winning families

Yesterday, when Jamie Lee Curtis won her Academy Award, I learned that both her parents were also nominated for Academy Awards. Which led to the question: who else?

I asked Wikidata, which lists four others:

  • Laura Dern
  • Liza Minnelli
  • Nora Ephron
  • Sean Astin

Only one of them belongs to the even more exclusive club of people who won an Academy Award and whose parents both did too: Liza Minnelli, daughter of Vincente Minnelli and Judy Garland.

Wikidata query
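
For those who want to adapt the query, here is a sketch of how one could ask the Wikidata Query Service for such people from JavaScript. The property and item IDs (P1411 "nominated for", P22 father, P25 mother, Q19020 Academy Awards) and the class path are my reconstruction, not necessarily the query linked above:

 // Sketch: people nominated for an Academy Award whose father and mother
 // were also nominated. IDs and class path are assumptions; adjust as needed.
 const query = `
 SELECT DISTINCT ?person ?personLabel WHERE {
   ?person wdt:P1411 ?a1 . ?a1 wdt:P279* wd:Q19020 .
   ?person wdt:P22 ?father . ?father wdt:P1411 ?a2 . ?a2 wdt:P279* wd:Q19020 .
   ?person wdt:P25 ?mother . ?mother wdt:P1411 ?a3 . ?a3 wdt:P279* wd:Q19020 .
   SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
 }`
 const url = 'https://query.wikidata.org/sparql?format=json&query=' +
   encodeURIComponent(query)
 fetch(url, { headers: { 'Accept': 'application/sparql-results+json' } })
   .then(response => response.json())
   .then(data => {
     for (const row of data.results.bindings) {
       console.log(row.personLabel.value)
     }
   })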

Also interesting: List of Academy Award-winning families

Oscars

So, tonight it is finally Oscar night again. It was a great year for the movies! And here come my personal favorites, so that I can still embarrass myself thoroughly tomorrow (in parentheses, wherever it differs from my favorite, you will find the candidate I would have voted for and would most like to see win - unfortunately it differs from the candidate I consider most likely):

  • Original screenplay: Lost in Translation
  • Adapted screenplay: The Lord of the Rings: The Return of the King
  • Visual effects: The Lord of the Rings: The Return of the King
  • Sound mixing: The Lord of the Rings: The Return of the King
  • Sound editing: Pirates of the Caribbean: The Curse of the Black Pearl (Finding Nemo)
  • Animated short film: haven't seen any of them, but I am still betting on Boundin' and hoping for Destino
  • Best song: Annie Lennox for Into the West in The Lord of the Rings: The Return of the King
  • Original score: The Lord of the Rings: The Return of the King
  • Makeup: The Lord of the Rings: The Return of the King
  • Editing: City of God
  • Costume design: The Last Samurai (The Lord of the Rings: The Return of the King)
  • Cinematography: Master and Commander: The Far Side of the World (City of God)
  • Art direction: The Lord of the Rings: The Return of the King
  • Best animated feature: Finding Nemo
  • Supporting actress: Renee Zellweger in Cold Mountain (Marcia Gay Harden in Mystic River)
  • Leading actress: Charlize Theron in Monster (Diane Keaton in Something's Gotta Give - let it be mentioned here once more that Keisha Castle-Hughes' nomination for Whale Rider is simply incredible! A shame she has no chance of winning)
  • Supporting actor: Tim Robbins in Mystic River (Benicio Del Toro in 21 Grams)
  • Leading actor: Sean Penn in Mystic River (Johnny Depp in Pirates of the Caribbean: The Curse of the Black Pearl, by the narrowest of margins ahead of the brilliant Bill Murray in Lost in Translation)
  • Directing: Peter Jackson for The Lord of the Rings: The Return of the King (although Fernando Meirelles for City of God and Sofia Coppola for Lost in Translation would have more than deserved it as well!)
  • Best picture: The Lord of the Rings: The Return of the King (a shame for Lost in Translation, but even in the parentheses it would be The Lord of the Rings: The Return of the King. Why couldn't Miss Coppola simply have waited a year with her film?)

About a few categories (Best documentary short subject, Best documentary feature, Best live action short film, Best foreign language film of the year) I cannot say anything, because I do not know the films. Of the categories above, however, I know most of the candidates (except for Seabiscuit and Cold Mountain). So at least it is an informed tip!
So, anyone who sends me an email with their picks before tonight (or is coming over here anyway to sit through the long Oscar night ;) - I am happy to make a bet with you. For a Kinder Surprise egg or, alternatively, a peanut, whichever you prefer.

Oscars - the results

Well, I didn't do so badly yesterday after all. Of 20 picks, 17 were right, and my predictions matched the outcome for all the important Oscars. Excellent! So drop by again next year, when I once more prophesy the Oscars with my prophetic gifts... I mean, two of my three wrong picks I owe only to the thought "The Lord of the Rings surely cannot convert all of its nominations!"

It can.

Our four freedoms for our technology

(This is a draft. Comments are welcome. This is not meant as an attack on any person or company individually, but at certain practices that are becoming increasingly prevalent)

We are not allowed to use the devices we paid for in the ways we want. We are not allowed to use our own data in the way we want. We are only allowed to use them in the way the companies who created the devices and services allow us.

Sometimes these companies are nice and give us a lot of freedom in how to use the devices and data. But often they don’t. They close them down for all kinds of reasons. They may say it is for your protection and safety. They might admit it is for profit. They may say it is for legal reasons. But in the end, you are buying a device, or you are creating some data, and you are not allowed to use that device and that data in the way you want to, you are not allowed to be creative.

The companies don’t want you to think of the devices that you bought and the data that you created as your devices and your data. They want you to think of them as black boxes that offer you services they create for you. They don’t want you to think of a Ring doorbell as a camera, a microphone, a speaker, and a button, but they want you to think of it as providing safety. They don’t want you to think of the garage door opener as a motor and a bluetooth module and a wifi module, but as a garage door opening service, and the company wants to control how you are allowed to use that service. Companies like Chamberlain and SkyLink and Genie don’t allow you to write a tool to check on your garage door, and to close or open it, but they make deals with Google and Amazon and Apple in order to integrate these services into their digital assistants, so that you can use them in the way these companies have agreed on together, through the few paths through which these digital assistants are available. The digital assistant that you buy is not a microphone and a speaker and maybe a camera and maybe a screen that you buy and use as you want, but you buy a service that happens to have some technical ingredients. But you cannot use that screen to display what you want. Whether you can watch your Amazon Prime show on the screen of a Google Nest Hub depends on whether Amazon and Google have an agreement with each other, not on whether you have paid for access to Amazon Prime and you have paid for a Google Nest Hub. You cannot use that camera to take a picture. You cannot use that speaker to make it say something you want it to say. You cannot use the rich plethora of services on the Web, and you cannot use the many interesting services these digital assistants rely on, in novel and creative combinations.

These companies don’t want you to think of the data that you have created and that they have about you as your data. They don’t want you to think about this data at all. They just want you to use their services in the way they want you to use their services. On the devices they approve. They don’t want you to create other surfaces that are suited to the way you use your data. They don’t want you to decide on what you want to see in your feed. They don’t want you to be able to take a list of your friends and do something with it. They will say it is to protect privacy. They will say that it is for safety. That is why you cannot use the data you and your friends have created. They want to exactly control what you can and cannot do with the data you and your friends have created. They want to control how many ads you must see in order to be allowed to see your friends’ posts. They don’t want anyone else to have the ability to provide you with creative new interfaces to your feed. They don’t want you yourself to have the ability to look at your feed and do whatever you want with it.

Those are devices you paid for.

These are data you and your friends have created.

And more and more we are losing our freedom of using our devices and our data as we like.

It would be impossible to invent email today. It would be impossible to invent the telephone today. Both are protocols that allow everyone to communicate with anyone no matter what their email provider or their phone is. Try reading your friend’s Facebook feed on Instagram, or send a direct message from your Twitter account to someone on WhatsApp, or call your Skype contact on Facetime.

It would be impossible to launch the Web today - many companies don’t want you browsing the Web. They want you to be inside of your Facebook feed and consume your content there. They want you to be on your Twitter feed. They don’t want you to go to the Website of the New York Times and read an article there, they don’t want you to visit the Website of your friend and read their blog there. They want you to stay on their apps. Per default, they open Websites inside their app, and not in your browser, so you are always within their app. They don’t want you to experience the Web. The Web is dwindling and all the good things on it are being recut and rebundled within the apps and services of tech companies.

Increasingly, we are seeing more and more walls in the world. Already, it is becoming impossible to pay and watch certain movies and shows without buying into a full subscription in a service. We will likely see the day where you will need a specific device to watch a specific movie. Where the only way to watch a Disney+ exclusive movie is on a Disney+ tablet. You don’t think so? Think about how easy it is to get your Kindle books onto another Ebook reader. How do you enable a skill or capability available in Alexa on your Nest smart speaker? How can you search through the books that you bought and are in your digital library, besides by using a service provided by the company that allows you to search your digital library? When you buy a movie today on YouTube or on iMovies, what do you own? What are you left with when the companies behind these services close that service, or go out of business altogether?

Devices and content we pay for, data we and our friends create, should be ours to use in empowering and creative ways. Services and content should not be locked in with a certain device or subscription service. The bundling of services, content, devices, and locking up user data creates monopolies that stifle innovation and creativity. I am not asking to give away services or content or devices for free, I am asking to be allowed to pay for them and then use them as I see fit.

What can we do?

As far as I can tell, the solution, unfortunately, seems to be to ask for regulation. The market won’t solve it. The market doesn’t solve monopolies and oligopolies.

But don’t ask to regulate the tech giants individually. We don’t need a law that regulates Google and a law that regulates Apple and a law that regulates Amazon and a law to regulate Microsoft. We need laws to regulate devices, laws to regulate services, laws to regulate content, laws that regulate AI.

Don’t ask for Facebook to be broken up because you think Mark Zuckerberg is too rich and powerful. Breaking up Facebook, creating Baby Books, will ultimately make him and other Facebook shareholders richer than ever before. But breaking up Facebook will require the successor companies to work together on a protocol to collaborate. To share data. To be able to move from one service to another.

We need laws that require that every device we buy can be made fully ours. Yes, sure, Apple must still be allowed to provide us with the wonderful smooth User Experience we value Apple for. But we must also be able to access and share the data from the sensors in our devices that we have bought from them. We must be able to install and run software we have written or bought on the devices we paid for.

We need laws that require that our data is ours. We should be able to download our data from a service provider and use it as we like. We must be allowed to share with a friend the parts of our data we want to share with that friend. In real time, not in a dump download hours later. We must be able to take our social graph from one social service and move to a new service. The data must be sufficiently complete to allow for such a transfer, and not crippled.

We need laws that require that published content can be bought and used by us as we like. We should be able to store content on our hard disks. To lend it to a friend. To sell it. Anything I can legally do with a book I bought I must be able to legally do with a movie or piece of music I bought online. Just as with a book you are not allowed to give away the copies if the work you bought still enjoys copyright.

We need laws that require that services and capabilities are unbundled and made available to everyone. Particularly as technological progress with regard to AI, Quantum computing, and providing large amounts of compute becomes increasingly an exclusive domain of trillion dollar companies, we must enable other organizations and people to access these capabilities, or run the risk that sooner or later all and any innovation will be happening only in these few trillion dollar companies. Just because a company is really good at providing a specific service cheaply, it should not be allowed to unfairly gain advantage in all related areas and products and stifle competition and innovation. This company should still be allowed to use these capabilities in their products and services, but so should anyone else, fairly priced and accessible to everyone.

We want to unleash creativity and innovation. In our lifetimes we have seen the creation of technologies that would have been considered miracles and impossible just decades ago. These must belong to everybody. These must be available to everyone. There cannot be equity if all of these marvellous technologies can be only wielded by a few companies on the West coast of the United States. We must make them available to all the people of the world: the people of the Indian subcontinent, the people of sub-Saharan Africa, the people of Latin America, and everyone else. They all should own the devices they paid for, the data they created, the content they paid for. They all should have access to the same digital services and capabilities that are available to the engineers at Amazon or Google or Microsoft. The universities and research centers of the world should be able to access the same devices and services and extend them with their novel and creative ideas. The scrappy engineers in Eastern Europe and India and Nigeria and Central Asia should be able to call the AI models trained by Google and Microsoft and use them in novel ways to run their devices and chip-powered cars and agricultural machines. We want a world of freedom and tinkering, where creativity and innovation are unleashed, and where everyone can contribute their ideas and their creativity, and where everyone can build their fortune.


PANIC MODE ON!

Help! Little sister has been abducted by aliens from the planet Zirkonia-B (on the right, a wanted poster - of little sister, not of the aliens!). Does that mean there will be no nutkidz this week? No, much worse! I, the completely artistically untalented Denny, will make this week's nutkidz comic. You see, the panic is entirely justified: from Thursday on, visitor numbers on this website will drop rapidly, nobody will look anymore, Project 100,000 will be over, and my plans for world domination will be definitively down the drain. This is not good!

There is still one way to change this: friends of Nodix, fans of nutkidz: YOU create the masterpiece that shall become nutkidz comic #10! You have until Wednesday, March 5, 23:59 (extensions only upon request by email) - so less than 36 hours - to create a nutkidz comic and thereby heroically prevent a comic of mine from seeing the light of the online world on Thursday. Think carefully about whether you really want to just lean back and let the catastrophe come over us...

Besides, that leaves me more time to work on the heroic rescue of my sister. Where is Bruce Willis when you need him?


An image is still missing here.

Pagerank up

So, finally tinkering with Nodix again. After being quite diligent yesterday and working on my term paper with concentration, I treated myself to a bit of tinkering with Nodix to finish off the day. In the process, the first level of the new hierarchy was finally introduced as well. Does that mean there are any visible changes?
Actually, no. You will still wander around terribly confused while navigating Nodix, I'm sorry. Only once everything is finished will Nodix read pleasantly again, more pleasantly than ever before.

At this point, many thanks to Google, who today improved my Google ranking from 4 to 5/10! The constant work on Nodix seems to be paying off.

Pah!

Do not go to this page. Under no circumstances look at the gruesome photo of me there. The actual photo, which I then used for my new website, looks much better. So ignore this page. Got it?

On the other hand, I finally managed to actually put some content behind the about Denny link, which has been around since the early days of Nodix. The page now points to the extremely writing-friendly site denny.vrandecic.de. Eventually a CV, a list of publications and the like are supposed to gather there too, in other words what one understands as an actual homepage in the classic sense. So, now for real, off towards the Neandertal...

Papaphobia

In a little knowledge engineering exercise, I was trying to add the causes of a phobia to the respective Wikidata items. There are currently about 160 phobias in Wikidata, and only a few listed in a structured way what they are afraid of. So I was going through them, trying to capture it in a structured way. Here's a list of the current state:

Now, one of those phobias was Papaphobia - the fear of the pope. Is that really a thing? I don't know. The CDC does not seem to have an entry on it. On the Web, in the meantime, some pages have obviously taken to mining lists of phobias and creating advertising pages that "help" you with Papaphobia - such as this one:

This page is likely entirely auto-generated. I doubt that they have "clients for papaphobia in 70+ countries", whom they helped "in complete discretion" within a single day! "People with severe fears and phobias like papaphobia (which is in fact the formal diagnostic term for papaphobia) are held prisoners by their phobias."

This site offers more, uhm, useful information.

"Group psychotherapy can also help where individuals share their experience and, in the process, understand and recover from their phobia." Really? There are enough cases that we can even set up a group therapy?

Now, maybe I am entirely off here - maybe papaphobia is really a thing. Searching Scholar, I couldn't find any medical sources (the term is mentioned in a number of sociological and historical works, to express general sentiments in a population or government against the authority of the pope, but I could not find any mentions of it in actual medical literature).

Now could those pages up there be benign cases of jokes? Or are they trying to scam people with promises to heal their actual fears, and they just didn't curate the list of fears sufficiently, because, really, you wouldn't find this page unless you actually search for this term?

And now what? Now what if we know these pages are made by scammers? Do we report them to the police? Do we send a tip to journalists? Or should we just do nothing, allowing them to scam people with actual fears? Well, by publishing this text, maybe I'll get a few people warned, but it won't reach the people it has to reach at the right time, unfortunately.

Also, was it always so hard to figure out what is real and what is not? Does papaphobia exist? Such a simple question. How should we deal with it on Wikidata? How many cases are there, if it exists? Did it get worse for people with papaphobia now that we have two people living who have been made pope?

My assumption now is that someone was basically working on a corpus, looking for words ending in -phobia, in order to generate a list of phobias. And then the term papaphobia from sociological and historical literature popped up, and it landed in some list, and was repeated in other places, etc., also because it is kind of a funny idea, and so a mixture of bad research and joking bubbled through, and rolled around on the Web for so long that it looks like it is actually a thing, to the point that there are now organizations who will gladly take your money (CTRN is not the only one) to treat you for papaphobia.

The world is weird.

Partial copyright for an AI generated work

Interesting development in US cases around copyright and AI: author Elisa Shupe asked for copyright registration on a book that was created with the help of generative AI. Shupe stated that not giving her registration would be disability discrimination, since she would not have been able to create her work otherwise. On appeal, her work was partially granted protection for the “selection, coordination, and arrangement of text generated by artificial intelligence”, without reference to the disability argument.

Pastir Loda

Vladimir Nazor is likely the most famous author from the island of Brač, the island my parents are from. His most acclaimed book seems to be Pastir Loda, Loda the Shepherd. It tells the story of a satyr that, through accidents and storms, was stranded on the island of Brač, and how he lived on Brač for the next almost two thousand years.

It is difficult to find many of his works, they are often out of print. And there isn't much available online, either. Since Nazor died in 1949, his works are in the public domain. I acquired a copy of Pastir Loda from an antique book shop in Zagreb, which I then forwarded to a friend in Denmark who has a book scanner, and who scanned the book so I can make the PDF available now.

The book is written in Croatian. There is a German translation, but that won't get into the public domain until 2043 (the translator lived until 1972), and according to WorldCat there is a Czech translation, and according to Wikipedia a Hungarian translation. For both I don't know who the translator is, and so I don't know the copyright status of these translations. I also don't know if the book has ever been translated to other languages.

I wish to find the time to upload and transcribe the content on Wikisource, and then maybe even do a translation of it into English. For now I upload the book to archive.org, and I also make it available on my own Website. I want to upload it to Wikimedia Commons, but I immediately stumbled upon a first issue: it seems that to upload it to Commons the book needs to have been published before 1928 and the author has to be dead for more than 70 years (I think that should be an or). I am checking on Commons whether I can upload it or not.

Until then, here's the Download:


Pendeln

"Und, pendelst Du immer noch?"
"Na ja, eigentlich lege ich eher Tarot-Karten als zu pendeln, aber manchmal schon."
"Ich meinte zur Arbeit."

Thanks for the ride home!

Perspektive Deutschland

I like this survey and have been taking part for years: I can only recommend pushing your own opinion in there as well. Its patron is Richard von Weizsäcker, known to the older among us as perhaps the most respected Federal President of the last quarter century. So, take part! I believe you can even win something...

Required reading

The rise of China to a political and economic world power

Pforzheim

Just now, while following a few links to blogs...

Ich: "Schon wieder Pforzheim! Ich glaube, Pforzheim hat die größte Blogdichte der Welt..."
WG-Mitbewohnerin: "Vielleicht haben die dort sonst nichts zu tun?"

I am also quite sure that people from Pforzheim all have a great, great sense of humor... ;)

Philosophy passed

Hello, things are really quite chaotic around here! You would think that with the last exams done, calm would finally return, but I have not even found the time to sort my books again. The first day I would have had a little time was the day before yesterday - but announcing news on April 1st is always so risky...

So, first of all, so that everybody knows (thanks for the many inquiries, by the way!) - yes, the exams are over, and I passed them! The oral exam with a 1.3, and the two term papers, which can also be read here on the website, each even with a 1.0 - very satisfying!
With that, my studies are finished and I am now very officially a Diplom-Informatiker! The certificate, however, still seems to need a little time, since my diploma thesis itself still has no grade - oh well, it's always the same. And now comes the exciting search for the right job...
Wish me luck! And if you hear of an interesting position, feel free to whisper it to me.

Philosophische Grundlagen

I gave a talk on Philosophical Foundations of Ontologies last week at the AIFB. I prepared it in German (and thus, all the slides were in German), and just before I started I was asked whether I could give the talk in English.

Having never heard a single lesson in philosophy in English and having read English philosophy only on Wikipedia before, I said yes. Nevertheless, the talk was very well received, and so I decided to upload it. It's pure evil PowerPoint, no semantic slides format, and I didn't yet manage to translate it to English. If anyone can offer me some help with that - I simply don't know many of the technical terms, and I don't have ready access to the sources - I would be very happy! Just drop me a note, please.

Philosophische Grundlagen der Ontologie (PowerPoint, approx. 4.5 MB)

Broken link

Playing around with Aliquot

Warning! Very technical, not particularly insightful, and overly long post about hacking, discrete mathematics, and rabbit holes. I don't think that anything here is novel, others have done more comprehensive work, and found out more interesting stuff. This is not research, I am just playing around.

Ernest Davis is an NYU professor, author of many publications (including “Rebooting AI” with Gary Marcus), and a friend on Facebook. Two days ago he posted about Aliquot sequences, and about the fact that it is yet unknown how they behave.

What is an Aliquot sequence? Given a number, take all its proper divisors, and add them up. That’s the next number in the sequence.

It seems that most of these either lead to 0 or end in a repeating pattern. But it may also be that they just keep on going indefinitely. We don’t know if there is any starting number for which that is the case. But the smallest candidate for that is 276.
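
For example, the proper divisors of 276 are 1, 2, 3, 4, 6, 12, 23, 46, 69, 92, and 138; they add up to 396, so the second number in the sequence starting at 276 is 396.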

So far, we know the first 2,145 steps for the Aliquot sequence starting at 276. That results in a number with 214 digits. We don’t know the successor of that number.

I was curious. I know that I wouldn’t be able to figure this out quickly, or, probably ever, because I simply don’t have enough skills in discrete mathematics (it’s not my field), but I wanted to figure out how much progress I can make with little effort. So I coded something up.

Anyone who really wanted to take a crack at this problem would probably choose C. Or, on the other side of the spectrum, Mathematica, but I don’t have a license and I am lazy. So I chose JavaScript. I had two hunches for going with JavaScript instead of my usual first language, Python, which would pay off later, but I will reveal them later in this post.

So, my first implementation was very naïve (source code). The function that calculates the next step in the Aliquot sequence is usually called s in the literature, so I kept that name:

 const divisors = (integer) => {
   const result = []
   for(let i = BigInt(1); i < integer; i++) {
     if(integer % i == 0) result.push(i)
   }
   return result
 }
 const sum = x => x.reduce(
   (partialSum, i) => partialSum + i, BigInt(0)
 )
 const s = (integer) => sum(divisors(integer))

I went for BigInt, not integer, because Ernest said that the 2,145th step had 214 digits, and the standard integer numbers in JavaScript stop being exact before we reach 16 digits (at 9,007,199,254,740,991, to be exact), so I chose BigInt, which supports arbitrarily long integer numbers.
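
A quick illustration of the difference (the numbers around 2^53 are where plain numbers start to drift):

 // Past Number.MAX_SAFE_INTEGER, plain numbers silently lose precision;
 // BigInt literals (the trailing n) stay exact.
 console.log(Number.MAX_SAFE_INTEGER)  // 9007199254740991
 console.log(9007199254740992 + 1)     // 9007199254740992 (off by one)
 console.log(9007199254740992n + 1n)   // 9007199254740993n (exact)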

The first 30 steps each ran in under a second on my decade-old 4-core Mac, occupying one of the cores, reaching 8 digits, but already the 36th step took longer than a minute - and we only had 10 digits so far. Worrying about the limits of integers turned out to be a bit premature: with this approach I would probably not reach that limit in a lifetime.

I dropped BigInt and just used normal integers (source code). That gave me a 10x-40x speedup! Now the first 33 steps were faster than a second, reaching 9 digits, and it took until the 45th step, with 10 digits, for the first one to take longer than a minute. Unsurprisingly, a constant factor speedup wouldn’t do the trick here; we’re fighting against an exponential problem after all.

It was time to make the code less naïve (source code), and the first idea was not to check for every number smaller than the target integer whether it divides it (line 3 above), but only for numbers up to half of the target integer.

 const divisors = (integer) => {
   const result = []
   const half = integer / 2
   for(let i = 1; i <= half; i++) {
     if(integer % i == 0) result.push(i)
   }
   return result
 }

Tiny change. And exactly the expected impact: it ran twice as fast. Now the first 34 steps ran under one second each (9 digits), and the first one to take longer than a minute was the 48th step (11 digits).

Checking up to half of the target still seemed excessive. After all, for factorization we only need to check up to the square root. That should be a more than constant speedup. And once we have all the factors, we should be able to quickly reconstruct all the divisors. Now, this is the point where I have to admit that I have a cold or something, and the code for creating the divisors from the factors is probably overly convoluted and slow, but you know what? It doesn’t matter. The only thing that matters will be the speed of the factorization.

So my next step (source code) was a combination of a still quite naïve approach to factorization, with another function that recreates all the divisors.

 const factorize = (integer) => {
   const result = [ 1 ]
   let i = 2
   let product = 1
   let rest = integer
   let limit = Math.ceil(Math.sqrt(integer))
   while (i <= limit) {
     if (rest % i == 0) {
       result.push(i)
       product *= i
       rest = integer / product
       limit = Math.ceil(Math.sqrt(rest))
     } else {
       i++
     }
   }
   result.push(rest)
   return result
 }
 const divisors = (integer) => {
   const result = [ 1 ]
   const inner = (integer, list) => {
     result.push(integer)
     if (list.length === 0) {
       return [ integer ]
     }
     const in_results = [ integer ]
     const in_factors = inner(list[0], list.slice(1))
     for (const f of in_factors) {
       result.push(integer*f)
       result.push(f)
       in_results.push(integer*f)
       in_results.push(f)
     }
     return in_results
   }
   const list = factorize(integer)
   inner(list[0], list.slice(1))
   const im = [...new Set(result)].sort((a, b) => a - b)
   return im.slice(0, im.length-1)
 }

That made a big difference! The first 102 steps all were faster than a second, reaching 20 digits! That’s more than 100x speedup! And then, after step 116, the thing crashed.

Remember, integer only does well until 16 digits. The numbers were just too big for the standard integer type. So, back to BigInt. The availability of BigInt in JavaScript was my first hunch for choosing JavaScript (although that would have worked just fine in Python 3 as well). And that led to two surprises.

First, sorting arrays with BigInts is different from sorting arrays with integers. Well, I already find sorting arrays of integers a bit weird in JavaScript. If you don’t specify otherwise, it sorts numbers lexicographically instead of by value:

 [ 10, 2, 1 ].sort() == [ 1, 10, 2 ]

You need to provide a custom sorting function in order to sort the numbers by value, e.g.

 [ 10, 2, 1 ].sort((a, b) => a-b) == [ 1, 2, 10 ]

The same custom sorting functions won’t work for BigInt, though. The custom function for sorting requires an integer result, not a BigInt. We can write something like this:

 .sort((a, b) => (a < b)?-1:((a > b)?1:0))

The second surprise was that BigInt doesn’t have a square root function in the standard library. So I needed to write one. Well, Newton’s method is quickly implemented, or copied and pasted (source code).
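
For reference, a minimal integer square root for BigInt via Newton’s method looks like this (my own version, not necessarily the one linked in the source code):

 // Returns the floor of the square root of a non-negative BigInt.
 const bigIntSqrt = (n) => {
   if (n < 2n) return n
   let x = n
   let y = (x + 1n) / 2n
   while (y < x) {
     x = y
     y = (x + n / x) / 2n
   }
   return x
 }
 console.log(bigIntSqrt(10n ** 40n))  // 100000000000000000000n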

Now, with switching to BigInt, we get the expected slowdown. The first 92 steps run faster than a second, reaching 19 digits, and then the first step to take longer than a minute is step 119, with 22 digits.

Also, the actual sequences indeed started diverging, due to the inaccuracy of large JavaScript integers: step 83 resulted in 23,762,659,088,671,304 using integers, but 23,762,659,088,671,300 using BigInt. And whereas that looks like a tiny difference in only the last digit, the number for the 84th step already showed what a big difference that makes: 20,792,326,702,587,410 with integers, and 35,168,735,451,235,260 with BigInt. The two sequences went entirely off from each other.

What was also very evident is that at this point some numbers took a long time, and others were very quick. This is what you would expect from a more sophisticated approach to factorization: it depends on the number and size of the factors. For example, calculating step 126 required factorizing the number 169,306,878,754,562,576,009,556, leading to 282,178,131,257,604,293,349,484, and that took more than 2 hours with that script on my machine. But then in step 128 the result 552,686,316,482,422,494,409,324 was calculated from 346,582,424,991,772,739,637,140 in less than a second.

At that point I also started taking some of the numbers, and googled them, surfacing a Japanese blog post from ten years ago that posted the first 492 numbers, and also confirming that the 450th of these numbers corresponds to a published source. I compared the list with my numbers and was happy to see they corresponded so far. But I guesstimated I would not in a lifetime reach 492 steps, never mind the actual 2,145.

But that’s OK. Some things just need to be let rest.

That’s also when I realized that I was overly optimistic because I simply misread the number of steps that have been already calculated when reading the original post: I thought it was about 200-something steps, not 2,000-something steps. Or else I would have never started this. But now, thanks to the sunk cost fallacy, I wanted to push it just a bit further.

I took the biggest number that was posted on that blog post, 111,953,269,160,850,453,359,599,437,882,515,033,017,844,320,410,912, and let the algorithm work on that. No point in calculating steps 129, which is how far I have come, through step 492, if someone else already did that work.

While the computer was computing, I leisurely looked for libraries for fast factorization, but I told myself that there was no way I was going to install some scientific C library for this. And indeed, I found a few, such as the msieve project. Unsurprisingly, it was in C. But I also found a Website, CrypTool-Online, with msieve on the Web (that’s pretty much one of the use cases I hope Wikifunctions will also support rather soonish). And there, I could not only run the numbers I had already calculated locally, getting results at subsecond speed that had taken minutes and hours on my machine, but the largest number from the Japanese website was also factorized in seconds.

That just shows how much better a good library is than my naïve approach. I was slightly annoyed and challenged by how much faster it was. It probably also runs on some fast machine in the cloud. Pretty brave to put a site like that up, and potentially have other people profit from the factorization of large numbers on your hardware for, e.g., Bitcoin mining.

The site is fortunately Open Source, and when I checked the source code I was surprised, delighted, and I understood why they would make it available through a Website: they don’t factorize the number in the cloud, but on the edge, in your browser! If someone uses a lot of resources, they don’t mind: it’s their own resources!

They took the msieve C library and compiled it to WebAssembly. And now I was intrigued. That’s something that’s useful for my work too, to better understand WebAssembly, as we use that in Wikifunctions as well, although for now server side. So I rationalized, why not see if I can get that to run on Node.

It was a bit of guessing and hacking. The JavaScript binding was written to be used in the browser, and the binary was supposed to be loaded through fetch. I guessed a few modifications, replacing fetch with Node’s FS, and managed to run it in Node. The hacks are terrible, but again, it doesn’t really matter as long as the factorization sped up.
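
For the curious, the kind of change involved looks roughly like this. It is a generic sketch with a hypothetical file name and a placeholder import object, not the actual msieve glue code:

 // Browser-style loading, as the binding expected it:
 //   const bytes = await (await fetch('msieve.wasm')).arrayBuffer()
 // Node-style replacement: read the binary from disk instead.
 const fs = require('fs')
 const main = async () => {
   const bytes = fs.readFileSync('./msieve.wasm')  // hypothetical path
   const imports = { env: {} }                     // placeholder imports
   const { instance } = await WebAssembly.instantiate(bytes, imports)
   // instance.exports now holds the functions compiled from C
   console.log(Object.keys(instance.exports))
 }
 main()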

And after a bit of trying and experimenting, I got it running (source code). And that was my second hunch for choosing JavaScript: it has great integration for using WebAssembly, and I figured it might come in handy to replace the JavaScript based solution. And now indeed, the factorization was happening in WebAssembly. I didn’t need to install any C libraries, no special environments, no nothing. I could just run Node, and it worked. I am absolutely positive there is cleaner code out there, and I am sure I mismanaged that code terribly, but I got it to run. At the first run I found that it added an overhead of 4-5 milliseconds on each step, making the first few steps much slower than with pure JavaScript. That was a moment of disappointment.

But then: the first 129 steps, which I was waiting hours and hours to run, zoomed by before I could even look. Less than a minute, and all the 492 steps published on the Japanese blog post were done, allowing me to use them for reference and compare for correctness so far. The overhead was irrelevant, even across the 2,000 steps the overhead wouldn’t amount to more than 10 seconds.

The first step that took longer than a second was step 596, working on a 58 digit number. All the first 595 steps took less than a second each. The first step that took more than a minute, was step 751, a 76 digit number, taking 62 seconds, factorizing 3,846,326,269,123,604,249,534,537,245,589,642,779,527,836,356,985,238,147,044,691,944,551,978,095,188. The next step was done in 33 milliseconds.

The first 822 steps took an hour. Step 856 was reached after two hours, so that’s another 34 steps in the second hour. Unexpectedly, things slowed down again. Using faster machines, modern architectures, GPUs, TPUs, potentially better algorithms, such as CADO-NFS or GGNFS, all of that could speed that up by a lot, but I am happy how far I’ve gotten with, uhm, little effort. After 10 hours, we had 943 steps and a 92 digit number, 20 hours to get to step 978 and 95 digits. My goal was to reach step 1,000, and then publish this post and call it a day. By then, a number of steps already took more than an hour to compute. I hit step 1000 after about 28 and a half hours, a 96 digit number: 162,153,732,827,197,136,033,622,501,266,547,937,307,383,348,339,794,415,105,550,785,151,682,352,044,744,095,241,669,373,141,578.

I rationalize this all through “I had fun” and “I learned something about JavaScript, Node, and WebAssembly that will be useful for my job too”. But really, it was just one of these rabbit holes that I love to dive in. And if you read this post so far, you seem to be similarly inclined. Thanks for reading, and I hope I didn’t steal too much of your time.

I also posted a list of all the numbers I calculated so far, because I couldn’t find that list on the Web, and I found the list I found helpful. Maybe it will be useful for something. I doubt it. (P.S.: the list was already on the Web, I just wasn't looking careful enough. Both, OEIS and FactorDB have the full list.)

I don’t think any of the lessons here are surprising:

  • for many problems a naïve algorithm will take you to a good start, and might be sufficient
  • but never fight an exponential algorithm, once it gets big enough, with constant speedups such as faster hardware. It’s a losing proposition
  • But hey, first wait if it gets big enough! Many problems with exponential complexity are perfectly solvable with naïve approaches if the problem stays small enough
  • better algorithms really make a difference
  • use existing libraries!
  • advancing research is hard
  • WebAssembly is really cool and you should learn more about it
  • the state of WebAssembly out there can still be wacky, and sometimes you need to hack around to get things to work

In the meantime I played a bit with other numbers (thanks to having a 4 core computer!), and I will wager one ambitious hypothesis: if 276 diverges, so does 276,276, 276,276,276, 276,276,276,276, etc., all the way to 276,276,276,276,276,276,276,276,276, i.e. 9 times "276".

I know that 10 times "276" converges after 300 steps, but I have tested all the other "lexical multiples" of 276, and reached 70 digit numbers after hundreds of steps. For 276,276 I pushed it further, reaching 96 digits at step 640. (And for at least 276,276 we should already know the current best answer, because the Wikipedia article states that all the numbers under one million have been calculated, and only about 9,000 have an unclear fate. We should be able to check whether 276,276 is one of those, which I haven't gotten around to yet.)

Again, this is not research, but just fooling around. If you want actual systematic research, the Wikipedia article has a number of great links.

Thank you for reading so far!

P.S.: Now that I played around, I started following some of the links, and wow, there's a whole community with great tools and infrastructure having fun with Aliquot sequences and prime numbers, and they have been doing this for decades. This is extremely fascinating.

Popculture in logics

  1. You ⊑ ∀need.Love (Lennon, 1967)
  2. ⊥ ≣ compare.You (Nelson, 1985)
  3. Cowboy ⊑ ∃sing.SadSadSong (Michaels, 1988)
  4. ∀t : I ⊑ love.You (Parton, 1973)
  5. ∄better.Time ⊓ ∄better⁻¹.Time (Dickens, 1859)
  6. {god} ⊑ Human ⊨ ? (Bazilian, 1995)
  7. Bad(X)? (Jackson, 1987)
  8. ⃟(You ⊑ save.I) (Gallagher, 1995)
  9. Dreamer(i). ∃x : Dreamer(x) ∧ (x ≠ i). ⃟ ∃t: Dreamer(you). (Lennon, 1971)
  10. Spoon ⊑ ⊥ (Wachowski, 1999)
  11. ¬Cry ← ¬Woman (Marley, 1974)
  12. ∄t (Poe, 1845)

Solutions: Turn around your monitor to read them.

sǝlʇɐǝq ǝɥʇ 'ǝʌol sı pǝǝu noʎ llɐ ˙ǝuo
ǝɔuıɹd ʎq ʎllɐuıƃıɹo sɐʍ ʇı 'ƃuos ǝɥʇ pǝɹǝʌoɔ ʇsnɾ pɐǝuıs ˙noʎ oʇ sǝɹɐdɯoɔ ƃuıɥʇou ˙oʍʇ
˙uosıod ʎq uɹoɥʇ sʇı sɐɥ ǝsoɹ ʎɹǝʌǝ ɯoɹɟ '"ƃuos pɐs pɐs ɐ sƃuıs ʎoqʍoɔ ʎɹǝʌǝ" ˙ǝǝɹɥʇ
ʞɔɐɹʇpunos ǝıʌoɯ pɹɐnƃʎpoq ǝɥʇ ɹoɟ uoʇsnoɥ ʎǝuʇıɥʍ ʎq ɹɐlndod ǝpɐɯ ʇnq uoʇɹɐd ʎllop ʎq ʎllɐuıƃıɹo 'noʎ ǝʌol sʎɐʍlɐ llıʍ ı 'ɹo - ",noʎ, ɟo ǝɔuɐʇsuı uɐ ɥʇıʍ pǝllıɟ ,ǝʌol, ʎʇɹǝdoɹd ɐ ƃuıʌɐɥ" uoıʇdıɹɔsǝp ǝɥʇ ʎq pǝɯnsqns ɯɐ ı 'ʇ sǝɯıʇ llɐ ɹoɟ ˙ɹnoɟ
suǝʞɔıp sǝlɹɐɥɔ ʎq sǝıʇıɔ oʍʇ ɟo ǝlɐʇ ɯoɹɟ sǝɔuǝʇuǝs ƃuıuǝdo ǝɥʇ sı sıɥʇ ˙(ʎʇɹǝdoɹd ǝɥʇ ɟo ǝsɹǝʌuı suɐǝɯ 1- ɟo ɹǝʍod" ǝɥʇ) ǝɯıʇ ɟo ʇsɹoʍ ǝɥʇ sɐʍ ʇı ˙sǝɯıʇ ɟo ʇsǝq ǝɥʇ sɐʍ ʇı ˙ǝʌıɟ
(poƃ)ɟoǝuo ƃuıɯnsqns sn ʎq pǝlıɐʇuǝ sı ʇɐɥʍ sʞsɐ ʇı ʎllɐɔısɐq ˙ʇıɥ ɹǝpuoʍ ʇıɥ ǝuo 5991 ǝɥʇ 'sn ɟo ǝuo sɐʍ poƃ ɟı ʇɐɥʍ ˙xıs
pɐq ǝlƃuıs ʇıɥ ǝɥʇ uı "pɐq s,oɥʍ" ƃuıʞsɐ 'uosʞɔɐɾ lǝɐɥɔıɯ ˙uǝʌǝs
ɔıƃol lɐpoɯ ɯoɹɟ ɹoʇɐɹǝdo ʎılıqıssod ǝɥʇ sı puoɯɐıp ǝɥʇ ˙"ǝɯ ǝʌɐs oʇ ǝuo ǝɥʇ ǝɹ,noʎ ǝqʎɐɯ" ǝuıl ǝɥʇ sɐɥ ʇı ˙sısɐo ʎq 'llɐʍɹǝpuoʍ ˙ʇɥƃıǝ
˙ooʇ ǝuo ǝɹɐ noʎ ǝɹǝɥʍ ǝɯıʇ ɐ sı ǝɹǝɥʇ ǝqʎɐɯ puɐ ˙(ǝɯ ʇou ǝɹɐ sɹǝɥʇo ǝsoɥʇ puɐ 'sɹǝɯɐǝɹp ɹǝɥʇo ǝɹɐ ǝɹǝɥʇ) ǝuo ʎluo ǝɥʇ ʇou ɯɐ ı ʇnq ˙ɹǝɯɐǝɹp ɐ ɯɐ ı" ˙ǝuıƃɐɯı 'uıɐƃɐ uouuǝl uɥoɾ ˙ǝuıu
(ǝlɔɐɹo ǝɥʇ sʇǝǝɯ ǝɥ ǝɹoɟǝq ʇsnɾ oǝu oʇ ƃuıʞɐǝds pıʞ oɥɔʎsd ǝɥʇ) xıɹʇɐɯ ǝıʌoɯ ǝɥʇ ɯoɹɟ ǝʇonb ssɐlɔ ˙uoods ou sı ǝɹǝɥʇ ˙uǝʇ
ʎǝuoɯ ǝɯos sʇǝƃ puǝıɹɟ sıɥ os ƃuıʎl ʎlqɐqoɹd sɐʍ ǝɥ ʇnq 'puǝıɹɟ ɐ oʇ sɔıɹʎl ǝɥʇ pǝʇnqıɹʇʇɐ ʎǝlɹɐɯ ˙"ʎɹɔ ʇou" sʍolloɟ "uɐɯoʍ ʇou" ɯoɹɟ ˙uǝʌǝlǝ
ǝod uɐllɐ ɹɐƃpǝ ʎq '"uǝʌɐɹ ǝɥʇ" ɯoɹɟ ɥʇonb ˙ǝɹoɯɹǝʌǝu :ɹo ˙ǝɯıʇ ou sı ǝɹǝɥʇ ˙ǝʌlǝʍʇ

Power in California

It is wonderful to live in the Bay Area, where the future is being invented.

Sure, we might not have a reliable power supply, but hey, we have an app that connects people with dogs who don't want to pick up their poop with people who are desperate enough to do this shit.

Another example of how the capitalism we currently live in has failed massively: last year, PG&E was found responsible for killing people and destroying a whole city. Now they really want to play it safe, and switch off the power for millions of people. And they say this will go on for a decade. So in 2029, when we're supposed to have AIs, self-driving cars, and self-tying Nikes, there will be cities in California that will get their power shut off for days when there is a hot wind for an afternoon.

Why? Because the money that should have gone into making the power infrastructure more resilient and safe - money that was already earmarked for it - went into bonus payments for executives (that sounds so cliché!). They tried to externalize the cost of an aging power infrastructure - the cost being, literally, people's lives and homes. And when told not to, they put millions of people in the dark.

This is so awfully on the nose that there is no need for metaphors.

San Francisco offered to buy the local power grid, to put it into public hands. But PG&E refused that offer of several billion dollars.

So if you live in an area that has a well-functioning power infrastructure, appreciate it.

Prediction coming true

I saw my post from seven years ago, where I said that I really like Facebook and Google+, but I want a space where I have more control over my content so it doesn't just disappear. "Facebook and Google+ -- maybe they won't disappear in a year. But what about ten?"

And there we go, Google+ is closing in a few days.

I downloaded my data from there (as well as my data from Facebook and Twitter), to see if there is anything to be salvaged from that, but I doubt it.

Price shock

Inwardly I had been complaining about the expensive room. Yesterday I found out how much it actually costs -- a mere 165 Euros per night. The thing is tiny! No, don't get me wrong -- the people are really nice, it is cosy, it is clean, it is superbly located. But 165 Euros per night for not even 8 square meters, including the bathroom and the wardrobe? I mean, 1 Euro per hour and square meter?

So I'd rather keep quiet and be thankful for the rate that my host at the institute arranged for me.

Price clock

This is another test of whether my moblog settings work. The picture was taken with my mobile phone and sent directly from there to my blog. Cool, huh?

An image is still missing here.

Progress in lexicographic data in Wikidata 2023

Here are some highlights of the progress in lexicographic data in Wikidata in 2023.

What does the coverage mean? Given a text (usually Wikipedia in that language, but in some cases a corpus from the Leipzig Corpora Collection), how many of the word occurrences in that text are already represented as forms in Wikidata's lexicographic data? Note that every additional percent gets much more difficult than the previous one: an increase from 1% to 2% usually needs much, much less work than one from 91% to 92%.
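As a minimal sketch of how such a coverage number could be computed (the crude tokenisation and the toy set of forms below are stand-ins for illustration, not the actual pipeline behind these numbers):

    import re
    from collections import Counter

    def form_coverage(text, known_forms):
        """Share of token occurrences in text that match a known form."""
        tokens = re.findall(r"\w+", text.lower())
        counts = Counter(tokens)
        total = sum(counts.values())
        covered = sum(c for form, c in counts.items() if form in known_forms)
        return covered / total if total else 0.0

    # Toy example; the forms would really come from Wikidata's lexeme data.
    forms = {"the", "cat", "sat", "on", "mat"}
    print(form_coverage("The cat sat on the mat.", forms))  # -> 1.0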

Exams and delays

Apologies for the irregular nutkidz! In the next few days the English-language ones should be brought up to date as well, but things here are still topsy-turvy. Little sister has her graduation ceremony today, and since this week she may call herself a state-certified designer! And because of her oral, practical and written exams - all of which she passed with flying colours! - she had little time! I am proud of you.

I myself am busy with my diploma thesis, where I am slipping a bit behind schedule. These days I am preparing for the interim presentation and am currently making my slides, but first I am off to see my little sister. So, many thanks for your patience, and as always, enjoy the nutkidz!

RDF in Mozilla 2

Last week I read Brendan Eich's post on Mozilla 2, where he said that with Moz2 they hope that they can "get rid of RDF, which seems to be the main source of 'Mozilla ugliness'". Danny Ayers commented on this, saying that RDF "should be nurtured not ditched."

Well, RDF in Mozilla was always crappy, and it was based on a pre-1999-standard RDF. No one ever took up the task -- mind you, it's open source -- to dive into the RDF inside Mozilla and repair and polish it: a) make it compatible with the 2004 RDF standard (why didn't RDF get version numbers, by the way?), and b) clean up the code and make it faster.

My very first encounter with RDF was through Mozilla. I was diving into Mozilla as a platform for application development, which seemed like a very cool idea back then (maybe it still is, but Mozilla isn't really moving in this direction). RDF was prominent there, as an internal data structure for the developed applications. Let's repeat that: RDF, which was developed to allow the exchange of data on the web, was used within Mozilla as an internal data structure.

Surprisingly, they had performance issues. And the code was cluttered with URIs.

I still think it is an interesting idea. In last year's Scripting for the Semantic Web workshop I presented an idea on integrating semantic technologies tightly into your programming. Marian Babik and Ladislav Hluchy picked that idea up and expanded and implemented the work much better than I ever could.

It would be very interesting to see how this combination could really be put to work. An internal knowledge base for your app that is queried via SPARQL. Doesn't sound like the most performant idea. But then -- imagine not having just one tool accessing that knowledge base, but rather a system architecture with a number of tools accessing it. Adding data to existing data. Imagine just firing up a newly installed address book, and -- shazam! -- all your data is available. You don't like it anymore? Switch back. No changes lost. Everything is just views.
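To make the idea a little more concrete, here is a minimal sketch with today's rdflib in Python; the address book vocabulary and namespace are made up for illustration, and this is of course not how Mozilla's internal RDF worked:

    from rdflib import Graph, Literal, Namespace, RDF

    # A hypothetical application vocabulary, just for this sketch.
    APP = Namespace("http://example.org/addressbook#")

    kb = Graph()  # the shared, in-process knowledge base
    alice = APP.alice
    kb.add((alice, RDF.type, APP.Contact))
    kb.add((alice, APP.name, Literal("Alice")))
    kb.add((alice, APP.email, Literal("alice@example.org")))

    # Any tool -- the address book, a mailer, a calendar -- could run
    # the same SPARQL query against the same graph.
    query = """
        PREFIX app: <http://example.org/addressbook#>
        SELECT ?name ?email WHERE {
            ?c a app:Contact ; app:name ?name ; app:email ?email .
        }
    """
    for name, email in kb.query(query):
        print(name, email)

Every tool would see the same triples; a different "view" is then just a different query.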

But then again, didn't the DB guys try to do the same? Yes, maybe. But with semantic web technology, shouldn't it be easier to do it, because it is meant to integrate?

Just thinking. I would love to see a set of tools based on a pluggable knowledge base. I'm afraid performance would suck. I hope we will see.


Comments are still missing on this post.

RDF is not just for dreamers

Sometimes I stumble upon posts that leave me wondering what people actually think about the whole Semantic Web idea, and about standards like RDF, OWL, and the like. Do you think academia people went out and purposefully made them complicated? That they don't want them to be used?

Christopher St. John wrote down some nice experiences with using RDF for logging. And he was pretty astonished that "RDF can actually be a pretty nifty tool if you don't let it go to your head. And it worked great."

And then: "Using RDF doesn't actually add anything I couldn't have done before, but it does mean I get to take advantage of tons of existing tools and expertise." Well, that's pretty much the point of standards. And the point of the whole Semantic Web idea. There won't be anything you will be able to do later, that you're not able to do today! You know, assembler was pretty turing-complete already. But having your data in standard formats helps you. "You can buy a book on RDF, but you're never going to buy a book on whatever internal debug format you come up with"

Stop being surprised that some things on the Semantic Web work. And don't expect miracles either.

RIP Christopher Alexander

RIP Christopher Alexander, probably the most widely read actual architect in all of computer science. His work, particularly his book "A Pattern Language", was popularized, among others, by the Gang of Four and their Design Patterns work, and is frequently read and cited in Future of Programming and UX circles for the idea that everyone should be able to create, but that in order to enable them, they need patterns that make creation possible. His work inspired Ward Cunningham when developing wikis and Will Wright when developing that most ungamelike of games, Sim City. Alexander died on March 17, 2022 at the age of 85.

RIP Niklaus Wirth

RIP Niklaus Wirth;

BEGIN

I don't think there's a person who created more programming languages that I have used than Wirth: Pascal, Modula, and Oberon; maybe Guy Steele, depending on what you count;

Wirth is also famous for Wirth's law: software becomes slower more rapidly than hardware becomes faster;

He received the 1984 Turing Award, and had an asteroid named after him in 1999; Wirth died at the age of 89;

END.

RIP Steve Wilhite

RIP Steve Wilhite, who worked on CompuServe chat for decades and was the lead of the CompuServe team that developed the GIF format, which is still widely used, and which made the World Wide Web a much more colorful and dynamic place by having a format that allowed for animations. Wilhite incorrectly insisted on GIF being pronounced Jif. Wilhite died on March 14, 2022 at the age of 74.

Wheel of Time 10

Last week the news finally arrived via Amazon that volume 10 of the Wheel of Time series has finally been published (order it right away). Of course, throughout the whole ninth volume I was annoyed at how slowly the story was progressing - but when volume 10 was announced, I nevertheless had to preorder it immediately. In the English hardcover, of course, because one has to know right away how things continue with the heroes. I mean, of course, how they save the world, fight their battle against evil and so on. Because I really don't care at all how things develop between Lan and Nynaeve. Or whether Rand finally makes up his mind. Or whether Mat marries the princess. Whether Perrin gets Faile back. Really. Don't care at all. Honestly... Enough for now, I have to go read...

Rainbows end

Rainbows end.

The book, written in 2006, was set in 2025 in San Diego. Its author, Vernor Vinge, died yesterday, March 20, 2024, in nearby La Jolla, at the age of 79. He is probably best known for popularizing the concept of the Technological Singularity, but I found many other of his ideas far more fascinating.

Rainbows End explores themes such as shared realities, digital surveillance, and the digitisation of the world, years before Marc Andreessen proclaimed that "software is eating the world", describing it much more colorfully and richly than Andreessen ever did.

Another work of his that I enjoyed is True Names, discussing anonymity and pseudonymity on the Web. A version of the book was published with essays by Marvin Minsky, Danny Hillis, and others who were inspired by True Names.

His science fiction was in a rare genre, of which I would love to read more: mostly non-dystopian, set in the near future, hard sci-fi, and yet imaginative, exploring novel technologies and their implications for individuals and society.

Rainbows end.

Rainbow

I am currently in Galway, and it rains here constantly. Really. Usually only briefly, but then again and again.

In return, I was rewarded today with an almost unbelievable rainbow: it really spanned the whole horizon, a complete arc! I have never seen anything like it. The picture clearly understates it; in reality it shone much brighter.

What was particularly exciting was that it seemed to rise out of the water barely two or three hundred meters away. Not somewhere far off, it was right there - you can even see the houses through the rainbow in the picture, the rainbow was in front of the houses. I have never seen anything like it before. Incredibly impressive.

I hope to get a few better pictures soon.

An image is still missing here.

Released dlpconvert

There's so much to do right now here, and even much, much more I'd like to do - and thus I'm running a bit late on my announcements. Finally, here we go: dlpconvert has been released in version 0.5.

You probably think: what is dlpconvert? dlpconvert can convert ontologies that are within the DLP fragment from one syntax - namely the OWL XML Presentation Syntax - into another, here Datalog. Thus you can just take your ontology, convert it, and then use the result as your program in your Prolog engine. Isn't that cool?

Well, it would be much cooler if it were tested more thoroughly, and if it could read some more common syntaxes like the RDF/XML serialisation of OWL ontologies, but both are on the way. As for testing, I hope that you will test it a bit - as for the serialisation, it should be available pretty soon.

dlpconvert is based totally on KAON2 for the reduction of the ontology.
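Just to give a flavour of the kind of translation described above - this toy snippet only covers the simplest case, an atomic subclass axiom, and is not what dlpconvert actually implements on top of KAON2:

    def subclass_axiom_to_datalog(sub, sup):
        """Translate a simple axiom 'sub ⊑ sup' into a Datalog rule,
        in the spirit of the DLP mapping (atomic classes only)."""
        return "%s(X) :- %s(X)." % (sup.lower(), sub.lower())

    # 'Human ⊑ Mortal' becomes a rule a Prolog engine can run directly:
    print(subclass_axiom_to_datalog("Human", "Mortal"))  # mortal(X) :- human(X).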

I will write more as soon as I have more time.

Renovation

So, another step in the redesign of Nodix is done. These very texts that you are reading here, the so-called news, are now managed by a new piece of software, the Nodix blogger script, written in Python. When I get the chance, I will write an overview of how Nodix actually works technically.

In any case, it takes care of managing the news automatically, of keeping the front page from getting too long, and of generating the archive accordingly. As a little gimmick, it offers a small black box with each entry. If you let the mouse pointer hover over this box, you learn how to link to exactly this post. Might come in handy...

The subpages of Nodix are now also reachable directly from the Nodix front page, as you can see on the left. This is not the final version yet - the restructuring of Nodix continues. Overall it will become easier to use, nicer, clearer, faster too, and even easier for me to maintain. What more could I want?

By the way, I was made aware that The Return of the King is no longer at place 4 of the best movies according to IMDb, but already at place 3. Well, it is getting exciting - will the movie break the eternal reign of The Godfather, as befits a king? Let's wait and see.

Renovation, part two

So, the last few days I really did nothing, just played Warcraft - little sister gave it to me for Christmas! So I could comfortably spend this Christmas destroying all kinds of virtual lives, burning down cities, and following a pretty cool story (for a strategy game!).

The new nutkidz comic has actually been sitting on my hard drive for a little while, but for the reasons mentioned above I only got around to uploading it today. Next year we will try to publish more regularly again - ah, these good resolutions... :)

Finally, I have also moved the gallery to nakit-arts, so that some life comes into that place as well. Little sister is still planning nakit-arts the way she wants it, but until then her pictures can already be found there.

That is another step in the rebuilding of Nodix. On the one hand, Nodix itself is thus becoming a portal to all the sites that started here - on the left is the overview with the logos, namely nakit-arts, nutkidz, DSA4 Werkzeug and XML4Ada95. On the other hand, Nodix itself will be rebuilt further - my own texts can be found here, and I want to make them somehow more accessible, and finally there are these entries on this page.

It will take a little while longer until the rebuilding of Nodix is finished. A big surprise is still planned for the anniversary of Nodix - Nodix will soon turn 3 years old! - and maybe by then the new structure will be visible too. I am just still looking for a way to quickly stay informed about the updates on the various subsites... I would be very happy about comments, suggestions, criticism and so on; after all, I make this site for you! Just write to the usual address - denny@nodix.de.

Research visit

So, I have arrived for my very first longer research visit. I am staying at the Laboratory of Applied Ontologies in Rome. I have never been to Rome before, and everything about my non-work experiences I'll chat about in my regular blog, in German. Here I'll stick to Semantic Stuff and such.

So, if you are nearby and would like to meet -- drop me a note! I am staying in Rome up to the WWW conference, i.e. up to May 20th. The plan is to work on my dissertation topic, Ontology Evaluation, especially in the wake of the EON2006 workshop, but that's not all, it seems. People are interested in, and knowledgeable about, Semantic Wikis as well. So there will be quite a lot of stuff happening in the next few weeks -- I'm excited about it all.

Who knows what will happen? If my plan works out, at the end of the stay we will have a common framework for Ontology Evaluation. And I am not talking about those paper frameworks -- you know, the ones presented in papers with titles starting "Towards a..." or "A framework for...". No, real software, stuff you can download and play with.

Restarting, 2019 edition

I had neglected Simia for too long - there were five entries in the last decade. A combination of events led me to pour some effort back into it - and so I want to use this chance to restart it, once again.

Until this weekend, Simia was still running on a 2007 version of Semantic MediaWiki and MediaWiki - which probably helped with Simia getting hacked a year or two ago. Now it is up to date with a current version, and I am trying to consolidate what is already there with some content I had created in other places.

Also, life has been happening. If you have been following me on Facebook (that link only works if you're logged in), you have probably seen some of that. I married, got a child, changed jobs, and moved. I will certainly catch up on this too, but there is no point in doing that all in one huge mega-post. Given that I am thinking about starting a new project just these days, this might be the perfect outlet to accompany that.

I make no promises with regard to the quality of Simia or the frequency of entries. What I would really love to recreate is a space that is as interesting and fun as my Facebook wall was before I stopped engaging there earlier this year - but since you cannot even create comments here, I have to figure out how to make this even remotely possible. For now, suggestions on Twitter or Facebook are welcome. And no, moving to WordPress or another platform is hardly an option, as I really want to stay with Semantic MediaWiki - but pointers to interesting MediaWiki extensions are definitely welcome!

Save the cinemas!

I was just made aware of a most interesting cause by Thomas Glatzer:

MeinKinoBinIch.de is a portal of German cinema operators and points to a legislative proposal that is certainly of interest to all cinema fans. Since I am a great friend of the cinema, I would be happy if the campaign succeeded with the goals presented there. If anyone wants to get in touch with Thomas about it, here is his ICQ: 321969141

(I would also like to note, purely personally: whether internet "signature" drives are useful, I do not know. Whether the campaign is presented accurately, I have not researched. But mature readers will surely be able to judge that themselves, and if anyone finds neutral information, they are welcome to fill me in. Sites run by cinema operators are not neutral information.)

Ring 2

Movies in 50 words

Fear. Tension. Ring 2 is the worthy sequel to Ring. How shall I put it? Children are the greatest horror.
We dive deeper into the story of Samara, and again we see how Rachel fights for her son. The courage she musters borders on the implausible - until you remind yourself that she is fighting for her child's bare life.

Ring is so far the only horror movie I have liked at all - and I have seen quite a few. It is, in particular, the only one that actually evoked horror. Now Ring 2 joins it.

So tense that I sat in the cinema completely tensed up, and was genuinely stiff afterwards...

The risk of old age

The German federal government has published a little booklet with the enchanting title "agenda 2010 - Deutschland bewegt sich". I read it today in the tram, out of interest and also for lack of an alternative, and found quite a bit of it interesting. Good grief, how much of my earned income will not go to me at all! As a naive student you have no idea about that; I will hopefully soon get to personally fill the terms gross and net with meaning.
But that is not the point. Nor is it that there were some very nice ideas in it, along with some misleading information and obviously distorting statistics. That is common, and one should be used to it. Far more alarming was a small sentence in the glossary that showed me the federal government's attitude:
"Social security systems [...] In this way all insured persons are jointly protected against the great risks of old age and reduced earning capacity [...]"
The risk of old age? Looking for a source on the net, I came across more than 300 hits that talk about the risk of old age.

My dear federal government, dear ladies and gentlemen of the insurance industry, let it hereby be clearly said that old age, in my humble opinion, is not a risk, but rather the result of having evidently mastered numerous preceding risks. I am well aware that for an insurer, as well as for the federal government, which among other things is responsible for securing the pension system, the age of individual persons is indeed a risk, but believe me, we, the aging, rarely wish to be reminded of this with such obvious cynicism and unusual honesty.
To everyone who is willing to take the risk of old age! (- and not only because of the dubious alternative)