AcceLogiChip

From Simia

Accelerated logic chips - that would be neat.

The problem with all this OWL stuff is that it is computationally expensive. Google beats you in speed easily, having some 60,000 PCs or so, but indexing some 8 billion web pages, each with maybe a thousand words. And if you ever tried Google's Desktop Search, you will see they can perform these miracles right on your PC too! (Never mind that there are a dozen tools doing exactly the same stuff Google's Desktop Search does, just better - but hey, they lack the name!)

What does the Semantic Web achieve? Well, have you ever tried to run a logic inference engine with a few million instances? With a highly axiomatized TBox of, let's say, just a few thousand terms? No? You really should.

Sure, our PCs do get faster all the time (thanks to Moore's Law!), but is that fast enough? We want to see the Semantic Web up and running not in a few more iterations of Moore's Law, but much, much earlier. Why not use the same trick the graphics magicians did? Highly specialized accelerated logic chips - things that can do your tableau reasoning in just a fraction of the time needed with your bloated all-purpose CPU.
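To make concrete what kind of workload such a chip would take over, here is a toy sketch (not any real reasoner's implementation) of tableau-style satisfiability checking for propositional formulas in negation normal form. The branching on disjunctions is exactly the part that explodes on large knowledge bases:

```python
# Toy tableau satisfiability check for propositional formulas in
# negation normal form (NNF): negation only appears directly on atoms.
# Formulas are nested tuples, e.g. ("and", ("atom", "p"), ("not", ("atom", "p"))).

def satisfiable(formula):
    def expand(branch, todo):
        if not todo:
            return True  # all constraints satisfied, branch is open
        f, rest = todo[0], todo[1:]
        op = f[0]
        if op in ("atom", "not"):
            lit = f
            # A branch closes when it contains the complementary literal.
            complement = ("not", lit) if op == "atom" else lit[1]
            if complement in branch:
                return False
            return expand(branch | {lit}, rest)
        if op == "and":
            # Conjunction rule: both conjuncts join the same branch.
            return expand(branch, [f[1], f[2]] + rest)
        if op == "or":
            # Disjunction rule: the branch splits -- the expensive part.
            return expand(branch, [f[1]] + rest) or expand(branch, [f[2]] + rest)
        raise ValueError(f"unknown operator: {op}")
    return expand(frozenset(), [formula])
```

A dedicated chip would run thousands of such branches in parallel instead of backtracking through them one by one.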


Originally published on Semantic Nodix

Previous post:
World Wide Prolog
Following post:
Imagine there's a revolution...


Comments

Max
14 December 2004 22:32:00

I love that idea. As soon as Intel believes it, they will make advertisements for SemWeb in order to sell more CPUs - specialized ones. Maybe existing graphics engine chips could be used to do logic calculations? Would be something for compiler construction people to try. But I fear that floating-point arithmetic doesn't map nicely to logic inferencing.


Denny
15 December 2004 16:46:00

Maybe not the floating-point stuff, but the matrix accelerators in modern graphics processing units could maybe be used to speed up the tableaux algorithms, or for disjunctions. I am not too sure yet, though - just an idea.
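One way this mapping could work (a hypothetical sketch, using an invented toy hierarchy, not anything the commenters implemented): encode a TBox subsumption hierarchy as a boolean adjacency matrix, so that computing all entailed subsumptions becomes a transitive closure via repeated matrix products - exactly the shape of workload a GPU matrix unit handles well.

```python
import numpy as np

# Hypothetical example: edges of a tiny subsumption hierarchy (A ⊑ B)
# stored as a boolean adjacency matrix over the concept names below.
concepts = ["Dog", "Mammal", "Animal", "Plant"]
n = len(concepts)
sub = np.zeros((n, n), dtype=bool)
sub[0, 1] = True  # Dog ⊑ Mammal
sub[1, 2] = True  # Mammal ⊑ Animal

# Transitive closure via repeated boolean matrix products:
# after k rounds, closure covers subsumption paths of length up to k+1.
closure = sub.copy()
for _ in range(n):
    closure = closure | ((closure.astype(np.uint8) @ sub.astype(np.uint8)) > 0)
```

Here `closure[0, 2]` comes out true (Dog ⊑ Animal follows by transitivity) while `closure[0, 3]` stays false. The matrix multiplications are what a graphics chip accelerates; the tableau branching itself would still need something more exotic.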