
Business Intelligence (BI) on Wikipedia data

Intro:

I am a BI addict and would like to develop a project to drill down into Wikipedia's data.

I would write scripts to extract data from DBpedia (probably beginning with people articles) and load it into a people table.
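For a rough idea of what such a script could look like, here is a minimal sketch in Python, assuming the SPARQLWrapper package, the public DBpedia SPARQL endpoint, and an illustrative SQLite table; the column choices are assumptions, not DBpedia's full schema:

    # Minimal sketch: pull a few Person records from the public DBpedia
    # SPARQL endpoint and load them into a local SQLite "people" table.
    # Column choices and the database file name are illustrative.
    import sqlite3
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?person ?name ?birthDate WHERE {
            ?person a dbo:Person ;
                    foaf:name ?name ;
                    dbo:birthDate ?birthDate .
        } LIMIT 100
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()

    conn = sqlite3.connect("wikipedia_bi.db")
    conn.execute("CREATE TABLE IF NOT EXISTS people (uri TEXT, name TEXT, birth_date TEXT)")
    conn.executemany(
        "INSERT INTO people VALUES (?, ?, ?)",
        [(b["person"]["value"], b["name"]["value"], b["birthDate"]["value"])
         for b in results["results"]["bindings"]],
    )
    conn.commit()
    conn.close()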

My question is:

Has anyone done this before? Even better, is there a community dedicated to this?

If the scripts already exist somewhere, I would rather contribute to them than rewrite them.

Just an example:

In the OLAP cube of people, I can drill down by first name, drill through on "Remi", check in which areas this name is used, then for all areas drill down on gender to check where this name is popular for girls and where it is popular for boys. For each of them, I can then drill down through time to see the trends. You cannot do this kind of investigation without a BI tool, or it will take days instead of seconds.
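To make this concrete outside a full OLAP server, here is a minimal pandas sketch of the same investigation, assuming the people table from above with illustrative columns first_name, area, gender, and birth_year:

    # Sketch of the drill-down above, assuming a "people" table with
    # illustrative columns: first_name, area, gender, birth_year.
    import sqlite3
    import pandas as pd

    people = pd.read_sql("SELECT * FROM people", sqlite3.connect("wikipedia_bi.db"))

    # Drill through on "Remi": in which areas is the name used?
    remi = people[people["first_name"] == "Remi"]
    print(remi.groupby("area").size())

    # For each area, drill down on gender: where is it a girl's name,
    # where a boy's?
    print(remi.groupby(["area", "gender"]).size().unstack(fill_value=0))

    # Drill down through time to see the trend per area and gender.
    print(remi.groupby(["birth_year", "area", "gender"]).size())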


Check out Mahout, a distributed machine-learning library. One of the examples there uses a dump of Wikipedia:

https://cwiki.apache.org/MAHOUT/wikipedia-bayes-example.html
http://mahout.apache.org

I'm not familiar with the exact details of business intelligence; however, machine learning is about finding relevant patterns and clustering related information together. At the very least, this should give an example of loading the Wikipedia dump into memory and doing some simple (and not so simple) things with the data.
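Mahout itself is Java and driven from its command line; as a rough non-Mahout illustration of what the wikipedia-bayes-example does (classifying article text into categories with naive Bayes), here is a minimal Python sketch using scikit-learn, with made-up texts and labels:

    # Not Mahout itself (which is Java/CLI); a rough scikit-learn sketch
    # of the same idea as the wikipedia-bayes-example: classify article
    # text into categories with naive Bayes. Texts and labels are made up.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = [
        "Remi is a masculine given name of French origin",
        "A cube is a three-dimensional solid bounded by six squares",
        "Marie is a feminine given name of French origin",
    ]
    labels = ["name", "geometry", "name"]

    vectorizer = CountVectorizer()
    model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)
    print(model.predict(vectorizer.transform(["Anna is a given name"])))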


You could set up a Virtuoso server (there is an open source version), load the DBpedia datasets on a local machine, and use Virtuoso as an "SQL DB" queried with SPARQL (it has a JDBC interface).

From your example, you could load only the "ontology infobox *" and "raw infobox *" datasets.
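Once the dumps are loaded, queries could look like this from Python; http://localhost:8890/sparql is Virtuoso's default SPARQL endpoint, and the property used here is an illustrative assumption:

    # Minimal sketch of querying a local Virtuoso instance after loading
    # the DBpedia dumps. http://localhost:8890/sparql is Virtuoso's
    # default SPARQL endpoint; dbo:gender is an illustrative property.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("http://localhost:8890/sparql")
    sparql.setQuery("""
        PREFIX dbo: <http://dbpedia.org/ontology/>
        SELECT ?person ?gender WHERE {
            ?person a dbo:Person ;
                    dbo:gender ?gender .
        } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)
    for b in sparql.query().convert()["results"]["bindings"]:
        print(b["person"]["value"], b["gender"]["value"])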


Do you want an open source OLAP server for that?

Do you need to set up a DB for your datasets, or would you rather use files? We (at www.icCube.com) do not need a DB to set up our cubes.

How large are your datasets?
