
How to load large nt/rdf dump into a Jena/PostgreSQL model (TDB/RDB?)

I'm using DBpedia in my project. So far I've been using a SPARQL client, but the performance is far from acceptable (not to mention the frequent downtime of the endpoint).

So I want to load the big NT files available at http://wiki.dbpedia.org/Downloads36 in a local dbms (I have a server with PostgreSQL).

In my application (built with Java and Groovy) I open a connection to a Jena persistent model with:

def jenaConnection = new DBConnection( ... )
def maker = ModelFactory.createModelRDBMaker( jenaConnection )
def globalModel = maker.openModel( "my_big_fat_model" )

This is fine for a few thousand triples, but when I try to load a large NT file using a reader

RDFReader r = globalModel.getReader( "N-TRIPLE" )
r.read( inputStreamFromBigFile ... ) 

the performance is appalling. It loads about 2-3K triples per minute, meaning that the whole DBpedia dataset (millions of triples) might take days to load. Other people using Jena with large datasets don't seem to have this issue.

I read that I should use TDB for large datasets (http://jena.apache.org/documentation/tdb/) but I don't understand what I am supposed to do with it.

Is it a similar concept to the RDB interface or what? Do I need to load the NT in the PostgreSQL DB?

The Jena documentation doesn't seem to be very clear on this point.


If you want to stick with using PostgreSQL as the back-end, you should use SDB. It's a more up-to-date relational-store wrapper for Jena models than the old RDB driver. There is also quickstart documentation for getting going with SDB.
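With SDB you describe the store in a small Turtle file and then point the `sdbconfig`/`sdbload` command-line tools (and your Java code) at it. The sketch below follows the shape of the SDB quickstart; the host, database name, and file name are placeholders for your own setup, and you should check the exact property names against the SDB documentation for your version (credentials are usually supplied via the SDB_USER/SDB_PASSWORD environment variables rather than in the file):

```turtle
# sdb.ttl — hypothetical SDB store description for a PostgreSQL back-end
@prefix sdb:  <http://jena.hpl.hp.com/2007/sdb#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<#store> rdf:type sdb:Store ;
    sdb:layout     "layout2" ;       # the standard SDB schema layout
    sdb:connection <#conn> .

<#conn> rdf:type sdb:SDBConnection ;
    sdb:sdbType "PostgreSQL" ;
    sdb:sdbHost "localhost" ;        # your PostgreSQL server
    sdb:sdbName "my_db" .            # an existing, empty database
```

You would then initialise the tables once with something like `sdbconfig --sdb=sdb.ttl --create` and bulk-load with `sdbload --sdb=sdb.ttl file.nt`, per the quickstart.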

TDB is a persistent store that provides an alternative to using a relational database as the back-end triplestore. TDB builds its own b-tree indexes on disk, and manages the caching for you. In every other respect, it appears to the programmer just like a normal Jena Model. TDB has command line tools that help with the loading process, although as they are bash scripts they require Linux or cygwin. To load dbpedia, this is what I have done in the past:

$> tdbloader2 --loc ./tdb ./source/*.nt

where ./source is the directory where I downloaded the various .nt files from DBpedia. It takes a few hours on a reasonable machine, but certainly not days.

Once you have the TDB image in ./tdb, just follow the documentation to load the Model in your Java program:

String directory = "./tdb" ;
Model model = TDBFactory.createModel(directory) ;
...
model.close() ;

From there, just use model as you normally would use any Jena model. There is one caveat: TDB doesn't provide any concurrency support. If your app requires concurrent access to the store (specifically, any writes concurrent with one or more reads) you will need to handle locking at the app level.
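One way to handle that locking at the app level is Jena's own multiple-reader/single-writer lock, which every Model carries. A minimal sketch, assuming a `model` opened via `TDBFactory` as above (the package is `com.hp.hpl.jena.shared.Lock` in the Jena 2.x releases this answer refers to):

```java
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.shared.Lock;

public class GuardedUpdate {
    // Writers take the WRITE lock; readers would use Lock.READ instead.
    static void addStatement(Model model) {
        model.enterCriticalSection(Lock.WRITE);
        try {
            // ... update the model here ...
        } finally {
            // Always release the lock, even if the update throws.
            model.leaveCriticalSection();
        }
    }
}
```

The enter/leave pair must bracket every access from every thread for the MRSW discipline to hold; a single unguarded writer can still corrupt concurrent reads.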
