High performance database update (Oracle)

I have several services which dump data to a database (Oracle) after processing different input file formats (XML, flat files, etc.). I was wondering if I could have them generate SQL statements instead and log them to the file system, and then have a single SQL processor (something like Java Hibernate) process these SQL files and upload the data to the DB. What's the fastest way to execute a huge set of SQL statements (spread over a file system and written by multiple writers) against an Oracle DB? I was considering partitioning the DB and using batch updates. However, I want to know the best practice here. This seems like a common problem, and somebody must have faced and resolved it already. Thanks, Atanu


Atanu, the worst thing to do is to generate huge lists of insert statements. If you want speed and you know the layout of your data, use external tables to load the data into your Oracle database. This works a lot like SQL*Loader, but you can access your data as a table. In the table definition your data fields are mapped to column names and data types. This will definitely be the fastest way to do bulk loads into your database.
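
A minimal sketch of such a definition, assuming a comma-delimited feed and hypothetical table, directory, and column names (the DIRECTORY object must already point at the folder holding the files):

-- expose the flat file as a read-only table
CREATE TABLE orders_ext (
  order_id    NUMBER,
  customer_id NUMBER,
  amount      NUMBER(12,2),
  order_date  VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('orders_feed.csv')
)
REJECT LIMIT UNLIMITED;

-- one set-based, direct-path load instead of row-by-row inserts
INSERT /*+ APPEND */ INTO orders (order_id, customer_id, amount, order_date)
SELECT order_id, customer_id, amount, TO_DATE(order_date, 'YYYY-MM-DD')
FROM orders_ext;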

See Managing External Tables for some documentation.


What the best practice is rather depends on your criteria for determining "best". In many places the approach taken is to use an ETL tool, perhaps Oracle Warehouse Builder, perhaps a third-party product. This need not be expensive: Pentaho offers Kettle in a free "self-supported" community edition.

When it comes to rolling your own, I don't think Hibernate is the way to go, especially if your main concern is performance. I also think changing your feeds to generate SQL statements is an overly complicated solution. What is wrong with PL/SQL modules that read the files and execute the SQL natively?

Certainly when I have done things like this before it has been with PL/SQL. The trick is to separate the input reading layer from the data writing layer. This is because the files are likely to require a lot of bespoke coding whereas the writing stuff is often fairly generic (this obviously depends on the precise details of your application).

A dynamic metadata-driven architecture is an attractive concept, particularly if your input structures are subject to a lot of variability. However such an approach can be difficult to debug and to tune. Code generation is an alternative technique.

When it comes to performance, look to use bulk processing (BULK COLLECT and FORALL) as much as possible. This is the main reason to prefer PL/SQL over files of individual SQL statements. Find out more in the Oracle documentation on bulk SQL and bulk binding.
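
A rough sketch of the bulk pattern, assuming a hypothetical row source (such as the external table above) and a target table whose columns match the cursor:

DECLARE
  CURSOR c_src IS
    SELECT order_id, customer_id, amount, TO_DATE(order_date, 'YYYY-MM-DD') AS order_date
    FROM orders_ext;                                   -- any row source works here
  TYPE t_rows IS TABLE OF c_src%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src BULK COLLECT INTO l_rows LIMIT 1000;   -- fetch in chunks of 1000 rows
    EXIT WHEN l_rows.COUNT = 0;
    FORALL i IN 1 .. l_rows.COUNT                      -- one bulk bind per chunk instead of row-by-row inserts
      INSERT INTO orders VALUES l_rows(i);             -- record fields must match the table's columns
    COMMIT;
  END LOOP;
  CLOSE c_src;
END;
/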


The last thing you want is a bunch of insert statements... a SUPER slow approach (it doesn't matter how many processes you're running, trust me). Getting all files into a delimited format and doing a DIRECT load into Oracle via sqlldr would be the simplest approach (and very fast).
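
For illustration, a direct-path load might look something like this (hypothetical control file, table, and file names):

-- load.ctl
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'orders_feed.csv'
APPEND
INTO TABLE orders
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(order_id, customer_id, amount, order_date DATE "YYYY-MM-DD")

and then run it from the command line (sqlldr will prompt for the password):

sqlldr userid=scott control=load.ctl log=load.log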


If you want maximum performance, you don't want tons of SQL statements. Instead, have a look at Oracle Data Pump.

And don't do any preprocessing for flat files. Instead feed them directly to impdp (the Oracle Data Pump Importer).
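
For reference, a basic impdp call (which reads Data Pump dump files from a directory object) looks something like this, with hypothetical user, file, and table names:

impdp scott directory=DATA_PUMP_DIR dumpfile=orders.dmp tables=orders table_exists_action=append logfile=orders_imp.log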

If importing the data requires transformations, updates, etc., then best practice is to load the data into a staging table (with Data Pump), do the preprocessing on the staging table, and then merge the data into the production tables.
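
That final step might look something like this (hypothetical staging and production tables):

-- update rows that already exist, insert the ones that don't
MERGE INTO orders t
USING staging_orders s
   ON (t.order_id = s.order_id)
WHEN MATCHED THEN
  UPDATE SET t.amount     = s.amount,
             t.order_date = s.order_date
WHEN NOT MATCHED THEN
  INSERT (order_id, customer_id, amount, order_date)
  VALUES (s.order_id, s.customer_id, s.amount, s.order_date);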

Preprocessing outside the database is usually very limited, since you don't have access to the already loaded data. So you cannot even check whether a record is new or an update to an existing one.


As others have mentioned, there are some tools you should look into if performance is your only concern.

But there are some advantages to using plain SQL statements. Many organizations have regulations, policies, and stubborn developers that will block any new tools. A simple SQL script is the universal language of your database; it's pretty much guaranteed to work anywhere.

If you decide to go with SQL statements you need to avoid scripts like this:

insert into my_table values(...);
insert into my_table values(...);
...

And replace it with a single statement that unions multiple rows together:

insert into my_table
select ... from dual union all
select ... from dual union all
...

The second version will run several times faster.

However, picking the right batch size is tricky. A large number of small inserts will waste a lot of time on communication and other overhead, but Oracle parse time grows exponentially with very large statements. In my experience 100 rows per statement is usually a good number; parsing gets really slow around a thousand. Also, use the "union all" method and avoid the multi-table insert trick. For some reason multi-table insert is much slower, and some Oracle versions have bugs that will cause your query to hang at 501 tables.

(You can also create a somewhat similar script using PL/SQL. A 1 megabyte PL/SQL procedure will compile much faster than a 1 megabyte SQL statement will parse. But creating the script is complicated: collections, dynamic SQL, handling all the types correctly, creating a temporary object instead of an anonymous block because large anonymous blocks cause Diana node errors, etc. I've built a procedure like this, and it worked well, but it probably wasn't worth the effort.)
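
Purely to illustrate the shape of that approach, a generated batch might look roughly like this (hypothetical table, columns, and values; a real generator also has to deal with quoting, NULLs, and every data type). The generated procedure is a throwaway object: compile it, run it once, drop it.

CREATE OR REPLACE PROCEDURE load_my_table_batch_1 AS
  TYPE t_ids   IS TABLE OF my_table.id%TYPE;
  TYPE t_names IS TABLE OF my_table.name%TYPE;
  -- the generator writes the actual data into these collection constructors
  l_ids   t_ids   := t_ids(1, 2, 3);
  l_names t_names := t_names('a', 'b', 'c');
BEGIN
  FORALL i IN 1 .. l_ids.COUNT
    INSERT INTO my_table (id, name) VALUES (l_ids(i), l_names(i));
END;
/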
