Perl and huge databases: how to search and store?
I have a task and would like to think through how I should go about programming it.
I will probably be given a CSV-format database with a minimum of 36 million lines of data. In the future, users will need to search this "database" through a CGI/Perl interface, based on conditions over multiple column values, and the matching rows will be displayed.
How should I read the CSV file in Perl (probably using a CSV parser from CPAN), and into what type of database should I store it? The key priority is the speed of searching the database.
Any sample code would be appreciated.
You probably want to go with a proper database solution. The easiest to set up (depending on your familiarity with RDBMSes) is probably MySQL. Once you have that set up, you'll want to look into Perl modules for interfacing with the database. DBIx::Class is the "in thing" these days, and as such there are many people using it who can answer questions.
Oh, and for your CSV parsing, look at Text::CSV, if you don't want to load the file directly into the database (and if your RDBMS of choice doesn't support ingesting CSV files directly).
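A minimal sketch of that route, assuming a MySQL table named records with three columns (col_a, col_b, col_c) and a file data.csv; the connection details are placeholders:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV;
use DBI;

# Assumed connection details and schema -- adjust to your setup.
my $dbh = DBI->connect('dbi:mysql:database=records;host=localhost',
                       'user', 'password', { RaiseError => 1, AutoCommit => 0 });

my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
open my $fh, '<:encoding(utf8)', 'data.csv' or die "data.csv: $!";

# Hypothetical three-column table; a prepared statement is parsed once
# and executed per row.
my $sth = $dbh->prepare('INSERT INTO records (col_a, col_b, col_c) VALUES (?, ?, ?)');

while (my $row = $csv->getline($fh)) {
    $sth->execute(@$row);
}

$dbh->commit;   # one commit at the end is far faster than 36M autocommits
close $fh;
$dbh->disconnect;
```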
PostgreSQL has the ability to import CSV files:
http://www.postgresql.org/docs/current/static/sql-copy.html
The COPY command is also more efficient than committing 36M inserts, one at a time.
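If you drive the load from Perl rather than from psql, DBD::Pg exposes the COPY protocol directly. A sketch, assuming a table named records and a local file data.csv with a header line:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

# Stream the CSV through the COPY protocol instead of row-by-row INSERTs.
$dbh->do('COPY records FROM STDIN WITH (FORMAT csv, HEADER true)');

open my $fh, '<', 'data.csv' or die "data.csv: $!";
while (my $line = <$fh>) {
    $dbh->pg_putcopydata($line);
}
close $fh;

$dbh->pg_putcopyend;
$dbh->commit;
$dbh->disconnect;
```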
You should look into ways to import the data once you decide on a DBMS. With that many records I'd steer clear of MySQL, though.
If the data is not relational and will only get larger, you might want to look into using Hadoop, or some other form of MapReduce. It can turn those 30-minute queries into 5-minute ones.
Most databases have a means of directly loading a CSV file into a table, for example SQL*Loader for Oracle or the LOAD DATA command for MySQL.
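As a hedged sketch of the MySQL route driven from Perl (the table name, file name, and column layout are assumptions), LOAD DATA can be issued through DBI:

```perl
use strict;
use warnings;
use DBI;

# mysql_local_infile=1 lets the client send a local file to the server.
my $dbh = DBI->connect(
    'dbi:mysql:database=records;host=localhost;mysql_local_infile=1',
    'user', 'password', { RaiseError => 1 });

# Hypothetical table and file names; one bulk load instead of 36M INSERTs.
$dbh->do(q{
    LOAD DATA LOCAL INFILE 'data.csv'
    INTO TABLE records
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
});

$dbh->disconnect;
```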
Searching the database in an efficient manner will depend on the data and how you expect to search it (i.e. which fields are interesting, which ones you may sort on, etc.). Without more information, it's hard to give you a solid answer, though you should follow general best practices for indexing.
Concerning code examples for accessing a database, see the following links:
MySQL DBI Example
Perl DBI Doc
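For illustration (not taken from those pages), here is a minimal sketch of an indexed, multi-column search through DBI; the table, column names, and search values are assumptions:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:database=records', 'user', 'password',
                       { RaiseError => 1 });

# Index the columns the interface filters on; without an index every
# query scans all 36 million rows. (Hypothetical column names.)
$dbh->do('CREATE INDEX idx_region_year ON records (region, year)');

# Placeholders keep user-supplied CGI values out of the SQL string.
my ($region, $year) = ('EU', 2010);   # in practice these come from the CGI form
my $sth = $dbh->prepare('SELECT * FROM records WHERE region = ? AND year = ?');
$sth->execute($region, $year);

while (my $row = $sth->fetchrow_hashref) {
    print join(',', map { $row->{$_} } sort keys %$row), "\n";
}
$dbh->disconnect;
```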
First, use Text::CSV_XS to parse the CSV file.
Second, what sort of database to use and how it should be structured depends on what sort of searches are going to be made.
If you are doing simple keyword matching, then a key-value store will be very fast. Something like Berkeley DB will do nicely.
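A sketch of that approach using DB_File, the Berkeley DB binding that ships with Perl; the choice of key column and the file names are assumptions:

```perl
use strict;
use warnings;
use Fcntl;
use DB_File;
use Text::CSV_XS;

# Build the store once; the key (assumed here to be the first CSV column)
# is whatever the users will look rows up by.
tie my %db, 'DB_File', 'records.db', O_RDWR|O_CREAT, 0666, $DB_BTREE
    or die "Cannot open records.db: $!";

my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
open my $fh, '<', 'data.csv' or die "data.csv: $!";
while (my $row = $csv->getline($fh)) {
    $db{ $row->[0] } = join "\t", @$row;   # value is the whole row
}
close $fh;

# Later, each lookup is a single B-tree fetch.
my $hit = $db{'some-key'};                 # 'some-key' is a placeholder
print defined $hit ? $hit : 'not found', "\n";
untie %db;
```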
If you have more complex needs, you may want to consider an SQL database like MySQL, PostgreSQL, Oracle, Sybase, or so forth. SQL database tuning and design is an entire field of study on its own. I will offer one bit of advice, though: think very carefully about which indexes you can apply to your fields so that you can maximize query speed.
Sometimes Perl surprises you with its efficiency in handling basic scenarios. That said, if your use cases are all read-only and the information is static/unchanging, I'd first see how the brute-force method of just opening up the file and searching it works.
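A brute-force sketch of that "just scan the file" approach, assuming the condition is a match on the third column of data.csv:

```perl
use strict;
use warnings;
use Text::CSV_XS;

# Brute force: stream the whole file and print rows matching the condition.
# Assumed condition: the third column equals a target value from the CGI form.
my $target = 'some-value';
my $csv    = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });

open my $fh, '<', 'data.csv' or die "data.csv: $!";
while (my $row = $csv->getline($fh)) {
    print join(',', @$row), "\n" if $row->[2] eq $target;
}
close $fh;
```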
Assuming that is unacceptable, you just have to look at DBI, Perl's way of talking to a database, run your millions of inserts once, and then your reads will be fast with a modern RDBMS like MySQL or SQL Server.