
Search large CSV files with multiple search criteria on unix

I have several large CSV files that I need to search with one to many parameters; whenever I find a hit, I need to save that line in another file. Below is an example of Perl code that runs successfully but is very slow against a 5 GB file. Any suggestions on speeding this up would be greatly appreciated.

#!/usr/bin/env perl
use Text::CSV_XS;

$numArgs = $#ARGV;

#First Parameter is the input file name
$Finput = $ARGV[0];
chomp($Finput);

#Second Parameter is the output file name
$Foutput = $ARGV[1];
chomp($Foutput);

# Open the Control file but quit if it doesn't exist
open(INPUT1, $Finput) or die "The Input File $Finput could not be found.\n";
open(OUTPUT1, ">$Foutput") or die "Cannot open output $Foutout file.\n";


my $csv = Text::CSV_XS->new();
open my $FH, "<", $Finput;

while (<$FH>) {
    $csv->parse($_);
    my @fields = $csv->fields;

    if ($fields[0] == 10000) {
        if ($fields[34] eq 'abcdef') {
            if ($fields[103] == 9999) {
                print OUTPUT1 "$_\n";
            }
        }
    }
}


I don't know your data, or your criteria.

But if we take your example above, then I would try trivial tests against the lines BEFORE doing the CSV handling.

For example (note: my Perl is terrible; this is meant to be illustrative rather than exact):

if (/10000.*abcdef.*9999/) {
    $csv->parse($_);
    my @fields = $csv->fields;
    if ($fields[0] == 10000) {
        ...
    }
}

Basically, you do some simpler, faster checks to more quickly DISQUALIFY rows before performing the additional processing necessary to qualify them.

Clearly if more of your rows match than do not, or if the check for simple qualification isn't really practical, then this technique won't work.

Done right, CSV parsing is a bit expensive. (In fact, you have an error here in assuming that a single line of CSV is a single record. That may be true for your data, but CSV allows embedded newlines, so it's not an assumption that can be made for all CSV.)

So, it's good to not have to pay the price of parsing it if, "at a glance", the line isn't going to match anyway.
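
Putting the two together, here is a rough but complete sketch of that approach (assuming, as in your sample, that every record fits on one line, and using the literal values from your example as the criteria):

#!/usr/bin/env perl
use strict;
use warnings;
use Text::CSV_XS;

my ($Finput, $Foutput) = @ARGV;

open my $in,  "<", $Finput  or die "Cannot open input $Finput: $!\n";
open my $out, ">", $Foutput or die "Cannot open output $Foutput: $!\n";

my $csv = Text::CSV_XS->new({ binary => 1 });

while (my $line = <$in>) {
    # Cheap disqualification first: skip the line unless all three
    # literal values appear somewhere in it.
    next unless $line =~ /10000/ && $line =~ /abcdef/ && $line =~ /9999/;

    # Only now pay for the CSV parse and the exact per-field checks.
    $csv->parse($line) or next;
    my @fields = $csv->fields;

    print {$out} $line
        if $fields[0] == 10000
        && $fields[34] eq 'abcdef'
        && $fields[103] == 9999;
}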


This is code that runs "successfully"? I find that hard to believe.

if ($fields[0] = 10000) {
    if ($fields[34] = 'abcdef') {
        if ($fields[103] = 9999) {

These are not checks for equality, but assignments. All of these if-clauses will always return true. What you probably wanted here was == and eq, not =.

You also open two filehandles on the input file, and use the CSV module in the wrong way. I'm not convinced that these errors, minor as they are, are what makes the script slow, but they do mean it would be printing every record in that 5 GB file.

Here's a revised version of your script.

use strict;
use warnings;
use Text::CSV;
use autodie;

my $Finput = $ARGV[0];
my $Foutput = $ARGV[1];

open my $FH, "<", $Finput;
open my $out, ">", $Foutput;

# binary => 1 copes with embedded newlines and quoted non-ASCII data;
# eol => "\n" makes $csv->print terminate each output record.
my $csv = Text::CSV->new({ binary => 1, eol => "\n" });

while (my $row = $csv->getline($FH)) {
    my @fields = @$row;
    if ($fields[0] == 10000) {
        if ($fields[34] eq 'abcdef') {
            if ($fields[103] == 9999) {
                $csv->print($out, $row);
            }
        }
    }
}

The autodie pragma takes care of checking the return value from open for us (and other things). use strict; use warnings; will make our brains hurt less. Oh, and I am using Text::CSV rather than the _XS version; Text::CSV uses the _XS implementation under the hood anyway if it is installed, so you don't lose the speed.
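
Since the question mentions anywhere from one to many criteria, here is a hypothetical generalisation of the same loop that takes col=value pairs on the command line instead of hard-coding the three tests (the col=value syntax and the plain string comparison are my own choices, not something from the original script):

#!/usr/bin/env perl
# Hypothetical usage: perl filter.pl input.csv matches.csv 0=10000 34=abcdef 103=9999
use strict;
use warnings;
use autodie;
use Text::CSV;

my ($Finput, $Foutput, @criteria) = @ARGV;

# Every remaining argument is a zero-based column index and the exact
# value that column must hold.
my %want = map { split /=/, $_, 2 } @criteria;

open my $FH,  "<", $Finput;
open my $out, ">", $Foutput;

my $csv = Text::CSV->new({ binary => 1, eol => "\n" });

ROW: while (my $row = $csv->getline($FH)) {
    for my $col (keys %want) {
        next ROW unless defined $row->[$col] && $row->[$col] eq $want{$col};
    }
    $csv->print($out, $row);
}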


You could use grep "{searchstring}" filename1.csv filename2.csv > savefile.txt on the files directly. Or maybe you want to read filename.csv line by line:

#!/bin/bash
# Read filename.csv line by line and keep every line that contains the needle.
exec 3<filename.csv
while read -r haystack <&3
do
  echo "$haystack" | grep "{needle}" >> result.txt
done
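
If all you need is that kind of raw pre-filter before doing the exact field checks in Perl, a one-liner in the same spirit (my own sketch, using the literal values from the question) would be:

perl -ne 'print if /10000/ && /abcdef/ && /9999/' filename.csv > result.txt

It keeps only the lines containing all three values; the full script can then do the exact column checks on the much smaller result.txt.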
