Dec 18, 2012, 1:23 PM
Need to speed up Perl script
I have written a few programs, but not much in Perl. I recently wrote a program to compare two hash tables (the files are huge), and could benefit from some advice on how to make my code more efficient. The script is taking days to run.
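For context, the comparison itself can be done with direct key lookups, which are cheap per key. This is only a sketch; the small literal hashes and the names `%fhash`/`%mhash` are placeholders standing in for the real datasets:

```perl
use strict;
use warnings;

# Hypothetical small hashes standing in for the real female/male datasets.
my %fhash = (a => 1, b => 2, c => 3);
my %mhash = (a => 1, b => 9);

my (@only_in_f, @mismatched);
for my $key (keys %fhash) {
    if (!exists $mhash{$key}) {
        push @only_in_f, $key;      # key present only in the female hash
    } elsif ($fhash{$key} ne $mhash{$key}) {
        push @mismatched, $key;     # key in both hashes, values differ
    }
}
print scalar(@only_in_f), " keys only in F, ",
      scalar(@mismatched), " mismatched\n";
```

Each `exists` test is a single hash lookup, so the whole comparison is one pass over the smaller hash rather than anything quadratic.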
Data format -- key value
open(my $F, '<', 'F.dump') or die "Cannot open F.dump: $!";
print "File F.dump opened \n";
Step through the input file, taking $blocksize chunks.
# Read in a block of records from the female dataset
# and append to the female hash table
my @array;
for (1 .. $blocksize) {
    my $row = <$F>;
    my @record = split ' ', $row;
    push @array, @record;
    last if eof($F);
}
print scalar(@array), " records from file F.dump read into linear array \n";
print " added to Female hash, now ", scalar(keys %fhash), " records \n";
I think the most obvious problem is that I am reading the lines from the file into an array, creating a hash table from the array, and then appending the resulting hash table to the one built in the previous iteration.
There is probably a way to read the lines from the file directly into the hash table, but I have not been able to get anything to work. Any suggestions?
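One way to skip the intermediate array is to assign into the hash while reading. A minimal sketch, assuming each line of F.dump holds one key followed by its value (the file name and `%fhash` are taken from the snippet above; adjust the `split` if the record layout differs):

```perl
use strict;
use warnings;

# Build %fhash straight from the file, with no intermediate array.
open(my $F, '<', 'F.dump') or die "Cannot open F.dump: $!";
my %fhash;
while (my $row = <$F>) {
    chomp $row;
    next unless $row =~ /\S/;               # skip blank lines
    my ($key, $value) = split ' ', $row, 2; # first field is the key
    $fhash{$key} = $value;                  # insert directly into the hash
}
close $F;
print "Female hash now holds ", scalar(keys %fhash), " records\n";
```

This reads the file once, line by line, so memory is bounded by the hash itself rather than by a duplicate array of every field.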
Full script attached.
(This post was edited by Gorgarian on Dec 18, 2012, 1:26 PM)