Home: Perl Programming Help: Beginner:
Reading results back that were dumped?

 



recruiter
User

Apr 8, 2013, 4:40 PM

Post #1 of 45 (1716 views)
Reading results back that were dumped?

I am not a fan of Data::Dumper when it comes to things like this, but I figured I would give it a try. I am fetching all the records in a table at once and dumping them. I am curious how I can save the dumped results to a file in a form that is more compact to read. I know I can use eval to restore it later, but can I then access the results to match and find what I need, such as the first and last id of the dumped results?


Code
my $sth = $dbh->selectall_arrayref(q/SELECT newsid, newsdate, newshead FROM news ORDER BY newsid/);

my @rows;
for (my $i = 0; $i < @{$sth}; $i++)
{
    my @cols = @{$sth->[$i]};
    my %row = (
        id       => $cols[0],
        date     => $cols[1],
        headline => $cols[2],
    );

    push @rows, \%row;
}

use Data::Dumper;
$Data::Dumper::Sortkeys = sub { [reverse sort keys %{$_[0]}] };

print Dumper(\@rows);
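The eval round trip mentioned here can be sketched like this (a minimal, self-contained example; the rows are made-up stand-ins for the database results, not actual data from the news table):

```perl
use strict;
use warnings;
use Data::Dumper;

# Made-up rows standing in for the database results.
my @rows = (
    { id => 1, date => '2013-04-08', headline => 'first'  },
    { id => 7, date => '2013-04-09', headline => 'second' },
);

# Terse output omits the "$VAR1 =" prefix, so the dump evals
# cleanly back into a reference even under 'use strict'.
local $Data::Dumper::Terse = 1;
my $dumped = Dumper(\@rows);    # this string is what would go to the file

my $restored = eval $dumped;
die "eval failed: $@" if $@;

# First and last id, assuming the rows were stored sorted by id.
my ($first_id, $last_id) = ($restored->[0]{id}, $restored->[-1]{id});
print "first=$first_id last=$last_id\n";
```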



g4143
Novice

Apr 8, 2013, 5:18 PM

Post #2 of 45 (1711 views)
Re: [hwnd] Reading results back that were dumped?

You should be able to print it to an open file.


Code
 
open(my $OFILE, ">", "datafile") or die "Cannot open datafile: $!";

print {$OFILE} Dumper(\@rows);



recruiter
User

Apr 8, 2013, 5:30 PM

Post #3 of 45 (1704 views)
Re: [g4143] Reading results back that were dumped?

Yes, I know I can write to a file with Dumper as you stated. Sorry if my question was not clearly stated. After dumping the table to a file, is there a way to break its data down more easily than the way Data::Dumper dumps it, so I can access it easily?


g4143
Novice

Apr 8, 2013, 6:42 PM

Post #4 of 45 (1699 views)
Re: [hwnd] Reading results back that were dumped?

Instead of dumping it with Dumper, you could try using Storable so that the data is easily fetched back into a data container.


FishMonger
Veteran / Moderator

Apr 8, 2013, 6:44 PM

Post #5 of 45 (1698 views)
Re: [hwnd] Reading results back that were dumped?

Use the Storable module to save the data structure.

http://search.cpan.org/~ams/Storable-2.39/Storable.pm
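A minimal round trip with Storable looks like this (a sketch with made-up data and a temporary file, not the poster's news table):

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);
use File::Temp qw(tempfile);

# Made-up hash keyed by id, in the shape discussed later in the thread.
my %rows = (
    1 => [ '2013-04-08', 'first headline'  ],
    2 => [ '2013-04-09', 'second headline' ],
);

my ($tmp_fh, $file) = tempfile();
close $tmp_fh;

nstore(\%rows, $file);             # write in network byte order (portable)
my $restored = retrieve($file);    # read back a hash reference

print "id 1 headline: $restored->{1}[1]\n";
```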


BillKSmith
Veteran

Apr 8, 2013, 8:28 PM

Post #6 of 45 (1689 views)
Re: [hwnd] Reading results back that were dumped?

The book "Intermediate Perl" devotes most of a chapter to this subject. I do not find its treatment of Dumper very helpful, but the treatment of alternate modules is excellent.
Good Luck,
Bill


recruiter
User

Apr 9, 2013, 6:42 AM

Post #7 of 45 (1676 views)
Re: [BillKSmith] Reading results back that were dumped?

Thanks everyone for the feedback. I've searched and read up on Storable on CPAN and looked for examples, but could not find many. Does anyone have an example of formatting the data using this?


g4143
Novice

Apr 9, 2013, 7:30 AM

Post #8 of 45 (1670 views)
Re: [hwnd] Reading results back that were dumped?


In Reply To
Thanks everyone for the feedback. I've searched and read up on Storable on CPAN and looked for examples, but could not find many. Does anyone have an example of formatting the data using this?


I think you have to format the data before you store it. Could you give us a small sample of the database data and the stored format you're looking for?

If I was doing it I would...


Code
my @cols = @{$sth->[$i]};
$row{$cols[0]} = [$cols[1], $cols[2]];  # now the hash has the id for the key and an array ref to the data

and eliminate @rows altogether.


(This post was edited by g4143 on Apr 9, 2013, 8:14 AM)


recruiter
User

Apr 9, 2013, 9:51 AM

Post #9 of 45 (1646 views)
Re: [g4143] Reading results back that were dumped?

Well, the database is set up as follows: MySQL table -> news, Cols -> (id) as integer and primary key, (date) as date, (head) as text. I am looking for Storable to store it so that I can read it back as a hash or array of refs, access each $_{id}->[0], and find what I am looking for.


g4143
Novice

Apr 9, 2013, 11:26 AM

Post #10 of 45 (1633 views)
Re: [hwnd] Reading results back that were dumped?

This is how I would do it, but I'm relatively new to Perl...


Code
#!/usr/bin/perl

use warnings;
use strict;
use feature qw/state/;
use Storable qw/nstore_fd fd_retrieve/;

my %rows = ();

my @cols  = (123, 'some date', 'newshead');
my @cols1 = (234, 'another date', 'more_newshead');

$rows{$cols[0]}  = [$cols[1], $cols[2]];
$rows{$cols1[0]} = [$cols1[1], $cols1[2]];

foreach ( sort(keys(%rows)) )
{
    state $line_no;
    print ++$line_no, ": $_ has a value-> ${$rows{$_}}[0], ${$rows{$_}}[1]\n";
}

open(my $OFILE, ">", \my $data_str);

nstore_fd(\%rows, $OFILE);

close($OFILE);

open(my $IFILE, "<", \$data_str);

my $ref_data = fd_retrieve($IFILE);

close($IFILE);

if (exists ${$ref_data}{123})
{
    print "key 123 has values ${${$ref_data}{123}}[0], ${${$ref_data}{123}}[1]\n";
}
if (exists ${$ref_data}{234})
{
    print "key 234 has values ${${$ref_data}{234}}[0], ${${$ref_data}{234}}[1]\n";
}
if (exists ${$ref_data}{999})
{
    print "\n";
}

__END__



Laurent_R
Veteran / Moderator

Apr 9, 2013, 1:46 PM

Post #11 of 45 (1615 views)
Re: [g4143] Reading results back that were dumped?

Data::Dumper (with eval) is the poor man's serialization tool. It works, but it is really not ideal.

Storable is much better: it is faster and the stored file is more compact. But it stores the data in a binary format (more compact), which can also be a problem if you want to read the data on another computer or with another programming language.

JSON and YAML data formats may be useful alternatives in such cases.
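For instance, with the core JSON::PP module the same structure round-trips through plain text (a sketch with made-up data; YAML works much the same way through a module such as YAML::XS):

```perl
use strict;
use warnings;
use JSON::PP;

# Made-up id-keyed hash, the shape used earlier in the thread.
my %rows = (
    1 => [ '2013-04-08', 'first headline'  ],
    2 => [ '2013-04-09', 'second headline' ],
);

# Encode to human-readable, language-neutral text ...
my $json = JSON::PP->new->canonical->pretty->encode(\%rows);

# ... and decode it back into a Perl structure.
my $restored = JSON::PP->new->decode($json);

print "id 2 date: $restored->{2}[0]\n";
```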


recruiter
User

Apr 9, 2013, 3:35 PM

Post #12 of 45 (1605 views)
Re: [Laurent_R] Reading results back that were dumped?

Thanks, pretty much what I was looking for. Here is a rough example of how I could do this, I'm guessing?


Code
  


use strict;
use warnings FATAL => 'all';

use lib qw( /home/88/64/2016488/lib/perl5/lib/perl5 );
use hwnd;

use Data::Dumper;
$Data::Dumper::Sortkeys = sub { [sort keys %{$_[0]}] };

use Storable qw( nstore retrieve );

my $dbh = hwnd->new(undef, undef, undef);

my $sth = $dbh->dbh->selectall_arrayref (q/SELECT newsid, newsdate, newshead
    FROM news ORDER BY newsid/);

my %rows = ();

for (my $i = 0; $i < @{$sth}; $i++)
{
    my @cols = @{$sth->[$i]};
    $rows{$cols[0]} = [$cols[1], $cols[2]];
}

nstore( \%rows, 'dumpfile' ) unless (-e 'dumpfile');

my $results = retrieve( 'dumpfile' );

# loop through the results here if I want
# or print Dumper( $results ), "\n";
# or see if certain data exists?

printf "1, %s, %s", ${${$results}{1}}[0], ${${$results}{1}}[1] if (exists ${$results}{1});



Laurent_R
Veteran / Moderator

Apr 9, 2013, 11:27 PM

Post #13 of 45 (1587 views)
Re: [hwnd] Reading results back that were dumped?

If it works correctly, it's probably what you want. You should try to close the file and open it again afterwards, or even feed the file with one script and read from it with another script, to make sure everything works as needed.


In Reply To

Code
  
for (my $i = 0; $i < @{$sth}; $i++)
{
    my @cols = @{$sth->[$i]};
    $rows{$cols[0]} = [$cols[1], $cols[2]];
}



Use the foreach syntax instead.


recruiter
User

Apr 10, 2013, 7:41 AM

Post #14 of 45 (1576 views)
Re: [Laurent_R] Reading results back that were dumped?

I know I can track the count without using a C-style for; it just seemed the natural way to check how many rows are coming in. You said to use foreach instead; how come?


Code
  

my %rows = ();
my $i = 0;

foreach ( @{$sth} )
{
    ++$i if ($_ < @{$sth});

    my @cols = @{$sth->[$i]};
    $rows{$cols[0]} = [$cols[1], $cols[2]];
}



FishMonger
Veteran / Moderator

Apr 10, 2013, 8:34 AM

Post #15 of 45 (1562 views)
Re: [hwnd] Reading results back that were dumped?

It's cleaner and more efficient (and more self-documenting) to do it like this:

Code
for my $i ( 0..$#{$sth} ) {
    my ($newsid, $newsdate, $newshead) = @{$sth->[$i]};
    $rows{$newsid} = [ $newsdate, $newshead ];
}



recruiter
User

Apr 10, 2013, 9:17 AM

Post #16 of 45 (1549 views)
Re: [FishMonger] Reading results back that were dumped?

Thanks FishMonger. Always a help!


Code
  

my %rows = ();

for my $i ( 0..$#{$sth} ) {
    my ($nid, $ndate, $nhead) = @{$sth->[$i]};
    $rows{$nid} = [ $ndate, $nhead ];
}

foreach ( sort(keys(%rows)) ) {
    print "$_, -> ${$rows{$_}}[0], -> ${$rows{$_}}[1]\n";
}



FishMonger
Veteran / Moderator

Apr 10, 2013, 9:29 AM

Post #17 of 45 (1547 views)
Re: [hwnd] Reading results back that were dumped?

You may want to look at using selectall_hashref instead of selectall_arrayref.

That will remove the need for the for loop.

http://search.cpan.org/~timb/DBI-1.625/DBI.pm#selectall_hashref


Chris Charley
User

Apr 10, 2013, 9:45 AM

Post #18 of 45 (1539 views)
Re: [hwnd] Reading results back that were dumped?


Code
print "$_, -> ${$rows{$_}}[0], -> ${$rows{$_}}[1]\n";    

print "$_, -> $rows{$_}[0], -> $rows{$_}[1]\n";


The second form is the correct one, I believe. And the suggestion by FishMonger to use selectall_hashref may be applicable to your case.
Update: or maybe (I'm unsure how the code reads):

Code
print "$_, -> $rows{$_}->[0], -> $rows{$_}->[1]\n";
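In fact all three spellings reach the same element, since the arrow is optional between subscripts; a tiny self-contained check with a made-up row:

```perl
use strict;
use warnings;

my %rows = ( 1 => [ '2013-04-08', 'some headline' ] );

# Explicit dereference, arrow-free, and arrow forms are equivalent.
my $x = ${ $rows{1} }[0];
my $y = $rows{1}[0];
my $z = $rows{1}->[0];

print "$x $y $z\n";
```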



Here is a sample use in a small program (followed by the results of the run).

Code
#!/usr/bin/perl
use strict;
use warnings;
use 5.014;
use DBI;
use Data::Dumper;

my $dbh = DBI->connect(qq{DBI:CSV:});
$dbh->{'csv_tables'}->{'data'} = { 'file' => 'o33.txt',
                                   'csv_sep_char' => ' ' };

my $statement = (qq{
    SELECT lastname, firstname, age, gender, phone
    FROM data
});

my $key_field = 'lastname';
my $href = $dbh->selectall_hashref($statement, $key_field);
$dbh->disconnect();

print Dumper $href;

__END__
contents of o33.txt

lastname firstname age gender phone
mcgee bobby 27 M 555-555-5555
kincaid marl 67 M 555-666-6666
hofhazards duke 22 M 555-696-6969


Code
   
C:\Old_Data\perlp>perl t7.pl
$VAR1 = {
    'hofhazards' => {
        'firstname' => 'duke',
        'lastname' => 'hofhazards',
        'phone' => '555-696-6969',
        'age' => '22',
        'gender' => 'M'
    },
    'mcgee' => {
        'firstname' => 'bobby',
        'lastname' => 'mcgee',
        'phone' => '555-555-5555',
        'age' => '27',
        'gender' => 'M'
    },
    'kincaid' => {
        'firstname' => 'marl',
        'lastname' => 'kincaid',
        'phone' => '555-666-6666',
        'age' => '67',
        'gender' => 'M'
    }
};

C:\Old_Data\perlp>

An observation: in your SQL statement you have an ORDER BY clause, but I think that may not be necessary, because the order of newsid gets lost once inserted into the hash.


(This post was edited by Chris Charley on May 1, 2013, 5:22 PM)


recruiter
User

Apr 10, 2013, 10:57 AM

Post #19 of 45 (1529 views)
Re: [Chris Charley] Reading results back that were dumped?

So by using selectall_hashref, it already breaks the results down as hash references and saves me time?


Code
  

my $sth = $dbh->dbh->selectall_hashref (q/SELECT newsid, newsdate, newshead
    FROM news/, q/newsid/);

foreach ( sort(keys( %$sth )) )
{
    printf "id: %s, %s, %s\n", $_, $sth->{$_}->{newsdate}, $sth->{$_}->{newshead};
}



Laurent_R
Veteran / Moderator

Apr 10, 2013, 10:58 AM

Post #20 of 45 (1529 views)
Re: [hwnd] Reading results back that were dumped?


In Reply To
I know I can track the count without using a C-style for; it just seemed the natural way to check how many rows are coming in. You said to use foreach instead; how come?


Fishmonger gave you an answer. But you could even do without the $i variable, by dereferencing the references at the first level of your array, with a syntax of this form:



Code
foreach my $row_ref (@$sth) {
    my @cols = @{$row_ref};
    # do something with @cols
}


I do not have your data at hand and could not test it, but it should work if I understood your data structure correctly.


(This post was edited by Laurent_R on Apr 10, 2013, 2:29 PM)


Chris Charley
User

Apr 10, 2013, 11:04 AM

Post #21 of 45 (1528 views)
Re: [hwnd] Reading results back that were dumped?

Probably not faster, but it requires less code is all.

And since your id is numeric (I believe), you should probably use a numeric sort.


Code
foreach ( sort(keys( %$sth )) )

Should be:

foreach (sort {$a <=> $b} keys %$sth) {
    ....
}
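The difference shows up as soon as an id passes 9: with the default string sort, '10' comes before '2' (hypothetical ids below):

```perl
use strict;
use warnings;

# Hypothetical id-keyed rows.
my %rows = map { $_ => [ "date $_", "head $_" ] } (2, 10, 1);

my @string_order  = sort keys %rows;                # string compare
my @numeric_order = sort { $a <=> $b } keys %rows;  # numeric compare

print "string:  @string_order\n";
print "numeric: @numeric_order\n";
```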



(This post was edited by Chris Charley on Apr 10, 2013, 12:14 PM)


recruiter
User

Apr 10, 2013, 9:57 PM

Post #22 of 45 (1486 views)
Re: [Chris Charley] Reading results back that were dumped?

OK, I do understand the concept of saving code by looping with a hash ref instead of an array ref. I was using the array ref to get the column names and rows so I could split my results into pages, which I can do either way. But I am trying to use this to store my results in a dump file, read from the dump file, and split my results by the rows needed. Is this possible with Storable?


Chris Charley
User

Apr 11, 2013, 8:12 AM

Post #23 of 45 (1471 views)
Re: [hwnd] Reading results back that were dumped?

split my results by the rows needed

Don't know what you mean. You can retrieve your data from the hash individually with a key, or multiple results using a hash slice.
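A hash slice fetches several keys in one expression (a sketch with made-up ids):

```perl
use strict;
use warnings;

my %news = (
    1 => [ '2013-04-08', 'first'  ],
    2 => [ '2013-04-09', 'second' ],
    3 => [ '2013-04-10', 'third'  ],
);

# @news{...} is a hash slice: the values for ids 1 and 3 at once.
my @wanted = @news{ 1, 3 };

print "$wanted[0][1] $wanted[1][1]\n";
```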

Just a comment on your choice of a variable name for your hash: '$sth' is usually used to mean 'statement handle', and that is not what 'selectall...' returns. Probably better to call it $href (for hash reference), or $news_ref, or something other than $sth. :-)


recruiter
User

Apr 11, 2013, 11:45 AM

Post #24 of 45 (1464 views)
Re: [Chris Charley] Reading results back that were dumped?

What I mean is splitting the row results, as in a regular SQL query where I can use a LIMIT clause for per-page results. How is that done when opening the database, dumping the table results to a file or memory, reading them back in with a hash/hashref, and splitting the records by 5 each? That is why I was asking whether it is possible using Dumper or Storable. Also, with selectall_hashref, is there a certain way I can get the column names into the hashes? For example:

{ 1 } = { date, head }


Chris Charley
User

Apr 11, 2013, 3:31 PM

Post #25 of 45 (1452 views)
Re: [hwnd] Reading results back that were dumped?

To limit results to 5 at a time, maybe something like below.

Code
my @data;
for my $key (keys %hash) {
    push @data, $key;
    if (@data == 5) {
        ... do something with @data
        @data = ();
    }
}

... do something with remainder of @data

To get the column names, I think you would have to store them, like you did with your hash.

my @cols = qw/ newsid newsdate newshead /;
and
nstore( \@cols, 'colnames' ) unless (-e 'colnames');
my $cols = retrieve( 'colnames' );
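A runnable version of that batching idea, peeling off five ids per page with splice (made-up ids; each page is only collected here rather than printed):

```perl
use strict;
use warnings;

# Made-up id-keyed rows, 12 of them.
my %news = map { $_ => [ "date $_", "head $_" ] } 1 .. 12;

my @ids = sort { $a <=> $b } keys %news;
my @pages;

# Take up to 5 ids at a time until none remain.
while (@ids) {
    push @pages, [ splice @ids, 0, 5 ];
}

print scalar(@pages), " pages\n";    # 12 ids split 5 + 5 + 2
```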


(This post was edited by Chris Charley on Apr 11, 2013, 3:59 PM)

 
 

