Home: Perl Programming Help: Beginner:
Reading results back that were dumped?

 



hwnd
User

Apr 8, 2013, 4:40 PM

Post #1 of 45 (1867 views)
Reading results back that were dumped?

I am not a fan of Data::Dumper for things like this, but I figured I would give it a try. I am getting all records in a table at once and dumping them. I am curious how I can save the dumped results to a file in a form that is more compact to read. And then I know I can use eval to load it back later, but am I able to access the results to match and find what I need? For example, if I want to find the first and last id of the dumped results?


Code
my $sth = $dbh->selectall_arrayref (q/SELECT newsid, newsdate, newshead FROM news ORDER by newsid/);

my @rows;
for (my $i = 0; $i < @{$sth}; $i++)
{
    my @cols = @{$sth->[$i]};
    my %row = (
        id       => $cols[0],
        date     => $cols[1],
        headline => $cols[2],
    );

    push @rows, \%row;
}

use Data::Dumper;
$Data::Dumper::Sortkeys = sub { [reverse sort keys %{$_[0]}] };

print Dumper(\@rows);



g4143
Novice

Apr 8, 2013, 5:18 PM

Post #2 of 45 (1862 views)
Re: [hwnd] Reading results back that were dumped?

You should be able to print it to an open file.


Code
 
open(my $OFILE, ">", "datafile") or die "Cannot open datafile: $!";

print {$OFILE} Dumper(\@rows);
close($OFILE);
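Reading that file back later is the eval step the original post mentions. A minimal self-contained sketch, with the file name and sample rows invented for illustration; setting $Data::Dumper::Terse makes the output a bare structure that eval can return directly:

```perl
use strict;
use warnings;
use Data::Dumper;

# Invented rows, shaped like the id/date/headline hashes in the first post.
my @rows = (
    { id => 1, date => '2013-04-08', headline => 'first'  },
    { id => 2, date => '2013-04-09', headline => 'second' },
);

$Data::Dumper::Terse  = 1;   # emit a bare structure, no "$VAR1 ="
$Data::Dumper::Indent = 1;

open(my $OFILE, ">", "datafile") or die "Cannot write datafile: $!";
print {$OFILE} Dumper(\@rows);
close($OFILE);

# Read it back: slurp the whole file and eval it into a reference.
open(my $IFILE, "<", "datafile") or die "Cannot read datafile: $!";
my $text = do { local $/; <$IFILE> };
close($IFILE);

my $restored = eval $text;
die "eval failed: $@" if $@;

# First and last id, assuming the rows were stored in id order.
print "first: $restored->[0]{id}, last: $restored->[-1]{id}\n";
unlink "datafile";
```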



hwnd
User

Apr 8, 2013, 5:30 PM

Post #3 of 45 (1855 views)
Re: [g4143] Reading results back that were dumped?

Yes, I know I can write to a file with Dumper as you stated. Sorry if my question was not clearly stated. After dumping the table to a file, is there a way to break its data down into something easier to work with than what Data::Dumper produces, so I can access it easily?


g4143
Novice

Apr 8, 2013, 6:42 PM

Post #4 of 45 (1850 views)
Re: [hwnd] Reading results back that were dumped?

Instead of dumping it with Dumper, you could try using Storable so that the data is easily fetched back into a data container.


FishMonger
Veteran / Moderator

Apr 8, 2013, 6:44 PM

Post #5 of 45 (1849 views)
Re: [hwnd] Reading results back that were dumped?

Use the Storable module to save the data structure.

http://search.cpan.org/~ams/Storable-2.39/Storable.pm
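A minimal round-trip sketch with Storable's nstore/retrieve, using invented data shaped like the id => [date, headline] hash discussed below; nstore writes in network byte order so the file can move between machines:

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);

# Invented data: id => [date, headline], as in the thread.
my %rows = (
    1 => [ '2013-04-08', 'first headline'  ],
    2 => [ '2013-04-09', 'second headline' ],
);

nstore(\%rows, 'news.sto') or die "nstore failed";   # portable binary dump

my $restored = retrieve('news.sto');                 # returns a hash reference
print "$_: $restored->{$_}[1]\n" for sort { $a <=> $b } keys %$restored;
unlink 'news.sto';
```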


BillKSmith
Veteran

Apr 8, 2013, 8:28 PM

Post #6 of 45 (1840 views)
Re: [hwnd] Reading results back that were dumped?

The book "Intermediate Perl" devotes most of a chapter to this subject. I do not find its treatment of Dumper very helpful, but the treatment of alternate modules is excellent.
Good Luck,
Bill


hwnd
User

Apr 9, 2013, 6:42 AM

Post #7 of 45 (1827 views)
Re: [BillKSmith] Reading results back that were dumped?

Thanks everyone for the feedback. I've searched and read up on Storable on CPAN and looked for examples, but could not find many. Does anyone have an example of formatting the data using this?


g4143
Novice

Apr 9, 2013, 7:30 AM

Post #8 of 45 (1821 views)
Re: [hwnd] Reading results back that were dumped?


In Reply To
Thanks everyone for the feedback. I've searched and read up on Storable on CPAN and looked for examples, but could not find many. Does anyone have an example of formatting the data using this?


I think you have to format the data before you store it. Could you give us a small sample of the database data and the stored format you're looking for?

If I was doing it I would...


Code
my @cols = @{$sth->[$i]};
$row{$cols[0]} = [ $cols[1], $cols[2] ]; # now the hash has id for the key and an array ref to the data

and eliminate @rows altogether.


(This post was edited by g4143 on Apr 9, 2013, 8:14 AM)


hwnd
User

Apr 9, 2013, 9:51 AM

Post #9 of 45 (1797 views)
Re: [g4143] Reading results back that were dumped?

Well, the database is set up as follows: MySQL table -> news, columns -> (id) as integer and primary key, (date) as date, (head) as text. I am looking for Storable to store it where I can read it back as hash or array refs, so I can access each $_{id}->[0] and find what I am looking for.
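For the first-and-last-id part of the original question: once the data is a hash keyed by numeric id, List::Util's min and max over the keys find them directly. The ids below are invented:

```perl
use strict;
use warnings;
use List::Util qw(min max);

# Invented hash keyed by newsid, as described: id => [date, head].
my %rows = (
    3  => [ '2013-04-01', 'oldest' ],
    17 => [ '2013-04-05', 'middle' ],
    42 => [ '2013-04-09', 'newest' ],
);

my $first_id = min keys %rows;   # numerically smallest key
my $last_id  = max keys %rows;   # numerically largest key
print "first: $first_id ($rows{$first_id}[1]), last: $last_id ($rows{$last_id}[1])\n";
```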


g4143
Novice

Apr 9, 2013, 11:26 AM

Post #10 of 45 (1784 views)
Re: [hwnd] Reading results back that were dumped?

This is how I would do it, but I'm relatively new to Perl.


Code
#!/usr/bin/perl

use warnings;
use strict;
use feature qw/state/;
use Storable qw/nstore_fd fd_retrieve/;

my %rows = ();

my @cols  = (123, 'some date', 'newshead');
my @cols1 = (234, 'another date', 'more_newshead');

$rows{$cols[0]}  = [ $cols[1],  $cols[2]  ];
$rows{$cols1[0]} = [ $cols1[1], $cols1[2] ];

foreach ( sort(keys(%rows)) )
{
    state $line_no;
    print ++$line_no, ": $_ has a value-> ${$rows{$_}}[0], ${$rows{$_}}[1]\n";
}

# Store to an in-memory file (a scalar opened as a filehandle).
open(my $OFILE, ">", \my $data_str) or die "open: $!";

nstore_fd(\%rows, $OFILE);

close($OFILE);

open(my $IFILE, "<", \$data_str) or die "open: $!";

my $ref_data = fd_retrieve($IFILE);

close($IFILE);

if (exists ${$ref_data}{123})
{
    print "key 123 has values ${${$ref_data}{123}}[0], ${${$ref_data}{123}}[1]\n";
}
if (exists ${$ref_data}{234})
{
    print "key 234 has values ${${$ref_data}{234}}[0], ${${$ref_data}{234}}[1]\n";
}
if (exists ${$ref_data}{999})
{
    print "\n";
}

__END__



Laurent_R
Veteran / Moderator

Apr 9, 2013, 1:46 PM

Post #11 of 45 (1766 views)
Re: [g4143] Reading results back that were dumped?

Data::Dumper (with eval) is the poor man's serialization tool. It works, but it is really not ideal.

Storable is much better: it is faster and the stored file is more compact. But it stores the data in a binary format, which can also be a problem if you want to read the data on another computer or with another programming language.

JSON and YAML data formats may be useful alternatives in such cases.
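For example, a JSON round trip with the core JSON::PP module (data invented) keeps the dump human-readable and usable from other languages:

```perl
use strict;
use warnings;
use JSON::PP;   # in core since Perl 5.14

# Invented data: id => [date, headline].
my %rows = (
    1 => [ '2013-04-08', 'first'  ],
    2 => [ '2013-04-09', 'second' ],
);

my $json = JSON::PP->new->canonical->pretty;
my $text = $json->encode(\%rows);   # plain text, readable from any language

my $restored = $json->decode($text);
print "$restored->{2}[1]\n";
```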


hwnd
User

Apr 9, 2013, 3:35 PM

Post #12 of 45 (1756 views)
Re: [Laurent_R] Reading results back that were dumped?

Thanks, that's pretty much what I was looking for. Here is a rough example of how I could do this, I'm guessing?


Code
  


use strict;
use warnings FATAL => 'all';

use lib qw( /home/88/64/2016488/lib/perl5/lib/perl5 );
use hwnd;

use Data::Dumper;
$Data::Dumper::Sortkeys = sub { [sort keys %{$_[0]}] };

use Storable qw( nstore retrieve );

my $dbh = hwnd->new(undef, undef, undef);

my $sth = $dbh->dbh->selectall_arrayref (q/SELECT newsid, newsdate, newshead
    FROM news ORDER by newsid/);

my %rows = ();

for (my $i = 0; $i < @{$sth}; $i++)
{
    my @cols = @{$sth->[$i]};
    $rows{$cols[0]} = [ $cols[1], $cols[2] ];
}

nstore( \%rows, 'dumpfile' ) unless (-e 'dumpfile');

my $results = retrieve( 'dumpfile' );

# loop through the results here if i want
# or print Dumper( $results ), "\n";
# or see if certain data exists?

printf "1, %s, %s", ${${$results}{1}}[0], ${${$results}{1}}[1] if (exists ${$results}{1});



Laurent_R
Veteran / Moderator

Apr 9, 2013, 11:27 PM

Post #13 of 45 (1738 views)
Re: [hwnd] Reading results back that were dumped?

If it works correctly, it's probably what you want. You should try to close the file and open it afterwards, or even feed the file with one script and read from it with another script, to make sure everything works as needed.


In Reply To

Code
  
for (my $i = 0; $i < @{$sth}; $i++)
{
my @cols = @{$sth->[$i]};
$rows{$cols[0]} = [$cols[1], $cols[2]];
}



Use the foreach syntax instead.


hwnd
User

Apr 10, 2013, 7:41 AM

Post #14 of 45 (1727 views)
Re: [Laurent_R] Reading results back that were dumped?

I know you can track the index without using a C-style for; it just seems better suited to checking how many rows are coming in. You said to use foreach instead; how come?


Code
  

my %rows = ();
my $i = 0;

foreach ( @{$sth} )
{
    ++$i if ($_ < @{$sth});

    my @cols = @{$sth->[$i]};
    $rows{$cols[0]} = [ $cols[1], $cols[2] ];
}



FishMonger
Veteran / Moderator

Apr 10, 2013, 8:34 AM

Post #15 of 45 (1713 views)
Re: [hwnd] Reading results back that were dumped?

It's cleaner and more efficient (and more self documenting) to do it like this:

Code
for my $i ( 0..$#{$sth} ) {
    my ($newsid, $newsdate, $newshead) = @{$sth->[$i]};
    $rows{$newsid} = [ $newsdate, $newshead ];
}



hwnd
User

Apr 10, 2013, 9:17 AM

Post #16 of 45 (1700 views)
Re: [FishMonger] Reading results back that were dumped?

Thanks FishMonger. Always a help!


Code
  

my %rows = ();

for my $i ( 0..$#{$sth} ) {
    my ($nid, $ndate, $nhead) = @{$sth->[$i]};
    $rows{$nid} = [ $ndate, $nhead ];
}

foreach ( sort(keys(%rows)) ) {
    print "$_, -> ${$rows{$_}}[0], -> ${$rows{$_}}[1]\n";
}



FishMonger
Veteran / Moderator

Apr 10, 2013, 9:29 AM

Post #17 of 45 (1698 views)
Re: [hwnd] Reading results back that were dumped?

You may want to look at using selectall_hashref instead of selectall_arrayref.

That will remove the need of the for loop.

http://search.cpan.org/~timb/DBI-1.625/DBI.pm#selectall_hashref


Chris Charley
User

Apr 10, 2013, 9:45 AM

Post #18 of 45 (1690 views)
Re: [hwnd] Reading results back that were dumped?


Code
print "$_, -> ${$rows{$_}}[0], -> ${$rows{$_}}[1]\n";    

print "$_, -> $rows{$_}[0], -> $rows{$_}[1]\n";


The second form is the correct one, I believe. And the suggestion by FishMonger to use selectall_hashref may be applicable to your case.
Update: or maybe (unsure how the code reads):

Code
print "$_, -> $rows{$_}->[0], -> $rows{$_}->[1]\n";



Here is a sample use in a small program, (followed by the results of the run).

Code
#!/usr/bin/perl
use strict;
use warnings;
use 5.014;
use DBI;
use Data::Dumper;

my $dbh = DBI->connect(qq{DBI:CSV:});
$dbh->{'csv_tables'}->{'data'} = { 'file'         => 'o33.txt',
                                   'csv_sep_char' => ' ' };

my $statement = qq{
    SELECT lastname, firstname, age, gender, phone
    FROM data
};

my $key_field = 'lastname';
my $href = $dbh->selectall_hashref($statement, $key_field);
$dbh->disconnect();

print Dumper $href;

__END__
contents of o33.txt

lastname firstname age gender phone
mcgee bobby 27 M 555-555-5555
kincaid marl 67 M 555-666-6666
hofhazards duke 22 M 555-696-6969


Code
   
C:\Old_Data\perlp>perl t7.pl
$VAR1 = {
'hofhazards' => {
'firstname' => 'duke',
'lastname' => 'hofhazards',
'phone' => '555-696-6969',
'age' => '22',
'gender' => 'M'
},
'mcgee' => {
'firstname' => 'bobby',
'lastname' => 'mcgee',
'phone' => '555-555-5555',
'age' => '27',
'gender' => 'M'
},
'kincaid' => {
'firstname' => 'marl',
'lastname' => 'kincaid',
'phone' => '555-666-6666',
'age' => '67',
'gender' => 'M'
}
};

C:\Old_Data\perlp>

An observation: in your SQL statement you have an ORDER BY clause, but I think it may not be necessary, because the order of newsid is lost once the rows are inserted into the hash.


(This post was edited by Chris Charley on May 1, 2013, 5:22 PM)


hwnd
User

Apr 10, 2013, 10:57 AM

Post #19 of 45 (1680 views)
Re: [Chris Charley] Reading results back that were dumped?

So by using selectall_hashref, it breaks things down into hash references already and saves me time?


Code
  

my $sth = $dbh->dbh->selectall_hashref (q/SELECT newsid, newsdate, newshead
    FROM news/, q/newsid/);

foreach ( sort(keys( %$sth )) )
{
    printf "id: %s, %s, %s\n", $_, $sth->{$_}->{newsdate}, $sth->{$_}->{newshead};
}



Laurent_R
Veteran / Moderator

Apr 10, 2013, 10:58 AM

Post #20 of 45 (1680 views)
Re: [hwnd] Reading results back that were dumped?


In Reply To
I know you can check the increments without using C style for, It just seems more likely for checking how many rows are coming in. You said use foreach rather, how come?


Fishmonger gave you an answer. But you could even do without the $i variable, by dereferencing the references at the first level of your array, with a syntax of this form:



Code
foreach my $row_ref (@$sth) {
    my @cols = @{$row_ref};
    # do something with @cols
}


I do not have your data at hand and could not test it, but it should work if I understood your data structure correctly.


(This post was edited by Laurent_R on Apr 10, 2013, 2:29 PM)


Chris Charley
User

Apr 10, 2013, 11:04 AM

Post #21 of 45 (1679 views)
Re: [hwnd] Reading results back that were dumped?

Not faster probably, but requires less code is all.

And since your id is numeric, (I believe), you should probably use a numeric sort.


Code
foreach ( sort(keys( %$sth )) )

Should be:

foreach (sort {$a <=> $b} keys %$sth) {
    ....
}



(This post was edited by Chris Charley on Apr 10, 2013, 12:14 PM)


hwnd
User

Apr 10, 2013, 9:57 PM

Post #22 of 45 (1637 views)
Re: [Chris Charley] Reading results back that were dumped?

Ok, I do understand the concept of saving code by looping with a hash ref instead of an array ref. I was using the array ref to get the column names and rows so I could split my results into pages, which I can do either way. But I am trying to use this to store my results in a dump file, read from the dump file, and split my results by the rows needed. Is this possible with Storable?


Chris Charley
User

Apr 11, 2013, 8:12 AM

Post #23 of 45 (1622 views)
Re: [hwnd] Reading results back that were dumped?

split my results by the rows needed

Don't know what you mean. You can retrieve your data from the hash individually with a key, or multiple results using a hash slice.

Just a comment on your choice of a variable name for your hash: '$sth' is usually used to mean 'statement handle', and that is not what 'selectall...' returns. Probably better to call it $href (for hash reference), or $news_ref, or something other than $sth. :-)
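On the hash slice point: a slice pulls several values out in one expression, in the order of the keys given (data invented):

```perl
use strict;
use warnings;

# Invented hash keyed by newsid.
my %news = (
    1 => 'first head',
    2 => 'second head',
    3 => 'third head',
);

# Single key:
my $one = $news{2};

# Hash slice: several values at once, ordered as the keys are listed.
my @some = @news{ 1, 3 };
print "@some\n";
```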


hwnd
User

Apr 11, 2013, 11:45 AM

Post #24 of 45 (1615 views)
Re: [Chris Charley] Reading results back that were dumped?

What I mean is splitting the row results: in a regular SQL query I can use a LIMIT clause for per-page results. How is that achievable when dumping your table results to a file or memory, reading them back in with a hash/hashref, and splitting the records by 5 each? That is why I was asking whether it is possible using Dumper or Storable. Also, with selectall_hashref, is there a certain way I can get the column names into hashes? For example:

{ 1 } = { date, head }


Chris Charley
User

Apr 11, 2013, 3:31 PM

Post #25 of 45 (1603 views)
Re: [hwnd] Reading results back that were dumped?

To limit results to 5 at a time, maybe something like below.

Code
my @data;
for my $key (keys %hash) {
    if (@data == 5) {
        ... do something with @data
        @data = ();
    }
    else {
        push @data, $key;
    }
}

... do something with remainder of @data

To get the column names, I think you would have to store them, like you did with your hash.

my @cols = qw/ newsid newsdate newshead /;
and
nstore( \@cols, 'colnames') unless (-e 'colnames');
my $cols= retrieve( 'colnames');


(This post was edited by Chris Charley on Apr 11, 2013, 3:59 PM)


hwnd
User

Apr 12, 2013, 1:57 PM

Post #26 of 45 (1149 views)
Re: [Chris Charley] Reading results back that were dumped?

Ok, I see what you mean. My question is this: by using Storable to (store, retrieve) to a file or (freeze, thaw) to memory, the format would be hash ref -> hash -> hashes, since I am using selectall_hashref for my needed data. Should I format the data I am retrieving before writing it to my dump file, so that on one line it would look like this:


Code
  

id, date, head



Or can I just use the dump file Storable stores and reference to find what I need by using references to my hashes?


Laurent_R
Veteran / Moderator

Apr 12, 2013, 2:03 PM

Post #27 of 45 (1147 views)
Re: [hwnd] Reading results back that were dumped?

When you "thaw" a data structure stored ("frozen") with Storable, you recreate the data structure in memory.
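A freeze/thaw sketch (data invented): the thawed result is a fresh copy in memory, independent of the original structure:

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# Invented data: id => [date, head].
my %rows = ( 1 => [ '2013-04-08', 'head' ] );

my $frozen = freeze(\%rows);   # serialized byte string, held in memory
my $copy   = thaw($frozen);    # a fresh, independent copy of the structure

$copy->{1}[1] = 'changed';
print "$rows{1}[1]\n";         # the original is untouched
```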


hwnd
User

Apr 15, 2013, 6:57 AM

Post #28 of 45 (1123 views)
Re: [Laurent_R] Reading results back that were dumped?

Regarding closing my filehandles before each run of this script: can I just use a simple method like the one below, instead of digging into CORE to find opened handles?


Code
  

# Close if open handles from previous?

close_handles(q/dumpfile/);

# Go ahead and open/store the hash

store ( \%hash, q/dumpfile/ );

sub close_handles {
    my $file = shift;
    my $fh;

    return unless defined $file;

    return close($fh) if open $fh, '<', $file ||
        defined fileno($fh);
}



FishMonger
Veteran / Moderator

Apr 15, 2013, 8:22 AM

Post #29 of 45 (1119 views)
Re: [hwnd] Reading results back that were dumped?


Quote
# Close if open handles from previous?

What "previous"? Lexical filehandles will automatically be closed when they go out of scope.

Your close_handles sub doesn't make sense. Just use the -r file test to verify that the file is readable, and use proper error handling on all open calls.


(This post was edited by FishMonger on Apr 15, 2013, 8:24 AM)


hwnd
User

Apr 17, 2013, 1:24 PM

Post #30 of 45 (1099 views)
Re: [FishMonger] Reading results back that were dumped?

Instead, using Storable I could just do something like this?


Code
  

## SNIPPET ##

use Data::Dumper;
use Storable qw/ store_fd fd_retrieve /;

my $href = $dbh->selectall_hashref ( q/SELECT rec_id, rec_date,
    rec_head FROM news/, q/rec_id/ );

store_DATA($href);

undef $/;
my $ref = fd_retrieve \*DATA;

print Dumper $ref;

sub store_DATA {
    my ($data) = @_;

    open my $fh, "+>", undef or die "$0: $!\n";

    *DATA = $fh;
    store_fd $data, \*DATA or die "$0: print: $!";

    seek DATA, 0, 0 or die "$0: seek: $!";
}



FishMonger
Veteran / Moderator

Apr 17, 2013, 1:36 PM

Post #31 of 45 (1096 views)
Re: [hwnd] Reading results back that were dumped?

I see several problems with that code.

What happens when you try that?


hwnd
User

Apr 17, 2013, 1:48 PM

Post #32 of 45 (1089 views)
Re: [FishMonger] Reading results back that were dumped?

Well, as I stated, it was a snippet of code from my script. When I run it, it opens the anonymous temporary file, stores the data, and dumps it back out for me.


FishMonger
Veteran / Moderator

Apr 17, 2013, 1:54 PM

Post #33 of 45 (1086 views)
Re: [hwnd] Reading results back that were dumped?

You should explain what you really need to accomplish.

Are you wanting to store this data structure in a file to be used later in this script, or to be retrieved and used in a script executed at some later time?

If it's to be used later in the same script, it makes no sense to store it in an external file. Just keep it in a var which has proper scope.

If it needs to be retrieved in a later script, then why not have that script query the db directly?


hwnd
User

Apr 17, 2013, 2:04 PM

Post #34 of 45 (1083 views)
Re: [FishMonger] Reading results back that were dumped?

Yes, I am sorry for not clearing things up. I want to retrieve and use it in the same script at some later time. I want to be able to access the stored data to check whether items exist and access them, which I can do using the hash references.


FishMonger
Veteran / Moderator

Apr 17, 2013, 2:08 PM

Post #35 of 45 (1078 views)
Re: [hwnd] Reading results back that were dumped?


Quote
I want to retrieve and use it in the same script at some later time.


Then what you're doing is weird, wasteful and inefficient.

Just use the hash ref retrieved from the db and forget about storing the data structure in an external or in-memory file.


hwnd
User

Apr 17, 2013, 2:30 PM

Post #36 of 45 (1073 views)
Re: [FishMonger] Reading results back that were dumped?

Ok, that makes more sense to me with what you have explained just now. In that case:


Code
  

my @items;
for my $k ( keys %$href )
{
    if (@items == 10) {
        # how would i get every 10 records out of the hash?
        # would i just use the c style (for my $i, $i < 10 $i++) ?

        @items = (); # clear the data
    }
    else {
        push @items, $k;
    }

    # rest of data?
}



FishMonger
Veteran / Moderator

Apr 17, 2013, 2:55 PM

Post #37 of 45 (1070 views)
Re: [hwnd] Reading results back that were dumped?

I'm not sure what you're trying to achieve, but I suspect that you're trying to break up the data so you can "page" over the results. If that's the case, then you should look at Data::Page or one of its brothers.

http://search.cpan.org/~lbrocard/Data-Page-2.02/lib/Data/Page.pm

and/or you should look at using the modulus operator.
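A sketch of the modulus idea over a sorted key list; the 23 ids and the 10-per-page size are invented to match the thread's numbers:

```perl
use strict;
use warnings;

# Invented sorted id list and page size.
my @ids      = (1 .. 23);
my $per_page = 10;

my @pages;
my @page;
for my $i (0 .. $#ids) {
    push @page, $ids[$i];
    if (($i + 1) % $per_page == 0) {   # a full page every $per_page items
        push @pages, [ @page ];
        @page = ();
    }
}
push @pages, [ @page ] if @page;       # leftover partial page

print scalar(@pages), " pages; last page holds ", scalar(@{ $pages[-1] }), " ids\n";
```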


Chris Charley
User

Apr 17, 2013, 4:27 PM

Post #38 of 45 (1059 views)
Re: [hwnd] Reading results back that were dumped?

A better approach:

Code
my @items;
for my $k ( keys %$href )
{
    push @items, $k;
    if (@items == 10) {
        # print one page of ten items
        for my $id (@items) {
            print $id, ' ', $href->{$id}{date}, ' ', $href->{$id}{head}, "\n";
        }

        @items = (); # clear the data
    }
}

# process any leftover items after the loop above
for my $id (@items) {
    print $id, ' ', $href->{$id}{date}, ' ', $href->{$id}{head}, "\n";
}



FishMonger
Veteran / Moderator

Apr 17, 2013, 4:43 PM

Post #39 of 45 (1055 views)
Re: [Chris Charley] Reading results back that were dumped?

Hmm, that's just a very verbose way to write:

Code
for my $key ( keys %$href ) {
    print "$key $href->{$key}{date} $href->{$key}{head}\n";
}


More context from the OP would be needed to suggest any specific code.


(This post was edited by FishMonger on Apr 17, 2013, 4:46 PM)


Chris Charley
User

Apr 17, 2013, 6:04 PM

Post #40 of 45 (1049 views)
Re: [FishMonger] Reading results back that were dumped?

Right, but he wants to limit each page to 10 items. Somewhere in that print routine, he would probably direct it to a separate page somehow?


FishMonger
Veteran / Moderator

Apr 17, 2013, 6:50 PM

Post #41 of 45 (1046 views)
Re: [Chris Charley] Reading results back that were dumped?

That's why I suggested Data::Page

Why reinvent the wheel?


hwnd
User

Apr 17, 2013, 8:16 PM

Post #42 of 45 (1038 views)
Re: [FishMonger] Reading results back that were dumped?

Right, I am familiar with Data::Page; I am trying to do it another way because I am not able to install certain modules on my server. Thanks so far for the help and feedback, though. That was the reason I initially used selectall_arrayref to split the results with MySQL's LIMIT, and then tried to use Storable to store the complete hash results in memory or a file and page through the dump file by results.


(This post was edited by hwnd on Apr 17, 2013, 8:29 PM)


FishMonger
Veteran / Moderator

Apr 17, 2013, 8:43 PM

Post #43 of 45 (1033 views)
Re: [hwnd] Reading results back that were dumped?


Quote

I am trying to do it another way for the reason of not being able to install certain modules on my server

What makes you think you can't install "certain modules"? The only reason I can think of is that it's a requirement of a class homework assignment. Is that the case here?

If you can write to your home directory, you can install pretty much any module you wish.


FishMonger
Veteran / Moderator

Apr 17, 2013, 8:51 PM

Post #44 of 45 (1031 views)
Re: [hwnd] Reading results back that were dumped?

If you don't wish to (not can't) install modules, then look over the source code of those modules, learn from them, and utilize the sections of them that you need to accomplish your task.


hwnd
User

Apr 18, 2013, 5:33 PM

Post #45 of 45 (1010 views)
Re: [FishMonger] Reading results back that were dumped?

Ok, so I have managed to get everything working. Thanks for the help, everyone; I have learned multiple ways of doing this.


Code
   

use strict;
use warnings;

use constant PER_PAGE => 10;

use lib qw( /home/88/64/2016488/lib/perl5/lib/perl5 );
use hwnd;

use CGI;
use Data::Page;

my $q = CGI->new;
my $h = hwnd->new;

my $page_id = defined $q->param('page') ? $q->param('page') : 1;

my $dbh = $h->connect('DBI:mysql:*****:*****',
                      '*****', '*****');

my $href = $dbh->selectall_hashref ( q/SELECT rec_id, rec_date,
    rec_head FROM news/, q/rec_id/ );

my @keys = ( defined $href ? sort {$a <=> $b} keys %$href : () );

my $page  = Data::Page->new(scalar @keys, PER_PAGE, $page_id);
my @items = $page->splice(\@keys);

print $q->header;

print $q->h1("Page $page_id (of ", $page->last_page, ')');

my $i = 0;
foreach (@items) {
    print ++$i, ": $_, $href->{$_}->{rec_date}, $href->{$_}->{rec_head}\n";
}

print $q->a({-href => '?page=' . $page->previous_page}, 'Prev') if $page->previous_page;
print $q->a({-href => '?page=' . $page->next_page},     'Next') if $page->next_page;



(This post was edited by hwnd on Apr 18, 2013, 5:39 PM)

 
 

