Perl Programming Help: Beginner: Making a hash recognize unique data

 



mbuehl
New User

Dec 13, 2013, 12:01 PM

Post #1 of 4 (894 views)
Making a hash recognize unique data

Hey all, I'm working on an intro to comp. prog. final, and I'm coding in Perl. I'm trying to use a hash to filter a list of IP addresses and push all the unique ones I find into an array. For some reason, though, it's only holding one of the two IPs. Here's my full code; if you notice any other glaring mistakes, feel free to correct me! The particular part I'm asking about comes just before the <SFO> print statement. Thanks in advance!


Code
#!/usr/bin/perl -w 

use strict;
use warnings;

#declaring variables

my @broken_data;
my @source_ip;
my @source_ip_mod;
my @destin_ip;
my @destin_ip_mod;
my $file_input;
my $file_output;
my $countline = 0; #set counter to 0
my $countuser = 0;
my $countpass = 0;


#Command to open the source file for use. Gives user the option of what file to look at.
print "Please enter a file name for diagnosis. \n";
$file_input = <STDIN>; #file name input
chomp $file_input;
open SF, $file_input or die "Couldn't open Source File: $!\n"; #open the users file


#allows the user to name the File Output
print "Please enter a file name for the summary output. \n";
$file_output = <STDIN>; #collects name
chomp $file_output; #chomps the input
open(SFO, ">$file_output") or die "Couldn't create $file_output. \n"; #creates a file in current directory for output

while (<SF>) #read the source file line by line
{
    $countline++; #counts each line

    if ($_ =~ /USER/i)
    {
        $countuser++;
    }

    if ($_ =~ /PASS/i)
    {
        $countpass++;
    }

    chomp ($_);

    if ($_ =~ /^22:28/) #look for lines starting with 22:28; ^ anchors the match to the beginning of the string
    {
        @broken_data = split (' ', $_); #takes the data and splits it at the space
        print "$broken_data[0], $broken_data[2], $broken_data[4], $broken_data[-1]\n"; #prints the elements I need to work with

        print "\tTime: $broken_data[0]\n"; #prints the time portion of the array

        @source_ip = split('\.', $broken_data[2]); #splits the source ip at the periods

        print "\tSource IP: $source_ip[0].$source_ip[1].$source_ip[2].$source_ip[3] Port: $source_ip[-1]\n"; #prints the source IP

        @destin_ip = split('\.', $broken_data[4]); #splits the destination ip at the periods
        @destin_ip_mod = split (':', $destin_ip[4]); #cuts off the trailing colon
        $destin_ip[4] = $destin_ip_mod[0];

        print "\tDestination IP: $destin_ip[0].$destin_ip[1].$destin_ip[2].$destin_ip[3] Port: $destin_ip[4]\n";

        print "\tPacket size: $broken_data[-1].\n";
    }
}

my @unique_source_ip; #creates an array for the unique source ips
my %seen_source_ip; #hash used to track which ips have already been seen
foreach my $value ($broken_data[2]) #foreach loop, declaring $value.
{
    if (! $seen_source_ip{$value})
    {
        push @unique_source_ip, $value;
        $seen_source_ip{$value} = 1;
    }
}
my $unique_source_cnt = @unique_source_ip;


#print statement to write to the File that was created
print SFO
"Summary Section: \n
\tTotal number of lines in the file: $countline\n
\tRange of time the file encompasses:\n
\tStarting time: 22:28:28.374595 (Approx. 10:28)\n
\tEnding time: 22:28:44.593813 (Approx. 10:28) \n
\tTotal Time: 16.219218 seconds \n
\tTotal number of distinct SOURCE ip addresses: $unique_source_cnt \n
\tTotal number of distinct DESTINATION ip addresses: \n
\tListing of distinct SOURCE ip addresses: \n
\tListing of distinct DESTINATION ip addresses: \n
\tTotal number of distinct SOURCE TCP ports: \n
\tTotal number of distinct DESTINATION TCP ports: \n
\tListing of distinct SOURCE TCP ports: \n
\tListing of distinct DESTINATION TCP ports: \n
\tTotal number of times phrases were used: \n
\tUSER (variations thereof): $countuser \n
\tPASS (variations thereof): $countpass\n
\n
Detail Section: \n
\tSource IP address activity by port over time: \n
\tMean, Median packet size for above: \n
\tDetail IP address activity by port over time: \n
\tMean, Median packet size for above: \n
\tAny and all interesting text within the DATA section of the file. \n
\tIn chronological order \n";



(This post was edited by mbuehl on Dec 13, 2013, 12:09 PM)


FishMonger
Veteran / Moderator

Dec 13, 2013, 12:58 PM

Post #2 of 4 (877 views)
Re: [mbuehl] Making a hash recognize unique data

There are lots of problems with your code, but I don't have time right now to go over all of them, so I'll focus on the part you're asking about.

@broken_data is being overwritten each time your /^22:28/ regex matches. So, when the while loop completes, that array will only contain 1 row of data.


Quote

Code
foreach my $value ($broken_data[2])


$broken_data[2] is a single element, not a list. So, your foreach loop will only iterate once.

Those are the 2 key reasons why you're only getting 1 IP.
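
A quick way to see the second point in isolation (the addresses below are made up, purely for illustration):

Code
#!/usr/bin/perl
use strict;
use warnings;

my @broken_data = ('22:28:30.111111', 'IP', '10.0.0.2.5051');

# A single array element is a one-item list, so this loop runs exactly once:
foreach my $value ($broken_data[2]) {
    print "single element: $value\n";   # prints only 10.0.0.2.5051
}

# A whole array, on the other hand, is visited element by element:
foreach my $value (@broken_data) {
    print "whole array: $value\n";      # prints all three elements
}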


mbuehl
New User

Dec 13, 2013, 1:52 PM

Post #3 of 4 (871 views)
Re: [FishMonger] Making a hash recognize unique data

So how do I get the loop to look at the entire list rather than just that 1 row of data? Do I have to store them all to a separate array as the regex runs or something like that?


Laurent_R
Enthusiast / Moderator

Dec 14, 2013, 4:57 AM

Post #4 of 4 (814 views)
Re: [mbuehl] Making a hash recognize unique data

Hi,

You probably want to create an array for storing your IP addresses.

Declare an @IP_list array before entering your while loop.

After this line:

Code
@broken_data = split (' ', $_); #takes the data and splits it at the space


add this one:

Code
push @IP_list, $broken_data[2];


and change this line:


Code
foreach my $value ($broken_data[2]) #foreach loop, declaring $value.

to this:

Code
foreach my $value (@IP_list) #...


This should solve your immediate problem. But, as FishMonger said, there are a number of other issues with your code.
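
Putting those three changes together, a stripped-down, self-contained version of just that part might look roughly like this (the __DATA__ lines are made-up sample records, only there so the snippet can run on its own):

Code
#!/usr/bin/perl
use strict;
use warnings;

my @broken_data;
my @IP_list;                              # declared before the while loop

while (<DATA>) {                          # stands in for your while (<SF>) loop
    chomp;
    if (/^22:28/) {
        @broken_data = split ' ', $_;     # same split as in your script
        push @IP_list, $broken_data[2];   # remember every source IP we see
    }
}

my @unique_source_ip;                     # unique source ips, in order of first appearance
my %seen_source_ip;                       # tracks which ips we have already seen
foreach my $value (@IP_list) {            # iterate over ALL collected IPs
    if (! $seen_source_ip{$value}) {
        push @unique_source_ip, $value;
        $seen_source_ip{$value} = 1;
    }
}

my $unique_source_cnt = @unique_source_ip;
print "$unique_source_cnt unique source IPs: @unique_source_ip\n";

__DATA__
22:28:28.374595 IP 192.168.1.5.2121 > 10.0.0.9.80: length 60
22:28:30.111111 IP 172.16.0.7.3333 > 10.0.0.9.80: length 52
22:28:31.222222 IP 192.168.1.5.2121 > 10.0.0.9.80: length 44

Note that $broken_data[2] still has the source port glued onto the end of the address, exactly as in your own script; you would split that off the same way you already do for the per-line printing.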

 
 

