
alternatives to File::Tail for performance

 



young_matthewd
New User

Apr 2, 2009, 12:37 AM

Post #1 of 2 (1075 views)
alternatives to File::Tail for performance

First, check out this related forum post: http://www.perlmonks.org/index.pl?node_id=162034

I'm using File::Tail to continuously read lines from a file that can grow heavily while services are under test. The Perl process kicks off File::Tail via tie with the following parameters (sketched in code after this list):

interval of 15 (seconds)
maxinterval also 15
tail equal to zero (start reading at the end of the file)
adjustafter of zero, so File::Tail never waits to adjust the interval (i.e. the interval stays constant)
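
Here's a minimal sketch of that setup; the log path is just a placeholder:

Code
use File::Tail;

# Tie a filehandle to File::Tail with the parameters described above.
tie *LOG, "File::Tail", (
    name        => "/path/to/service.log",  # placeholder path
    interval    => 15,   # check for new data every 15 seconds
    maxinterval => 15,   # never sleep longer than 15 seconds
    tail        => 0,    # start reading at the end of the file
    adjustafter => 0,    # never stretch the interval
);

while (my $line = <LOG>) {
    # handle $line ...
}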

On a Solaris Sun-Fire V890 the Perl script can eat up to 5 percent of the CPU, which is a lot considering I have up to 10 of these processes running to monitor the service files.

I'm wondering if the community has suggestions on either using another module or going straight to Unix shell tools to improve performance? The nice thing with File::Tail is that if the file rolls over, the tailing stays with the original file name.

It may be that with large amounts of file data I need to move away from hard-coded intervals and let File::Tail throttle dynamically, so that under heavy data it sleeps only 5 seconds instead of 15.
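
If I went that route, I'm thinking of something like this (just a sketch, based on my reading of the File::Tail docs):

Code
# Start polling every 5 seconds and let File::Tail stretch the interval
# up to 15 seconds when the file goes quiet (adjusted after 10 checks).
tie *LOG, "File::Tail", (
    name        => "/path/to/service.log",  # placeholder path
    interval    => 5,
    maxinterval => 15,
    adjustafter => 10,
    tail        => 0,
);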


1arryb
User

Apr 2, 2009, 12:19 PM

Post #2 of 2 (1066 views)
Re: [young_matthewd] alternatives to File::Tail for performance [In reply to]

Hi matthew,

You can tail the log yourself. Take a look at Recipe 8.18 in the Perl Cookbook http://www.digital-deception.net/books/O%27Reilly%20Perl%20Cookbook.pdf (PDF) for one possible implementation. I haven't benchmarked that function, but it's probably faster than File::Tail.
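
Something along these lines (an untested sketch; the log path and poll interval are placeholders):

Code
use strict;
use warnings;

open my $fh, '<', '/path/to/service.log' or die "open: $!";
seek $fh, 0, 2;    # SEEK_END: start at the end of the file

while (1) {
    # Drain whatever has been appended since the last pass.
    while (my $line = <$fh>) {
        # process $line here (or discard it, to benchmark the tail itself)
    }
    sleep 5;           # poll interval
    seek $fh, 0, 1;    # no-op seek: clears the filehandle's EOF flag
}

Note that this simple version won't follow the file if it gets rotated; you'd have to stat and reopen it by name to get File::Tail's rollover behavior.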

However, before you write your own tailer, find out whether the problem is actually File::Tail or what your monitor program does with the tailed lines. Try breaking this down by throwing the data away right after it's read. You might also get better performance out of File::Tail by filtering the tail so that you only process the interesting messages, as shown below.
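
For example (the pattern and handler here are just placeholders):

Code
while (my $line = <LOG>) {
    next unless $line =~ /ERROR|timeout/;   # drop uninteresting lines early
    handle_line($line);                     # your real processing
}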

Finally, depending on how fast your log is growing, 5% isn't necessarily excessive. For example, if I execute:

Code
$ cat <some huge log file> | tail -f

on one of my honking-fast HP servers, it takes 5-6% of one of the CPUs. The bottom line is that monitoring busy processes can be expensive. There is always a trade-off between how much you log and monitor and system performance.

Consider lowering the log verbosity of your servers. Do you really need all of this data?

Larry

 
 

