High Performance App: How to save data? Perl, Ajax, many requests at once

Just a very short question.

I have a website that fires roughly 4-16 AJAX requests for each page request; it's a meta-search engine. I want to log the results of these AJAX requests.

What would be the best way to do that?

Some options I have in mind:

  • a) database - not so good, because connecting to the server takes a lot of time
  • b) text file - the multiple scripts running on the server at the same time (e.g. 16 processes) might corrupt the text file. It can be flock()ed, but is that sufficient?
  • c) text files - one per Perl process. However, that would create 16 files per request, and even at only ~1000 requests a day that would still mean 16k files per day.

Any ideas what would be the best way?

Short recap: AJAX request -> 16 Perl scripts run -> the results of these scripts should be saved.


If you want a high-performance application, a good approach is a persistent environment such as FastCGI. That way you have a few long-lived processes handling the responses. For the database approach, this lets you keep the connection open, which makes that option pretty fast.
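
A minimal sketch of that combination, assuming CGI::Fast under a FastCGI-capable server and a MySQL table for the results (the DSN, credentials, table name and the do_meta_search() call are placeholders):

use strict;
use warnings;
use CGI::Fast;
use DBI;

# Connect once, before the request loop; the handle stays open
# because the FastCGI process is persistent.
my $dbh = DBI->connect('dbi:mysql:dbname=search_log', 'user', 'secret',
                       { RaiseError => 1, AutoCommit => 1 });
my $sth = $dbh->prepare('INSERT INTO ajax_results (query, result) VALUES (?, ?)');

while (my $q = CGI::Fast->new) {
    my ($query, $result) = do_meta_search($q);   # placeholder for the real work
    $sth->execute($query, $result);              # no per-request connect cost
    print $q->header('application/json'), $result;
}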

For the file-based option you will need locking. I use Log::Log4perl in a locked mode like this:

use Log::Log4perl qw(:easy);

Log::Log4perl->init(\ qq{
    log4perl.logger               = DEBUG, A1
    log4perl.appender.A1          = Log::Dispatch::File::Locked
    log4perl.appender.A1.filename = requests.log
    log4perl.appender.A1.mode     = append
    log4perl.appender.A1.close_after_write = 1
    log4perl.appender.A1.layout   = Log::Log4perl::Layout::SimpleLayout
});

...
DEBUG "A message into log";
...

Note the Log::Dispatch::File::Locked appender and the close_after_write option in the configuration.


A better solution would be mod_perl under Apache together with Apache::DBI, which gives you persistent DB connections.

If you are forced to use FastCGI, you have to keep a "global" $dbh yourself.
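
A rough sketch of the mod_perl variant (DSN and credentials are placeholders): loading Apache::DBI in the server's startup file makes later DBI->connect calls return a cached per-process handle, without changing the application code.

# startup.pl, loaded once when Apache starts:
use Apache::DBI;

# later, in any handler or Registry script - the usual call,
# but the connection is now reused across requests:
use DBI;
my $dbh = DBI->connect('dbi:mysql:dbname=search_log', 'user', 'secret',
                       { RaiseError => 1 });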

Log4perl is always a good logging backend (use it), but you won't need the lock as long as you open your file in append mode rather than truncating it - unless you need a strict order. The I/O buffers of the different processes will be written in an arbitrary order, but not at the same time.

Using locks forces Perl to flush its I/O buffer on every open/close, and the other processes may have to wait - which may not be good speed-wise (I don't know for sure, though).

In normal cases you only need locks if you change existing content within the file, not when you only append.
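
A minimal sketch of the append-only approach, with a placeholder log path and one record per line; autoflush pushes each record out immediately, and flock() is only taken if you really need strict ordering:

use strict;
use warnings;
use IO::Handle;
use Fcntl qw(:flock);

sub log_result {
    my ($line) = @_;
    open my $fh, '>>', '/tmp/meta-search.log'   # placeholder path
        or die "cannot open log: $!";
    $fh->autoflush(1);       # push the record out immediately
    flock($fh, LOCK_EX);     # optional: only if strict ordering matters
    print {$fh} $line, "\n";
    flock($fh, LOCK_UN);
    close $fh;
}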


Since you said you can't use mod_perl and Apache, there's another option.

Create a server to process the log messages (outside of your web server).

Then have each of the 16 Perl scripts connect to that server - ideally on a TCP port that the server listens on, but any other form of IPC might work if you are careful about concurrency.

Each script that needs to log connects to that logging server (again, ideally on a TCP port) and sends the text to log plus which of the 16 scripts is logging (not needed if you will be merging the logs anyway).

That way, your logging server simply builds the unified log and does whatever is needed (writes to a DB, writes to a single file) while servicing each request. Given your volumes, it should be fast enough.
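
A bare-bones sketch of such a logging server with IO::Socket::INET (the port and log file name are arbitrary choices): it accepts one connection at a time and appends every line it receives to a single unified log, so only one process ever writes the file. Each of the 16 scripts would simply open a socket to this port and print its log line.

use strict;
use warnings;
use IO::Handle;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalPort => 9010,      # arbitrary port for this sketch
    Listen    => 10,
    ReuseAddr => 1,
) or die "cannot listen: $!";

open my $log, '>>', 'unified.log' or die "cannot open log: $!";
$log->autoflush(1);

# Serve clients one after another; at ~1000 requests a day this is plenty.
while (my $client = $server->accept) {
    while (my $line = <$client>) {
        print {$log} $line;
    }
    close $client;
}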

