
How can I detach a process from a CGI so I can store and read files from memory?

Is it possible to spawn a detached, daemon-like process from a CGI script that reads text files into memory, and then re-access that memory during the next CGI execution, reading the data through a pipe?

Would most hosting ISPs allow detached processes? Are memory pipes fast and easy to code and work with on a Unix/Linux system?

Is there a solution that can be done without using any extra CPAN modules? This is a CGI process, so I want to keep dependencies to a minimum.


If you absolutely want the contents of the files to be present in memory, a much simpler solution is to create a RAM disk and store them there. Then you do not have to do anything special in the CGI scripts.
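As a sketch of what that setup might look like (the mount point and size below are illustrative assumptions, and creating the mount requires root):

```perl
#!/usr/bin/perl
use warnings;
use strict;

# Illustrative one-time setup, normally run as root: mount a 16 MB tmpfs.
# Files written under $mnt live in RAM but behave like ordinary files,
# so the CGI scripts need no changes at all.
my $mnt = "/mnt/ramdisk";

mkdir $mnt or die "$0: mkdir $mnt: $!" unless -d $mnt;

system("mount", "-t", "tmpfs", "-o", "size=16m", "tmpfs", $mnt) == 0
    or die "$0: mount tmpfs exited " . ($? >> 8);
```

To make the RAM disk survive reboots, the equivalent entry would usually go in /etc/fstab instead.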


Say you have a simple resource.cgi:

#! /usr/bin/perl

use warnings;
use strict;

use Reader;
use CGI qw/ :standard /;

print header("text/plain"),
      "Contents:\n",
      Reader::data,
      "-" x 40, "\n";

Its output is

Content-Type: text/plain; charset=ISO-8859-1

Contents:
This is a data file
with some very interesting
bits.
----------------------------------------

The fun part is in Reader.pm, which begins with familiar-looking boilerplate:

package Reader;

use warnings;
use strict;

use Fcntl qw/ :DEFAULT :flock :seek /;
use POSIX qw/ setsid /;    

Next it defines its rendezvous points:

my $PIDFILE = "/tmp/reader.pid";
my $DATA    = "/tmp/file.dat";
my $PIPE    = "/tmp/reader.pipe";

The sub import is called as part of use Module. If the daemon is already running, then there's nothing to do. Otherwise, we fork off the daemon and write its process ID to $PIDFILE.

sub import {
  return unless my $fh = take_lock();

  my $child = fork;
  die "$0: fork: $!" unless defined $child;

  if ($child) {
    print $fh  "$child\n" or die "$0: write $PIDFILE: $!";
    close $fh             or die "$0: close $PIDFILE: $!";
    return;
  }

  # daemonize
  close $fh;
  chdir "/";
  open STDIN,  "<", "/dev/null";
  open STDOUT, ">", "/dev/null";
  open STDERR, ">", "/dev/null";
  setsid;

  open $fh, "<", $DATA or die "$0: open $DATA: $!";
  undef $/;
  my $data = <$fh>;
  close $fh;

  while (1) {
    open my $fh, ">", $PIPE or die "$0: open $PIPE: $!";
    print $fh $data         or die "$0: write $PIPE: $!";
    close $fh;
  }
}

Every client must wait its turn for a lock on $PIDFILE. Once we have the lock, we check whether the process it identifies is still running and create the named pipe if necessary.

sub take_lock {
  sysopen my $fh, $PIDFILE, O_RDWR | O_CREAT or die "$0: open $PIDFILE: $!";
  flock $fh => LOCK_EX                       or die "$0: flock $PIDFILE: $!";

  my $pid = <$fh>;

  if (defined $pid) {
    chomp $pid;

    if (kill 0 => $pid) {
      close $fh;
      return;
    }
  }
  else {
    die "$0: readline $PIDFILE: $!" if $!;
  }

  sysseek  $fh, 0, SEEK_SET or die "$0: sysseek $PIDFILE: $!";
  truncate $fh, 0           or die "$0: truncate $PIDFILE: $!";

  unless (-p $PIPE) {
    system("mknod", $PIPE, "p") == 0
                            or die "$0: mknod exited " . ($? >> 8);
  }

  $fh;
}

Finally, reading the pipe is trivial:

sub data {
  open my $fh, "<", $PIPE or die "$0: open $PIPE: $!";
  local $/;
  scalar <$fh>;
}

Don't forget to return a true value from the module:

1;

You'll note that operations can still fail in the daemon. For your sanity, you'll want to log events somehow rather than fail silently.
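One minimal way to do that, assuming a log path of /tmp/reader.log (the path is illustrative): during daemonization, append STDERR to the log file instead of discarding it, so that any later die or warn in the daemon leaves a trace.

```perl
#!/usr/bin/perl
use warnings;
use strict;

use IO::Handle;   # core module; provides autoflush()

# Point STDERR at a log file rather than /dev/null so daemon failures
# are recorded. The log path is an illustrative assumption.
open STDERR, ">>", "/tmp/reader.log"
    or die "$0: open /tmp/reader.log: $!";
STDERR->autoflush(1);

warn scalar(localtime), " daemon (pid $$) started\n";
```

In the daemonize block above, this would replace the line that reopens STDERR on /dev/null.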

As to whether hosts will permit long-running processes, that will vary from provider to provider, but even if your daemon is killed off from time to time, the code above will restart it on demand.


Why do you want to do this? What problem are you trying to solve? Would something like File::Map work? It memory-maps files, so the files aren't read into memory, but they act as though they are. I wrote a bit about this in "Memory-map files instead of slurping them".
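A small sketch of how File::Map is typically used (it is a CPAN module, so it would need to be installed; the file path here is illustrative):

```perl
#!/usr/bin/perl
use warnings;
use strict;

use File::Map qw/ map_file /;   # CPAN module, not in core

# Map the file read-only into $map. $map then behaves like an ordinary
# Perl string, but the kernel pages its contents in on demand rather
# than the whole file being slurped into process memory up front.
map_file my $map, "/tmp/file.dat", "<";

print length($map), " bytes mapped\n";
print $map;
```

Because the mapping is backed by the page cache, repeated CGI executions reading the same file can share the cached pages without any daemon or pipe machinery.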
