
PHP Singleton class for all requests

I have a simple problem. I use PHP on the server side and produce HTML output. My site shows the status of another server, so the flow is:

  1. The user opens www.example.com/status in the browser.
  2. The browser contacts www.example.com/status.
  3. The PHP server receives the request and asks for the status on www.statusserver.com/status.
  4. PHP receives the data, transforms it into readable HTML output and sends it back to the client.
  5. The user sees the current status.

Now I've created a singleton class in PHP which accesses the status server only every 8 seconds, so it refreshes the status every 8 seconds. If a user requests an update in between, the server returns the status stored locally (on www.example.com).

That's nice, isn't it? But then I did an easy test and opened 5 browser windows to see if it works. And here it comes: the PHP server created a singleton instance for each request. So now 5 clients each request the status from the status server every 8 seconds, which means I get 5 calls to the status server every 8 seconds instead of one!

Isn't there a way to provide only one instance to all users within an Apache server? That would solve the problem in case 1000 users connect to www.example.com/status.

Thanks for any hints.

EDIT:

I already use caching on the hard drive:

public function getFile($filename)
{
    $diff = (time() - filemtime($filename));
    //echo "diff:$diff<br/>";
    if ($diff > 8) {
        //echo 'greater than 8<br/>';
        self::updateFile($filename);
    }
    if (is_readable($filename)) {
        try {
            $returnValue = @ImageCreateFromPNG($filename);
            if ($returnValue == '') {
                sleep(1);
                return self::getFile($filename);
            } else {
                return $returnValue;
            }
        } catch (Exception $e) {
            sleep(1);
            return self::getFile($filename);
        }
    } else {
        sleep(1);
        return self::getFile($filename);
    }
}

This is the call in the singleton. I fetch a file and save it on the hard drive, but all the requests call it at the same time and all start querying the status server.

I think the only solution would be a standalone application which updates the file every 8 seconds. All requests would then just read the file and would no longer be able to update it. This standalone process could be a Perl script or something similar...
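For illustration, such an updater could be as small as the sketch below (the URL, cache path and file name are placeholders, not from my actual setup; note that cron's smallest interval is one minute, so an 8-second refresh would need a small loop or another scheduler):

<?php
// update_status.php - hypothetical standalone updater, run from cron or a loop
$statusUrl  = 'http://www.statusserver.com/status';
$targetFile = '/var/www/example.com/cache/status.png';

$data = @file_get_contents($statusUrl);
if ($data !== false) {
    // write to a temporary file first, then rename, so a reader never
    // sees a half-written file (rename() is atomic on the same filesystem)
    $tmpFile = $targetFile . '.tmp';
    if (file_put_contents($tmpFile, $data) !== false) {
        rename($tmpFile, $targetFile);
    }
}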


PHP requests are handled by different processes, and each of them has its own state; there is no resident process like in other web development frameworks. You should handle that behavior directly in your class, for instance with some caching.

The method which queries the server status should have this logic (queryStatusServer() below stands in for whatever actually fetches the status):

public function getStatus() {
    if (!$status = $cache->load()) {
        // cache miss
        $status = $this->queryStatusServer(); // placeholder: do your query here
        $cache->save($status);                // store the result in cache
    }
    return $status;
}

In this way only one request out of X will fetch the real status. The value of X depends on your cache configuration (a concrete Memcached sketch follows the list below).

Some cache libraries you can use:

  • APC
  • Memcached
  • Zend_Cache, which is just a wrapper around actual caching backends
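
For illustration only, here is roughly what the getStatus() logic above could look like with Memcached as the backend; the host, port, key name and the 8-second TTL are assumptions, not part of the original suggestion.

public function getStatus() {
    $memcached = new Memcached();
    $memcached->addServer('127.0.0.1', 11211); // assumed local memcached instance

    $status = $memcached->get('remote_status');
    if ($status === false) {
        // cache miss: query the status server and keep the result for 8 seconds
        $status = file_get_contents('http://www.statusserver.com/status');
        $memcached->set('remote_status', $status, 8);
    }
    return $status;
}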

Or you can store the result in a plain text file, check the mtime of the file on every request, and rewrite it if more than xx seconds have passed.
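A minimal sketch of that file-based variant, assuming an 8-second threshold and a placeholder cache file path:

function getCachedStatus($cacheFile = '/tmp/status.cache', $maxAge = 8) {
    // rewrite the cache file when it is missing or older than $maxAge seconds
    if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge) {
        $status = @file_get_contents('http://www.statusserver.com/status');
        if ($status !== false) {
            file_put_contents($cacheFile, $status, LOCK_EX);
        }
    }
    return file_get_contents($cacheFile);
}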

Update

Your code is pretty strange: why all those sleep calls? And why a try/catch block when ImageCreateFromPNG does not throw?

You're now asking a different question. Since PHP is not an application server and cannot store state across processes, your approach is correct. I suggest you use APC (it uses shared memory, so it would be at least 10x faster than reading a file) to share the status across the different processes. With this approach your code could become:

public function getFile($filename)
{
    $latest_update = apc_fetch('latest_update');
    if (false == $latest_update) {
        // cache expired or first request
        apc_store('latest_update', time(), 8); // 8 is the ttl in seconds
        // fetch file here and save on local storage
        self::updateFile($filename);
    }
    // here you can process the file
    return $your_processed_file;
}

With this approach the code in the if branch will be executed by two different processes only if a process is interrupted right after the if line, which should not happen because the check-and-store is almost an atomic operation.

Furthermore, if you want to guarantee that, you should use something like semaphores to handle it, but that would be an oversized solution for this kind of requirement.
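Should you need that guarantee anyway, a plain file lock is usually simpler than System V semaphores; a rough sketch (the lock file path is an assumption) wrapped around the APC check above:

$lock = fopen('/tmp/status_refresh.lock', 'c'); // hypothetical lock file
if ($lock && flock($lock, LOCK_EX | LOCK_NB)) {
    // only the process holding the lock may refresh the cached file
    if (false == apc_fetch('latest_update')) {
        apc_store('latest_update', time(), 8);
        self::updateFile($filename);
    }
    flock($lock, LOCK_UN);
}
if ($lock) {
    fclose($lock);
}

Processes that do not get the non-blocking lock simply skip the refresh and serve the file that is already there.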

Finally, IMHO 8 seconds is a small interval; I'd use something bigger, at least 30 seconds, but this depends on your requirements.


As far as I know, it is not possible in PHP. However, you can certainly serialize and cache the object instance.
Check out http://php.net/manual/en/language.oop5.serialization.php
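
For example, the instance could be persisted between requests roughly like this (the cache path, $statusInstance and the StatusSingleton class name are hypothetical):

$cacheFile = '/tmp/status_object.cache'; // hypothetical path

// at the end of one request: store the instance
file_put_contents($cacheFile, serialize($statusInstance), LOCK_EX);

// in a later request: restore it, or fall back to a fresh instance
$statusInstance = is_readable($cacheFile)
    ? unserialize(file_get_contents($cacheFile))
    : new StatusSingleton(); // hypothetical class name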
