
Logging from Django under UWSGI

I am running my Django app via the uWSGI server, starting 32 processes. The args in my init script are:

ARGS="--pidfile ${PIDFILE} --uid ${UID} -s /tmp/${NAME}.sock --pythonpath ${GCS_HOME}/server/src/gcs --master -w wsgi -d ${GCS_HOME}/logs/uwsgi.log -p 32 -z 30"

Versions are Python 2.6.5, Django 1.2.1, uWSGI 0.9.5.1.

I want to have a single log file, so I am using a multiprocessing-based log handler as described in question 641420.

The multiprocessing logging handler works fine in a simple test app that I have, and also when I run manage.py runserver_plus with Werkzeug, but nothing is logged when I run with Django and uWSGI (and I get no errors or exceptions from the uWSGI process either).
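For context, the handler from that question funnels records from every process through a multiprocessing.Queue into a single file handler; a rough sketch of the idea is below (my paraphrase, not the exact code from that answer, and the class name is illustrative):

import logging
import multiprocessing
import threading

class MultiProcessingLog(logging.Handler):
    """Collect records from many processes via a Queue and write them
    with a single FileHandler owned by the creating process."""

    def __init__(self, filename):
        logging.Handler.__init__(self)
        self._handler = logging.FileHandler(filename)
        self.queue = multiprocessing.Queue(-1)
        receiver = threading.Thread(target=self._receive)
        receiver.daemon = True
        receiver.start()

    def _receive(self):
        # Drain the queue forever, writing each record with the real handler.
        while True:
            record = self.queue.get()
            self._handler.emit(record)

    def emit(self, record):
        try:
            # Flatten args/exc_info so the record pickles cleanly across processes.
            record.msg = record.getMessage()
            record.args = None
            record.exc_info = None
            self.queue.put_nowait(record)
        except Exception:
            self.handleError(record)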

My WSGI file is below. If anyone can identify a problem with my config, or offer an explanation for what is happening, I'd be grateful:

APP_VIRTUAL_ENV = "/home/devadmin/gcs/server/gcs_env/"
APP_PARENT_PATH = "/home/devadmin/gcs/server/src/"

##                                                                              

import sys
# Redirect stdout to comply with WSGI                                           
sys.stdout = sys.stderr

import os, site

# Set the settings module django should use                                     
os.environ['DJANGO_SETTINGS_MODULE'] = "gcs.settings"

# set the sys.path                                                              
site_packages_subpath = "/lib/python%s.%s/site-packages" % (sys.version_info[0], sys.version_info[1])
site_packages_path = os.path.join(APP_VIRTUAL_ENV, site_packages_subpath[1:])

sys_path = []
for path in sys.path:
    if site_packages_subpath in path and not path.startswith(APP_VIRTUAL_ENV):
        continue
    sys_path.append(path)

sys.path = [ APP_PARENT_PATH ]
sys.path += sys_path
site.addsitedir(site_packages_path)

# reorder sys.path                                                              
for path in sys_path:
    sys.path.remove(path)
sys.path += sys_path

# setup logging                                                                 
import os.path
import logging
import logging.config
logging.config.fileConfig(os.path.join(os.path.dirname(__file__), "logging.conf"))


ANSWER HAS BEEN UPDATED - May 15, 2013 - see the bottom for an additional logging option.

If you want a single log file, use syslog and let it handle multiplexing all the inputs into one file. Having multiple processes append to a single file is ugly, even with multiprocessing's workarounds.

Aside from the advantage of thread- and process-safe 'downmixing' of the various streams of logging information, you can always send the logs to a remote host if you wish, and it makes log-file rotation a breeze: your clients write to a domain socket or a UDP socket, so they don't have to wait while you manage the files underneath them. Better yet, you won't lose messages.
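For example, pointing SysLogHandler at a remote syslog daemon is just a change of address (the hostname below is a placeholder; 514 is the conventional syslog UDP port):

from logging.handlers import SysLogHandler

# Local: write to the syslog domain socket
local_hdlr = SysLogHandler(address='/dev/log')

# Remote: ship records over UDP to a central syslog host
remote_hdlr = SysLogHandler(address=('logs.example.com', 514))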

Used in combination with a syslog daemon like syslog-ng, you can do lots of fancy slicing and dicing, message relaying, duplicate message filtering, etc.
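For example, a syslog-ng fragment that splits this application's messages out into their own file might look roughly like this (the source name and paths are placeholders; check your distribution's defaults):

filter f_mything      { facility(user) and program("mything"); };
destination d_mything { file("/var/log/mything.log"); };
log { source(s_src); filter(f_mything); destination(d_mything); };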

Long story short: syslog is better than managing your own log file (in my opinion). The best argument against syslog is that you don't 'own' the server (and, ostensibly, the log files may be off limits to you).

If you want to be super awesome, send your log data to Splunk and you'll take your game to the next level. Most folks use Splunk for IT log aggregation, but syslogging from your application into Splunk is a shortcut to powerful data-mining capabilities for understanding performance bottlenecks, usage patterns and much more.

#!/usr/bin/python

import logging
from logging.handlers import SysLogHandler

# Setup
logger = logging.getLogger("mything")
hdlr = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_USER)
logger.addHandler(hdlr)
formatter = logging.Formatter('%(name)s: %(levelname)s %(message)s')
hdlr.setFormatter(formatter)
logger.setLevel(logging.INFO)


logger.info('hello Laverne!')
logger.debug('The Great Ragu has taken ill!')

NEW CONTENT - May 15, 2013

There is an additional option worth mentioning if you have the infrastructure / tenacity to set it up: Sentry, which has client libraries available for Python (as well as JavaScript and others) and provides a centralized location for you to send errors to for monitoring. It looks neat.
