How to manage local vs production settings in Django?
What is the recommended way of handling settings for local development and the production server? Some of them (like constants, etc) can be changed/accessed in both, but some of them (like paths to static files) need to remain different, and hence should not be overwritten every time the new code is deployed.
Currently, I am adding all constants to settings.py. But every time I change some constant locally, I have to copy it to the production server and edit the file for production-specific changes... :(
Edit: looks like there is no standard answer to this question, I've accepted the most popular method.
Two Scoops of Django: Best Practices for Django 1.5 suggests using version control for your settings files and storing the files in a separate directory:
project/
    app1/
    app2/
    project/
        __init__.py
        settings/
            __init__.py
            base.py
            local.py
            production.py
    manage.py
The base.py file contains common settings (such as MEDIA_ROOT or ADMIN), while local.py and production.py have site-specific settings.
In the base settings file settings/base.py:
INSTALLED_APPS = (
    # common apps...
)
In the local development settings file settings/local.py:
from project.settings.base import *
DEBUG = True
INSTALLED_APPS += (
    'debug_toolbar',  # and other apps for local development
)
In the production settings file settings/production.py:
from project.settings.base import *
DEBUG = False
INSTALLED_APPS += (
    # other apps for production site
)
Then when you run django, you add the --settings option:
# Running django for local development
$ ./manage.py runserver 0:8000 --settings=project.settings.local
# Running django shell on the production site
$ ./manage.py shell --settings=project.settings.production
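If passing --settings to every command gets tedious, the same selection can also be made once per shell by exporting DJANGO_SETTINGS_MODULE (a small sketch; the module path just follows the layout above):
# Select the local settings once for the whole session
$ export DJANGO_SETTINGS_MODULE=project.settings.local
$ ./manage.py runserver 0:8000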
The authors of the book have also put up a sample project layout template on Github.
In settings.py:
try:
    from local_settings import *
except ImportError as e:
    pass
You can override whatever is needed in local_settings.py; it should then stay out of your version control. But since you mention copying, I'm guessing you use none ;)
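For illustration, such a local_settings.py might hold nothing more than the values that differ on your machine (the values below are only an example, not from the original answer):
# local_settings.py -- kept out of version control
DEBUG = True
TEMPLATE_DEBUG = DEBUG
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dev.sqlite3',
    }
}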
Instead of settings.py, use this layout:
.
└── settings/
    ├── __init__.py  <= not versioned
    ├── common.py
    ├── dev.py
    └── prod.py
common.py is where most of your configuration lives.
prod.py imports everything from common, and overrides whatever it needs to override:
from __future__ import absolute_import # optional, but I like it
from .common import *
# Production overrides
DEBUG = False
#...
Similarly, dev.py imports everything from common.py and overrides whatever it needs to override.
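For completeness, a minimal dev.py would simply mirror prod.py above (a sketch following the same pattern):
from __future__ import absolute_import
from .common import *

# Development overrides
DEBUG = True
#...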
Finally, __init__.py is where you decide which settings to load, and it's also where you store secrets (therefore this file should not be versioned):
from __future__ import absolute_import
from .prod import * # or .dev if you want dev
##### DJANGO SECRETS
SECRET_KEY = '(3gd6shenud@&57...'
DATABASES['default']['PASSWORD'] = 'f9kGH...'
##### OTHER SECRETS
AWS_SECRET_ACCESS_KEY = "h50fH..."
What I like about this solution is:
- Everything is in your versioning system, except secrets.
- Most configuration is in one place: common.py.
- Prod-specific things go in prod.py, dev-specific things go in dev.py. It's simple.
- You can override stuff from common.py in prod.py or dev.py, and you can override anything in __init__.py.
- It's straightforward python. No re-import hacks.
I use a slightly modified version of the "if DEBUG" style of settings that Harper Shelby posted. Obviously depending on the environment (win/linux/etc.) the code might need to be tweaked a bit.
I was in the past using the "if DEBUG" style, but I found that occasionally I needed to do testing with DEBUG set to False. What I really wanted was to distinguish whether the environment was production or development, which gave me the freedom to choose the DEBUG level.
import os

PRODUCTION_SERVERS = ['WEBSERVER1', 'WEBSERVER2',]
if os.environ['COMPUTERNAME'] in PRODUCTION_SERVERS:
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION
TEMPLATE_DEBUG = DEBUG

# ...

if PRODUCTION:
    DATABASE_HOST = '192.168.1.1'
else:
    DATABASE_HOST = 'localhost'
I'd still consider this way of handling settings a work in progress. I haven't seen any one way of handling Django settings that covered all the bases and at the same time wasn't a total hassle to set up (I'm not down with the 5x settings files methods).
I use a settings_local.py and a settings_production.py. After trying several options I've found that it's easy to waste time with complex solutions when simply having two settings files feels easy and fast.
When you use mod_python/mod_wsgi for your Django project you need to point it to your settings file. If you point it to app/settings_local.py on your local server and app/settings_production.py on your production server then life becomes easy. Just edit the appropriate settings file and restart the server (Django development server will restart automatically).
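On the WSGI side that is usually a one-line difference; a minimal wsgi.py for the production box could look like this (the app.settings_production module name simply follows the naming used above):
import os
from django.core.wsgi import get_wsgi_application

# Each server points at its own settings module
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings_production")
application = get_wsgi_application()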
TL;DR: The trick is to modify os.environ before you import settings/base.py in any settings/<purpose>.py; this will greatly simplify things.
Just thinking about all these intertwining files gives me a headache.
Combining, importing (sometimes conditionally), overriding, patching of what was already set in case the DEBUG setting changed later on.
What a nightmare!
Through the years I went through all different solutions. They all somewhat work, but are so painful to manage.
WTF! Do we really need all that hassle? We started with just one settings.py file.
Now we need documentation just to correctly combine all of these together in the right order!
I hope I finally hit the (my) sweet spot with the solution below.
Let's recap the goals (some common, some mine):
- Keep secrets a secret — don't store them in a repo!
- Set/read keys and secrets through environment settings, 12 factor style.
- Have sensible fallback defaults. Ideally for local development you don't need anything more besides the defaults.
- …but try to keep defaults production safe. It's better to miss a setting override locally than to have to remember to adjust defaults to be safe for production.
- Have the ability to switch DEBUG on/off in a way that can have an effect on other settings (eg. using javascript compressed or not).
- Switching between purpose settings, like local/testing/staging/production, should be based only on DJANGO_SETTINGS_MODULE, nothing more.
- …but allow further parameterization through environment settings like DATABASE_URL.
- …also allow them to use different purpose settings and run them locally side by side, eg. production setup on a local developer machine, to access the production database or smoke test compressed style sheets.
- Fail if an environment variable is not explicitly set (requiring an empty value at minimum), especially in production, eg. EMAIL_HOST_PASSWORD.
- Respond to the default DJANGO_SETTINGS_MODULE set in manage.py during django-admin startproject.
- Keep conditionals to a minimum; if the condition is the purposed environment type (eg. for production set the log file and its rotation), override settings in the associated purposed settings file.
Don'ts
- Do not let django read the DJANGO_SETTINGS_MODULE setting from a file. Ugh! Think of how meta this is. If you need to have a file (like a docker env), read that into the environment before starting up a django process.
- Do not override DJANGO_SETTINGS_MODULE in your project/app code, eg. based on hostname or process name. If you are too lazy to set the environment variable (like for setup.py test), do it in tooling just before you run your project code.
- Avoid magic and patching of how django reads its settings; preprocess the settings but do not interfere afterwards.
- No complicated logic-based nonsense. Configuration should be fixed and materialized, not computed on the fly. Providing fallback defaults is just enough logic here. Do you really want to debug why locally you have the correct set of settings, but in production on a remote server, on one of a hundred machines, something computed differently? Oh! Unit tests? For settings? Seriously?
Solution
My strategy consists of the excellent django-environ used with ini-style files, providing os.environ defaults for local development, and some minimal and short settings/<purpose>.py files that import settings/base.py AFTER os.environ has been set from an INI file. This effectively gives us a kind of settings injection.
The trick here is to modify os.environ before you import settings/base.py.
To see the full example go to the repo: https://github.com/wooyek/django-settings-strategy
│ manage.py
├───data
└───website
├───settings
│ │ __init__.py <-- imports local for compatibility
│ │ base.py <-- almost all the settings, reads from the process environment
│ │ local.py <-- a few modifications for local development
│ │ production.py <-- ideally is empty and everything is in base
│ │ testing.py <-- mimics production with reasonable exceptions
│ │ .env <-- for local use, not kept in repo
│ __init__.py
│ urls.py
│ wsgi.py
settings/.env
Defaults for local development. A secret file, mostly to set required environment variables.
Set them to empty values if they are not required in local development.
We provide defaults here and not in settings/base.py so that things fail on any other machine if they're missing from the environment.
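For illustration, a local settings/.env could look something like this (the values are placeholders; empty values just satisfy the "must be set" requirement):
# settings/.env -- local defaults, not kept in the repo
DEBUG=True
SECRET_KEY=local-dev-only-secret
DATABASE_URL=
EMAIL_HOST_PASSWORD=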
settings/local.py
What happens in here is loading the environment from settings/.env, then importing common settings from settings/base.py. After that we can override a few to ease local development.
import logging
import environ
logging.debug("Settings loading: %s" % __file__)
# This will read missing environment variables from a file
# We want to do this before loading the base settings as they may depend on the environment
environ.Env.read_env(DEBUG='True')
from .base import *
ALLOWED_HOSTS += [
    '127.0.0.1',
    'localhost',
    '.example.com',
    'vagrant',
]
# https://docs.djangoproject.com/en/1.6/topics/email/#console-backend
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
LOGGING['handlers']['mail_admins']['email_backend'] = 'django.core.mail.backends.dummy.EmailBackend'
# Sync task testing
# http://docs.celeryproject.org/en/2.5/configuration.html?highlight=celery_always_eager#celery-always-eager
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = True
settings/production.py
For production we should not expect an environment file, but it's easier to have one if we're testing something. But anyway, let's provide a few defaults inline, so settings/base.py can respond accordingly.
import environ
from pathlib import Path

environ.Env.read_env(str(Path(__file__).parent / "production.env"), DEBUG='False', ASSETS_DEBUG='False')
from .base import *
The main points of interest here are the DEBUG and ASSETS_DEBUG overrides; they will be applied to the python os.environ ONLY if they are MISSING from the environment and the file.
These will be our production defaults, no need to put them in the environment or file, but they can be overridden if needed. Neat!
settings/base.py
These are your mostly vanilla django settings, with a few conditionals and lots of reading from the environment. Almost everything is in here, keeping all the purposed environments consistent and as similar as possible.
The main differences are below (I hope these are self explanatory):
import os
import environ
# https://github.com/joke2k/django-environ
env = environ.Env()
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
# Where BASE_DIR is a django source root, ROOT_DIR is a whole project root
# It may differ BASE_DIR for eg. when your django project code is in `src` folder
# This may help to separate python modules and *django apps* from other stuff
# like documentation, fixtures, docker settings
ROOT_DIR = BASE_DIR
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env('SECRET_KEY')
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env('DEBUG', default=False)
INTERNAL_IPS = [
    '127.0.0.1',
]
ALLOWED_HOSTS = []
if 'ALLOWED_HOSTS' in os.environ:
    hosts = os.environ['ALLOWED_HOSTS'].split(" ")
    BASE_URL = "https://" + hosts[0]
    for host in hosts:
        host = host.strip()
        if host:
            ALLOWED_HOSTS.append(host)
SECURE_SSL_REDIRECT = env.bool('SECURE_SSL_REDIRECT', default=False)
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
if "DATABASE_URL" in os.environ: # pragma: no cover
# Enable database config through environment
DATABASES = {
# Raises ImproperlyConfigured exception if DATABASE_URL not in os.environ
'default': env.db(),
}
# Make sure we use have all settings we need
# DATABASES['default']['ENGINE'] = 'django.contrib.gis.db.backends.postgis'
DATABASES['default']['TEST'] = {'NAME': os.environ.get("DATABASE_TEST_NAME", None)}
DATABASES['default']['OPTIONS'] = {
'options': '-c search_path=gis,public,pg_catalog',
'sslmode': 'require',
}
else:
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
# 'ENGINE': 'django.contrib.gis.db.backends.spatialite',
'NAME': os.path.join(ROOT_DIR, 'data', 'db.dev.sqlite3'),
'TEST': {
'NAME': os.path.join(ROOT_DIR, 'data', 'db.test.sqlite3'),
}
}
}
STATIC_ROOT = os.path.join(ROOT_DIR, 'static')
# django-assets
# http://django-assets.readthedocs.org/en/latest/settings.html
ASSETS_LOAD_PATH = STATIC_ROOT
ASSETS_ROOT = os.path.join(ROOT_DIR, 'assets', "compressed")
ASSETS_DEBUG = env('ASSETS_DEBUG', default=DEBUG) # Disable when testing compressed file in DEBUG mode
if ASSETS_DEBUG:
    ASSETS_URL = STATIC_URL
    ASSETS_MANIFEST = "json:{}".format(os.path.join(ASSETS_ROOT, "manifest.json"))
else:
    ASSETS_URL = STATIC_URL + "assets/compressed/"
    ASSETS_MANIFEST = "json:{}".format(os.path.join(STATIC_ROOT, 'assets', "compressed", "manifest.json"))
ASSETS_AUTO_BUILD = ASSETS_DEBUG
ASSETS_MODULES = ('website.assets',)
The last bit shows the power here. ASSETS_DEBUG has a sensible default, which can be overridden in settings/production.py, and even that can be overridden by an environment setting! Yay!
In effect we have a mixed hierarchy of importance:
- settings/<purpose>.py - sets defaults based on purpose, does not store secrets
- settings/base.py - is mostly controlled by environment
- process environment settings - 12 factor baby!
- settings/.env - local defaults for easy startup
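To make the side-by-side use concrete, switching purposes locally is just a matter of the settings module name plus optional environment overrides (the module paths below assume the website.settings package from the tree above):
# Local development, using the .env defaults
$ DJANGO_SETTINGS_MODULE=website.settings.local ./manage.py runserver

# Smoke-testing the production settings on a developer machine
$ DJANGO_SETTINGS_MODULE=website.settings.production SECURE_SSL_REDIRECT=False ./manage.py check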
I manage my configurations with the help of django-split-settings.
It is a drop-in replacement for the default settings. It is simple, yet configurable. And refactoring of your existing settings is not required.
Here's a small example (file example/settings/__init__.py):
from split_settings.tools import optional, include
import os

if os.environ['DJANGO_SETTINGS_MODULE'] == 'example.settings':
    include(
        'components/default.py',
        'components/database.py',

        # This file may be missing:
        optional('local_settings.py'),

        scope=globals()
    )
That's it.
Update
I wrote a blog post about managing django's settings with django-split-settings. Have a look!
Remember that settings.py is a live code file. Assuming that you don't have DEBUG set on production (which is a best practice), you can do something like:
if DEBUG:
    STATIC_PATH = '/path/to/dev/files'
else:
    STATIC_PATH = '/path/to/production/files'
Pretty basic, but you could, in theory, go up to any level of complexity based on just the value of DEBUG - or any other variable or code check you wanted to use.
The problem with most of these solutions is that you either have your local settings applied before the common ones, or after them.
So it's impossible to override, at the same time, things like:
- the env-specific settings define the addresses for the memcached pool, and in the main settings file this value is used to configure the cache backend
- the env-specific settings add or remove apps/middleware relative to the default ones
One solution can be implemented using "ini"-style config files with the ConfigParser class. It supports multiple files, lazy string interpolation, default values and a lot of other goodies. Once a number of files have been loaded, more files can be loaded and their values will override the previous ones, if any.
You load one or more config files, depending on the machine address, environment variables and even values in previously loaded config files. Then you just use the parsed values to populate the settings.
One strategy I have successfully used has been:
- Load a default defaults.ini file
- Check the machine name, and load all files which match the reversed FQDN, from the shortest match to the longest match (so, I loaded net.ini, then net.domain.ini, then net.domain.webserver01.ini, each one possibly overriding values of the previous). This accounts also for developers' machines, so each one could set up its preferred database driver, etc. for local development
- Check if there is a "cluster name" declared, and in that case load cluster.cluster_name.ini, which can define things like database and cache IPs
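A minimal sketch of that loading order (the section and option names here are illustrative, not from the original setup):
import configparser
import socket

parser = configparser.ConfigParser()

# Build the candidate list: defaults first, then ever-more-specific reversed-FQDN files
parts = socket.getfqdn().split('.')[::-1]  # e.g. ['net', 'domain', 'webserver01']
candidates = ['defaults.ini'] + ['.'.join(parts[:i + 1]) + '.ini' for i in range(len(parts))]

# read() silently skips missing files; later files override earlier ones
parser.read(candidates)

# Populate Django settings from the parsed values
DEBUG = parser.getboolean('django', 'debug', fallback=False)
DATABASE_HOST = parser.get('database', 'host', fallback='localhost')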
As an example of something you can achieve with this, you can define a "subdomain" value per-env, which is then used in the default settings (as hostname: %(subdomain).whatever.net) to define all the necessary hostnames and cookie things django needs to work.
This is as DRY as I could get: most (existing) files had just 3 or 4 settings. On top of this I had to manage customer configuration, so an additional set of configuration files (with things like database names, users and passwords, assigned subdomain etc.) existed, one or more per customer.
One can scale this as low or as high as necessary; you just put in the config file the keys you want to configure per-environment, and once there's a need for a new config, put the previous value in the default config and override it where necessary.
This system has proven reliable and works well with version control. It has been used for a long time managing two separate clusters of applications (15 or more separate instances of the django site per machine), with more than 50 customers, where the clusters were changing size and members depending on the mood of the sysadmin...
I am also working with Laravel and I like the implementation there. I tried to mimic it and combine it with the solution proposed by T. Stone (look above):
import re
import socket

PRODUCTION_SERVERS = ['*.webfaction.com', '*.whatever.com',]

def check_env():
    for item in PRODUCTION_SERVERS:
        match = re.match(r"(^." + item + "$)", socket.gethostname())
        if match:
            return True

if check_env():
    PRODUCTION = True
else:
    PRODUCTION = False

DEBUG = not PRODUCTION
Maybe something like this would help you.
My solution to that problem is also somewhat of a mix of some solutions already stated here:
- I keep a file called local_settings.py that has the content USING_LOCAL = True in dev and USING_LOCAL = False in prod
- In settings.py I do an import on that file to get the USING_LOCAL setting
I then base all my environment-dependent settings on that one:
DEBUG = USING_LOCAL

if USING_LOCAL:
    # dev database settings, for example:
    DATABASE_HOST = 'localhost'
else:
    # prod database settings, for example:
    DATABASE_HOST = '192.168.1.1'
I prefer this to having two separate settings.py files that I need to maintain, as I can keep my settings structured in a single file more easily than having them spread across several files. Like this, when I update a setting I don't forget to do it for both environments.
Of course every method has its disadvantages and this one is no exception. The problem here is that I can't overwrite the local_settings.py file whenever I push my changes into production, meaning I can't just copy all files blindly, but that's something I can live with.
For most of my projects I use the following pattern:
- Create settings_base.py where I store settings that are common for all environments
- Whenever I need to use a new environment with specific requirements, I create a new settings file (eg. settings_local.py) which inherits the contents of settings_base.py and overrides/adds the proper settings variables (from settings_base import *)
(To run manage.py with a custom settings file you simply use the --settings command option: manage.py <command> --settings=settings_you_wish_to_use)
1 - Create a new folder inside your app and name it settings.
2 - Now create a new __init__.py file in it, and inside it write:
from .base import *

try:
    from .local import *
except ImportError:
    pass

try:
    from .production import *
except ImportError:
    pass
3 - Create three new files in the settings folder named local.py, production.py, and base.py.
4 - Inside base.py, copy all the content of your previous settings.py file, and rename the old file to something different, let's say old_settings.py.
5 - In base.py, change your BASE_DIR path to point to the new location of the settings:
Old path ->
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
New path ->
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
This way, the project directory stays structured and manageable across production and local development.
I use a variation of what jpartogi mentioned above, that I find a little shorter:
import platform
from django.core.management import execute_manager

computername = platform.node()

try:
    settings = __import__(computername + '_settings')
except ImportError:
    import sys
    sys.stderr.write("Error: Can't find the file '%r_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % (computername, __file__))
    sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)
Basically on each computer (development or production) I have the appropriate hostname_settings.py file that gets dynamically loaded.
There is also Django Classy Settings. I personally am a big fan of it. It's built by one of the most active people on the Django IRC. You would use environment vars to set things.
http://django-classy-settings.readthedocs.io/en/latest/
Making multiple versions of settings.py is an anti-pattern for the 12 Factor App methodology. Use python-decouple or django-environ instead.
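For example, with python-decouple a single settings.py reads everything environment-specific from environment variables or a .env file (a minimal sketch; the setting names are just the usual suspects):
from decouple import config, Csv

# Values come from real environment variables or a .env file next to manage.py
SECRET_KEY = config('SECRET_KEY')
DEBUG = config('DEBUG', default=False, cast=bool)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', default='127.0.0.1', cast=Csv())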
In order to use a different settings configuration for each environment, create different settings files. Then, in your deployment script, start the server using the --settings=<my-settings.py> parameter, via which you can use different settings in different environments.
Benefits of using this approach:
- Your settings will be modular based on each environment
- You may import the master_settings.py containing the base configuration in the environment_configuration.py and override the values that you want to change in that environment.
- If you have a huge team, each developer may have their own local_settings.py which they can add to the code repository without any risk of modifying the server configuration. You can add these local settings to .gitignore if you use git or .hgignore if you use Mercurial for version control (or any other). That way local settings won't even be part of the actual code base, keeping it clean.
I had my settings split as follows
settings/
|
|- base.py
|- dev.py
|- prod.py
We have 3 environments:
- dev
- staging
- production
Now, obviously, staging and production should have environments that are as similar as possible. So we kept prod.py for both.
But there was a case where I had to identify whether the running server is a production server. @T. Stone's answer helped me write the check as follows.
from socket import gethostname, gethostbyname

PROD_HOSTS = ["webserver1", "webserver2"]

DEBUG = False
ALLOWED_HOSTS = [gethostname(), gethostbyname(gethostname()),]

if any(host in PROD_HOSTS for host in ALLOWED_HOSTS):
    SESSION_COOKIE_SECURE = True
    CSRF_COOKIE_SECURE = True
I differentiate it in manage.py and created two separate settings files: local_settings.py and prod_settings.py.
In manage.py I check whether the server is a local server or a production server. If it is a local server it loads local_settings.py, and if it is a production server it loads prod_settings.py. Basically this is how it looks:
#!/usr/bin/env python
import sys
import socket
from django.core.management import execute_manager

ipaddress = socket.gethostbyname(socket.gethostname())

if ipaddress == '127.0.0.1':
    try:
        import local_settings  # Assumed to be in the same directory.
        settings = local_settings
    except ImportError:
        sys.stderr.write("Error: Can't find the file 'local_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file local_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)
else:
    try:
        import prod_settings  # Assumed to be in the same directory.
        settings = prod_settings
    except ImportError:
        sys.stderr.write("Error: Can't find the file 'prod_settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file prod_settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
        sys.exit(1)

if __name__ == "__main__":
    execute_manager(settings)
I found it easier to separate the settings into two separate files instead of doing lots of ifs inside a single settings file.
As an alternative to maintaining different files, if you will: if you are using git or any other VCS to push code from local to the server, what you can do is add the settings file to .gitignore.
This will allow you to have different content in both places without any problem. So on the server you can configure an independent version of settings.py, and any changes made locally won't reflect on the server and vice versa.
In addition, it will remove the settings.py file from github as well, a big mistake which I have seen many newbies make.
I think the best solution is suggested by @T. Stone, but I don't know why they just don't use the DEBUG flag in Django. I wrote the code below for my website:
if DEBUG:
    from .local_settings import *
Simple solutions are always better than complex ones.
I found the responses here very helpful. (Has this been more definitively solved? The last response was a year ago.) After considering all the approaches listed, I came up with a solution that I didn't see listed here.
My criteria were:
- Everything should be in source control. I don't like fiddly bits lying around.
- Ideally, keep settings in one file. I forget things if I'm not looking right at them :)
- No manual edits to deploy. Should be able to test/push/deploy with a single fabric command.
- Avoid leaking development settings into production.
- Keep as close as possible to the "standard" (*cough*) Django layout.
I thought switching on the host machine made some sense, but then figured the real issue here is different settings for different environments, and had an aha moment. I put this code at the end of my settings.py file:
try:
    os.environ['DJANGO_DEVELOPMENT_SERVER']  # throws an error if unset
    DEBUG = True
    TEMPLATE_DEBUG = True
    # This is naive but possible. Could also redeclare the full app set to control ordering.
    # Note that it requires a list rather than the generated tuple.
    INSTALLED_APPS.extend([
        'debug_toolbar',
        'django_nose',
    ])
    # Production database settings, alternate static/media paths, etc...
except KeyError:
    print 'DJANGO_DEVELOPMENT_SERVER environment var not set; using production settings'
This way, the app defaults to production settings, which means you are explicitly "whitelisting" your development environment. It is much safer to forget to set the environment variable locally than if it were the other way around and you forgot to set something in production and let some dev settings be used.
When developing locally, either from the shell or in a .bash_profile or wherever:
$ export DJANGO_DEVELOPMENT_SERVER=yep
(Or if you're developing on Windows, set it via the Control Panel or whatever it's called these days... Windows always made it obscure where you can set environment variables.)
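From a Windows command prompt, setx does roughly the same thing and persists the variable for future sessions:
> setx DJANGO_DEVELOPMENT_SERVER yep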
With this approach, the dev settings are all in one (standard) place, and simply override the production ones where needed. Any mucking around with development settings should be completely safe to commit to source control with no impact on production.