Avoiding "MySQL server has gone away" on infrequently used Python / Flask server with SQLAlchemy
How can Flask / SQLAlchemy be configured to create a new database connection if one is not present?
I have an infrequently visited Python / Flask server which uses SQLAlchemy. It gets visited every couple of days, and on the first visit it often throws a "MySQL server has gone away" error. Subsequent page views are fine, but it looks unprofessional to have this initial error.
I'd like to know the correct way to handle this - advice like "make a really long timeout", which would be about 4 days long in this case, doesn't seem correct. How can I test for the lack of a database connection and create one if needed?
I've had trouble with this before, and found that the way to handle it is by not keeping sessions around. The trouble is you are trying to keep a connection open for way too long. Instead, use a thread-local scoped session, created either in __init__.py or in a utility package that you import everywhere:
from sqlalchemy.orm import scoped_session, sessionmaker
Session = scoped_session( sessionmaker() )
Then set up your engines and metadata once. This allows you to skip configuration mechanics every time you connect/disconnect. After that, you can do your db work like this:
session = Session()
someObject = session.query( someMappedClass ).get( someId )
# use session like normal ...
session.close()
If you want to hold on to old objects and you don't want to leave your session open, then you can use the above pattern and reuse old objects like this:
session = Session()
someObject = session.merge( someObject )
# more db stuff
session.close()
The point is, you want to open your session, do your work, then close your session. This avoids timeouts very well. There are lots of options for .merge and .add that let you either keep changes you've made to detached objects or load fresh data from the db. The docs are verbose, but once you know what you're looking for they're a little easier to navigate.
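The open-work-close pattern above can be sketched end to end. This is a runnable illustration only: it uses an in-memory SQLite database in place of MySQL so it needs no server, and the User model is hypothetical, not from the original post.

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import scoped_session, sessionmaker, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# In-memory SQLite stands in for MySQL here so the sketch is self-contained.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))

# Open, work, close.
session = Session()
session.add(User(id=1, name="alice"))
session.commit()
session.close()

# Later: re-attach a detached object to a fresh session with merge().
session = Session()
user = session.merge(User(id=1, name="alice"))
print(user.name)  # prints "alice"
session.close()
```

Because no session (and therefore no connection) stays open between the two blocks, there is nothing left to time out.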
To actually prevent MySQL from "going away", you need to stop your connection pool from keeping connections open too long and handing you a stale one.
To get a fresh connection, set the pool_recycle option in your create_engine call. Set pool_recycle to the maximum number of seconds a connection may sit idle in the pool; at checkout, any connection older than that is discarded and a new one is created in its place.
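A minimal sketch of such a create_engine call. The 280-second value is illustrative (pick something below your server's wait_timeout), and SQLite is used only so the snippet runs without a database server; with MySQL the URL would look like "mysql+pymysql://user:pw@host/db".

```python
from sqlalchemy import create_engine

# Connections that have sat in the pool for more than 280 seconds are
# discarded at checkout and replaced with fresh ones.
engine = create_engine("sqlite://", pool_recycle=280)
```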
I had a similar issue, but for me I'd get the 'MySQL has gone away' error somewhere between 5 minutes and 2 hours into each session.
I'm using Flask-SQLAlchemy, which is supposed to close idle connections, but it didn't seem to do so unless the connection had been idle for more than a couple of hours.
Eventually I narrowed it down to the following Flask-SQLAlchemy settings:
app.config['SQLALCHEMY_POOL_SIZE'] = 100
app.config['SQLALCHEMY_POOL_RECYCLE'] = 280
The default settings for these are 10 and 7200 (2 hours) respectively.
It's a matter of playing around with these settings to fit your environment.
For example, I'd read in many places that SQLALCHEMY_POOL_RECYCLE should be set to 3600, but that didn't work for me. I'm hosting with PythonAnywhere and they kill idle MySQL connections after 5 minutes (300 seconds). So setting my value to less than 300 solved the problem.
I hope this helps others, because I wasted WAY too much time on this issue.
http://flask-sqlalchemy.pocoo.org/2.1/config/#configuration-keys
UPDATE: 2019-OCT-08
The configuration keys 'SQLALCHEMY_POOL_SIZE' and 'SQLALCHEMY_POOL_RECYCLE' are deprecated as of Flask-SQLAlchemy v2.4 and will be removed in v3.0. Use 'SQLALCHEMY_ENGINE_OPTIONS' to set the corresponding values.
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_size': 100, 'pool_recycle': 280}
2018 answer: In SQLAlchemy v1.2.0+, you have the connection pool pre-ping feature available to address this issue of "MySQL server has gone away".
Connection pool pre-ping - The connection pool now includes an optional "pre ping" feature that will test the "liveness" of a pooled connection for every connection checkout, transparently recycling the DBAPI connection if the database is disconnected. This feature eliminates the need for the "pool recycle" flag as well as the issue of errors raised when a pooled connection is used after a database restart.
Pessimistic testing of connections upon checkout is possible with the new argument:
engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
If you are using Flask-SQLAlchemy:
Seems like a fix is available: https://github.com/mitsuhiko/flask-sqlalchemy/issues/2
Sadly, the default installation (pip install flask-sqlalchemy) doesn't apply the patch properly yet, especially on this issue: https://github.com/e-dard/flask-sqlalchemy/commit/cf659f346e005d34257d256fa4c42889741fc31f
Getting the latest version from github should fix it.
The pessimistic approach described by @wim (pool_pre_ping=True) can now be enabled for Flask-SQLAlchemy using a config variable:
SQLALCHEMY_POOL_PRE_PING = True
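With Flask-SQLAlchemy 2.4+, engine-level options such as pool_pre_ping can also be passed through SQLALCHEMY_ENGINE_OPTIONS. A hedged config fragment (app is assumed to be your Flask app):

```python
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_pre_ping': True,  # test each pooled connection for liveness at checkout
}
```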
When I encountered this error I was storing a LONGBLOB / LargeBinary image ~1MB in size. I had to adjust the max_allowed_packet config setting in MySQL.
I used mysqld --max-allowed-packet=16M
If you use a Pool directly, you should set recycle to less than the database's wait_timeout. My wait_timeout was 60, so I set recycle to 50:
from sqlalchemy.pool import QueuePool
# self.get_connection is a zero-argument callable returning a new DBAPI connection
pool = QueuePool(self.get_connection, max_overflow=0, pool_size=40, recycle=50)