C3P0 + MySQL : Got an error reading communication packets
An error many have encountered is related to the message:
[Warning] Aborted connection 38 to db: 'database_name' user:
'root' host: 'localhost' (Got an error reading communication packets)
which is found in the MySQL logs. In my case, the database is accessed locally through a Java client using the driver com.mysql.jdbc.Driver
and the well-known C3P0 connection pool. My MySQL server is configured to accept quite a lot of connections, and the max_allowed_packet value is set to 64M. Here is an excerpt from my my.cnf file (the MySQL configuration):
[mysqld]
max_allowed_packet = 64M
thread_concurrency = 8
thread_cache_size = 8
thread_stack = 192K
query_cache_size = 0
query_cache_type = 0
max_connections = 1024
back_log = 50
innodb_thread_concurrency = 6
innodb_lock_wait_timeout = 120
log_warnings
and
[mysqldump]
quick
quote-names
max_allowed_packet = 64M
The table User
in my database has the following simple structure:
CREATE TABLE `User` (
`uid` varchar(255) COLLATE utf8_bin NOT NULL,
`name` varchar(255) COLLATE utf8_bin DEFAULT NULL,
`mail` varchar(255) COLLATE utf8_bin DEFAULT NULL,
`password` varchar(255) COLLATE utf8_bin DEFAULT NULL,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`maxParallelTasks` tinyint(4) NOT NULL DEFAULT '10',
`maxModels` int(11) NOT NULL DEFAULT '2000',
`maxBibTeX` int(11) NOT NULL DEFAULT '2000',
PRIMARY KEY (`uid`) USING BTREE,
UNIQUE KEY `mail` (`mail`) USING BTREE,
KEY `uid` (`uid`) USING BTREE,
KEY `index_user_name` (`name`) USING BTREE,
KEY `index_user_mail` (`mail`) USING BTREE,
KEY `index_user_maxModels` (`maxModels`) USING BTREE,
KEY `index_user_maxBibTeX` (`maxBibTeX`) USING BTREE,
KEY `index_user_maxParallelTasks` (`maxParallelTasks`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin
When I INSERT
a value into that table using the mysql client (i.e. mysql -u root -p
), I get no warnings in the logs. However, when I try the same thing from the Java side, the aforementioned warning appears in the logs many times! On the Java side, my connections are drawn from a C3P0 connection pool, which is configured as follows:
datasource = new ComboPooledDataSource();
datasource.setJdbcUrl(connectURI);
datasource.setMaxPoolSize(1000);
datasource.setMinPoolSize(20);
datasource.setInitialPoolSize(50);
datasource.setNumHelperThreads(6);
datasource.setTestConnectionOnCheckin(true);
datasource.setTestConnectionOnCheckout(true);
The statement is first prepared:
PreparedStatement ps = connection.prepareStatement(getSql());
then parametrized:
ps.setString(1, user.getUid());
ps.setString(2, user.getName());
ps.setString(3, user.getMail());
ps.setString(4, user.getHashedPass());
and then executed:
int update = ps.executeUpdate();
The prepared statement is then closed:
ps.close();
and the SQL connection is closed:
if (connection != null) {
try {
if (!connection.isClosed()) {
connection.close();
}
} catch (SQLException ex) {
throw new DbException(ex);
}
}
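Putting the pieces together, the flow is roughly the following sketch (it reuses the datasource, user and DbException from the snippets above; the INSERT text here is only inferred from the User table, since in the real code the SQL comes from getSql()):
// Acquire a pooled connection, prepare and execute the statement, then
// close resources in reverse order of acquisition (statement first, then connection).
Connection connection = null;
PreparedStatement ps = null;
try {
    connection = datasource.getConnection();
    ps = connection.prepareStatement(
            "INSERT INTO User (uid, name, mail, password) VALUES (?, ?, ?, ?)");
    ps.setString(1, user.getUid());
    ps.setString(2, user.getName());
    ps.setString(3, user.getMail());
    ps.setString(4, user.getHashedPass());
    int update = ps.executeUpdate();
} catch (SQLException ex) {
    throw new DbException(ex);
} finally {
    try {
        if (ps != null) {
            ps.close();
        }
        if (connection != null && !connection.isClosed()) {
            connection.close();
        }
    } catch (SQLException ex) {
        throw new DbException(ex);
    }
}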
The overall procedure seems (to me) to be legal. According to the MySQL manual at http://dev.mysql.com/doc/refman/5.0/en/communication-errors.html, this warning might mean that:
The client program did not call mysql_close() before exiting.
or that:
The client program ended abruptly in the middle of a data transfer.
For the purposes of my application, I have used C3P0 to write to and read from this database with more than 200 parallel threads. I have no memory leaks, and I have verified that the data are actually written to and retrieved from the database normally. Does anyone else have similar experience?
Finally, I include my MySQL version as well as other information that might be useful for troubleshooting:
mysql> show variables like "%version%";
+-------------------------+-------------------+
| Variable_name | Value |
+-------------------------+-------------------+
| protocol_version | 10 |
| version | 5.1.37-1ubuntu5.5 |
| version_comment | (Ubuntu) |
| version_compile_machine | x86_64 |
| version_compile_os | debian-linux-gnu |
+-------------------------+-------------------+
and
Java version: 1.6.0_22, (build 1.6.0_22-b04)
Well, it is good to see a question that carries just about the right amount of information that would hint at the possible places where the system would be encountering issues. Here's what I would be checking:
- Attempt closing the PreparedStatement object before closing the Connection object, and verify whether this resolves the problem. This might sound unnecessary, but it is required with certain JDBC drivers, most notably Oracle's (and could also be the case with MySQL's). The rationale is that the Connection object isn't really closed if derived objects like the ResultSet and Statement objects haven't been closed first (in that order). In part, this is due to the manner in which the JDBC driver was written.
- Verify that the machine(s) involved can actually handle the load you want them to. If the underlying network infrastructure simply cannot handle the specified number of connections, then it is quite possible that connections are being dropped. You might want to look at a Wireshark dump and draw inferences from it, if the first recommendation doesn't help.
- Use the debugUnreturnedConnectionStackTraces flag to detect any connection pool leakages. This also requires that unreturnedConnectionTimeout be positive. The flag ensures that a stack trace is provided whenever the application does not return a connection to the pool; the stack trace indicates the point in the code where the connection was originally reserved (see the sketch after this list).
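For reference, those two C3P0 properties could be set on the pool like this (a sketch only; the 30-second timeout is just an example value):
// Detect connections that are never returned to the pool: if a connection
// is checked out for more than unreturnedConnectionTimeout seconds, C3P0
// destroys it and logs the stack trace of the original checkout.
datasource.setUnreturnedConnectionTimeout(30);
datasource.setDebugUnreturnedConnectionStackTraces(true);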
Trying to probe deeper into the cause of this warning...
First, I noticed that the number of warning messages in my log file is equal to, or very close to, the minimum pool size specified by datasource.setMinPoolSize(int);
for the sake of testing, the initial pool size was always set equal to the minimum size.
Furthermore, stepping through the code with the debugger, I observed that the warnings are logged as soon as the unit tests finish. These facts led me to the conclusion that the connections pooled by C3P0 are in fact not closed before being discarded. What is needed is to close the datasource once it will no longer be used, by invoking the method com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource#close()
. Of course it makes sense to do so only at the end of the unit test, so in my case the amendment looks like this:
@AfterClass
public static void tearDownClass() throws Exception {
// Close the pool
DataSourceFactory.getInstance().close();
}
In my application, I added a shutdown hook (sketched at the end of this post) so that the datasource closes before execution stops. Before that, the pooled connections were being left intact and the driver had to close them, which caused the warnings in the MySQL log. This solved the problem for all INSERT operations but, unfortunately, not for SELECT ones. Note that I close all result sets, statements, prepared statements and connections, and I have checked my source code using FindBugs (a static analysis tool). The fact that a workaround is needed makes me suspect that something is wrong with C3P0 again, but I can't be sure. So, the following makes the warnings disappear...
@AfterClass
public static void tearDownClass() throws Exception {
Thread.sleep(100000); // wait for 100 secs
DataSourceFactory.getInstance().close();
}
Note that until the period of 100 seconds ends (and the datasource factory closes), no warnings appear in the logs! So it is datasource.close()
that causes them...
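For completeness, the shutdown hook mentioned above can be registered along these lines (a sketch, assuming the same DataSourceFactory singleton whose close() shuts down the pool):
// Register a JVM shutdown hook that closes the C3P0 pool, so pooled
// connections are released cleanly instead of being dropped by the driver.
Runtime.getRuntime().addShutdownHook(new Thread() {
    @Override
    public void run() {
        try {
            DataSourceFactory.getInstance().close();
        } catch (Exception ex) {
            // the JVM is exiting anyway; nothing more to do
        }
    }
});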