One cause of the Druid connection pool failing to obtain connections

Date: 2023-03-09 07:45:59

The data source was initially configured as:

jdbc.initialSize=1
jdbc.minIdle=1
jdbc.maxActive=5
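
Here initialSize is the number of physical connections created when the pool starts, minIdle the minimum number kept idle, and maxActive the upper bound on connections that may be checked out at the same time.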

After the program had been running for a while, queries started throwing the following exception:

exception=org.mybatis.spring.MyBatisSystemException: nested exception is org.apache.ibatis.exceptions.PersistenceException:
### Error querying database. Cause: org.springframework.jdbc.CannotGetJdbcConnectionException: Could not get JDBC Connection; nested exception is com.alibaba.druid.pool.GetConnectionTimeoutException: wait millis 60000, active 5, maxActive 5
### The error may exist in ...
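
The key part of the message is active 5, maxActive 5: all five connections in the pool were already checked out, and getConnection() gave up after waiting 60000 ms (Druid's maxWait) for one to come back.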

Suspecting that the maximum pool size was too small, I changed the configuration to:

jdbc.initialSize=1
jdbc.minIdle=1
jdbc.maxActive=20

After running for a while, the same exception appeared again. I then added the following parameters to the datasource configuration (see http://www.cnblogs.com/netcorner/p/4380949.html):

<!-- whether to reclaim connections held past the time limit -->
<property name="removeAbandoned" value="true" />
<!-- timeout in seconds; 180 seconds = 3 minutes -->
<property name="removeAbandonedTimeout" value="180" />
<!-- log an error when an abandoned connection is closed -->
<property name="logAbandoned" value="true" />
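
With these settings, Druid's destroy thread forcibly reclaims any connection that has been checked out for longer than removeAbandonedTimeout, and logAbandoned makes it log the stack trace that was captured when the connection was borrowed, which is what pinpoints the leaking code below.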

Soon after, the following error appeared in the log:

2016-12-27 14:35:22.773 [Druid-ConnectionPool-Destroy-1821010113] ERROR com.alibaba.druid.pool.DruidDataSource - abandon connection, owner thread: quartzScheduler_Worker-1, connected time nano: 506518587214834, open stackTrace
at java.lang.Thread.getStackTrace(Thread.java:1552)
at com.alibaba.druid.pool.DruidDataSource.getConnectionDirect(DruidDataSource.java:1014)
at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4544)
at com.alibaba.druid.filter.stat.StatFilter.dataSource_getConnection(StatFilter.java:662)
at com.alibaba.druid.filter.FilterChainImpl.dataSource_connect(FilterChainImpl.java:4540)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:938)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:930)
at com.alibaba.druid.pool.DruidDataSource.getConnection(DruidDataSource.java:102)
at com.xxx.doCopyIn(AppQustionSync.java:168)
at com.xxx.executeInternal(AppQustionSync.java:99)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:75)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

The frames in com.xxx (AppQustionSync.java:168, originally highlighted in red) show where the unreleased connection was acquired. Looking at that code:

try {
    InputStream input = new FileInputStream(dataFile);
    conn = copyInDataSource.getConnection();
    baseConn = (BaseConnection) conn.getMetaData().getConnection();
    baseConn.setAutoCommit(false);
    stmt = baseConn.createStatement();
    LOG.info("delete data: " + delSql);
    stmt.executeUpdate(delSql);
    CopyManager copyManager = new CopyManager(baseConn);
    LOG.info("copy in: " + copyInSql);
    copyManager.copyIn(copyInSql, input);
    baseConn.commit();
    jobDataMap.remove(data_file_path_key);
} catch (SQLException e) {
    try {
        LOG.warn(JobDataMapHelper.jobName(jobDataMap) + ":"
                + "batch update job failed, rolling back...");
        baseConn.rollback();
    } catch (SQLException ex) {
        String errorMessage = String.format(
                "JobName:[%s] failed: ",
                JobDataMapHelper.jobName(jobDataMap));
        throw new SQLException(errorMessage, ex);
    }
} finally {
    stmt.close();
    baseConn.close();
    jobDataMap.remove(data_file_path_key);
    FileUtils.deleteQuietly(dataFile);
}

Here baseConn is an underlying PostgreSQL connection extracted from a connection handed out by the Druid data source. After the code above runs, the finally block calls baseConn.close(), which closes that physical connection. My guess is that Druid knows nothing about this close: the pool still believes it owns the connection, even though the connection is actually unusable. Run this code enough times and all of the pool's connections are used up, at which point it starts throwing the Could not get JDBC Connection error.
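
To make the two close() calls explicit, here is a minimal sketch of the objects involved, using the same unwrapping trick as the job code above (copyInDataSource and the cast to BaseConnection come from that excerpt; the method wrapper is only for illustration):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.postgresql.core.BaseConnection;

static void demo(DataSource copyInDataSource) throws SQLException {
    // conn is a pool-managed proxy (a DruidPooledConnection)
    Connection conn = copyInDataSource.getConnection();
    // the raw PostgreSQL connection behind the proxy
    BaseConnection baseConn = (BaseConnection) conn.getMetaData().getConnection();

    baseConn.close(); // closes the physical socket; the pool is never told, so the slot stays "active"
    conn.close();     // what the pool expects: releases the slot so it can be handed out again
}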

Changing the finally block in the job code to:

finally {
    stmt.close();
    conn.close();
    jobDataMap.remove(data_file_path_key);
    FileUtils.deleteQuietly(dataFile);
}

That is, call close() only on the Druid connection, which merely releases the connection back to the pool rather than closing the underlying physical connection. This solved the problem.
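
One further note on the fixed version: if stmt.close() throws, conn.close() is skipped and the connection leaks anyway. A slightly more defensive variant of the same cleanup (a sketch, reusing the names from the excerpt above):

finally {
    if (stmt != null) {
        try { stmt.close(); } catch (SQLException ignore) { /* best effort */ }
    }
    if (conn != null) {
        try { conn.close(); } catch (SQLException ignore) { /* returns the connection to the pool */ }
    }
    jobDataMap.remove(data_file_path_key);
    FileUtils.deleteQuietly(dataFile);
}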