
Quartz & Spring - Clustered but NOT Persistent?

In my Spring application I'm using the SchedulerFactoryBean to integrate with Quartz. We're going to have clustered Tomcat instances, and thus I want to have a clustered Quartz environment, so that the same jobs don't run at the same time on different web servers.

To do this, my app-context.xml is as follows:

<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <property name="triggers">
        <list>
            <ref bean="cronTrigger"/>
            <ref bean="simpleTrigger" />
        </list>
    </property>
    <property name="dataSource" ref="dataSource"/>
    <property name="overwriteExistingJobs" value="true"/>
    <!-- found in applicationContext-data.xml -->
    <property name="applicationContextSchedulerContextKey" value="applicationContext"/>
    <property name="quartzProperties">
        <props>
            <prop key="org.quartz.scheduler.instanceName">SomeBatchScheduler</prop>
            <prop key="org.quartz.scheduler.instanceId">AUTO</prop>
            <prop key="org.quartz.jobStore.misfireThreshold">60000</prop>
            <!--<prop key="org.quartz.jobStore.class">org.quartz.simpl.RAMJobStore</prop>-->
            <prop key="org.quartz.jobStore.class">org.quartz.impl.jdbcjobstore.JobStoreTX</prop>
            <prop key="org.quartz.jobStore.driverDelegateClass">org.quartz.impl.jdbcjobstore.StdJDBCDelegate</prop>
            <prop key="org.quartz.jobStore.tablePrefix">QRTZ_</prop>
            <prop key="org.quartz.jobStore.isClustered">true</prop>
            <prop key="org.quartz.threadPool.class">org.quartz.simpl.SimpleThreadPool</prop>
            <prop key="org.quartz.threadPool.threadCount">25</prop>
            <prop key="org.quartz.threadPool.threadPriority">5</prop>
        </props>
    </property>
</bean>

Everything works well, except that when I remove or change a trigger and then restart my app, the old triggers are still persisted in the DB and still run. I don't want this; I just want them to be deleted when the app stops (or is restarted). I set the overwriteExistingJobs property to true, since I thought that's what it did.

Any ideas? All I want to use the DB for is clustering, not any sort of persistence beyond that.


I have done some research on the topic, and this is a well-known bug in Quartz; I found a few posts about it on their forum. To work around it, I created a bean that deletes all the records in the Quartz tables. You can call this bean before your Quartz bean is loaded (add a depends-on to your scheduler bean), when your Spring context is being destroyed (make sure the DB connection pool is still open), or manually through some form of UI. There is also a bug with job groups, so don't be surprised. My first fix was to build a custom Quartz jar with the fix applied, but that made it hard to upgrade whenever a new version was released (I was using 1.4 or 1.5 at the time; I don't really remember).
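For illustration only, here is a minimal sketch of what such a cleanup bean might look like, assuming Spring's JdbcTemplate and the default QRTZ_ table prefix from the question. The class name and the exact table list are my own, not from the original answer, and the QRTZ_LOCKS rows are left alone because the clustered job store relies on them:

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

/**
 * Hypothetical cleanup bean: wipes the Quartz job-store tables so that
 * jobs/triggers persisted by a previous deployment are not picked up again.
 */
public class QuartzTableCleaner {

    private final JdbcTemplate jdbcTemplate;

    public QuartzTableCleaner(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public void clean() {
        // Child tables first, so foreign keys into QRTZ_TRIGGERS / QRTZ_JOB_DETAILS don't fail.
        String[] tables = {
                "QRTZ_FIRED_TRIGGERS",
                "QRTZ_SIMPLE_TRIGGERS",
                "QRTZ_CRON_TRIGGERS",
                "QRTZ_BLOB_TRIGGERS",
                "QRTZ_TRIGGERS",
                "QRTZ_JOB_DETAILS",
                "QRTZ_CALENDARS",
                "QRTZ_PAUSED_TRIGGER_GRPS",
                "QRTZ_SCHEDULER_STATE"
        };
        for (String table : tables) {
            jdbcTemplate.update("DELETE FROM " + table);
        }
    }
}

Declared as a bean with init-method="clean" and referenced from the SchedulerFactoryBean via depends-on, Spring would run the cleanup before the scheduler starts; the same method could equally be invoked from a shutdown hook or an admin screen, as described above.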


I ran into a similar problem with clustered Quartz 2. I wasn't running Camel, but it's the same problem.

1) There's no way I've seen to delete the jobs in a clustered environment simply by removing the jobs/triggers from the Spring context XML.

2) Because the database stores the job/trigger information, rolling deployments across servers become problematic if you're adding or modifying jobs. Servers can start running jobs before the job implementation has been deployed to them, unless you take down all servers prior to deploying your changes.

To solve this, I came up with a pretty simple solution. As part of our build process, we were already capturing and storing a unique build version and number within the build artifact (using Gradle variable substitution). We simply made the scheduler's name include that unique build version and number, as sketched below. As a result, the latest set of jobs and triggers is added to the DB under the new scheduler name, and once the rolling deploy is done, all servers are running with the new name. This solves both the delete problem and the rolling-deployment problem. If the extra scheduler names ever become an issue in the DB, something could be written to clean them up.
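Against the configuration in the question, that could look something like the snippet below; the build.version placeholder is made up for illustration and is assumed to be filled in at build time (e.g. via Gradle resource filtering) and resolved by a PropertyPlaceholderConfigurer:

<bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean">
    <!-- triggers, dataSource, etc. as in the question -->
    <property name="quartzProperties">
        <props>
            <!-- Each deployment registers its jobs/triggers under a fresh scheduler
                 name, so rows stored under older scheduler names are simply ignored. -->
            <prop key="org.quartz.scheduler.instanceName">SomeBatchScheduler-${build.version}</prop>
            <prop key="org.quartz.scheduler.instanceId">AUTO</prop>
            <!-- remaining jobStore/threadPool properties unchanged -->
        </props>
    </property>
</bean>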


This is an old post, but for the benefit of those who still need a solution, here it is: set the overwriteExistingJobs property to "true". You will have to restart your server, and every time you restart, the old jobs will be removed. I do not know whether this was possible in older versions of quartz-scheduler; I am using 2.1.7.
