
What is the recommended way to spawn threads from a servlet in Tomcat?

Probably a repeat! I am using Tomcat as my server and want to know what is the best way to spawn threads in the servlet with deterministic outcomes. I am running some long-running updates from a servlet action and would like the request to complete and the updates to happen in the background. Instead of adding messaging middleware like RabbitMQ, I thought I could spawn a thread that could run in the background and finish in its own time. I read in other SO threads that the server may terminate threads spawned by a servlet in order to manage its resources well.

Is there a recommended way of spawning threads or background jobs when using Tomcat? I also use Spring MVC for the application.


In a barebones servlet container like Tomcat or Jetty, your safest bet is an application-wide thread pool with a maximum number of threads, so that tasks are queued whenever necessary. The ExecutorService is very helpful here.

Upon application startup or servlet initialization use the Executors class to create one:

executor = Executors.newFixedThreadPool(10); // Max 10 threads.
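In a plain servlet container, a ServletContextListener is a common place to do this; below is a minimal sketch, assuming Servlet 3.0+ (for @WebListener) and purely example class and attribute names:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class ExecutorConfig implements ServletContextListener {

    public static final String ATTRIBUTE_NAME = "executor"; // Example attribute name.

    @Override
    public void contextInitialized(ServletContextEvent event) {
        // Create the application-wide pool once and expose it via the servlet context.
        ExecutorService executor = Executors.newFixedThreadPool(10);
        event.getServletContext().setAttribute(ATTRIBUTE_NAME, executor);
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // Shut down on undeploy/stop; see also the shutdown note further below.
        ExecutorService executor = (ExecutorService) event.getServletContext().getAttribute(ATTRIBUTE_NAME);
        executor.shutdownNow();
    }
}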

Then, during the servlet's service method (you can ignore the result if you aren't interested in it, or store it in the session for later access):

Future<ReturnType> result = executor.submit(new YourTask(yourData));
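Put together in a servlet, that could look roughly like the following sketch (YourData, the attribute names and the URL pattern are placeholders; the executor is assumed to be the one registered at startup in the listener above):

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/yourServlet") // Example URL pattern.
public class YourServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        ExecutorService executor =
            (ExecutorService) getServletContext().getAttribute(ExecutorConfig.ATTRIBUTE_NAME);

        // Copy only the data you need out of the request; never pass the request/response itself.
        YourData yourData = new YourData(request.getParameter("foo"));

        // Future<?> because YourTask is a Runnable; a Callable<ReturnType> would give Future<ReturnType>.
        Future<?> result = executor.submit(new YourTask(yourData));

        // Optionally keep the Future around for later access; otherwise just ignore it.
        request.getSession().setAttribute("yourTaskResult", result);

        response.sendRedirect(request.getContextPath() + "/yourResultPage"); // Example redirect.
    }
}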

Where YourTask must implement Runnable or Callable and can look something like this, where yourData is just your own data, e.g. populated with request parameter values (just keep in mind that you should absolutely not pass Servlet API artifacts such as HttpServletRequest or HttpServletResponse along!):

public class YourTask implements Runnable {

    private YourData yourData;

    public YourTask(YourData yourData) {
        this.yourData = yourData;
    }

    @Override
    public void run() {
        // Do your task here based on your data.
    }
}
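If you are interested in the outcome, a Callable variant returning the ReturnType from the earlier snippet could look like this (a sketch; ReturnType remains a placeholder for your own result type):

import java.util.concurrent.Callable;

public class YourCallableTask implements Callable<ReturnType> {

    private final YourData yourData;

    public YourCallableTask(YourData yourData) {
        this.yourData = yourData;
    }

    @Override
    public ReturnType call() throws Exception {
        // Do your task here based on your data and return the outcome.
        return new ReturnType();
    }
}

Submitting it via executor.submit(new YourCallableTask(yourData)) then yields the Future<ReturnType> shown above.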

Finally, during the application's shutdown or the servlet's destroy, you need to explicitly shut it down, otherwise the threads may run forever and prevent the server from shutting down properly.

executor.shutdownNow(); // Returns the list of tasks that were still awaiting execution.
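If you want to give running tasks a chance to finish first, the usual pattern is to combine shutdown() with awaitTermination() before falling back to shutdownNow(); a sketch (TimeUnit is from java.util.concurrent, and the timeout is just an example):

executor.shutdown(); // Stop accepting new tasks, let already submitted ones finish.
try {
    if (!executor.awaitTermination(10, TimeUnit.SECONDS)) { // Example timeout.
        executor.shutdownNow(); // Give up and interrupt whatever is still running.
    }
} catch (InterruptedException e) {
    executor.shutdownNow();
    Thread.currentThread().interrupt();
}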

In case you're actually using a full Java EE server such as WildFly, Payara, TomEE, etc., where EJB is normally available, then you can simply put the @Asynchronous annotation on an EJB method which you invoke from the servlet. You can optionally let it return a Future<T> with AsyncResult<T> as the concrete value.

@Asynchronous
public Future<ReturnType> submit() {
    // ... Do your job here.

    return new AsyncResult<ReturnType>(result);
}
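Invoking it from the servlet is then just a matter of injecting the EJB; a sketch (the servlet, service and type names are examples):

import java.io.IOException;
import java.util.concurrent.Future;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/yourAsyncServlet") // Example URL pattern.
public class YourAsyncServlet extends HttpServlet {

    @EJB
    private YourService yourService; // The EJB containing the @Asynchronous submit() method above.

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Future<ReturnType> result = yourService.submit(); // Returns immediately; work continues in the background.
        // ... finish the response; call result.get() later if and when you need the value.
    }
}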

See also:

  • Using special auto start servlet to initialize on startup and share application data
  • How to run a background task in a servlet based web application?
  • Is it safe to manually start a new thread in Java EE?


You could maybe use a CommonJ WorkManager (JSR 237) implementation like Foo-CommonJ:

CommonJ − JSR 237 Timer & WorkManager

Foo-CommonJ is a JSR 237 Timer and WorkManager implementation. It is designed to be used in containers that do not come with their own implementation – mainly plain servlet containers like Tomcat. It can also be used in fully blown Java EE applications servers that do not have a WorkManager API or have a non-standard API like JBoss.

Why use WorkManagers?

The common use case is that a Servlet or JSP needs to aggregate data from multiple sources and display it on one page. Doing your own threading in a managed environment like a J2EE container is inappropriate and should never be done in application-level code. In this case the WorkManager API can be used to retrieve the data in parallel.

Install/Deploy CommonJ

The deployment of JNDI resources is vendor-dependent. This implementation comes with a Factory class that implements the javax.naming.spi.ObjectFactory interface, which makes it easily deployable in the most popular containers. It is also available as a JBoss service. more...
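Usage then typically looks something like the sketch below (exception handling omitted; "java:comp/env/wm/default" is only the conventional JNDI name and depends on how the resource is actually deployed):

import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkManager;

public class FetchDataWork implements Work {

    @Override
    public void run() {
        // Fetch one of the data sources here.
    }

    @Override
    public boolean isDaemon() {
        return false; // A short-lived unit of work, not a long-running daemon.
    }

    @Override
    public void release() {
        // Called by the container when the work should stop; cancel gracefully if possible.
    }
}

// In the servlet or JSP (NamingException/WorkException handling omitted):
WorkManager workManager = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");
workManager.schedule(new FetchDataWork());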

Update: Just to clarify, here is what the Concurrency Utilities for Java EE Preview (this looks like the successor of JSR-236 & JSR-237) writes about unmanaged threads:

2.1 Container-Managed vs. Unmanaged Threads

Java EE application servers require resource management in order to centralize administration and protect application components from consuming unneeded resources. This can be achieved through the pooling of resources and managing a resource's lifecycle. Using Java SE concurrency utilities such as the java.util.concurrent API, java.lang.Thread and java.util.Timer in a server application component such as a servlet or EJB is problematic since the container and server have no knowledge of these resources.

By extending the java.util.concurrent API, application servers and Java EE containers can become aware of the resources that are used and provide the proper execution context for the asynchronous operations to run with.

This is largely achieved by providing managed versions of the predominant java.util.concurrent.ExecutorService interfaces.

So nothing new IMO; the "old" problem is the same, unmanaged threads are still unmanaged threads:

  • They are unknown to the application server and do not have access to Java EE contextual information.
  • They can use resources behind the application server's back, and without any administrative ability to control their number and resource usage, this can affect the application server's ability to recover resources after a failure or to shut down gracefully.
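For completeness: this preview eventually became the Concurrency Utilities for Java EE (JSR 236, part of Java EE 7), whose ManagedExecutorService is the container-aware counterpart of the plain ExecutorService. A sketch of typical usage, assuming a Java EE 7+ container exposing the default managed executor (class and parameter names are examples):

import java.io.IOException;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/managedExample") // Example URL pattern.
public class ManagedExampleServlet extends HttpServlet {

    // Container-managed pool; no manual creation or shutdown required.
    @Resource
    private ManagedExecutorService executor;

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        YourData yourData = new YourData(request.getParameter("foo")); // Placeholder data class.
        executor.submit(new YourTask(yourData)); // Same kind of task, now on container-managed threads.
    }
}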

References

  • Concurrency Utilities for Java EE interest site
  • Concurrency Utilities for Java EE Preview (PDF)


I know it is an old question, but people keep asking it, trying to do this kind of thing (explicitly spawning threads while processing a servlet request) all the time. It is a very flawed approach, for more than one reason. Simply stating that Java EE containers frown upon such practice is not enough, although it is generally true.

Most importantly, one can never predict how many concurrent requests the servlet will be receiving at any given time. A web application, a servlet, by definition, is meant to be capable of processing multiple requests on the given endpoint at a time. If you program your request-processing logic to explicitly launch a certain number of concurrent threads, you risk facing the all but inevitable situation of running out of available threads and choking your application. Your task executor is always configured to work with a thread pool that is limited to a finite, reasonable size. Most often, it is not larger than 10-20 (you don't want too many threads executing your logic - depending on the nature of the task, the resources they compete for, the number of processors on your server, etc.).

Let's say your request handler (e.g. an MVC controller method) invokes one or more @Async-annotated methods (in which case Spring abstracts the task executor and makes things easy for you) or uses the task executor explicitly. As your code executes, it starts grabbing the available threads from the pool. That's fine if you are always processing one request at a time with no immediate follow-up requests. (In that case, you are probably trying to use the wrong technology to solve your problem.) However, if it is a web application that is exposed to arbitrary (or even known) clients who may be hammering the endpoint with requests, you will quickly deplete the thread pool, and the requests will start piling up, waiting for threads to become available. For that reason alone, you should realize that you may be on the wrong path if you are considering such a design.
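To make the "bounded pool" point concrete in Spring terms, here is a sketch of a deliberately bounded executor backing @Async methods (all names and sizes are examples, assuming Spring 3.1+ with @EnableAsync; this is an illustration, not the answer's own code):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
public class AsyncConfig {

    @Bean
    public ThreadPoolTaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);    // Small, deliberate limits...
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(100); // ...and a bounded queue, so overload is visible instead of unbounded.
        return executor;
    }
}

@Service
class UpdateService {

    @Async("taskExecutor") // Runs on the bounded pool above, not on the request thread.
    public void runLongUpdate(YourData yourData) {
        // Long-running update goes here.
    }
}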

A better solution may be to stage the data to be processed asynchronously (that could be a queue, or any other type of temporary/staging data store) and return the response. Have an external, independent application, or even multiple instances of it (deployed outside your web container), poll the staging endpoint(s) and process the data in the background, possibly using a finite number of concurrent threads. Not only will such a solution give you the advantage of asynchronous/concurrent processing, it will also scale, because you can run as many instances of the poller as you need, and they can be distributed, pointing at the staging endpoint. HTH
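A very rough sketch of that staging idea, using a hypothetical pending_update table and Spring's JdbcTemplate (a real system might use a proper queue instead; the poller would live in a separate, standalone application):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class UpdateController {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @RequestMapping(value = "/updates", method = RequestMethod.POST)
    @ResponseBody
    public String stageUpdate(@RequestParam("payload") String payload) {
        // Stage the work and return immediately; no processing happens on the request thread.
        jdbcTemplate.update(
                "insert into pending_update (payload, status) values (?, 'NEW')",
                payload);
        return "accepted";
    }
}

// Standalone poller, deployed outside the web container (outline only):
// while (running) {
//     List<PendingUpdate> batch = fetchAndMarkInProgress(batchSize);
//     for (PendingUpdate update : batch) {
//         boundedExecutor.submit(() -> process(update));
//     }
//     Thread.sleep(pollIntervalMillis);
// }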


Spring supports asynchronous tasks (in your case, long-running ones) through its scheduling support. Instead of using Java threads directly, I suggest using it with Quartz.
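The TaskExecutor abstraction described there can also be used directly; a sketch, assuming a ThreadPoolTaskExecutor (or an equivalent backed by your scheduler of choice) is configured as a bean and that the class and method names are just examples:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.task.TaskExecutor;
import org.springframework.stereotype.Service;

@Service
public class BackgroundUpdateService {

    @Autowired
    private TaskExecutor taskExecutor; // Backed by a ThreadPoolTaskExecutor bean.

    public void scheduleUpdate(final YourData yourData) {
        taskExecutor.execute(new Runnable() {
            @Override
            public void run() {
                // Long-running update runs here, off the request thread.
            }
        });
    }
}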

Resources:

  • Spring reference: Chapter 23


Strictly speaking, you're not allowed to spawn threads according to the Java EE spec. I would also consider the possibility of a denial of service attack (deliberate or otherwise) if multiple requests come in at once.

A middleware solution would definitely be more robust and standards-compliant.
