
Java Multithreading - Avoid duplicate request processing

I have the following multi-threaded scenario: requests arrive at a method, and I want to avoid processing duplicates among requests that arrive concurrently, since multiple identical requests might be waiting in a blocked state. I used a Hashtable to keep track of processed requests, but it creates a memory leak. How should I keep track of processed requests and prevent identical requests, which may be blocked waiting, from being processed again?

How can I check that a waiting/blocked incoming request is not one that is already being processed by a current thread?


Okay, I think I kinda understand what you want.

You can use a ConcurrentSkipListSet as a queue. Implement your queued elements like this:

 //Needs java.util.concurrent.Semaphore and java.util.concurrent.atomic.AtomicLong
 class Element implements Comparable<Element> {
      //To FIFOnize
      private static final AtomicLong SEQ = new AtomicLong();
      private final long id = SEQ.incrementAndGet();

      //Can only be executed once.
      private final Semaphore execPermission = new Semaphore(1);


      public int compareTo(Element e){
            // If an element e1 already exists on the queue such that
            // e.compareTo(e1) == 0, the new element will not
            // be placed on the queue.
            if(this.equals(e)){
               return 0;
            }else{
               //This will enforce FIFO.
               return this.id > e.id ? 1 : (this.id < e.id ? -1 : 0);
            }
      }
      //implement both equals and hashCode

      public boolean tryAcquire(){
          return execPermission.tryAcquire();
      }
 }

Now your worker threads should do:

 while(!Thread.currentThread().isInterrupted()){
     //Iterates from head, therefore simulates FIFO
     for(Element e : queue){
          if(e.tryAcquire()){
               execute(e); //synchronous
               queue.remove(e);
          }
     }
 }

You can also use a blocking variant of this solution (have a bounded SortedSet and let worker threads block if there are no elements etc).
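A rough sketch of that blocking variant, assuming a semaphore-based bound (the `BlockingDedupQueue` name and the two-semaphore design are my own illustration, not part of the original answer):

```java
import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.Semaphore;

// Hypothetical sketch: a bounded, duplicate-rejecting set.
// "available" counts queued elements (workers block when it is zero);
// "capacity" enforces the bound (producers block when it is exhausted).
class BlockingDedupQueue<E extends Comparable<? super E>> {
    private final ConcurrentSkipListSet<E> set = new ConcurrentSkipListSet<>();
    private final Semaphore available = new Semaphore(0);
    private final Semaphore capacity;

    BlockingDedupQueue(int bound) {
        this.capacity = new Semaphore(bound);
    }

    /** Blocks while full; returns false if an equal element is already queued. */
    boolean put(E e) {
        capacity.acquireUninterruptibly();
        if (set.add(e)) {
            available.release();
            return true;
        }
        capacity.release(); // duplicate: hand the capacity slot back
        return false;
    }

    /** Blocks until an element is available, then removes and returns the head. */
    E take() {
        available.acquireUninterruptibly();
        E head = set.pollFirst();
        capacity.release();
        return head;
    }
}
```

Because `add` runs before `available.release()`, a permit on `available` guarantees `pollFirst()` will find an element.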


If the memory leak is the problem, have a look at WeakHashMap to keep your requests during processing.
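A minimal sketch of that idea (the `RequestTracker` name is mine; note that WeakHashMap is not thread-safe, so it needs a synchronized wrapper):

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;

// Hypothetical sketch: entries whose request keys are no longer strongly
// referenced anywhere become eligible for GC removal, so the map cannot
// grow without bound even if a finish() call is missed.
class RequestTracker {
    // WeakHashMap is not thread-safe; wrap it for concurrent access.
    private final Map<Object, Boolean> inFlight =
            Collections.synchronizedMap(new WeakHashMap<>());

    /** Returns true if this request was not already being processed. */
    boolean tryStart(Object request) {
        return inFlight.putIfAbsent(request, Boolean.TRUE) == null;
    }

    void finish(Object request) {
        inFlight.remove(request);
    }
}
```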

Another solution would be to use a memory-bound cache...
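One way such a size-bounded cache could be sketched is with LinkedHashMap's `removeEldestEntry` hook (the `SeenRequests` name and the bound are my own illustration; a library cache would be the usual production choice):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: once the map grows past maxEntries, the
// least-recently-accessed entry is evicted automatically.
// Note: LinkedHashMap is not thread-safe; synchronize access or wrap it
// with Collections.synchronizedMap for concurrent use.
class SeenRequests<K> extends LinkedHashMap<K, Boolean> {
    private final int maxEntries;

    SeenRequests(int maxEntries) {
        super(16, 0.75f, true); // access-order iteration = LRU eviction
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, Boolean> eldest) {
        return size() > maxEntries;
    }
}
```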


There is no inherent reason why keeping track of requests in a HashMap (or any other way you might choose) would lead to memory leaks. All that's needed is a way for entries to be removed once they have been processed.

This could mean having your request processing threads:

  • directly remove the entry;
  • communicate back to the dispatcher; or
  • mark the request as processed, so that the dispatcher can remove the entries.
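The first option could look roughly like this (the `Dispatcher`, `handle`, and `process` names are hypothetical; the counter exists only to make the sketch observable):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: each worker registers the request id before
// processing and removes it in a finally block, so entries never outlive
// processing and the set cannot grow without bound.
class Dispatcher {
    private final Set<String> inProgress = ConcurrentHashMap.newKeySet();
    private final AtomicInteger processed = new AtomicInteger();

    /** Processes the request unless an identical one is already in flight. */
    void handle(String requestId) {
        if (!inProgress.add(requestId)) {
            return; // duplicate of a request currently being processed
        }
        try {
            process(requestId);
        } finally {
            inProgress.remove(requestId); // entry removed directly when done
        }
    }

    private void process(String requestId) {
        processed.incrementAndGet(); // stand-in for the real work
    }

    int processedCount() {
        return processed.get();
    }
}
```

Note that this suppresses duplicates only while the original request is still in flight; sequential repeats are processed again.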
