
What's wrong with my log4j filter? (It makes Tomcat hang!)

We recently added a custom filter to our log4j configuration. The goal is to stop repeating the same log message again and again, and instead replace the repeats with:

the log + "last line repeated x times"

We wrote the following filter, which works fine. But then we began to notice strange deadlocks and hangs in Tomcat 5.5.

When we remove this filter, the errors stop occurring. (Yes, we are pretty sure of that.)

Besides this empirical observation, the JVM thread dump shows many log4j threads blocked, waiting for a Tomcat monitor to be released.

Here is the code of our filter. Pretty basic. What's wrong with it?

import org.apache.log4j.Category;
import org.apache.log4j.spi.Filter;
import org.apache.log4j.spi.LoggingEvent;

public class RepeatFilter extends Filter {

    String lastMessage;
    Category lastLogger;
    int repeatCount = 0;

    @Override
    public int decide(LoggingEvent event) {
        // get the rendered (String) form of the message
        String msg = event.getRenderedMessage();

        // never filter our own synthetic messages
        if (msg == null || msg.startsWith("Last message repeated ")) {
            return Filter.NEUTRAL;
        }

        if (msg.equals(lastMessage)) {
            // same message as last time: swallow it and count the repeat
            repeatCount++;
            return Filter.DENY;
        } else {
            if (repeatCount > 0) {
                String msgToLog = "Last message repeated " + repeatCount + " time(s).";
                repeatCount = 0;
                lastLogger.log(event.getLevel(), msgToLog);
            }
        }

        lastLogger = event.getLogger();
        lastMessage = msg;
        return Filter.NEUTRAL;
    }
}

EDIT: Yes, there is recursion when we use a logger inside the filter. In fact, the server hangs after the line lastLogger.log(...). But we really need to write a custom message (the "repeated x time(s)" one). We have tried not to use a logger inside the filter, but we haven't found a way.

Edit 2: I use version 1.2.15 of log4j.

Edit 3: Something we will try:

  • wrap all of our appenders in an AsyncAppender

Edit 4: Wrapping each appender in an AsyncAppender does not solve the issue.
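
For reference, "wrapping" here means routing every existing appender through a single AsyncAppender so that appends happen on a background thread. A minimal programmatic sketch, assuming the appenders hang off the root logger (the AsyncWrap class name is illustrative; the same thing is normally declared in log4j.xml):

import java.util.Enumeration;

import org.apache.log4j.Appender;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Logger;

public class AsyncWrap {

    // Illustrative helper: re-attach all root-logger appenders behind
    // one AsyncAppender, so the caller's thread no longer blocks on them.
    public static void wrapRootAppenders() {
        Logger root = Logger.getRootLogger();
        AsyncAppender async = new AsyncAppender();
        for (Enumeration e = root.getAllAppenders(); e.hasMoreElements();) {
            async.addAppender((Appender) e.nextElement());
        }
        root.removeAllAppenders();
        root.addAppender(async);
    }
}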


Possible Deadlock

Looking at the source code, I see that a lock is obtained on the Category when an event is logged, and then a lock is obtained on each AppenderSkeleton to which the event is dispatched. If two Category instances use the same appender (which is common), then trying to log from an Appender (or a Filter attached to that appender) could definitely cause a deadlock.

For example, suppose there are two Category objects, C1 and C2, and one Appender, A. Thread T1 acquires the lock on C1 and then the lock on A, and starts processing the Filter chain on A.

At the same time, Thread T2 acquires the lock on C2. It cannot acquire the lock on A because it is held by T1, so it waits.

Now suppose that T1 is directed (by the filter) to log a message to C2. It cannot acquire the lock on C2 because it's held by T2. Deadlock.
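
To make the cycle concrete, here is a minimal, self-contained sketch of that lock-ordering pattern, with plain object monitors standing in for the Category and Appender locks (all names are illustrative). Running it hangs, with T1 blocked on C2 and T2 blocked on A:

public class DeadlockSketch {

    static final Object C1 = new Object(); // stands in for Category C1
    static final Object C2 = new Object(); // stands in for Category C2
    static final Object A  = new Object(); // stands in for the shared Appender A

    public static void main(String[] args) {
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                synchronized (C1) {
                    synchronized (A) {        // T1 holds C1, then A
                        pause();
                        synchronized (C2) { } // "filter logs to C2": blocks forever
                    }
                }
            }
        }, "T1");
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                synchronized (C2) {           // T2 holds C2
                    pause();
                    synchronized (A) { }      // A is held by T1: blocks forever
                }
            }
        }, "T2");
        t1.start();
        t2.start();
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}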

If you posted more info from the thread dump, it should be possible to tell whether it's a true deadlock and, if so, which objects are under contention.
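
If taking a full dump is inconvenient, the JVM can also report monitor deadlocks itself. Here is a sketch using the standard java.lang.management API (Java 5+); note that it only inspects the JVM it runs in, so for Tomcat it would have to be triggered from inside the container:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {

    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Returns the IDs of threads deadlocked on object monitors, or null.
        long[] ids = mx.findMonitorDeadlockedThreads();
        if (ids == null) {
            System.out.println("No monitor deadlock detected.");
            return;
        }
        for (ThreadInfo info : mx.getThreadInfo(ids)) {
            System.out.println(info.getThreadName()
                + " is blocked on " + info.getLockName()
                + " held by " + info.getLockOwnerName());
        }
    }
}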

Possible Solution

If this is indeed what's happening, it seems that the problem can be avoided by logging the "repeated" message only to the "current" logger—that is, the logger of the event being evaluated by the filter. One way to do this is to track repeats on a per-logger basis.

import java.util.HashMap;
import java.util.Map;

import org.apache.log4j.Category;
import org.apache.log4j.Priority;
import org.apache.log4j.spi.Filter;
import org.apache.log4j.spi.LoggingEvent;

public class RepeatFilter
  extends Filter
{

  // Repeat state is tracked per logger, so the "repeated" message is
  // always written to the logger whose lock this thread already holds.
  private final Map<Category, Repeat> repeats =
    new HashMap<Category, Repeat>();

  @Override
  public int decide(LoggingEvent event)
  {
    String message = event.getRenderedMessage();
    if ((message == null) || message.startsWith("Last message repeated "))
      return Filter.NEUTRAL;
    Category logger = event.getLogger();
    Repeat r = repeats.get(logger);
    if (r == null)
      repeats.put(logger, r = new Repeat());
    if (message.equals(r.message)) {
      ++r.count;
      return Filter.DENY;
    }
    if (r.count > 0) {
      logger.log(r.level, "Last message repeated " + r.count + " time(s).");
      r.count = 0;
    }
    r.message = message;
    r.level = event.getLevel();
    return Filter.NEUTRAL;
  }

  private static final class Repeat
  {

    Priority level;

    String message;

    int count;

  }

}

This might not be what you want, because events on other loggers will not "flush" the category's "repeated" message. On the other hand, it's likely to compress the log even more effectively, since the same message is likely to be repeated on the same logger.
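
For completeness, a filter like this is registered on each appender it should guard. A minimal usage sketch with programmatic configuration (the ConsoleAppender and pattern are arbitrary; in a log4j.xml setup the filter would be declared under the appender element instead):

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class RepeatFilterDemo {

    public static void main(String[] args) {
        ConsoleAppender console =
            new ConsoleAppender(new PatternLayout("%d %-5p %c - %m%n"));
        // Filters run per appender, in the order they were added.
        console.addFilter(new RepeatFilter());
        Logger.getRootLogger().addAppender(console);

        Logger log = Logger.getLogger(RepeatFilterDemo.class);
        log.info("hello"); // logged
        log.info("hello"); // denied, counted as a repeat
        log.info("hello"); // denied, counted as a repeat
        log.info("world"); // flushes "Last message repeated 2 time(s)." first
    }
}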
