
How reliable is current_kernel_time()?

I am working on performance benchmarking of an SDIO UART Linux/Android driver. I called current_kernel_time() at the start and end of the read and write function implementations under test, then printed the time difference.

Most of the time the difference comes out as 0 (zero) nanoseconds, irrespective of the size of the data read or written (16-2048 bytes), which logically seems incorrect. Only occasionally do I get non-zero values, which I hope are correct.

How reliable is the current_kernel_time()?

Why do I get 0 ns most of the time?

I am planning to profile at the kernel level to get more details. Before that, can somebody shed some light on this behavior? Has anybody observed anything like this before?

Also, any suggestions to improve or correct my benchmarking approach are welcome!

Thank you.

EDIT:

This is the read code from Linux kernel version 2.6.32.9. I added the current_kernel_time() calls under an #ifdef/#endif as shown below:

static void sdio_uart_receive_chars(struct sdio_uart_port *port, unsigned int *status)
{
#ifdef SDIO_UART_DEBUG
    struct timespec time_spec1, time_spec2;
#endif
    struct tty_struct *tty = port->tty;
    unsigned int ch, flag;
    int max_count = 256;

#ifdef SDIO_UART_DEBUG
    /* Timestamp taken just before the receive loop. */
    time_spec1 = current_kernel_time();
#endif

    do {
        ch = sdio_in(port, UART_RX);
        flag = TTY_NORMAL;
        port->icount.rx++;

        if (unlikely(*status & (UART_LSR_BI | UART_LSR_PE |
                        UART_LSR_FE | UART_LSR_OE))) {
            /*
             * For statistics only
             */
            if (*status & UART_LSR_BI) {
                *status &= ~(UART_LSR_FE | UART_LSR_PE);
                port->icount.brk++;
            } else if (*status & UART_LSR_PE)
                port->icount.parity++;
            else if (*status & UART_LSR_FE)
                port->icount.frame++;
            if (*status & UART_LSR_OE)
                port->icount.overrun++;

            /*
             * Mask off conditions which should be ignored.
             */
            *status &= port->read_status_mask;
            if (*status & UART_LSR_BI) {
                flag = TTY_BREAK;
            } else if (*status & UART_LSR_PE)
                flag = TTY_PARITY;
            else if (*status & UART_LSR_FE)
                flag = TTY_FRAME;
        }

        if ((*status & port->ignore_status_mask & ~UART_LSR_OE) == 0)
            tty_insert_flip_char(tty, ch, flag);

        /*
         * Overrun is special.  Since it's reported immediately,
         * it doesn't affect the current character.
         */
        if (*status & ~port->ignore_status_mask & UART_LSR_OE)
            tty_insert_flip_char(tty, 0, TTY_OVERRUN);

        *status = sdio_in(port, UART_LSR);
    } while ((*status & UART_LSR_DR) && (max_count-- > 0));
    tty_flip_buffer_push(tty);

#ifdef SDIO_UART_DEBUG
    /* Timestamp after the loop; cast to long long so the
     * seconds-to-nanoseconds conversion cannot overflow a 32-bit long. */
    time_spec2 = current_kernel_time();
    printk(KERN_INFO "MY_DBG: read took %lld nanoseconds\n",
           (long long)(time_spec2.tv_sec - time_spec1.tv_sec) * 1000000000LL
           + (time_spec2.tv_nsec - time_spec1.tv_nsec));
#endif

}


current_kernel_time() is meant for timekeeping, not for performance measurement. It does not read an actual hardware timer; it returns a cached time value that is only updated by the timer interrupt. Its precision is therefore the timer-interrupt period (1/HZ, e.g. 10 ms at HZ=100), so two readings taken within the same tick are identical. That is why your difference is almost always zero.
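You can see this for yourself with a quick test. The following minimal module (a hypothetical sketch, assuming a 2.6.32-era kernel; the "tickres" name is made up) reads current_kernel_time() twice back to back; both reads almost always land inside the same tick and print identical values, matching the zero difference you are seeing:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/time.h>

static int __init tickres_init(void)
{
    struct timespec a, b;

    /* Two back-to-back reads: both usually fall inside the same
     * timer tick, so they return the same cached value. */
    a = current_kernel_time();
    b = current_kernel_time();

    printk(KERN_INFO "tickres: a=%ld.%09ld b=%ld.%09ld\n",
           a.tv_sec, a.tv_nsec, b.tv_sec, b.tv_nsec);
    return 0;
}

static void __exit tickres_exit(void)
{
}

module_init(tickres_init);
module_exit(tickres_exit);
MODULE_LICENSE("GPL");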

However, getnstimeofday() is probably better suited to your need, since it also reads the actual clock source to refine the tick-based value. It should be much finer grained.

Judging from the kernel source, getrawmonotonic() may be the best choice: it is based on a monotonic clock, so it is also immune to the (unlikely) case where the system time is adjusted backward during your measurement.
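As a sketch of how the instrumentation in your EDIT could be rewritten against a 2.6.32-era kernel (getnstimeofday() drops in the same way if you prefer wall-clock time), using getrawmonotonic() and the timespec helpers from <linux/time.h>:

#ifdef SDIO_UART_DEBUG
    struct timespec ts1, ts2, delta;

    /* Raw monotonic clock: reads the clock source directly and is
     * never stepped by NTP or settimeofday(). */
    getrawmonotonic(&ts1);
#endif

    /* ... the receive loop under test ... */

#ifdef SDIO_UART_DEBUG
    getrawmonotonic(&ts2);
    delta = timespec_sub(ts2, ts1);            /* normalized difference */
    printk(KERN_INFO "MY_DBG: read took %lld ns\n",
           (long long)timespec_to_ns(&delta)); /* s64 nanoseconds */
#endif

timespec_to_ns() returns an s64, which also sidesteps the 32-bit overflow in the original hand-rolled seconds-to-nanoseconds arithmetic.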

