The Jprof Profiler

jim_nance@yahoo.com
Introduction | Operation | Setup | Usage | Interpretation

Introduction

Jprof is a profiling tool. I am writing it because I need to find out where mozilla is spending its time, and there do not seem to be any profilers for Linux that can handle threads and/or shared libraries. This code is based heavily on Kipp Hickman's leaky.

Operation

Jprof operates by installing a timer which periodically interrupts mozilla. When this timer goes off, the jprof code inside mozilla walks the function call stack to determine which code was executing and saves the results into the jprof-log and jprof-map files. By collecting a large number of these call stacks, it is possible to deduce where mozilla is spending its time.

Setup

First, check out the jprof source code, since it is not part of the default pull. To do so, run:
  cvs co mozilla/tools/jprof

Next, configure your mozilla with jprof support by adding --enable-jprof to your configure options (e.g. adding ac_add_options --enable-jprof to your .mozconfig) and making sure that you do not have the --enable-strip configure option set -- jprof needs symbols to operate.
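The relevant .mozconfig lines might look like the following sketch (the rest of your .mozconfig is unchanged; only the jprof-related options are shown):

```shell
# .mozconfig fragment: enable jprof support
ac_add_options --enable-jprof
# Do NOT add ac_add_options --enable-strip: jprof needs symbols to operate.
```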

Finally, build mozilla with your new configuration. Now you can run jprof.

Usage

The behavior of jprof is determined by the value of the JPROF_FLAGS environment variable. This environment variable can be composed of several substrings, each of which controls one aspect of jprof's behavior.

Examples of JPROF_FLAGS usage
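As one illustration (JP_DEFER is described under "Pausing profiles" below; treat this invocation as a sketch, not a complete flag reference):

```shell
# Start mozilla with profiling deferred: the jprof handler is installed,
# but the timer does not run until the process receives its first timer
# signal, so only the actions of interest end up in the profile.
JPROF_FLAGS="JP_DEFER" ./mozilla
```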

Pausing profiles

jprof can be paused at any time by sending a SIGUSR1 to mozilla (kill -USR1). This will cause the timer signals to stop and jprof-map to be written, but it will not close jprof-log. Combining SIGUSR1 with the JP_DEFER option allows profiling of one sequence of actions by starting the timer right before starting the actions and stopping the timer right afterward.

After a SIGUSR1, sending another timer signal (SIGPROF, SIGALRM, or SIGPOLL (aka SIGIO), depending on the mode) can be used to continue writing data to the same output.
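Assuming the running binary is named mozilla-bin and pidof is available (both assumptions for this sketch), the pause/resume cycle might look like this:

```shell
# Pause: stop the profiling timer and write jprof-map
# (jprof-log remains open).
kill -USR1 "$(pidof mozilla-bin)"

# ...perform actions you do NOT want profiled...

# Resume: send the timer signal for the mode in use (SIGPROF here;
# SIGALRM or SIGPOLL/SIGIO in the other modes) to continue appending
# samples to the same jprof-log.
kill -PROF "$(pidof mozilla-bin)"
```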

Looking at the results

Now that we have jprof-log and jprof-map files, we can use the jprof executable to turn them into readable output. To do this, jprof needs the name of the mozilla binary and the log file; it deduces the name of the map file:
  ./jprof /home/user/mozilla/debug/dist/bin/mozilla-bin ./jprof-log > tmp.html
This will generate the file tmp.html which you should view in a web browser.

Interpretation

The Jprof output is split into a flat portion and a hierarchical portion. There are links to each section at the top of the page. It is typically easier to analyze the profile by starting with the flat output and following the links contained in the flat output up to the hierarchical output.

Flat output

The flat portion of the profile indicates which functions were executing when the timer went off. It is displayed as a list of function names on the right and the number of times each function was interrupted on the left. The list is sorted by decreasing interrupt count. For example:
Total hit count: 151603
Count %Total  Function Name

8806   5.8     __libc_poll
2254   1.5     __i686.get_pc_thunk.bx
2053   1.4     _int_malloc
1777   1.2     nsStyleContext::GetStyleData(nsStyleStructID)
1600   1.1     __libc_malloc
1552   1.0     nsCOMPtr_base::~nsCOMPtr_base()
This shows that of the 151603 times the timer fired, 1777 (1.2% of the total) were inside nsStyleContext::GetStyleData() and 1552 (1.0% of the total) were in the nsCOMPtr_base destructor.
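The %Total column is simply a function's hit count divided by the total hit count; for the nsStyleContext::GetStyleData line above:

```shell
# 1777 hits out of 151603 total timer ticks -> about 1.2% of all samples.
awk 'BEGIN { printf "%.1f%%\n", 1777 / 151603 * 100 }'   # prints 1.2%
```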

In general, the functions with the highest count are the functions which are taking the most time.

The function names are linked to the entry for that function in the hierarchical profile, which is described in the next section.

Hierarchical output

The hierarchical output is divided up into sections, with each section corresponding to one function. A typical section looks something like this:
             141300 PL_ProcessPendingEvents
                927 PL_ProcessEventsBeforeID
 29358   0   142227 PL_HandleEvent
              92394 nsInputStreamReadyEvent::EventHandler(PLEvent*)
              49181 HandlePLEvent(ReflowEvent*)
                481 handleTimerEvent(TimerEventType*)
                158 nsTransportStatusEvent::HandleEvent(PLEvent*)
                  9 PL_DestroyEvent

                  4 __restore_rt
The rest of this section explains how to read this information off from the jprof output.

This block corresponds to the function PL_HandleEvent, which is therefore bolded and not a link. The name of this function is preceded by three numbers which have the following meaning. The number on the left (29358) is the index number, and is not important. The center number (0) is the number of times this function was interrupted by the timer. The last number (142227) is the number of times this function was in the call stack when the timer went off. That is, the timer went off while we were in code that was ultimately called from PL_HandleEvent.

For our example we can see that our function was in the call stack for 142227 interrupt ticks, but we were never the function that was running when the interrupt arrived.

The functions listed above the line for PL_HandleEvent are its callers. The numbers to the left of these function names are the numbers of times these functions were in the call stack as callers of PL_HandleEvent. In our example, we were called 927 times by PL_ProcessEventsBeforeID and 141300 times by PL_ProcessPendingEvents.

The functions listed below the line for PL_HandleEvent are its callees. The numbers to the left of the function names are the numbers of times these functions were in the call stack as callees of PL_HandleEvent. In our example, of the 142227 profiler hits under PL_HandleEvent, 92394 were under nsInputStreamReadyEvent::EventHandler, 49181 were under HandlePLEvent(ReflowEvent*), and so forth.
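A useful consistency check on such a block: the caller counts, and the function's own interrupt count plus its callee counts, should each sum to the total stack count. For the PL_HandleEvent block above:

```shell
# Callers of PL_HandleEvent: 141300 + 927 = 142227 (the total stack count).
echo $((141300 + 927))                                 # prints 142227
# Own hits (0) plus callees also sum to the same total.
echo $((0 + 92394 + 49181 + 481 + 158 + 9 + 4))        # prints 142227
```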

Bugs

Jprof has only been tested under Red Hat Linux 6.1 and 6.2. It does not work under 6.0, though it is possible to hack up the source code and make it work there. The way I determine the stack trace from inside the signal handler is tightly bound to the version of glibc that is running. If you know of a more portable way to get this information, please let me know.

Update