1) We were counting "dirty" pages in "waste" when we shouldn't have
been.  This caused the assertion at the end of jemalloc_stats(), which
checks that mapped memory is at least as large as committed memory, to
fail.
2) jemalloc_stats used stats_chunks.curchunks to measure the number of
mapped pages. This was problematic for two reasons.
a) stats_chunks.curchunks was not locked when it was modified in
chunk_{de}alloc(), so its value could be garbage (see the locking
sketch right after this list).
b) Even if it had been locked properly, an allocation could occur
during a call to jemalloc_stats, causing the measured amount of
allocated memory to exceed the measured amount of mapped memory and
tripping the same assertion as in (1).
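As a minimal sketch of the locking such a shared counter would need,
the C below uses made-up names (chunks_mtx, chunks_curchunks,
chunk_note_alloc, chunk_note_dealloc, stats_curchunks); the patch
itself deletes the counter rather than locking it.

/*
 * Illustrative only: names are hypothetical, and the real fix removes
 * stats_chunks instead of adding a lock around it.
 */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t chunks_mtx = PTHREAD_MUTEX_INITIALIZER;
static size_t chunks_curchunks;     /* number of live chunks */

static void
chunk_note_alloc(void)
{
    /* Writers update the counter only while holding its mutex. */
    pthread_mutex_lock(&chunks_mtx);
    chunks_curchunks++;
    pthread_mutex_unlock(&chunks_mtx);
}

static void
chunk_note_dealloc(void)
{
    pthread_mutex_lock(&chunks_mtx);
    chunks_curchunks--;
    pthread_mutex_unlock(&chunks_mtx);
}

static size_t
stats_curchunks(void)
{
    size_t n;

    /* Readers take the same mutex, so they never see a torn value. */
    pthread_mutex_lock(&chunks_mtx);
    n = chunks_curchunks;
    pthread_mutex_unlock(&chunks_mtx);
    return n;
}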
We fixed these issues by deleting stats_chunks entirely, and by
introducing huge_mapped, which measures the amount of memory mapped
by huge allocations (and is properly protected by huge_mtx).
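A rough sketch of that counter follows; huge_mapped and huge_mtx are
the names from this patch, while the helper functions are hypothetical
stand-ins for whatever code paths create and release huge mappings.

#include <pthread.h>
#include <stddef.h>

/* huge_mapped/huge_mtx match the patch; the helpers below are made up. */
static pthread_mutex_t huge_mtx = PTHREAD_MUTEX_INITIALIZER;
static size_t huge_mapped;          /* bytes mapped by huge allocations */

static void
huge_mapped_add(size_t csize)       /* call when a huge mapping is created */
{
    pthread_mutex_lock(&huge_mtx);
    huge_mapped += csize;
    pthread_mutex_unlock(&huge_mtx);
}

static void
huge_mapped_sub(size_t csize)       /* call when a huge mapping is released */
{
    pthread_mutex_lock(&huge_mtx);
    huge_mapped -= csize;
    pthread_mutex_unlock(&huge_mtx);
}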
We now measure the amount of mapped memory by summing huge_mapped and
each arena's mapped memory, and we do so in such a way that, even if
an allocation occurs during our call to jemalloc_stats, we still get a
consistent result (where mapped >= committed).
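A simplified sketch of why that works, using assumed names and layout
(NARENAS, arenas[], arena->lock, arena->mapped, arena->committed): as
long as each arena's pair of counters is read together under that
arena's lock, and the huge counter under huge_mtx, every per-component
snapshot satisfies mapped >= committed on its own, so the totals do
too, no matter what allocations happen between snapshots.

#include <pthread.h>
#include <stddef.h>

#define NARENAS 4                   /* placeholder arena count */

typedef struct arena_s {            /* simplified stand-in for the real arena_t */
    pthread_mutex_t lock;
    size_t mapped;                  /* bytes mapped by this arena */
    size_t committed;               /* bytes committed by this arena */
} arena_t;

static arena_t *arenas[NARENAS];    /* NULL slots are skipped */
static pthread_mutex_t huge_mtx = PTHREAD_MUTEX_INITIALIZER;
static size_t huge_mapped;          /* assumed fully committed in this sketch */

static void
stats_collect(size_t *mapped_out, size_t *committed_out)
{
    size_t mapped = 0, committed = 0;
    unsigned i;

    for (i = 0; i < NARENAS; i++) {
        arena_t *arena = arenas[i];

        if (arena == NULL)
            continue;
        /*
         * Read both counters under the arena's lock so this snapshot
         * satisfies mapped >= committed by itself; a sum of snapshots
         * that each satisfy the invariant satisfies it as well,
         * regardless of allocations that occur between snapshots.
         */
        pthread_mutex_lock(&arena->lock);
        mapped += arena->mapped;
        committed += arena->committed;
        pthread_mutex_unlock(&arena->lock);
    }

    /* Huge allocations are accounted separately, under huge_mtx. */
    pthread_mutex_lock(&huge_mtx);
    mapped += huge_mapped;
    committed += huge_mapped;
    pthread_mutex_unlock(&huge_mtx);

    *mapped_out = mapped;
    *committed_out = committed;
}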
This patch also gets rid of the redundant "committed" entry, so now
there's no confusion as to which entries overlap.
--HG--
extra : rebase_source : 429f3d44011f02dda43aa10b077c917c4d02fe50