Changeset 10e16a7 in mainline for generic/src/mm/slab.c


Timestamp: 2006-02-04T13:51:35Z (19 years ago)
Author: Ondrej Palkovsky <ondrap@…>
Branches: lfn, master, serial, ticket/834-toolchain-update, topic/msim-upgrade, topic/simplify-dev-export
Children: 428aabf
Parents: c5613b7
Message:

Added scheduler queues output. The scheduler is buggy: on SMP
the cpus never get to cpu_sleep; in the slab2 test on 4 cpus everything
ends up on the first cpu.
The slab allocator passes its tests in this configuration, but with a slightly
different (more efficient) locking order it panics. TODO: Find out why
it panics.
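As an illustration only, here is a minimal sketch of what a per-cpu run-queue dump could look like; the cpu_data_t layout, RQ_COUNT, and the counter fields are assumptions made for the example, not the kernel's real scheduler structures.

    /*
     * Hedged sketch of a "scheduler queues output" debug routine.
     * All names below are illustrative, not taken from the kernel.
     */
    #include <stdio.h>

    #define CPU_COUNT 4      /* the SMP configuration mentioned above */
    #define RQ_COUNT  16     /* assumed number of priority run queues per cpu */

    typedef struct {
        unsigned int rq_len[RQ_COUNT];   /* threads waiting at each priority */
        unsigned int nrdy;               /* total ready threads on this cpu */
    } cpu_data_t;

    static cpu_data_t cpus[CPU_COUNT];

    /* Print the length of every run queue on every cpu. */
    static void sched_print_queues(void)
    {
        for (unsigned int c = 0; c < CPU_COUNT; c++) {
            printf("cpu%u (nrdy=%u):", c, cpus[c].nrdy);
            for (unsigned int q = 0; q < RQ_COUNT; q++)
                printf(" %u", cpus[c].rq_len[q]);
            printf("\n");
        }
    }

    int main(void)
    {
        /* Mimic the reported symptom: all work piles up on cpu0. */
        cpus[0].rq_len[0] = 4;
        cpus[0].nrdy = 4;
        sched_print_queues();
        return 0;
    }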

File: 1 edited

  • generic/src/mm/slab.c

    --- generic/src/mm/slab.c (revision c5613b7)
    +++ generic/src/mm/slab.c (revision 10e16a7)

    @@ -77,5 +77,13 @@
      * magazines.
      *
    - *
    + * TODO: For better CPU-scaling the magazine allocation strategy should
    + * be extended. Currently, if the cache does not have magazine, it asks
    + * for non-cpu cached magazine cache to provide one. It might be feasible
    + * to add cpu-cached magazine cache (which would allocate it's magazines
    + * from non-cpu-cached mag. cache). This would provide a nice per-cpu
    + * buffer. The other possibility is to use the per-cache
    + * 'empty-magazine-list', which decreases competing for 1 per-system
    + * magazine cache.
    + *
      */
     
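To make the TODO above concrete, here is a hedged sketch of the allocation order it suggests: take an empty magazine from a per-cache 'empty-magazine-list' or a cpu-cached magazine cache first, and only fall back to the single per-system magazine cache when the local sources are exhausted. The names mag_t, slab_cache_t and the two *_mag_cache_alloc helpers are invented for the example, not taken from slab.c.

    /*
     * Sketch of the proposed magazine lookup order; everything here is a
     * simplified stand-in for the real slab allocator structures.
     */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct mag {
        struct mag *next;
    } mag_t;

    typedef struct {
        mag_t *empty_mags;   /* hypothetical per-cache 'empty-magazine-list' */
    } slab_cache_t;

    /* Stand-ins for the cpu-cached and system-wide magazine caches. */
    static mag_t *cpu_mag_cache_alloc(void)    { return NULL; }          /* empty in this demo */
    static mag_t *system_mag_cache_alloc(void) { return malloc(sizeof(mag_t)); }

    /* Take an empty magazine from the cheapest (least contended) source first. */
    static mag_t *get_empty_magazine(slab_cache_t *cache)
    {
        mag_t *mag = cache->empty_mags;      /* 1) per-cache list, no global lock */
        if (mag) {
            cache->empty_mags = mag->next;
            return mag;
        }
        mag = cpu_mag_cache_alloc();         /* 2) cpu-cached magazine cache */
        if (mag)
            return mag;
        return system_mag_cache_alloc();     /* 3) shared per-system cache */
    }

    int main(void)
    {
        slab_cache_t cache = { .empty_mags = NULL };
        mag_t *mag = get_empty_magazine(&cache);
        printf("got magazine from the system-wide cache: %p\n", (void *) mag);
        free(mag);
        return 0;
    }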
     
    @@ -296,5 +304,5 @@
      * Free all objects in magazine and free memory associated with magazine
      *
    - * Assume mag_cache[cpu].lock is locked
    + * Assume cache->lock is held
      *
      * @return Number of freed pages
     
    @@ -620,4 +628,5 @@
     
     	spinlock_unlock(&cache->lock);
    +	/* We can release the cache locks now */
     	if (flags & SLAB_RECLAIM_ALL) {
     		for (i=0; i < config.cpu_count; i++)
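The comment added in this hunk documents the conservative lock ordering: the cache-wide lock is released before the per-cpu magazine locks are taken one at a time, so the two lock classes are never held together (the commit message notes that a more aggressive ordering panics). Below is a simplified, self-contained sketch of that pattern; the spinlock stubs and structure layouts are stand-ins, not the real slab.c definitions.

    /* Toy model of "drop the cache lock, then walk the per-cpu locks". */
    #include <stdio.h>

    #define CPU_COUNT 4
    #define SLAB_RECLAIM_ALL 0x1

    typedef struct { int locked; } spinlock_t;          /* toy spinlock */
    static void spinlock_lock(spinlock_t *l)   { l->locked = 1; }
    static void spinlock_unlock(spinlock_t *l) { l->locked = 0; }

    typedef struct {
        spinlock_t lock;                 /* per-cpu magazine lock */
        int cached_objs;                 /* objects sitting in cpu magazines */
    } mag_cache_t;

    typedef struct {
        spinlock_t lock;                 /* cache-wide lock */
        mag_cache_t mag_cache[CPU_COUNT];
        int partial_objs;
    } slab_cache_t;

    static int cache_reclaim(slab_cache_t *cache, int flags)
    {
        int freed = 0;

        spinlock_lock(&cache->lock);
        freed += cache->partial_objs;    /* reclaim what the cache lock covers */
        cache->partial_objs = 0;
        spinlock_unlock(&cache->lock);
        /* The cache lock is released; per-cpu locks are taken alone. */

        if (flags & SLAB_RECLAIM_ALL) {
            for (int i = 0; i < CPU_COUNT; i++) {
                spinlock_lock(&cache->mag_cache[i].lock);
                freed += cache->mag_cache[i].cached_objs;
                cache->mag_cache[i].cached_objs = 0;
                spinlock_unlock(&cache->mag_cache[i].lock);
            }
        }
        return freed;
    }

    int main(void)
    {
        slab_cache_t cache = { .partial_objs = 3 };
        cache.mag_cache[0].cached_objs = 5;
        printf("freed %d objects\n", cache_reclaim(&cache, SLAB_RECLAIM_ALL));
        return 0;
    }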
     
    @@ -778,5 +787,5 @@
     {
     	int idx;
    -
    +	
     	ASSERT( size && size <= (1 << SLAB_MAX_MALLOC_W));
     	