[uClinux-dev] ucLinux and XIP memory savings

Larry Baker baker at usgs.gov
Fri Jul 5 03:04:28 EDT 2013


Anna,

> Hi Larry,
> 
> Thanks for your response.
> 
>> Subject: Re: [uClinux-dev] ucLinux and XIP memory savings
>> 
>> Anna,
>> 
>> It is a national holiday in the US, so I am out of the office until
>> Monday when I will be able to send you more details.
>> 
>> I tried to use a Lantronix EDS2100 for an RS-232 data-logging
>> application with remote access.  That box has an M68K ColdFire
processor, 8 MB RAM, 8 MB flash.  I used XIP and any other technique I
>> could find to increase RAM.  The biggest headache was the Linux 2.6
>> power-of-2 buddy system memory allocator.  I guess in the 2.4 kernel,
>> there was a boxcar memory allocator.  That would have been better for
>> such a small memory system.  I had to resort to fixing GCC to try to
>> catch stack overflow problems in standard apps (NTP, for time -- no
>> RTC).  But, I ran out of time to get the system to run reliably -- it
>> kept locking up because of memory allocation failures due to the power-
>> of-2 memory allocation scheme.
> 
> Is this really still true? I had read somewhere that it is possible to replace the standard kernel memory allocator under ucLinux with one that is better suited to embedded systems, e.g. a block-based memory pool type allocator. I cannot find the reference anymore now though.

I believe these are references to the choice of SLAB/SLOB/SLUB for the kernel memory allocator.  The trouble I ran into is with the user memory allocator.  My impression is that the kernel memory allocator takes big chunks from the same underlying power-of-2 page allocator that user allocations come from, then uses its own SLAB/SLOB/SLUB strategy to hand pieces out to kernel clients.
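
To make the cost concrete, the rounding described in the nommu-mmap.txt excerpt quoted further down works out roughly as in the sketch below.  This is an illustration only -- the 4 KB page size and the 33 KB request are just example numbers:

    /* Illustration of the NOMMU mmap rounding described in nommu-mmap.txt,
     * assuming a 4 KB page size.  A 33 KB request is rounded up to the
     * nearest power-of-2 number of pages: 9 pages -> 16 pages = 64 KB. */
    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    static unsigned long nommu_alloc_size(unsigned long len)
    {
        unsigned long pages = (len + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned long order = 1;

        while (order < pages)
            order <<= 1;
        return order * PAGE_SIZE;
    }

    int main(void)
    {
        unsigned long request = 33UL * 1024;
        printf("request %lu bytes -> allocation %lu bytes\n",
               request, nommu_alloc_size(request));   /* 33792 -> 65536 */
        return 0;
    }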

> Also, I found in the kernel documentation that in the no-MMU configuration you can disable power-of-2 round-ups by setting sysctl `vm.nr_trim_pages' to 0. This would allow finer-grained memory allocation and help limit fragmentation. 

I've not heard of this.  In https://www.kernel.org/doc/Documentation/sysctl/vm.txt it says
> ==============================================================
> 
> nr_trim_pages
> 
> This is available only on NOMMU kernels.
> 
> This value adjusts the excess page trimming behaviour of power-of-2 aligned
> NOMMU mmap allocations.
> 
> A value of 0 disables trimming of allocations entirely, while a value of 1
> trims excess pages aggressively. Any value >= 1 acts as the watermark where
> trimming of allocations is initiated.
> 
> The default value is 1.
> 
> See Documentation/nommu-mmap.txt for more information.
> 
> ==============================================================

In https://www.kernel.org/doc/Documentation/nommu-mmap.txt it says
> =================================
> ADJUSTING PAGE TRIMMING BEHAVIOUR
> =================================
> 
> NOMMU mmap automatically rounds up to the nearest power-of-2 number of pages
> when performing an allocation.  This can have adverse effects on memory
> fragmentation, and as such, is left configurable.  The default behaviour is to
> aggressively trim allocations and discard any excess pages back in to the page
> allocator.  In order to retain finer-grained control over fragmentation, this
> behaviour can either be disabled completely, or bumped up to a higher page
> watermark where trimming begins.
> 
> Page trimming behaviour is configurable via the sysctl `vm.nr_trim_pages'.

I think

   1) it is not a different user memory allocator -- memory is still always allocated in powers-of-2;
   2) trimming the "excess" pages leaves small free fragments behind for the life of the larger allocation;
   3) if none of those smaller fragments happens to get allocated in the meantime, they will all be
agglomerated back into the original block when the larger allocation is released;
   4) if one of those smaller fragments does get allocated in the meantime, and is held for a long
time, that is the long-lived fragmentation that tuning vm.nr_trim_pages is meant to avoid.

Still, it might be worth experimenting with on a memory-constrained system.  I ran out of time to
keep butting my head against the limitations of a noMMU system and had to put that effort aside.
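
If anyone does want to experiment with it, setting the sysctl amounts to writing to /proc/sys/vm/nr_trim_pages (or putting "vm.nr_trim_pages = 0" in /etc/sysctl.conf).  A minimal sketch, assuming procfs is mounted at /proc:

    /* Minimal sketch: disable NOMMU page trimming at runtime by writing 0
     * to the vm.nr_trim_pages sysctl file.  Error handling kept minimal. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/nr_trim_pages", "w");

        if (!f) {
            perror("/proc/sys/vm/nr_trim_pages");
            return 1;
        }
        fputs("0\n", f);        /* 0 = never trim excess pages */
        fclose(f);
        return 0;
    }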

In addition to usably large chunks of free user memory disappearing over time, I also had stack
overflow problems with applications like NTP.  On an MMU Linux, the kernel automatically extends
the user stack when it overflows (up to a limit); on noMMU, each process gets a fixed-size stack, so
an overflow silently corrupts whatever is next in memory.  Even though GCC says it supports stack
limit checking for M68K processors, it causes an Internal Compiler Error for M68000 processors,
such as ColdFire.  (See GCC Bug 28896 at http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28896.)  I
added stack limit checking support to GCC for M68000 processors, and fixed the other bugs I found
in stack limit checking.  (Patches and build instructions are also at
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28896.)
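
For anyone who wants to try it, the mechanism is GCC's -fstack-limit-symbol (or -fstack-limit-register) option; with the patched compiler from the bug report above it works on ColdFire too.  The symbol name, limit address, and target name below are only illustrative:

    /* Sketch of stack limit checking via GCC's -fstack-limit-symbol option.
     * Hypothetical build command (the limit address is made up):
     *
     *   m68k-uclinux-gcc -fstack-limit-symbol=__stack_limit \
     *       -Wl,--defsym,__stack_limit=0x20000 -o deep deep.c
     *
     * GCC then emits a stack check in each function prologue and raises a
     * signal instead of silently running past the end of the fixed stack. */
    #include <stdio.h>

    /* Deliberately deep recursion to exercise the check. */
    static unsigned long deep(unsigned long n)
    {
        char pad[256];                  /* burn some stack on each call */

        pad[0] = (char)n;
        return deep(n + 1) + (unsigned char)pad[0];
    }

    int main(void)
    {
        printf("%lu\n", deep(0));
        return 0;
    }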

> I haven't used any of these myself, so any guidance on the suitability of those configuration options would be great.
> 
> Thanks,
> Anna

Larry Baker
US Geological Survey
650-329-5608
baker at usgs.gov
