Memory Allocation Strategy in Safety-Critical Mil Systems

Dynamic memory allocation is convenient, but can threaten application stability and predictability. MilAero developers can head off these risks using innovative memory management techniques.



Developers of embedded software for military and aerospace projects often lack the luxury of knowing the exact amount of memory that program variables will require at runtime. In C/C++, this usually doesn’t matter: these languages popularized dynamic memory allocation, a powerful feature that hands out memory “as needed” at runtime. In its absence, static memory must be declared prior to compilation, and developers either underestimate memory requirements, putting unreasonable restrictions on what the program can handle, or overshoot and waste memory. Dynamic allocation avoids both problems, providing convenience, which contributes to meeting defense projects’ tight development schedules, and economy, which helps in completing work at or under budget.

But dynamic memory allocation is also risky, carrying the threat of memory leaks and fragmentation that are unacceptable in safety-critical military embedded code. Because of these threats, the DO-178B standard, developed by the Radio Technical Commission for Aeronautics (RTCA), mandates that “Software Design Standards should include…constraints on design, for example, exclusion of recursion, dynamic objects, data aliases and compacted expressions.” The “dynamic objects” in this quote refers to objects created in the application through dynamic memory allocation, and thus bans the technique. 

DO-178B Compliance

DO-178B compliance is a requirement for many civilian and military software projects, specifically in avionics, such as navigation, communications, collision avoidance, monitoring, flight control and other systems of military aircraft. The question is: Does a ban on dynamic memory allocation—in order to comply with DO-178B, or simply out of concern for safety and reliability—relegate developers of safety-critical code for fixed- and rotary-wing aircraft to the “dark ages” of static allocation? Presented here are solutions that enable military embedded systems applications to gain the benefits of dynamic memory allocation—simplified programming and more efficient memory use—while eliminating the risks of relying on the C/C++ runtime’s implementation of the feature. 

Normally, the C runtime library provides malloc() and free() APIs that allow applications to allocate and release memory. Application developers tend to use these functions liberally. The greatest risk is failure to diligently free allocated memory when it is no longer needed, resulting in growing areas of unavailable memory (leaks). Fragmentation is a related problem, illustrated in Figure 1. The application starts out with 100 bytes free, allocates 100 bytes, and de-allocates 40 bytes.

Figure 1
A simplified depiction of fragmentation due to memory allocation and de-allocation. An actual buffer would use additional memory to hold allocator meta-data – resulting in an overall size as large as 180 bytes in order to provide 100 bytes for use by the application.

Because of fragmentation, if the application needs to allocate 30 bytes, it can’t. There is no free section that large, even though 40 bytes are free in the aggregate. Standard memory management routines in C/C++, which are typically limited only by the amount of physical memory available, can bring this about on a wide scale. Both fragmentation and leaks can cause the system to bog down and eventually fail as it tries (sometimes unsuccessfully) to find memory resources.

‘Boxing In’ Memory Management

Developers can head off risks by taking responsibility for memory management in safety-critical tasks away from malloc() and free() and assigning it to the application. The developer replaces the standard allocator with a custom allocator that sets aside a buffer for the exclusive use of that task, and satisfies all memory allocation requests out of that buffer. If the memory reserved for this buffer is exhausted, the application is informed, and can then free up memory within the buffer or find more memory to devote to the task. Exhausting the memory in this dedicated pool has no impact on other parts of the system.
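A minimal sketch of this approach, with hypothetical buffer size and function names: every request from the task is satisfied out of one dedicated buffer, and exhaustion is reported to the application instead of destabilizing the rest of the system.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-task allocator: all requests come from one dedicated
 * buffer, so exhausting it cannot affect other parts of the system. */
#define TASK_POOL_SIZE 1024

static uint8_t task_pool[TASK_POOL_SIZE];
static size_t  task_pool_used = 0;

/* Returns NULL when the pool is exhausted; the caller can then free
 * memory within the buffer or devote more memory to the task. */
void *task_alloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7u;         /* round up to 8-byte alignment */
    if (n > TASK_POOL_SIZE - task_pool_used)
        return NULL;                    /* pool exhausted: reported, contained */
    void *p = &task_pool[task_pool_used];
    task_pool_used += n;
    return p;
}

/* Reset the whole pool at a task boundary instead of freeing piecemeal. */
void task_reset(void) { task_pool_used = 0; }
```

Here a simple bump-pointer scheme stands in for whatever allocation policy the task actually needs; the point is that the buffer’s boundaries box in the effect of any failure.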

In fact, avoiding general-purpose allocators is a good strategy for military embedded systems generally (and not just for the avionics projects covered by DO-178B), because such allocators often introduce unpredictability, use memory inefficiently and are too slow. They’re usually based on a list allocator algorithm that organizes a pool of contiguous memory locations—often called free holes—into a singly linked list.

To service a request, the allocator traverses the list looking for a large enough hole, using lookup strategies such as:

  • First-fit—walking the list from the beginning to find the first available hole
  • Next-fit—searching from where a previous search left off
  • Best-fit—searching for the smallest block large enough to satisfy the request
  • Quick-fit—consulting the allocator’s own list of common memory sizes to quickly allocate a block

These strategies are designed to satisfy many application scenarios, but in the end, they all introduce fragmentation.
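The first of these lookup strategies can be sketched in a few lines; the hole structure and function name are illustrative.

```c
#include <stddef.h>

/* A free hole in a singly linked list, as maintained by a list allocator. */
struct hole {
    size_t size;            /* bytes available in this hole */
    struct hole *next;      /* next free hole in the list */
};

/* First-fit: walk the list from the beginning and return the first hole
 * large enough for the request, or NULL if none qualifies. */
struct hole *first_fit(struct hole *head, size_t request)
{
    for (struct hole *h = head; h != NULL; h = h->next)
        if (h->size >= request)
            return h;
    return NULL;
}
```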

Block Allocation

In contrast, custom allocators typically focus on the specific allocation patterns used by the system. Consider a block allocator: it is given a quantity of memory, divides this block into equal-size pieces, and organizes them in a linked list of free elements. To serve a request for memory, the allocator returns a pointer to one piece and removes it from the list. When there are no more elements in the “free list,” a new large block can be selected from the memory pool. Freed elements are placed back into the original block’s “free list.” Since allocated objects are all the same size, there is no need for the block allocator to “remember” each element’s size, or to aggregate smaller chunks into larger ones. This minimizes overhead and conserves CPU cycles.
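A minimal block-allocator sketch along these lines, with illustrative element counts and sizes:

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_ELEMS 8
#define ELEM_SIZE 32

/* Each free element stores a pointer to the next; the union guarantees
 * pointer alignment for every element. */
typedef union elem {
    union elem *next;
    uint8_t     payload[ELEM_SIZE];
} elem_t;

static elem_t arena[NUM_ELEMS];     /* the block handed to the allocator */
static elem_t *free_list = NULL;

/* Carve the block into equal-size pieces chained into a free list. */
void block_init(void)
{
    free_list = NULL;
    for (int i = NUM_ELEMS - 1; i >= 0; --i) {
        arena[i].next = free_list;
        free_list = &arena[i];
    }
}

/* Pop one element; no per-element size bookkeeping is needed. */
void *block_alloc(void)
{
    if (free_list == NULL)
        return NULL;                /* free list exhausted */
    elem_t *p = free_list;
    free_list = p->next;
    return p;
}

/* Push the element back onto the free list. */
void block_free(void *p)
{
    elem_t *e = p;
    e->next = free_list;
    free_list = e;
}
```

Note that block_free() needs no size argument: because every element is the same size, pushing it back onto the free list is the entire bookkeeping.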

To satisfy several allocation patterns, a block allocator can be combined with some other techniques into a hybrid memory manager. For example, a block allocator can maintain multiple lists of different-sized elements, as in Figure 2. Meanwhile, the blocks themselves, and objects that exceed the lists’ maximum “chunk size,” can be allocated using a general-purpose allocator. Such techniques can achieve a processing rate that is orders of magnitude higher than the general-purpose malloc().

Figure 2
A block allocator can maintain lists of different-sized elements.
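The front end of such a hybrid manager can be as simple as a size-class lookup. The classes below are illustrative; requests that exceed the largest class fall through to a general-purpose allocator.

```c
#include <stddef.h>

/* Hypothetical hybrid front end: pick a fixed-size free list by request
 * size, falling back to a general-purpose allocator for large objects. */
#define NUM_CLASSES 4
static const size_t class_size[NUM_CLASSES] = { 16, 32, 64, 128 };

/* Return the index of the smallest size class that fits the request,
 * or -1 when the request exceeds the lists' maximum "chunk size". */
int pick_class(size_t request)
{
    for (int i = 0; i < NUM_CLASSES; ++i)
        if (request <= class_size[i])
            return i;
    return -1;   /* route to the general-purpose allocator instead */
}
```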

Stack-based allocators borrow a concept from the application’s call stack, though the two should not be confused. With each allocation, the address of the current position of the stack pointer is returned, and the pointer is advanced by the amount of the allocation request (Figure 3). When the memory is no longer needed, the stack pointer is rewound. Overhead is minimized because there is no linked list (chain) of pointers and it is not necessary to track the size of each allocation or free hole. An important by-product is improved safety: it is impossible to accidentally introduce a memory leak through improper de-allocation, because the application does not have to track individual allocations. Other custom allocators include bitmap allocators and thread-local allocators.

Figure 3
With a stack allocator the address of the current position of the stack pointer is returned with each allocation, and the pointer is advanced by the amount of the allocation request.
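A stack allocator along the lines of Figure 3 might look like this; the arena size and function names are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

#define STACK_ARENA 256

static uint8_t sarena[STACK_ARENA];
static size_t  sp = 0;              /* current stack-pointer offset */

/* Return the current position and advance by the request. */
void *stack_alloc(size_t n)
{
    n = (n + 7u) & ~(size_t)7u;     /* keep 8-byte alignment */
    if (n > STACK_ARENA - sp)
        return NULL;                /* arena exhausted */
    void *p = &sarena[sp];
    sp += n;
    return p;
}

size_t stack_mark(void)       { return sp; }  /* remember a position */
void   stack_rewind(size_t m) { sp = m; }     /* release everything since */
```

Rewinding to a saved mark releases everything allocated since that mark in one step, which is why individual allocations never need to be tracked.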

Allocation Using an IMDS

The design strategy discussed above—turning responsibility for memory allocation over to the application, replacing the standard allocator with a custom allocator, and satisfying all memory allocation requests out of a dedicated buffer—can also be achieved via third-party code integrated with radar, mapping, flight control and other military electronics. For example, McObject’s eXtremeDB in-memory database system (IMDS), used in the example below, was designed to operate in resource-constrained embedded systems, with efficient custom algorithms for allocating precious memory and no reliance on the general-purpose C runtime allocator.

Last fall, BAE Systems chose McObject’s eXtremeDB in-memory embedded database running on Wind River’s VxWorks real-time operating system (RTOS) as part of an avionics upgrade for the high-profile Panavia Tornado GR4 multi-role combat jet (Figure 4). The database will help the aircraft improve its ability to perform a wide variety of missions, during day or night, through advanced radar, mapping and navigation technology that allows low-level flying even when poor weather prevents visual flight.

Figure 4
In-memory embedded database technology running on Wind River’s VxWorks real-time operating system (RTOS) is part of an avionics upgrade for the high-profile Panavia Tornado GR4 multi-role combat jet.

Figure 5a illustrates allocation/de-allocation of sensor data using malloc() and free(). Figure 5b illustrates the same process using eXtremeDB. Sensor data plays a key role in military avionics—today’s aircraft have been accurately referred to as “sensor platforms” for the profusion of such technology—and also can figure prominently in more general military applications such as battlefield surveillance and enemy tracking. 

Figure 5a
Allocation/de-allocation of sensor data using malloc() and free().

Figure 5b
The same process using an in-memory database system.

At the start of Figure 5a, a C program defines a structure, usually in a header file. Within a function, the program declares a pointer to an instance of that structure and allocates memory for it via malloc(). When the program/function is done with the structure, memory is returned to the heap with a call to free().
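The conventional pattern Figure 5a depicts can be sketched as follows; the Sensor fields shown are illustrative, not taken from the figure.

```c
#include <stdlib.h>

/* Hypothetical sensor record, as would normally be defined in a header. */
struct Sensor {
    unsigned s_id;      /* sensor identifier */
    unsigned sub_nbr;   /* subsystem number  */
    double   value;     /* latest reading    */
};

/* Allocate, use and release a Sensor the conventional C way. */
int use_sensor(void)
{
    struct Sensor *s = malloc(sizeof *s);   /* dynamic allocation */
    if (s == NULL)
        return -1;                          /* allocation failed */
    s->s_id    = 1;
    s->sub_nbr = 7;
    s->value   = 42.0;
    free(s);                                /* return memory to the heap */
    return 0;
}
```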

Defining Classes

With eXtremeDB, the programmer defines a class or classes in a database schema file, and pre-processes the file with a schema compiler that produces a .C and a .H file. The .H file contains type definitions (typedefs) and function prototypes for working with the defined classes. The code in Figure 5b defines a class called Sensor and declares an instance of Sensor (theSensor) in the function body, along with a variable to hold return codes, and a handle to a transaction. 

If the program that uses malloc/free is multi-threaded and threads will share the Sensor object, the developer must implement concurrency control to regulate access to that object. With an in-memory database, concurrency control is automatic: interaction is carried out in the context of a database transaction, which guarantees atomicity (everything within the scope of the transaction succeeds or fails together) and isolation (transactions execute separately).

In Figure 5b, the function mco_trans_start begins a transaction. mco_trans_start is given a handle to a database (returned from mco_db_open, not shown), the transaction type (in this case read-write), priority level, and the address of the transaction handle to be returned to the program.

Upon beginning a transaction, the program calls Sensor_new(), which creates space in the in-memory database for a new Sensor object. The arguments are the transaction handle returned from mco_trans_start and the address of a handle to a Sensor object. This is the functional equivalent of malloc() in the C program.

Leveraging Interfaces

Unlike malloc(), Sensor_new() returns a handle to an object in the database. So whereas the C program works directly with the structure’s member fields, the eXtremeDB-based program works through the interfaces generated by the schema compiler, for instance Sensor_s_id_put() and Sensor_sub_nbr_put().

When the C program in Figure 5a is finished with the Sensor structure, free() returns memory to the heap. When the eXtremeDB-based program is finished, the space in the database is relinquished by the call to Sensor_delete(), which passes in the handle of the object to be removed from the database, and ends the transaction (mco_trans_commit()). (Both examples omit error handling, for brevity.)
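Pieced together, the call sequence described above looks roughly like the following sketch. It is not runnable as-is: type names, constants and exact argument lists are assumptions, and the real eXtremeDB signatures should be taken from the generated headers.

```c
/* Sketch of the transaction-bounded lifecycle of a Sensor object.
 * mco_trans_h, MCO_RET and the MCO_* constants are assumed names. */
mco_trans_h t;          /* transaction handle                         */
Sensor      theSensor;  /* handle to a Sensor object in the database  */
MCO_RET     rc;

/* read-write transaction at a given priority; db from mco_db_open() */
rc = mco_trans_start(db, MCO_READ_WRITE, MCO_TRANS_FOREGROUND, &t);
rc = Sensor_new(t, &theSensor);          /* functional equivalent of malloc() */
rc = Sensor_s_id_put(&theSensor, 1);     /* generated put interfaces */
rc = Sensor_sub_nbr_put(&theSensor, 7);
rc = Sensor_delete(&theSensor);          /* functional equivalent of free() */
rc = mco_trans_commit(t);                /* end the transaction */
```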

While the database, like the program as a whole, can run low on memory, this would result in a “database full” error message that can be dealt with programmatically, rather than the much more dangerous and unpredictable scenario of heap memory fragmentation and leakage—which is obviously unacceptable in safety-critical military systems. Meanwhile, the allocation/de-allocation accomplished by the IMDS relies on custom allocators optimized for the given pattern of allocation.

The custom allocators work with memory that was dedicated to the in-memory database, but eXtremeDB doesn’t care how the memory is obtained. It could be global memory, heap memory, or in a flat memory model architecture (such as VxWorks 5.5), just a dedicated region of memory in a fashion similar to a video buffer or keyboard buffer. Using the IMDS retains the advantages of dynamic memory allocation while avoiding the DO-178B proscription on it.  

McObject, Issaquah, WA.
(425) 888-8505.