
Why Memory Allocation Resilience Matters in IoT


Memory allocation is one of those things developers rarely think much about.
After all, modern computers, tablets, and servers have so much memory that it often seems like an infinite resource. And if an allocation does fail, the failure is considered so unlikely that the program normally just exits.
Things are very different, however, in the Internet of Things (IoT). In these embedded connected devices, the system is smaller and so is the memory, and multiple programs compete for how much of it they can consume. Memory is therefore best viewed as a limited resource and used conservatively.
It's in this context that memory allocation, commonly referred to by the name of the C function malloc, takes on great importance in our sector. Malloc reserves a portion of memory for use by a running program or process. Getting it right, especially for devices connected to the internet, can make or break performance.
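As a minimal sketch of what this difference in mindset looks like in code (the names here are hypothetical, used only for illustration), a resilient caller checks the result of malloc and rejects a single request rather than terminating the whole program:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical connection context, for illustration only. */
struct connection {
    char  *rx_buffer;
    size_t rx_buffer_size;
};

/* Try to set up a connection; on allocation failure, report the error
 * to the caller instead of exiting, so the device keeps serving the
 * connections it already has. */
int connection_init(struct connection *conn, size_t buffer_size)
{
    conn->rx_buffer = malloc(buffer_size);
    if (conn->rx_buffer == NULL) {
        /* Resilient path: degrade (reject this connection) rather than abort. */
        return -1;
    }
    conn->rx_buffer_size = buffer_size;
    memset(conn->rx_buffer, 0, buffer_size);
    return 0;
}
```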
So, let’s take a look at how developers can build resilience into their malloc approach and what it means for connected device performance going forward.
Let's start from the beginning. Traditionally, malloc has not been used much in embedded systems. Older devices didn't typically connect to the internet and therefore had vastly different memory demands.
These older devices did, however, create a pool of resources at system start from which to allocate. A resource could be a connection, for example, and the system could be configured to allow n connections drawn from a statically allocated pool, as sketched below.
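A minimal sketch of that static-pool pattern, with hypothetical names and a fixed n, might look like this:

```c
#include <stddef.h>
#include <stdbool.h>

#define MAX_CONNECTIONS 8  /* n is fixed at build time */

struct connection {
    bool in_use;
    int  socket_fd;
};

/* The whole pool lives in static memory; no malloc at runtime. */
static struct connection connection_pool[MAX_CONNECTIONS];

/* Hand out a free slot, or NULL if all n connections are taken. */
struct connection *connection_acquire(void)
{
    for (size_t i = 0; i < MAX_CONNECTIONS; i++) {
        if (!connection_pool[i].in_use) {
            connection_pool[i].in_use = true;
            return &connection_pool[i];
        }
    }
    return NULL;  /* pool exhausted: reject the request, don't crash */
}

/* Return a slot to the pool when the connection closes. */
void connection_release(struct connection *conn)
{
    conn->in_use = false;
}
```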
In a system that is not connected to the internet, the possible states are fairly restricted, so the upper bound on memory consumption is easier to estimate. This can change drastically once an embedded system connects to the internet.
For example, a device can hold multiple connections, each with a different memory requirement depending on what the connection is used for. The buffer memory required for a data stream on a connection depends on the connection's latency, the throughput that needs to be sustained, and some probability function for packet loss or other network-dependent behavior.
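As a rough, hypothetical illustration of how such a requirement might be estimated, the sketch below uses the classic bandwidth-delay product scaled by an assumed margin for retransmissions; a real calculation would depend on the protocol and its loss model:

```c
#include <stddef.h>

/* Illustrative estimate of a stream's buffer size:
 * bandwidth-delay product (throughput * round-trip time) scaled up
 * by an assumed margin for retransmissions caused by packet loss. */
size_t estimate_stream_buffer(size_t throughput_bytes_per_s,
                              unsigned rtt_ms,
                              double loss_probability)
{
    double bdp = (double)throughput_bytes_per_s * ((double)rtt_ms / 1000.0);
    double retransmit_margin = 1.0 + loss_probability * 2.0; /* assumed factor */
    return (size_t)(bdp * retransmit_margin);
}
```

The point is not the exact formula but that the answer varies per connection and at runtime, which is exactly the situation a statically sized pool struggles to cover.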
