The advent of the Internet and Google has made remembering things so much easier. When I used to do research projects I would go to the library, look through the catalogs, and find some books on my subject. I’d then have to hunt for relevant information with only the table of contents as my guide. Now I can type a few words into a search engine and come up with articles like this one that tell me what I need to know. Today, what you need to know about is low power memory management. Which kind of memory you access, and how you access it, matters a great deal. The two main kinds we’ll look at are scratchpad memory (SPM) and cache memory. Each has its own pros and cons, and each delivers better performance in certain cases.
In embedded systems, we’re often concerned about speed, especially when your software is running a vehicle. However, we’re also worried about energy usage. Car batteries aren’t infinite, and with more and more sensors being added in advanced driver assistance systems (ADAS), we need to use what power we have efficiently. We all know that computations can burn through energy, but memory operations also have a significant effect. That effect gets compounded the further away from the processor cores you get.
The two main memory actions we deal with are data accesses and instruction fetches. Depending on the kind of system you’re running, the balance between the two may differ. One expert estimates that, on average, data accesses account for 40% of memory operations, while instruction fetches make up 60%. When designing your program it’s important to think about which operations you’re optimizing for. This estimate suggests that your time is better spent making instruction fetches more efficient than worrying about data access.
It took a lot more time and energy for me to drive to the library and pick out books for research than it does now, when I can simply Google information. The same principle applies to embedded system memory. Nearer memory is easier to access and takes less energy to retrieve information from than memory that is relatively far away. SPM and cache memory are close compared with external RAM. In fact, for the cost of one external RAM operation you could execute 40 cache or 170 SPM accesses.
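Those ratios are worth playing with. Here’s a minimal back-of-the-envelope model in C using only the numbers above; the cost constants and the `memory_energy` helper are illustrative assumptions, not measurements from any particular chip.

```c
/* Relative energy costs from the figures above: one external RAM
 * operation costs roughly as much as 40 cache accesses or 170 SPM
 * accesses. Units are arbitrary: 1.0 = one external RAM access. */
#define EXT_RAM_COST 1.0
#define CACHE_COST   (1.0 / 40.0)
#define SPM_COST     (1.0 / 170.0)

/* Rough energy estimate for a given mix of accesses (a sketch for
 * intuition, not a power model of real hardware). */
static double memory_energy(long ext, long cache, long spm)
{
    return ext * EXT_RAM_COST + cache * CACHE_COST + spm * SPM_COST;
}
```

By this model, moving a million accesses from external RAM into SPM takes the estimate from 1,000,000 units down to roughly 5,900, which is why placement matters so much.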
ADAS enabled vehicles need to save energy everywhere they can, including in memory.
Memory Type and Energy Usage
Memory comes in many different flavors, but the two we’ll be comparing are SPM and cache memory.
What I’m referring to as scratchpad memory is generally fast and wide SRAM with a very direct connection to the core. One example of this kind of memory is TCM in ARM systems. SPM will usually be the “closest” and most optimized kind of memory available to a system. This makes it the most energy efficient to access. Adding even small amounts of SPM to a system can greatly reduce its power consumption. However, as the size of a memory bank increases, so do the time and energy required to access data. Thus you will eventually see diminishing returns from adding more and more SPM. The inflection point for size vs. efficiency will also depend on the size of your program. When dealing with ADAS cars, though, that is unlikely to be the limiting factor. It’s more likely that your SPM size will be limited by cost. SRAM costs a good deal more than DRAM, and at some point, the monetary cost will outweigh the energy savings.
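On most toolchains you place data or code into SPM/TCM through linker sections. A sketch of what that can look like with GCC-style attributes is below; the section names `.tcm_data` and `.tcm_code` are assumptions, since the real names come from your vendor’s linker script and startup code.

```c
#include <stdint.h>

/* Hot, small buffer placed in data TCM. The section name ".tcm_data"
 * is an assumption; match it to your linker script. */
__attribute__((section(".tcm_data")))
static uint8_t sensor_ring_buffer[512];

/* Frequently-called routine placed in instruction TCM (section name
 * ".tcm_code" also assumed) so fetches never leave the core. */
__attribute__((section(".tcm_code")))
int filter_sample(int x)
{
    static int acc;               /* simple smoothing filter state  */
    acc = (acc * 7 + x) / 8;      /* integer low-pass, for example  */
    return acc;
}
```

The attributes only request a section; it’s the linker script that actually maps those sections onto the TCM address range.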
I just said that SPM is the most energy efficient type of memory, so why should we even worry about caches? Well, cache memory has several advantages over SPM, namely its ease of use and its availability. SPM is specialized memory provided by the manufacturer of your processor, and if it doesn’t come with the chip you’re out of luck. Any circuits made for vehicles should have cache memory, though. Caches are also easier to use and don’t require as much management as SPM. They’ll automatically adapt to your software while it’s running without your intervention. In terms of cons, cache memory by definition takes more energy to operate than SPM. In addition, you have to access this memory by line or group instead of by individual location. If the data you’re writing or reading only takes up half a line, you’re going to have to read/write an entire line anyway, wasting power. Lastly, cache memory is even more cost limited than SPM, so you’ll have to make do with what you can purchase.
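One practical response to that line-granularity cost is to size and align hot structures to the cache line, so each access transfers exactly one line instead of straddling two. A minimal sketch, assuming a 32-byte line (check your core’s reference manual; the `sample_t` layout is a made-up example):

```c
#include <stdint.h>

/* Assumed cache line size; verify against your core's documentation. */
#define CACHE_LINE 32

/* Pad and align a hot structure to a full line so every read or write
 * touches exactly one line, never two. */
typedef struct {
    uint32_t timestamp;
    int16_t  accel[3];
    uint8_t  flags;
    uint8_t  pad[CACHE_LINE - sizeof(uint32_t)
                 - 3 * sizeof(int16_t) - sizeof(uint8_t)];
} __attribute__((aligned(CACHE_LINE))) sample_t;

_Static_assert(sizeof(sample_t) == CACHE_LINE,
               "sample_t must fill exactly one cache line");
```

The `_Static_assert` catches it at compile time if a later field addition silently pushes the structure across a line boundary.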
While SPM and cache memory are both great, be careful about mixing and matching them for the same data. If a piece of data sometimes lives in SPM and sometimes in the cache, the processor may check the memory where it was last accessed, fail to find it, and then look in the other one, wasting energy on the extra lookup. It’s better to keep everything in its right place so the processor finds it the first time.
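One way to enforce that discipline is to decide each buffer’s home once, at build time. The sketch below is one hypothetical convention (the `.tcm_data` section name and the buffers themselves are invented for illustration):

```c
/* Hypothetical placement policy: every buffer gets exactly one home,
 * chosen at build time, so the core never probes the wrong memory.
 * Section name ".tcm_data" is an assumption; match your linker script. */
#define IN_SPM  __attribute__((section(".tcm_data")))
#define CACHED  /* default linker placement, served through the cache */

IN_SPM static int   isr_scratch[64];    /* small and hot: lives in SPM   */
CACHED static float map_tiles[4096];    /* large, streamed: cached DRAM  */
```

With a convention like this, a code review only has to check the declaration to know where a buffer lives.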
Check and see where your system might meet diminishing returns when using SPM and caches.
With Google, we no longer have to pore over books in the library. Thanks to SPM and cache memory, our programs don’t have to access external RAM every time they’re looking for data or instructions. These operations can consume a significant amount of energy, which is why it’s important to look at what kind of memory you’re using. Caches and SPM can both greatly improve the efficiency of your program, but they’re both limited by eventual diminishing returns and cost. Next time we’ll look at some software tricks that will help get the most out of different memory types.
Wasting energy in development is almost as bad as losing power by using memory incorrectly. TASKING makes tools specifically for the automotive industry that can help designers like you work efficiently. Software like standalone debuggers and static analysis tools will help you build and debug your system in record time.
Have more questions about embedded memory? Call an expert at TASKING.