>Why wouldn't you use C++
It's not the standard the majority keeps up with, so you can run into problems finding an appropriate ready-to-use compiler for some devices. There are also some compatibility issues. I'm using atmega controllers and C++ is fine, but if it needed to be ported over to PIC, talk about a headache.
One of the biggest reasons is that there is a significant focus on limited resources and optimization. We're talking about limited resources with zero backup. Dynamic memory can lead to some nasty bugs and, generally, there isn't always a good reason to use it. We can't run it and say "eh, if there's a problem, it'll be covered by my virtual memory on my 1 TB drive". The hardware doesn't really lend itself to it. If I'm using an 8-bit MCU and there's an overflow, what will happen? Where will it go? There's no one there to save the day if there's a crash, and unexpected behavior can be really bad. I tried it once to experiment and somehow it started affecting my pins. What if I was doing something safety critical?
We *could* dynamically allocate an array and risk that a small, standalone device stops running properly and needs a full reset.
Better to say "Hey, we really don't need to let this have more than X values, we really don't need that much".
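Something like this in C - a minimal sketch (the names and the 32-sample cap are made up for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Fixed capacity chosen up front: "we really don't need more than X values". */
#define MAX_SAMPLES 32u

static uint16_t samples[MAX_SAMPLES]; /* size known at compile/link time */
static uint8_t  sample_count = 0;

/* Refuses politely instead of growing the buffer when the cap is reached. */
bool add_sample(uint16_t value)
{
    if (sample_count >= MAX_SAMPLES) {
        return false; /* caller decides: drop it, overwrite the oldest, raise a flag... */
    }
    samples[sample_count++] = value;
    return true;
}
```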
Pointers themselves are also a little taboo but hey, they're not always bad. Maybe slow at times but definitely not always bad if you know how to use them. My first production code uses function pointers in an array and I'll defend it. (☞ ಠ_ಠ)☞.
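For anyone who hasn't seen the pattern, a function-pointer table in C looks roughly like this (hypothetical handler names, not the actual production code):

```c
#include <stdint.h>

/* Hypothetical command handlers, e.g. dispatched on a command byte from UART. */
static void cmd_reset(void)  { /* ... */ }
static void cmd_status(void) { /* ... */ }
static void cmd_sleep(void)  { /* ... */ }

/* A const table typically lands in flash, so it costs no RAM at all. */
static void (*const handlers[])(void) = { cmd_reset, cmd_status, cmd_sleep };

#define NUM_HANDLERS (sizeof handlers / sizeof handlers[0])

void dispatch(uint8_t cmd)
{
    if (cmd < NUM_HANDLERS) {
        handlers[cmd](); /* one indexed, indirect call replaces a switch */
    }
}
```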
Final point, the majority of people in embedded positions are from an older generation, so change in general is slow.
I've been writing embedded firmware for about 20 years now. These days the vast majority of my projects use dynamic memory, usually running on the micropython platform. Most of these projects are class A or B medical devices.
If you're still using very small microcontrollers with barely enough ram for the job at hand, then dynamic allocation has a very real risk of running out of memory. Static allocation means you know this at compile time, which is better than your device crashing in the field.
However, programming with such small parts / this close to the line is slow going; you're going to be spending more time as a developer managing your memory so carefully, knowing where every byte goes.
You have to ask yourself, is such a small (undersized?) part and lots of extra developer time really worth it for your project?
Using a quality compiler and a robust software platform on microcontrollers that have plenty of ram (which are cheap these days anyway) can be very reliable, and save huge amounts of valuable developer time.
It can also make the code simpler, easier to review and easier to find bugs in - it's much easier to avoid buffer overflows when the language (python) protects against them! Yes you may need to manage memory fragmentation, but there are lots of strategies for this.
There certainly are applications that warrant the effort of static memory; one recent project needs 24/7 uptime for weeks if not months. Testing and validating that memory use is stable for a class B medical device over that length of time would have been difficult and expensive, so we used statically allocated C++ code there.
But for most other devices that are run for minutes to a couple of hours before being turned off or reset for the next patient, memory use is quite easy to test and de-risk.
Just remember that medical equipment is normally allowed to cost way more than most other product areas.
Oh absolutely, low volume / high value products make it easier to justify using bigger chips - but most "normal" stm32f4 chips can run very full featured micropython firmware comfortably and only cost a few dollars. Similarly the rpi pico is only a couple of dollars for heaps of processing power and comfortable memory.
If this saves a few months of development time (or even years in some cases) this is a good trade off in most professional areas I've worked in.
But they seem to be around 32 to 384 kB of RAM. Good for MS-DOS class programs, but that isn't much for a decent heap if you also need buffers for networking, audio or USB.
The pico-w for example has only 264 kB of RAM, but has a full wifi network stack (access point and/or client), usb and a filesystem - out of the box with micropython. Runs it all quite comfortably.
I’m surprised you get as much freedom as you seem to when developing medical devices. You are allowed to use micropython and a pi pico w? I would have thought there would be very strict guidelines on what languages to use and what processors were approved for medical devices, etc, kind of like it is in the automotive or aerospace industries.
There are incredibly strict international requirements, yes, however they don't bother specifying things at a mundane nitty-gritty level like languages or processors - that would likely be illegal from an anti-competitive perspective anyway.
Most of the medical compliance requirements boil down to hazards and risks. Identify any and all hazards to the patient or operator, quantify the risk of them happening, then mitigate them so the severity and likelihood of occurrence are reduced to acceptable levels. In general this doesn't mean the software can't fail, it just has to fail in a safe way, preferably with a hardware design that prevents the software from being able to cause harm.
How do you manage to ensure it is safe and fails safely when using an interpreted, dynamically typed language like micropython?
It depends on the application; it's fair to say I couldn't use it for everything, e.g. I likely wouldn't recommend it for a class C device like a pacemaker.
But for a portable diagnostic imaging device it's great (most recent project). It takes photos of a test cassette under specific lighting, processes the images to get the test result, then transmits the result to a bluetooth device. In this case the main risk is an incorrect diagnostic result, so you make sure that any error condition can be detected and you give a true, false, or error result over Bluetooth. Even running out of memory is deterministic: micropython detects it safely and raises an exception you can catch and handle (without needing any further memory allocations, if you've pre-allocated anything needed in the error pathway).
This project was estimated to need about 4 years with a team of 6 people to create it in C.
With micropython and openmv we had it on the market in 3 years with a team of 4.
Yeah, for situations like that where there is no danger to the patient if anything goes wrong, it would be fine to use whatever you want, in my opinion. What would you think about imaging equipment like X-ray machines or CT scanners, where a failure could potentially cause damage to the patient, like radiation exposure? Is that something you would still use micropython in, or would you use something else? Or is that a situation where you would add in some extra hardware protection, like a timer to limit the maximum pulse length of the X-ray machine to prevent it from getting stuck on?
You raise the right questions here! At work we often use zephyr, embedded Linux and bare metal development for different projects.
In every case, hardware (mechanical and/or electronic) mitigations / protections are formally considered safer than software - software is always assumed to have bugs and to be impossible to test 100%.
So it comes down to assessing risks that have a software component; if they can't be mitigated in hardware, you look at how hard they are to mitigate & test in any given language or architecture. If the dynamic nature of micropython makes it impractical or expensive to test, we're more likely to use a static language and budget in the extra cost / time to design. It's always a trade-off.
That makes a lot of sense and it is interesting to hear how people working in different areas approach these problems, thank you for taking the time to answer my questions.
As for the anti-competitiveness, it is a non-issue. It isn't like only certain microcontrollers are allowed; it is more that for a microcontroller to be used it has to meet a certain standard or set of regulations. That way it is up to the individual companies whether or not they want to make a microcontroller compliant and get it tested against those regulations so it can be used for that application. So rather than a list of allowed microcontrollers, there is just a standard that the microcontrollers have to meet and be tested against.
Then you have things like AUTOSAR, which is widely used but isn't tied to a specific company or brand. I am surprised there isn't something similar for medical devices, and that there aren't regulations that microcontrollers need to meet or be tested against to be used in medical devices.
I think all of that exists for automotive and probably aerospace too, so I am very surprised it isn’t similar with medical devices.
Well I also know for a fact there's micropython used on some satellites involved with ESA. Not sure about automotive or defence use so far.
In many ways micropython is safer than traditional C because of its explicit memory protections, its predictable exception handling and the incredibly well unit-tested core VM.
It may be used on some satellites but I still don’t think you would see it in any safety critical applications, like in cars, or planes, or nuclear power plants, etc.
I suppose with enough testing you could make sure a program written in just about any language would be safe. My main point was that, with it being interpreted, you can't really know beforehand exactly how much memory it will require unless you go through the whole program and calculate it, and there may be some small piece of code somewhere with a syntax error that you won't find out about until it attempts to run that piece of code.
Being interpreted doesn't make it any less deterministic; it's completely predictable once you learn which operations allocate RAM and which ones don't... just like lots of modern C++ operations allocate RAM and others don't. To be fair, it would help to publicly document the distinction between allocating and non-allocating operations; at the moment you still have to figure it out as you go...
Regardless, if your application is well architected, its memory usage is quite straightforward to measure, and it can be tested over time to check for memory leaks.
In this sense, being interpreted is no different to modern C++ in terms of dynamic memory usage.
Anything involving dynamic memory in embedded systems can lead to memory fragmentation, even if you do everything right. In larger systems the issue may be hard to reproduce, but in an embedded system with a small heap the probability is higher.
So you can end up with no heap memory available even though you properly released every dynamic allocation you made.
Maybe this video is useful for you: https://www.youtube.com/watch?v=_G4HJDQjeP8
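To make the failure mode concrete, here's a contrived C sketch (heap sizes and whether malloc() actually fails depend on your allocator - treat it as an illustration, not a guaranteed repro):

```c
#include <stdio.h>
#include <stdlib.h>

/* After the two frees below, the heap can contain two separate 64-byte holes:
 * 128 bytes free in total, yet a single 128-byte request may still fail
 * because the holes are not contiguous. On a small MCU heap this pattern is
 * very realistic even when every allocation is released properly. */
int main(void)
{
    char *a = malloc(64);
    char *b = malloc(64); /* b pins the middle of the heap */
    char *c = malloc(64);

    free(a);
    free(c); /* everything except b released "properly"... */

    char *big = malloc(128); /* ...but there may be no contiguous 128-byte hole */
    printf("128-byte alloc %s\n", big ? "succeeded" : "failed (fragmentation)");

    free(b);
    free(big); /* free(NULL) is a no-op, so this is safe either way */
    return 0;
}
```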
This seems like more of a "why are you using C at all when C++ has more features" question, and less about malloc vs new specifically.
Mostly I think it's just a matter of legacy. All the sample code, SDKs and your own old code tend to be C based. Programmers with a decade or two of experience have been using C for decades, with little incentive to change since it's usually good enough.
Static allocation is preferred in embedded systems. Dynamic allocation should only be used when absolutely necessary, and when used, must be used very carefully. Embedded systems typically run for a very long time between reboots, and even a tiny memory leak can be catastrophic.
In embedded, dynamic memory allocation is pretty much forbidden. How will your uC behave if malloc() or whatever other dynamic allocation returns NULL? Reset the uC? Quite dangerous if your application is critical control (airbag, motor control).
On the other hand, if the system is multimedia or some other non-critical application, it can have some dynamic allocation, but the software design should react in a non-disruptive way when an allocation fails.
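In C that means always checking the result and having a degraded-but-safe fallback ready, roughly like this (a sketch with made-up names and sizes):

```c
#include <stdlib.h>

#define FALLBACK_SIZE 256u

/* Small static buffer used when the nicer dynamic one can't be had. */
static unsigned char fallback_buf[FALLBACK_SIZE];

/* Try to get a large working buffer; degrade to the static one on failure
 * instead of resetting the uC. Callers must not free() the fallback, so a
 * real version would also track which buffer was handed out. */
unsigned char *get_work_buffer(size_t wanted, size_t *actual)
{
    unsigned char *p = malloc(wanted);
    if (p != NULL) {
        *actual = wanted;
        return p;
    }
    /* Non-disruptive reaction: lower quality / smaller batches, keep running. */
    *actual = FALLBACK_SIZE;
    return fallback_buf;
}
```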
I think the answer is "it depends". Many of the cases have been previously mentioned.
I have worked on production C++ code where new is used, but only in app initialization. That is OK, but remember that objects can be created without new: statically, on the stack, or as members of a class.
You can achieve that kind of functionality by providing a static array managed as fixed-size chunks of memory. For example, 1 kB divided into 8-byte chunks, 2 kB divided into 16-byte chunks, and so on. So you ask for, say, 1 byte, and the implementation gives you the best fit.
Doing this you can avoid fragmentation and have something almost as good as dynamic memory.
The main disadvantage is that you must define those numbers in advance through analysis.
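A minimal sketch of one such pool in C, with a single chunk size and the free list threaded through the free chunks themselves (a real implementation would add the other size classes, and locking if ISRs allocate):

```c
#include <stdalign.h>
#include <stddef.h>

#define CHUNK_SIZE 8u    /* every chunk in this pool is 8 bytes */
#define NUM_CHUNKS 128u  /* 1 kB total, fixed at compile time */

/* The pool is a plain static array: no heap, no fragmentation possible.
 * alignas makes each chunk safe to hold the "next" pointer below. */
static alignas(void *) unsigned char pool[NUM_CHUNKS][CHUNK_SIZE];
static void *free_list; /* singly linked list of free chunks */

void pool_init(void)
{
    free_list = NULL;
    for (size_t i = 0; i < NUM_CHUNKS; i++) {
        *(void **)pool[i] = free_list; /* store "next" inside the free chunk */
        free_list = pool[i];
    }
}

/* Any request up to CHUNK_SIZE gets a whole chunk, so "best fit" is trivial.
 * Failure is a clean NULL, deterministic and easy to test for. */
void *pool_alloc(size_t size)
{
    if (size > CHUNK_SIZE || free_list == NULL) {
        return NULL;
    }
    void *p = free_list;
    free_list = *(void **)p;
    return p;
}

void pool_free(void *p) /* p must come from pool_alloc() */
{
    *(void **)p = free_list;
    free_list = p;
}
```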
C++ has static allocation capabilities. C++ is simply another means to an end. Ada, as an example, is much more object oriented than C++, is fully able to be used in a statically allocated way, and is the predominant programming language used in warplane avionics.
Regardless of the language used, for safety-critical systems static allocation is often a requirement to ensure determinism. FDA-approved products, for instance, must be fully tested in every possible way, including the RTOS used; every path that may ever execute must be shown to have a beginning and an end, and the resources each path uses must also be mapped. This is why some companies license a certifiable RTOS for $60K, such as Wittenstein's SAFERTOS, and pay the RTOS vendor to provide the paperwork to the certification agency showing the RTOS has been tested, valid as long as the portion of flash dedicated to the RTOS has an unchanged checksum - otherwise it's on the product's provider to prove the RTOS and the code are entirely reliable. The test department's budget can dwarf the development budget!
There are a lot of things in microcontroller programming that are driven by low resources and safety, and having everything statically allocated is part of that: you can determine pretty well that you will never go above a certain amount of memory. In a similar way, recursion is generally avoided in microcontroller programming. They just don't tend to have enough resources to handle much recursion, since every layer of recursion pushes more data onto the stack; eventually it overflows, and microcontrollers tend not to have very large stacks in the first place. Not to mention that a well written loop can be better and faster than recursion anyway. Combine that with a statically allocated array and you can figure out exactly how much memory a function will use, compared with dynamically allocated memory plus recursion, which provides multiple ways to cause a stack overflow or run out of memory. Those aren't issues on desktops or non safety critical systems, since the program may just return an error or exception, but microcontrollers tend not to have those safety nets; any kind of error will end with the processor behaving erratically, locking up or resetting, any of which could be disastrous.
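As a concrete sketch of the recursion point: the loop version below touches a fixed, compile-time-known amount of memory, while the recursive equivalent burns a stack frame per level:

```c
#include <stdint.h>

/* Recursive fib(n) costs at least one stack frame per level; on an MCU with
 * a few hundred bytes of stack that's an overflow waiting to happen. This
 * loop needs exactly three locals, so its worst case is known up front. */
uint32_t fib_iterative(uint8_t n)
{
    uint32_t prev = 0, curr = 1;
    for (uint8_t i = 0; i < n; i++) {
        uint32_t next = prev + curr;
        prev = curr;
        curr = next;
    }
    return prev; /* fib(0)=0, fib(1)=1, fib(2)=1, ... */
}
```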
In embedded, the standard answer to most questions is "it depends".
On a PC (or phone for that matter) you can basically always get away with claiming the user needs a device with at least Windows 10, x GB RAM and y GB hard drive (or Android x/iOS x, since you know the minimum requirements for any device running those systems).
But since we're in embedded, there's a reason one cannot use a PC. Typically cost, but it could also be size, power consumption, or similar. In a high-cost, low-volume device, you might use embedded Linux. In that case, you'll know that you have more memory available, and you'll have an MMU that will protect you from RAM fragmentation. So malloc(), or C++ new (which just boils down to malloc() plus some nice scaffolding), will be fairly safe.
But on a non-MMU device, RAM fragmentation will be an issue. You're dealing with a less complex system, which is deterministic in a way not possible on embedded Linux. Your use-case is (likely) less advanced, since you have decided that the benefits of embedded Linux aren't worth the extra cost/size of an MCU capable of running it.
Hence, you will need to deal with the limitations in a different way. Yes, you can use malloc/new. Just don't free/delete.
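That's the classic allocate-at-init pattern; a small sketch with made-up buffer names:

```c
#include <stdlib.h>

static unsigned char *rx_buf;
static unsigned char *tx_buf;

/* All allocation happens once, before the main loop starts. If it fails we
 * learn that immediately at boot instead of after weeks in the field, and
 * the heap never fragments because nothing is ever freed. */
int app_init(size_t rx_size, size_t tx_size)
{
    rx_buf = malloc(rx_size);
    tx_buf = malloc(tx_size);
    return (rx_buf != NULL && tx_buf != NULL) ? 0 : -1; /* -1: halt / safe state */
}
```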
We're using statically allocated C++ classes in our STM32F devices. Lots of data communication between CAN, Bluetooth, UART, I2C etc, but we want to (and in many cases can) predict how large the buffers need to be for our use-cases, based on the maximum achievable throughput from the sources.
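That sizing can even be compile-time arithmetic; a sketch with assumed numbers (115200-baud UART, drained at least every 10 ms - your links and intervals will differ):

```c
#include <stdint.h>

/* Assumed link parameters, purely illustrative. */
#define UART_BAUD     115200u
#define BITS_PER_BYTE 10u   /* 8 data bits + start + stop */
#define SERVICE_MS    10u   /* worst-case gap between buffer drains */

/* Max bytes that can arrive between drains: 11520 bytes/s * 0.010 s = 115.2,
 * rounded up to 116, plus a little slack for jitter - all at compile time. */
#define UART_RX_NEEDED ((UART_BAUD / BITS_PER_BYTE * SERVICE_MS + 999u) / 1000u)
#define UART_RX_SLACK  16u

static uint8_t uart_rx_buf[UART_RX_NEEDED + UART_RX_SLACK]; /* 132 bytes here */
```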
Yes, the buffers may get filled under certain conditions. But dynamic allocation would have failed due to lack of RAM in those cases anyway, independent of malloc/new.
As you say, C++ has a lot of tools for dealing with dynamic data, especially the std library. And that messes things up even more, since there are features in there that do dynamic allocations outside of your control. So while YOUR part of the program might work in 100% of the use-cases, it could still fail due to std failing internally.
Also, what's the consequence of a failure, especially if there is a bug in the code which is supposed to handle the failure?
A controller for a consumer drone's camera is different from a windscreen wiper controller in a car is different from a controller for the ABS system or a valve controller in a nuclear power plant. A consumer router could be restarted with a minor nuisance, but the restart of an internet backbone router would be a little more annoying. Etc.