Hello!
I'm not familiar with the AVR environment, so I don't know what _delay_ms does.
I mean, I know it's a delay, but how does it delay? Does it simply eat CPU cycles?
I write all my programs multithreaded, and the latest one I have in the pipeline
has 20+ possible threads.
The embedded world is really a world of cycle saving. So if your delay function simply
eats cycles, rewrite it. Here is how to do it, with a simple example. Suppose you have a 1 ms
"tick" that calls a periodic process. If a 2 ms cycle-eater sits in one of those time slots, you will lose
the next tick, and possibly a third one. What you should do instead is as follows. We will suppose that
you have a timeout variable:
Code C:
/* timeout counts 1 ms ticks; decremented once per tick */
static volatile unsigned char timeout;

/* called once per 1 ms tick */
void tick_1ms(void)
{
    last_function_before_timeout();
    if (timeout > 0) {
        timeout--;      /* still waiting: give the slot back at once */
        return;
    }
    timeout = 2;        /* will delay the next function by 2 ms */
    first_function_after_timeout();
}
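To show where that snippet would live: on an AVR, the 1 ms tick is typically a timer
compare-match interrupt. Here is a minimal sketch, assuming an ATmega-class part whose
Timer0 has already been configured elsewhere for a 1 ms period:

Code C:
#include <avr/io.h>
#include <avr/interrupt.h>

void tick_1ms(void);                 /* the tick handler shown above */

/* Timer0 compare match A, assumed to fire every 1 ms */
ISR(TIMER0_COMPA_vect)
{
    tick_1ms();
}

int main(void)
{
    sei();                           /* enable global interrupts */
    for (;;)
        ;                            /* all the work happens in the tick */
}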
As you can see, when a timeout is needed, you simply set a variable, which takes only a few
clock cycles instead of the 40 000 cycles a 2 ms busy-wait burns when your processor runs at 20 MHz.
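For contrast, a cycle-eating delay boils down to something like the sketch below. This is
not avr-libc's actual _delay_ms (that one is a compile-time calibrated loop in
<util/delay.h>), but the effect is the same: the CPU spins and nothing else gets done.

Code C:
#ifndef F_CPU
#define F_CPU 20000000UL    /* assumed 20 MHz clock, matching the example above */
#endif

/* illustrative only: burn roughly ms milliseconds doing nothing */
void delay_ms_busy(unsigned int ms)
{
    while (ms--) {
        /* very rough calibration: assume ~4 cycles per inner iteration */
        volatile unsigned long n = F_CPU / 4000UL;
        while (n--)
            ;
    }
}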
FvM mentioned context switching. Indeed, saving one context and restoring the next takes a lot
of clock cycles, so the best approach is simply to avoid context saving altogether.
If you have a thread that needs to run in real time (e.g. audio sampling), then promote it to the
master thread. Everything else runs around it, and you should ensure that any function you write
fits into a single time slot, or at least can be split.
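A minimal sketch of that structure, with hypothetical task and helper names
(wait_for_sample_tick() and process_audio_sample() stand in for your real-time work):
the sampling loop is the master, and the other threads are plain run-to-completion
functions it calls one per time slot, so no context ever needs to be saved.

Code C:
void wait_for_sample_tick(void);   /* assumed: blocks until the next sample period */
void process_audio_sample(void);   /* assumed: the hard real-time work */

/* hypothetical background tasks: each must fit in one time slot */
static void task_uart(void)    { /* poll serial I/O */ }
static void task_keys(void)    { /* scan the keypad */ }
static void task_display(void) { /* refresh the display */ }

static void (* const tasks[])(void) = { task_uart, task_keys, task_display };

void audio_master_loop(void)
{
    unsigned char slot = 0;

    for (;;) {
        wait_for_sample_tick();
        process_audio_sample();    /* the real-time work always comes first */

        tasks[slot]();             /* one background task per slot, run to completion */
        slot = (slot + 1) % (sizeof tasks / sizeof tasks[0]);
    }
}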
If you follow these two rules (never eat cycles idling, make every function fit into a single time slot):
- Your code will be simpler and cleaner, with fewer "what if"s.
- Your real-time characteristics will be guaranteed whatever happens.
- As your data thread is the master, you will never lose a single sample.
Dora.