The OP doesn't mention what part he is using, so the answer to his question really is "It depends."
For example, on some PIC-based parts with fixed-depth hardware call stacks, you may not be able to call a function at all. If you are already 6 calls deep when an interrupt occurs, you won't be able to call a function from the ISR, since there will not be room on the call stack. Some tools, such as the HI-TECH compiler for the PIC, will tell you the worst-case call depth and give you the information you need. I've not seen many OSes run on these PICs, so I suspect this isn't what the OP is using.
However, if you are using a part with a software stack (such as a Cortex-M3), calling functions is constrained only by memory, performance, latency, and re-entrancy. For memory, you need to be sure any function calls from the ISR do not overflow the stack. For performance, you need to make sure that your interrupt processing time does not exceed some defined threshold. For latency, you need to be sure that your ISR execution time doesn't starve other interrupts or tasks for time. And for re-entrancy, you need to be sure that any function call does not have unintended side effects.
Memory
Some processors have a single stack pointer. All executing code uses that single stack pointer for all operations, such as pushing function return addresses, function parameters, automatic variables, and preserved register contents. When using an OS on such a processor, individual tasks may have separate stacks while executing, and the OS is responsible for setting the processor stack pointer to the appropriate memory for the task. Frequently in such a system, the task's stack is also used for interrupt processing. So, when an interrupt is recognized, the processor pushes some set of registers and the return address onto whatever stack is currently in use, and any function calls you make from that ISR will also use that stack. So, you must take the worst-case stack usage of each individual task and add to it the worst-case overhead of your ISR.
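To make that sizing rule concrete, here is a minimal sketch of the budgeting arithmetic. The numbers are hypothetical; in practice they would come from your compiler's stack-depth report or from measurement:

Code:
```c
#include <stdint.h>

/* Hypothetical worst-case figures, e.g. from a stack-depth analysis tool. */
#define TASK_WORST_CASE_BYTES 384u  /* deepest call chain of the task itself   */
#define ISR_FRAME_BYTES        32u  /* registers + return address pushed on entry */
#define ISR_CALLS_BYTES        96u  /* deepest call chain made from the ISR    */

/* Each task stack must cover its own worst case PLUS the ISR overhead. */
#define TASK_STACK_BYTES (TASK_WORST_CASE_BYTES + ISR_FRAME_BYTES + ISR_CALLS_BYTES)

static uint8_t task_stack[TASK_STACK_BYTES];  /* 512 bytes in this example */
```

Note that every task stack in the system has to carry the ISR overhead, since the interrupt can arrive while any task is running.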
However, some processors have separate stack pointers. For example, the Cortex-M3 has a main stack pointer (MSP) and a process stack pointer (PSP). The MSP is always used for interrupts, and the PSP is managed by the OS for task switches. In this configuration, when an interrupt occurs, the processor switches from the PSP to the MSP and uses a separate stack for the ISR. Thus, you only have to ensure that your main stack is large enough to handle all functions you call from an ISR.
Some OSes emulate the behavior of a separate stack. In this case, the OS has a hook at the start of the ISR (or ISRs, if there is more than one) to switch to a separate stack. This increases the size of the ISR (and reduces its performance, as there are more instructions to execute), but eliminates the issue of a potential stack overflow in the task stacks.
Performance
Some processors do not push the entire state of the processor onto the stack when entering an ISR. The Cortex-M3 only pushes 8 registers (of 21 total) onto the stack when executing an ISR. Most compilers do not analyze what might change in a function (or a chain of functions) called from an ISR, and instead assume that any of the registers may be modified. Thus, calling a function from an ISR will usually cause the compiler to push the entire state of the processor onto the stack. This can be expensive in terms of execution time, and those cycles are wasted if you know that the leaf functions do not modify anything beyond the already-saved registers.
There is also overhead associated with the function calls themselves. It depends upon the ABI, but many compilers push some or all of the arguments to a function onto the stack along with the return address. The more function calls you have, the more this overhead begins to become a problem. If the overhead associated with the function calls becomes too large, you will spend more time updating the stack than actually getting work done.
This is one of the biggest reasons why calling functions from ISRs is discouraged.
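For short leaf functions, one common mitigation (a sketch, not a universal rule) is to let the compiler inline them so that no call frame is built at all; `inline` is only a hint, but most compilers honor it for small functions at normal optimization levels:

Code:
```c
#include <stdint.h>

static volatile uint32_t rx_count;

/* 'static inline' lets the compiler substitute the body at the call
   site, avoiding the push/branch/return overhead of a real call. */
static inline void count_byte(void)
{
    rx_count++;
}

void uart_rx_isr(void)   /* hypothetical ISR name, for illustration only */
{
    count_byte();        /* typically compiles down to a single increment */
}
```

You keep the readability of a separate function without paying for a call inside the ISR.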
Latency
This is usually one of the hardest things for people to grasp. Remember that many systems have lots of interrupts and tasks running. When an interrupt occurs, the processor stops what it is doing and turns its attention to the interrupt. In some systems, there may only be a single interrupt handler, and during that time, only the current interrupt is processed. Any other interrupts that come in are ignored until the current one is complete. (This can be alleviated with nested interrupts, but that is a more complex topic.) In some systems, there are interrupt priorities whereby a higher priority interrupt can interrupt an already executing ISR.
The issue here is similar to the performance issue. The more time you spend in your ISR, the less time other things get to run. The latency introduced by interrupts can be a problem if your application needs to be responsive. Whether or not you can call a function depends upon the maximum amount of introduced latency you can tolerate.
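A common pattern for keeping latency down is to do the bare minimum in the ISR and defer the real work to task (or main-loop) context. A minimal sketch, with made-up names; the literal `42` stands in for reading a hardware data register:

Code:
```c
#include <stdbool.h>
#include <stdint.h>

static volatile bool    data_ready;
static volatile uint8_t latest_sample;

void adc_isr(void)            /* sketch: hardware calls this on conversion done */
{
    latest_sample = 42;       /* stand-in for reading the ADC data register */
    data_ready = true;        /* hand the work off to the main loop */
}

int process_pending(void)     /* called from the main loop, not the ISR */
{
    if (data_ready) {
        data_ready = false;
        return latest_sample; /* expensive processing would go here */
    }
    return -1;                /* nothing to do */
}
```

The ISR stays a few instructions long, so other interrupts and tasks are starved for as little time as possible.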
Re-entrancy
Some of the answers in the thread aren't totally clear, so I hope to clarify some of that here. Re-entrancy is only an issue if a function is shared across multiple execution contexts. I could give a formal definition, but I think it's better to demonstrate with an example. Say we have the following (pseudo-)code:
Code:
int x;

void foo(int z)
{
    x = z;
}

void ISR()
{
    x++;
}
Now, depending upon the architecture, this code could create strange values for x depending upon when foo() and ISR() run. For example, if x is a 32-bit integer and we are on an 8-bit platform, it usually requires several instructions to update x. If the interrupt occurred in the middle of that update to x, the contents of x could be corrupted.
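On such a platform, the usual protection is to make the multi-byte access atomic by briefly disabling interrupts around it. A hedged sketch: the enable/disable macros below are placeholders (defined as no-ops here) for whatever your compiler actually provides, such as __disable_irq()/__enable_irq() on ARM:

Code:
```c
#include <stdint.h>

/* Placeholders: a real port maps these to compiler intrinsics. */
#define DISABLE_INTERRUPTS()  /* e.g. __disable_irq() */
#define ENABLE_INTERRUPTS()   /* e.g. __enable_irq()  */

static volatile uint32_t x;

void foo(uint32_t z)
{
    DISABLE_INTERRUPTS();   /* the ISR cannot fire mid-update now */
    x = z;
    ENABLE_INTERRUPTS();
}

uint32_t read_x(void)       /* tasks use the same discipline when reading */
{
    uint32_t copy;

    DISABLE_INTERRUPTS();
    copy = x;
    ENABLE_INTERRUPTS();
    return copy;
}
```

Keep the protected region as short as possible, since disabling interrupts adds to the latency discussed above.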
The typical example I've seen is similar to the above. There may be a timer interrupt that updates some global counter, and tasks use that counter to determine how much time has passed. For example:
Code:
unsigned int global_timer;

void timer_isr()
{
    global_timer++;
}

void wait_for_timeout(unsigned int timeout)
{
    unsigned int start = global_timer;

    while ((global_timer - start) < timeout)
    {
    }
}
Now, what's wrong with this code? If somebody calls wait_for_timeout(), global_timer may change out from under it during the subtraction ("global_timer - start"), possibly corrupting the result. I'm not going to discuss the fix for this here (it is pretty straightforward), but the point is that wait_for_timeout() is not re-entrant. That is, once wait_for_timeout() has started, execution will temporarily leave it to handle the interrupt and then re-enter it with the shared state changed underneath it.
Conclusion
The answer to your question is "It depends." Whether you can call functions from an ISR depends upon a variety of factors, each of which depends upon your particular architecture (hardware and software) and application. There is no absolute ban on calling functions, but it is recommended to keep your ISRs as small, simple, and fast as possible. You will have to determine whether calling a function does more good than harm, and only you can make that determination.