[SOLVED] How long is delay using for loop?

Status
Not open for further replies.
The thing is that sometimes I would like to calculate the time taken by a particular instruction in the C compiler. In my case, for example, I'm using a DAC over I2C to produce sine waves, and I would like to know the approximate time taken to send a byte by the communication protocol library functions. Is there any way I could find this without using a simulator?
 

I understand your situation. I don't know much about C18, so I wouldn't be able to comment on this. Maybe someone else can help you find a document that covers the timing. You should be able to use the simulator to find out how much time has been taken.

Hope this helps.
Tahmid.
 

In my previous posts in this thread I tried to advise against using simple no-op delay loops and their equivalent library functions. I gave a few reasons, including timing inconsistencies, but omitted the main one: the incompatibility of these methods with any interrupt handling. The timings will erratically come out longer than they should because of delays caused by ALL interrupt processing. Another consequence I didn't mention is that your system will be unable to do lots of other unpredictable but important common tasks (such as switch debouncing) if it's tied up in unproductive delay loops.

natraj20's recent request for assistance in calculating the time taken for a sequence of instructions to execute brings me back to my previous recommendation to use simple external hardware, but with a difference. The best way of timing a repeating sequence is also with external hardware: use an oscilloscope. (Folks on here should be familiar with using hardware and software together.) If you pulse an available pin on an I/O port (or multiplexed output latch) once per instruction loop, you can observe and measure the time. You can even add or subtract instructions and record the differences.

My proposal for the original question still stands. I recommend using an external timing signal (you can use a crystal oscillator for precision, but for most purposes the zero-volt transitions of the transformed mains AC are adequate). This can issue an interrupt every 20 ms to increment a simple counter, clock and even calendar, with very little hardware or software.
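For what it's worth, here is a minimal sketch of that idea written in Keil C51 style, the same dialect as the timer example later in this thread. It assumes the conditioned mains signal is wired to the INT0 pin of an 8051 and produces one falling edge per 20 ms; the pin choice, the 20 ms period and the tick-to-seconds bookkeeping are my illustrative assumptions, not part of the original posts.

    #include <reg52.h>

    /* Ticks accumulated by the mains-derived interrupt (assumed one edge per 20 ms). */
    static volatile unsigned char tick_20ms = 0;
    static volatile unsigned long seconds   = 0;

    /* External interrupt 0 service routine: one call per zero-crossing pulse. */
    void mains_tick_ISR(void) interrupt 0
    {
        tick_20ms++;
        if (tick_20ms >= 50)      /* 50 x 20 ms = 1 second */
        {
            tick_20ms = 0;
            seconds++;            /* extend to minutes/hours/calendar as needed */
        }
    }

    void main(void)
    {
        IT0 = 1;                  /* INT0 triggers on falling edge */
        EX0 = 1;                  /* enable external interrupt 0 */
        EA  = 1;                  /* global interrupt enable */

        while (1)
        {
            /* foreground work runs here; timekeeping happens in the ISR */
        }
    }

The foreground loop can then read the seconds count (briefly disabling interrupts if it needs a consistent multi-byte read) instead of burning cycles in no-op delays.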

This view comes from painful experience:
I have used this method successfully in many production designs.
I have also had a very embarrassing fault in a product when I did use software-only timing. That was for a global company with a very high public profile, and the system was in a well-known public place. It would very occasionally accumulate tiny timing errors, which led to sudden and unexpected disturbing consequences for all visitors! I learned the error of using hard-coded no-op delay loops the hard way.
 
Thanks Tahmid and DX,

@DX - I agree with you about using a scope. Again, the situation for me is that I've already been using a scope to check the frequency of the sine wave at the I2C DAC output. I can only check the frequency over a period of time. But if I want to optimise the code by reducing the delay between outputting each analog value (which means increasing the frequency of the sine wave), it is quite difficult to calculate the timing for each instruction, because for each analog value there are back-and-forth I2C messages. I would like to know the approximate timing of, say, a WriteI2C() call so as to produce the desired frequency with the delay.
 

Hmmm, I think I can see what you're doing. It's not quite what I imagined. A sine wave from a DAC is a bit too vague for me!
I was thinking of something simpler (a rough sketch follows below):
First, write a loop which initialises a binary value,
and then in the loop: output a high bit, a few no-ops, output a low bit, and loop indefinitely.
When it runs, just measure the time between the pulses.
Second, change the middle of the loop by adding the instruction(s) you are interested in: WriteI2C().
When it runs, measure the time between the pulses.
The difference between the two experimental runs will be the processing time of the instruction(s) you added.
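As a sketch of those two experiments, something like the following. It is hedged: the pin name TEST_PIN, the extern declaration of WriteI2C() and the byte value 0x80 are placeholders for illustration, and it is written in Keil C51 style; on your PIC with the C18 libraries you would use that toolchain's pin definitions and I2C routines instead.

    #include <reg52.h>
    #include <intrins.h>                      /* for _nop_() */

    sbit TEST_PIN = P1^0;                     /* placeholder: any spare output pin */

    extern void WriteI2C(unsigned char dat);  /* stand-in for the routine being timed */

    void main(void)
    {
        while (1)
        {
            /* Run 1: empty reference loop - measure the pulse period on a scope. */
            TEST_PIN = 1;
            _nop_();
            _nop_();
            TEST_PIN = 0;

            /* Run 2: uncomment the call under test and measure again; the        */
            /* increase in the pulse period is the time WriteI2C() takes.         */
            /* WriteI2C(0x80); */
        }
    }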

Have I correctly understood what you want to measure?
 
Thanks guys. Nice to know there have been posts, I didn't visit the forum for some days.

natraj20, what you spoke about in your post is exactly what I was looking for.

Tahmid, thanks for your response. You used the value of X as 135. Since the accuracy of the delay is questionable, or uncertain, I just wanted to say that I don't need an accurate delay. But let me add that even if its value is not accurately known, it should produce the same delay for all the delay calls. Will it do that, or will the same delay call, with the same argument, take a different amount of time at different points in the same program when it executes?

Regards,
Jay

---------- Post added at 19:21 ---------- Previous post was at 19:18 ----------

DXNewcastle

I understand it would be more accurate for me to use a timer. I am using a 6 MHz crystal, so how would I include an instruction that calls a timer delay in the C program?

Regards,
Jay
 

Jay_
You've said you're using a Keil compiler on an 8051-type processor?
That compiler's example provides two functions, one to initialise the timer and one to service it. It triggers an overflow interrupt when the 16-bit count rolls over past 65535.
Here's the link to some sample code : Keil - File Download Area - 8051 Timer 0 Mode 1 Example Program

I have not used this myself (not that I can recall) so cannot answer questions about it.
 
I didn't completely understand the main.c program. Can anyone explain the sequence of instructions and how it executes?

#include <reg52.h>
#include <stdio.h>

static unsigned long overflow_count = 0;    /* why is this declared static? */

void timer0_ISR (void) interrupt 1          /* what is the 'interrupt 1' label after the function? */
{
    overflow_count++;
}

void main (void)
{
    TMOD = (TMOD & 0xF0) | 0x01;    /* Set T/C0 Mode */
    ET0 = 1;                        /* Enable Timer 0 Interrupts */
    TR0 = 1;                        /* Start Timer 0 Running */
    EA = 1;                         /* Global Interrupt Enable */

    while (1)
    {
    }
}

I didn't even understand the significance of "Global Interrupt Enable" or the ET0, TR0 and TMOD statements. I have studied the C language (not for microcontroller programming, though), and the instructions here include some I haven't come across.

Thanks Newcastle, but if you could help me understand this it would be great.
 

Jay
Your questions don't seem to be about programming in C; they are about how the microcontroller's timers and interrupts work. I suggest you read up on how to use the timers and interrupts in the hardware datasheet. You'll find the explanation of enabling, initialising and disabling timers there, as well as how to use interrupts. You can treat the C statements for these operations as if they were assembly code for the basic operations of the microcontroller. For example, TR0 simply sets a one-bit on/off flag in the controller that starts or stops the timer; it's not a C language feature.

You need to be completely familiar with the concept of interrupt handling. In this case there are two threads of your program which are asynchronous: one starts the timer, and when the timer completes its delay it issues an interrupt which transfers processing to the second thread of your program.
I don't know what experience and knowledge you have, so it's hard to pitch the answer more appropriately.
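To tie this back to Jay's questions about the sample code, here is the same skeleton with added comments and a rough elapsed-time calculation. It is only a sketch: the 6 MHz crystal figure comes from Jay's earlier post, and the 12 clocks per machine cycle is an assumption for a classic 8051 (derivative parts differ), so the 500 kHz timer tick and the 131.072 ms per overflow follow from those assumptions rather than from the original example.

    #include <reg52.h>

    /* 'static' keeps the counter private to this file; 'volatile' is added here  */
    /* because the value changes inside an interrupt and is read in main().       */
    static volatile unsigned long overflow_count = 0;

    /* 'interrupt 1' tells the Keil compiler this function is the service routine */
    /* for interrupt vector 1, which on the 8051 is the Timer 0 overflow.         */
    void timer0_ISR (void) interrupt 1
    {
        overflow_count++;             /* one increment per 65536 timer ticks */
    }

    void main (void)
    {
        TMOD = (TMOD & 0xF0) | 0x01;  /* Timer 0 in Mode 1: free-running 16-bit timer */
        ET0  = 1;                     /* allow Timer 0 to raise its interrupt         */
        TR0  = 1;                     /* start Timer 0 counting                       */
        EA   = 1;                     /* master switch that enables all interrupts    */

        while (1)
        {
            /* With a 6 MHz crystal and 12 clocks per machine cycle (assumed),    */
            /* the timer ticks at 500 kHz, so each tick is 2 us and each overflow */
            /* (65536 ticks) is 131.072 ms. Elapsed time in milliseconds is       */
            /* therefore roughly overflow_count * 131.072.                        */
        }
    }

If you need finer resolution you can also read TH0/TL0 between overflows, but for comparing two versions of a loop the overflow count alone is usually enough.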
 
Thanks DXNewcastle. I do need to learn about interrupt handling in the microcontroller. I will try and find some tutorials on it.

Regards to all,
Jay.
 
