
Timing in digital signals.

Xenon02
Hello!

I wanted to ask for some teaching help. I want to learn more about digital circuits themselves, so I've been playing around with STM32 Nucleo boards.
The thing I was pretty much lost about was signal timings and timing itself. I'll try to describe it as best as I can; if there are any problems, please let me know and I'll try to clarify everything I want to ask. I also want to add that there won't be any code; this is basically theoretical stuff I want to understand, or at least I'd like to know where I can read about it.
First of all is timing in digital signals. I've stumbled over this timing thing in some places and sometimes didn't understand how things affect each other. One of these occurrences happened while trying to find out why my ESP8266 didn't work as an I2C slave (which has since been resolved): one user said that the I2C standard speed is about 100 kHz and the ESP chip has a clock of 80 MHz, and he somehow calculated that the chip has something like 5 us of time??? This was the first thing that confused me in many ways, in that I didn't know what one frequency had to do with the other, but okay. I found out that timing can be pretty helpful in some situations, but I didn't know how to approach the topic. I can imagine some of it, but I couldn't find the conflicts in it, or this "gap" of free time.

Second thing is maybe a bit weird to ask, but okay.
I also have a hard time imagining how fast some protocols/functions/communication lines etc. are, whether it is ms, us or ns. Some say something is fast or not, but it is less than a second either way, and I sometimes find it hard to say whether it is fast or not...
Also, I just found out while practicing programming that the amount of code doesn't say whether the whole code runs slower or faster; even short code can run slower than longer code.
But basically I had a problem understanding communication lines like UART or I2C, and not in concept. I know that UART uses RX and TX lines and sends data using a start bit, data, a parity bit etc., and I know how I2C works as well. The problem lies somewhere else. To be more specific and to give an example: I have 2 devices; one is an STM32 and the other is, let's say, a Bluetooth device. I have to initialize the UART pins on both devices. But here is the thing: what if the STM32 initializes its ports faster than the Bluetooth device and starts sending data? In this situation the Bluetooth device shouldn't even receive or store any data sent by the STM32, because its pins haven't been initialized yet (I know, weird example). Why did I think about it like that? Because I don't know how each function is processed or how fast it is processed! Whether a function starts and finishes in us or ns is unknown to me, so I don't know which device will be faster (I know it always works, so why bother).
Or another example which still occupies my mind. Let's say we have an STM32 which first transmits data (HAL_UART_Transmit) and after that function it calls HAL_UART_Receive. The device on the other end is already sending data right after the HAL_UART_Transmit call, while HAL_UART_Receive is still being processed, so data arrives while the function is still executing. The data comes at a 9600 baud rate, so will the function finish its work before the full 8 bits arrive? How fast does the function finish its work? And there are faster speeds/bigger baud rates, or faster communication lines like I2C or SPI or CAN (I haven't learned how CAN works yet), USB, PCI-Express.

Those are very random questions and maybe no one thinks about this, but I found it confusing and couldn't find answers, so I wanted to ask here in case someone knows. I am not an expert in this stuff; I know some basics and was having fun with the STM32, but these questions came to mind while setting up I2C on both devices, when I didn't know which one would initialize faster. Usually I had a device which didn't require me to set its pins to work as I2C or UART, so I wondered how those people know it will initialize fast enough, or something...

Thanks for reading this long post, have a nice night!
 
I *think* you are talking about how to get two separate devices to communicate, whether it is via UART or I2C (or SPI or any other specification).

As you say, what if one device starts before the other is ready? This also extends to what happens if there is noise or data corruption, or one device drops out, etc.
All of this comes down to the definition of a higher-level protocol that you need to define - or use one that is defined by an industry standard.

For example, to get around the 'which starts first' issue, you could try having each device send a known character or sequence to the other, with the expectation that something will be received back. This is the equivalent of a 'ping' over a network. If nothing comes back, then 'ping' again. If the 'other' device starts up halfway through a 'ping' then it will receive rubbish (which it can ignore) and possibly framing errors (for UARTs) - all of which can be handled, but the overall approach is still to ignore what is received. Doing so will cause the first-started device to send the 'ping' again which, this time, you should receive properly and so you can respond.
When this initial 'handshake' is complete, both devices know that the other is there and can start the 'real' communication.
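As a minimal sketch of that 'ping' idea in C (the uart_write_byte() and uart_read_byte_timeout() helpers are hypothetical placeholders for whatever your driver provides, and 0x55/0xAA are just example byte values that the two devices would agree on):

#include <stdbool.h>
#include <stdint.h>

#define PING 0x55  /* agreed 'are you there?' byte */
#define PONG 0xAA  /* agreed reply byte */

extern void uart_write_byte(uint8_t b);
/* returns true and fills *b if a byte arrived within timeout_ms */
extern bool uart_read_byte_timeout(uint8_t *b, uint32_t timeout_ms);

bool wait_for_peer(void)
{
    uint8_t rx;
    for (;;) {
        uart_write_byte(PING);
        /* anything other than PONG (rubbish, framing noise) is
           ignored and we simply ping again */
        if (uart_read_byte_timeout(&rx, 100) && rx == PONG)
            return true;   /* peer answered: the link is up */
    }
}

The other device runs the mirror image: wait for a PING, reply with a PONG.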

That is when you can start using another higher-level protocol. For example, to get around noise problems, you can encapsulate the data within a packet that includes a CRC check, or other error checking and correcting systems. If a packet is received all correct then you can send an 'acknowledge' packet back, and if an error is detected that cannot be corrected, you can request a resend of the packet.
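To make the CRC idea concrete, here is one possible packet layout in C (illustrative, not any particular standard): an STX marker, a length, the payload, a CRC-8 over the payload, and an ETX marker. The receiver recomputes the CRC and replies with an ACK or NAK byte:

#include <stddef.h>
#include <stdint.h>

extern void uart_write_byte(uint8_t b);   /* hypothetical driver call */

/* plain CRC-8, polynomial 0x07, MSB first */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0;
    while (len--) {
        crc ^= *data++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

void send_packet(const uint8_t *payload, uint8_t len)
{
    uart_write_byte(0x02);                 /* STX: start of packet */
    uart_write_byte(len);
    for (uint8_t i = 0; i < len; i++)
        uart_write_byte(payload[i]);
    uart_write_byte(crc8(payload, len));   /* receiver checks this */
    uart_write_byte(0x03);                 /* ETX: end of packet */
}

On a CRC mismatch the receiver sends a NAK (e.g. 0x15) and the sender transmits the packet again.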

You also seem to be asking about the transfer rates between devices. There are several factors that come in here.

Firstly, as you have mentioned, there is the time each bit takes to send. For UARTs this is commonly referred to as the baud rate, and you have already mentioned two of the common I2C rates. Then there is the number of bits needed to send a value. For UARTs this is typically 8 bits for the character (but from 5 to 9 is possible), plus whether you have a parity bit, and then the start and stop bits. All of these need to be agreed between the sending and receiving devices as part of your communication design.
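The arithmetic for the time on the wire is simple enough to put in a helper. A sketch, using the generic start/data/parity/stop frame layout just described:

/* time on the wire for one UART frame, in microseconds */
double frame_time_us(unsigned baud, unsigned data_bits,
                     unsigned parity_bits, unsigned stop_bits)
{
    unsigned bits = 1u + data_bits + parity_bits + stop_bits; /* +1 start */
    return 1e6 * (double)bits / (double)baud;
}

/* 8N1 at 9600 baud: 10 bits -> ~1042 us per byte
   8N1 at 115200 baud: 10 bits -> ~87 us per byte */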

There may also be limits on the bandwidth of the line between the two devices. If they are on the same PCB (for example) then the rate can be very high. If they are many metres apart and connected by a wire or RF link etc., then you may need to slow things down to be reliable.

Next you have whether you send single characters or streams. If you want to send a line of text (say) then you need something to tell when the line completes. You *could* always send a fixed number of characters, but normally text comes in varying line lengths, which is where the '\r' and '\n' characters come in to mark the end of the line. (Again, which of these, or a combination, is used is something that you need to determine as part of your design.) There are also other techniques you can use, such as known 'start of text' and 'end of text' characters. These also let you re-synchronise if a message is corrupted part way through, so you can ignore that but still know when the next message starts.

You can also get more fancy and start putting the data into a packet, which is what the various network protocols (e.g. the OSI layers) do. This makes the message longer (and therefore it will take longer to send) but can increase the reliability.

Once you have that sorted out, then you will know how long it will take for each packet to be sent. That brings up the next point - how fast does this need to be? Certainly you must have the first packet sent (and acknowledged if required) before the next packet needs to be sent. For time-critical situations (e.g. monitoring a real-time sensor where you cannot miss a sample) this is important. If you can't complete one message before the next needs to be sent, then you need to make design decisions about the things mentioned before, or make the message smaller (compression???) etc.

So you can see that there are many decisions that you, as the application designer, need to make.

Susan
 
For example, to get around the 'which starts first' issue, you could try having each device send a known character or sequence to the other, with the expectation that something will be received back. This is the equivalent of a 'ping' over a network. If nothing comes back, then 'ping' again. If the 'other' device starts up halfway through a 'ping' then it will receive rubbish (which it can ignore) and possibly framing errors (for UARTs) - all of which can be handled, but the overall approach is still to ignore what is received. Doing so will cause the first-started device to send the 'ping' again which, this time, you should receive properly and so you can respond.
When this initial 'handshake' is complete, both devices know that the other is there and can start the 'real' communication.

I hadn't thought about it this way. I haven't noticed it in any device I've used so far, like the DFPlayer Mini or a Bluetooth module; I just sent a UART transmission and it worked. Still, it is quite an interesting idea. The ping idea is pretty interesting; maybe a simple for loop might work, or there is a better solution, like a while loop with a timeout, dunno.
Still really cool to know, thanks!

Because usually the other device is faster, hmmm, but I was usually uncertain after discovering that the number of code lines doesn't say how fast the whole code runs. Well, maybe I was a bit right and a bit wrong to worry about it. But I'm still always thinking: "hmm, it may not work". I thought of using an additional pin, set high or low, to signal whether the device is ready to receive data, but then will the function that changes the pin be fast enough before the STM32 checks the pin state? So many gambling spots over which one is faster in ms/us/ns. Hard to imagine... Or maybe my brain just works in minutes and that's why, haha :D



You also seem to be asking about the transfer rates between devices. There are several factors that come in here.

Firstly, as you have mentioned, there is the time each bit takes to send. For UARTs this is commonly referred to as the baud rate, and you have already mentioned two of the common I2C rates. Then there is the number of bits needed to send a value. For UARTs this is typically 8 bits for the character (but from 5 to 9 is possible), plus whether you have a parity bit, and then the start and stop bits. All of these need to be agreed between the sending and receiving devices as part of your communication design.

There may also be limits on the bandwidth of the line between the two devices. If they are on the same PCB (for example) then the rate can be very high. If they are many metres apart and connected by a wire or RF link etc., then you may need to slow things down to be reliable.

Next you have whether you send single characters or streams. If you want to send a line of text (say) then you need something to tell when the line completes. You *could* always send a fixed number of characters, but normally text comes in varying line lengths, which is where the '\r' and '\n' characters come in to mark the end of the line. (Again, which of these, or a combination, is used is something that you need to determine as part of your design.) There are also other techniques you can use, such as known 'start of text' and 'end of text' characters. These also let you re-synchronise if a message is corrupted part way through, so you can ignore that but still know when the next message starts.

You can also get more fancy and start putting the data into a packet, which is what the various network protocols (e.g. the OSI layers) do. This makes the message longer (and therefore it will take longer to send) but can increase the reliability.

Once you have that sorted out, then you will know how long it will take for each packet to be sent. That brings up the next point - how fast does this need to be? Certainly you must have the first packet sent (and acknowledged if required) before the next packet needs to be sent. For time-critical situations (e.g. monitoring a real-time sensor where you cannot miss a sample) this is important. If you can't complete one message before the next needs to be sent, then you need to make design decisions about the things mentioned before, or make the message smaller (compression???) etc.

This part partly answered my questions, I think :> I was more concerned about timing; that's why I used transfer rates as an example, and the 80 MHz on the chip and the 100 kHz I2C clock, and someone calculated from this information that it has 5 us to respond? I don't know; I just came to the conclusion that maybe it is about timings and I need to know how that works.

Although what you've mentioned overwhelmed me a bit. I usually don't know how to start new stuff, because I usually learn things randomly, a bit of information here and there, and can't find the beginning source. Or there is a simple idea, and right after it a super complex device or something a bit different from what you've learned, and the amount of things is just overwhelming to understand.

Also, there was something about the functions, like the HAL_UART_Transmit and HAL_UART_Receive thing I've mentioned: I still don't know what they look like inside and why they work (I know they work, but you understand my idea?).

Thanks for the response!
 
Hi,

Every device comes with datasheets, and in the datasheets you find the timing requirements and specifications.
And they do this using timing diagrams and charts... for a good reason. And this is the way to go.
Your lengthy textual descriptions are hard to read, hard to understand and prone to misunderstandings.

Timing diagrams use lines, arrows, starting points, end points... to visualize the problem.
Just imagine a map of a town described as text instead of pictures. The same applies here.
Additionally, pictures are quite international... no language barrier.

Klaus
 
Fast and slow are relative terms, using the rated limit as a reference, or any reference in the local context.

So doing 140 kph in a 100 kph zone in your car is "fast" or "too fast" :ROFLMAO:

But doing 9600 baud is "slow" on a UART capable of 921600 bps; that may be considered slower than 4 Mbps on SPI, and both are "slow" compared to a 1 Gbps Ethernet port.

Often signal lines are rated in kbps-m or Mbps-m, i.e. Mb/s per metre of line.
Speed is limited by the line capacitance per unit length.
 
Every device comes with datasheets, and in the datasheets you find the timing requirements and specifications.
And they do this using timing diagrams and charts... for a good reason. And this is the way to go.
Your lengthy textual descriptions are hard to read, hard to understand and prone to misunderstandings.

Timing diagrams use lines, arrows, starting points, end points... to visualize the problem.
Just imagine a map of a town described as text instead of pictures. The same applies here.
Additionally, pictures are quite international... no language barrier.


Hello!
Sorry for my long textual description. I've tried to somehow convey the essence of the problem I am facing and I didn't know how else to describe it.
I've seen some charts/diagrams but couldn't understand why some clock of a processor or something doesn't match the speed of something else. Okay, maybe it was something else, because that one might sound stupid. Like calculating the window of how much time a processor/device/process has before doing something, just from knowing the chip clock and the connection speed. I didn't understand it at that moment, and that moment gave me the thought of whether I should think about timing all the time, and if so, how.

There was also a small piece of my thoughts about UART (not how it works, but rather how the functions are executed).
Like here:

Or another example which still occupies my mind. Let's say we have an STM32 which first transmits data (HAL_UART_Transmit) and after that function it calls HAL_UART_Receive. The device on the other end is already sending data right after the HAL_UART_Transmit call, while HAL_UART_Receive is still being processed, so data arrives while the function is still executing. The data comes at a 9600 baud rate, so will the function finish its work before the full 8 bits arrive? How fast does the function finish its work? And there are faster speeds/bigger baud rates, or faster communication lines like I2C or SPI or CAN (I haven't learned how CAN works yet), USB, PCI-Express.

I was then thinking: hmmm, I've sent the whole command to the device and it knows to send me back some data, so first it has to prepare the data and send it back, and on the STM side (the other side, which receives the data back) it has to read HAL_UART_Receive and execute every line of code of that function, and that takes time. What if the data is prepared and is already being sent? HAL_UART_Receive might still be working through its lines of code, and it ends at reading the RX flag that says the data is ready.
Well, the problem is that the data is already being transmitted to the STM, but the STM might still be reading HAL_UART_Receive and might not be ready before the whole data has been sent to it. The second problem is that the data is being sent while HAL_UART_Receive is being executed/processed, I don't know how to say it. This might not be a problem at a 9600 baud rate; it might get through the code of the HAL_UART_Receive function in time. But for a faster UART? Or for faster communication like I2C or SPI? They are much faster at sending data???

Tell me if what I said is unclear! I'll try to change the way I ask the question.

But yeah, basically it all mixes up in my mind and I don't know where to start, because I need to see some example or something, because it is all so confusing...
 
Hi,
I've seen some charts/diagrams but couldn't understand why some clock of a processor or something doesn't match the speed of something else. Okay, maybe it was something else, because that one might sound stupid. Like calculating the window of how much time a processor/device/process has before doing something, just from knowing the chip clock and the connection speed. I didn't understand it at that moment, and that moment gave me the thought of whether I should think about timing all the time, and if so, how.
Basically, there is not necessarily a relationship.

Example: UART.
You may have a microcontroller operated at 4 MHz... and another operated at 72 MHz.
Both may operate a UART at 200 kBaud. No problem.

And the bit time, indeed all the UART timing, is related to the baud rate, not to the microcontroller clock frequency.
And that is what you will see in the timing diagrams and the timing value tables.
In a general UART specification you will not find any reference to a microcontroller clock frequency.

The only thing is that the UART peripheral derives its clock from the microcontroller clock:
Microcontroller_clock --> clock divider --> UART clock.

Now 200 kBaud means it transmits data at a rate of 200 kBits/second (not necessarily continuously).
200 kBaud means 5 us per bit.
But no microcontroller (unless the programmer decides to use shitty bit-banging code) needs to care about this 5 us.
Microcontrollers have built-in UART hardware (an internal peripheral). It sends byte-wise. In most cases (8N1):
* one START bit
* 8 data bits
* one STOP bit
Thus 10 bits in total. This means 5 us x 10 = 50 us per byte.
But still, a microcontroller does not necessarily need to react within those 50 us (although it should be no problem when interrupt driven).
How fast it needs to react depends on a lot of factors:
* the protocol (independent of the microcontroller at all)
* the microcontroller's internal buffer size / or DMA

So if a microcontroller has a 16 byte buffer... while the maximum message size is less than 16 bytes... then indeed the microcontroller has unlimited time to react.
It is then defined solely by the protocol.

But what is a protocol?
UART is not a protocol, it is an interface.
RS232 is not a protocol either; it just defines voltage levels.
There are several protocols that may use the UART interface.
Examples: MODBUS, PROFIBUS, printer (using RS232 signals), modem (using RS232 signals).
A protocol defines the meaning of the bits and bytes, defines the messages and the reaction times. It may (!!) define the baud rate and voltage levels.

Klaus
 
But still, a microcontroller does not necessarily need to react within those 50 us (although it should be no problem when interrupt driven).
How fast it needs to react depends on a lot of factors:
* the protocol (independent of the microcontroller at all)
* the microcontroller's internal buffer size / or DMA

So if a microcontroller has a 16 byte buffer... while the maximum message size is less than 16 bytes... then indeed the microcontroller has unlimited time to react.
It is then defined solely by the protocol.

Isn't the 1 byte stored physically somewhere, and must it not be taken/received before the next data comes?
So there must be some time to receive it, the 50 us you've mentioned.

It is partly connected to my question here:

I was then thinking: hmmm, I've sent the whole command to the device and it knows to send me back some data, so first it has to prepare the data and send it back, and on the STM side (the other side, which receives the data back) it has to read HAL_UART_Receive and execute every line of code of that function, and that takes time. What if the data is prepared and is already being sent? HAL_UART_Receive might still be working through its lines of code, and it ends at reading the RX flag that says the data is ready.
Well, the problem is that the data is already being transmitted to the STM, but the STM might still be reading HAL_UART_Receive and might not be ready before the whole data has been sent to it. The second problem is that the data is being sent while HAL_UART_Receive is being executed/processed, I don't know how to say it. This might not be a problem at a 9600 baud rate; it might get through the code of the HAL_UART_Receive function in time. But for a faster UART? Or for faster communication like I2C or SPI? They are much faster at sending data???

But check this out. I have two functions, one after another.
I call HAL_UART_Transmit; it goes inside this function and executes every line of the code, and the end result is that it has sent the whole data to the device on the other side. After this function another one is called, HAL_UART_Receive. Here comes the twist: this function is called but the data is already being sent! So data is being sent by the device, but HAL_UART_Receive is still being executed, so it isn't instantly looking at the data that comes in. But there is some register, as I reckon, so it doesn't have to look at the first bit that comes (the HAL_UART_Receive function doesn't have to be ready to read every single bit). But it has 50 us to check the flag and read the data. If the function isn't ready before that time, the data will be overwritten. That is only for a slow baud rate, but what about the faster ones?

My whole point is that the data is being sent while the function responsible for reading that data is still being processed; the data is arriving while the function is not yet ready (like I said, it is being processed, read line after line of code).


I don't know if I have described it better, or if I have misunderstood your answer ;>
 
There are specific layers for both TCP/IP, with 5 layers, and a more general communication model called Open Systems Interconnection, the 7-layer OSI model.

There are zillions of protocols, and some apply to one or more layers. Some protocols combine layers and some systems omit layers.
But each layer can have its own unique set of "protocols":
  • Application: the machine or man-machine interface.
  • Presentation: where data can be encrypted/decrypted and converted into a readable form.
  • Session: manages what is happening between applications.
  • Transport: the data hops between different points on the network to its destination.
  • Network: the addressing and data routing instructions in a network.
  • Datalink: the point-to-point (or multi-point) connection that transmits the data to the network layer.
  • Physical: ... the hardware.
You might have unique commands for data where you expect a response within a timeout. Execution depends on your design requirements, or your understanding of the existing design. You can receive streaming asynchronous data in synchronous frames, like every second all the time, or only on command, or anything in between.

If you want feedback with reliable communication, then you use parity for each byte and a CRC for long frames of bytes, and feed back a NAK byte if you got an error in the last command, which might be interpreted as "say again?". The more protocols you learn, the better you can communicate, rather than reinventing the wheel, which always feels easier when you are just learning. But keep in mind protocols are associated with different levels of the OSI model.

Say, starting with basic async UARTs, or synchronous comm systems.

e.g. UARTs

Physical:
parallel to serial >< TTL/CMOS levels inverted to RS-232 (+/-3 V minimum) <> RS-232 back to TTL/CMOS to parallel, after which interrupts are processed, then commands interpreted, then executed, then data fed back.

The requirements for command latency determine the overall design, physical and logical, which applies to any communication: translation, error detection and error correction, etc. Network congestion and contention rules may apply, as well as network reliability and redundancy.

Even human to Siri: "What is Grok?"

p.s.
"man" in the historical sense includes all "humankind"
 
Isn't the 1 byte stored physically somewhere, and must it not be taken/received before the next data comes?
So there must be some time to receive it, the 50 us you've mentioned.
Yes and no. This is where you need to look at your MCU's datasheet and see how the UART is configured. Often they have a FIFO (or at least are double-buffered) so that you can read the characters that have been fully received while the hardware is reading in the next one.
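As an illustration of why the FIFO/double buffering helps, here is a polling sketch. The register names (SR, DR, the RXNE bit) are illustrative placeholders, not any specific vendor's map - check your MCU's datasheet for the real ones:

#include <stdint.h>

typedef struct {           /* illustrative register layout */
    volatile uint32_t SR;  /* status register */
    volatile uint32_t DR;  /* data register */
} uart_regs_t;

#define UART_SR_RXNE (1u << 5)  /* 'receive buffer not empty' (illustrative) */

void uart_read_poll(uart_regs_t *uart, uint8_t *buf, uint32_t n)
{
    while (n) {
        if (uart->SR & UART_SR_RXNE) {  /* a fully received byte is waiting */
            *buf++ = (uint8_t)uart->DR; /* take it; meanwhile the hardware is
                                           already clocking in the next one */
            n--;
        }
    }
}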

Also, if this is a real problem, then you might look to see if the MCU has DMA, in which case you can get the DMA hardware to transfer the characters into a memory buffer for you as they are received - you only need to be aware when the transfer is complete (however you define that).

Well, the problem is that the data is already being transmitted to the STM, but the STM might still be reading HAL_UART_Receive and might not be ready before the whole data has been sent to it. The second problem is that the data is being sent while HAL_UART_Receive is being executed/processed, I don't know how to say it.
This is where you need to design your information transfers correctly. You probably should not both be 'talking' at the same time. The device that sends the command should transmit while the device that processes that command listens. Then they swap around, so that the device that sent the command starts listening for the response, and the other device sends that response.

I get the feeling that you are trying to use what are known as blocking functions - those are functions that you call and which only return when the task is complete. For example, the 'HAL_UART_Transmit()' function will only return when the number of characters you have specified has been sent. Nothing else in your main code will execute while that happens.

That might be acceptable, but 'HAL_UART_Receive()' will block until the specified number of characters has been received. If the first character takes some time to arrive (you talk about the time taken to formulate the response) then that function will block for a long time.

The alternative is to use 'HAL_UART_Receive_IT()', which will set up the receive but then return to your code. The system in the background will process each character as it is received and buffer it. When all characters have been received, it will call a 'callback' function to say "here are the characters you requested" and you can do what you need to with them. The 'HAL_UART_Receive_DMA()' function is similar (except that it uses the hardware DMA to do the character transfer to the buffer), but again you will get a callback to say the transfer is complete.

Both of these mean that you can be doing other things while you wait for the response.
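A sketch of that interrupt-driven pattern with the HAL calls named above (the handle huart2, the 16-byte buffer and the device header are assumptions for the example; your own init code will differ):

#include "stm32f4xx_hal.h"          /* pick the header for your part */

extern UART_HandleTypeDef huart2;   /* set up elsewhere by the init code */
static uint8_t rx_buf[16];
static volatile uint8_t rx_done = 0;

void start_listening(void)
{
    /* arms the receive and returns immediately - non-blocking */
    HAL_UART_Receive_IT(&huart2, rx_buf, sizeof rx_buf);
}

/* the HAL calls this from interrupt context once all 16 bytes are in */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart2)
        rx_done = 1;    /* main loop polls this flag at its leisure */
}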

Also, you are talking about a processor that runs at 80 MHz. That means it will execute each instruction in 12.5 ns, or 80 instructions per us. If characters arrive only every 100 us or so (and at 9600 baud a complete 10-bit character actually takes about 1 ms), then you actually have heaps of time; more so if you only need to look at the complete received buffer and not individual characters.

(I have an application that monitors two sensors that send characters at 9600 baud each, more or less continuously, plus a terminal interface that runs at 115200 baud. I send using blocking functions, but all of the receive operations are interrupt driven (as they can interrupt the blocking send function). Even interrupting on each character, where I process things like line termination, backspace and possible character echo, and then parse the complete inputs and generate appropriate commands to the sensors and data to the terminal, the processor is mostly idling.)

Please be aware of a thing called "premature optimisation", where you think there might be a problem and try to code around it, and end up spending time solving a problem that does not exist. It is much better to build your program up bit by bit and, if you strike a problem such as not processing an input fast enough, then look at ways to speed that bit up.

Susan
 
Isn't the 1 byte stored physically somewhere, and must it not be taken/received before the next data comes?
So there must be some time to receive it, the 50 us you've mentioned.
1) Yes, it is stored physically. But if you have a 16 byte buffer there is no need to fetch it before the next byte is received. You need to fetch it before the buffer overruns.
2) And it depends on sending behaviour and protocol. If the data are transmitted byte-wise, then it's not unusual that the next byte is sent (received) 1 ms later, 10 ms later, or after a response by the receiver.
It simply is not always: 1 byte immediately after the other without a gap.

And again: fetching 1 byte every 50 us - interrupt controlled - isn't a problem for any microcontroller. Even for a slow one.

******
Just to give you an idea of what is possible: I have an application using an 8 bit AVR microcontroller clocked at 8 MHz:
* I'm doing 2-channel, 16 bit ADC sampling at 10 kSmpl/s continuously, not losing a single value
* in parallel I run 2 algorithms on these ADC values to look for a trigger // or to detect the end of the measurement
* in parallel I check every single sample for ADC overflow, and in that case I switch the channel's sensitivity (gain)
* in parallel I modify the data and store it in a 2 Mbyte memory, accessed via a parallel bus (a 16 bit address bus means one needs to process memory pages)
* in parallel I run the algorithm to calculate a value derived from the 2 channels' data (up to 40 bit maths)
* in parallel I do the display update via I2C
* in parallel I do USB command receive - not losing a single byte
* in parallel I parse the USB commands and reply to them
* in parallel I read push buttons
* in parallel I perform the push buttons' actions
* in parallel I read the battery state, to switch OFF in case...
* in parallel I control the ON/OFF of a beeper
* in parallel I run a software RTC - for sure not losing a single tick
* I have surely forgotten some tasks

Now when I say "in parallel" you should read it as "real time", because the microcontroller can't process these tasks truly in parallel.

This was decades ago - writing a lot of the parts in assembler and with the (hardware) support of an external CPLD. I would never do it this way again. I'd use an STM32... or a similar 32 bit processor, with higher processing power and better DMA support / more hardware-supported peripherals.
It is not meant for anyone to do the same - it is just meant to show how much processing power an 8 bit AVR has.

The good thing was, it ran off a 9 V block battery, drawing below 7 mA. Thank god nowadays we have LiIon batteries ;-)

Klaus
 
I get the feeling that you are trying to use what are known as blocking functions - those are functions that you call and which only return when the task is complete. For example, the 'HAL_UART_Transmit()' function will only return when the number of characters you have specified has been sent. Nothing else in your main code will execute while that happens.

Not exactly the thing I was getting at, but yes, I was referring to blocking functions. It doesn't have to be blocking; it can even be the IT version.

My main problem was something else.
I'll try to describe it differently.

Let's say the HAL_UART_Receive function has to check many flags before it reads the data. After that it checks the RX flag that says the data is ready to be taken.

Now, after the HAL_UART_Transmit call, let's say the device on the other side is already sending the first byte, and the HAL_UART_Transmit call has returned because it has done its job.

The HAL_UART_Receive function is being processed line by line and executes every line of code, like check flag no. 1 and check the timeout, then check flag no. 2, but at the same time the data comes in from the device. What if the data arrives before the function gets to checking the RX flag? How fast does the function get through its lines of code before it checks this RX flag? Remember the data is being transmitted from the other device while the STM is still working through the function. How fast will the STM read the lines of code before it takes the data?

It's the thing about the speed of reading lines of code while the data is being transmitted.

Like KlausST said, the physical storage exists. But I was wondering about the speed of the lines of code. Would it be fast enough to be ready to check the RX flag before the data is overwritten, if the buffer were smaller?

I was not concerned about whether the data transmission is slower than reading it in HAL_UART_Receive; the function just patiently waits for the flag to take the data. I was concerned whether HAL_UART_Receive is fast enough to take the data before it disappears, because data is being transmitted to the STM while the STM is still reading through the HAL_UART_Receive function, line after line, until it gets to checking the RX flag for whether the data is ready to be taken.

I don't know if this description is better.

1) Yes, it is stored physically. But if you have a 16 byte buffer there is no need to fetch it before the next byte is received. You need to fetch it before the buffer overruns.
2) And it depends on sending behaviour and protocol. If the data are transmitted byte-wise, then it's not unusual that the next byte is sent (received) 1 ms later, 10 ms later, or after a response by the receiver.
It simply is not always: 1 byte immediately after the other without a gap.

So the physical buffer is 16 bytes? What if I get 8 bytes? It usually signals that data is ready to be taken by the fact that the register is full, but in this case it is not.

Good to also know that the data can be sent with delays, and not necessarily immediately one byte after another.

And again: fetching 1 byte every 50 us - interrupt controlled - isn't a problem for any microcontroller. Even for a slow one.

Why is it not a problem? What if it interrupts a different communication, like in the middle of a transmission of another communication? Of course DMA or IT is a good idea, or interrupt priorities.

So many things to be aware of ... Pretty overwhelming.

Also, you are talking about a processor that runs at 80 MHz. That means it will execute each instruction in 12.5 ns, or 80 instructions per us. If characters arrive only every 100 us or so (and at 9600 baud a complete 10-bit character actually takes about 1 ms), then you actually have heaps of time; more so if you only need to look at the complete received buffer and not individual characters.

What does it mean that I have a lot of time? 100 us between characters and 80 instructions per us; what is the relation of one to the other? Is it that at 80 MHz it will read through HAL_UART_Receive faster than the whole data will arrive? For example, it will read through the HAL_UART_Receive function in ns and will be ready to read the RX flag, and the data comes after 100 us, which means it indeed had a lot of time to read every line of code of the function up to the point of the RX flag? I wonder if for faster baud rates it also reads the function fast enough, and for other communication like I2C or SPI etc., because it also has to read their receive functions; will it be fast enough to read each line before the full data comes? They are a lot faster than UART, as far as I know.
 
Hi,
So the physical buffer is 16 bytes? What if I get 8 bytes? It usually signals that data is ready to be taken by the fact that the register is full, but in this case it is not.

Good to also know that the data can be sent with delays, and not necessarily immediately one byte after another.
You got me wrong:
I clearly wrote "IF"... I did not write that it is always the case!
--> "IF" you have a 16 byte buffer
--> it is not unusual that there is a gap
***
But say you have a 16 byte buffer and you receive 8 bytes only.
This is quite the identical problem: you have a box where 16 eggs fit, but you have only 8 eggs.
If you think you are not able to transport a half-full box... then life becomes really difficult. ;-)

****
Why is it not a problem? What if it interrupts a different communication, like in the middle of a transmission of another communication? Of course DMA or IT is a good idea, or interrupt priorities.
Did you read my list of what all is processed in parallel? It's not a problem.
Waiting is a problem. That means blocking functions. Even polling functions use a lot of processing power.

But receiving a byte via UART... where in an ISR you read the byte from the UART buffer and store it into an SRAM buffer... maybe takes 1 us on an AVR. Let's say 2 us including ISR overhead. So you have 48 us, or 96% of processing power, left to really do meaningful jobs.

Mind: ISRs need to be short in time. If an ISR takes more than 10 us... I would be concerned.
I often test this just by SETting an IO pin on entering the ISR and RESETting it on leaving the ISR.
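That trick as a sketch (the IRQ handler name, port and pin are placeholders, and the STM32 HAL GPIO call is assumed for the example; the pulse width on an oscilloscope is then your ISR execution time):

#include "stm32f4xx_hal.h"   /* assumed device header */

void USART1_IRQHandler(void)
{
    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_0, GPIO_PIN_SET);    /* entry marker */

    /* ... the real ISR work: read the byte, store it, set a flag ... */

    HAL_GPIO_WritePin(GPIOB, GPIO_PIN_0, GPIO_PIN_RESET);  /* exit marker */
}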

And yes, what should happen if the ISR happens during the transmission of another communication? Nothing. Let it happen!

****
So many things to be aware of ... Pretty overwhelming.
No. Don't be scared.
Just do your job step by step.
First decide the application requirements properly, then draw your flow charts and timing diagrams, then... then start coding.

Klaus
 
You got me wrong:
I clearly wrote "IF"... I did not write that it is always the case!
--> "IF" you have a 16 byte buffer
--> it is not unusual that there is a gap
***
But say you have a 16 byte buffer and you receive 8 bytes only.
This is quite the identical problem: you have a box where 16 eggs fit, but you have only 8 eggs.
If you think you are not able to transport a half-full box... then life becomes really difficult. ;-)

****

Aaa okay, this "if" really just disappeared from my mind.

About the box: yeah, transporting 8 should be alright, and not being able to transport half of it would be weird ;>
I just thought of it in gate-level logic: you must signal that something has been achieved to let other devices know to do other stuff etc. So I thought that only when the buffer is full does it send a signal of "hey, I have the whole data, take it now before new data comes"; that's at least what I heard, because the receive function waits for a flag from the physical buffer that it is full, to take the data.
Did you read my list of what all is processed in parallel? It's not a problem.
Waiting is a problem. That means blocking functions. Even polling functions use a lot of processing power.

But receiving a byte via UART... where in an ISR you read the byte from the UART buffer and store it into an SRAM buffer... maybe takes 1 us on an AVR. Let's say 2 us including ISR overhead. So you have 48 us, or 96% of processing power, left to really do meaningful jobs.

Mind: ISRs need to be short in time. If an ISR takes more than 10 us... I would be concerned.
I often test this just by SETting an IO pin on entering the ISR and RESETting it on leaving the ISR.

And yes, what should happen if the ISR happens during the transmission of another communication? Nothing. Let it happen!

Yes, I did read the parallel-processing list; I assumed that it does the first thing, then another, then another, like one job at a time.
Because code is read from top to bottom, or it is handed off to other things to be done in the background.

I am also having a hard time trying to explain why my vision of it is concerning. I can see you've been trying to tell me it is not, but I don't feel that it answered my question ;>

I was concerned with how fast it reads every line of code and how fast it executes every line of code, so there are like two times?
Ehhhh, I was just wondering while programming: hey, the data is, I guess, already being sent while I use HAL_UART_Receive or HAL_I2C_Receive or any other receive function. The data is coming to the STM32, but in the meanwhile it is reading every line of code, and this takes time; each read line must then be executed, and that also takes time.

I even went deep into my thoughts: if a line of code is translated into assembler code, then there are more lines to read and execute, and some instructions like mov need more bits and might need more clock cycles to decode and execute.

I might be overthinking it, but I didn't know how to answer it when there are so many layers of everything, and it all must be read and interpreted, so one line of code can take more or less time depending on its assembler interpretation. Susan mentioned an example: an 80 MHz clock means 1 instruction executed in 12.5 ns, but each line of code can compile to longer or shorter assembler code, so it is not even... Ehhh, too much, too much...



But coming back, hmmm, when is waiting a problem, because it wastes time? So the first byte is received and another comes in 50 us; so yeah, if it takes 2 us to process it then there are 48 us of free time, and it waits. But what if the baud rate were faster? A lot faster, or a different communication that is faster than UART; and reading lines of code can take more time, etc.

Hmm, I don't know how to describe it. I'll try to make a picture; sorry for overcomplicating stuff, thanks for the support.

No. Don't be scared.
Just do your job step by step.
First decide the application requirements properly, then draw your flow charts and timing diagrams, then... then start coding.

It is overwhelming ;>
I was usually coding without thinking about it, but after coding my WiFi (I had to turn on I2C on both sides) I started wondering how it works, also in cases where I didn't have to think about it (turning on UART only in the STM; the other device is waiting), so I wondered how fast their device is, so that it is ready to read and send to the other device.
Not only that: usually when I have a problem I hear some bits of information, which usually confuse me because I hadn't been thinking about those things, and I don't know how to find the beginning of that bit of information in order to understand the answers, because those bits of information usually require understanding at an intermediate or advanced level, while I don't even know what they mean, and then I overthink.
 
I had this dilemma on my first project with continuous streaming RX data on a 100-channel frame, with injected operator TX commands to get specific data feedback to verify the change, and I expected the ISR, flag and reading to take less time than the FIFO would take to overflow. I found latency was a problem, so I had to use DMA for the RX data to resolve it. That was a long time ago, using an HP SoS processor in the HP9825, and may not be the same as your situation.
 
Hi,
But coming back, hmmm, when is waiting a problem, because it wastes time? So the first byte is received and another comes in 50 us; so yeah, if it takes 2 us to process it then there are 48 us of free time, and it waits.
It is almost the other way round.

The main loop does some less timing-critical tasks; I'd like to say it does the less important tasks. The main loop runs... and indeed it just looks (polls) for new events.

And while the main loop runs and processes tasks and polls for new flags... sometimes in between, new data is received by the UART --> it raises an interrupt --> the ISR is processed.
ISR:
* read the data from the UART
* increment the address pointer
* store the data at the address pointer location
* set the flag "new_UART_data_arrived"
(takes maybe less than 1 microsecond)
and after that it immediately returns to the main loop.

And the main loop does not get notified; it will never know that there was an interrupt. It will never know that there was a short delay in the main loop processing.

But somewhere in the main loop there is
* if (new_UART_data_arrived) do_ParseUART;

other lines in the main loop could be:
* if (new_USB_data_arrived) do_ParseUSB;
* if (requestDisplayUpdate) do_DisplayUpdate;
* if (ADC_buffer_full) do_AnalogCalculations; // the ADC buffer may be 512 samples of data, and if the ADC ISR has received 256 samples it sets the ADC_buffer_full flag
* } // end of loop
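Put together as a compilable sketch (the helper names are illustrative, not a specific API; a refinement mentioned further down is to set the flag only when a '\n' delimiter arrives, so the main loop parses whole messages):

#include <stdbool.h>
#include <stdint.h>

extern uint8_t read_uart_data_register(void);   /* hypothetical helper */
extern void do_ParseUART(const uint8_t *msg);   /* hypothetical helper */

static volatile bool new_UART_data_arrived;
static uint8_t uart_buf[64];
static volatile uint8_t uart_idx;

/* called by hardware for every received byte - short, no waiting */
void UART_RX_ISR(void)
{
    uint8_t b = read_uart_data_register();
    if (uart_idx < sizeof uart_buf)
        uart_buf[uart_idx++] = b;      /* store at pointer, increment */
    new_UART_data_arrived = true;      /* raise the post-box flag */
}

int main(void)
{
    for (;;) {                         /* the main loop never waits... */
        if (new_UART_data_arrived) {   /* ...it only checks flags */
            new_UART_data_arrived = false;
            do_ParseUART(uart_buf);
        }
        /* if (ADC_buffer_full) do_AnalogCalculations(); ... */
    }
}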

So it does not wait at all.
The ISRs do the urgent jobs just to be sure no data is lost. They are very short in time.

***
Back to my "parallel example" of post #12:
All these tasks look as if they run continuously.
* So it samples 10 kSmpl/s.
* But when the UART receives a command... it still samples without a gap...
* and now it can parse and answer the UART command... and while it is busy with this... it still does the 10 kSamples/s without a gap... and it still can receive UART data
... and so on.
It does not miss a single AD conversion result, it misses not a single UART byte...

It never waits. Instead it works. Continuously.
*****

Usually one optimizes the processing time.
Often the UART command has a dedicated delimiter (like '\n'); then you can make the UART ISR set the "new data arrived" flag only when a '\n' is received.
***
Do you know post boxes with a flag?
It's exactly like this. It saves you from continuously going outside (cold, rain) to check whether there is a letter in the box. Checking a flag from behind the window is fast and convenient.
So you could have 20 different post boxes in your garden and check them all remotely within a second. And if a flag is set, then you know which box to go to.
And in the meantime you can be busy baking a cake. Checking the flags from the window won't harm your bakery.

You can focus on the bakery... and don't need to focus on waiting for a letter to be delivered. Still you won't miss the letter and can answer it with a short delay.

Klaus
 
(takes maybe less than 1 microsecond)
and after that it immediately returns to the main loop.

And the main loop does not get notified; it will never know that there was an interrupt. It will never know that there was a short delay in the main loop processing.

Well, technically it stopped the main loop just to finish the ISR. I believe with blocking it stops the main loop; with IT, it stops the main loop, goes to the ISR, handles the flag, and goes back to the main loop. Because in the ISR I would use HAL_..._Receive rather than just a flag or something. So it must go through the whole while loop and start from the beginning, where the data will be taken, and if the main loop is too long, then there is the amount of time it takes to get to the if statement that takes the data.

So many 'if' questions.

* So it samples 10 kSmpl/s.
* But when the UART receives a command... it still samples without a gap...

Or is it rather that the gap is so small it doesn't lose the data?

There was also a different question in my post: how fast are the insides of the functions processed/executed?
 
Here comes a better visualization:

1.

[attached image: 1705703472885.png]


There is a lot of code inside HAL_UART_Transmit before it even transmits the data.
Okay, let's say it has sent it successfully!

2.

Now it calls HAL_UART_Receive:

[attached image: 1705703863830.png]


It is at the red point, so it is checking the first flag, but this is not the flag that says to read the receiver!
So the data is coming and the function is still being processed.

3.

[attached image: 1705703980329.png]


Look here: the first bits have been received but the code is still being processed; it is not yet standing by to check the RX flag!!! By checking the RX flag it takes the data that is inside the buffer when the buffer is full.

And what if HAL_UART_Receive() isn't ready in time, because it has many lines of code to process?

Maybe I made it look clearer now ;> Paint is cool.
 
Hi,

HAL_UART_Transmit():
* is a blocking function. Read the documentation.

It's exactly what one should not use.
Let's say you send 8 bytes with a baud rate of 9600; then it blocks operation for 8 bytes x 10 bits / 9600 baud = 8333 us.
It uses 8333 us of processing power, because it waits until the 8 bytes are sent out.
You cannot do ADC stuff, you cannot receive, you cannot check button presses, you cannot do anything during these 8333 us.

It wastes probably 8300 us compared to an interrupt solution.
It wastes 8333 us compared to a DMA solution.
You call the function... and 8333 us later you can perform your next function.

Post box with flag:
It's you sitting outside in your garden in the rain, waiting all the time for the letters to be fetched by the postman.
No cake today.

***
pseudo code:
char UART_TX_Buffer[32] = "SendTxt\n";
HAL_UART_Transmit_IT(UART1, UART_TX_Buffer, 8);

... is the "interrupt version".
it sends the bytes out using interrupts.
it is non blocking. It does not wait until the bytes are sent out.
The function returns within let´s say 5us.
And you are able to perofrm other tasks ... while in background (interrupt) it sends out the data via UART .. you (main loop) won´t notice that during the next 8333us the interrrupt "steals" maybe 30 us from your (main loop) processing power. The main() processing maybe is now slowed down by 0.4% to 99.6%

Post box with flag:
You put 8 letters in a plastic box and deposit it in your garden.
You go baking a cake.
And every day you have one short job (0.4% of your working time): take a letter out of the box and put it on top of the box so the postman sees it and takes it.
99.6% of your time you are free to bake cakes.
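For completeness, a sketch of how the main loop learns that the interrupt version has finished: the HAL signals completion through a callback rather than through the function's return (the handle name and device header are assumptions for the example):

#include "stm32f4xx_hal.h"

static volatile uint8_t tx_busy;

void send_async(UART_HandleTypeDef *huart, uint8_t *data, uint16_t len)
{
    tx_busy = 1;
    HAL_UART_Transmit_IT(huart, data, len);  /* returns within microseconds */
    /* keep baking cakes; just don't reuse 'data' until tx_busy clears */
}

/* called from interrupt context when the last byte has gone out */
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    (void)huart;
    tx_busy = 0;
}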
***

pseudo code:
char UART_TX_Buffer[32] = "SendTxt\n";
HAL_UART_Transmit_DMA(UART1, UART_TX_Buffer, 8);

... is the "DMA version".
it sends the bytes out using DMA.
it is non blocking. It does not wait until the bytes are sent out.
The function returns within let´s say 5us.
And you are able to perform other tasks ... while in background (hardware) it sends out the data via UART .. you (main loop) has 100% of processing power left.
No additinal processing power is used.
The main() processing is not slowed down. you have 100% of processing power

Post box with flag:
You put 8 letters in a plastic box and deposit it in your garden.
You go baking a cake.
And every day the postman takes a letter out of the plastic box by himself. You don't need to do anything.
100% of your time you are free to bake cakes.
***
The plastic box = your char array. Letter = a byte. Baud rate = one letter per day. Postman = UART hardware.

Klaus
 
