sun_ray
Alternatively, the write data may originally have been 16-bit data that was broken into bytes for transmission; in that case the concatenation has restored the original data rather than corrupting it.
In either case there is no corruption of data.
Let's not forget. YOU are the designer of the system. You saw a need for packing two bytes into a 16-bit word. You designed the FIFO. You placed the bits of those two bytes into the FIFO in an order of your choosing.
On the read side, you are reading the exact 16-bit word you designed. You know the position and purpose of every bit, nybble, field or byte in that word. Therefore you will know exactly how to process it.
These FIFOs do not sprout up by accident, nor do they randomly rearrange bits, so nothing will ever be unknown to you.
r.b.
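The round trip r.b. describes can be checked in a few lines. This is a plain-Python sketch, not HDL, and the packing order shown (first byte into the high half) is one arbitrary designer's choice:

```python
# Pack two bytes into a 16-bit word, then split the word back out.
# The order (first byte -> high half) is the designer's choice.

def pack(first: int, second: int) -> int:
    """Concatenate two bytes into one 16-bit word."""
    return ((first & 0xFF) << 8) | (second & 0xFF)

def unpack(word: int) -> tuple[int, int]:
    """Recover the same two bytes from the 16-bit word."""
    return (word >> 8) & 0xFF, word & 0xFF

# Round trip: the bytes come back exactly as they went in.
assert unpack(pack(0xAA, 0xBB)) == (0xAA, 0xBB)
```

Because you chose the packing order, the unpacking order is known, and nothing is lost.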
sun_ray, what is your REAL problem? You accept the fact that FIFO is doing EXACTLY as it is supposed to: You write in two bytes, one at a time; you read out two bytes, two bytes at a time. THE BYTES DON'T CHANGE. Where is the problem?
time  Write  Read
  0    aa     -
  1    bb     -
  2    cc     -
  3    dd     -
  4    ee    aabb
  5    ff    ccdd
  6    -     eeff
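The timing table above can be modeled in a few lines of Python (a hypothetical behavioral sketch, not the actual RTL): one byte is written per cycle, and every two writes produce one 16-bit word in the FIFO.

```python
from collections import deque

pending = deque()   # bytes waiting to be paired up
fifo = deque()      # packed 16-bit words
reads = []          # words seen by the read side

for byte in [0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF]:
    pending.append(byte)
    if len(pending) == 2:  # two bytes collected -> one word enters the FIFO
        hi, lo = pending.popleft(), pending.popleft()
        fifo.append((hi << 8) | lo)

while fifo:
    reads.append(fifo.popleft())

print([f"{w:04x}" for w in reads])  # ['aabb', 'ccdd', 'eeff']
```

The read side sees aabb, ccdd, eeff, exactly matching the table: the bytes arrive in order, just grouped in pairs.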
I accept that. But my problem is that two bytes at a time will be processed by the read-side design unit, so it processes data two bytes wide per clock cycle, while on the write side the data is written one byte at a time. So it is leading to corruption of data. Read all my posts in this thread carefully to understand the issue I am stating.
Regards
The reason two bytes are being read is only to safely transfer the data using the FIFO. It is mandatory to read 2 bytes to safely transfer the data. The data could not be read as eight bits when using the FIFO for the transfer. I would have been happy if the data could be read as eight bits. How is this situation handled in practice?

Are you assuming your data is 'corrupt' because it is 2 bytes wide instead of one? If so, why are you reading 2-byte-wide data?
So on the write side you are getting the data as aa in the first clock cycle, bb in the second clock cycle, and so on. But on the read side you are not getting 'aa' as the data in the first clock cycle; you are getting 'aabb'. So you are reading aabb instead of the correct data aa in the first clock cycle. In subsequent clock cycles the read side sees aabb, ccdd, eeff instead of the correct sequence aa, bb, cc. This is how corruption of data is happening.

There's no corruption, surely you concede? Here is an example, where the sender wrote a total of 6 bytes (aa, bb, cc, dd, ee, ff in hexadecimal) at time = 0, 1, 2, 3, 4, 5:
Code:
time  Write  Read
  0    aa     -
  1    bb     -
  2    cc     -
  3    dd     -
  4    ee    aabb
  5    ff    ccdd
  6    -     eeff
Where is the corruption? Can you draw a diagram like this to explain?
The reason two bytes are being read is only to safely transfer the data using the FIFO. It is mandatory to read 2 bytes to safely transfer the data. The data could not be read as eight bits when using the FIFO for the transfer. I would have been happy if the data could be read as eight bits. How is this situation handled in practice?
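One common answer in practice is to put a small unpacking stage after the FIFO that re-serializes each 16-bit word back into one byte per cycle. Here is a hypothetical Python sketch of that stage; in RTL it would be a mux selecting the high or low byte plus a toggle flip-flop:

```python
# Re-serialize packed 16-bit words back into the original byte stream,
# one byte per "cycle" (one byte per generator step).

def byte_stream(words):
    """Yield the original byte sequence from packed 16-bit words."""
    for word in words:
        yield (word >> 8) & 0xFF   # first byte written (high half)
        yield word & 0xFF          # second byte written (low half)

print([f"{b:02x}" for b in byte_stream([0xAABB, 0xCCDD, 0xEEFF])])
# ['aa', 'bb', 'cc', 'dd', 'ee', 'ff']
```

The original one-byte-per-cycle sequence is recovered exactly; the only cost is that the unpacking stage runs for two cycles per FIFO read.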
And you packed them that way for a reason, so you must want to use aabb etc.
You packed the bytes so you know how to unpack them.
Even if you don't, just split the read word up into two bytes. Use word[15:8] as one byte and word[7:0] as the other. Voila, you have your bytes back.
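The `word[15:8]` / `word[7:0]` notation above is a Verilog bit slice; the same split is just a shift and a mask in any language. A minimal Python equivalent:

```python
word = 0xAABB               # one 16-bit word read from the FIFO

high = (word >> 8) & 0xFF   # word[15:8] in Verilog terms
low = word & 0xFF           # word[7:0]

print(f"{high:02x} {low:02x}")  # aa bb
```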