I2C: why do we use a separate HIGH FREQ system clock when we have SCL


abhinavpr

Junior Member level 2
Joined
Jun 19, 2013
Messages
21
Helped
0
Reputation
0
Reaction score
0
Trophy points
1
Visit site
Activity points
205
Hi,

I am learning the I2C protocol and intend to design a slave module for it. I have gone through a few designs by others and, except for one or two, almost everywhere a system clock is used in addition to SCL. I would like to know if somebody can explain why it is common practice to use a high-frequency system clock.



I understand it can help in a debounce circuit, but that should not be the primary reason for using it.



I guess it must be related to the generation/detection of the START/STOP conditions, but then again that can be done without using a separate system clock. I am attaching a link to a design from fpga4fun.com where the I2C slave module does not use a separate system clock: https://www.fpga4fun.com/I2C_2.html
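
To show what I mean by the asynchronous style, here is a rough sketch of the idea (my own simplification, not the actual fpga4fun code), where SDA edges are used directly to sample SCL:

Code:
// Rough sketch, not the fpga4fun code: asynchronous START/STOP detection.
// START = SDA falling while SCL is high, STOP = SDA rising while SCL is high.
module i2c_startstop_async (
    input  wire scl,
    input  wire sda,
    output reg  saw_start,
    output reg  saw_stop
);
    // SDA edges are used directly as "clocks" to sample SCL: if SCL is high
    // at the moment SDA changes, the edge was a START (falling) or STOP (rising).
    always @(negedge sda)
        saw_start <= scl;   // a real design must also clear this flag again

    always @(posedge sda)
        saw_stop <= scl;    // a real design must also clear this flag again
endmodule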



It would be of great help if somebody could provide me with an answer.
 

Your comment is a bit surprising. I2C devices do not share a common "system clock"; that is not common practice. Can you give some examples? Any slave or master design that depends on a common clock other than SCL does not meet the I2C specification. Furthermore, it would be a very poor design and a violation of the goals of I2C. Post some examples and we can comment.
 

If I2C is being used for device control, but the device you are controlling is also receiving data (like, say, digital audio), could you be seeing the clock for that data?
 

Your comment is a bit surprising. I2C devices do not share a common "system clock"; that is not common practice.
That's only true for "standalone" I2C slaves. Microcontrollers usually operate the I2C interface as a synchronous circuit running from the system clock, e.g. PIC processors. You become aware of this when you study their I2C timing specifications: they require a minimum core clock frequency which is a large multiple of the I2C bit rate.

The OP is obviously asking about programmable logic implementations of an I2C slave, and there you have the same situation. You can implement a slave interface either as an asynchronous design (odd in terms of the usual FPGA design templates) or as a straightforward synchronous design.

It should also be noted that the said standalone I2C slaves aren't actually pure digital designs. They have, for example, additional glitch filters that can't be implemented in an asynchronous logic design without analog components such as monoflops.
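
To make this concrete, here is a minimal sketch of what the system clock buys you in a synchronous slave front end: a simple digital glitch filter on SCL/SDA plus SCL-edge and START/STOP detection, all in one clock domain. Module and signal names are my own, and the inputs are assumed to have already been brought into the clk domain with the usual two flip-flop synchronizers:

Code:
// Sketch only: glitch filtering plus SCL-edge and START/STOP detection,
// driven by a system clock running many times faster than SCL.
module i2c_filter_detect (
    input  wire clk,        // system clock, e.g. >= 16x the SCL rate
    input  wire scl_i,      // SCL, already synchronized to clk
    input  wire sda_i,      // SDA, already synchronized to clk
    output reg  scl_rise,
    output reg  scl_fall,
    output reg  start_det,  // SDA falling while SCL high
    output reg  stop_det    // SDA rising while SCL high
);
    // Majority-style glitch filter: a level change is accepted only after
    // three identical samples in a row.
    reg [2:0] scl_sh = 3'b111, sda_sh = 3'b111;
    reg scl_f = 1'b1, sda_f = 1'b1;   // filtered lines
    reg scl_q = 1'b1, sda_q = 1'b1;   // filtered lines, one clk earlier

    always @(posedge clk) begin
        scl_sh <= {scl_sh[1:0], scl_i};
        sda_sh <= {sda_sh[1:0], sda_i};
        if (scl_sh == 3'b111) scl_f <= 1'b1;
        else if (scl_sh == 3'b000) scl_f <= 1'b0;
        if (sda_sh == 3'b111) sda_f <= 1'b1;
        else if (sda_sh == 3'b000) sda_f <= 1'b0;

        scl_q <= scl_f;
        sda_q <= sda_f;

        // One-cycle strobes derived from the filtered lines
        scl_rise  <=  scl_f & ~scl_q;
        scl_fall  <= ~scl_f &  scl_q;
        start_det <=  scl_f &  scl_q &  sda_q & ~sda_f;  // START condition
        stop_det  <=  scl_f &  scl_q & ~sda_q &  sda_f;  // STOP condition
    end
endmodule

With a 100 kHz or 400 kHz bus and a system clock in the MHz range, the three-sample filter adds only a few clock cycles of delay, which is negligible compared to one SCL bit time.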
 

The I2C specification, as of 2012, defines a clock line and a data line. Any relationship to a system/core/other clock is outside the specification. I don't think the OP was so "obviously asking" that. The original post asked about the "I2C protocol" and a "system clock ... used in addition to SCL". I interpreted these comments to imply the use of a system clock alongside SCL, maintaining some sort of timing relationship between the two. Of course, any implementation of an I2C interface will use a local clock, since almost all designs are synchronous at some point, but please do not confuse this with the I2C specification maintained by NXP. I think this is an important distinction for anyone getting started with I2C designs. Anyone designing a master or slave interface has to deal with the fact that an incoming SCL is not synchronous to any local clock. This is a very important consideration.
 

I agree that the original question is a bit ambiguous. The reference to the asynchronous fpga4fun I2C slave design, however, clarifies what it is about.

Correct domain crossing synchronization is required in synchronous I2C interfaces, as in most designs that process unrelated external signals.
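
For reference, a generic sketch of that synchronization (not tied to any particular vendor template): each external line passes through two flip-flops in the system clock domain before any other logic looks at it.

Code:
// Generic two flip-flop synchronizer for one asynchronous input pin;
// instantiate once per external signal (SCL, SDA, ...).
module sync2 (
    input  wire clk,
    input  wire async_i,   // signal from outside the clk domain
    output wire sync_o     // safe to use inside the clk domain
);
    reg [1:0] ff = 2'b11;  // first flop may go metastable, second one cleans it up

    always @(posedge clk)
        ff <= {ff[0], async_i};

    assign sync_o = ff[1];
endmodule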
 

Thank you all for your informative replies.
I have one more query.


The I2C spec says that the STOP condition is generated by the master after the acknowledge from the slave (among other conditions), but I could not find out what should happen, and how a slave should behave, if it detects a STOP condition in the middle of a data transfer (i.e., if it receives a STOP condition while receiving address bits or data bits).

-abhinav
 


The specification requires a complete address followed by data in complete bytes. As you say, the specification does not define the behavior of a slave if this rule is violated. Good design practice would have the slave detect the STOP condition and be ready for a START condition and an all-new bus cycle. Not required, but good design. A hint of this is given in Note 5 on page 14 of the 2012 specification: "5. A START condition immediately followed by a STOP condition (void message) is an illegal format. Many devices however are designed to operate properly under this condition." In general, most bus specifications cannot specify behavior for all illegal conditions, since there are simply so many of them.
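
In RTL terms that advice is straightforward. A sketch only, with hypothetical strobe names (start_det, stop_det, byte_done) assumed to come from the bus front end: whatever state the slave is in, a detected STOP simply forces it back to idle.

Code:
// Sketch of the "abort on STOP" rule inside a slave state machine.
// start_det, stop_det and byte_done are assumed one-cycle strobes from the
// bus front end and bit counter; the actual address/data handling is omitted.
module i2c_slave_fsm_sketch (
    input wire clk,
    input wire start_det,   // START or repeated START seen on the bus
    input wire stop_det,    // STOP seen on the bus
    input wire byte_done    // eight bits of the current byte have been received
);
    localparam IDLE = 2'd0, ADDR = 2'd1, DATA = 2'd2, ACK = 2'd3;
    reg [1:0] state = IDLE;

    always @(posedge clk) begin
        if (stop_det)
            state <= IDLE;          // abandon any partial byte, wait for a new START
        else if (start_det)
            state <= ADDR;          // a (repeated) START always restarts address reception
        else case (state)
            IDLE: ;                 // nothing to do until a START arrives
            ADDR: if (byte_done) state <= ACK;
            DATA: if (byte_done) state <= ACK;
            ACK:  state <= DATA;    // simplified: continue receiving data bytes
        endcase
    end
endmodule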
 

When you say a STOP condition in the middle of a data transfer, do you mean within a byte transmission, or between bytes?
I would not expect a master to generate a STOP condition between bits, only between bytes, which respects the protocol.
In any case, when a STOP condition occurs, at any time, the slave should return to idle and be ready, after the bus-free time, to receive a START condition. Beyond that, adding functionality to log this behavior is up to you, in order to provide information to the state machine or the processor connected to this I2C slave.
 
