
SDC: How to set delay in a clock path


NikosTS

Hello everyone,
I have a design where the clock signal is a top-level port, and I want to make sure it arrives with a delay at a specific pin inside the hierarchy.
I tried the following (where hier1 is the module and CLK is the module's clock pin), expecting some buffers/delay cells to be inserted:

Code:
set_min_delay 0.5 -from CLK -to hier1/CLK

However, it doesn't seem to make a difference.

Any ideas on how to proceed?

Thank you
 

What is the motivation for this?
Do you want to make this clock synchronous with some input data?
If so, it is much easier to put delays on the data path than on the clock path.
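
As a rough SDC sketch of what I mean (the data port and pin names are made up for illustration, not from your design):

Code:
# Ask the tool to build at least 0.5 ns of delay into the data path;
# min-delay fixing then inserts delay cells/buffers on this path.
# DATA_IN and hier1/DIN are hypothetical names.
set_min_delay 0.5 -from [get_ports DATA_IN] -to [get_pins hier1/DIN]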
 

Yes, I need to make sure the data is captured at the correct time.
Unfortunately, I don't think I can add a delay on the data path, so is there a way to force a delay on the clock path?
 

Unfortunately, I don't think I can add a delay on the data path, so is there a way to force a delay on the clock path?
Then I can't help you.
I have made designs work by delaying the data signals to make them synchronous to the input clock. I have never touched the clock input.
 

Hi,

I don't have much experience with this...
But from my understanding, a "clock" is a dedicated net spread over the chip to provide the most uniform timing possible.
Thus a "delayed clock" at a dedicated node is quite unusual.

A delay is usually something one needs on standard (non-clock) signals. And this is how I'd try to solve it:
* generate a standard signal from the clock
* feed this signal through gates/delay lines to generate the desired delay
..but it won't work as a usual clock anymore
..depending on your chip, you may be allowed to use a standard signal as the clock input of a DFF

***
We need to know more details about what you want to achieve.
If your data edge is too close to the clock edge, so that you can't guarantee setup and hold timing, then the usual way is to use the opposite clock edge.

Klaus
 

Hello KlausST,
Thank you for the reply. The thing is that the data input arrives, let's say, 400 ps later than the "clock" signal.
My first idea was indeed to use the opposite clock edge in order to have enough setup time. However, the output of the sequential block needs to be computed in the same clock cycle. Using the opposite clock edge makes the circuit dependent on the duty cycle and can seriously lower the maximum operating frequency.
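
In SDC terms the situation looks roughly like this (just a sketch; CLK and DATA_IN are placeholder names and the 10 ns period is an assumption):

Code:
# The data port is declared to arrive 0.4 ns after the clock edge,
# so the tools see the reduced setup window on this capture path.
create_clock -name CLK -period 10.0 [get_ports CLK]
set_input_delay 0.4 -clock CLK [get_ports DATA_IN]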
 

However, the output of the sequential block needs to be computed in the same clock cycle.
Maybe in some special cases (like feedback regulation loops).
But in most cases, "delayed" processing does not reduce throughput. It just increases latency, which in most applications is not a problem.

There are many examples where delayed processing increases throughput, like a pipelined RISC processor.

Pipelining means delayed processing, with the result of increased throughput.
Most real-time processing systems use some kind of pipelining.

Klaus
 

In my case, a control block generates, at the rising edge of a "clock", an address that is fed to a RAM. The RAM accesses that address and propagates its contents to the output. This output needs to be read back by the control block in the same clock cycle.
So I think this is a special case indeed.
 

Hi,

Maybe. I still have my doubts.

If I understand correctly, you say:
Data is written into the RAM, and in the same clock cycle this data has to be read from the RAM. Is this possible?
For single-port RAMs it is not.
Even dual-port RAMs (and FIFOs) may have problems fulfilling this.

If there is no feedback loop, I strongly recommend using pipelining techniques.

As said: in case you need assistance, you should give more details about your application.

Klaus
 

Maybe I did not explain it well.
I am talking about a read operation from the RAM.
At the rising edge of the clock, the mentioned control block generates an address at its output. This address should be applied to the RAM in this clock cycle to fetch its contents. The fetched data (the contents of that address) must then be read back by the control block in the same clock cycle.
--- Updated ---

On second thought, I could read back the fetched data in the next clock cycle. But if I specifically wanted to read it back in the same cycle, how should I constrain my design?
 

Hi,
I am talking about a read operation from the RAM.
Now I see.

It's a RAM access. Thus it needs to be constrained according to the RAM's timing... at least I think so. I have no experience with RAM access in an ASIC.

Klaus
 

set_clock_latency may help you. Synopsys ICC reads this command to adjust the clock latency during CTS.
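
A minimal sketch, reusing the clock pin from the first post (the 0.5 ns value is just the figure tried there):

Code:
# Request ~0.5 ns of network latency at this clock pin; during CTS the
# tool tries to build the corresponding insertion delay.
set_clock_latency 0.5 [get_pins hier1/CLK]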
 

I tried to use this constraint, but it had no effect (it did not insert any delay units/buffers). Also, I thought that constraint was used to "emulate" different duty cycles.
 

In ICC2 you should use derive_clock_balance_points before CTS in order to respect set_clock_latency.
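
For example (a sketch only; I am not sure of the exact arguments, which may vary by tool version):

Code:
# ICC2: derive balance points before CTS so the specified clock
# latency is honored when the tree is built.
derive_clock_balance_points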
 

Here's a delay concept which may or may not be adaptable. It generates an output pulse immediately with the first input, so it conveys data or clock pulses without loss.

The diode discharges the capacitor immediately at the falling input edge. If you want a 50 percent duty cycle, omit the diode.
The 100 ohm resistor is unnecessary.

[Schematic: 1 MHz clock, delayed and shortened via pot, capacitor, diode, and logic gate]
 

Hello Brad,
Thank you for your input. However, I am looking for a completely digital solution that can be synthesized. If custom solutions were an option, I already have an analog delay line that could be used.
 

You are confusing a lot of concepts. There is no such thing as delaying a clock, because all delays are with respect to the clock itself. You should not change the reference (t=0); it doesn't solve anything.

What you have to do is:
- if you have a combinational loop, remove it (I suspect you do)
- slow down your clock frequency; if the performance is not acceptable, accept that you have to live with the one-cycle delay and pipeline it (see the sketch after this list)
- let CTS handle the clock automatically; it can shrink/expand clock cycles in specific areas of the circuit
- control CTS yourself; given how confused you are, I won't even tell you the commands to achieve this
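
If you accept the extra cycle but want the tools to know about it, a multicycle exception is one way to express it on the constraints side. A sketch only; the register names are hypothetical:

Code:
# Give the RAM read path two clock cycles for setup and keep the hold
# check at the launch edge. ctrl/data_reg* is a made-up name.
set_multicycle_path 2 -setup -from [get_clocks CLK] -to [get_pins ctrl/data_reg*/D]
set_multicycle_path 1 -hold  -from [get_clocks CLK] -to [get_pins ctrl/data_reg*/D]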
 

Thank you for your input.
The reason I was trying to do that is that the address of the SRAM is generated by another (external) circuit. This external circuit shares the same clock as the SRAM, though. The time it takes for the external circuit to produce the address (its output) is, let's say, 1 ns.
Therefore the clock arrives at the SRAM at t=0, but the address arrives at t=1 ns. Is it still wrong in this situation to try to delay the clock by, let's say, 1.2 ns in order to capture the correct address value?
 

Hi,

Yes, I call it wrong (in most cases).
If there is a second clock domain, you need to synchronize one clock to the other.
Synchronizing signals to the clock is a very important task in almost any FPGA/ASIC design.

Klaus
 
