Logic duplication and optimization


shaiko

This question is about writing "optimization-friendly" VHDL code.
Suppose my application requires adding two numbers and then using the result in two further, separate additions.
Possible ways to write the code:

Method 1:
Code:
signal a, b, c, d, result1, result2 : unsigned(7 downto 0);
begin
result1 <= c + a + b;
result2 <= d + a + b;
Method 2:
Code:
signal a, b, c, d, result1, result2 : unsigned(7 downto 0);
signal a_plus_b : unsigned(7 downto 0);
begin
a_plus_b <= a + b; -- a_plus_b will explicitly drive the following adders.
result1 <= c + a_plus_b;
result2 <= d + a_plus_b;
Logically, both methods describe the same functionality.
But, all else being equal, will they ALWAYS synthesize the same way?
If not, which is preferable?
 

Common subexpression elimination is a standard optimization in both software and hardware tools. You shouldn't have to write your code to help the optimizer.

That said, you should write your code to make it easy to read and maintain. If a subexpression has a meaningful name, then by all means use a recognizable intermediate signal. It also helps to have the subexpression in one place, so that if it has to change there is only one place to edit.
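
As a small illustration (the names below are invented for this note, not taken from the thread), a meaningful intermediate signal can document the design as well as factor out the shared term:

Code:
-- Invented example: frame_len is the shared subexpression, defined in one place.
signal header_len, payload_len, crc_len, pad_len : unsigned(7 downto 0);
signal frame_len, total_len, padded_len          : unsigned(7 downto 0);
begin
frame_len  <= header_len + payload_len;  -- named once; change it here if the framing changes
total_len  <= frame_len + crc_len;
padded_len <= frame_len + pad_len;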
 

Synth tools would probably do a better job of this than you, and you may end up preventing some optimisations.

As dave says, just make it clear and readable. I'd argue the first version is the most readable.
 

Agree 100% with the posters above. If you want to do manual optimizations, it's better to think a level above (i.e., pipelining and architecture options). This is really where your coding influences the outcome of the tool.

That being said... I remember a time, not so long ago, when naming your variables differently would yield different synthesis results. That was always fun to see.
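
For what "thinking a level above" could look like on the original example, here is a minimal sketch of a two-stage pipelined version (the entity, port names, and clocking scheme are assumptions made for this note, not something from the thread):

Code:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shared_add_pipe is
  port (
    clk              : in  std_logic;
    a, b, c, d       : in  unsigned(7 downto 0);
    result1, result2 : out unsigned(7 downto 0)
  );
end entity;

architecture rtl of shared_add_pipe is
  signal a_plus_b : unsigned(7 downto 0);
  signal c_r, d_r : unsigned(7 downto 0);  -- keep c and d aligned with a_plus_b
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- stage 1: the shared sum, plus registers for the other operands
      a_plus_b <= a + b;
      c_r      <= c;
      d_r      <= d;
      -- stage 2: the two dependent additions (results appear two cycles after the inputs)
      result1  <= c_r + a_plus_b;
      result2  <= d_r + a_plus_b;
    end if;
  end process;
end architecture;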
 

This may have changed since decades ago, but I had the opportunity in a training class to run synthesis (with an addition structure identical to the OP's example) using Leonardo, XST, and Synplify. In all of these tools, adding a simple pair of ( )'s around the part you wanted to isolate into a single output, to be fed into the last stage of addition, was sufficient to do exactly the same thing as the second example code with a_plus_b.

If the purpose of such optimizations is to improve the sharing of resources, you are probably better off just using the first version. Back when I attempted this, the goal was to show that you could force the sharing of resources by how you wrote your code (because the tools weren't that good at it on their own). I find this is usually unnecessary now, as the tools are much better at determining where sharing can be done and would automatically determine that a+b could be shared between the two results.

If you really want to see how the tools behave, write a test case of the options, run it through synthesis, and look at the results in a post-synthesis schematic.
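
For reference, a sketch of the parenthesised variant described above, written against the declarations from the question (whether modern tools still need the hint is exactly what such a test case would show):

Code:
-- Parentheses group the shared term, much as Method 2 does with the explicit a_plus_b signal.
-- Assumes the unsigned(7 downto 0) declarations from the question.
result1 <= c + (a + b);
result2 <= d + (a + b);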
 

I would probably use "Method 2".
I will be confident that the synthesis result is optimal.
I will be able to see the intermediate sum "a+b" in the simulator, and easily connect it to Chipscope/ILA/SignalTap/Reveal etc.
 

I would probably use "Method 2".
I will be confident that the synthesis result is optimal.
I will be able to see the intermediate sum "a+b" in the simulator, and easily connect it to Chipscope/ILA/SignalTap/Reveal etc.

I disagree. If there is ANY optimization, your intermediate sum, "a+b", is going to be optimized away. If you keep that intermediate sum for the sole purpose of seeing it in Chipscope, you're not going to get the best results.
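
If the intermediate signal really is needed for on-chip debug, the usual workaround is a tool-specific attribute that stops it from being optimised away. The sketch below assumes a Xilinx-style keep attribute (other tools use names such as syn_keep or preserve; check your tool's manual), and, as noted above, it deliberately blocks the sharing optimisation:

Code:
-- Tool-specific synthesis attribute (Xilinx-style shown; an assumption, verify against your tool).
signal a_plus_b : unsigned(7 downto 0);
attribute keep : string;
attribute keep of a_plus_b : signal is "true";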
 

Actually, I need to add something to this, as I have been bitten by the following before, and I was just writing some similar code. This is a code issue rather than a synthesis/optimisation issue.

Let's say you have the following code:


Code:
signal A, B : unsigned(3 downto 0);
signal C : unsigned(7 downto 0);

signal OP : unsigned(7 downto 0);
....

OP <= A + B + C;



This is perfectly legal code, and it probably looks fine, but it will give incorrect results in certain situations.
e.g. A = B = 8.

Here, OP = C, which is wrong: A + B gives a 4-bit result, so the carry into the 5th bit is lost and the intermediate sum is 0. This is because the implied grouping is OP <= (A + B) + C;
A slight change would give correct results:
OP <= A + (B+C);

This is because (B + C) is now evaluated with both B and C at 8 bits: the numeric_std "+" operator resizes the shorter operand to the length of the longer one before adding, and the outer addition then expands A to 8 bits as well.
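
To make the widths concrete, here is an illustrative trace with values chosen for this note (A = B = 8, C = 1), assuming numeric_std semantics:

Code:
-- A, B : unsigned(3 downto 0) := "1000";  C : unsigned(7 downto 0) := x"01";
OP <= A + B + C;    -- (A + B) is a 4-bit add: 8 + 8 wraps to 0, so OP = 0 + 1 = 1  (carry lost)
OP <= A + (B + C);  -- (B + C) is an 8-bit add: 8 + 1 = 9, then 8 + 9 = 17, so OP = 17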

In my current code, I was doing the following, which has the same problem because A + B is still evaluated at 4 bits:

Code:
OP <= A + B + x"08";

You could also resize one or both of A and B to get correct results:

Code:
OP <= resize(A, OP'length) + B + x"08";

But don't get caught out (as I have been in the past) by the following, where the 4-bit addition still happens before the resize:

Code:
OP <= resize(A+B, 8) + x"08";
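
As suggested earlier in the thread, a quick test case makes the difference obvious. Below is a small self-checking sketch of the resize pitfall (the entity, signal names, and values are invented for this note; it assumes numeric_std semantics):

Code:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity resize_order_tb is
end entity;

architecture sim of resize_order_tb is
  signal A, B     : unsigned(3 downto 0) := x"8";  -- 8 + 8 overflows 4 bits
  signal OP1, OP2 : unsigned(7 downto 0);
begin
  OP1 <= resize(A, OP1'length) + B + x"08";  -- widen first: the carry is preserved
  OP2 <= resize(A + B, 8) + x"08";           -- the 4-bit add wraps to 0 before the resize

  process
  begin
    wait for 1 ns;  -- let the concurrent assignments settle
    assert OP1 = to_unsigned(8 + 8 + 8, 8)
      report "widen-first form lost the carry" severity error;
    assert OP2 = x"08"  -- only the constant survives; the 8 + 8 carry is gone
      report "unexpected: resize-after-add kept the carry" severity note;
    wait;
  end process;
end architecture;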
 
