FPGA Cost Model Help Please!

fgt4w
Hi Everyone,

I'm quite new to FPGA development, and I was hoping you guys could help me out. I've been given a project to take an old ASIC development/production cost model and adapt it to estimate FPGA costs. I've done some basic research, but I still have many questions.

1. My first main issue is how to measure the "size" of a design. The ASIC model uses number of gates. From what I've read, active logic cells are the sizing metric used in the FPGA community, and there is no perfect formula to convert active logic cells to gates. I don't need a "perfect" solution - I need a pretty good solution that gets me in the ballpark. Are there some rules of thumb I could use to compare ASIC and FPGA design sizes? I also read that the 'size' of an active logic cell can vary from vendor to vendor. Maybe I need to find a conversion factor for each vendor? Anything to steer me in the right direction is appreciated.

2. Generally speaking, what is the development cost difference between an FPGA project and a standard cell ASIC project of the same 'size'? How much of this cost difference is due to a difference in scope (activities that must be performed for ASIC projects but not for FPGAs, e.g. mask set generation), and how much is due to a difference in complexity (the difficulty of implementing the same thing in a normal ASIC CAD tool vs. a normal FPGA CAD tool)?

3. I would guess that anyone using a high-level cost model like this could easily get a vendor quote for FPGA purchase prices, so I wasn't going to create anything to estimate them, as they're already known. Is this the right approach?

4. I've covered the purchase price, the design-phase effort, downloading the design to the FPGA, and testing it. Am I missing any other significant costs?

Thanks so much for any help you can offer
 

1. There really isn't any good conversion from ASIC gates to FPGA slices, ALMs, or LUTs. The conversion is heavily design dependent. You could have two designs that both fit in an FPGA with the same LUT and register utilization but map to dramatically different numbers of equivalent ASIC gates. The problem stems from the fact that a LUT can implement any function of the number of inputs it has, so we can implement a simple 2-input AND gate or a 6-input registered equation in the same LUT/FF pair. These two will have dramatically different equivalent ASIC gate counts. The best suggestion I can make is to take a sampling of design files and run them through the FPGA tools to get a feel for how many LUTs/ALMs are used. Here is a thread on this subject on Xilinx's forum that is old but still valid: http://forums.xilinx.com/t5/Virtex-Family-FPGAs/Logic-cell-to-Gate-count/td-p/5459
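
To make the "design dependent" point concrete, here is a minimal Python sketch of the kind of bracketing estimate you could pull from a utilization report. The gates-per-LUT and gates-per-FF factors are invented placeholders, not vendor data - which is exactly why any single conversion factor is unreliable:

```python
# Illustrative only: bracket the equivalent ASIC gate count from an FPGA
# utilization report. The per-LUT and per-FF factors below are invented
# placeholders, not vendor data.

def gate_count_range(luts_used, ffs_used,
                     gates_per_lut=(2, 20),  # 2-input AND ... wide function
                     gates_per_ff=6):        # rough gates per flip-flop
    """Return a (low, high) equivalent-gate estimate for a mapped design."""
    low = luts_used * gates_per_lut[0] + ffs_used * gates_per_ff
    high = luts_used * gates_per_lut[1] + ffs_used * gates_per_ff
    return low, high

print(gate_count_range(luts_used=12000, ffs_used=9500))
# -> (81000, 297000): a 3-4x spread before design style is even considered
```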

Given that it's hard to directly convert ASIC gates to FPGA LUTs, an easier method for sizing an FPGA is to determine the number of registers and memory elements of a design and then, depending on how heavily pipelined the design is, assume between 1 and 3 LUTs per register as the number of LUTs feeding those registers. 1 LUT between registers gives the maximum Fmax; 3 LUTs between registers gives a slower Fmax.
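
As a sketch of that register-first method (the LUTs-per-register factor is the judgment call you supply; nothing here comes from the vendor tools):

```python
# Minimal sketch of the register-first sizing method: count registers,
# then assume 1 to 3 LUTs per register depending on pipelining depth.

def estimate_luts(register_count, luts_per_register):
    """luts_per_register: ~1.0 for heavily pipelined (high Fmax) designs,
    up to ~3.0 for lightly pipelined (lower Fmax) designs."""
    if not 1.0 <= luts_per_register <= 3.0:
        raise ValueError("rule of thumb only holds for factors of 1 to 3")
    return round(register_count * luts_per_register)

print(estimate_luts(register_count=8000, luts_per_register=2.0))  # -> 16000
```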

2. Don't forget that ASICs require things like test vector generation, scan chain insertion, and layout (to generate masks, unless you have an internal team that does this instead of the vendor). FPGAs will just have a larger purchase cost per device, which you should account for in the ASIC vs. FPGA equation. The volume at which the NRE + device costs cross over from favoring an FPGA to favoring an ASIC has pushed out further with the smaller geometries used in, say, Cyclone V or Spartan 6 devices.
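
The crossover arithmetic itself is simple; here's a sketch with entirely invented prices just to show the shape of the calculation:

```python
# NRE + unit-cost crossover between an ASIC and an FPGA. All figures are
# invented for illustration; plug in real quotes.

ASIC_NRE = 1_500_000   # masks, test vectors, layout, etc.
ASIC_UNIT = 8          # per-device cost at volume
FPGA_NRE = 50_000      # no mask set; mostly board/tooling
FPGA_UNIT = 120        # larger per-device purchase cost

def total_cost(nre, unit, volume):
    return nre + unit * volume

# Volume above which the ASIC becomes cheaper overall:
crossover = (ASIC_NRE - FPGA_NRE) / (FPGA_UNIT - ASIC_UNIT)
print(f"ASIC pays off above ~{crossover:,.0f} units")  # ~12,946 units
```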

3. You still need to cost out the purchase price of the FPGA, as volume discounts and price reductions as the product matures are pretty normal.

4. Depending on the number of devices you'll require, there could be costs associated with having to buy X devices as a minimum order for an ASIC, and with keeping extra devices in inventory for possibly a long period of time. FPGAs at least have the benefit of being purchasable in very low quantities (even 1 part, though you'll pay a lot more).

regards,
 
Another "benefit" to FPGAs is their flexability. You could build a generic board with features to suit several customers and build different firmware for each customer, or "future proof" your design, getting a basic model out faster and upgrade the design with extra features later. Obviously this comes at the cost of power, but for small runs, it makes perfect sense.
 

Thanks for your replies, they were very helpful.

So it seems that an ASIC gate is a pretty good sizing metric: 100 gates in "design A" are roughly equivalent to 100 ASIC gates in "design B".
The only reason design A would take more time to implement than design B is that one is more "complex" - it requires more thought to achieve a solid understanding of how it should work, how you can implement it, and how it should work with the rest of the design.

If I understand correctly, FPGAs are different. Let's talk about two different FPGA designs that, if they were implemented on an ASIC, would need the same number of ASIC gates. Let's also say they are at the same level of "complexity", or difficulty to wrap your head around.

Are you saying these two designs still might require very different numbers of logic cells to implement? And that the reason is that one could be implemented with many simple functions (each of which uses a logic cell but doesn't take full advantage of all the inputs or outputs of the LUT), while the other design might use fewer logic cells but take full advantage of each LUT? In other words - are FPGA active logic cells a terrible sizing metric for a design?

My conundrum is this:
I know there are models out there that use active logic cells as the sizing metric for estimating the cost of a proposed FPGA design. I know the intended users of the cost model I'm building have said that active logic cells are what they want to see as the sizing metric. And yet all my research seems to conclude that active logic cells are an awful sizing metric. What am I missing?

Again, thanks so much for the help - you guys are life savers.
 

The total number of active logic cells doesn't tell the whole story. If you have a design with 100 cells used compared to 500 cells used, the second still might be a better design. You also need to take into account memory and multiplier usage, and the routing between them. Often it's not the logic usage that's the problem, it's the memory or multiplier usage (and the timing differences). But it could still fit in the same chip, so the only cost is the design time. I think you're trying to be too general.

It is very difficult to estimate the logic usage on an FPGA from a given set of source code without compiling it. It is far easier to work out the memory and multiplier requirement (which is usually the limiting factor anyway).
 
The total number of active logic cells doesn't tell the whole story. If you have a design with 100 cells used compared to 500 cells used, the second still might be a better design.

As tricky states, the number of active cells doesn't tell the whole story. In his example, the 100-cell design may have been restricted to the minimum number of cells needed to implement the design (e.g. very restrictive partitioning). The 500-cell implementation might have no partitioning, and the tools may have replicated logic and used cells as feed-throughs to improve performance. This would result in the 100-cell design performing much worse than the performance-optimized 500-cell design.

So it's usually just easier to look at memory, multipliers, IO pins (voltage banks), and number of transceivers to determine if a given FPGA is a good target fit.

My conundrum is this:
I know there are models out there that use active logic cells as the sizing metric for estimating the cost of a proposed FPGA design. I know the intended users of the cost model I'm building have said that active logic cells are what they want to see as the sizing metric. And yet all my research seems to conclude that active logic cells are an awful sizing metric. What am I missing?
You're not missing anything. Active logic cells are a terrible sizing metric on their own. To say anything useful, the number needs a breakdown of the cells with actual logic fully utilized (both LUT and FF in use) versus the percentage of cells used only as feed-throughs, which may point to routing congestion if that percentage is large. To be meaningful the metric also has to include the maximum clock frequency the design supports given the number of logic cells used. If you want to give a usable metric, I would give both a minimum-resource implementation with the clock frequency it reaches (perhaps without using timing constraints) and an implementation done for speed (use timing constraints and push the clock frequency to the point where it starts to have issues meeting timing). I'm sure you'll see a large difference in the number of active logic cells between the two implementations.
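
As a concrete (made-up) example of reporting the metric that way - two implementation corners, each with its cell count and achieved Fmax:

```python
# Sketch: report active logic cells only as (cells, Fmax) pairs, one per
# implementation corner. All numbers are invented.

from dataclasses import dataclass

@dataclass
class ImplPoint:
    corner: str
    logic_cells: int
    fmax_mhz: float

report = [
    ImplPoint("min area, no timing constraints", 14200, 95.0),
    ImplPoint("for speed, constrained", 19800, 210.0),
]

for p in report:
    print(f"{p.corner}: {p.logic_cells} cells @ {p.fmax_mhz} MHz")
```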

Regards
 
OK, with all the reading I've been doing and with your crucial help, I'm starting to get a clearer picture of how this works.

Just to be clear: for my project I absolutely have to come up with SOMETHING that will convert active logic cells into a reasonable (not perfect) gate count. I can also ask for any other necessary information to enable this conversion, like max clock speed or a factor for how heavily pipelined the design is. I can also require that they normalize the definition of a logic cell, similar to how these guys do it (http://www.1-core.com/library/digital/fpga-logic-cells/). I can ask what the average utilization of the logic cells will be (since a logic cell can be a tiny 2-input NAND or a complicated 6-input logic function, or anywhere in between). I can also factor IO pins into the equation.
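
A sketch of what that normalization step could look like (the factors below are hypothetical placeholders; real ones would have to come from the 1-core.com comparison or actual benchmarks):

```python
# Convert each vendor's native unit into a common "normalized logic cell".
# Factors are hypothetical placeholders, not published data.

NORMALIZED_CELLS_PER_UNIT = {
    "lattice_lut4_ff": 1.0,  # take a 4-input LUT + FF pair as the base unit
    "altera_alm": 2.5,       # an ALM packs more logic than one LUT4+FF
    "xilinx_slice": 4.0,     # a slice holds several LUT/FF pairs
}

def normalized_cells(native_count, unit):
    return native_count * NORMALIZED_CELLS_PER_UNIT[unit]

print(normalized_cells(10000, "altera_alm"))  # -> 25000.0
```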

I just need to figure out how to put it all together.

I know of a similar model (which I can't access or use, but I've seen it before) that asks only for the number of logic cells (not sure if it's normalized or not), clock speed, and IO pins. Using only these inputs, they've developed a "size" number that can be compared to ASIC gates. Does this seem at all possible, or is this truly an impossible approach?

Thanks for putting up with my silly questions :)
 

I know of a similar model (which I can't access or use, but I've seen it before) that asks only for the number of logic cells (not sure if it's normalized or not), clock speed, and IO pins. Using only these inputs, they've developed a "size" number that can be compared to ASIC gates. Does this seem at all possible, or is this truly an impossible approach?
Well, the best way to do this would be to have a bunch of designs that have been implemented for area and for speed, in multiple FPGA vendors' offerings and in ASICs, and to use them to tune the "normalization" factors. Then, depending on some metric of complexity (or something like it), you could select the proper normalization factor to get a better estimate.
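
In code terms, the calibration could be as simple as a least-squares fit through the origin over those benchmark designs (data points invented):

```python
# Fit a single gates-per-normalized-cell factor from benchmark designs
# implemented both as FPGA and ASIC. The (cells, gates) pairs are invented.

benchmarks = [(5000, 60000), (12000, 180000), (30000, 330000)]

# Least-squares slope through the origin: k = sum(x*y) / sum(x*x)
num = sum(cells * gates for cells, gates in benchmarks)
den = sum(cells * cells for cells, _ in benchmarks)
gates_per_cell = num / den
print(f"~{gates_per_cell:.1f} equivalent ASIC gates per normalized cell")
# Separate fits per vendor and per "complexity bucket" would refine this.
```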

Thanks for putting up with my silly questions :)
Not silly questions at all. In fact, it's refreshing to see a posted question where the person asking has actually thought about their question and done some research prior to posting. :-D

Hey you could have just asked something like... I need equation dat converts FPGA size two ASIC gates. send me now. thanxs. Of course you wouldn't have gotten any responses. ;-)
 
Hey Guys,

It's been a month since I last posted. I've found a lot of people suggesting VHDL lines of code as a good sizing metric for the entire development effort, from requirements definition all the way through testing. I hear that many designers have a good feel for how many lines of code a project will need from an early stage. Of course the effort per line of code depends on a lot of things. I was thinking of having cost/effort estimation equations that look something like this:

Functional Spec = f(VHDL LOC, Logic Complexity)
Architectural and Detailed Design = f(VHDL LOC, Logic Complexity)
Simulation, Verification, Implementation = f(VHDL LOC, Logic Complexity, Expected Timing Issues (based on clock speed, how heavily pipelined the design is, etc.))
Floorplanning, Place and Route = f(VHDL LOC, Expected Timing Issues)

Those would be the basic cost drivers for each activity, though there are many other drivers that I'll fit in somewhere (amount of reuse, experience of the team, etc.). Does this seem like a good approach? A rough sketch of how the pieces might fit together is below.
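
To make the shape of those equations concrete, here is a minimal sketch in the spirit of COCOMO-style LOC models; every coefficient and input below is a placeholder I'd calibrate against completed projects:

```python
# Per-activity effort model: hours = base * KLOC * complexity * timing risk.
# All coefficients and inputs are invented placeholders for calibration.

def effort_hours(kloc, complexity, timing_risk, hours_per_kloc):
    return hours_per_kloc * kloc * complexity * timing_risk

kloc, cplx, timing = 12.0, 1.3, 1.2  # invented project inputs

total = (
    effort_hours(kloc, cplx, 1.0, 8)        # functional spec
    + effort_hours(kloc, cplx, 1.0, 25)     # architectural/detailed design
    + effort_hours(kloc, cplx, timing, 45)  # simulation/verification/impl.
    + effort_hours(kloc, 1.0, timing, 12)   # floorplanning, place and route
)
print(f"~{total:,.0f} engineering hours")
```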
 
