
Can we reduce EDA license cost by using larger, faster servers?

nasica88

Newbie
Newbie level 1
Joined
Apr 2, 2024
Messages
1
Helped
0
Reputation
0
Reaction score
0
Trophy points
1
Activity points
33
Sorry for my novice question. I am a mere systems engineer for EDA servers.

I am trying to figure out how to reduce EDA software license costs for tools like Cadence or Synopsys. To that end, are the following options viable? If not, can you please tell me why?

1) Just use the latest, fastest processors from Intel or AMD, or even alternative processors like ARM or IBM Power.
2) Just use larger servers with more CPU cores or more sockets; servers with up to 16 sockets are on the market.
3) Use MPI for parallel processing across multiple server boxes.

Again, thanks for your advice.

** As a follow-up question: how many CPU cores do you usually use for a single job? I know this question is too vague, but let me hear your usual practice.
 
The software license fee is how the supplier makes money from designing and supporting the software, and the supplier charges what the traffic will bear.

The cost has nothing to do with processor or server speed (which just determines how fast the software will run). Why do you think it would?
 
Novice or not, this question makes no sense. This is like asking if faster cars should cost less.
 
Comparing software license cost to hardware power is comparing apples and oranges. It is a mistaken assumption that faster hardware offers an opportunity for license cost reduction.

You can have as big a server as you like; the fees are based on the number of licensed users, or "seats".
 
The only way a faster server will help you is if the engineers using it have the discipline to give up a license once their task is complete. (That is rarely the case.)

Tailoring activity-based license check-in could help, but expect plenty of bitching from everybody "who was just about to..." when their license went away and now they're at the back of the queue.
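
If your tools are served by a FlexNet (lmgrd) license manager, that kind of idle check-in can be set in the vendor daemon's options file. A minimal sketch, assuming a FlexNet-style setup; the feature name below is hypothetical, the minimum timeout is typically 900 seconds, and whether a given application honors the timeout at all is vendor-specific:

# FlexNet options file, referenced from the DAEMON line of the license file.
# "spectre_token" is a hypothetical feature name.
TIMEOUT spectre_token 1800    # reclaim this feature after 30 min of idleness
TIMEOUTALL 3600               # default idle timeout for every other feature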

You are just a field hand on somebody else's plantation, and the rules are not set up to benefit you at all.

Now, one point of leverage might be to find the license-hogging activities and substitute tools that don't need license tokens. For example, get ngspice to run your Spectre netlists cleanly, and you can get as much simulation horsepower as you can find CPU cores. Find other ways than Calculator to get your info out of the results (like seeing whether you can force a SPICE-format output "rawfile", which would let you use various results browsers/viewers), or use output methods that produce a non-proprietary format, maybe "just the numbers" for the quantities you care about.
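
To make that concrete, here is a rough sketch of the glue involved. It assumes ngspice is on the PATH, a converted netlist named deck.cir (a hypothetical file name), real (non-complex) simulation data, and that ngspice has been told to write an ASCII rawfile (e.g. "set filetype=ascii" in .spiceinit), since the default format is binary:

import subprocess

# Run the deck in batch mode; -r names the rawfile to write.
subprocess.run(["ngspice", "-b", "-r", "out.raw", "deck.cir"], check=True)

# Minimal reader for the ASCII rawfile: a header, a "Variables:" block,
# then a "Values:" block with n_vars numbers per point.
with open("out.raw") as f:
    lines = f.read().splitlines()

n_vars = n_points = data_start = 0
var_names = []
i = 0
while i < len(lines):
    line = lines[i]
    if line.startswith("No. Variables:"):
        n_vars = int(line.split(":")[1])
    elif line.startswith("No. Points:"):
        n_points = int(line.split(":")[1])
    elif line.startswith("Variables:"):
        # Each following line reads "<index> <name> <type>".
        var_names = [lines[i + j].split()[1] for j in range(1, n_vars + 1)]
        i += n_vars
    elif line.startswith("Values:"):
        data_start = i + 1
        break
    i += 1

# Each point is "<point_index>" followed by n_vars values.
tokens = " ".join(lines[data_start:]).split()
points = [
    [float(x) for x in tokens[p * (n_vars + 1) + 1:(p + 1) * (n_vars + 1)]]
    for p in range(n_points)
]

print(var_names)    # "just the numbers" from here on,
print(points[0])    # no proprietary viewer or license token required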

You might even be able to work from open schematic tools to SPICE netlists on that side, and verify against the Brand X layout-extracted netlist in foundry-blessed Brand X LVS, eliminating seat usage for that task set. That presumes your expensive tools can read the industry-standard format and do their job.
 
The classical cost model for EDA is seat-based; slow or fast hardware makes little or no difference. Perhaps, and this is really unlikely, if you could reduce peak load, you could buy fewer licenses and still keep your engineers working happily. I would not count on it.
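
If you want to check whether you actually have that headroom, the license server can tell you. A rough sketch, assuming a FlexNet-style server whose lmstat output contains lines like "Users of <feature>: (Total of N licenses issued; Total of M licenses in use)"; the license file path is hypothetical, and the exact wording varies between daemon versions:

import re
import subprocess

# One snapshot of per-feature license usage, parsed from "lmutil lmstat -a".
USAGE = re.compile(
    r"Users of (\S+):\s+\(Total of (\d+) licenses? issued;"
    r"\s+Total of (\d+) licenses? in use\)"
)

def snapshot(license_file="/opt/flexlm/license.dat"):   # hypothetical path
    out = subprocess.run(
        ["lmutil", "lmstat", "-a", "-c", license_file],
        capture_output=True, text=True, check=True,
    ).stdout
    return {m[1]: (int(m[2]), int(m[3])) for m in USAGE.finditer(out)}

# Run this from cron every few minutes and log the results; the gap between
# peak "in use" and "issued" is the only real evidence of spare seats.
for feature, (issued, in_use) in snapshot().items():
    print(f"{feature}: {in_use}/{issued} in use")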

There are rumours of software-as-a-service offerings from big-name vendors with a cost model based on usage time. I don't know the details. These would be cloud-based and would run on Amazon/Google machines. Even if, in theory, you could save money by running things fast, you don't have much control over the hardware the cloud offers.
 
Cadence Allegro says multiple cores and a CPU with good RAM can increase the efficiency of the application, but I didn't see any change when I moved my machine from four cores to a dual-CPU, 28-core setup. Any suggestion as to where I am lacking?
 

Attachment: Capture1.PNG
1) This has absolutely nothing to do with the original post.
2) Do you actually have a particularly demanding project? If not, I wouldn’t expect to see much improvement by adding more cores. Having a Ferrari isn’t going to get you down the driveway much faster than a bicycle.
 
Yes, it's a 76-layer memory board with more than 36,000 connections; it's taking too long in auto-interactive routing in the Cadence Allegro PCB design editor.
 
My question is, when considering a new computer, would autorouting algorithms benefit more from many cores of a given speed, or from fewer cores at a higher speed? Does the Allegro PCB design editor's autorouter take advantage of multiple CPU cores?
 
Hi,

My question is, when considering a new computer...

Headline says:

Can we reduce EDA license cost by using larger, faster servers?

Again: LICENSE cost!

A new computer may save working hours ... but what is your idea about "LICENSE cost"?

***
In case you are not referring to the OP's question ... then please start your own thread.

Klaus
 
Any suggestion as to where I am lacking?
Big speed increases:

Windows: turn off Auto Updates and the Auto Update checker.

Macintosh: get a computer with a solid-state drive.

I read that software must specifically be written to take advantage of multiple cores or parallel processors. It doesn't do so automatically.
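
As a toy illustration of that point (nothing to do with Allegro's internals, just the general principle), the same workload only spreads across cores when the program explicitly hands it to workers; the route_net function below is a hypothetical stand-in:

from multiprocessing import Pool

def route_net(net_id):
    # Hypothetical stand-in for one independent chunk of work.
    return sum(i * i for i in range(200_000)) + net_id

if __name__ == "__main__":
    nets = list(range(64))
    # Serial: uses one core, no matter how many the machine has.
    serial = [route_net(n) for n in nets]
    # Parallel: only here do the extra cores get used, one worker per core.
    with Pool() as pool:
        parallel = pool.map(route_net, nets)
    assert serial == parallel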

Can you find a way to allocate more memory to a program? Can you bring up a monitor that tells you in real time how much memory a program uses, or what percentage of CPU cycles it consumes?

Macintosh has a utility called Activity Monitor. On Windows, pressing Ctrl-Alt-Delete brings up Task Manager.
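
If you would rather log those numbers than watch them, here is a small sketch using the third-party psutil package (pip install psutil); the process name "allegro" is a placeholder for whatever your task manager shows:

import psutil

# Find the first process whose name contains the (placeholder) string.
target = next(
    p for p in psutil.process_iter(["name"])
    if "allegro" in (p.info["name"] or "").lower()
)

# Log CPU and resident memory every 5 seconds until the process exits.
while target.is_running():
    cpu = target.cpu_percent(interval=5)      # percent of one core over 5 s
    rss = target.memory_info().rss / 2**20    # resident set size, in MiB
    print(f"cpu={cpu:.0f}%  mem={rss:.0f} MiB")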
 
