It is not a hard and fast rule; it depends. Let's say that 'a' is a three-bit number. Then the condition 'a=5' evaluates as 'a=101'. To evaluate 'a>=5', 'a' would have to be 5, 6 or 7, i.e. 'a=101', 'a=110' or 'a=111', which reduces to 'a=101' or 'a=11-'. It is likely that no matter what technology is used to implement the logic, roughly the same amount of resources would be used either way. If anything, in this instance, using >= could use more.
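As a quick sanity check of that reduction, here is a small Python sketch (the bits() helper name is just for illustration, not anything from the original discussion):

```python
# Enumerate all three-bit values and confirm that a >= 5 matches the
# reduced terms 'a=101' or 'a=11-' derived above.
def bits(a):
    """Return the three bits of a as (a2, a1, a0), MSB first."""
    return (a >> 2) & 1, (a >> 1) & 1, a & 1

for a in range(8):
    a2, a1, a0 = bits(a)
    # 'a=101' is (a2 and not a1 and a0); 'a=11-' is (a2 and a1)
    reduced = (a2 and not a1 and a0) or (a2 and a1)
    assert bool(reduced) == (a >= 5)
```

Note that the >= version still needs two product terms over three bits, which is why it is no cheaper than the plain equality here.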
Now change it slightly: instead of comparing to 5, compare with 4. Evaluating 'a=4' would be 'a=100'. To evaluate 'a>=4', 'a' would have to be 4, 5, 6 or 7, i.e. 'a=100', 'a=101', 'a=110' or 'a=111', which reduces to 'a=1--', or simply a(2). In this case, using >=4 rather than =4 would use fewer logic or routing resources, since evaluating the expression with = requires all three bits, but >= needs only one. So, depending on the exact comparison, there can be a resource advantage.
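The single-bit reduction can be checked the same way (again a sketch; msb() is an illustrative name):

```python
# Confirm that for a three-bit a, a >= 4 depends only on the MSB a(2),
# while a = 4 must examine all three bits.
def msb(a):
    return (a >> 2) & 1

for a in range(8):
    assert bool(msb(a)) == (a >= 4)              # one bit is enough for >= 4
    all_bits = ((a >> 2) & 1, (a >> 1) & 1, a & 1)
    assert (a == 4) == (all_bits == (1, 0, 0))   # = 4 needs every bit
```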
Another reason for using >= rather than = is to cover 'impossible' conditions. Let's say that normally 'a' can only range from 0 to 5 and 'should' never reach 6 or 7. But what if it does? If you use >=, those 'impossible' conditions are covered; if you use =, they are not, and how does the system recover from that condition?
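A small sketch of that defensive-coverage point (the done_eq/done_ge names are made up for illustration):

```python
# If 'a' glitches into the 'impossible' values 6 or 7, a test written
# with = never fires, but the >= version still catches it.
def done_eq(a):
    return a == 5   # misses 6 and 7 entirely

def done_ge(a):
    return a >= 5   # also covers the 'impossible' 6 and 7

for a in (6, 7):
    assert done_ge(a) and not done_eq(a)
```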
Kevin Jennings