
Basis Functions in HFSS

Status
Not open for further replies.

w123jsc

I am using HFSS v14 to calculate transmission (S21) through a wire grid polarizer in the mm-wave region. The gap width between the wires is sub-wavelength, and the period of the polarizer is also sub-wavelength. I am seeing a huge difference in the calculated transmission when I switch between first order and mixed order basis functions. The discrepancy is as much as 25 dB! I see the discrepancy in both s- and p-polarizations, but it is worse in p-polarization. The discrepancy also seems to get worse with increasing frequency and with shrinking geometry.

Has anyone seen this before? Can anyone explain it?

Also, the problem seems to get worse with the Ansoft TAU mesh method; it is better with the Classical mesh method.

How do I know which of the results to trust? This is diminishing my confidence in HFSS.

If you can help, please comment.

v/r

John
 

I suspect this is an edge effect combined with resonance. If your geometry has sharp corners or very fine features, those are effectively singular points: the field scattered or radiated from them contains many modes, most of which are evanescent and decay very quickly. However, when another object sits very close to such a sharp feature, the evanescent modes still matter, because they have not decayed enough and start to interact with the nearby feature, creating even more field modes in between. This is especially true when your geometry is much smaller than the source wavelength.

The mesh density is based on the excitation wavelength (normally 10 points per wavelength, PPW) and on the material properties (since epsilon and mu shorten the wavelength). You need a denser mesh or higher-order basis functions on the gap or fine features in your problem to allow a sufficient number of unknowns there to model the field. However, this may cost you a lot of solution time and memory, since this kind of problem is known to converge slowly.
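The evanescent-decay picture above can be sketched numerically. This is only a hedged illustration, not an HFSS computation: it models the sub-wavelength gap as a parallel-plate-like channel whose lowest mode is below cutoff and decays as exp(-alpha*z), with alpha = sqrt((pi/a)^2 - k0^2); the gap width, frequency, and spacing below are made-up example values.

```python
import math

def evanescent_decay(gap_a, wavelength, distance):
    """Amplitude ratio of the lowest gap mode after travelling
    `distance` through a gap of width `gap_a` (all lengths in metres).
    Below cutoff the mode decays as exp(-alpha * distance)."""
    k0 = 2 * math.pi / wavelength         # free-space wavenumber
    kc = math.pi / gap_a                  # cutoff wavenumber of the gap
    if kc <= k0:                          # gap is not sub-wavelength: mode propagates
        return 1.0
    alpha = math.sqrt(kc ** 2 - k0 ** 2)  # decay constant (1/m), ~ pi/gap_a here
    return math.exp(-alpha * distance)

# Example: 94 GHz (mm-wave), 50 um gap, a neighbouring feature one gap-width away
wavelength = 3e8 / 94e9                   # ~3.2 mm
ratio = evanescent_decay(50e-6, wavelength, 50e-6)
print(f"amplitude remaining after one gap-width: {ratio:.3f}")  # a few percent
```

Because alpha is approximately pi/a when the gap is much narrower than the wavelength, the fields in the gap vary on the scale of the gap width rather than the wavelength. That is why a mesh sized from the excitation wavelength alone badly under-resolves the gap, and why features within about one gap-width of each other still couple noticeably.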
 

I don't understand your application; it seems odd. What is your signal frequency?

Look up quarter-wave transformers. If you have a lot of sub-wavelength wires in
series/parallel, you are going to have very poor gain.
 

This seems to be a mesh problem. Use a finer mesh and see if the results change. If not, double-check your reference calculations!
 

simuboss, yes, using a finer mesh in the small aperture volumes did change the results. I made a virtual object to cover the subwavelength gaps, which let me set the maximum mesh size inside it and get excellent discretization in the gap volume. The results now seem to make some sense.

However, two questions remain for me.

1. Why does changing the basis function type cause such a large difference in the results?
2. In a narrow subwavelength slot aperture, how narrow can one make the slot before the results calculated by HFSS, or any FEM solver, can no longer be trusted? Note, I have read that MoM methods are better suited to those types of geometries.

v/r

John
 

That's great, John. For your questions I don't have an exact answer, but discrepancies like these teach you not to trust the simulations blindly! In that world, mesh size plays the key role, and you might even see two opposite phenomena by changing it, so you always need to be careful. The finer the mesh, the closer the results converge; however, every FEM has a limit on minimum mesh size, and the trade-off is simulation cost. A good rule of thumb to stick to might be the Nyquist theorem.

And I don't have any experience with the method of moments, sorry! ;)
 

w123jsc:

Per your questions:
1. Changing the order of the basis functions is critical for accuracy. A 0th-order basis defines only 6 basis functions over a single tetrahedron, which means that within that tet there are six unknown vectors describing the EM field: you can only adjust six coefficients to model the field solution. With a higher-order basis, e.g. hierarchical curl-conforming, a 1st-order basis has around 36 (I can't remember exactly) vector basis functions to describe the EM field, so it can capture more field patterns within that tet. This is why it is more accurate. Of course, if you break your original tet into many smaller tets but stick with the 0th-order basis, you may see the same effect. This is a well-known topic called "hp-refinement" in FEM.

2. Typically, you want your elements to be much smaller than the wavelength in your problem. For example, if your wavelength is 1 m, a suggested maximum mesh size would be 0.1 m, so that at least 10 elements describe the field over 1 m. However, fine features and small gaps cause trouble: they create a lot of evanescent modes, which need many degrees of freedom (DoF) to describe accurately. In such cases, especially when these evanescent modes have a critical impact on the results, you have to refine the mesh or raise the basis order there, at the cost of more computation time.

3. MoM faces the same problem. I do not have data to compare the two methods, but it is similar for sure.
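The wavelength-based sizing rule in point 2, together with the material shortening mentioned earlier in the thread, can be sketched in a few lines. This is a hedged example, not an HFSS default for every solve: the factor of 10 points per wavelength and the material values are illustrative assumptions.

```python
import math

def max_element_size(freq_hz, eps_r=1.0, mu_r=1.0, points_per_wavelength=10):
    """Suggested maximum mesh element size: a fraction of the wavelength
    *inside the material*, since eps_r and mu_r shorten the wavelength."""
    c0 = 299_792_458.0  # speed of light in vacuum (m/s)
    lam = c0 / (freq_hz * math.sqrt(eps_r * mu_r))
    return lam / points_per_wavelength

# 1 m free-space wavelength (~300 MHz) -> 0.1 m elements, as in the example above
print(max_element_size(299_792_458.0))             # 0.1 m
# The same frequency in a dielectric with eps_r = 4 halves the element size
print(max_element_size(299_792_458.0, eps_r=4.0))  # 0.05 m
```

Note that this rule only captures the propagating part of the field; as the posts above explain, evanescent fields near gaps and edges vary on the feature scale, so a wavelength-based size is a floor, not a guarantee.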
 


I have been experimenting with HFSS. I have a design in which I held all parameters constant while varying the basis function. What I found is that mixed order or second order give the fewest elements, first order gives more, and zero order gives the most. Therefore, zero order gives the smallest elements, and it is in fact recommended by Ansoft in the HFSS Advanced Course, Chapter 2 (Solution Process Review), for designs with structures that are small compared to the wavelength.

You are correct that zero-order basis functions have only 6 unknowns and use linear interpolation. First-order basis functions use 20 unknowns with quadratic interpolation, and second-order basis functions use 45 unknowns with cubic interpolation. However, since each zero-order element is smaller than a higher-order element, you can still get the same discretization coverage if you constrain the meshing. That is what I ended up doing.
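The trade-off described here, many small zero-order elements versus fewer large high-order elements, can be sketched as simple bookkeeping. The per-element counts 6/20/45 come from the discussion above; treating total unknowns as elements times unknowns-per-element is a rough upper bound, since in a real FEM assembly neighbouring tets share edge and face DoFs.

```python
# Unknowns per tetrahedron for each basis order, from the discussion above
UNKNOWNS_PER_TET = {0: 6, 1: 20, 2: 45}

def estimated_dof(num_elements, order):
    """Rough upper bound on total unknowns: ignores DoFs shared
    between neighbouring tets, so real counts are lower."""
    return num_elements * UNKNOWNS_PER_TET[order]

# hp-refinement trade-off: refine h (more, smaller tets at order 0)
# or raise p (fewer, larger tets at a higher order)
coarse_mesh, fine_mesh = 10_000, 75_000
print(estimated_dof(fine_mesh, 0))    # 450000: many small zero-order tets
print(estimated_dof(coarse_mesh, 2))  # 450000: few large second-order tets
```

The two strategies land on a similar unknown count here by construction; which one converges faster depends on the problem, which is why small-feature geometries like this one can favour zero order plus constrained mesh size.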

For this problem, we ended up using zero-order basis functions. In addition, I made virtual objects to enclose the really small subwavelength apertures. I was then able to set a maximum element size in each virtual object for really fine meshing without affecting the meshing of the rest of the design. I also set the lambda refinement to 0.1 wavelengths, which did affect the meshing of the rest of the design. Perhaps this step was unnecessary, but I wanted to make sure that the change in mesh density around the critical apertures wasn't too abrupt. Finally, I used the Classical mesh algorithm, as it is preferred when there is a large variation between the largest and smallest details in the geometry.

The results from the calculations seem to make some sense, and I really do not know how else to approach this kind of problem with FEM. We are publishing some of the calculation results in a journal paper. The simulation results agreed with the experimental results qualitatively in all regards, and mostly agreed quantitatively.

v/r

John
 

Hi John:

Thanks for sharing the progress and validating my statement. If you want to go deeper, there is another topic called "ill-conditioning". In practical terms, this shows up when you have very large objects combined with very tiny features such as thin wires and apertures. Your mesh will have very large elements on the smooth, large objects and very tiny elements capturing the small, fine features. This makes the max/min element size ratio of your mesh very large, and the system matrix assembled from such a mesh will likely be ill-conditioned, so the solution will be less accurate. So you should not only capture the fine features with small elements, but also make sure the element min/max ratio does not leave the system with too poor a condition number. Of course, using small elements everywhere would solve the problem, but sometimes the solution cost is prohibitive.
 
