Meach_bzh
Newbie level 6
Hello,
I want to implement a 4th-order high-pass Butterworth filter in VHDL.
To design it, I use Octave, and once I find a suitable filter, I write down its coefficients:
a = {1 ; a1 ; a2 ; a3 ; a4}
b = {b0 ; b1 ; b2 ; b3 ; b4}
Then in my VHDL code I use the following equation with the coefficients above:
y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] + b3*x[n-3] + b4*x[n-4]
            - a1*y[n-1] - a2*y[n-2] - a3*y[n-3] - a4*y[n-4]
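For cross-checking, the same difference equation can be written as a small floating-point reference model, here sketched in Python (the coefficient values are placeholders, not the actual filter):

```python
def iir_direct_form_1(b, a, x):
    """Direct-form I IIR filter.

    y[n] = sum(b[k]*x[n-k] for k) - sum(a[k]*y[n-k] for k >= 1),
    with a[0] assumed to be 1. Samples before n = 0 are treated as 0.
    """
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# Example with a simple 1st-order filter (placeholder coefficients),
# fed with a unit impulse:
y = iir_direct_form_1(b=[1.0], a=[1.0, -0.5], x=[1.0, 0.0, 0.0, 0.0])
print(y)  # [1.0, 0.5, 0.25, 0.125]
```

Running the real Octave coefficients through a model like this in full double precision, and comparing against the fixed-point VHDL output sample by sample, helps separate an equation error from a precision/quantization effect.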
I simulate it with the same set of data I used in Octave, but the results are far from what I expected.
To check what I was doing wrong, I opened an Excel spreadsheet, applied the same equation to the same set of data, and got pretty much the same result as in my VHDL code.
The picture below shows the different signals, plotted in Excel.
For some reason, both the data calculated in Excel and the data extracted from the VHDL simulation track the expected output nicely until approximately the 50th sample, and then they go "crazy".
I know that the difference between the Excel data and the VHDL data is due to the limited precision of the signals in my VHDL. But I have no idea why, at some point, the output starts growing exponentially.
Does anyone have an idea why this is happening?
Thanks,
Meach