Hi everyone,
Firstly, I'm not sure this post fits this forum category, so I apologise if it doesn't. Now to my questions: I've recently built a current-sensing application for an energy meter. It includes a Hall-effect current sensor, an anti-aliasing filter, a level shifter, a microcontroller (dsPIC30F4011) and an LCD. My target measurement error for this application is ±2% of the actual current I read from the oscilloscope.
Here's my problem: with my rated current of 7 A RMS, the meter behaves as I expect over the 4 A RMS to 7 A RMS range and sometimes gives a very accurate reading (at times close to 100%). Below this range the error increases, reaching as much as 10%. If I try to correct the low range using proportionality factors in my code (i.e. adjusting the multiplication/scaling factors), the low range improves, but as soon as the current goes back up into the 4 A to 7 A RMS range, the same error pattern reappears there instead. A rough sketch of what I mean by adjusting factors in code is below.
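My understanding is that a single multiplication factor can only correct a gain error, so if part of the error is an offset (e.g. from the Hall sensor or the level shifter), it will dominate at low currents, and fixing one end of the range pushes the other end off. The sketch below shows the kind of thing I've been trying, but as a piecewise-linear (multi-point) calibration instead of one global factor. The calibration points, names and values are made up for illustration, not from my actual firmware.

#include <stdint.h>

/* One calibration point: raw RMS value from the ADC pipeline vs. the
   reference current read from the oscilloscope (in mA).               */
typedef struct {
    uint16_t raw_rms;   /* computed RMS of ADC samples                 */
    uint16_t ref_mA;    /* reference current at that raw reading       */
} cal_point_t;

/* Hypothetical calibration table, measured at a few load currents.
   Entries must be sorted by raw_rms, ascending.                       */
static const cal_point_t cal_table[] = {
    {  50,  500 },   /* 0.5 A */
    { 210, 2000 },   /* 2 A   */
    { 640, 4000 },   /* 4 A   */
    { 980, 7000 },   /* 7 A   */
};
#define CAL_POINTS (sizeof(cal_table) / sizeof(cal_table[0]))

/* Piecewise-linear interpolation between calibration points, instead of
   one global gain factor. Readings outside the table are clamped to the
   first/last calibration point.                                        */
uint16_t raw_to_mA(uint16_t raw)
{
    uint8_t i;

    if (raw <= cal_table[0].raw_rms)
        return cal_table[0].ref_mA;

    for (i = 1; i < CAL_POINTS; i++) {
        if (raw <= cal_table[i].raw_rms) {
            uint32_t span_raw = cal_table[i].raw_rms - cal_table[i - 1].raw_rms;
            uint32_t span_mA  = cal_table[i].ref_mA  - cal_table[i - 1].ref_mA;
            uint32_t off      = raw - cal_table[i - 1].raw_rms;
            return cal_table[i - 1].ref_mA
                   + (uint16_t)((off * span_mA) / span_raw);
        }
    }
    return cal_table[CAL_POINTS - 1].ref_mA;
}

The idea would be that each segment gets its own effective gain and offset, so the low-current end could be corrected without disturbing the 4 A to 7 A range, but I'd like to understand whether the underlying behaviour is normal before patching over it like this.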
Here are my questions:
[1] Is this kind of behaviour typical for energy meters?
[2] Specifically, are energy meters designed around a particular load (i.e. a fundamental load type) and a specific current range within which they're reliable, such that their accuracy falls off when a different type of load, or a current outside that range, is connected?
I've tried looking up literature on this and have come up with nothing; any response will be greatly appreciated.