Video in-line interpolation


Alexzabr (Newbie), joined Jun 15, 2008

Hello, this is my first posting here.
I'm facing the task of converting streaming video from 320x240 RGB, 50 fps progressive, into a standard D1 stream compatible with industry-standard video encoders.
I think I have a good understanding of the concept (I used to work with D1 in the past), however certain aspects are new to me.
A note: the implementation targets an FPGA (with DSP blocks available).

The first stages of gamma correction followed by RGB-to-YCrCb conversion are clear to me; the 320-to-720 interpolation is my concern.
D1 (BT.656) prescribes 720 Y samples per line and the same number of interleaved colour samples (360 Cr and 360 Cb), which constitutes a 4:2:2 CbYCr stream ordered Cb -> Y -> Cr -> Y -> ... (one Y sample and half a Cb/Cr pair per pixel).
After converting the 320 RGB pixels to YCrCb I obtain 320 samples of 4:4:4 data; discarding every second Cb and Cr then yields 4:2:2, i.e. 320 Y samples and 160 samples of each colour component.
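As a sanity check, here is a small Python model of these first stages (post-gamma), using the standard BT.601 studio-swing conversion coefficients; the function names and the example line at the end are illustrative only:

```python
# BT.601 RGB -> YCbCr conversion followed by 4:4:4 -> 4:2:2 chroma
# decimation (dropping every second Cb/Cr pair), one line at a time.

def rgb_to_ycbcr(r, g, b):
    """BT.601 studio-swing conversion for 8-bit gamma-corrected RGB."""
    y  =  16 + ( 65.738 * r + 129.057 * g +  25.064 * b) / 256.0
    cb = 128 + (-37.945 * r -  74.494 * g + 112.439 * b) / 256.0
    cr = 128 + (112.439 * r -  94.154 * g -  18.285 * b) / 256.0
    return round(y), round(cb), round(cr)

def line_444_to_422(rgb_line):
    """Convert one line of (R, G, B) pixels to 4:2:2: full-rate Y,
    half-rate Cb/Cr (keep the chroma of every even-indexed pixel)."""
    ys, cbs, crs = [], [], []
    for i, (r, g, b) in enumerate(rgb_line):
        y, cb, cr = rgb_to_ycbcr(r, g, b)
        ys.append(y)
        if i % 2 == 0:          # discard every second chroma pair
            cbs.append(cb)
            crs.append(cr)
    return ys, cbs, crs

# A 320-pixel line yields 320 Y, 160 Cb and 160 Cr samples:
line = [(128, 128, 128)] * 320
ys, cbs, crs = line_444_to_422(line)
print(len(ys), len(cbs), len(crs))  # 320 160 160
```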
The video content in my particular case is B&W, so I need to preserve the B&W (luma) quality, while the colour information (a computer-generated overlay on top of the input video) is of secondary importance. This is why I would consider somewhat more sophisticated processing for Y than for colour.

With that in mind I considered several common interpolation methods for Y, such as bilinear and bicubic, but realized that neither is the best fit: they are relatively complex to implement in an FPGA, and I don't need to create new lines (both are 2D algorithms); I only need to interpolate in one dimension, within each line.
Hence I came up with a 9:4 sampling-rate conversion approach for Y: first interpolate the 320 Y samples of each line by 9, then decimate by 4, which produces exactly the 720 Y samples I need per line (320 x 9 / 4 = 720). Of course, between the interpolation and decimation stages I need anti-imaging/anti-aliasing filtering, which I plan to implement with an appropriate FIR. Here is where my first question comes from:

1. What are the general requirements for a FIR LPF intended to process video? I mean, how strict are the usual constraints for LPFs processing streaming video: pass-band gain (pass-band ripple), stop-band gain (stop-band attenuation), width of the transition band?
2. What kinds of anti-imaging/anti-aliasing FIR LPFs are common in video (windowed, equiripple, ...)?
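For reference, here is a minimal Python model of the 9:4 scheme described above: zero-stuff by 9, apply a single low-pass FIR that serves as both anti-imaging and anti-aliasing filter, then take every 4th sample. The Hamming-windowed sinc design, the tap count (127) and the cutoff are illustrative placeholders, not a tuned design; they just need to satisfy the tighter of the two cutoffs, fs/(2*9), at the upsampled rate.

```python
import math

def windowed_sinc_lpf(num_taps, cutoff):
    """Hamming-windowed sinc low-pass FIR; 'cutoff' is the cutoff
    frequency as a fraction of the sampling rate (0 .. 0.5)."""
    taps = []
    m = num_taps - 1
    for n in range(num_taps):
        x = n - m / 2.0
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h * w)
    return taps

def resample_9_4(line):
    """320 -> 720 samples: zero-stuff by 9, FIR low-pass, decimate by 4."""
    L, M = 9, 4
    # Zero-stuffing: insert 8 zeros after each input sample.
    up = []
    for s in line:
        up.append(s * L)       # gain of L compensates the energy lost to zeros
        up.extend([0] * (L - 1))
    # Combined anti-imaging/anti-aliasing cutoff at the upsampled rate:
    # the tighter of 1/(2*L) and 1/(2*M), i.e. 1/(2*9) here.
    taps = windowed_sinc_lpf(127, 0.5 / L)
    half = len(taps) // 2
    # Convolve and keep every 4th output sample.
    out = []
    for k in range(0, len(up), M):
        acc = 0.0
        for i, t in enumerate(taps):
            j = k + half - i
            if 0 <= j < len(up):
                acc += t * up[j]
        out.append(acc)
    return out

print(len(resample_9_4([100] * 320)))  # 720
```

In hardware one would of course not compute the zero-stuffed convolution literally: a polyphase structure computes only the taps that land on non-zero samples, so the FIR runs at (or near) the output rate.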

For the colour information I plan simply to double each Cr and Cb sample (repeating each consecutive sample twice), which turns the 160 samples of each component into 320, i.e. 640 of the required 720 colour samples; then I'll repeat every 8th Cr and Cb sample, producing 40 more of each component and fulfilling the requirement of 360 Cr + 360 Cb = 720 colour samples per line.
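A small Python model of this chroma repetition scheme (the function name is illustrative only), applied per component:

```python
def upsample_chroma(samples_160):
    """160 -> 360 chroma samples of one component per line:
    double every sample, then give every 8th sample of the
    doubled stream one extra repeat (320/8 = 40 extras)."""
    doubled = []
    for s in samples_160:            # 160 -> 320
        doubled.extend([s, s])
    out = []
    for i, s in enumerate(doubled):  # 320 -> 360
        out.append(s)
        if i % 8 == 7:               # every 8th sample gets a third copy
            out.append(s)
    return out

print(len(upsample_chroma(list(range(160)))))  # 360
```

Note that the extra repeats are spread evenly across the line rather than bunched at the edges, which should keep any geometric distortion of the overlay graphics uniform.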

3. Do you think this is a viable solution, given that the colour information is computer-generated vector graphics (menus, small captions over the actual B&W video)?


I would be grateful for any advice.

Thank you in advance, Alex
 
