I plan to H.264-encode 800 x 600, 50 Hz, 24-bit parallel RGB camera data using a Colibri iMX6 module.
I read in the Colibri iMX6 datasheet that there are 2 x 20-bit parallel camera inputs. I plan to follow a two-step approach:
1) Use a Colibri iMX6 256MB + Viola carrier board:
connect 9-bit data + hsync + vsync + pclk to the appropriate pins on the Viola,
then fiddle with drivers and H.264-encode the incoming camera data (using the built-in H.264 video encoder).
2) Once the first part is finished, make a custom baseboard with all 20 camera pins available to the Colibri iMX6.
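As a sanity check on the plan above, the raw data rate of the incoming stream can be worked out up front (a quick back-of-the-envelope sketch using the figures already stated: 800 x 600, 50 Hz, 24-bit RGB):

```python
# Raw bandwidth of the camera stream before H.264 encoding.
width, height = 800, 600   # resolution
fps = 50                   # frame rate (50 Hz)
bits_per_pixel = 24        # 24-bit parallel RGB

pixels_per_second = width * height * fps
raw_mbit_per_s = pixels_per_second * bits_per_pixel / 1e6
raw_mbyte_per_s = raw_mbit_per_s / 8

print(f"{raw_mbit_per_s:.0f} Mbit/s raw ({raw_mbyte_per_s:.0f} MB/s)")
# 800*600*50 = 24e6 pixels/s -> 576 Mbit/s (72 MB/s) uncompressed,
# which is why the hardware H.264 encoder matters here.
```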
Do you think that my approach is all right? Can you please advise me about pitfalls such as:
• Do the existing Linux drivers support 800 x 600 resolution?
• Do the existing Linux drivers support 20-bit parallel camera data?
• …
Hi there,
After doing some further reading, it seems that I should be using the 16-bit RGB565 format.
I'll use the Pinout Designer tool and set the appropriate pins to camera mode.
I’ll report back about the developments.
One more thing: the camera is controlled by another module, so there is no need for any drivers on the Colibri.
Do you think that my approach is all right? Can you please advise me about pitfalls such as: • Do the existing Linux drivers support 800 x 600 resolution? • Do the existing Linux drivers support 20-bit parallel camera data?
Your approach seems correct. For the 20-bit parallel camera you would have to write your own driver. We support up to an 8-bit parallel camera interface, since this is a generic solution for most of the Colibri modules.
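To illustrate what an 8-bit interface means for 16-bit pixel data, each RGB565 pixel crosses a byte-wide bus as two 8-bit transfers. The sketch below is only an illustration of that byte sequencing; the actual byte order depends on the sensor and the CSI configuration, so check the sensor datasheet:

```python
def rgb565_to_bus_bytes(pixels, big_endian=True):
    """Split 16-bit RGB565 words into the two 8-bit transfers a
    byte-wide parallel interface would carry per pixel.
    The byte order here is an assumption, not a driver detail."""
    out = []
    for p in pixels:
        hi, lo = (p >> 8) & 0xFF, p & 0xFF
        out.extend([hi, lo] if big_endian else [lo, hi])
    return out

# Two pixels become four bus bytes, so pclk effectively runs
# at twice the pixel rate on an 8-bit interface.
print([hex(b) for b in rgb565_to_bus_bytes([0xF800, 0x07E0])])
```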
One more thing: the camera is controlled by another module, so there is no need for any drivers on the Colibri.
That's correct. If the camera is not controlled by the Colibri, then you do not need any driver on the Colibri module.
Thanks for your comment. Can you please point me to the driver that I should modify? Do you think it is a lot of work? It has been a while since I last modified kernel drivers.
One more thing: the camera is controlled by another module, so there is no need for any drivers on the Colibri.
That is not entirely correct. The V4L2 subsystem usually expects a full pipeline from source to sink in order to make sure all elements agree on the respective configuration (e.g. resolution, colour mode and so forth). However, one may get around this by using something similar to the generic platform approach. Note that the downstream NXP Linux kernel uses a proprietary camera stack that is not entirely in line with soc_camera.
Is this the driver that Jaski wrote about? Can you please point me in the right direction to the driver that I should modify for the 16-bit RGB565 format?
Note: be careful, as with cameras, just like with displays, bit depth may mean different things. There is the interface width, which by default is 8-bit, plus the in-memory data format, which may be 16-bit RGB565. However, by default camera sensors deliver BT.656, which is a YUV format.
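To make the distinction above concrete, here is a small sketch of how a 24-bit RGB888 pixel is reduced to the 16-bit RGB565 in-memory format (standard truncation to 5-6-5 bits per channel; nothing iMX6-specific):

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit R, G, B channels into one 16-bit RGB565 word
    by dropping the low bits: 5 red, 6 green, 5 blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# Pure red keeps only its top 5 bits, in the highest field:
print(hex(rgb888_to_rgb565(255, 0, 0)))      # 0xf800
print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff
```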