Tuesday, 27 August 2019

Jetson Nano - Developing a Pi v1.3 camera driver Part 2

I would like to thank motiveorder.com for sponsoring the hardware and development time for this article.

Following on from my previous post, I am finally in a position to release an alpha version of the driver, although at this stage only in binary form. Development of the driver has been complicated by the fact that determining the correct settings for the OV5647 is extremely time consuming given the lack of good documentation.

The driver supports the following resolutions:

2592 x 1944 @15 fps
1920 x 1080 @30 fps
1280 x 960  @45 fps
1280 x 720  @60 fps

I have added support for 720p because most of the clone cameras seem to target 1080p or 720p based on their lens configuration. I mainly tested with an original RPi V1.3 camera to ensure backward compatibility.

The driver is pre-compiled against the latest L4T R32.2 release, so deploying it requires installing a new kernel plus modules and a new dtb file. I therefore recommend you do some background reading to understand the process before deploying. I also recommend you have access to the Linux console via the UART interface in case the new kernel fails to boot or the camera is not recognised.
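If the new kernel does not boot, the serial console output is the quickest way to see what went wrong. A minimal sketch, assuming a 3.3V USB-to-TTL adapter wired to the Nano's J44 debug header and showing up on the host PC as /dev/ttyUSB0 (check dmesg on the host for the actual device name):

# On the host PC: open the Nano's debug UART at 115200 baud, 8N1
sudo apt install screen
sudo screen /dev/ttyUSB0 115200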

Deployment of the kernel and modules is done on the Nano itself, while flashing of the dtb file has to be done from a Linux machine where the SDK Manager is installed.

Download nano_ov5647.tar.gz and extract it on your Nano (if wget does not fetch the archive, open the link in a browser and download it manually):

mkdir ov5647
cd ov5647
wget  https://drive.google.com/open?id=1qA_HwiLXIAHbQN-TTEU1daEIW9z7R2vy

tar -xvf ../nano_ov5647.tar.gz

After extraction you will see the following files:

-rw-r--r-- 1 user group 291462110 Aug 26 17:23 modules_4_9_140.tar.gz
-rw-r--r-- 1 user group    200225 Aug 26 17:26 tegra210-p3448-0000-p3449-0000-a02.dtb
-rw-r--r-- 1 user group  34443272 Aug 26 17:26 Image-ov5647


Copy the kernel to the /boot directory:

sudo cp  Image-ov5647 /boot/Image-ov5647

Change the boot configuration file to load our kernel by editing /boot/extlinux/extlinux.conf. Comment out the following line and add the new kernel, so the change is from this:

      LINUX /boot/Image

to

       #LINUX /boot/Image
       LINUX /boot/Image-ov5647
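For reference, the complete boot entry in a stock extlinux.conf should then look roughly like the sketch below; the INITRD and APPEND lines vary between images and are left untouched:

LABEL primary
      MENU LABEL primary kernel
      #LINUX /boot/Image
      LINUX /boot/Image-ov5647
      INITRD /boot/initrd
      APPEND ${cbootargs} quiet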


The next step is to extract the kernel modules:

cd /lib/modules/
sudo tar -xvf <path to where files were extracted>/modules_4_9_140.tar.gz
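To confirm the modules landed in the right place, the kernel release reported later by uname (4.9.140+) should now exist as a directory under /lib/modules:

ls /lib/modules/
# expect a 4.9.140+ directory alongside the stock one
# if modules fail to load under the new kernel, regenerating the dependency
# list may help (only needed if the tarball did not already include one):
sudo depmod 4.9.140+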


The last step is to flash the dtb file, tegra210-p3448-0000-p3449-0000-a02.dtb. As discussed by jiangwei in the comments section below, it is possible to copy the dtb file directly to the Nano; refer to this link for how this can be achieved, in particular the section "Flash custom DTB on the Jetson Nano".
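For convenience, the commands from that comment are reproduced below. Note that this method writes the pre-signed .encrypt version of the dtb (from the download linked in the comment, not the plain .dtb file above) straight to the DTB partition; the partition number mmcblk0p10 is taken from the comment and may differ on other L4T releases:

# download the pre-signed dtb (open the link in a browser if wget does not work)
wget https://drive.google.com/open?id=1hRIzFwo34oHd7dgpFxeUKEYAPtjqIR80
# write it directly to the DTB partition on the SD card
sudo dd if=tegra210-p3448-0000-p3449-0000-a02.dtb.encrypt of=/dev/mmcblk0p10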

Alternatively you can use the SDK Manager. Flashing requires copying the dtb file to the Linux host machine into the directory Linux_for_Tegra/kernel/dtb/ where the SDK Manager is installed. Further instructions on how to flash the dtb are covered in a post I made here; however, since we don't want to replace the kernel, the command to use is:

sudo ./flash.sh --no-systemimg -r -k DTB jetson-nano-qspi-sd mmcblk0p1

There seems to be some confusion about how to put the Nano into recovery mode. The steps are listed below, followed by a quick check you can run on the host to confirm the board was detected:

1. Power down the Nano
2. J40 - Connect recovery pins 3-4 together
3. Power up the Nano
4. J40 - Disconnect pins 3-4
5. Flash the file
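To confirm the Nano really entered recovery mode before flashing, check from the Linux host that it enumerates as an NVIDIA APX USB device (the exact ID shown is what I would expect for the Nano and may differ):

# run on the Linux host with the micro-USB cable connected
lsusb | grep -i nvidia
# expect something like: Bus 001 Device 005: ID 0955:7f21 NVidia Corp.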


After flashing the dtb, the Nano should boot the new kernel and hopefully the desktop will reappear. To verify the new kernel is running, use the following command:

uname -a

It should report the kernel version as 4.9.140+:

Linux jetson-desktop 4.9.140+

If successful, power down the Nano and connect your camera to FPC connector J13. Power up the Nano and, once the desktop reappears, verify the camera is detected with:

dmesg | grep ov5647

It should report the following:

[    3.584908] ov5647 6-0036: tegracam sensor driver:ov5647_v2.0.6
[    3.603566] ov5647 6-0036: Found ov5647 with model id:5647 process:11 version:1
[    5.701298] vi 54080000.vi: subdev ov5647 6-0036 bound



The above indicates the camera was detected and initialised. Finally we can try streaming; the commands for the different resolutions are below:

#2592x1944@15fps
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=2592, height=1944, framerate=15/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=2592, height=1944' ! nvvidconv ! nvegltransform ! nveglglessink -e

#1920x1080@30fps
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1920, height=1080' ! nvvidconv ! nvegltransform ! nveglglessink -e

#1280x960@45fps
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=960, framerate=45/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1280, height=960' ! nvvidconv ! nvegltransform ! nveglglessink -e


#1280x720@60fps
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1280, height=720' ! nvvidconv ! nvegltransform ! nveglglessink -e

The driver supports controlling the analogue gain, which has a range of 16 to 128. This can be set using the 'gainrange' property, for example:

gst-launch-1.0 nvarguscamerasrc gainrange="16 16" ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1280, height=720' ! nvvidconv ! nvegltransform ! nveglglessink -e
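To let the auto exposure use the full gain range rather than pinning it, the same property can be given the whole span (a sketch based on the 16 to 128 range stated above):

gst-launch-1.0 nvarguscamerasrc gainrange="16 128" ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1280, height=720' ! nvvidconv ! nvegltransform ! nveglglessink -e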

If you require commercial support please contact motiveorder.com.

35 comments:

  1. Flash DTB from the Jetson device itself
    wget https://drive.google.com/open?id=1hRIzFwo34oHd7dgpFxeUKEYAPtjqIR80
    sudo dd if=tegra210-p3448-0000-p3449-0000-a02.dtb.encrypt of=/dev/mmcblk0p10

    reference:
    https://developer.ridgerun.com/wiki/index.php?title=Jetson_Nano/Development/Building_the_Kernel_from_Source#Flash_DTB_from_the_Jetson_device_itself

  2. It works successfully, but when I try to save a video it only saves at 30 fps.

    Replies
    1. It may depend on how you're saving the video stream.

  3. Can you give me the correct code to save a video at 60 fps or 30 fps with a minimum file size?

    Replies
    1. It's difficult to give you the correct code as I'm unsure what you're trying to do. To save video it's best to encode it as H264/H265, but depending on the rate you may need to buffer the encoded output, as writing to the SD card can be slow on the Nano.
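      As a starting point, a recording pipeline using the Nano's hardware H.264 encoder might look like the sketch below; element names and properties depend on the L4T/GStreamer version, so treat it as an assumption to adapt rather than a tested command:

      # record 720p60 from the camera to an MP4 file using the hardware encoder
      gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1' ! omxh264enc bitrate=8000000 ! h264parse ! qtmux ! filesink location=test.mp4 -e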

  4. This comment has been removed by the author.

    Replies
    1. This comment has been removed by the author.

  5. Hi Jasbir, my rpi_v1 camera on the Jetson Nano does not give a clear image. Can you suggest a solution? I also used the videobalance property in GStreamer but it did not help. The camera hardware works well on the Raspberry Pi. There are some crossing lines over the output image/video, and the image quality is not as good as the Raspberry Pi output.

    Replies
    1. Hi, are you using the rpi v1 camera or a clone? Also if you could provide an image of the output that would be useful.

  6. I used the rpi v1 camera on both the Jetson Nano and RPi boards.
    It is something similar to the one at the link below.

    "https://www.dx.com/p/rpi-ir-cut-camera-better-image-in-both-day-and-night-for-raspberry-pi-2079254.html?tc=INR&ta=IN&gclid=CjwKCAjwo9rtBRAdEiwA_WXcFmKdHUUrnrIYxJHkLqXqUiFX3Eo-opepdIM0ENck7_zT2fnCBCy7fhoCDBoQAvD_BwE#.XbbiRnUzZhE"

    sample image from rpi board :-

    "https://drive.google.com/open?id=1eLUeTC47ND1BHpNgw09cbVg6oo88Uljz"

    sample image from jetson nano board :-

    "https://drive.google.com/open?id=1VsGv_5OXfBDVp_JkL3qVf2B63RreYtGj"

    Replies
    1. Try setting "saturation=2" in the gstreamer pipeline to see if the colours are more vivid.
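      For example, adding it to the 720p pipeline from the post (a sketch, adjust the resolution to match yours):

      gst-launch-1.0 nvarguscamerasrc saturation=2 ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1280, height=720' ! nvvidconv ! nvegltransform ! nveglglessink -e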

    2. I tried that already; it works fine after adding "saturation=2 hue=0.1". But my problem is that there are some running horizontal lines in the bottom-left corner of the image.

      Example image:-

      https://drive.google.com/open?id=1n5Rx4xHEtHKbiNFWJcSiQgB5jDmy5I10

    3. Also, the v1 camera output quality on the Jetson Nano is very poor at night time.

    4. At what resolution are you seeing the zig-zag issue?

      Have you compared the night-time output with the RPi? Unfortunately I don't have your camera to test against.

  7. Hello, I'm stuck at the last part; as a "beginner" I don't know what to do if I can't visualize it. Can you help me?

  8. image contains lines:-
    https://drive.google.com/open?id=1VFB4nj9ThmwllEalvfCqYv7zcZXhutNj

    What could be the reason for these kinds of lines in the video?

    * Vibration of the camera and connectors

    * Power issues on the camera and jetson nano

    * Compression capabilities of the Jetson Nano

    Replies
    1. I would suspect that is caused by electrical interference; you need to test with the camera in a still position first and see what the output looks like.

    2. This happens only in certain frames of the video. Some frames have only a few lines, some are as bad as this one, and in most frames the video is good enough.

    3. @jasbir the camera is housed in a box with the ribbon strip touching the display port of the Jetson Nano, and on one side we have a 4G router. Will vibration cause such problems? This device is on a moving vehicle.

    4. Any idea on how I can reproduce the problem in lab?

  9. Hello, what about updating the kernel, will it persist when apt upgrade is run?
    And can you publish the sources?

    Replies
    1. apt upgrade will update the kernel, so you will lose the camera driver. The plan at the moment is to update it to the latest kernel version and make some improvements; however, I can't give an ETA.
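      If you want to keep running apt upgrade without losing the custom kernel, one option is to hold the kernel packages; the package names below are an assumption and depend on the L4T release, so check what is actually installed first:

      # list the installed L4T kernel packages, then hold them so apt upgrade skips them
      dpkg -l | grep nvidia-l4t-kernel
      sudo apt-mark hold nvidia-l4t-kernel nvidia-l4t-kernel-dtbs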

    2. Can you include /lib/firmware/tegra21x_xusb_firmware in the kernel firmware modules? I have a few problems when it is loaded from the initramfs. Thanks for the work you have done so far.

  10. Hello, I guess tegra210-p3448-0000-p3449-0000-a02.dtb is for the A02 board. Any thoughts on compiling for the B01 board?

    Replies
    1. Hello, did you find images for the B01 board?

  11. Hi Jasbir, thanks for the time and effort you've put into this :-) Are there any plans to release the sources? I guess there are many hobbyists willing to continue on your work, and community could greatly benefit from it.

  12. Would really love to see an update on this

  13. Where can I download the nano_ov5647.tar.gz file?

    The google drive link: https://drive.google.com/open?id=1qA_HwiLXIAHbQN-TTEU1daEIW9z7R2vy does not appear to contain it

    Replies
    1. It contains the file, but you have to open the link in your browser. Using wget will not work since you have to click some buttons to start the download.

  14. I cannot find the source code, so it is not clear how I can compile my own kernel. And I need to compile my own kernel to get support for 2.5K and 4K mobile screens (even to use them in landscape mode, it is necessary to fix the kernel to add support for 2.5K and 4K portrait mode). If I need PPS, I also have to compile my own kernel.

    To be honest, I do not get the point of hiding the source code in this case: the Raspberry Pi camera v1.3 is no longer produced, so nobody will pay money for the source code (I realize that the author did not mention anything about selling it, but I cannot imagine any other reason not to share it). This driver is only useful for those who want to use the old Raspberry Pi camera, which currently costs $4-$5 including shipping, instead of paying $16-$25 (or even more) for the v2 module. The difference is especially noticeable if I want to use 2 cameras. So it is safe to say that if somebody decides to pay extra, they are likely to just buy the officially supported camera module, and not the driver for the deprecated module.

    If you did not plan to make any money from it, please consider sharing the source code with the community.

    By the way, as far as I know, you must provide a way to download modified source code of the Linux kernel or derivative work based on it - my understanding is that it is required by its GPL license, so you cannot share only the binaries and ignore requests to share the code without violating the license.

  15. I have tried the process as suggested. In my case the image is booting from the SD card.
    Almost all steps executed as expected, but there is no stream. Please suggest if I missed anything here.

    //---Terminal Log------------------------
    dmesg | grep ov5647
    [ 1.236289] ov5647 6-0036: tegracam sensor driver:ov5647_v2.0.6
    [ 1.254760] ov5647 6-0036: Found ov5647 with model id:5647 process:11 version:1
    [ 1.444938] vi 54080000.vi: subdev ov5647 6-0036 bound
    gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=1920, height=1080' ! nvvidconv ! nvegltransform ! nveglglessink -e
    Setting pipeline to PAUSED ...

    Using winsys: x11
    Pipeline is live and does not need PREROLL ...
    Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
    Setting pipeline to PLAYING ...
    New clock: GstSystemClock
    Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available
    Got EOS from element "pipeline0".
    Execution ended after 0:00:00.317634518
    Setting pipeline to PAUSED ...
    Setting pipeline to READY ...
    Setting pipeline to NULL ...
    Freeing pipeline ...
    (Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
    (Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
    (Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
    (Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)
    (Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 145)
    (Argus) Error EndOfFile: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)

    Thanks,
    Vidya

  16. I did all the steps but I have the following error:

    [ 1.205574] ov5647 6-0036: tegracam sensor driver:ov5647_v2.0.6
    [ 1.213274] tegradc tegradc.0: blank - powerdown
    [ 1.221866] tegra-vii2c 546c0000.i2c: no acknowledge from address 0x36
    [ 1.221941] ov5647 6-0036: ov5647_board_setup: error during i2c read probe (-121)
    [ 1.221980] ov5647 6-0036: board setup failed
    [ 1.222061] ov5647: probe of 6-0036 failed with error

    Thanks
    Wilder

  17. Hi,

    I flashed the DTB file using the command below.

    "sudo dd if=tegra210-p3448-0000-p3449-0000-a02.dtb.encrypt of=/dev/mmcblk0p10"

    and I rebooted the Jetson Nano, then I ran the command below:

    "gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=2592, height=1944, framerate=15/1' ! nvvidconv flip-method=0 ! 'video/x-raw,width=2592, height=1944' ! nvvidconv ! nvegltransform ! nveglglessink -e"

    I'm getting the response below:

    "Using winsys: x11
    NVMAP_IOC_QUERY_HEAP_PARAMS failed [Inappropriate ioctl for device]
    ERROR: Pipeline doesn't want to pause.
    Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
    Setting pipeline to NULL ...
    Freeing pipeline ..."


    and in the dmesg log I noticed the text below:
    [ 0.212497] DTS File Name: /dvs/git/dirty/git-master_linux/kernel/kernel-4.9/arch/arm64/boot/dts/../../../../../../hardware/nvidia/platform/t210/porg/kernel-dts/tegra210-p3448-0000-p3449-0000-b00.dts
    [ 0.212506] DTB Build time: Jan 15 2021 14:47:48



    [ 1.362617] tegra-vii2c 546c0000.i2c: no acknowledge from address 0x10
    [ 1.362720] imx219 7-0010: imx219_board_setup: error during i2c read probe (-121)
    [ 1.362769] imx219 7-0010: board setup failed
    [ 1.362888] imx219: probe of 7-0010 failed with error -121
    [ 1.363572] imx219 8-0010: tegracam sensor driver:imx219_v2.0.6
    [ 1.372558] tegradc tegradc.0: unblank
    [ 1.386897] tegra-vii2c 546c0000.i2c: no acknowledge from address 0x10
    [ 1.387005] imx219 8-0010: imx219_board_setup: error during i2c read probe (-121)
    [ 1.387039] imx219 8-0010: board setup failed
    [ 1.387129] imx219: probe of 8-0010 failed with error -121

    Thanks and Regards,
    Subba Reddy GV

  18. Hello friend, can you update this valuable post for the latest L4T release, R35?
