Image Post Processing Tutorial 1 (VisualSFM, CMVS)

This tutorial shows you how to orthorectify UAV aerial imagery and generate 3D point clouds. All software used is freely available for non-commercial use.
The software packages covered in this video are Changchang Wu’s VisualSFM (http://homes.cs.washington.edu/~ccwu/…) and Yasutaka Furukawa’s CMVS (http://www.di.ens.fr/cmvs/).
I’ve prepared a zip file containing the software preconfigured for use as described in the video tutorial: DOWNLOAD ONE ZIP FILE CONFIGURED WITH VisualSFM and CMVS
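For those who prefer to script the pipeline, VisualSFM also has a batch command-line mode that chains feature matching, sparse reconstruction, and the CMVS/PMVS dense step. A minimal sketch, assuming the VisualSFM binary is on your PATH and your photos sit in a local folder (both paths here are placeholders):

```shell
# Match features, run sparse SfM, then dense reconstruction (CMVS/PMVS).
VisualSFM sfm+pmvs ./photos ./output/result.nvm
```

The dense results should end up in a folder next to the .nvm file and can be opened in Meshlab.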

28 Comments

  1. Simon Carr

    I am trying to use a set of photos taken using Pix4D. There are 64 photos, taken from an altitude of 50 m and spaced 5 m apart. After processing the images with VisualSFM’s 3D reconstruction, only about 18 photos remain on the screen. I assume the software discarded the remainder for some reason?

    When I run CMVS, it gives me an error saying it failed and suggesting I may be running a 32-bit system.

    If I start a new project with only the first 16 images from the set of 64, everything works fine, but of course I only get a fraction of the area I am trying to map.

    Is there some problem with the images taken by Pix4D? Is 50 m a good altitude? Is 5 m between images too much or too little?

    Thanks
    Simon
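Simon’s question above about altitude and spacing can be sanity-checked with basic photogrammetry arithmetic. A minimal sketch in Python, assuming a hypothetical small-sensor camera (5 mm focal length, 6.2 mm sensor width; substitute your own camera’s specifications):

```python
# Estimate ground footprint and forward overlap for nadir UAV photos.

def ground_footprint(altitude_m, sensor_width_mm, focal_length_mm):
    """Width of ground covered by one image, in metres."""
    return altitude_m * sensor_width_mm / focal_length_mm

def forward_overlap(altitude_m, spacing_m, sensor_width_mm, focal_length_mm):
    """Fraction of one image that overlaps the next along the flight line."""
    footprint = ground_footprint(altitude_m, sensor_width_mm, focal_length_mm)
    return 1.0 - spacing_m / footprint

# Hypothetical camera: 5 mm focal length, 6.2 mm sensor width.
overlap = forward_overlap(altitude_m=50, spacing_m=5,
                          sensor_width_mm=6.2, focal_length_mm=5)
print(f"Forward overlap: {overlap:.0%}")
```

With these assumed optics, 50 m altitude and 5 m spacing give roughly 92% forward overlap, comfortably above the ~60–80% that structure-from-motion typically needs, so spacing alone is unlikely to explain the discarded photos; blurry or poorly exposed frames are a more common culprit.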

  2. utm_geopd

    I am doing a research project titled “Generation of High Resolution DSMs using UAV Images” for my final year of engineering study.
    Under this project I am testing the following four software packages’ capability to process UAV images:
    1. LPS 2011 (Erdas Imagine)
    2. Agisoft PhotoScan Pro
    3. PIX4D
    4. VisualSFM
    All the software I mentioned except VisualSFM is commercial, and I am using trial licenses. While working with VisualSFM I am having a problem with the output: the outcome of the dense point generation isn’t satisfactory, and gaps/holes exist in many places. I don’t see how this could happen when such a robust algorithm is being used for tie point generation. The data I am using consists of 25 images of an area inside a campus, and exterior orientation parameters for all images have been provided. I suspect the low quality of data from the GPS/INS sensor mounted on the UAV, i.e. the exterior orientation, has led to this output. Please advise what the best solution would be in this scenario, where there are no GCPs and the exterior orientation parameters are not very accurate.

  3. Diodio26

    I understand. I’m testing the Visual SFM to see if I can get comparable results to Pix 4D. I have an orthomosaic and dsm generated with Pix4D with no GCPs. I want to get that output using free software.

    Is the mosaic generated by CMVS orthorectified and georeferenced? How can I achieve that?

    Thanks in advance.

  4. Diodio26

    What’s the difference between Tutorial 1 and Tutorial 2? The output of the first tutorial was a PLY file that I can visualize in Meshlab. It’s a 3D model with a lot of “holes”.

    Now I’m doing Tutorial 2. My question is: can I skip Tutorial 1 and go directly to Tutorial 2?

    • geobduffy

      The first tutorial (VisualSFM) generates 3D point clouds. The second tutorial uses the outputs from the first to generate an orthomosaic image.

  5. ccocco

    Hi,

    I have downloaded the zip as per your instructions…
    I added 6 photos that I have taken; however, when I click Compute 3D Reconstruction, I get the following in the task viewer:
    Run full 3D reconstruction, begin…
    3(3) pairs have essential matrices
    7(0) pairs have fundamental matrices
    Initialization: Th = 0.400, Ti = 30.
    Initialization: Th = 0.800, Ti = 30.
    Initialize with 5 and 7
    Failed to initialize with the chosen pair
    Initialization: Th = 0.400, Ti = 30.
    Initialization: Th = 0.800, Ti = 30.
    Failed to find two images for initialization
    —————————————————————-
    Run full 3D reconstruction, finished
    Totally 0.000 seconds used

    And of course I get a blank screen 🙁
    Any idea what is causing this?

    Thank you
    Best Regards

    • geobduffy

      I’m afraid your GPU might not be up to the task. What video card are you using?

    • geobduffy

      Thanks Javy!
      It was going to take me a while to get something together. I’ll start posting various sample data sets as I collect going forward. Right now I’m swamped with building and testing.
      B

  6. JavytoG

    Thanks!!!!!!!
    It would be Great! 🙂

    Thanks in Advance
    Best Regards

  7. JavytoG

    Hi, I’m trying to learn to use VisualSFM, but I’m having trouble finding test images.
    Can you tell me where I can get a small data set for testing purposes?
    Thanks in Advance
    Best Regards

    • geobduffy

      Hi Javy,
      You can take photos at ground level and post process them. Try taking pictures of an interesting building or structure. If you don’t have access to a camera I could share some demo data from a recent aerial mission.

      • dani_r

        I was thinking about this the other week! If there was a small collection of the same images that we could download to try to get our heads around the programmes that would be great! Then if we all have the same info, if we’re doing something wrong it would be easier to troubleshoot! 🙂

        • JavytoG

          Hi Guys:
          Thanks for your quick reply!!!
          I’ve been messing around with some test objects (from ground level), but I want to produce DEMs and orthophotos from aerial images (UAV and the like); that’s why I’m looking for a test data set.

          Thanks in Advance
          Best Regards.

          • geobduffy

            Ok, I’ll dig up some good test images and post a zip file soon as I can.

  8. danielgeographic

    Okay, I may be missing a previous step, but how does the software know where to place the imagery for its SIFT matching process? Is there locational information embedded in the imagery?

    • geobduffy

      The relative location of the imagery is calculated during the SIFT pairwise matching process. Each image is compared to all other images and control points are automatically calculated. You will see a bunch of files in the image directory as a result. The sparse point cloud is then generated using “compute 3D reconstruction” using the outputs from the previous step.
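The pairwise matching step described above can be sketched in a few lines. This is not VisualSFM’s actual code, just an illustration of nearest-neighbour descriptor matching with Lowe’s ratio test, using NumPy and made-up descriptor arrays:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B.

    For each descriptor in A, find its two nearest neighbours in B and
    keep the match only if the best distance is clearly smaller than
    the second best (Lowe's ratio test).
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest = np.argsort(dists)[:2]
        if dists[nearest[0]] < ratio * dists[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches

# Toy example: "image B" reuses two of "image A"'s descriptors, so those
# two should survive the ratio test.
rng = np.random.default_rng(0)
desc_a = rng.normal(size=(5, 128))
desc_b = np.vstack([desc_a[0], desc_a[3], rng.normal(size=(3, 128))])
print(ratio_test_matches(desc_a, desc_b))
```

VisualSFM repeats this kind of matching for every image pair, then chains the surviving matches into tracks that become the control points for the sparse reconstruction.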

  9. cfastie

    Thanks for the Falkingham tutorial. I have followed [Nathan Craig’s tutorial](http://www.personal.psu.edu/nmc15/blogs/anthspace/2011/12/hypr3d-and-meshlab-to-scaled-model.html) for getting Hypr3D results into Meshlab and it worked okay. Hypr3D has a really nice embedding feature so you can share your 3D models interactively (like here: http://publiclab.org/notes/cfastie/5-31-2012/3d-model-meshlab). It would be great to have a similar capability with VisualSFM results.

    I stitched this 3×9 panorama with Gigapan Stitch, but it probably would have worked fine in MS ICE as a structured panorama: http://publiclab.org/notes/cfastie/07-10-2013/big-ndvi

    Thanks to your video, I made a pretty nice model of a tundra study plot from last week using 76 kite photos in VisualSFM. I will have to figure out how to share it.

    • geobduffy

      Thanks for the info and links! The Hypr3D embed tool is definitely nice.
      The NDVI panorama came out very clear. Speaking of which, I need to update you on some of the NDVI-related tests; I just seem to be stuck in a time deficit… I need a temporal stimulus package 🙂
      I’ll have to pursue ways of serving up point cloud data from visualSFM. It seems that http://pointclouds.org/ may be a good place to start but it will take me a while to figure out, I’m sure.
      In the meantime, if you’d like to post a couple screen shots of your tundra model that would be great. I’d love to see what kind of results you are getting.

  10. cfastie

    Thanks for this post. I am trying the software now and it appears to work as advertised. Your zip file of everything I need for 64-bit Windows with a CUDA card was very convenient. Do you know of any way to share the 3D models from VisualSFM? Can you export to Meshlab?

    Stitching infrablue photos (N,G,B) seems to work well, and Ned’s Fiji plugin was able to create NDVI from a 9×3, almost 100 megapixel stitched infrablue image. That is much easier than manually aligning the 27 NDVI images (which will never stitch automatically). Trying to create NDVI from two stitched images (RGB and NIR) seems like a sketchy approach because the two independently stitched images may never align well.
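Deriving NDVI from a single stitched infrablue image can be sketched as follows. This is a minimal illustration assuming the usual infrablue convention, where the camera’s red channel records NIR and the blue channel records visible light; it mirrors what a plugin like Ned’s does conceptually, not its actual code:

```python
import numpy as np

def ndvi_from_infrablue(img):
    """NDVI from an H x W x 3 infrablue array (channels: NIR, G, blue)."""
    nir = img[..., 0].astype(float)
    vis = img[..., 2].astype(float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ndvi = (nir - vis) / (nir + vis)
    return np.nan_to_num(ndvi)  # map 0/0 pixels to 0

# Toy 1x2 image: a "vegetation" pixel (high NIR) and a "soil" pixel.
img = np.array([[[200, 80, 50], [90, 80, 80]]], dtype=np.uint8)
print(ndvi_from_infrablue(img))
```

Because both bands come from the same exposure, every pixel is perfectly co-registered, which is exactly why this beats computing NDVI from two independently stitched RGB and NIR mosaics.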

    • geobduffy

      C
      Great to see this working for you!
      It’s been a long time since I brought the VisualSFM output into Meshlab, but I’ve done it a couple of times. I think you use the output of the dense 3D reconstruction as Meshlab input and can build the surface from there… I’ll do some hunting around and see what I can dig up. I know I found some resources on this in the past.
      What software are you currently using to stitch the infra blue photos?
      B

  11. geoMullin

    If I were using a UAV to gather multiple images of a vegetated area with the intent of creating an NDVI, then how do I stitch those images together? It seems that NIR images are difficult/impossible to stitch using Microsoft ICE. Is there a better option?

    • geobduffy

      I haven’t tried stitching pure NIR images yet, but the single-camera NDVI images (NIR, G, B) stitch as well as R, G, B.
      I recently downloaded Hugin (open source) and ran a few simple tests with it, but I didn’t have much success. There are a lot of parameters in the program, however, and from what I have read it is supposed to be pretty powerful. Hugin is based on the Panorama Tools research project, which can be found at http://panotools.sourceforge.net/
      There is a list of available stitching software if you scroll down that page.
      I don’t know that any will process NIR-only images better than ICE.
      Others who are mapping are definitely stitching NIR-only images… we’ll just have to investigate.
      As soon as humanly possible I’ll add a page devoted to photo stitching programs and run some tests.
      Please share what you find as well.
      Thanks

      • geoMullin

        Thanks for the information about stitching NIR. I will keep you posted on what I find. On another note, I just got through your first tutorial on image post processing. It was excellent! Thanks, I am just starting tutorial 2.

        • scratchnodado

          Hi,

          Any update on stitching NDVI images? I’m currently using a Tetracam, and it is very time consuming to stitch manually in Photoshop.

          • geobduffy

            If you’d like to send a zip of some images I could take a look at it for you. I’d love to see some tetracam images!

          • scratchnodado

            Hi geobduffy,

            NDVI images sent
