Nope. A point cloud is just depth data, and potentially color data.
Photogrammetry is a 3D-scanning technique that correlates feature points across multiple pictures in order to back-project a 3D scene.
The more pictures you have of an area, the higher the quality of the overall scan. So if we have 3,000 images of a building's exterior in NYC, we can recreate the building in 3D.
My idea was that, out of X thousand images, a single image is a trivial data point and could easily be removed with little loss in scan quality. It may technically be a copyright violation, but it is used for a substantially different work.
Using a SIFT pipeline for photogrammetry, I have successfully recreated small objects, Comet 67P, and some buildings from quadcopter pics.
First I detect features with DoG and match them with SIFT descriptors, do a bit of extra crawling along matched edges, and the result is a dense coloured point cloud.
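The matching step above can be sketched in a few lines. This is a minimal illustration, not the author's actual pipeline: it assumes SIFT descriptors have already been extracted (e.g. with OpenCV's `cv2.SIFT_create()`, which uses DoG keypoint detection internally) and applies nearest-neighbour matching with Lowe's ratio test:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: (N, 128) and (M, 128) SIFT descriptor arrays.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every descriptor in b.
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Keep the match only if the best hit is clearly better than the
        # runner-up -- this rejects ambiguous matches on repetitive texture.
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches
```

Real pipelines use an approximate nearest-neighbour index (e.g. a k-d tree) instead of brute-force distances, since each image yields thousands of descriptors.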
The point clouds are converted to a mesh with Poisson surface reconstruction and retextured with fragments of the original images.
The Poisson surfaces are never quite as nice as the point clouds; I am using MeshLab for this part.
Processing a few hundred big images takes ages, so I send the jobs up to EC2 for a few hours; each job usually costs a couple of dollars.
That sounds amazing. Any chance you could share a link to the end result? Also, any suggestions on how to get started with photogrammetry and computer vision?
If you have correlated feature points, you essentially have a point cloud, no? It's just that photogrammetry adds the extra step of back-projecting the 3D scene?
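Roughly, yes: the correlated 2D features only become a point cloud after triangulation. To illustrate that back-projection step, here is a sketch of linear (DLT) triangulation of one point from two views, assuming the camera projection matrices are already known from pose estimation (the camera setup in the usage below is made up for illustration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) coordinates of the same matched feature in each image.
    Returns the 3D point in world coordinates.
    """
    # Each view gives two linear constraints on the homogeneous point X:
    #   u * (row 3 of P) - (row 1 of P) = 0
    #   v * (row 3 of P) - (row 2 of P) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null space of A: the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Usage: camera 1 at the origin, camera 2 shifted by 1 unit along x,
# both with identity intrinsics. A point at (0, 0, 5) projects to
# (0, 0) in view 1 and (-0.2, 0) in view 2.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
point = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))  # -> approx (0, 0, 5)
```

Doing this for every matched feature pair is what turns the correspondences into the dense coloured point cloud.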