29 March 2023 22:41
I’ve surveyed several lakes, and I need to measure lake volume changes and model how much water is stored for a given increase in lake level, so I don’t need accurate absolute coordinates. I surveyed without GCPs, using only the drone’s GPS (navigation-grade GPS, not fancy PPK or RTK).
Now I’m having trouble getting the lake shore all at the same vertical level: it shows variations of several meters. I don’t know how to solve this, but one option I’ve thought of is to create GCPs along the lake shore using coordinates read from an orthomosaic produced in an initial run of the reconstruction, then use those GCPs for a second run with their elevations all set to the same value (a rough sketch of such a file is below). Ideally, I’d like a way to tell ODM to give more weight to the vertical coordinate of those GCPs than to the horizontal ones, which may be wrong due to warping of the model in the first run. Is that possible? Any other ideas to get the lake shore level?
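For concreteness, here’s a rough Python sketch of what I mean, assuming ODM’s standard gcp_list.txt layout (a CRS header line, then `geo_x geo_y geo_z image_x image_y image_name` per line). All coordinates, pixel positions, and image names below are made-up placeholders:

```python
# Sketch only: write an ODM gcp_list.txt where every shoreline point
# is pinned to the same elevation. Values below are placeholders you
# would read off the first-run orthomosaic and the source images.

# (geo_x, geo_y, image_x, image_y, image_name) picked along the shoreline
shore_points = [
    (356890.1, 4723410.5, 1441.0, 980.0, "DJI_0021.JPG"),
    (356912.7, 4723388.2, 2205.0, 1312.0, "DJI_0021.JPG"),
    (357001.3, 4723290.8, 1010.0, 2240.0, "DJI_0034.JPG"),
]

SHORE_ELEVATION = 100.0  # arbitrary common level; only relative volumes matter

with open("gcp_list.txt", "w") as f:
    f.write("EPSG:32630\n")  # placeholder CRS; must match the survey's zone
    for x, y, px, py, image in shore_points:
        f.write(f"{x:.2f} {y:.2f} {SHORE_ELEVATION:.2f} {px:.1f} {py:.1f} {image}\n")
```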
We don’t have any weighting capabilities for GCP components, nor for GCPs themselves.
Can you tell us more about your survey and data?
What is your mean GPS error for a given dataset? Have you adjusted the --gps-accuracy flag to 2x that to start? Have you enabled --rolling-shutter to test?
If you need high repeatability, then yes, you’ll likely need to use GCPs to keep things constrained.
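For example, here’s a hedged sketch of such a rerun (wrapped in Python purely for illustration; the dataset path, project name, and 3 m mean error are placeholders to substitute with your own):

```python
# Illustrative rerun only; paths, project name, and error value are placeholders.
import subprocess

mean_gps_error_m = 3.0  # read this off your previous run's report / flight log
subprocess.run([
    "docker", "run", "-ti", "--rm",
    "-v", "/home/user/datasets:/datasets",
    "opendronemap/odm",
    "--project-path", "/datasets", "lake_survey",
    "--gps-accuracy", str(2 * mean_gps_error_m),  # 2x the mean GPS error
    "--rolling-shutter",                          # rolling-shutter correction
], check=True)
```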
Hello, this is a task you can solve in the CloudCompare software, which is open source. In CloudCompare you can align (register) different point clouds to a reference point cloud. You need to find corresponding points whose coordinates and elevation haven’t changed, at least 4, though more are better. With the aligned point clouds you can also do volume calculations in CloudCompare.
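As a rough illustration of the volume step, this is the underlying 2.5D idea (gridded elevation difference times cell area), with synthetic numpy arrays standing in for CloudCompare’s tooling:

```python
# Sketch of a 2.5D volume difference: grid two co-registered epochs of
# elevation data and integrate the height change over the cell area.
import numpy as np

cell_size = 0.5  # metres per grid cell (assumption)

# Placeholder DSM grids; in practice these come from the two surveys,
# already aligned and sampled on the same grid.
dsm_before = np.full((200, 200), 100.0)
dsm_after = dsm_before + 0.75  # pretend the lake level rose 0.75 m

dz = dsm_after - dsm_before
volume_change = np.nansum(dz) * cell_size ** 2  # cubic metres
print(f"Volume change: {volume_change:.1f} m^3")  # 7500.0 m^3 here
```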
30 March 2023 21:19
The align option looks very interesting. Is there more documentation on how it works? Does it only apply translations, or also rotations and de-warping of some sort?
30 March 2023 21:21
Thanks a lot for the suggestion, I’ll give it a try!
None yet! I need to go and collect some good data and learn the function before I write it up.
I’ll dig for what I can find later this evening or this week.
30 March 2023 21:37
That would be great! Thanks a lot, I look forward to learning more about that function.
3 April 2023 06:38
Did you find any documentation on the align option?
I haven’t. Hopefully, you had better luck.
04:15PM - 13 Dec 22 UTC
This PR adds the ability to automatically register a reconstruction (and resulting orthophotos, point clouds, 3D models, etc.) to a reference DSM (GeoTIFF, LAS or LAZ) by using CODEM (https://github.com/NCALM-UH/CODEM, credits to Craig Glennie and the group at CRREL).
The details of the method are available at https://github.com/NCALM-UH/CODEM/blob/main/docs/details.md
* Pass a file via `--align <path-to-tif-or-las-laz>`.
* Place an `align.tif`, `align.laz`, or `align.las` in the project root (similar to GEO and GCP files); see the sketch after this list.
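For instance, a minimal sketch of the second option (not from the PR; paths are placeholders):

```python
# Drop a reference DSM into the project root so it is picked up
# automatically, equivalent to passing --align on the command line.
import shutil

project_root = "/datasets/lake_survey"               # placeholder project path
reference_dsm = "/datasets/reference/lidar_dsm.tif"  # placeholder reference

shutil.copy(reference_dsm, f"{project_root}/align.tif")
```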
Sample output report: *(screenshot omitted here)*
This could be particularly useful for repeat flights of the same area where automatic alignment allows for better change detection analysis, without needing GCPs.
Also fixes #1564
This is what we have currently.
It is based upon this explanation:
(The quoted file below has been truncated.)
# Detailed Overview
`CODEM` (Multi-Modal Digital Elevation Model Registration) is a spatial data co-registration tool developed in Python.
Conceptually, `CODEM` consists of two back-to-back registration modules:
1. An initial coarse global registration based on matched features extracted from digital surface models (DSMs) generated from the AOI and Foundation data sources.
2. A fine local registration based on an iterative closest point (ICP) algorithm applied to the AOI and Foundation data.
Each registration module solves a 6- or 7-parameter similarity transformation (three translations, three rotations, one optional scale). The modules are subject to an overall "pipeline resolution" that controls the density of the data flowing through the pipeline. Foundation and AOI data will likely have different densities, and one or both may contain very high data densities, e.g., point clouds with tens to hundreds of points per square meter. In the case of differing data densities, registration accuracy is limited by the lower density data. Thus, the higher density data is resampled to match that of the lower density data for efficiency. A maximum density is also enforced in the case where both the Foundation and AOI data are very high resolution. Very dense data contains redundant information and slows the registration computation time.
A flowchart illustrating the registration pipeline is given below. *(flowchart image omitted here)*
## Pipeline Details
### 1. Preprocessing
I’ve not had a chance to re-synthesize this into something a bit more friendly, nor collect my own data. Sorry -_-
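As a rough stand-in, here’s a synthetic sketch of the two quoted ideas, resampling both clouds to a common "pipeline resolution" and then refining with ICP, using the open3d Python library rather than CODEM itself (all values made up):

```python
# Not CODEM: (a) voxel-downsample both clouds to a common resolution so
# the denser one doesn't dominate, then (b) refine with point-to-point ICP.
import numpy as np
import open3d as o3d

def to_cloud(points):
    pc = o3d.geometry.PointCloud()
    pc.points = o3d.utility.Vector3dVector(points)
    return pc

rng = np.random.default_rng(0)
scene = rng.uniform(0, 100, size=(50_000, 3))
foundation = to_cloud(scene[::10])                  # sparser reference
aoi = to_cloud(scene + np.array([0.4, -0.2, 1.5]))  # same scene, offset

resolution = 1.0  # metres; stand-in for CODEM's pipeline resolution
aoi_ds = aoi.voxel_down_sample(voxel_size=resolution)
foundation_ds = foundation.voxel_down_sample(voxel_size=resolution)

result = o3d.pipelines.registration.registration_icp(
    aoi_ds, foundation_ds, max_correspondence_distance=2 * resolution,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(result.transformation)  # should roughly undo the applied offset
```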
Looks like linear transformations only:
Each registration module solves a 6- or 7-parameter similarity transformation (three translations, three rotations, one optional scale).
So it assumes the elevation models are broadly correct, but need to be translated, rotated, and optionally scaled.
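To make the 6-/7-parameter part concrete, here’s a small numpy sketch of that similarity solve (Umeyama’s method, with a synthetic check; not CODEM’s actual code):

```python
# Solve dst ≈ s * R @ src + t from matched points (Umeyama's method).
# estimate_scale=False gives the 6-parameter rigid case.
import numpy as np

def similarity_transform(src, dst, estimate_scale=True):
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # SVD of the cross-covariance gives the optimal rotation.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against mirror solutions
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S[0] + S[1] + d * S[2]) / (src_c ** 2).sum() if estimate_scale else 1.0
    t = dst_mean - s * R @ src_mean
    return s, R, t

# Synthetic check: recover a known rotation, scale, and translation.
rng = np.random.default_rng(1)
src = rng.uniform(0, 50, size=(100, 3))
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = 1.02 * src @ R_true.T + np.array([5.0, -3.0, 2.0])

s, R, t = similarity_transform(src, dst)
print(round(s, 4), np.allclose(s * src @ R.T + t, dst))  # 1.02 True
```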
3 May 2023 16:27
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.