The staging server is broken down into the following components:
- Stage scans uploaded by devices running the Scanner App and trigger scan processing. For scans to be processed automatically, they should be placed in a directory that has sufficient free space and is accessible to the scan processor.
- Process staged scans. Reconstruction requests from the Web-UI are handled when the user presses the interactive buttons.
- Index staged scans. Walk the scan folders and collate information about the scans.
The server requires several prerequisites to perform scan processing. Please follow the installation instructions here.
Configurable parameters are listed in `config.yaml`. Parameters can be overridden on the command line as follows:

```shell
python upload.py upload.workers=4 upload.port=8080
python process.py reconstruction.frames.step=2 thumbnail.thumbnail_ext=_thumb.png
```
Hydra also supports dynamic command-line tab completion, which you can enable via:

```shell
eval "$(python python_script.py -sc install=bash)"
```

You can find more details about Hydra here.
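The dotted override keys above map directly onto the nesting of the YAML config. As an illustration only (the key names come from the override examples above; the default values shown are assumptions, not the project's actual defaults), a compatible `config.yaml` fragment might look like:

```yaml
# Hypothetical sketch of config.yaml structure implied by the overrides above.
upload:
  workers: 4            # overridable as upload.workers=...
  port: 8000            # overridable as upload.port=...
reconstruction:
  frames:
    step: 1             # overridable as reconstruction.frames.step=...
thumbnail:
  thumbnail_ext: _thumb.png   # overridable as thumbnail.thumbnail_ext=...
```

Each `a.b=c` argument on the command line replaces the value at path `a:` → `b:` in this tree.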
Configuration files:
- `config.yaml` - Main configuration file
- `multiscan.json` - Metadata for MultiScan assets for use with the Scene Toolkit viewer
- `config/scan_stages.json` - The stages of the scan pipeline (so we can track progress)
- `config/upload.ini` - Configuration file for upload server
⚡ Note: the working directory needs to be `multiscan/server`.
Since the staging folder stores a large number of uploaded scans, it requires a large amount of storage space. The staging folder is therefore usually in a different directory from the server code, and you can create a symbolic link to the staging directory:

```shell
ln -s "$(realpath "/path/to/staging folder")" staging
```
The upload script receives scan files (`.mp4`, `.depth.zlib`, `.confidence.zlib`, `.json`, `.jsonl`) from devices with the Scanner App installed and stages them in a staging folder for scan processing. These files are first placed in the `tmp` directory, then moved into the `staging` directory after verification. The server uses Flask with Gunicorn, with the specified number of worker threads, on port 8000 by default.
Start the upload server with:

```shell
python upload.py **configuration override**  # Start the upload server, receive files from the scanner app
```
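The stage-then-verify flow described above (write into `tmp`, move into `staging` only after verification) can be sketched with a minimal Flask endpoint. This is a hypothetical illustration, not the actual `upload.py`: the route, the `verify` check, and the directory names are assumptions.

```python
import os
import shutil

from flask import Flask, request

app = Flask(__name__)

TMP_DIR = "tmp"          # uploads land here first (assumption)
STAGING_DIR = "staging"  # verified files are moved here (assumption)
ALLOWED_EXTS = (".mp4", ".depth.zlib", ".confidence.zlib", ".json", ".jsonl")


def verify(path: str) -> bool:
    """Placeholder verification: known extension and non-empty file.
    The real server's checks are more involved."""
    return path.endswith(ALLOWED_EXTS) and os.path.getsize(path) > 0


@app.route("/upload/<filename>", methods=["PUT"])
def upload(filename: str):
    # 1. Stream the request body into the tmp directory.
    os.makedirs(TMP_DIR, exist_ok=True)
    tmp_path = os.path.join(TMP_DIR, filename)
    with open(tmp_path, "wb") as f:
        shutil.copyfileobj(request.stream, f)
    # 2. Only after verification, move the file into staging.
    if not verify(tmp_path):
        os.remove(tmp_path)
        return "verification failed", 400
    os.makedirs(STAGING_DIR, exist_ok=True)
    shutil.move(tmp_path, os.path.join(STAGING_DIR, filename))
    return "ok", 200

# In production this app would run under gunicorn with multiple workers,
# e.g.: gunicorn -w 4 -b 0.0.0.0:8000 upload:app
```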
Start the process server on port 5000 with:

```shell
python process.py **configuration override**  # Start the process server, receive process requests from the Web-UI
```
The server is a simple Flask server that handles only one request at a time (it blocks until the scan is processed).
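One-request-at-a-time behavior can be sketched by serializing the processing step behind a lock, so a second request blocks until the first scan finishes. This is a hypothetical illustration of the blocking semantics, not the actual `process.py` (endpoint name and `process_scan` are assumptions).

```python
import threading

from flask import Flask

app = Flask(__name__)
_busy = threading.Lock()  # serializes scan processing across requests


def process_scan(scan_id: str) -> str:
    # Placeholder for the real reconstruction pipeline.
    return f"processed {scan_id}"


@app.route("/process/<scan_id>")
def process(scan_id: str):
    # Block until any in-flight scan finishes, then process this one.
    with _busy:
        result = process_scan(scan_id)
    return result, 200
```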
The scan processing can be broken down into the following components:
- The convert checkbox in the Web-UI controls whether RGB and depth frames are extracted from the uploaded compressed scan streams:
  - Color RGB frames are extracted with `ffmpeg`.
  - Depth frames are decompressed with `zlib`; implementation details are in `scripts/depth_decode.py`.
- Reconstruction is performed with one of two pipelines:
  - Open3D RGB-D Reconstruction Pipeline: when the depth sensor is available and RGB-D data can be acquired, the Open3D RGB-D reconstruction pipeline provides a more robust reconstruction result.
  - Meshroom RGB Photogrammetric Reconstruction Pipeline: when only RGB data is available, the Meshroom photogrammetric pipeline, based on structure from motion (SfM) and multi-view stereo (MVS), can be used.
- Meshes extracted by the Open3D RGB-D reconstruction pipeline usually contain millions of vertices and faces, so we use Instant Meshes to perform mesh topology decimation after reconstruction, reducing mesh complexity while keeping the geometry as unchanged as possible. We also use PyMeshLab for mesh cleanup. The Meshroom reconstruction pipeline has built-in mesh cleanup functionality, so topology cleanup and decimation with Instant Meshes are not needed if the user chooses that pipeline.
- To improve the visual appearance of the mesh reconstructed by the Open3D pipeline, the decimated mesh is textured with the mvs-texturing framework.
- Based on the segmentator in ScanNet, we extend the algorithm to take advantage of both vertex colors and vertex normals, implementing a two-stage hierarchical segmentator. The first stage segments the mesh with weights from the vertex normals; then, for each segmented cluster, the vertex colors are added to the weights to perform another segmentation step.
- `scripts/render.py` uses Open3D triangle-mesh visualization with headless rendering to render reconstructed PLY meshes, and Pyrender with EGL GPU-accelerated rendering to render textured OBJ meshes.
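The depth-extraction step in the list above can be sketched as follows. This is a hedged illustration, not the project's `scripts/depth_decode.py`: whether the stream is a raw deflate stream or carries a zlib header, and the float16 frame dtype, are assumptions here.

```python
import zlib

import numpy as np


def decode_depth_stream(data: bytes, height: int, width: int) -> np.ndarray:
    """Decompress a compressed depth stream into (num_frames, H, W) depth maps.

    Assumes a raw deflate stream of tightly packed float16 depth values;
    the real format may differ (see scripts/depth_decode.py).
    """
    raw = zlib.decompress(data, wbits=-zlib.MAX_WBITS)  # raw deflate (assumption)
    depth = np.frombuffer(raw, dtype=np.float16)        # float16 depth (assumption)
    return depth.reshape(-1, height, width)
```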
`scan_processor.py` - Main scan processing script, which lets you process scans without interacting with the Web-UI. You can specify which steps to run as follows:

```shell
python scan_processor.py process.input_path=path/to/staging/scanID process.actions='[recons, convert, texturing ...(other steps)]'
```
Indexing scripts are used to collate information about the scans and index them.

- `monitor.py` - Web service entry point for monitoring and triggering indexing of scans. Run the following command to start the monitor server on port 5001 (a simple Flask server):

```shell
python monitor.py
```

- `index.py` - Creates an index of the scans in a directory and outputs a CSV file
- `scripts/index_multiscan.sh` - Indexes both staging and checked scans and updates the WebUI db
- `compute_annotation_stats.py` - Computes aggregated annotation statistics
- `compute_timings.py` - Computes processing times for scans
- `scripts/combine_stats.py` - Combines statistics with the index
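The kind of collation `index.py` performs can be sketched as a directory walk that emits one CSV row per scan folder. This is a minimal illustration; the column names and the metadata gathered here are assumptions, and the real `index.py` collates much richer information.

```python
import csv
import os


def index_scans(staging_dir: str, out_csv: str) -> int:
    """Walk scan folders under staging_dir and write one CSV row per scan.

    Hypothetical columns; the real index script records richer metadata.
    """
    rows = []
    for scan_id in sorted(os.listdir(staging_dir)):
        scan_dir = os.path.join(staging_dir, scan_id)
        if not os.path.isdir(scan_dir):
            continue  # skip stray files at the top level
        files = os.listdir(scan_dir)
        rows.append({
            "scan_id": scan_id,
            "num_files": len(files),
            "has_mp4": any(f.endswith(".mp4") for f in files),
            "bytes": sum(os.path.getsize(os.path.join(scan_dir, f)) for f in files),
        })
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["scan_id", "num_files", "has_mp4", "bytes"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```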