From 333d088f3d4179d4ad06b9108d60a722b4364c3a Mon Sep 17 00:00:00 2001
From: Kenny Yao
Date: Wed, 22 Feb 2023 14:50:01 -0800
Subject: [PATCH] update README

---
 README.md | 60 ++++++++++++++++++++++++++++--------------------------
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/README.md b/README.md
index 7949226..2f44815 100644
--- a/README.md
+++ b/README.md
@@ -73,61 +73,63 @@ Final dataset output:
 2. Place marker positions into `calculate_extrinsic/aruco_corners.yaml`, labeled under keys: `quad1`, `quad2`, `quad3`, and `quad4` (a sanity-check sketch follows this list).
 3. Start the OptiTrack recording.
 4. Synchronization Phase
-   1. Press `start calibration` to begin recording data.
+   1. Press `start calibration` on the iPhone to begin recording data.
    2. Observe the ARUCO marker in the scene and move the camera in different trajectories to build synchronization data (back and forth 2 to 3 times, slowly).
    3. Press `stop calibration` when finished.
 5. Data Capturing Phase
-   1. Press `start collection` to begin recording data.
-   2. Observe the ARUCO marker while moving around the marker. (Perform 90-180 revolution around the marker, one way.)
-   3. Press `stop collection` when finished.
+   1. Press `start collection` to begin recording data.
+   2. Observe the ARUCO marker while moving around it. (Perform a 90-180 degree revolution around the marker, one way.)
+   3. Press `stop collection` when finished.
 6. Stop OptiTrack recording.
 7. Export the OptiTrack recording to a CSV file with a 60Hz report rate.
-8. Move tracking CSV file to `/extrinsics_scenes/camera_poses/camera_poses.csv`.
-9. Export the app_data to `/extrinsics_scenes/iphone_data`.
-10. Move the timestamps.csv to `/extrinsics_scenes`.
+8. Move the tracking CSV file to `/extrinsics_scenes/[SCENE_NAME]/camera_poses/camera_poses.csv`.
+9. Export the app_data to `/extrinsics_scenes/[SCENE_NAME]/iphone_data`.
+10. Move the timestamps.csv to `/extrinsics_scenes/[SCENE_NAME]`.
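+
+The marker file from step 2 can be sanity-checked before recording. A minimal sketch, assuming PyYAML is installed and that each quad key maps to a list of corner positions (the file's actual per-key layout may differ):
+
+```python
+# Sketch: verify aruco_corners.yaml lists all four marker corner sets.
+# The per-key layout (a list of [x, y, z] corners) is an assumption.
+import yaml
+
+with open("calculate_extrinsic/aruco_corners.yaml") as f:
+    corners = yaml.safe_load(f)
+
+for key in ("quad1", "quad2", "quad3", "quad4"):
+    assert key in corners, f"missing marker entry: {key}"
+    print(key, corners[key])
+```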

 #### Process Data and Calculate Extrinsic
 1. Convert iPhone data formats to Kinect data formats (`tools/process_iphone_data.py`)
-   * This tool converts everything to common image names, formats, and does distortion parameter fitting
-   * Code: python tools/process_ipone_data.py —scene_name — extrinstic
+   * This tool converts everything to common image names and formats, and does distortion parameter fitting.
+   * Code: `python tools/process_iphone_data.py --depth_type [DEPTH_TYPE] --scene_name [SCENE_NAME] --extrinsic`
 2. Clean raw opti poses and sync opti poses with frames (`tools/process_data.py --extrinsic`)
-   * Code: python tools/process_data.py —scene_name — extrinstic
+   * Code: `python tools/process_data.py --scene_name [SCENE_NAME] --extrinsic`
 3. Calculate camera extrinsic (`tools/calculate_camera_extrinsic.py`); see the sketch after this list.
-   * Code: python tools/caculate_camera_extrinsic.py —scene_name
+   * Code: `python tools/calculate_camera_extrinsic.py --scene_name [SCENE_NAME]`
 4. Output will be placed in `cameras/[CAMERA_NAME]/extrinsic.txt`.
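+
+For intuition, step 3 reduces to rigid-transform algebra once the two streams are synchronized: each frame pairs an OptiTrack rigid-body pose with an ARUCO-derived camera pose, and every pair gives one estimate of the fixed body-to-camera transform. A minimal numpy sketch of the idea (the 4x4 pose convention and the function below are illustrative assumptions, not the script's actual interface):
+
+```python
+# Sketch: one per-frame estimate of the rigid-body-to-camera extrinsic.
+# T_world_body comes from OptiTrack, T_world_cam from the ARUCO marker;
+# both are 4x4 homogeneous matrices sampled at the same instant.
+import numpy as np
+
+def extrinsic_estimate(T_world_body, T_world_cam):
+    """Return T_body_cam, the camera pose expressed in the body frame."""
+    return np.linalg.inv(T_world_body) @ T_world_cam
+```
+
+Averaging these estimates across frames is what makes the slow, varied calibration trajectories worthwhile; translations average directly, while rotations need a proper rotation average (omitted here).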

 ### Scene Collection Process

 #### Data Collection Step
-1. Setup LiDARDepth APP using Xcode (Need to reinstall before each scene).
-2. Start the OptiTrack recording
-3. Synchronization Phase
-   1. Press `start calibration` to begin recording data
-   2. Observe the ARUCO marker in the scene and move the camera in different trajectories to build synchronization data (swing largely from front to the back) for 20 seconds.
-   3. Press `end calibration` when finished
+1. Set up the LiDARDepth APP (ARKit version) using Xcode (it needs to be reinstalled before each scene).
+2. Start the OptiTrack recording.
+3. Synchronization Phase
+   1. Press `start calibration` to begin recording data.
+   2. Observe the ARUCO marker in the scene and move the camera in different trajectories to build synchronization data (back and forth 2 to 3 times, slowly).
+   3. Press `end calibration` when finished.
 4. Data Capturing Phase
-   1. cover the ARUCO marker, observe objects to track
-   2. Press `Start collecting data` to begin recording data
-   3. Press `End collecting data` when finished
+   1. Cover the ARUCO marker.
+   2. Press `Start collection` to begin recording data.
+   3. Observe the objects while moving around them. (Perform a 90-180 degree revolution around the objects, one way.)
+   4. Press `End collection` when finished.
 5. Stop OptiTrack recording.
 6. Export the OptiTrack recording to a CSV file with a 60Hz report rate (the export can be inspected with the sketch below).
-7. Move tracking CSV file to `/camera_poses/camera_poses.csv`.
-8. Export the app_data to `/iphone_data`.
-9. Move the timestamps.csv to ``.
+7. Move the tracking CSV file to `scenes/[SCENE_NAME]/camera_poses/camera_poses.csv`.
+8. Export the app_data to `scenes/[SCENE_NAME]/iphone_data`.
+9. Move the timestamps.csv to `scenes/[SCENE_NAME]`.
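+
+The step 6 export is a plain CSV, so it can be checked before any processing. A minimal pandas sketch (the header layout depends on the OptiTrack export settings, so treat the parsing below as an assumption to adapt):
+
+```python
+# Sketch: load and eyeball the exported 60Hz OptiTrack track.
+# Adjust skiprows / column names to match the actual export header.
+import pandas as pd
+
+poses = pd.read_csv("scenes/[SCENE_NAME]/camera_poses/camera_poses.csv")
+print(poses.shape)   # expect roughly 60 rows per second of recording
+print(poses.head())
+```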

 #### Process Data
 1. Convert iPhone data formats to Kinect data formats (`tools/process_iphone_data.py`)
    * This tool converts everything to common image names and formats, and does distortion parameter fitting.
-   * Code: python tools/process_ipone_data.py [CAMERA_NAME] —scene_name [SCENE_NAME]
+   * Code: `python tools/process_iphone_data.py --depth_type [DEPTH_TYPE] --scene_name [SCENE_NAME]`
 2. Clean raw opti poses and sync opti poses with frames (`tools/process_data.py`); see the matching sketch below.
    * Code: `python tools/process_data.py --scene_name [SCENE_NAME]`
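+
+Conceptually, syncing opti poses with frames is a nearest-timestamp match between the 60Hz pose track and the iPhone frame times, once the calibration swing has pinned down the clock offset. A minimal numpy sketch of that matching (the array names are assumptions, and `tools/process_data.py` may use a different strategy):
+
+```python
+# Sketch: pair each iPhone frame with the closest OptiTrack sample.
+# pose_times and frame_times are sorted 1-D timestamp arrays already
+# expressed on a common clock.
+import numpy as np
+
+def nearest_pose_indices(pose_times, frame_times):
+    idx = np.clip(np.searchsorted(pose_times, frame_times), 1, len(pose_times) - 1)
+    pick_left = frame_times - pose_times[idx - 1] < pose_times[idx] - frame_times
+    return np.where(pick_left, idx - 1, idx)
+```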

 #### Annotation Process
-1. Manually annotate the first frame object poses (`tools/manual_annotate_poses.py`)
-   * Modify (`[SCENE_NAME]/scene_meta.yml`) by adding (`objects`) field to the file according to objects and their corresponding ids.
-   * Code: `python tools/manual_annotate_poses.py [SCENE_NAME]`
+1. Manually annotate the object poses in the first few frames (`tools/manual_annotate_poses.py`).
+   * Modify `[SCENE_NAME]/scene_meta.yml` by adding an `objects` field that lists the objects and their corresponding ids.
+   * Code: `python tools/manual_annotate_poses.py [SCENE_NAME]`
+   * Check the control instructions in `pose_refinement/README.md`.
- * Code: python /tools/generate_scene_labeling.py [SCENE_NAME] + * Generate semantic labeling and adjust per frame object poses (`tools/generate_scene_labeling.py`)

 ## Minutia
 * Extrinsic scenes have their color images inside of `data` stored as `png`, to maximize performance. Data scenes have their color images inside of `data` stored as `jpg`, which is necessary so the dataset remains usable.