Tags: facebookresearch/habitat-lab
Fix a minor bug in `generate_video` (facebookresearch#879)
* `fps` was not passed through to `images_to_video` (see the sketch below)
* Update common.py to follow the Black format
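A minimal sketch of the bug this entry describes. The signatures below are simplified assumptions for illustration, not the exact habitat-lab API.

```python
def images_to_video(images, output_dir, video_name, fps=10):
    """Encode `images` into a video at `fps` frames per second."""
    ...

def generate_video(video_dir, images, video_name, fps=10):
    # The bug: `fps` was accepted here but dropped in the delegating call,
    # so videos were always encoded at the callee's default frame rate.
    # The fix is to forward it explicitly:
    images_to_video(images, video_dir, video_name, fps=fps)
```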
[ObsTransforms] Add support for semantic sensor observation transforms (facebookresearch#847)
* Add support for semantic sensor observation transforms
* Incorporate review feedback
* Handle tensor observations
* Use the semantic key to set the interpolation mode (see the sketch after this entry)
* Update Black
* [BugFix] Return a PyTorch tensor in GPU2GPU mode
* Fix signature
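Why the interpolation mode matters: semantic observations store integer instance/class IDs, so resize-style transforms must use nearest-neighbor interpolation, while averaging modes would blend IDs into meaningless fractional values. A hedged illustration of that detail follows; `resize_observation` and its signature are invented for this sketch and are not the habitat-lab ObservationTransformer API.

```python
import torch
import torch.nn.functional as F

def resize_observation(obs: torch.Tensor, size: int, is_semantic: bool) -> torch.Tensor:
    # obs: (H, W, C) tensor. interpolate expects (N, C, H, W) float input.
    x = obs.permute(2, 0, 1).unsqueeze(0).float()
    # Nearest-neighbor keeps semantic IDs intact; averaging modes would
    # produce meaningless in-between label values.
    mode = "nearest" if is_semantic else "area"
    x = F.interpolate(x, size=(size, size), mode=mode)
    # Restore (H, W, C) layout and the original dtype (e.g. integer IDs).
    return x.squeeze(0).permute(1, 2, 0).to(obs.dtype)
```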
Version bump v0.2.1 (facebookresearch#699)
Added HM3D dataset to Habitat Lab readme and new stickers (facebookresearch#669)
Version change from 0.1.6 to 0.1.7 (facebookresearch#596)
* Made flake8 happy
Collapse PPO and DD-PPO trainers, faster RNN code, and double buffered sampling (facebookresearch#538)
- Collapse PPO and DD-PPO trainers
- Faster RNN code -- it is definitely faster and can make a noticeable impact during early training (~20% faster in some cases), but good luck reading it :-)
- Rename NUM_PROCESSES to NUM_ENVIRONMENTS; the fact that the simulators live in separate processes is an implementation detail. A backwards-compatibility check has been added, though.
- Support specifying training length both as a number of updates and as a number of frames
- Specify the number of checkpoints directly instead of via a checkpoint interval
- Introduce a TensorDict class for more cleanly interacting with (potentially recursive) dictionaries of tensors; this also makes RolloutStorage about 100x cleaner (a minimal sketch follows this list)
- Store RGB observations in their proper dtype in the rollout storage (this can save a lot of memory)
- Refactor PPOTrainer.train to be less of a script wrapped in a function
- Double buffered sampling; this can improve performance when simulation time is equal to or larger than policy inference time
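The TensorDict idea is generic enough to sketch. The minimal version below is illustrative only, not the habitat-lab implementation; the `set` method and the constructor usage are assumptions made for this example.

```python
import torch

class TensorDict(dict):
    """A (possibly nested) dict of tensors that can be indexed as a whole."""

    def __getitem__(self, index):
        if isinstance(index, str):
            return super().__getitem__(index)
        # Any non-string index (int, slice, index tensor) is applied to every
        # leaf tensor, recursing through nested TensorDicts automatically.
        return TensorDict({k: v[index] for k, v in self.items()})

    def set(self, index, value):
        # Copy `value` (a matching dict of tensors) into the slice at `index`.
        for k, v in self.items():
            if isinstance(v, TensorDict):
                v.set(index, value[k])
            else:
                v[index].copy_(value[k])

# Usage: rollout storage becomes one nested structure instead of many
# parallel attributes. Storing RGB as uint8 (its proper dtype) saves memory.
storage = TensorDict(
    rgb=torch.zeros(128, 2, 64, 64, 3, dtype=torch.uint8),
    hidden=TensorDict(rnn=torch.zeros(128, 2, 512)),
)
step = storage[3]     # slices every leaf at once
storage.set(4, step)  # writes a whole step back in place
```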