Action space, State space, Reward function Locations #118
Hello @naa1824,
The observation space is defined here: drl_grasping/drl_grasping/envs/tasks/grasp/grasp_octree.py Lines 79 to 98 in b228354
The action space is defined here: drl_grasping/drl_grasping/envs/tasks/grasp/grasp.py Lines 55 to 81 in b228354
And the reward function comes from the Curriculum: drl_grasping/drl_grasping/envs/tasks/grasp/grasp.py Lines 220 to 222 in b228354
The function you posted above is for actually setting/applying the actions to the robot.
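For anyone mapping those three pieces onto the standard Gym interface, they fit together roughly as follows. This is a minimal sketch, not the actual drl_grasping code: the class name, observation/action layouts, and reward term are illustrative assumptions, so refer to the linked lines above for the real definitions.

```python
import gym
import numpy as np
from gym import spaces


class MiniGraspTask(gym.Env):
    """Hypothetical stand-in for the task structure (NOT the actual drl_grasping code)."""

    def __init__(self):
        # Observation space (the real task uses octree observations, see grasp_octree.py).
        # Here: an assumed flat feature vector.
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(32,), dtype=np.float32)
        # Action space (defined in grasp.py in the real task).
        # Assumed layout: normalized [dx, dy, dz, yaw, gripper] end-effector command.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(5,), dtype=np.float32)

    def reset(self):
        return self.observation_space.sample()

    def step(self, action):
        obs = self.observation_space.sample()
        # Reward (composed by the Curriculum in the real task); a hypothetical
        # sparse success bonus stands in for the curriculum terms here.
        success = np.random.rand() > 0.95
        return obs, float(success), bool(success), {}
```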
I see. Then are you using the default SB3 policies like MlpPolicy, or did you create your own? Because I could not find it (I mean the policy code).
All custom policies for the octree observations are located here.
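In case it helps to see the general mechanism: in SB3, a custom network is usually injected as a features extractor through policy_kwargs, while the policy itself stays a default one like MlpPolicy. Below is a minimal sketch of that pattern, assuming a flat Box observation and a toy MLP in place of the project's actual octree CNN; the class name, layer sizes, and the Pendulum-v1 placeholder env are all illustrative assumptions.

```python
import gym
import numpy as np
import torch as th
import torch.nn as nn
from stable_baselines3 import TD3
from stable_baselines3.common.torch_layers import BaseFeaturesExtractor


class ToyOctreeExtractor(BaseFeaturesExtractor):
    """Hypothetical extractor; the real project uses an octree-based CNN instead."""

    def __init__(self, observation_space: gym.spaces.Box, features_dim: int = 128):
        super().__init__(observation_space, features_dim)
        n_input = int(np.prod(observation_space.shape))
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_input, 256),
            nn.ReLU(),
            nn.Linear(256, features_dim),
            nn.ReLU(),
        )

    def forward(self, observations: th.Tensor) -> th.Tensor:
        return self.net(observations)


# SB3 plugs the extractor into the default policy via policy_kwargs.
model = TD3(
    "MlpPolicy",
    "Pendulum-v1",  # any continuous-control env; stands in for the grasping task
    policy_kwargs=dict(
        features_extractor_class=ToyOctreeExtractor,
        features_extractor_kwargs=dict(features_dim=128),
    ),
)
model.learn(total_timesteps=1_000)
```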
For reference, there is also the Directory Structure described in the README:
├── drl_grasping/ # [dir] Primary Python module of this project
│ ├── drl_octree/ # [dir] Submodule for end-to-end learning from 3D octree observations
│ ├── envs/ # [dir] Submodule for environments
│ │ ├── control/ # [dir] Interfaces for the control of agents
│ │ ├── models/ # [dir] Functional models for simulation environments
│ │ ├── perception/ # [dir] Interfaces for the perception of agents
│ │ ├── randomizers/ # [dir] Domain randomization of the simulated environments
│ │ ├── runtimes/ # [dir] Runtime implementations of the task (sim/real)
│ │ ├── tasks/ # [dir] Implementation of tasks
│ │ ├── utils/ # [dir] Environment-specific utilities used across the submodule
│ │ └── worlds/ # [dir] Minimal templates of worlds for simulation environments
│ └── utils/ # [dir] Submodule for training and evaluation scripts boilerplate (using SB3)
├── examples/ # [dir] Examples for training and evaluating RL agents
├── hyperparams/ # [dir] Default hyperparameters for training RL agents
├── launch/ # [dir] ROS 2 launch scripts that can be used to interact with this repository
├── pretrained_agents/ # [dir] Collection of pre-trained agents
├── rviz/ # [dir] RViz2 config for visualization
├── scripts/ # [dir] Helpful scripts for training, evaluation and other utilities
├── CMakeLists.txt # Colcon-enabled CMake recipe
└── package.xml # ROS 2 package metadata
Hello @AndrejOrsula,
sorry to bother you again.
I have been studying the project for the last 2 weeks, trying to find the action space, state space, and reward function to understand how they work so I can create my own.
But I am getting lost in the files;
could you guide me with this?
Which files contain these spaces?
I found the class which creates the space, but I didn't understand it.
Is this the action space? envs/tasks/grasp/grasp.py
It looks too small!