Documentation update (nasa#668)
* docker docs update

* make build instructions ROS1

* updating install instructions, hopefully making them easier to follow

* updated from github PR reviews
marinagmoreira authored Feb 3, 2023
1 parent 21880f4 commit f5c07c6
Showing 5 changed files with 135 additions and 108 deletions.
2 changes: 1 addition & 1 deletion .devcontainer/devcontainer.json
@@ -40,5 +40,5 @@
"zachflower.uncrustify"
],
"workspaceMount": "source=${localWorkspaceFolder},target=/src/astrobee/src,type=bind",
"workspaceFolder": "/workspace"
"workspaceFolder": "/src/astrobee/src"
}
24 changes: 19 additions & 5 deletions INSTALL.md
@@ -4,17 +4,31 @@

Ubuntu 20.04 is the preferred host OS for most Astrobee developers to use.

Here are the available host OS options with development roadmap details:
- Ubuntu 20.04: This is the preferred host OS for most Astrobee developers to use. The Astrobee Facility team is currently preparing to upgrade the robots on ISS from Ubuntu 16.04 to Ubuntu 20.04, but we aren't yet ready to announce a deployment date for that upgrade.
- Ubuntu 18.04: We are not aware of any current robot users that still need Ubuntu 18.04 support, and expect to discontinue support in the near future. New users should not select this host OS.
- Ubuntu 16.04: The Astrobee robot hardware on ISS currently runs Ubuntu 16.04. Only developers with NASA internal access can cross-compile software to run on the robot, and must use 16.04 for that. Most developers shouldn't need to work with 16.04, especially when just getting started. Support will eventually be discontinued after the robot hardware on ISS is upgraded to Ubuntu 20.04.
Here are the available host OS options with development roadmap details (use 64-bit PC (AMD64) desktop image):
- [Ubuntu 20.04](http://releases.ubuntu.com/20.04): This is the preferred host OS for most Astrobee developers to use. The Astrobee Facility team is currently preparing to upgrade the robots on ISS from Ubuntu 16.04 to Ubuntu 20.04, but we aren't yet ready to announce a deployment date for that upgrade.
- [Ubuntu 18.04](http://releases.ubuntu.com/18.04): We are not aware of any current robot users that still need Ubuntu 18.04 support, and expect to discontinue support in the near future. New users should not select this host OS.
- [Ubuntu 16.04](http://releases.ubuntu.com/16.04): The Astrobee robot hardware on ISS currently runs Ubuntu 16.04. Only developers with NASA internal access can cross-compile software to run on the robot, and must use 16.04 for that. Most developers shouldn't need to work with 16.04, especially when just getting started. Support will eventually be discontinued after the robot hardware on ISS is upgraded to Ubuntu 20.04.
(Ubuntu 22.04 is not supported)

Graphical interfaces will perform best if your host OS is running natively (not in a virtual machine).

Your host OS must have an X11 server installed if you want to use graphical applications, even if you are developing inside a Docker container (the X11 application running inside the container will forward its interface to the host's X11 server). X11 comes with Ubuntu Desktop by default.

If you plan to develop inside Docker, see [this page on using ROS with Docker](http://wiki.ros.org/docker/Tutorials#Tooling_with_Docker) for more details.
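As a concrete illustration of the X11 forwarding described above, a typical `docker run` invocation mounts the host's X11 socket and passes `DISPLAY` through. This is a hedged sketch: the image name `astrobee/astrobee:latest-ubuntu20.04` is an assumption, and the command is printed rather than executed so you can inspect it first:

```bash
# Sketch only: mount the host X11 socket and forward DISPLAY so GUI apps
# inside the container (e.g. rviz) render on the host's X server.
# "astrobee/astrobee:latest-ubuntu20.04" is an assumed image name.
cmd="docker run --rm -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
  astrobee/astrobee:latest-ubuntu20.04"
echo "$cmd"   # printed rather than executed; run it once Docker is set up
```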

For users installing Astrobee on a virtual machine with the intent of running
simulations: both VMware and VirtualBox have been tested to work well. Allocate
an appropriate amount of RAM, number of processors, and video memory given your
total computer capabilities; if graphics acceleration is available in the
settings, turn it on. For reference (not required), an example of a setup
capable of running the simulation smoothly has 8 GB RAM, 4 processors, and
128 MB video memory.

*Note: You will need 4 GB of RAM to compile the software. If you don't have
that much RAM available, please use swap space.*
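If you need swap, a minimal sketch of creating a swap file follows. The path and size are placeholders (use 4G on a real machine; a small size is used here so the sketch runs anywhere), and the final `mkswap`/`swapon` steps require root, so they are shown as comments:

```bash
# Sketch: reserve a swap file for the build. Use 4G on a real machine;
# a small size is used here so the sketch runs anywhere.
SWAPFILE=${SWAPFILE:-./swapfile}
SWAPSIZE=${SWAPSIZE:-16M}
fallocate -l "$SWAPSIZE" "$SWAPFILE" || dd if=/dev/zero of="$SWAPFILE" bs=1M count=16
chmod 600 "$SWAPFILE"          # swap files must not be world-readable
# sudo mkswap "$SWAPFILE"      # format it as swap (root required)
# sudo swapon "$SWAPFILE"      # enable it (root required)
echo "prepared $SWAPFILE"
```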

*Note: Please ensure you install the 64-bit PC (AMD64) version of Ubuntu (desktop for simulation and
development). We do not support running Astrobee Robot Software on 32-bit systems.*

## Option 1: Install inside a Docker container

1. Make sure you have Docker installed in your system by following:
@@ -39,7 +53,7 @@ There is also experimental support for using the Visual Studio Code Dev Containers

For much more discussion, see: \subpage install-docker.

## Option 2: Install in your native OS
## Option 2: Install in your native OS / Virtual Machine

The native installation instructions below walk you through manually running the same steps that are fully automated in a Docker installation.

22 changes: 11 additions & 11 deletions doc/general_documentation/INSTALL.md
@@ -2,18 +2,10 @@

# Usage instructions for non-NASA users

Install the 64-bit version of [Ubuntu 16.04](http://releases.ubuntu.com/16.04),
[Ubuntu 18.04](http://releases.ubuntu.com/18.04) or [Ubuntu 20.04](http://releases.ubuntu.com/20.04)
(preferred) on a host machine, and make sure that you can checkout and build code.
Make sure your system is up-to-date and:

sudo apt-get install build-essential git

*Note: You will need 4 GBs of RAM to compile the software. If you don't have
that much RAM available, please use swap space.*

*Note: Please ensure you install the 64-bit version of Ubuntu. We do not
support running Astrobee Robot Software on 32-bit systems.*

## Machine setup

### Checkout the project source code
@@ -136,8 +128,16 @@ rebuilt, and not the entire code base.
catkin build
popd

If you configured your virtual machine with more than the baseline resources,
you can adjust the number of threads (e.g. `-j4`) to speed up the build.

Note: On low-memory systems, it is common to run out of memory while trying to compile
ARS, which triggers a compilation error mentioning "arm-linux-gnueabihf-g++: internal
compiler error: Killed (program cc1plus)". A contributing factor is that
`catkin build` by default runs multiple jobs in parallel based on the number of cores
available in your environment, and all of these jobs draw on the same memory resources.
If you run into this compile error, try compiling again with the `-j1` option to restrict
catkin to running one job at a time.
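One way to apply this advice automatically is to derive the job count from available memory rather than core count. This sketch assumes roughly 2 GB per compile job (a rule of thumb, not an official figure) and only prints the resulting command:

```bash
# Estimate a safe catkin parallelism level from available memory,
# assuming ~2 GB per compile job (rule of thumb, not an official figure).
mem_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
mem_kb=${mem_kb:-2097152}           # fall back to 2 GB if /proc is unavailable
jobs=$(( mem_kb / (2 * 1024 * 1024) ))
[ "$jobs" -lt 1 ] && jobs=1
echo "catkin build -j$jobs"         # run the printed command in your workspace
```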

For more information on running the simulator and moving the robot, please see the \ref running-the-sim.


## Cross Compiling

168 changes: 77 additions & 91 deletions doc/general_documentation/NASA_INSTALL.md
@@ -2,16 +2,11 @@

# Usage instructions for NASA users

Install the 64-bit version of
[Ubuntu16.04](http://releases.ubuntu.com/16.04) on a host machine, and
make sure that you can checkout and build code.
Make sure your system is up-to-date and:

sudo apt-get install build-essential git

*Note: Please ensure you install the 64-bit version of Ubuntu. We do not
support running Astrobee Robot Software on 32-bit systems.*

## Computer setup
## Machine setup

### Username

@@ -47,8 +42,7 @@ Before running the scripts in `scripts/setup` below, set this variable:

#### If not on the ARC TI private network

If you are outside the NASA ARC private network, there are two options to
reach `astrobee.ndc.nasa.gov`:
If you are outside the NASA ARC private network, use one of these options to reach `astrobee.ndc.nasa.gov`:

1. Use VPN to act as if you were inside the ARC TI private network and
obtain the correct Kerberos credentials inside the VM with the following
@@ -57,7 +51,7 @@
is available at: https://babelfish.arc.nasa.gov/trac/freeflyer/wiki/SSHSetup

For either solution, please verify that you can SSH to `m.ndc.nasa.gov` without
entering your password (`m` is used to tunnel to `astrobee.ndc.nasa.gov`):
entering your password (`m` can be used to tunnel to `astrobee.ndc.nasa.gov`):

`ssh jdoe@m.ndc.nasa.gov`

@@ -77,6 +71,7 @@ At this point you need to decide where you'd like to put the source code
First, clone the flight software repository:

git clone https://github.com/nasa/astrobee.git --branch develop $ASTROBEE_WS/src
pushd $ASTROBEE_WS/src
git submodule update --init --depth 1 description/media
git submodule update --init --depth 1 submodules/platform

@@ -94,7 +89,13 @@ The android module is necessary for guest science code; the avionics and platform
module is used when cross-compiling to test on the robot hardware.

### Dependencies
Install dependencies:

Next, install all required dependencies:

*Note: `root` access is necessary to install the compiled Debian packages below.*

*Note: Before running this, please ensure that your system is completely updated
by running `sudo apt-get update` and then `sudo apt-get upgrade`.*

pushd $ASTROBEE_WS
cd src/scripts/setup
@@ -103,47 +104,10 @@
./install_desktop_packages.sh
popd

#### Extra options to install the dependencies

- If you do not want to configure your `.ssh/config` to just get the
dependencies, you can use the `NDC_USERNAME` variable.
- By default, the custom debians are installed in `$SOURCE_PATH/.astrobee_deb`.
If you prefer to install them at a different location, you can use the
`ARS_DEB_DIR` variable.

export NDC_USERNAME=jdoe
export ARS_DEB_DIR=$HOME/astrobee_debs
./add_local_repository.sh

### Cross-compile setup

If you are planning to compile code to run on the robot hardware, you will need
to install a cross-compile chroot and toolchain. Select two directories for
these:

export ARMHF_CHROOT_DIR=$HOME/arm_cross/rootfs
export ARMHF_TOOLCHAIN=$HOME/arm_cross/toolchain/gcc

Append these lines to your .bashrc file, as you will need these two variables
every time you cross compile.

Next, download the cross toolchain and install the chroot:

mkdir -p $ARMHF_TOOLCHAIN
cd $HOME/arm_cross
$ASTROBEE_WS/src/submodules/platform/fetch_toolchain.sh
$ASTROBEE_WS/src/submodules/platform/rootfs/make_chroot.sh xenial dev $ARMHF_CHROOT_DIR

*Note: The last script shown above needs the packages `qemu-user-static` (not
`qemu-arm-static`) and `multistrap` to be installed (can be installed through apt).*

## Configuring the build

At this point you need to decide whether you'd like to compile natively
[`native`] (run code against a simulator) or for an ARM target [`armhf`] (run
the code on the robot itself). Please skip to the relevant subsection.

### Note for both builds setup
### Note for build setup

When compiling, the `$WORKSPACE_PATH` defines where the `devel`, `build`, `logs` and
`install` directories are created. If you want to customize the `install` path then the
@@ -161,7 +125,16 @@ the `-p` and `-w` options. For the simplicity of the instructions below,
we assume that `$WORKSPACE_PATH` and `$INSTALL_PATH` contain the location of the
build and install path for either `native` or `armhf` platforms.

### Native build
## Native vs Cross-Compile

At this point you need to decide whether you'd like to compile natively
[`native`] (run code against a simulator) or cross-compile for an ARM
target [`armhf`] (run the code on the robot itself). Please skip to the
relevant subsection.

## Native - Running the code on your computer with the simulator

## Native build

The configure script prepares your build directory for compiling the code. Note
that `configure.sh` is simply a wrapper around CMake that provides an easy way
@@ -185,24 +158,8 @@ instead:
*Note: If a workspace is specified but not an explicit install directory, the
install location will be `$WORKSPACE_PATH/install`.*

### Cross-compile build

Cross compiling for the robot follows the same process, except the configure
script takes a `-a` flag instead of `-l`.

pushd $ASTROBEE_WS
./src/scripts/configure.sh -a
popd

Or with explicit build and install paths:

./scripts/configure.sh -a -p $INSTALL_PATH -w $WORKSPACE_PATH

*Warning: `$INSTALL_PATH` and `$WORKSPACE_PATH` used for cross compiling HAVE to be
different than the paths for native build! See above for the default values
for these.*

## Building the code
### Building the code

To build, run `catkin build` in the `$WORKSPACE_PATH`. Note that depending on your host
machine, this might take in the order of tens of minutes to complete the first
@@ -213,45 +170,67 @@ rebuilt, and not the entire code base.
catkin build
popd

## Switching build profiles
Note: On low-memory systems, it is common to run out of memory while trying to compile
ARS, which triggers a compilation error mentioning "arm-linux-gnueabihf-g++: internal
compiler error: Killed (program cc1plus)". A contributing factor is that
`catkin build` by default runs multiple jobs in parallel based on the number of cores
available in your environment, and all of these jobs draw on the same memory resources.
If you run into this compile error, try compiling again with the `-j1` option to restrict
catkin to running one job at a time.

To alternate between native and armhf profiles:
For more information on running the simulator and moving the robot, please see the \ref running-the-sim.

catkin profile set native
catkin profile set armhf

## Running a simulation
## Cross-compile - Running the code on a real robot

In order to run a simulation you must have build natively. You will need to
first setup your environment, so that ROS knows about the new packages provided
by Astrobee flight software:
In order to do this, you will need to follow the cross-compile build
instructions.

pushd $ASTROBEE_WS
source devel/setup.bash
popd
### Cross-compile setup

If you are planning to compile code to run on the robot hardware, you will need
to install a cross-compile chroot and toolchain. Select two directories for
these:

export ARMHF_CHROOT_DIR=$HOME/arm_cross/rootfs
export ARMHF_TOOLCHAIN=$HOME/arm_cross/toolchain/gcc

Append these lines to your .bashrc file, as you will need these two variables
every time you cross compile.
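For example, the append can be done from the command line (guarding against duplicate entries; the `~/.bashrc` path and the directory choices are the ones used above):

```bash
# Append the cross-compile variables to ~/.bashrc exactly once.
BASHRC="${BASHRC:-$HOME/.bashrc}"
if ! grep -q ARMHF_CHROOT_DIR "$BASHRC" 2>/dev/null; then
  echo 'export ARMHF_CHROOT_DIR=$HOME/arm_cross/rootfs' >> "$BASHRC"
  echo 'export ARMHF_TOOLCHAIN=$HOME/arm_cross/toolchain/gcc' >> "$BASHRC"
fi
```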

Next, download the cross toolchain and install the chroot:

mkdir -p $ARMHF_TOOLCHAIN
cd $HOME/arm_cross
$ASTROBEE_WS/src/submodules/platform/fetch_toolchain.sh
$ASTROBEE_WS/src/submodules/platform/rootfs/make_chroot.sh xenial dev $ARMHF_CHROOT_DIR

*Note: The last script shown above needs the packages `qemu-user-static` (not
`qemu-arm-static`) and `multistrap` to be installed (can be installed through apt).*

After this command has completed, you should be able to run a simulator from any
directory in your Linux filesystem. So, for example, to start a simulation of a
single Astrobee in the Granite Lab, run the following:
### Cross-compile build

roslaunch astrobee sim.launch
Cross compiling for the robot follows the same process, except the configure
script takes a `-a` flag instead of `-l`.

This command tells ROS to look for the `sim.launch` file provided by the
`astrobee` package, and use roslaunch to run it. Internally, ROS maintains a
cache of information about package locations, libraries and executables. If you
find that the above command doesn't work, try rebuilding the cache:
pushd $ASTROBEE_WS
./src/scripts/configure.sh -a
popd

rospack profile
Or with explicit build and install paths:

For more information on running the simulator and moving the robot, please see the \ref sim-readme.
./scripts/configure.sh -a -p $INSTALL_PATH -w $WORKSPACE_PATH

## Running the code on a real robot
*Warning: `$INSTALL_PATH` and `$WORKSPACE_PATH` used for cross compiling HAVE to be
different than the paths for native build! See above for the default values
for these.*

In order to do this, you will need to have followed the cross-compile build
instructions. Once the code has been built, it also installs the code to
Once the code has been built, it also installs the code to
a singular location. CMake remembers what `$INSTALL_PATH` you specified, and
will copy all products into this directory.

### Install the code on the robot

Once the installation has completed, copy the install directory to the robot.
This script assumes that you are connected to the Astrobee network, as it uses
rsync to copy the install directory to `~/armhf` on the two processors. It
@@ -270,6 +249,13 @@ which starts the flight software as a background process.
python ./src/tools/gnc_visualizer/scripts/visualizer --proto4
popd
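The copy step that the install script performs can be pictured roughly as one rsync per processor. This is only a sketch of the idea: the hostnames `llp`/`mlp` and the `astrobee` user are assumptions, and the commands are printed rather than executed, since they only make sense on the Astrobee network:

```bash
# Rough sketch of the install-sync step: one rsync per processor.
# Hostnames (llp, mlp), user, and paths are assumptions for illustration.
INSTALL_PATH="${INSTALL_PATH:-$HOME/astrobee_install/armhf}"
for cpu in llp mlp; do
  echo rsync -azh --delete "$INSTALL_PATH/" "astrobee@${cpu}:~/armhf/"
done
```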

## Switching build profiles

To alternate between native and armhf profiles:

catkin profile set native
catkin profile set armhf

# Further information

Please refer to the [wiki](https://github.com/nasa/astrobee/wiki).
27 changes: 27 additions & 0 deletions scripts/docker/readme.md
@@ -48,6 +48,33 @@ You can manage your Dev Containers configuration using the files in the `.devcontainer` folder

You can start by selecting `View->Terminal` in the VSCode graphical interface. This will display a terminal session inside the Docker container where you can run arbitrary commands. Your container will persist throughout your VSCode session, and changes you make using the VSCode editor will be reflected inside the container, making it easy to do quick interactive edit/build/test cycles.

## Enable x-forwarding from the Dev Container

In a command line in your host environment (not in the Docker container), run:
```bash
xhost local:docker
```
This needs to be done every time you restart VSCode; it enables screen forwarding so that you can open graphical GUIs like RViz.

## Building + testing the code

This runs inside the Docker container:

```bash
catkin build
catkin build --make-args tests
catkin build --make-args test
source devel/setup.bash
catkin_test_results build
```

For testing, you can alternatively use this script, which produces better debug output when a test fails:
```bash
./scripts/run_tests.sh
```

For more information on running the simulator and moving the robot, please see the \ref running-the-sim.

(Going forward, we could add a lot of tips here about how best to use VSCode inside the container.)

# Option 2: Using the Docker support scripts
